Open Collections / BIRS Workshop Lecture Videos
Subspace estimation in linear dimension reduction
Oja, Hannu
Description
In linear dimension reduction for a p-variate random vector x, the general idea is to find an orthogonal projection (matrix) P of rank k, k < p, such that Px carries all or most of the information. In unsupervised dimension reduction this means that x|Px presents only (uninteresting) noise. In supervised dimension reduction for an interesting response variable y, x and y are conditionally independent given Px, that is, the dependence of y on x is only through Px.
In this talk we consider the problem of estimating the minimal subspace, that is, the corresponding unknown projection P with known or unknown dimension. Most linear (supervised and unsupervised) dimension reduction methods, such as principal component analysis (PCA), fourth-order blind identification (FOBI), Fisher's linear discrimination subspace, and sliced inverse regression (SIR), are based on a simultaneous diagonalization of two matrices S1 and S2. Asymptotic and robustness properties of the estimates of P can then be derived from those of the estimates of S1 and S2. We also discuss tools for robustness studies as well as the possibility of robustifying these approaches by replacing the two matrices with their robust counterparts. The talk is based on joint work with several collaborators.
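The two-scatter construction described above can be made concrete with a minimal sketch. The following Python fragment (NumPy/SciPy assumed; the function name fobi_projection and the eigenvalue-ranking rule around p + 2 are illustrative choices, not taken from the talk) estimates P in the FOBI case: S1 is the sample covariance, S2 a Mahalanobis-weighted fourth-moment scatter, and the projection is built from the generalized eigenvectors of the pair (S2, S1).

import numpy as np
from scipy.linalg import eigh

def fobi_projection(X, k):
    """FOBI-style two-scatter estimate of a rank-k projection P.

    S1 is the sample covariance and S2 a fourth-moment scatter;
    their simultaneous diagonalization reduces to the generalized
    eigenproblem S2 v = lambda S1 v.
    """
    Xc = X - X.mean(axis=0)              # center the data
    n, p = Xc.shape
    S1 = Xc.T @ Xc / n                   # first scatter: covariance
    # second scatter: E[(x-mu)' S1^{-1} (x-mu) (x-mu)(x-mu)']
    w = np.einsum("ij,jk,ik->i", Xc, np.linalg.inv(S1), Xc)
    S2 = (Xc * w[:, None]).T @ Xc / n
    # simultaneous diagonalization of the pair (S2, S1)
    evals, B = eigh(S2, S1)
    # under Gaussian noise each eigenvalue equals p + 2; directions
    # whose eigenvalues deviate most from p + 2 span the signal
    order = np.argsort(np.abs(evals - (p + 2)))[::-1]
    U = B[:, order[:k]]
    Q, _ = np.linalg.qr(U)               # orthonormalize the span
    return Q @ Q.T                       # rank-k orthogonal projection

# toy example: 5-variate data with a non-Gaussian signal in 2 coordinates
rng = np.random.default_rng(0)
X = np.column_stack([rng.exponential(size=500),
                     rng.uniform(size=500),
                     rng.normal(size=(500, 3))])
P_hat = fobi_projection(X, k=2)
print(np.round(P_hat, 2))

The robustified variants mentioned in the abstract would follow the same scheme, substituting robust scatter estimates for S1 and S2.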
Item Metadata
Title | Subspace estimation in linear dimension reduction
Creator | Oja, Hannu
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2015-11-19T19:34
Extent | 48 minutes
Subject |
Type |
File Format | video/mp4
Language | eng
Notes | Author affiliation: University of Turku
Series |
Date Available | 2016-05-20
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0303125
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Researcher
Rights URI |
Aggregated Source Repository | DSpace