Subspace learning via low rank projections for dimensionality reduction

Author(s):  
Devansh Arpit ◽  
Chetan Ramaiah ◽  
Venu Govindaraju
2021 ◽  
Vol 12 (1)
Author(s):  
Joshua T. Vogelstein ◽  
Eric W. Bridgeford ◽  
Minh Tang ◽  
Da Zheng ◽  
Christopher Douville ◽  
...  

Abstract
To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, hoping that data science techniques can build accurate data-driven inferences. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, valid inferences require finding a low-dimensional representation that preserves the discriminating information (e.g., whether the individual suffers from a particular disease). There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach that extends principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection. The simplest version, Linear Optimal Low-rank projection (LOL), incorporates the class-conditional means. We prove, and substantiate with both synthetic and real data benchmarks, that LOL and its generalizations lead to improved data representations for subsequent classification while maintaining computational efficiency and scalability. Using multiple brain imaging datasets with more than 150 million features and several genomics datasets with more than 500,000 features, LOL outperforms other scalable linear dimensionality reduction techniques in accuracy while requiring only a few minutes on a standard desktop computer.
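The abstract's core construction, augmenting a PCA-style basis with class-conditional mean differences, can be sketched compactly. The following is a minimal NumPy illustration of that idea under simplifying assumptions, not the authors' released implementation; the function name `lol_project` and the QR orthonormalization step are my own choices.

```python
import numpy as np

def lol_project(X, y, d):
    """Minimal sketch of a LOL-style supervised projection.

    Concatenates class-conditional mean differences with the top
    principal directions of the class-centered data, then
    orthonormalizes the combined basis.
    X: (n, p) data, y: (n,) class labels, d: target dimension
    (must exceed the number of classes minus one).
    """
    classes = np.unique(y)
    # Class-conditional means: the first-moment information added to PCA.
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    # Differences of class means relative to the first class, shape (p, k-1).
    deltas = (means[1:] - means[0]).T
    # Center each sample at its own class mean (classes is sorted, so
    # searchsorted maps labels to row indices in `means`), then take
    # the top principal directions of the residual.
    Xc = X - means[np.searchsorted(classes, y)]
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Fill the remaining target dimensions with principal directions.
    A = np.hstack([deltas, Vt[: d - deltas.shape[1]].T])  # (p, d)
    # Orthonormalize so the projection is a proper basis.
    Q, _ = np.linalg.qr(A)
    return X @ Q  # (n, d) low-dimensional representation

# Example: project 2-class, 1000-dimensional data down to 10 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = np.repeat([0, 1], 100)
X[y == 1] += 0.5  # shift class 1 so the mean difference is informative
Z = lol_project(X, y, d=10)
```

Because the sketch needs only class means, one thin SVD, and one QR factorization, it illustrates how this family of methods can stay fast even when the ambient dimension is very large.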


Author(s):  
Kewei Tang ◽  
Xiaodong Liu ◽  
Zhixun Su ◽  
Wei Jiang ◽  
Jiangxin Dong

Energy ◽  
2021 ◽  
pp. 122680
Author(s):  
Fahad Iqbal Syed ◽  
Temoor Muther ◽  
Amirmasoud Kalantari Dahaghi ◽  
Shahin Neghabhan

Author(s):  
Olga Mendoza-Schrock ◽  
Mateen M. Rizki ◽  
Vincent J. Velten

This article describes how transfer subspace learning has recently gained popularity for its ability to perform cross-dataset and cross-domain object recognition. The ability to leverage existing data without additional data collection is attractive for monitoring and surveillance technology, specifically for aided target recognition applications. Transfer subspace learning enables the incorporation of sparse and dynamically collected data into existing systems that rely on large databases. Manifold learning has also gained popularity for its success at dimensionality reduction. In this contribution, manifold learning and transfer subspace learning are combined to create a new system capable of achieving high target recognition rates. The manifold learning technique used here is diffusion maps, a nonlinear dimensionality reduction technique based on a heat-diffusion analogy. The transfer subspace learning technique used is Transfer Fisher's Linear Discriminative Analysis. The new system, manifold transfer subspace learning, sequentially integrates manifold learning and transfer subspace learning. The article demonstrates the new technique's ability to achieve high target recognition rates in cross-dataset and cross-domain applications using a variety of datasets.
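As a rough illustration of the diffusion-maps stage described above, here is a minimal NumPy sketch: build a Gaussian (heat) kernel over pairwise distances, row-normalize it into a Markov transition matrix, and embed each point using the leading nontrivial eigenvectors. The function name and parameter defaults are illustrative assumptions, and the subsequent transfer subspace learning stage (Transfer Fisher's Linear Discriminative Analysis) is not shown.

```python
import numpy as np

def diffusion_map(X, n_components=2, epsilon=1.0, t=1):
    """Minimal diffusion-maps sketch (heat-kernel embedding).

    X: (n, p) data; epsilon: kernel bandwidth; t: diffusion time.
    Returns an (n, n_components) nonlinear embedding.
    """
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0)
    # Heat-diffusion affinity kernel.
    K = np.exp(-D2 / epsilon)
    # Row-normalize into a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # P is conjugate to a symmetric matrix, so its spectrum is real;
    # take real parts and sort eigenpairs by decreasing eigenvalue.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1) and weight
    # the next coordinates by eigenvalues raised to the diffusion time.
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```

In a sequential pipeline like the one the abstract describes, an embedding of this kind would be computed first, and the transfer subspace learning step would then operate on the reduced coordinates rather than on the raw features.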

