An Evaluation of Supervised Dimensionality Reduction For Large Scale Data

2022 ◽  
pp. 17-25
Author(s):  
Nancy Jan Sliper

Experimenters today routinely quantify millions or even billions of characteristics (measurements) per sample to address critical biological questions, in the hope that machine learning tools will be able to make correct data-driven judgments. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, an efficient analysis requires a low-dimensional representation that preserves the differentiating features (e.g., whether a certain ailment is present in a person's body). Few existing methods can handle millions of variables while retaining strong empirical and conceptual guarantees and remaining clearly interpretable. This research presents an evaluation of supervised dimensionality reduction for large-scale data. We provide a methodology for expanding Principal Component Analysis (PCA) by including class-conditional moment estimates in low-dimensional projections. Linear Optimal Low-Rank (LOLR) projection, the simplest variant, includes the class-conditional means. Using both experimental and simulated data benchmarks, we show that LOLR projections and their extensions enhance representations of data for subsequent classification while retaining computational flexibility and reliability. In terms of accuracy, LOLR outperforms other scalable linear dimension reduction methods that require much longer computation times on conventional computers. LOLR scales to brain image processing datasets with more than 150 million attributes and to genome sequencing datasets with more than half a million attributes.
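The core construction described above augments PCA with class-conditional means. Below is a minimal sketch of such a LOLR-style projection for a two-class problem; the function name lolr_fit and all parameter choices are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def lolr_fit(X, y, d):
    """Illustrative LOLR-style projection: stack the class-mean difference
    with the top principal directions of the class-centered data.
    X: (n, p) data matrix, y: binary labels, d: target dimension."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    delta = (means[0] - means[1]).reshape(-1, 1)   # class-conditional mean difference
    Xc = X - means[np.searchsorted(classes, y)]    # remove each sample's class mean
    # top d-1 right singular vectors of the class-centered data (PCA directions)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = np.hstack([delta, Vt[: d - 1].T])
    Q, _ = np.linalg.qr(basis)                     # orthonormalize the combined basis
    return Q                                       # (p, d) projection matrix
```

Projecting with Z = X @ lolr_fit(X, y, d) then yields the low-dimensional representation used for subsequent classification.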

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Joshua T. Vogelstein ◽  
Eric W. Bridgeford ◽  
Minh Tang ◽  
Da Zheng ◽  
Christopher Douville ◽  
...  

To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, valid inferences require finding a low-dimensional representation that preserves the discriminating information (e.g., whether the individual suffers from a particular disease). There is a lack of interpretable supervised dimensionality reduction methods that scale to millions of dimensions with strong statistical theoretical guarantees. We introduce an approach to extending principal components analysis by incorporating class-conditional moment estimates into the low-dimensional projection. The simplest version, Linear Optimal Low-Rank Projection, incorporates the class-conditional means. We prove, and substantiate with both synthetic and real data benchmarks, that Linear Optimal Low-Rank Projection and its generalizations lead to improved data representations for subsequent classification, while maintaining computational efficiency and scalability. Using multiple brain imaging datasets consisting of more than 150 million features, and several genomics datasets with more than 500,000 features, Linear Optimal Low-Rank Projection outperforms other scalable linear dimensionality reduction techniques in terms of accuracy, while only requiring a few minutes on a standard desktop computer.
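As a complement to the earlier sketch, the following hedged example wires the illustrative lolr_fit helper into an end-to-end pipeline, project first, then classify in the low-dimensional space, mirroring the abstract's emphasis on subsequent classification. The synthetic data, dimensions, and planted signal are assumptions for demonstration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, p, d = 200, 1000, 5                  # n << p, as in the abstract's regime
X = rng.normal(size=(n, p))
y = rng.integers(0, 2, size=n)
X[y == 1, :10] += 1.0                   # plant a weak class signal in 10 features

Q = lolr_fit(X, y, d)                   # (p, d) orthonormal basis from the sketch above
Z = X @ Q                               # low-dimensional representation
clf = LinearDiscriminantAnalysis().fit(Z, y)
print("training accuracy:", clf.score(Z, y))
```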


2014 ◽  
Vol 26 (4) ◽  
pp. 761-780 ◽  
Author(s):  
Guoqiang Zhong ◽  
Mohamed Cheriet

We present a supervised model for tensor dimensionality reduction, called large margin low rank tensor analysis (LMLRTA). In contrast to traditional vector-representation-based dimensionality reduction methods, LMLRTA can take tensors of any order as input. Unlike previous tensor dimensionality reduction methods, which can only learn low-dimensional embeddings with an a priori specified dimensionality, LMLRTA can automatically and jointly learn the dimensionality and the low-dimensional representations from data. Moreover, LMLRTA delivers low-rank projection matrices while encouraging data of the same class to be close and data of different classes to be separated by a large margin in the low-dimensional tensor space. LMLRTA can be optimized using an iterative fixed-point continuation algorithm, which is guaranteed to converge to a locally optimal solution of the optimization problem. We evaluate LMLRTA on an object recognition application, where the data are represented as 2D tensors, and on a face recognition application, where the data are represented as 3D tensors. Experimental results show the superiority of LMLRTA over state-of-the-art approaches.
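The fixed-point continuation optimization the abstract refers to alternates a gradient step with singular value thresholding, the proximal operator of the nuclear norm. The sketch below illustrates one such update for a single mode's projection matrix, using a simplified pairwise attract/repel loss as a stand-in for the paper's exact large-margin objective; all names, step sizes, and the loss itself are assumptions. Because thresholding zeroes out small singular values, the rank, and hence the output dimensionality, emerges from the data rather than being fixed a priori.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm.
    Shrinking small singular values to zero is what lets the rank (and thus
    the embedding dimensionality) be learned rather than prespecified."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    keep = s > 0
    return (U[:, keep] * s[keep]) @ Vt[keep]

def lmlrta_step(P, X, y, lr=0.01, tau=0.05):
    """One illustrative fixed-point continuation update for one mode's
    projection P of shape (a, r): pull same-class pairs together, push
    different-class pairs apart in the projected space, then threshold.
    X is a list of 2D tensors of shape (a, b); the full method would
    alternate such updates over all tensor modes."""
    grad = np.zeros_like(P)
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            sign = 1.0 if y[i] == y[j] else -1.0   # attract same, repel different
            D = X[i] - X[j]
            grad += 2.0 * sign * (D @ D.T) @ P     # grad of ±||P.T Xi - P.T Xj||_F^2
    return svt(P - lr * grad, tau)
```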


2012 ◽  
Vol 60 (3) ◽  
pp. 389-405 ◽  
Author(s):  
G. Zhou ◽  
A. Cichocki

A multiway blind source separation (MBSS) method is developed to decompose large-scale tensor (multiway array) data. Benefiting from a variety of well-established constrained low-rank matrix factorization methods, MBSS is quite flexible and able to extract unique, interpretable components with physical meaning. The multilinear structure of the Tucker model and the essential uniqueness of BSS methods allow MBSS to estimate each component matrix separately from an unfolding matrix in each mode. Consequently, the alternating least squares (ALS) iterations that are considered the workhorse of tensor decompositions can be avoided, and various robust and efficient dimensionality reduction methods can easily be incorporated to pre-process the data, which makes MBSS extremely fast, especially for large-scale problems. Identification and uniqueness conditions are also discussed. Two practical issues, dimensionality reduction and estimation of the number of components, are addressed based on sparse and random fiber sampling. Extensive simulations confirmed the validity, flexibility, and high efficiency of the proposed method. We also demonstrated by simulations that the MBSS approach can successfully extract the desired components where most existing algorithms may fail on ill-conditioned and large-scale problems.
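The separation of concerns the abstract describes, estimating each mode's component matrix independently from that mode's unfolding with no ALS sweeps, can be sketched as follows. FastICA is used here as one plausible stand-in for the well-established BSS/factorization building block; the ranks, function names, and the pseudoinverse-based core computation are illustrative assumptions, not the authors' algorithm in full.

```python
import numpy as np
from sklearn.decomposition import FastICA

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mbss(T, ranks):
    """Estimate one component matrix per mode from its own unfolding,
    so no alternating least squares iterations are needed."""
    factors = []
    for mode, r in enumerate(ranks):
        M = unfold(T, mode)                    # (I_mode, product of other dims)
        ica = FastICA(n_components=r, random_state=0)
        ica.fit(M.T)                           # each column of M = mixture of r sources
        factors.append(ica.mixing_)            # (I_mode, r) component matrix
    G = T                                      # core tensor via multilinear projection
    for mode, A in enumerate(factors):
        G = np.moveaxis(np.tensordot(np.linalg.pinv(A), G, axes=(1, mode)), 0, mode)
    return factors, G

# usage: factors, core = mbss(np.random.default_rng(0).normal(size=(20, 30, 40)), (3, 4, 5))
```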

