LOCALLY LINEAR EMBEDDING: A REVIEW

Author(s):  
JING CHEN ◽  
ZHENGMING MA

The goal of nonlinear dimensionality reduction is to find the meaningful low-dimensional structure of a nonlinear manifold from high-dimensional data. As a classic method of nonlinear dimensionality reduction, locally linear embedding (LLE) is increasingly attractive to researchers due to its ability to deal with large amounts of high-dimensional data and its noniterative way of finding the embeddings. However, several problems in the LLE algorithm remain open, such as its sensitivity to noise, inevitable ill-conditioned eigenproblems, and its inability to deal with novel data. In this paper, the existing extensions are comprehensively reviewed, discussed, and classified into different categories. Their strategies, advantages and disadvantages, and performances are elaborated. By generalizing the tactics used in various extensions at different stages of LLE and evaluating their performances, several promising directions for future research are suggested.
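
As a concrete reference point for the review above, the following is a minimal sketch of the standard (unsupervised) LLE pipeline using scikit-learn; the Swiss roll data and the parameter values (12 neighbors, 2 output dimensions) are illustrative assumptions rather than settings from any of the reviewed papers.

```python
# Minimal sketch of standard LLE on a synthetic nonlinear manifold.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# High-dimensional samples lying on a two-dimensional nonlinear manifold.
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Neighbor search, reconstruction weights and the sparse eigenproblem are
# handled internally; the embedding is obtained non-iteratively.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="standard")
Y = lle.fit_transform(X)

print("embedding shape:", Y.shape)                 # (1500, 2)
print("reconstruction error:", lle.reconstruction_error_)
```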

2014 ◽  
Vol 1014 ◽  
pp. 375-378 ◽  
Author(s):  
Ri Sheng Huang

To effectively improve the performance of speech emotion recognition, nonlinear dimensionality reduction must be performed on speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. This paper proposes an improved SLLE algorithm, which enhances the discriminating power of the low-dimensional embedded data and possesses optimal generalization ability. The proposed algorithm is used to perform nonlinear dimensionality reduction on 48-dimensional speech emotional feature data, including prosody, so as to recognize three emotions: anger, joy, and neutral. Experimental results on a natural speech emotion database demonstrate that the proposed algorithm obtains the highest accuracy of 90.97% with fewer than 9 embedded features, an 11.64% improvement over the SLLE algorithm.
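
The abstract does not spell out the supervised step, so the sketch below shows the distance adjustment commonly used by SLLE-style methods before neighbor selection: distances between samples with different labels are inflated so that neighbors tend to share a class. The toy data, the label encoding and the scaling constant alpha are assumptions for illustration, not the 48-dimensional emotional features used in the paper.

```python
# Supervised distance adjustment in the spirit of SLLE: cross-class distances
# are inflated before the k-nearest-neighbor search. Illustrative sketch only.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def supervised_distances(X, y, alpha=0.3):
    """Pairwise distances with cross-class pairs inflated by alpha * max(D)."""
    D = squareform(pdist(X))                 # ordinary Euclidean distances
    different = y[:, None] != y[None, :]     # True where labels differ
    return D + alpha * D.max() * different

# Toy data standing in for 48-dimensional speech-emotion feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 48))
y = np.repeat([0, 1, 2], 30)                 # e.g. anger / joy / neutral

D_sup = supervised_distances(X, y)
k = 8
# k nearest neighbors under the supervised metric (column 0 is the point itself).
neighbors = np.argsort(D_sup, axis=1)[:, 1:k + 1]
print(neighbors.shape)                       # (90, 8)
```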


2010 ◽  
Vol 139-141 ◽  
pp. 2599-2602
Author(s):  
Zheng Wei Li ◽  
Ru Nie ◽  
Yao Fei Han

Fault diagnosis is a kind of pattern recognition problem, and how to extract diagnosis features and improve recognition performance is a difficult issue. Locally linear embedding (LLE) is an unsupervised nonlinear technique that extracts useful features from high-dimensional data sets while preserving local topology. However, the original LLE method does not take the known class label information of the input data into account. A new characteristics similarity-based supervised locally linear embedding (CSSLLE) method for fault diagnosis is proposed in this paper. The CSSLLE method extracts the intrinsic manifold features from high-dimensional fault data by computing Euclidean distances based on characteristics similarity, translating the complex mode space into a low-dimensional feature space in which fault classification and diagnosis are carried out easily. Experiments on benchmark data and a real fault dataset demonstrate that the proposed approach obtains better performance than SLLE and is an accurate technique for fault diagnosis.
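
A minimal sketch of the extract-then-classify workflow described above, assuming a plain (unsupervised) scikit-learn LLE and a k-NN classifier as stand-ins for the characteristics-similarity variant; the synthetic data are an assumption for illustration only.

```python
# Reduce high-dimensional "fault" vectors with LLE, then classify in the
# low-dimensional feature space. Stand-in for the CSSLLE pipeline.
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=1)

# Map the fault vectors to a 3-D feature space with preserved local topology.
Y = LocallyLinearEmbedding(n_neighbors=15, n_components=3).fit_transform(X)

# Fault classification is carried out in the embedded space.
Y_train, Y_test, y_train, y_test = train_test_split(Y, y, random_state=1)
clf = KNeighborsClassifier(n_neighbors=5).fit(Y_train, y_train)
print("accuracy in the embedded space:", clf.score(Y_test, y_test))
```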


2014 ◽  
Vol 644-650 ◽  
pp. 2160-2163 ◽  
Author(s):  
Shi Min Liu ◽  
Yan Ni Deng ◽  
Yuan Xing Lv

The locally linear embedding (LLE) algorithm makes up for the shortcoming that manifold learning algorithms can only be applied to training samples and cannot be extended to test samples. However, its low-dimensional feature space contains redundant information, and sample category information is not integrated into the low-dimensional embedding. To address these shortcomings, two improved algorithms are introduced here: the local linear maximum dispersion matrix algorithm (FSLLE) and the adaptive algorithm (ALLE), as well as their combination. In the experiments, Gabor features are combined with the locally linear embedding (LLE) algorithm and the results of each method are compared. The results show that redundant information among the basis vectors is effectively eliminated and the recognition rate is improved.


Author(s):  
Yuan Li ◽  
Chengcheng Feng

Aiming at fault detection in industrial processes with nonlinear or high-dimensional data, a novel fault detection method based on locally linear embedding preserve neighborhood is proposed in this paper. Locally linear embedding preserve neighborhood is a feature-mapping method that combines the locally linear embedding and Laplacian eigenmaps algorithms. First, two weight matrices are obtained by locally linear embedding and Laplacian eigenmaps, respectively. Subsequently, the two weight matrices are combined through a balance factor to obtain the objective function. The locally linear embedding preserve neighborhood method can effectively maintain the characteristics of the data in the high-dimensional space. The purpose of dimension reduction is to map the high-dimensional data to a low-dimensional space by optimizing the objective function. Process monitoring is performed by constructing T² and Q statistics. To demonstrate its effectiveness and superiority, the proposed locally linear embedding preserve neighborhood fault detection method is tested on the Swiss roll dataset and an industrial case study. Compared with traditional fault detection methods, the proposed method effectively improves the detection rate and reduces the false alarm rate.
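
The abstract names the ingredients but not the exact objective, so the sketch below shows one plausible reading of the described combination: the LLE reconstruction matrix (I - W)^T (I - W) and a Laplacian-eigenmaps graph Laplacian built on the same k-NN graph are blended with a balance factor before the eigen-decomposition. The heat-kernel width, the balance factor eta and the use of a standard rather than generalized eigenproblem are assumptions, not the paper's formulation.

```python
# Blend the LLE and Laplacian-eigenmaps weight matrices with a balance factor
# and embed via the bottom non-trivial eigenvectors. Illustrative sketch only.
import numpy as np
from scipy.linalg import eigh
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import NearestNeighbors

X, _ = make_swiss_roll(n_samples=400, random_state=0)
n, k, d, eta = X.shape[0], 10, 2, 0.5        # samples, neighbors, dims, balance

dist, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
dist, idx = dist[:, 1:], idx[:, 1:]          # drop each point itself

# LLE part: barycentric reconstruction weights W and M = (I - W)^T (I - W).
W = np.zeros((n, n))
for i in range(n):
    Z = X[idx[i]] - X[i]                     # centered neighborhood
    G = Z @ Z.T
    G += 1e-3 * np.trace(G) * np.eye(k)      # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    W[i, idx[i]] = w / w.sum()
M_lle = (np.eye(n) - W).T @ (np.eye(n) - W)

# Laplacian-eigenmaps part: heat-kernel adjacency on the same k-NN graph.
S = np.zeros((n, n))
sigma = np.median(dist)
for i in range(n):
    S[i, idx[i]] = np.exp(-dist[i] ** 2 / (2 * sigma ** 2))
S = np.maximum(S, S.T)                       # symmetrize
L = np.diag(S.sum(axis=1)) - S               # graph Laplacian

# Balance factor eta combines the two objectives into one symmetric matrix.
M = eta * M_lle + (1 - eta) * L
vals, vecs = eigh(M)
Y = vecs[:, 1:d + 1]                         # skip the constant eigenvector
print(Y.shape)                               # (400, 2)
```

Monitoring statistics such as T² and Q would then be built on the low-dimensional scores and on the residual space, respectively, following the usual multivariate statistical process monitoring recipe.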


2014 ◽  
Vol 536-537 ◽  
pp. 49-52
Author(s):  
Xiang Wang ◽  
Yuan Zheng

Fault diagnosis is essentially a kind of pattern recognition. In this paper, a novel machinery fault diagnosis method based on supervised locally linear embedding is proposed. The approach first applies the manifold learning algorithm locally linear embedding to the high-dimensional fault signal samples to learn the intrinsic embedded multiple-manifold features corresponding to different fault modes. Supervised locally linear embedding not only maps them into a low-dimensional embedded space to achieve fault feature extraction, but can also deal with new fault samples. Finally, fault classification is carried out in the embedded manifold space. Ball bearing fault signals are used to validate the proposed fault diagnosis method. The results indicate that the proposed approach markedly improves fault classification performance and outperforms traditional approaches.
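
A minimal sketch of the out-of-sample step alluded to above ("can deal with new fault samples"): an unseen sample is reconstructed from its nearest training samples, and the same barycentric weights place it in the existing low-dimensional embedding. The synthetic data and parameters are assumptions standing in for the ball bearing signals; scikit-learn's LocallyLinearEmbedding.transform performs an equivalent mapping.

```python
# Embed a training set with LLE, then map an unseen sample into the same
# low-dimensional space through its reconstruction weights. Sketch only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import NearestNeighbors

X_train, y_train = make_classification(n_samples=300, n_features=64,
                                        n_informative=8, n_classes=4,
                                        n_clusters_per_class=1, random_state=2)
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3).fit(X_train)
Y_train = lle.embedding_

def embed_new_sample(x_new, X_train, Y_train, k=12, reg=1e-3):
    """Place one unseen sample in the embedding via its training neighbors."""
    idx = NearestNeighbors(n_neighbors=k).fit(X_train).kneighbors(
        x_new.reshape(1, -1), return_distance=False)[0]
    Z = X_train[idx] - x_new                 # centered neighborhood
    G = Z @ Z.T
    G += reg * np.trace(G) * np.eye(k)       # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                             # barycentric weights
    return w @ Y_train[idx]                  # same weights in the embedding

y_new = embed_new_sample(X_train[0] + 0.01, X_train, Y_train)
print(y_new)                                 # 3-D coordinates of the new sample
```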


2017 ◽  
Author(s):  
Genevieve L. Stein-O’Brien ◽  
Raman Arora ◽  
Aedin C. Culhane ◽  
Alexander V. Favorov ◽  
Lana X. Garmire ◽  
...  

Omics data contains signal from the molecular, physical, and kinetic inter- and intra-cellular interactions that control biological systems. Matrix factorization techniques can reveal low-dimensional structure from high-dimensional data that reflect these interactions. These techniques can uncover new biological knowledge from diverse high-throughput omics data in topics ranging from pathway discovery to time course analysis. We review exemplary applications of matrix factorization for systems-level analyses. We discuss appropriate application of these methods, their limitations, and focus on analysis of results to facilitate optimal biological interpretation. The inference of biologically relevant features with matrix factorization enables discovery from high-throughput data beyond the limits of current biological knowledge—answering questions from high-dimensional data that we have not yet thought to ask.
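
To make the matrix-factorization idea concrete, the sketch below factors a synthetic (samples x genes) count matrix with non-negative matrix factorization in scikit-learn; NMF, the random Poisson data and the choice of 10 components are illustrative assumptions, since the review covers several factorization families.

```python
# Factor a (samples x genes) matrix into low-dimensional sample and gene
# patterns. Synthetic data; stands in for real high-throughput omics matrices.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.poisson(lam=3.0, size=(100, 2000)).astype(float)    # samples x genes

model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)      # sample-level pattern weights (100 x 10)
H = model.components_           # gene-level signatures       (10 x 2000)
print(W.shape, H.shape)
```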

