Local quasi-linear embedding based on Kronecker product expansion of vectors

2021 ◽  
pp. 1-11
Author(s):  
Guo Niu ◽  
Zhengming Ma

Locally Linear Embedding (LLE) is honored as the first algorithm of manifold learning. Generally speaking, the relation between a data point and its nearest neighbors is nonlinear, and LLE extracts only its linear part. Local nonlinear embedding is therefore an important direction for improving LLE; however, any attempt in this direction may significantly increase computational complexity. In this paper, a novel algorithm called local quasi-linear embedding (LQLE) is proposed. In LQLE, each high-dimensional data vector is first expanded using the Kronecker product. The expanded vector contains not only the components of the original vector but also polynomials of those components. Each expanded vector is then linearly approximated by the expanded vectors of its nearest neighbors. In this way, LQLE achieves a certain degree of local nonlinearity and learns the dimensionality reduction results under the principle of keeping the local nonlinearity unchanged. More importantly, LQLE does not increase computational complexity: it simply replaces the data vectors in the original LLE program with their Kronecker product expansions. Experimental comparisons between the proposed method and four baseline algorithms on various datasets demonstrate its strong performance.
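The two steps described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a degree-2 Kronecker self-product as the expansion and the standard regularized LLE local weight solve.

```python
import numpy as np

def kron_expand(x):
    """Expand a vector with its Kronecker self-product, so the expanded
    vector holds the original components plus degree-2 polynomial terms.
    (Hypothetical sketch of the expansion step; the paper's exact
    expansion may differ.)"""
    return np.concatenate([x, np.kron(x, x)])

def reconstruction_weights(x, neighbors, reg=1e-3):
    """Standard LLE local step: solve for weights, summing to 1, that
    best reconstruct x from its neighbors (Tikhonov-regularized)."""
    Z = neighbors - x                          # shifted neighbors, shape (k, d)
    G = Z @ Z.T                                # local Gram matrix, shape (k, k)
    G += reg * np.trace(G) * np.eye(len(G))    # regularize for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=5)
nbrs = rng.normal(size=(4, 5))

# LQLE idea: run the unchanged weight computation on expanded vectors
xe = kron_expand(x)                            # length 5 + 25 = 30
nbrs_e = np.array([kron_expand(n) for n in nbrs])
w = reconstruction_weights(xe, nbrs_e)         # weights sum to 1 by construction
```

Because only the input vectors change, the per-neighborhood Gram matrix stays k-by-k, which is why the local solve costs the same as in plain LLE.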

Author(s):  
Jing Chen ◽  
Zhengming Ma

The goal of nonlinear dimensionality reduction is to find the meaningful low-dimensional structure of a nonlinear manifold from high-dimensional data. As a classic method of nonlinear dimensionality reduction, locally linear embedding (LLE) is increasingly attractive to researchers due to its ability to deal with large amounts of high-dimensional data and its noniterative way of finding the embeddings. However, several problems in the LLE algorithm remain open, such as its sensitivity to noise, inevitable ill-conditioned eigenproblems, and the inability to handle novel (out-of-sample) data. In this paper, the existing extensions are comprehensively reviewed, discussed, and classified into different categories. Their strategies, advantages, disadvantages, and performance are elaborated. By generalizing the tactics used in extensions of the different stages of LLE and evaluating their performance, several promising directions for future research are suggested.
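For reference, the full LLE pipeline the review is organized around fits in one short function. This is a minimal sketch under common assumptions (Euclidean k-nearest neighbors, trace-scaled regularization of the local Gram matrix, dense eigendecomposition); the noniterative embedding step is the bottom-eigenvector solve on M = (I − W)ᵀ(I − W), the same step whose ill-conditioning the abstract mentions.

```python
import numpy as np

def lle_embed(X, k=5, d=2, reg=1e-3):
    """Minimal LLE sketch: k-NN graph, local reconstruction weights,
    then the noniterative eigen step on M = (I - W)^T (I - W)."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 1]        # k nearest neighbors, self excluded
        Z = X[idx] - X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)     # guard against ill-conditioning
        w = np.linalg.solve(G, np.ones(k))
        W[i, idx] = w / w.sum()
    # embedding: bottom eigenvectors of M, discarding the constant one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
Y = lle_embed(X)                               # shape (30, 2)
```

Most extensions surveyed modify exactly one of these three stages: the neighbor selection, the weight solve, or the final eigenproblem.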


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 139512-139528
Author(s):  
Shuangjie Li ◽  
Kaixiang Zhang ◽  
Qianru Chen ◽  
Shuqin Wang ◽  
Shaoqiang Zhang

2010 ◽  
Vol 139-141 ◽  
pp. 2599-2602
Author(s):  
Zheng Wei Li ◽  
Ru Nie ◽  
Yao Fei Han

Fault diagnosis is a kind of pattern recognition problem, and extracting diagnostic features while improving recognition performance is difficult. Locally Linear Embedding (LLE) is an unsupervised nonlinear technique that extracts useful features from high-dimensional data sets while preserving local topology. However, the original LLE method does not take the known class labels of the input data into account. A new characteristics similarity-based supervised locally linear embedding (CSSLLE) method for fault diagnosis is proposed in this paper. CSSLLE attempts to extract the intrinsic manifold features from high-dimensional fault data by computing a Euclidean distance based on characteristics similarity, translating the complex pattern space into a low-dimensional feature space in which fault classification and diagnosis are easily carried out. Experiments on benchmark data and a real fault dataset demonstrate that the proposed approach outperforms SLLE and is an accurate technique for fault diagnosis.
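The core idea of supervised LLE variants is to bias the neighborhood distance with label information. The sketch below is a generic illustration of that idea, not the paper's characteristics-similarity measure: it simply inflates the Euclidean distance between points from different classes so neighbors are preferentially drawn from the same fault class.

```python
import numpy as np

def supervised_distance(X, y, alpha=1.0):
    """Hypothetical supervised-LLE distance: add a label-mismatch penalty
    to the Euclidean distance matrix. `alpha` scales the penalty relative
    to the largest observed distance (an illustrative choice)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    mismatch = (y[:, None] != y[None, :]).astype(float)
    return D + alpha * D.max() * mismatch

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])
y = np.array([0, 1, 0])
D = supervised_distance(X, y)
# the same-class pair (0, 2) now ranks closer than the cross-class pair (0, 1)
```

With such a distance matrix in hand, the rest of the LLE pipeline (weight solve and eigen embedding) runs unchanged, which is why these supervised variants are cheap to implement.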


Author(s):  
Yuan Li ◽  
Chengcheng Feng

Aiming at fault detection in industrial processes that are nonlinear or high-dimensional, a novel fault detection method based on locally linear embedding preserve neighborhood is proposed in this paper. Locally linear embedding preserve neighborhood is a feature-mapping method that combines the locally linear embedding and Laplacian eigenmaps algorithms. First, two weight matrices are obtained by locally linear embedding and Laplacian eigenmaps, respectively. The two weight matrices are then combined through a balance factor to form the objective function. The locally linear embedding preserve neighborhood method can effectively maintain the characteristics of the data in high-dimensional space; dimension reduction maps the high-dimensional data to a low-dimensional space by optimizing this objective function. Process monitoring is performed by constructing T² and Q statistics. To demonstrate its effectiveness and superiority, the proposed method is tested on the Swiss Roll dataset and an industrial case study. Compared with traditional fault detection methods, the proposed method effectively improves the detection rate and reduces the false alarm rate.
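The monitoring step can be illustrated concretely. In the sketch below, a plain PCA projection stands in for the learned locally linear embedding preserve neighborhood mapping (an assumption for illustration only); what it shows is the standard construction of the T² statistic in the retained subspace and the Q (squared prediction error) statistic in the residual subspace.

```python
import numpy as np

def t2_q_statistics(X, n_components=2):
    """Compute per-sample T^2 and Q monitoring statistics from a linear
    projection. PCA is used here as a stand-in projection; the paper's
    method would substitute its own mapping."""
    Xc = X - X.mean(axis=0)                    # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                    # loading matrix
    lam = (s[:n_components] ** 2) / (len(X) - 1)  # component variances
    T = Xc @ P                                 # scores in retained subspace
    t2 = np.sum((T ** 2) / lam, axis=1)        # T^2: variance-scaled distance
    resid = Xc - T @ P.T                       # residual subspace part
    q = np.sum(resid ** 2, axis=1)             # Q: squared prediction error
    return t2, q

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 6))
t2, q = t2_q_statistics(X)                     # one value of each per sample
```

In practice, control limits for T² and Q are estimated from normal-operation data, and a sample is flagged as faulty when either statistic exceeds its limit.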


2014 ◽  
Vol 1014 ◽  
pp. 375-378 ◽  
Author(s):  
Ri Sheng Huang

To effectively improve performance on speech emotion recognition, nonlinear dimensionality reduction is needed for speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. This paper proposes an improved SLLE algorithm that enhances the discriminating power of the low-dimensional embedded data and possesses optimal generalization ability. The proposed algorithm performs nonlinear dimensionality reduction on 48-dimensional speech emotional feature data, including prosody, to recognize three emotions: anger, joy, and neutral. Experimental results on a natural speech emotion database demonstrate that the proposed algorithm obtains the highest accuracy of 90.97% with fewer than 9 embedded features, an 11.64% improvement over the SLLE algorithm.

