Learning Neural Representations and Local Embedding for Nonlinear Dimensionality Reduction Mapping

Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1017
Author(s):  
Sheng-Shiung Wu ◽  
Sing-Jie Jong ◽  
Kai Hu ◽  
Jiann-Ming Wu

This work explores neural approximation for nonlinear dimensionality reduction mapping based on internal representations of graph-organized regular data supports. The given training observations are assumed to be sampled from a high-dimensional space containing an embedded low-dimensional manifold. An approximating function with adaptable built-in parameters is optimized against the training observations by the proposed learning process, and is then verified by transforming novel testing observations to images in the low-dimensional output space. The optimized internal representations sketch graph-organized supports of distributed data clusters and their representative images in the output space. On this basis, the approximating function can operate at test time without retaining the original massive set of training observations. The neural approximating model contains multiple modules, each of which activates a non-zero output for mapping in response to an input inside its corresponding local support. The graph-organized data supports have lateral interconnections that represent neighboring relations, allow inference of the minimal path between the centroids of any two data supports, and supply distance constraints for mapping all centroids to images in the output space. Following the distance-preserving principle, this work proposes Levenberg-Marquardt learning for optimizing the images of the centroids in the output space subject to the given distance constraints, and further develops local embedding constraints for mapping during the execution phase. Numerical simulations show that the proposed neural approximation is effective and reliable for nonlinear dimensionality reduction mapping.
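The distance-preserving learning step can be sketched as a least-squares stress minimization over graph (minimal-path) distances. A minimal sketch under illustrative assumptions: the ring graph, the centroid count, and the 2-D output space are invented for the example, and scipy's Levenberg-Marquardt solver stands in for the paper's learning rule.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse.csgraph import shortest_path

# Hypothetical toy setup: 6 centroids on a ring graph; lateral links
# give graph distances, and we seek 2-D images that preserve the
# minimal-path length between every centroid pair.
n = 6
W = np.zeros((n, n))
for i in range(n):  # ring: each centroid linked to the next
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
D = shortest_path(W)  # minimal-path distances between centroids

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

def residuals(flat):
    """Mismatch between embedded distances and graph distances."""
    Y = flat.reshape(n, 2)
    return np.array([np.linalg.norm(Y[i] - Y[j]) - D[i, j]
                     for i, j in pairs])

# Levenberg-Marquardt stress minimization (scipy's "lm" method),
# mirroring the learning step described in the abstract.
rng = np.random.default_rng(0)
sol = least_squares(residuals, rng.standard_normal(2 * n), method="lm")
Y = sol.x.reshape(n, 2)

stress = np.abs(residuals(sol.x)).max()  # remaining distance mismatch
print(Y.shape)  # (6, 2)
```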

2020 ◽  
Author(s):  
Alberto García-González ◽  
Antonio Huerta ◽  
Sergio Zlotnik ◽  
Pedro Díez

Methodologies for dimensionality reduction aim at discovering the low-dimensional manifolds where data range. Principal Component Analysis (PCA) is very effective if the data have a linear structure, but it fails to identify a possible dimensionality reduction when the data belong to a nonlinear low-dimensional manifold. For nonlinear dimensionality reduction, kernel Principal Component Analysis (kPCA) is appreciated for its simplicity and ease of implementation. The paper provides a concise review of the main ideas of PCA and kPCA, collecting in a single document aspects that are often dispersed. Moreover, a strategy to map the reduced dimensions back into the original high-dimensional space is also devised, based on the minimization of a discrepancy functional.
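A minimal kPCA sketch with a back-mapping step, using scikit-learn's `KernelPCA`: the `fit_inverse_transform` option learns an approximate pre-image map back to the original space. The RBF kernel, the `gamma` value, and the concentric-circles toy data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Two concentric circles: linearly inseparable, so plain PCA cannot
# uncover the structure, but an RBF kernel can.
X, _ = make_circles(n_samples=200, factor=0.3, noise=0.02, random_state=0)

# fit_inverse_transform=True additionally learns an approximate
# pre-image map, i.e. a way back from the reduced space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0,
                 fit_inverse_transform=True)
Z = kpca.fit_transform(X)           # forward map to the reduced space
X_back = kpca.inverse_transform(Z)  # approximate pre-images

print(Z.shape)       # (200, 2)
print(X_back.shape)  # (200, 2)
```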


2003 ◽  
Vol 15 (6) ◽  
pp. 1373-1396 ◽  
Author(s):  
Mikhail Belkin ◽  
Partha Niyogi

One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.
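Laplacian Eigenmaps is available in scikit-learn as `SpectralEmbedding`, which builds a neighborhood graph, forms the graph Laplacian, and uses its smallest nontrivial eigenvectors as coordinates. A minimal sketch; the swiss-roll data and neighborhood size are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding

# Sample points from a 2-D manifold (swiss roll) embedded in 3-D.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# SpectralEmbedding implements Laplacian Eigenmaps: a locality-
# preserving spectral decomposition of the neighborhood graph.
emb = SpectralEmbedding(n_components=2, n_neighbors=10, random_state=0)
Z = emb.fit_transform(X)

print(Z.shape)  # (500, 2)
```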


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4454 ◽  
Author(s):  
Marek Piorecky ◽  
Vlastimil Koudelka ◽  
Jan Strobl ◽  
Martin Brunovsky ◽  
Vladimir Krajca

Simultaneous recordings of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) are at the forefront of technologies of interest to physicians and scientists because they combine the benefits of both modalities: better time resolution (hdEEG) and better spatial resolution (fMRI). However, EEG measured in the scanner is contaminated by electromagnetic fields induced in the leads as a result of gradient switching, slight head movements, and vibrations, and it is further corrupted by changes in the measured potential due to the Hall effect. The aim of this study is to design and test a methodology for inspecting hidden EEG structures with respect to artifacts. We propose a top-down strategy to obtain additional information that is not visible in a single recording. The time-domain independent component analysis algorithm was employed to obtain independent components and spatial weights. A nonlinear dimensionality reduction technique, t-distributed stochastic neighbor embedding (t-SNE), was used to create a low-dimensional space, which was then partitioned using density-based spatial clustering of applications with noise (DBSCAN). The relationships between the found data structure and the criteria used were investigated. As a result, we were able to extract information from the data structure regarding electrooculographic, electrocardiographic, electromyographic, and gradient artifacts. This new methodology could facilitate the identification of artifacts and their residues in simultaneous EEG-fMRI recordings.
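The reduction-then-clustering stage of the pipeline can be sketched as t-SNE followed by DBSCAN. Digit images stand in here for the ICA component features, and the `perplexity` and `eps` values are illustrative assumptions, not the study's tuning.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

# Stand-in data: digit images instead of EEG component features.
X = load_digits().data[:500]

# Nonlinear reduction to a 2-D space with t-SNE ...
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

# ... then density-based partitioning of the low-dimensional space;
# label -1 marks points DBSCAN treats as noise.
labels = DBSCAN(eps=4.0, min_samples=5).fit_predict(Z)

print(Z.shape)  # (500, 2)
print(len(set(labels)))  # number of partitions found (incl. noise label)
```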


Author(s):  
Akira Imakura ◽  
Momo Matsuda ◽  
Xiucai Ye ◽  
Tetsuya Sakurai

Dimensionality reduction methods that project high-dimensional data to a low-dimensional space by matrix trace optimization are widely used for clustering and classification. The matrix trace optimization problem leads to an eigenvalue problem for constructing a low-dimensional subspace that preserves certain properties of the original data. However, most existing methods use only a few eigenvectors to construct the low-dimensional space, which may lead to a loss of information useful for successful classification. To overcome this information loss, we propose a novel complex moment-based supervised eigenmap that includes multiple eigenvectors for dimensionality reduction. Furthermore, the proposed method provides a general formulation for matrix trace optimization methods to incorporate ridge regression, which models the linear dependency between covariate variables and univariate labels. To reduce the computational complexity, we also propose an efficient parallel implementation of the proposed method. Numerical experiments indicate that the proposed method is competitive with existing dimensionality reduction methods in recognition performance, and it exhibits high parallel efficiency.
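The trace optimization referred to here reduces to a generalized symmetric eigenvalue problem: maximize tr(VᵀAV) subject to VᵀBV = I, solved by the top eigenvectors of Av = λBv. A small sketch with random symmetric matrices standing in for scatter matrices; the matrices and the choice k = 3 are illustrative, not the paper's complex moment-based construction.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Illustrative matrices: A, B symmetric, B positive definite
# (e.g. between- and within-class scatter in discriminant analysis).
M = rng.standard_normal((20, 20))
A = M @ M.T
N = rng.standard_normal((20, 20))
B = N @ N.T + 20 * np.eye(20)

# The maximizer of tr(V^T A V) s.t. V^T B V = I stacks the
# eigenvectors of A v = lambda B v with the largest eigenvalues;
# most methods keep only a few of them.
w, V = eigh(A, B)        # ascending eigenvalues, B-orthonormal vectors
k = 3
V_k = V[:, -k:]          # top-k eigenvectors span the reduced space

# B-orthonormality check: V_k^T B V_k should be the identity.
print(np.allclose(V_k.T @ B @ V_k, np.eye(k)))  # True
```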


2021 ◽  
Vol 12 ◽  
Author(s):  
Jianping Zhao ◽  
Na Wang ◽  
Haiyun Wang ◽  
Chunhou Zheng ◽  
Yansen Su

Dimensionality reduction of high-dimensional data is crucial for single-cell RNA sequencing (scRNA-seq) visualization and clustering. One prominent challenge in scRNA-seq studies comes from dropout events, which lead to zero-inflated data. To address this issue, in this paper we propose a scRNA-seq data dimensionality reduction algorithm based on a hierarchical autoencoder, termed SCDRHA. The proposed SCDRHA consists of two core modules: the first module is a deep count autoencoder (DCA) used to denoise the data, and the second module is a graph autoencoder that projects the data into a low-dimensional space. Experimental results demonstrate that SCDRHA outperforms existing state-of-the-art algorithms at dimensionality reduction and noise reduction on five real scRNA-seq datasets. Moreover, SCDRHA can also dramatically improve the performance of data visualization and cell clustering.


Author(s):  
Stephanie Hare ◽  
Lars Bratholm ◽  
David Glowacki ◽  
Barry Carpenter

Low-dimensional representations along reaction pathways were produced using newly created Python software that uses Principal Component Analysis (PCA) to perform dimensionality reduction. Plots of these pathways in the reduced-dimensional space, as well as the physical meaning of the reduced-dimensional axes, are discussed.
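A minimal sketch of PCA applied to pathway points; the synthetic "pathway" below (10 frames of 9 flattened coordinates, close to a 2-D plane) is a hypothetical stand-in for real reaction geometries.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical "reaction pathway": a sequence of geometries, each a
# flattened coordinate vector (10 frames in 9-D, i.e. 3 atoms x 3
# coordinates), lying close to a 2-D plane plus small noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 10)
basis = rng.standard_normal((2, 9))
path = np.outer(t, basis[0]) + np.outer(t**2, basis[1])
path += 0.01 * rng.standard_normal(path.shape)

pca = PCA(n_components=2)
reduced = pca.fit_transform(path)  # pathway coordinates for a 2-D plot

# The first two components capture nearly all pathway variance here.
print(reduced.shape)                               # (10, 2)
print(pca.explained_variance_ratio_.sum() > 0.99)  # True
```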


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yongbin Liu ◽  
Jingjie Wang ◽  
Wei Bai

Dimensionality reduction of images with high-dimensional nonlinear structure is key to improving the recognition rate. Although traditional algorithms have achieved some results at dimensionality reduction, they also expose their respective defects. To achieve an ideal result in high-dimensional nonlinear image recognition, and building on an analysis of traditional dimensionality reduction algorithms and a refinement of their advantages, an image recognition technology based on nonlinear dimensionality reduction is proposed. As an effective nonlinear feature extraction method, nonlinear dimensionality reduction can find the nonlinear structure of a dataset while maintaining the intrinsic structure of the data. Applying nonlinear dimensionality reduction to image recognition means dividing the input image into blocks, treating the blocks as a dataset in a high-dimensional space, reducing its dimensionality, and obtaining a low-dimensional expression vector of its eigenstructure, so that image recognition can be carried out in a lower dimension. This reduces computational complexity, improves recognition accuracy, and makes further processing such as image recognition and search more convenient. The defects of traditional algorithms are thereby addressed, and commodity price recognition and simulation experiments verify the feasibility of image recognition based on nonlinear dimensionality reduction for commodity price recognition.
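The block-then-reduce idea can be sketched with Isomap as one representative nonlinear reduction method; digit images stand in for image blocks, and the neighbor count and target dimension are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

# Treat each 8x8 digit image as a point in 64-D space; splitting a
# larger image into fixed-size blocks would produce the same kind of
# high-dimensional dataset.
X = load_digits().data[:300]

# Isomap preserves geodesic (along-manifold) rather than straight-line
# distances, yielding a low-dimensional expression vector per block on
# which recognition can operate more cheaply.
Z = Isomap(n_neighbors=10, n_components=5).fit_transform(X)

print(Z.shape)  # (300, 5)
```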


Author(s):  
Michael Elmegaard ◽  
Jan Rübel ◽  
Mizuho Inagaki ◽  
Atsushi Kawamoto ◽  
Jens Starke

Mechanical systems are typically described by finite element models, resulting in high-dimensional dynamical systems. The high dimensionality precludes the application of certain investigation methods, such as numerical continuation and bifurcation analysis, for studying the dynamical behaviour and its parameter dependence. Nevertheless, the dynamical behaviour usually lives on a low-dimensional manifold, but typically no closed equations are available for the macroscopic quantities of interest. Therefore, an equation-free approach is suggested here to analyse and investigate the vibration behaviour of nonlinear rotating machinery. This then allows, in a next step, the rotor design specifications to be optimized to reduce unbalance vibrations of a rotor-bearing system with nonlinear effects such as the oil film dynamics. As an example, we provide a simple model of a passenger car turbocharger and investigate how the maximal vibration amplitude of the rotor depends on the viscosity of the oil used in the bearings.


Author(s):  
MIAO CHENG ◽  
BIN FANG ◽  
YUAN YAN TANG ◽  
HENGXIN CHEN

Many problems in pattern classification and feature extraction involve dimensionality reduction as a necessary processing step. Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmaps, seek the low-dimensional manifold in an unsupervised way, while local discriminant analysis methods identify the underlying supervised submanifold structures. In addition, it is well known that the intraclass null subspace contains the most discriminative information when the original data exist in a high-dimensional space. In this paper, we seek the local null space in accordance with the null space LDA (NLDA) approach and show that its computational expense depends mainly on the number of connected edges in the graphs, which may still be unacceptable when a great many samples are involved. To address this limitation, an improved local null space algorithm is proposed that employs the penalty subspace to approximate the local discriminant subspace. Compared with the traditional approach, the proposed method is more efficient and avoids the overload problem, at the cost of a theoretically slight loss of discriminant power. A comparative classification study shows that the performance of the approximative algorithm is quite close to that of the genuine one.
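The intraclass null space that NLDA exploits can be computed from the eigendecomposition of the within-class scatter matrix: it is spanned by the eigenvectors with zero eigenvalue, and projecting onto it removes all within-class scatter. A toy sketch; the data sizes and the zero-eigenvalue threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 classes, 5 samples each, in 20-D (more dimensions than
# samples, so the within-class scatter is rank deficient).
X = rng.standard_normal((15, 20))
y = np.repeat([0, 1, 2], 5)

# Within-class scatter S_w built from class-centered samples.
Xc = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in np.unique(y)])
Sw = Xc.T @ Xc

# The intraclass null space is spanned by eigenvectors of S_w with
# (numerically) zero eigenvalue.
w, V = np.linalg.eigh(Sw)
null_basis = V[:, w < 1e-8]

# 15 samples minus 3 class means give rank 12, so the null space has
# dimension 20 - 12 = 8, and Xc maps it to (numerically) zero.
print(null_basis.shape[1])                             # 8
print(np.allclose(Xc @ null_basis, 0.0, atol=1e-6))    # True
```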


2014 ◽  
Vol 1014 ◽  
pp. 375-378 ◽  
Author(s):  
Ri Sheng Huang

To effectively improve performance on speech emotion recognition, nonlinear dimensionality reduction is needed for speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. This paper proposes an improved SLLE algorithm that enhances the discriminating power of the low-dimensional embedded data and possesses optimal generalization ability. The proposed algorithm is used to perform nonlinear dimensionality reduction on 48-dimensional speech emotional feature data, including prosody, so as to recognize three emotions: anger, joy, and neutral. Experimental results on a natural speech emotional database demonstrate that the proposed algorithm obtains the highest accuracy of 90.97% with only 9 embedded features, an 11.64% improvement over the SLLE algorithm.
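Plain unsupervised LLE, the basis of SLLE, is available in scikit-learn. A minimal sketch on a synthetic manifold; the S-curve stands in for speech feature vectors, and SLLE's label-based supervision is not included.

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import LocallyLinearEmbedding

# S-curve: a 2-D manifold embedded in 3-D, standing in for speech
# features on a nonlinear manifold in acoustic space.
X, _ = make_s_curve(n_samples=400, random_state=0)

# Plain (unsupervised) LLE reconstructs each point from its neighbors
# and preserves those local weights in the low-dimensional embedding.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                             random_state=0)
Z = lle.fit_transform(X)

print(Z.shape)  # (400, 2)
```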

