Applying Ricci flow to high dimensional manifold learning

2019 ◽  
Vol 62 (9) ◽  
Author(s):  
Yangyang Li ◽  
Ruqian Lu


2012 ◽  
Vol 263-266 ◽  
pp. 2126-2130 ◽  
Author(s):  
Zhi Gang Lou ◽  
Hong Zhao Liu

Manifold learning is a relatively new unsupervised learning method whose main purpose is to discover the inherent laws governing generated data sets. Applied to high-dimensional nonlinear fault samples, it can identify the low-dimensional manifold embedded in the high-dimensional data space and thereby uncover the essential characteristics of the data for fault identification. Among the many fault types, some faults produce signals similar to those of normal equipment operation and are therefore easily misjudged. In oil pipeline transportation, for example, the normal operation of pipeline regulating pumps, adjustable valves and pump switching produces spectral characteristics similar to those of a pipeline leakage fault, so leakage is easily misidentified. This paper applies a manifold learning algorithm to fault pattern clustering and recognition, and evaluates the algorithm through experiments.
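The abstract does not name which manifold learning algorithm the paper uses, so the following is only a minimal sketch of the pipeline it describes: embed high-dimensional fault-signal features into a low-dimensional manifold and then cluster the embedded points into fault patterns. The synthetic feature matrix, the choice of locally linear embedding, and the number of clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

# Illustrative stand-in for spectral features of pipeline operating conditions:
# rows = samples (normal operation, pump/valve adjustment, leakage), columns = features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))

# Step 1: recover the low-dimensional manifold embedded in the high-dimensional fault data.
embedder = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
Z = embedder.fit_transform(X)

# Step 2: cluster the embedded points into candidate fault patterns
# (the number of patterns here is an assumption, not taken from the paper).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(labels[:20])
```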


Author(s):  
Jin-Hang Liu ◽  
Tao Peng ◽  
Xiaogang Zhao ◽  
Kunfang Song ◽  
Minghua Jiang ◽  
...  

Data in a high-dimensional data space may reside in a low-dimensional manifold embedded within the high-dimensional space. Manifold learning discovers intrinsic manifold data structures to facilitate dimensionality reduction. We propose a novel manifold learning technique called fast k selection for locally linear embedding, or FSLLE, which judiciously chooses an appropriate number (i.e., parameter k) of neighboring points such that the local geometric properties are maintained by the locally linear embedding (LLE) criterion. To measure the spatial distribution of a group of neighboring points, FSLLE relies on relative variance and mean difference to form a spatial correlation index characterizing the neighbors’ data distribution. The goal of FSLLE is to quickly identify the optimal value of parameter k, namely the value that minimizes the spatial correlation index. FSLLE optimizes parameter k by using the spatial correlation index to discover the intrinsic structure of a data point’s neighbors. After implementing FSLLE, we conduct extensive experiments to validate its correctness and evaluate its performance. Our experimental results show that FSLLE outperforms the existing solutions (i.e., LLE and ISOMAP) in manifold learning and dimensionality reduction. We apply FSLLE to face recognition, where it achieves higher accuracy than state-of-the-art face recognition algorithms because it makes a good tradeoff between classification precision and performance.
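The abstract defines the spatial correlation index only loosely (in terms of relative variance and mean difference over a neighborhood), so the index used below is a hypothetical stand-in. The overall loop, however, mirrors the described idea: scan candidate values of k, keep the one that minimizes the index, then run LLE with that k.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.manifold import LocallyLinearEmbedding

def spatial_correlation_index(X, k):
    """Hypothetical index: relative variance of neighbor distances
    (variance divided by squared mean), averaged over all points."""
    dists, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    d = dists[:, 1:]                                  # drop distance to self
    rel_var = d.var(axis=1) / (d.mean(axis=1) ** 2 + 1e-12)
    return rel_var.mean()

def fslle_like_embedding(X, k_candidates=range(5, 31), n_components=2):
    # Scan candidate neighborhood sizes and keep the k with the smallest index.
    scores = {k: spatial_correlation_index(X, k) for k in k_candidates}
    best_k = min(scores, key=scores.get)
    Z = LocallyLinearEmbedding(n_neighbors=best_k,
                               n_components=n_components).fit_transform(X)
    return best_k, Z

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))                        # synthetic high-dimensional data
best_k, Z = fslle_like_embedding(X)
print("selected k:", best_k, "embedding shape:", Z.shape)
```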


Author(s):  
Muhammad Amjad

Advances in manifold learning have proven to be of great benefit in reducing the dimensionality of large, complex datasets. Elements of an intricate dataset typically lie in a high-dimensional space, since the number of individual features or independent variables is extensive. However, these elements can be integrated into a low-dimensional manifold with well-defined parameters. By constructing a low-dimensional manifold embedded in the high-dimensional feature space, the dataset can be simplified for easier interpretation. Despite this dimensionality reduction, the dataset’s constituents do not lose information; rather, the information is filtered in the hope of elucidating the appropriate knowledge. This paper explores the importance of this method of data analysis, its applications, and its extensions into topological data analysis.


Author(s):  
Ziquan Liu ◽  
Lei Yu ◽  
Janet H. Hsiao ◽  
Antoni B. Chan

The Gaussian Mixture Model (GMM) is among the most widely used parametric probability distributions for representing data. However, it is complicated to analyze the relationship among GMMs since they lie on a high-dimensional manifold. Previous works either perform clustering of GMMs, which learns a limited discrete latent representation, or kernel-based embedding of GMMs, which is not interpretable due to the difficulty of computing the inverse mapping. In this paper, we propose Parametric Manifold Learning of GMMs (PML-GMM), which learns a parametric mapping from a low-dimensional latent space to a high-dimensional GMM manifold. Similar to PCA, the proposed mapping is parameterized by the principal axes for the component weights, means, and covariances, which are optimized to minimize the reconstruction loss measured using Kullback-Leibler divergence (KLD). As the KLD between two GMMs is intractable, we approximate the objective function by a variational upper bound, which is optimized by an EM-style algorithm. Moreover, we derive an efficient solver by alternating optimization of subproblems and exploit Monte Carlo sampling to escape from local minima. We demonstrate the effectiveness of PML-GMM through experiments on synthetic, eye-fixation, flow cytometry, and social check-in data.
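The variational KLD objective and EM-style solver of PML-GMM go beyond an abstract-level sketch, but the core idea of representing each GMM by principal axes over its component weights, means, and covariances can be illustrated with a simplified Euclidean analogue: flatten each fitted GMM into a parameter vector and run ordinary PCA over those vectors. Everything below (the synthetic datasets, the Cholesky parameterization, and plain PCA in place of the KLD-based optimization) is an assumption made for illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

def gmm_to_vector(gmm):
    """Flatten one GMM into a parameter vector: weights, means, and
    Cholesky factors of the full covariance matrices."""
    chols = np.linalg.cholesky(gmm.covariances_)          # shape (n_comp, d, d)
    return np.concatenate([gmm.weights_.ravel(),
                           gmm.means_.ravel(),
                           chols.ravel()])

rng = np.random.default_rng(0)
n_components, d = 3, 2
vectors = []
for _ in range(50):                                       # 50 synthetic datasets, one GMM each
    data = rng.normal(size=(200, d)) + rng.normal(scale=3.0, size=(1, d))
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(data)
    vectors.append(gmm_to_vector(gmm))

# Low-dimensional latent coordinates of each GMM along PCA principal axes.
# (Component ordering across fits is ignored here for brevity; the paper's
# KLD-based objective does not suffer from this simplification.)
latent = PCA(n_components=2).fit_transform(np.array(vectors))
print(latent.shape)   # (50, 2)
```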


2020 ◽  
Vol 12 (4) ◽  
pp. 655
Author(s):  
Chu He ◽  
Mingxia Tu ◽  
Dehui Xiong ◽  
Mingsheng Liao

Synthetic Aperture Radar (SAR) provides rich ground information for remote sensing surveys and can be used at any time and in all weather conditions. Polarimetric SAR (PolSAR) can further reveal differences in surface scattering and improve the radar’s applicability. Most existing classification methods for PolSAR imagery are based on manual features; such fixed-pattern methods have poor data adaptability and low feature utilization when fed directly to a classifier. Therefore, combining the characteristics of PolSAR data with deep networks capable of automatic feature learning forms a new breakthrough direction. In essence, the feature learning of a deep network approximates a function from data to labels through multi-layer accumulation, but a finite number of layers limits the network’s mapping ability. According to the manifold hypothesis, high-dimensional data lies on an underlying low-dimensional manifold, and different types of data lie on different manifolds. Manifold learning can model the core variables of the target and separate the manifolds of different data as much as possible, so as to complete data classification better. Therefore, taking the manifold hypothesis as a starting point, this paper proposes a PolSAR image classification method that integrates nonlinear manifold learning with fully convolutional networks. Firstly, high-dimensional polarimetric features are extracted from the scattering matrix and coherence matrix of the original PolSAR data, and a compact representation of these features is mined by manifold learning. Meanwhile, drawing on transfer learning, a pre-trained Fully Convolutional Network (FCN) model is used to learn deep spatial features of the PolSAR imagery. Considering their complementary advantages, a weighted strategy is adopted to embed the manifold representation into the deep spatial features, which are then input into a support vector machine (SVM) classifier for final classification. A series of experiments on three PolSAR datasets verifies the effectiveness and superiority of the proposed classification algorithm.
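A minimal sketch of the fusion step described above, under several assumptions: the polarimetric features, deep FCN features, class labels, weighting factor, and the use of Isomap for the manifold representation are all placeholders, since the abstract does not specify them.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative stand-ins for per-pixel polarimetric features and the deep
# spatial features a pre-trained FCN would produce.
rng = np.random.default_rng(0)
n_pixels = 1000
polarimetric = rng.normal(size=(n_pixels, 30))    # features from scattering/coherence matrices
deep_spatial = rng.normal(size=(n_pixels, 64))    # FCN feature maps, flattened per pixel
labels = rng.integers(0, 5, size=n_pixels)        # 5 synthetic land-cover classes

# Compact manifold representation of the high-dimensional polarimetric features.
manifold_repr = Isomap(n_neighbors=10, n_components=8).fit_transform(polarimetric)

# Weighted embedding of the manifold representation into the deep spatial features.
alpha = 0.5                                       # assumed weighting factor
fused = np.hstack([alpha * StandardScaler().fit_transform(manifold_repr),
                   (1 - alpha) * StandardScaler().fit_transform(deep_spatial)])

clf = SVC(kernel="rbf").fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```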


Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 434 ◽  
Author(s):  
Huilin Ge ◽  
Zhiyu Zhu ◽  
Kang Lou ◽  
Wei Wei ◽  
Runbang Liu ◽  
...  

Infrared image recognition technology can work day and night and has a long detection distance. However, infrared objects carry little prior information and are easily disturbed by external factors in real-world environments, which makes infrared object classification a very challenging research area. Manifold learning can be used to improve the classification accuracy of infrared images in the manifold space. In this article, we propose a novel manifold learning algorithm for infrared object detection and classification. First, a manifold space is constructed with each pixel of the infrared object image as a dimension, so that infrared images are represented as data points in this space. Next, we model the probability distribution of the infrared data points with a Gaussian distribution in the manifold space. Then, based on this Gaussian distribution information, the distribution characteristics of the infrared image data points in the low-dimensional space are derived. The proposed algorithm uses the Kullback-Leibler (KL) divergence to minimize the loss function between the two symmetrical distributions and finally completes the classification in the low-dimensional manifold space. The efficiency of the algorithm is validated on two public infrared image data sets. The experiments show that the proposed method achieves a 97.46% classification accuracy and competitive speed on the analyzed data sets.
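The description (Gaussian affinities in the pixel-space manifold, Gaussian affinities in the low-dimensional space, and a KL divergence between the two symmetric distributions) resembles a symmetric-SNE-style embedding. The sketch below implements that general idea in NumPy; the bandwidths, learning rate, and toy data are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def pairwise_sq_dists(X):
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def gaussian_affinities(X, sigma=1.0):
    """Symmetric Gaussian similarity matrix normalized into a probability distribution."""
    W = np.exp(-pairwise_sq_dists(X) / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W / W.sum()

def sne_embed(X, n_components=2, lr=50.0, n_iter=300, seed=0):
    """Minimal symmetric-SNE-style embedding: gradient descent on KL(P || Q),
    with Gaussian affinities in both the high- and low-dimensional spaces."""
    rng = np.random.default_rng(seed)
    d2 = pairwise_sq_dists(X)
    sigma_hi = np.sqrt(np.median(d2[d2 > 0]))     # data-driven bandwidth for the input space
    P = gaussian_affinities(X, sigma=sigma_hi)
    Y = 1e-2 * rng.normal(size=(X.shape[0], n_components))
    for _ in range(n_iter):
        Q = gaussian_affinities(Y)
        diff = P - Q
        # With unit bandwidth in the embedding, the KL gradient is
        # 2 * sum_j (p_ij - q_ij) * (y_i - y_j), vectorized below.
        grad = 2.0 * ((np.diag(diff.sum(axis=1)) - diff) @ Y)
        Y -= lr * grad
    return Y

# Toy stand-in for flattened infrared image patches (one pixel per dimension).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 256)),
               rng.normal(3.0, 1.0, size=(50, 256))])
Y = sne_embed(X)
print(Y.shape)   # (100, 2) low-dimensional points, ready for a simple classifier
```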


Algorithms ◽  
2019 ◽  
Vol 12 (9) ◽  
pp. 186
Author(s):  
Fayeem Aziz ◽  
Aaron S.W. Wong ◽  
Stephan Chalup

The aim of manifold learning is to extract low-dimensional manifolds from high-dimensional data. Manifold alignment is a variant of manifold learning that uses two or more datasets that are assumed to represent different high-dimensional representations of the same underlying manifold. Manifold alignment can be successful in detecting latent manifolds in cases where one version of the data alone is not sufficient to extract and establish a stable low-dimensional representation. The present study proposes a parallel deep autoencoder neural network architecture for manifold alignment and conducts a series of experiments using a protein-folding benchmark dataset and a suite of new datasets generated by simulating double-pendulum dynamics with underlying manifolds of dimensions 2, 3 and 4. The dimensionality and topological complexity of these latent manifolds are above those occurring in most previous studies. Our experimental results demonstrate that the parallel deep autoencoder performs better in most cases than the tested traditional methods of semi-supervised manifold alignment. We also show that the parallel deep autoencoder can process datasets of different input domains by aligning the manifolds extracted from kinematics parameters with those obtained from corresponding image data.
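The abstract does not give the layer sizes or the exact alignment loss, so the following PyTorch sketch only illustrates the general idea of a parallel deep autoencoder for manifold alignment: one autoencoder per input domain, trained with per-domain reconstruction losses plus a term that pulls the latent codes of corresponding samples together. All dimensions and loss weights are assumptions.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """One branch of the parallel autoencoder: encoder + decoder for a single domain."""
    def __init__(self, in_dim, latent_dim=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

# Two domains with paired samples, e.g. kinematics parameters and image features.
torch.manual_seed(0)
x_a = torch.randn(256, 4)       # domain A: kinematics parameters (synthetic)
x_b = torch.randn(256, 100)     # domain B: flattened image features (synthetic)

ae_a, ae_b = AE(4), AE(100)
opt = torch.optim.Adam(list(ae_a.parameters()) + list(ae_b.parameters()), lr=1e-3)
mse = nn.MSELoss()

for step in range(200):
    z_a, rec_a = ae_a(x_a)
    z_b, rec_b = ae_b(x_b)
    # Reconstruction in each domain plus an alignment term that pulls the
    # latent codes of corresponding samples onto a shared low-dimensional manifold.
    loss = mse(rec_a, x_a) + mse(rec_b, x_b) + mse(z_a, z_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", loss.item())
```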


2013 ◽  
Vol 312 ◽  
pp. 650-654 ◽  
Author(s):  
Yi Lin He ◽  
Guang Bin Wang ◽  
Fu Ze Xu

Characteristic signals in rotating machinery fault diagnosis are complex and difficult to process, while non-linear manifold learning methods can effectively extract the low-dimensional manifold features embedded in high-dimensional non-linear data. This largely preserves the overall geometric structure of the signals and improves the efficiency and reliability of rotating machinery fault diagnosis. In view of the development prospects of manifold learning, this paper describes four classical manifold learning methods and their respective advantages and disadvantages. It reviews the research status and applications of fault diagnosis based on manifold learning, as well as future research directions in the field of manifold-learning-based fault diagnosis.
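The review does not list which four classical methods it covers; Isomap, locally linear embedding, local tangent space alignment, and Laplacian eigenmaps are common choices and are used here purely as an illustration, applied to a synthetic stand-in for high-dimensional vibration-signal features.

```python
import numpy as np
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

# Synthetic stand-in for high-dimensional vibration-signal features of a rotating machine.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))

# Four commonly cited classical manifold learning methods (an assumed selection),
# each mapping the signals to a 2-D manifold.
methods = {
    "Isomap": Isomap(n_neighbors=10, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2),
    "LTSA": LocallyLinearEmbedding(n_neighbors=10, n_components=2, method="ltsa"),
    "Laplacian Eigenmaps": SpectralEmbedding(n_neighbors=10, n_components=2),
}
for name, model in methods.items():
    Z = model.fit_transform(X)
    print(f"{name}: embedded shape {Z.shape}")
```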

