Nonlinear dimensionality reduction for the acoustic field measured by a linear sensor array

2019 · Vol 283 · pp. 07009
Author(s): Xinyao Zhang, Pengyu Wang, Ning Wang

Dimensionality reduction is one of the central problems in machine learning and pattern recognition, aiming to develop a compact representation for complex data from high-dimensional observations. Here, we apply a nonlinear manifold learning algorithm, local tangent space alignment (LTSA), to high-dimensional acoustic observations and achieve nonlinear dimensionality reduction for the acoustic field measured by a linear sensor array. Through dimensionality reduction, the underlying physical degrees of freedom of the acoustic field, such as variations in sound source location and sound speed profile, can be discovered. Two simulations are presented to verify the validity of the approach.
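As a hedged illustration of this kind of pipeline, the sketch below applies scikit-learn's LTSA implementation to synthetic observations governed by a single hidden parameter; the data construction and all parameter values are placeholders, not the authors' simulation setup.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    # Toy stand-in for array snapshots: each row is one multi-harmonic
    # observation driven by a single hidden degree of freedom t (e.g., a
    # source parameter); real inputs would be fields sampled by the array.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 1.0, size=500))
    X = np.column_stack([np.sin(2 * np.pi * f * t) for f in range(1, 9)])
    X += 0.01 * rng.standard_normal(X.shape)

    # LTSA: estimate a local tangent space around each sample and align the
    # local coordinates into a single global low-dimensional embedding.
    ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=1, method="ltsa")
    z = ltsa.fit_transform(X)  # should recover t up to sign and scale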

2003 · Vol 15 (6) · pp. 1373-1396
Author(s): Mikhail Belkin, Partha Niyogi

One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.
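scikit-learn's SpectralEmbedding implements this Laplacian-eigenmaps construction; below is a minimal sketch on a standard synthetic manifold, where the neighborhood size and dataset are illustrative choices only.

    import numpy as np
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import SpectralEmbedding

    # Data lying on a 2-D manifold embedded in 3-D space.
    X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

    # Build a nearest-neighbor graph, form its graph Laplacian, and embed
    # each point using the eigenvectors belonging to the smallest nonzero
    # eigenvalues (the discrete analogue of Laplace-Beltrami eigenfunctions).
    emb = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0)
    Y = emb.fit_transform(X)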


2005 · Vol 4 (1) · pp. 22-31
Author(s): Timo Similä

One of the main tasks in exploratory data analysis is to create an appropriate representation for complex data. In this paper, the problem of creating a representation for observations lying on a low-dimensional manifold embedded in high-dimensional coordinates is considered. We propose a modification of the self-organizing map (SOM) algorithm that is able to learn the manifold structure in the high-dimensional observation coordinates. Any manifold learning algorithm may be incorporated into the proposed training strategy to guide the map onto the manifold surface instead of letting it become trapped in local minima. In this paper, the locally linear embedding (LLE) algorithm is adopted. We apply the proposed method successfully to several data sets with manifold geometry, including an illustrative example of a surface as well as image data. We also show with further experiments that the advantage of the method over the basic SOM is restricted to this specific type of data.
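The paper's exact training strategy is not reproduced here; as a loose sketch of the general idea (letting a manifold learner guide the map onto the manifold), the toy code below seeds a SOM codebook from data points laid out on a grid in LLE coordinates and then refines it with a plain batch-SOM update. The grid construction and update schedule are simplifications, not the authors' method.

    import numpy as np
    from sklearn.datasets import make_s_curve
    from sklearn.manifold import LocallyLinearEmbedding

    X, _ = make_s_curve(n_samples=2000, random_state=0)
    Z = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)

    # Lay a 10x10 grid over the LLE coordinates and seed each SOM unit with
    # the data point nearest to its grid node, so the map starts on the manifold.
    gx, gy = np.meshgrid(np.linspace(Z[:, 0].min(), Z[:, 0].max(), 10),
                         np.linspace(Z[:, 1].min(), Z[:, 1].max(), 10))
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    seeds = np.argmin(((nodes[:, None, :] - Z[None, :, :]) ** 2).sum(-1), axis=1)
    codebook = X[seeds].astype(float)

    # Integer grid indices define the map topology for the neighborhood function.
    ix, iy = np.meshgrid(np.arange(10), np.arange(10))
    topo = np.column_stack([ix.ravel(), iy.ravel()]).astype(float)
    d2 = ((topo[:, None] - topo[None, :]) ** 2).sum(-1)

    # A few rounds of batch-SOM refinement in the input space (simplified).
    for sigma in (3.0, 2.0, 1.0):
        win = np.argmin(((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))      # neighborhood kernel on the grid
        codebook = (h[:, win] @ X) / h[:, win].sum(axis=1, keepdims=True)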


2020
Author(s): Jacob M. Graving, Iain D. Couzin

Scientific datasets are growing rapidly in scale and complexity. Consequently, the task of understanding these data to answer scientific questions increasingly requires compression algorithms that reduce dimensionality by combining correlated features and that cluster similar observations to summarize large datasets. Here we introduce a method for both dimension reduction and clustering called VAE-SNE (variational autoencoder stochastic neighbor embedding). Our model combines elements from deep learning, probabilistic inference, and manifold learning to produce interpretable compressed representations while readily scaling to tens of millions of observations. Unlike existing methods, VAE-SNE simultaneously compresses high-dimensional data and automatically learns a distribution of clusters within the data, without the need to manually select the number of clusters. This naturally creates a multi-scale representation, which makes it straightforward to generate coarse-grained descriptions for large subsets of related observations and to select specific regions of interest for further analysis. VAE-SNE can also quickly and easily embed new samples and detect outliers, and it can be optimized with small batches of data, which makes it possible to compress datasets that are otherwise too large to fit into memory. We evaluate VAE-SNE as a general-purpose method for dimensionality reduction by applying it to multiple real-world datasets and comparing its performance with existing methods. We find that VAE-SNE produces high-quality compressed representations on par with existing nonlinear dimensionality reduction algorithms. As a practical example, we demonstrate how the cluster distribution learned by VAE-SNE can be used for unsupervised action recognition to detect and classify repeated motifs of stereotyped behavior in high-dimensional time-series data. Finally, we introduce variants of VAE-SNE for embedding data in polar (spherical) coordinates and for embedding image data from raw pixels. VAE-SNE is a robust, feature-rich, and scalable method with broad applicability to a range of datasets in the life sciences and beyond.
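The full VAE-SNE model (Gaussian-mixture prior, perplexity annealing, the spherical and image variants) goes well beyond a snippet; the PyTorch sketch below only illustrates the core combination the abstract describes, a VAE objective plus a per-minibatch stochastic-neighbor term on the latent space. Layer sizes, the fixed kernel bandwidth, and the loss weighting are placeholder choices, not the published architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyVAESNE(nn.Module):
        """Minimal VAE whose latent space is additionally shaped by an SNE term."""
        def __init__(self, d_in, d_z=2, d_h=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(d_in, d_h), nn.ReLU())
            self.mu = nn.Linear(d_h, d_z)
            self.logvar = nn.Linear(d_h, d_z)
            self.dec = nn.Sequential(nn.Linear(d_z, d_h), nn.ReLU(),
                                     nn.Linear(d_h, d_in))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
            return self.dec(z), mu, logvar, z

    def sne_term(x, z, bandwidth=1.0):
        # KL(P || Q) per minibatch: Gaussian neighbor probabilities in data
        # space vs. Student-t similarities in the latent space (t-SNE style).
        n = x.size(0)
        eye = torch.eye(n, device=x.device)
        p = F.softmax(-torch.cdist(x, x) ** 2 / (2 * bandwidth ** 2)
                      - 1e9 * eye, dim=1)                   # exclude self-pairs
        q = (1 + torch.cdist(z, z) ** 2) ** -1 * (1 - eye)
        q = q / q.sum(dim=1, keepdim=True)
        return (p * (torch.log(p + 1e-9) - torch.log(q + 1e-9))).sum(1).mean()

    def loss_fn(model, x):
        recon, mu, logvar, z = model(x)
        rec = F.mse_loss(recon, x)                                    # reconstruction
        kld = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())  # prior KL
        return rec + kld + sne_term(x, z)

Training then amounts to minimizing loss_fn over minibatches with any standard optimizer, which is what lets this family of models scale to datasets too large to fit in memory.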


2020
Author(s): Kevin C. VanHorn, Murat Can Çobanoğlu

Dimensionality reduction (DR) is often integral when analyzing high-dimensional data across scientific, economic, and social networking applications. For data with a high order of complexity, nonlinear approaches are often needed to identify and represent the most important components. We propose a novel DR approach that can incorporate a known underlying hierarchy. Specifically, we extend the widely used t-distributed stochastic neighbor embedding technique (t-SNE) to include hierarchical information and demonstrate its use with known or unknown class labels. We term this approach "H-tSNE." Such a strategy can aid in discovering and understanding the underlying patterns of a dataset that is heavily influenced by parent-child relationships. We suggest that, without integrating information known a priori, DR cannot function as effectively. In this regard, we argue for a DR approach that enables the user to incorporate known, relevant relationships even if their representation is weakly expressed in the dataset.

Availability: github.com/Cobanoglu-Lab/h-tSNE
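The authors' code is at the link above; the sketch below is only one plausible way to express the general idea, inflating pairwise distances for points whose classes are far apart in a known label tree and feeding the result to t-SNE with a precomputed metric. The toy hierarchy, the penalty of 0.5 per unshared ancestor, and the data are all invented for illustration.

    import numpy as np
    from sklearn.manifold import TSNE

    def hierarchy_factor(labels, parent):
        # Inflate distances for pairs whose classes share few ancestors in
        # the label tree (a hypothetical penalty, not the paper's formula).
        def path(c):
            out = [c]
            while parent.get(c) is not None:
                c = parent[c]
                out.append(c)
            return out
        n = len(labels)
        fac = np.ones((n, n))
        for i in range(n):
            for j in range(n):
                pi, pj = path(labels[i]), path(labels[j])
                shared = len(set(pi) & set(pj))
                fac[i, j] = 1.0 + 0.5 * (max(len(pi), len(pj)) - shared)
        return fac

    # Toy data: 4 leaf classes grouped under 2 superclasses (ids 4 and 5).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    labels = rng.integers(0, 4, size=200)
    parent = {0: 4, 1: 4, 2: 5, 3: 5, 4: None, 5: None}

    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    D_h = D * hierarchy_factor(labels, parent)
    Y = TSNE(metric="precomputed", init="random", random_state=0).fit_transform(D_h)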


2021 · Vol 2021 · pp. 1-9
Author(s): Yongbin Liu, Jingjie Wang, Wei Bai

Reducing the dimensionality of images with a high-dimensional nonlinear structure is key to improving the recognition rate. Although traditional algorithms have achieved some results in dimensionality reduction, they also expose their respective defects. To achieve good recognition of high-dimensional nonlinear images, and building on an analysis of traditional dimensionality reduction algorithms that retains their advantages, an image recognition technique based on nonlinear dimensionality reduction is proposed. As an effective nonlinear feature extraction method, nonlinear dimensionality reduction can find the nonlinear structure of a dataset and maintain the intrinsic structure of the data. Applying nonlinear dimensionality reduction to image recognition means dividing the input image into blocks, treating them as a dataset in a high-dimensional space, reducing the dimensionality of this structure, and obtaining a low-dimensional expression vector of its eigenstructure, so that image recognition can be carried out in a lower dimension. Thus, the computational complexity can be reduced, the recognition accuracy can be improved, and further processing such as image recognition and search becomes convenient. The defects of traditional algorithms are resolved, and commodity price recognition and simulation experiments are carried out, verifying the feasibility of image recognition based on nonlinear dimensionality reduction for commodity price recognition.
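The abstract does not pin down a specific embedding algorithm; as a hedged sketch of the overall pipeline (embed image vectors nonlinearly, then recognize in the low-dimensional space), the code below uses Isomap on the scikit-learn digits images. Treating whole 8x8 images as the high-dimensional vectors, rather than the paper's block division, and every parameter value here are simplifications.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.manifold import Isomap
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)      # 8x8 images as 64-D vectors

    # Nonlinear dimensionality reduction: map the 64-D image vectors to a
    # 10-D space that preserves the data's manifold structure.
    Z = Isomap(n_neighbors=10, n_components=10).fit_transform(X)

    # Recognition is then carried out in the lower-dimensional space.
    Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Ztr, ytr)
    print("recognition accuracy:", clf.score(Zte, yte))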


Algorithms · 2020 · Vol 13 (5) · pp. 109
Author(s): Marian B. Gorzałczany, Filip Rudziński

In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks, usually referred to as self-organizing maps (SOMs), to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, we discuss Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs). They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared on selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks, providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our aforementioned original solutions are demonstrated.
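None of the surveyed implementations are reproduced here; the toy sketch below only illustrates the shared growth mechanism (train, find the unit with the largest accumulated quantization error, insert a new unit beside it) for a SOM with a 1-D chain neighborhood, loosely in the spirit of GeSOMs with 1DN. All schedules and constants are arbitrary.

    import numpy as np

    def growing_som_1d(X, max_units=20, epochs=5, lr=0.3, seed=0):
        # Chain-topology SOM that grows: after each training phase, a new
        # unit is inserted next to the unit with the largest accumulated
        # quantization error (a simplified growth rule, not GeSOM itself).
        rng = np.random.default_rng(seed)
        W = X[rng.choice(len(X), size=2, replace=False)].astype(float)
        while len(W) < max_units:
            err = np.zeros(len(W))
            for _ in range(epochs):
                for x in X[rng.permutation(len(X))]:
                    d = ((W - x) ** 2).sum(axis=1)
                    w = int(np.argmin(d))                    # best-matching unit
                    err[w] += d[w]
                    W[w] += lr * (x - W[w])                  # update winner ...
                    for j in (w - 1, w + 1):                 # ... and chain neighbors
                        if 0 <= j < len(W):
                            W[j] += 0.5 * lr * (x - W[j])
            k = int(np.argmax(err))
            nb = k + 1 if k + 1 < len(W) else k - 1
            W = np.insert(W, min(k, nb) + 1, (W[k] + W[nb]) / 2, axis=0)
        return W

    # Toy 1-D manifold in 2-D: the grown chain should line up along the curve.
    s = np.linspace(0, 4 * np.pi, 300)
    X = np.column_stack([s, np.sin(s)])
    units = growing_som_1d(X)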


2014 · Vol 1014 · pp. 375-378
Author(s): Ri Sheng Huang

To effectively improve the performance of speech emotion recognition, nonlinear dimensionality reduction is needed for speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. This paper proposes an improved SLLE algorithm, which enhances the discriminating power of the low-dimensional embedded data and possesses optimal generalization ability. The proposed algorithm is used to perform nonlinear dimensionality reduction on 48-dimensional speech emotional feature data, including prosody, so as to recognize three emotions: anger, joy, and neutral. Experimental results on a natural speech emotional database demonstrate that the proposed algorithm obtains the highest accuracy of 90.97% with fewer than 9 embedded features, an 11.64% improvement over the SLLE algorithm.
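The improved SLLE itself is not given in the abstract; the sketch below implements the standard supervised-LLE idea it builds on, inflating pairwise distances across class boundaries before neighbor selection and then running plain LLE. The inflation weight alpha, the neighborhood size k, and the demo data are placeholders.

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.datasets import load_iris

    def slle(X, y, k=8, d=2, alpha=0.5):
        n = X.shape[0]
        # Supervised twist: stretch distances between different-class points
        # so neighborhoods are chosen mostly within a class.
        D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        D = D + alpha * D.max() * (y[:, None] != y[None, :])
        W = np.zeros((n, n))
        for i in range(n):
            idx = np.argsort(D[i])[1:k + 1]           # k neighbors (skip self)
            Z = X[idx] - X[i]                         # centered neighborhood
            G = Z @ Z.T
            G = G + 1e-3 * np.trace(G) * np.eye(k)    # regularize Gram matrix
            w = np.linalg.solve(G, np.ones(k))
            W[i, idx] = w / w.sum()                   # reconstruction weights
        # Embedding: bottom eigenvectors of (I - W)^T (I - W), skipping the
        # constant one.
        M = (np.eye(n) - W).T @ (np.eye(n) - W)
        vals, vecs = eigh(M)
        return vecs[:, 1:d + 1]

    X, y = load_iris(return_X_y=True)                 # stand-in for speech features
    emb = slle(X, y)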


2011 · Vol 219-220 · pp. 994-998
Author(s): Xian Lin Zou, Qing Sheng Zhu, Rui Long Yang

Isomap is a classic and efficient manifold learning algorithm, which aims at finding the intrinsic structure hidden in high-dimensional data. The only deficiency of this algorithm is that it requires the user to input a free parameter k, which is closely related to the success of unfolding the true intrinsic structure and to the algorithm's topological stability. Here, we propose a novel and simple k-nn-based concept, the natural nearest neighbor (3N), which is independent of the parameter k, so as to address the longstanding problem of how to automatically choose the only free parameter k in manifold learning algorithms, and we implement a completely unsupervised learning algorithm, 3N-Isomap, for nonlinear dimensionality reduction without the use of any a priori information about the intrinsic structure. Experimental results show that 3N-Isomap is a more practical and simpler algorithm than Isomap.
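The precise 3N definition is in the paper; the sketch below is one reading of the idea, widening the search round r until every point has appeared in some other point's neighbor list, and using the resulting r as Isomap's k. The dataset and the stopping rule are illustrative.

    import numpy as np
    from sklearn.datasets import make_s_curve
    from sklearn.manifold import Isomap
    from sklearn.neighbors import NearestNeighbors

    def natural_neighbor_k(X):
        # Round r = 1, 2, ...: each point reaches out to its r-th nearest
        # neighbor; stop once every point has been reached by someone.
        n = len(X)
        _, idx = NearestNeighbors(n_neighbors=n).fit(X).kneighbors(X)
        reached = np.zeros(n, dtype=bool)
        r = 1
        while not reached.all() and r < n - 1:
            reached[idx[:, r]] = True                 # idx[:, 0] is the point itself
            r += 1
        return r

    X, _ = make_s_curve(n_samples=1000, random_state=0)
    k = natural_neighbor_k(X)                         # no user-supplied k
    Y = Isomap(n_neighbors=k).fit_transform(X)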

