DATA COMPRESSION AND REGRESSION THROUGH LOCAL PRINCIPAL CURVES AND SURFACES

2010 ◽  
Vol 20 (03) ◽  
pp. 177-192 ◽  
Author(s):  
JOCHEN EINBECK ◽  
LUDGER EVERS ◽  
BENEDICT POWELL

We consider principal curves and surfaces in the context of multivariate regression modelling. For predictor spaces featuring complex dependency patterns between the involved variables, the intrinsic dimensionality of the data tends to be very small due to the high redundancy induced by the dependencies. In situations of this type, it is useful to approximate the high-dimensional predictor space through a low-dimensional manifold (i.e., a curve or a surface), and use the projections onto the manifold as compressed predictors in the regression problem. In the case that the intrinsic dimensionality of the predictor space equals one, we use the local principal curve algorithm for the compression step. We provide a novel algorithm which extends this idea to local principal surfaces, thus covering the case of intrinsic dimensionality two; the approach is in principle extensible to manifolds of arbitrary dimension. We motivate and apply the novel techniques using astrophysical and oceanographic data examples.
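The compress-then-regress idea can be sketched in a few lines, with a global principal component standing in for the local principal curve; the data, variable names, and the cubic regression below are illustrative choices, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's local principal curve algorithm):
# a 2-D predictor space with intrinsic dimensionality one is compressed
# to a single coordinate, which then serves as the regression predictor.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 200)                      # latent 1-D parameter
X = np.column_stack([t, 2 * t]) + rng.normal(0, 0.01, (200, 2))
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.05, 200)

# Compression step: project predictors onto the leading principal component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
s = Xc @ Vt[0]                                  # compressed 1-D predictor

# Regression step: fit y on the compressed coordinate (cubic polynomial).
coeffs = np.polyfit(s, y, deg=3)
y_hat = np.polyval(coeffs, s)
print(f"correlation on compressed predictor: {np.corrcoef(y, y_hat)[0, 1]:.2f}")
```

Replacing the linear component with a locally fitted curve is what allows the method to follow bent or branching predictor clouds.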

2021 ◽  
Vol 17 (11) ◽  
pp. e1008591 ◽  
Author(s):  
Ege Altan ◽  
Sara A. Solla ◽  
Lee E. Miller ◽  
Eric J. Perreault

It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms’ accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the “Joint Autoencoder”, which significantly improved subsequent dimensionality estimation. 
Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
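One of the reported failure modes, a linear estimator overestimating the dimensionality of nonlinearly embedded data, can be reproduced with synthetic data; the generative model and the 95% variance threshold below are illustrative choices, not the paper's pipeline.

```python
import numpy as np

# Synthetic "recording": a 1-D latent variable embedded nonlinearly in
# 10 dimensions, noise-free.  A linear PCA-based estimator sees the
# curvature as extra dimensions and overestimates the dimensionality.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 1000)         # intrinsic dimensionality = 1

# Nonlinear embedding via harmonics of the latent variable.
latent = np.column_stack([np.cos(theta), np.sin(theta), np.cos(2 * theta)])
W = rng.normal(size=(3, 10))
X = latent @ W                                  # 1-D manifold in 10-D space

# Linear estimate: number of PCs needed to explain 95% of the variance.
Xc = X - X.mean(axis=0)
var = np.linalg.svd(Xc, compute_uv=False) ** 2
dim_linear = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95) + 1)
print(dim_linear)   # exceeds the true intrinsic dimensionality of 1
```

Nonlinear estimators avoid this particular bias, but, as the abstract notes, noise, high intrinsic dimensionality, and limited data remain problematic for all methods.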


2017 ◽  
Vol 19 (12) ◽  
pp. 125012 ◽  
Author(s):  
Carlos Floyd ◽  
Christopher Jarzynski ◽  
Garegin Papoian

2020 ◽  
Author(s):  
Wei Guo ◽  
Jie J. Zhang ◽  
Jonathan P. Newman ◽  
Matthew A. Wilson

Latent learning allows the brain to transform experiences into cognitive maps, a form of implicit memory, without reinforced training. Its mechanism is unclear. We tracked the internal states of hippocampal neural ensembles and discovered that during latent learning of a spatial map, the state space evolved into a low-dimensional manifold that topologically resembled the physical environment. This process requires repeated experiences and sleep in between. Further investigations revealed that a subset of hippocampal neurons, instead of rapidly forming place fields in a novel environment, remained weakly tuned but gradually developed correlated activity with other neurons. These 'weakly spatial' neurons bind the activity of neurons with stronger spatial tuning, linking discrete place fields into a map that supports flexible navigation.
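The state-space observation can be illustrated with a toy model: simulated place-cell activity on a circular track traces out a ring-shaped manifold that two principal components recover. All tuning curves, counts, and parameters below are hypothetical, not the study's data or analysis.

```python
import numpy as np

# Toy population: 40 cells with von Mises-like place fields on a circular
# track, observed as Poisson spike counts while position sweeps the circle.
rng = np.random.default_rng(2)
pos = np.linspace(0, 2 * np.pi, 500)             # position over time
centers = rng.uniform(0, 2 * np.pi, 40)          # place-field centers
rates = np.exp(3 * np.cos(pos[:, None] - centers[None, :]))
spikes = rng.poisson(rates)                      # noisy spike counts

# Project the population activity onto its first two principal components.
Z = spikes - spikes.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
embed = Z @ Vt[:2].T                             # 2-D state-space trajectory

# The trajectory is ring-like: its radius varies little around the mean,
# mirroring a manifold that topologically resembles the circular track.
r = np.hypot(embed[:, 0], embed[:, 1])
print(f"radius coefficient of variation: {r.std() / r.mean():.2f}")
```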


2018 ◽  
Vol 21 (5) ◽  
pp. 824-837 ◽  
Author(s):  
Jian Huang ◽  
Gordon McTaggart-Cowan ◽  
Sandeep Munshi

This article describes the application of a modified first-order conditional moment closure model used in conjunction with the trajectory-generated low-dimensional manifold method in large-eddy simulation of pilot-ignited high-pressure direct-injection natural gas combustion in a heavy-duty diesel engine. The article starts with a review of the intrinsic low-dimensional manifold method for reducing detailed chemistry and of various formulations for the construction of such manifolds. This is followed by a brief review of the conditional moment closure method for modelling the interaction between turbulence and combustion chemistry, and a discussion of the high computational cost associated with the direct implementation of the basic conditional moment closure model. The article then describes the formulation of a modified approach to solving the conditional moment closure equation, in which the reaction source terms for the conditional species mass fractions are obtained by projecting the turbulent perturbation onto the reaction manifold. The main model assumptions are explained and the resulting limitations discussed. A numerical experiment is conducted to examine the validity of the model assumptions. The model is then implemented in a combustion computational fluid dynamics solver developed on an open-source computational fluid dynamics platform. Non-reactive jet simulations are first conducted and the results compared to experimental measurements from a high-pressure visualization chamber to verify that jet penetration under engine-relevant conditions is correctly predicted. The model is then used to simulate natural gas combustion in a heavy-duty diesel engine equipped with a high-pressure direct-injection system. The simulation results are compared with experimental measurements from a research engine to verify the accuracy of the model for both the combustion rate and engine-out emissions.
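The tabulation idea at the heart of manifold-reduction methods can be sketched as a lookup: chemistry is pre-computed along a one-dimensional manifold parameterized by a progress variable, and the solver interpolates source terms instead of integrating detailed kinetics. The source-term shape and all numbers below are invented for illustration and are not this article's trajectory-generated formulation.

```python
import numpy as np

# Hypothetical 1-D manifold: reaction source term tabulated against a
# progress variable c in [0, 1].  A real ILDM table would come from
# detailed-chemistry calculations; this parabola is a stand-in.
c_table = np.linspace(0.0, 1.0, 101)            # progress-variable grid
omega_table = 50.0 * c_table * (1.0 - c_table)  # tabulated source term [1/s]

def source_term(c):
    """Retrieve the reaction source term by linear interpolation."""
    return np.interp(c, c_table, omega_table)

# Advance the progress variable in a single cell with the tabulated rates,
# as a CFD solver would do each time step instead of solving stiff ODEs.
c, dt = 0.01, 1e-3
for _ in range(100):
    c += dt * source_term(c)
print(f"progress variable after 0.1 s: {c:.3f}")
```

The cost saving comes from replacing a stiff ODE system in dozens of species with one interpolated lookup per cell and time step.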


2020 ◽  
Vol 371 ◽  
pp. 108-123 ◽  
Author(s):  
Ruiqiang He ◽  
Xiangchu Feng ◽  
Weiwei Wang ◽  
Xiaolong Zhu ◽  
Chunyu Yang

2021 ◽  
Author(s):  
Mikhail Andronov ◽  
Maxim Fedorov ◽  
Sergey Sosnin

Humans prefer visual representations for the analysis of large databases. In this work, we suggest a method for the visualization of the chemical reaction space. Our technique uses the t-SNE approach parameterized by a deep neural network (parametric t-SNE). We demonstrated that parametric t-SNE combined with reaction difference fingerprints can provide a tool for projecting chemical reactions onto a low-dimensional manifold for easy exploration of reaction space. We showed that the global reaction landscape, when projected onto a 2D plane, corresponds well with already known reaction types. Applying a pretrained parametric t-SNE model to new reactions allows chemists to study these reactions within the global reaction space. We validated the feasibility of this approach for two marketed drugs: darunavir and oseltamivir. We believe that our method can help explore reaction space and inspire chemists to find new reactions and synthetic routes.
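The fingerprint-then-project data flow can be mimicked without any chemistry toolkit: random bit vectors stand in for real reaction fingerprints, and a plain PCA stands in for the neural-network-parameterized t-SNE, so every name and number below is hypothetical.

```python
import numpy as np

# Mock pipeline: build difference fingerprints for two "reaction classes"
# and project them to 2-D.  Real difference fingerprints are computed by
# cheminformatics toolkits from reactant and product structures.
rng = np.random.default_rng(3)

def difference_fingerprint(product_fp, reactant_fp):
    """Difference fingerprint: product bits minus reactant bits."""
    return product_fp.astype(int) - reactant_fp.astype(int)

n, bits = 100, 256
reactants = rng.integers(0, 2, (2 * n, bits))
products = reactants.copy()
products[:n, :32] = 1         # class A sets one bit region
products[n:, 32:64] = 1       # class B sets another
fps = difference_fingerprint(products, reactants)

# Projection onto two dimensions (PCA as a stand-in for parametric t-SNE).
Z = fps - fps.mean(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
embed = Z @ Vt[:2].T

# The two reaction classes separate into distinct clusters in the 2-D map.
dist = np.linalg.norm(embed[:n].mean(axis=0) - embed[n:].mean(axis=0))
print(f"distance between class centroids: {dist:.1f}")
```

The advantage of a parametric map over plain t-SNE is precisely the last step the abstract describes: a trained network can embed previously unseen reactions into the existing landscape without recomputing it.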

