Overlapping tiling for fast random access of low-dimensional data from high-dimensional datasets

Author(s):  
Zihong Fan ◽  
Antonio Ortega


2021 ◽  
Author(s):  
Mark M. Dekker ◽  
Arthur S. C. França ◽  
Debabrata Panja ◽  
Michael X Cohen

Abstract
Background: With the growing size and richness of neuroscience datasets in terms of dimension, volume, and resolution, identifying spatiotemporal patterns in those datasets is increasingly important. Multivariate dimension-reduction methods are particularly adept at addressing these challenges.
New Method: In this paper, we propose a novel method, which we refer to as Principal Louvain Clustering (PLC), to identify clusters in a low-dimensional data subspace, based on time-varying trajectories of spectral dynamics across multisite local field potential (LFP) recordings in awake behaving mice. Data were recorded from prefrontal cortex, hippocampus, and parietal cortex in eleven mice while they explored novel and familiar environments.
Results: PLC-identified subspaces and clusters showed high consistency across animals, and were modulated by the animals' ongoing behavior.
Conclusions: PLC adds to an important and growing literature on methods for characterizing dynamics in high-dimensional datasets using a smaller number of parameters. The method is also applicable to other kinds of datasets, such as EEG or MEG.
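As a rough illustration of a PLC-style pipeline, the NumPy sketch below (array names and sizes are illustrative; this is not the authors' implementation) projects multichannel spectral features onto a low-dimensional PCA subspace and builds the k-nearest-neighbour graph on which a Louvain-style community detection would then identify clusters:

```python
import numpy as np

def pca_project(X, n_components=3):
    """Project samples (rows of X) onto their top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes in Vt
    return Xc @ Vt[:n_components].T

def knn_adjacency(Y, k=5):
    """Symmetric k-nearest-neighbour adjacency matrix over rows of Y."""
    d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)             # no self-edges
    idx = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours per node
    A = np.zeros_like(d)
    rows = np.repeat(np.arange(len(Y)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)               # symmetrize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # e.g. 200 time points x 12 spectral features
Y = pca_project(X, n_components=3)          # low-dimensional subspace
A = knn_adjacency(Y, k=5)                   # graph on which Louvain would run
```

Louvain itself would then partition the adjacency matrix `A` (e.g. via a graph-community library); only the projection and graph-construction steps are sketched here.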


2017 ◽  
Vol 27 (06) ◽  
pp. 1750024 ◽  
Author(s):  
Héctor Quintián ◽  
Emilio Corchado

In this research, a novel family of learning rules called Beta Hebbian Learning (BHL) is thoroughly investigated to extract information from high-dimensional datasets by projecting the data onto low-dimensional (typically two-dimensional) subspaces, improving on existing exploratory methods by providing a clear representation of the data's internal structure. BHL applies a family of learning rules derived from the Probability Density Function (PDF) of the residual, based on the beta distribution. This family of rules may be called Hebbian in that they all use a simple multiplication of the output of the neural network by some function of the residuals after feedback. The derived learning rules can be linked to an adaptive form of Exploratory Projection Pursuit, and with artificial distributions the networks perform as the theory suggests they should: the use of different learning rules derived from different PDFs allows the identification of "interesting" dimensions (as far from the Gaussian distribution as possible) in high-dimensional datasets. This novel algorithm, BHL, has been tested on seven artificial datasets to study the behavior of its parameters, and was later applied successfully to four real datasets, comparing its results, in terms of performance, with other well-known exploratory projection models such as Maximum Likelihood Hebbian Learning (MLHL), Locally Linear Embedding (LLE), Curvilinear Component Analysis (CCA), Isomap, and Neural Principal Component Analysis (Neural PCA).
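The Hebbian structure of such rules (output multiplied by a function of the fed-back residual) can be sketched as follows; the residual nonlinearity below is only a beta-PDF-shaped stand-in with hypothetical parameters, not the exact BHL rule from the paper:

```python
import numpy as np

def bhl_step(W, x, alpha, beta_p, eta=0.01):
    """One Hebbian update with a beta-PDF-shaped residual nonlinearity
    (illustrative stand-in for the BHL family of rules)."""
    y = W @ x                     # feed-forward output of the network
    e = x - W.T @ y               # residual after feedback
    # beta-PDF-like function of the residual (integer exponents keep it real)
    f = e ** (alpha - 1) * (1.0 - e) ** (beta_p - 1)
    W += eta * np.outer(y, f)     # Hebbian: output times function of residual
    return W

rng = np.random.default_rng(1)
W = 0.1 * rng.normal(size=(2, 4))   # 2 outputs, 4 inputs
x = rng.uniform(size=4)
W = bhl_step(W, x, alpha=3, beta_p=3)
```

Changing the exponents changes which departures from Gaussianity the rule is sensitive to, which is the "family of rules" aspect described above.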


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1646
Author(s):  
Alireza Entezami ◽  
Hassan Sarmadi ◽  
Behshid Behkamal ◽  
Stefano Mariani

A major challenge in structural health monitoring (SHM) is the efficient handling of big data, namely of high-dimensional datasets, when damage detection under environmental variability is being assessed. To address this issue, a novel data-driven approach to early damage detection is proposed here. The approach is based on an efficient partitioning of the dataset gathering the sensor recordings, and on classical multidimensional scaling (CMDS). The partitioning procedure aims at moving towards a low-dimensional feature space; the CMDS algorithm is instead exploited to set the coordinates in the mentioned low-dimensional space, and to define damage indices as norms of those coordinates. The proposed approach is shown to efficiently and robustly address the challenges linked to high-dimensional datasets and environmental variability. Results related to two large-scale test cases are reported: the ASCE structure and the Z24 bridge. A high sensitivity to damage and a limited (if any) number of false alarms and false detections are reported, testifying to the efficacy of the proposed data-driven approach.
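Classical multidimensional scaling itself is standard: double-center the squared-distance matrix and take the top eigenvectors. A minimal NumPy sketch follows (the dataset partitioning and the paper's exact damage indices are not reproduced; `damage_index` is only an illustrative norm):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: low-dimensional coordinates from squared distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ D @ J                    # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:k]         # take the k largest
    L = np.sqrt(np.maximum(w[order], 0.0))  # clip tiny numerical negatives
    return V[:, order] * L

def damage_index(coords):
    """Illustrative damage-sensitive feature: norm of the CMDS coordinates."""
    return np.linalg.norm(coords, axis=1)

# demo: four points on a unit square are recovered up to rotation
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
coords = classical_mds(D, k=2)
```

Because CMDS only needs a pairwise-distance matrix, it can be run per data partition, which is what makes it attractive for the large sensor datasets discussed above.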


2018 ◽  
Vol 24 (1) ◽  
pp. 236-245 ◽  
Author(s):  
Jiazhi Xia ◽  
Fenjin Ye ◽  
Wei Chen ◽  
Yusi Wang ◽  
Weifeng Chen ◽  
...  

2021 ◽  
Author(s):  
Zahra Atashgahi ◽  
Ghada Sokar ◽  
Tim van der Lee ◽  
Elena Mocanu ◽  
Decebal Constantin Mocanu ◽  
...  

Abstract
Major complications arise from the recent increase in the amount of high-dimensional data, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most existing feature selection methods are computationally inefficient; inefficient algorithms lead to high energy consumption, which is not desirable for devices with limited computational and energy resources. In this paper, a novel and flexible method for unsupervised feature selection is proposed. This method, named QuickSelection (the code is available at: https://github.com/zahraatashgahi/QuickSelection), introduces the strength of the neuron in sparse neural networks as a criterion to measure feature importance. This criterion, blended with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously. We implement QuickSelection in a purely sparse manner, as opposed to the typical approach of using a binary mask over connections to simulate sparsity, which results in a considerable speed increase and memory reduction. When tested on several benchmark datasets, including five low-dimensional and three high-dimensional datasets, the proposed method achieves the best trade-off among classification and clustering accuracy, running time, and maximum memory usage, compared with widely used approaches for feature selection. Moreover, our proposed method requires the least amount of energy among the state-of-the-art autoencoder-based feature selection methods.
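The strength criterion itself reduces to a simple reduction over the input-layer weights. A minimal dense-array sketch (function names are illustrative; the actual QuickSelection implementation uses truly sparse data structures and evolutionary training, as described above):

```python
import numpy as np

def neuron_strength(W_in):
    """Importance of each input feature: summed absolute weight of its
    connections to the first hidden layer (zeros = pruned connections)."""
    return np.abs(W_in).sum(axis=1)

def select_features(W_in, n_features):
    """Indices of the n_features inputs with the largest strength."""
    return np.argsort(neuron_strength(W_in))[::-1][:n_features]

# toy sparse input-layer weights: 3 input features x 2 hidden neurons
W_in = np.array([[0.0, 2.0],
                 [0.5, -0.1],
                 [3.0, 1.0]])
```

Since the strength of every input neuron is available after training, all feature importances are obtained in one pass rather than by retraining per feature subset.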


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual sleep stage classification is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous works have shown that low-dimensional fast Fourier transform (FFT) features combined with many machine learning algorithms can be applied. In this paper, we demonstrate the utilization of features extracted from EEG signals via FFT to improve the performance of automated sleep stage classification through machine learning methods. Unlike previous works using FFT, we incorporate thousands of FFT features in order to classify the sleep stages into 2–6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperformed other state-of-the-art methods. This result indicates that high-dimensional FFT features in combination with simple feature selection are effective for improving automated sleep stage classification.
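The feature-extraction step can be sketched as follows (plain NumPy; the sampling rate, epoch length, and 30 Hz cut-off are illustrative choices, not the paper's exact settings):

```python
import numpy as np

def fft_features(epochs, fs, fmax):
    """High-dimensional FFT amplitude features: one amplitude per frequency
    bin up to fmax Hz, for each epoch (rows of `epochs`)."""
    amps = np.abs(np.fft.rfft(epochs, axis=1))
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / fs)
    return amps[:, freqs <= fmax]   # keep every bin up to fmax Hz

# demo: a 30-s epoch at 100 Hz containing a pure 10 Hz oscillation
t = np.arange(3000) / 100.0
epochs = np.sin(2 * np.pi * 10.0 * t)[None, :]
feats = fft_features(epochs, fs=100.0, fmax=30.0)
```

With a 30-s epoch, this already yields hundreds of features per channel, which is what "thousands of FFT features" amounts to once several channels and wider bands are included.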


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 743
Author(s):  
Xi Liu ◽  
Shuhang Chen ◽  
Xiang Shen ◽  
Xiang Zhang ◽  
Yiwen Wang

Neural signal decoding is a critical technology in brain–machine interfaces (BMIs) for interpreting movement intention from multi-neural activity collected from paralyzed patients. As a commonly used decoding algorithm, the Kalman filter is often applied to derive the movement states from high-dimensional neural firing observations. However, its performance is limited and less effective for noisy nonlinear neural systems with high-dimensional measurements. In this paper, we propose a nonlinear maximum correntropy information filter, aiming at better state estimation in the filtering process for a noisy high-dimensional measurement system. We reconstruct the measurement model between the high-dimensional measurements and the low-dimensional states using a neural network, and derive the state estimation using the correntropy criterion to cope with non-Gaussian noise and to eliminate large initial uncertainty. Moreover, analyses of convergence and robustness are given. The effectiveness of the proposed algorithm is evaluated by applying it to multiple segments of neural spiking data from two rats to interpret the movement states while the subjects perform a two-lever discrimination task. Our results demonstrate better and more robust state estimation performance when compared with other filters.
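The correntropy criterion replaces the squared-error cost with a Gaussian kernel, which saturates for large errors and so down-weights non-Gaussian outliers. A minimal illustration (a robust reweighted mean, not the paper's information filter):

```python
import numpy as np

def correntropy(e, sigma=1.0):
    """Gaussian-kernel correntropy of an error: near 1 for small errors,
    near 0 for outliers, unlike the unbounded squared-error cost."""
    return np.exp(-e ** 2 / (2.0 * sigma ** 2))

# robust location estimate: iteratively reweight by correntropy so the
# outlier at 10.0 barely influences the result
x = np.array([1.0, 1.1, 0.9, 10.0])
mu = x.mean()                   # least-squares start, pulled toward 3.25
for _ in range(10):
    w = correntropy(x - mu)     # outlying residuals get tiny weights
    mu = (w * x).sum() / w.sum()
```

The same reweighting idea, applied to the innovation term of a Kalman-style update, is what makes correntropy-based filters robust to heavy-tailed measurement noise.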


Author(s):  
Fumiya Akasaka ◽  
Kazuki Fujita ◽  
Yoshiki Shimomura

This paper proposes the PSS Business Case Map as a tool to support designers' idea generation in PSS design. The map visualizes the similarities among PSS business cases in a two-dimensional diagram. To make the map, PSS business cases are first collected by conducting, for example, a literature survey. The collected business cases are then classified from multiple aspects that characterize each case, such as its product type, service type, and target customer. Based on the results of this classification, the similarities among the cases are calculated and visualized by using the Self-Organizing Map (SOM) technique. A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional) view of high-dimensional data. The visualization result is offered to designers in the form of a two-dimensional map, called the PSS Business Case Map. By using the map, designers can figure out the position of their current business and acquire ideas for the servitization of their business.
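A Self-Organizing Map can be sketched in a few lines of NumPy; the grid size, learning rate, and neighbourhood width below are illustrative choices, not those behind the PSS Business Case Map:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=10, eta=0.5, sigma=1.5, seed=0):
    """Minimal SOM: learn a 2-D grid of prototypes from high-dimensional data."""
    rng = np.random.default_rng(seed)
    h, w = grid
    nodes = np.array([(i, j) for i in range(h) for j in range(w)], float)
    W = rng.normal(size=(h * w, data.shape[1]))
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            g = np.linalg.norm(nodes - nodes[bmu], axis=1)  # grid distances
            nb = np.exp(-g ** 2 / (2.0 * sigma ** 2))       # neighbourhood
            W += eta * nb[:, None] * (x - W)                # pull toward x
    return W, nodes

def grid_position(W, nodes, x):
    """Map position of a case = coordinates of its best-matching unit."""
    return nodes[np.argmin(np.linalg.norm(W - x, axis=1))]

# demo: two well-separated groups of 4-D "business case" feature vectors
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=(20, 4))
b = rng.normal(3.0, 0.1, size=(20, 4))
W, nodes = train_som(np.vstack([a, b]))
```

After training, `grid_position` places each case on the 2-D grid, so similar cases land on nearby cells, which is exactly the map layout offered to designers.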


Complexity ◽  
2003 ◽  
Vol 8 (4) ◽  
pp. 39-50 ◽  
Author(s):  
Stefan Häusler ◽  
Henry Markram ◽  
Wolfgang Maass
