Non-Negative Based Locally Sparse Representation for Classification

2013 ◽  
Vol 677 ◽  
pp. 502-507
Author(s):  
Kang Hua Hui ◽  
Chun Li Li ◽  
Xiao Rong Feng ◽  
Xue Yang Wang

In this paper, a new method is proposed that can be regarded as a combination of sparse representation based classification (SRC) and the KNN classifier. Specifically, under the assumption that a locally linear embedding exists, the proposed method performs classification via a non-negative locally sparse representation, combining the reconstruction property and sparsity of SRC with the discriminative power of KNN. Compared to SRC, the proposed method has clearer discrimination and is better suited to real image data, since it does not rely on preconditions that are difficult to satisfy. Moreover, it is well suited to classifying low-dimensional data produced by dimensionality reduction methods, especially those that yield low-dimensional, neighborhood-preserving embeddings of high-dimensional data. Experiments on MNIST are also presented and support these arguments.
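
A minimal sketch of a non-negative locally sparse classifier in the spirit described above, assuming the local dictionary is formed from the k nearest training neighbours and that a plain non-negative least-squares solver (scipy.optimize.nnls) stands in for the paper's exact optimisation; the class is chosen by the smallest class-wise reconstruction residual, as in SRC:

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.neighbors import NearestNeighbors

def nn_locally_sparse_classify(X_train, y_train, x_test, k=30):
    """Classify x_test by non-negative reconstruction over its k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_test.reshape(1, -1))
    idx = idx.ravel()
    D, labels = X_train[idx].T, y_train[idx]      # local dictionary (d x k) and its labels
    coef, _ = nnls(D, x_test)                     # non-negative sparse-like coefficients
    residuals = {}
    for c in np.unique(labels):
        mask = (labels == c)
        # class-wise reconstruction residual, as in SRC
        residuals[c] = np.linalg.norm(x_test - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)
```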

2020 ◽  
Vol 49 (3) ◽  
pp. 421-437
Author(s):  
Genggeng Liu ◽  
Lin Xie ◽  
Chi-Hua Chen

Dimensionality reduction plays an important role in data processing for machine learning and data mining, making the processing of high-dimensional data more efficient. Dimensionality reduction extracts a low-dimensional feature representation of high-dimensional data, and an effective method not only preserves most of the useful information in the original data but also removes useless noise. Dimensionality reduction methods can be applied to all types of data, especially image data. Although supervised learning has achieved good results in dimensionality reduction, its performance depends on the number of labeled training samples, and as the amount of information on the Internet grows, labeling data requires more resources and becomes more difficult. Therefore, using unsupervised learning to learn data features has great research value. In this paper, an unsupervised multilayer variational auto-encoder model is studied on text data, so that mapping high-dimensional features to low-dimensional features becomes efficient and the low-dimensional features retain as much of the essential information as possible. Low-dimensional features obtained by different dimensionality reduction methods are compared with the results of the variational auto-encoder (VAE), and the method shows significant improvements over the other comparison methods.
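
A minimal sketch of a multilayer variational auto-encoder used purely for dimensionality reduction; the layer sizes, latent dimension, and mean-squared-error reconstruction term are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim, hidden=(512, 128), latent=32):
        super().__init__()
        h1, h2 = hidden
        self.enc = nn.Sequential(nn.Linear(in_dim, h1), nn.ReLU(),
                                 nn.Linear(h1, h2), nn.ReLU())
        self.mu = nn.Linear(h2, latent)
        self.logvar = nn.Linear(h2, latent)
        self.dec = nn.Sequential(nn.Linear(latent, h2), nn.ReLU(),
                                 nn.Linear(h2, h1), nn.ReLU(),
                                 nn.Linear(h1, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + KL divergence to the standard normal prior
    rec = F.mse_loss(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

After training, the encoder mean mu(x) serves as the low-dimensional feature used for comparison with other dimensionality reduction methods.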


2005 ◽  
Vol 4 (1) ◽  
pp. 22-31 ◽  
Author(s):  
Timo Similä

One of the main tasks in exploratory data analysis is to create an appropriate representation for complex data. In this paper, the problem of creating a representation for observations lying on a low-dimensional manifold embedded in high-dimensional coordinates is considered. We propose a modification of the Self-Organizing Map (SOM) algorithm that is able to learn the manifold structure in the high-dimensional observation coordinates. Any manifold learning algorithm may be incorporated into the proposed training strategy to guide the map onto the manifold surface instead of becoming trapped in local minima. In this paper, the Locally Linear Embedding algorithm is adopted. We apply the proposed method successfully to several data sets with manifold geometry, including an illustrative example of a surface as well as image data. We also show with further experiments that the advantage of the method over the basic SOM is restricted to this specific type of data.
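
A minimal sketch of one plausible reading of such a training strategy, assuming the LLE embedding (via scikit-learn) is used to initialise the SOM codebook on the manifold before standard SOM updates refine it; this is an illustrative interpretation, not the author's exact algorithm:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def lle_guided_som(X, grid=(10, 10), n_iter=2000, lr=0.5, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    # 2-D LLE embedding of the data guides where the map starts
    emb = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
    xs = np.linspace(emb[:, 0].min(), emb[:, 0].max(), grid[0])
    ys = np.linspace(emb[:, 1].min(), emb[:, 1].max(), grid[1])
    coords, W = [], []
    for i, gx in enumerate(xs):
        for j, gy in enumerate(ys):
            coords.append([i, j])
            # initialise each node at the data point whose embedding is nearest
            W.append(X[np.argmin(np.linalg.norm(emb - [gx, gy], axis=1))])
    coords, W = np.array(coords), np.array(W, dtype=float)
    # standard SOM updates refine the codebook on the manifold surface
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))      # best-matching unit
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))                  # lattice neighbourhood
        W += lr * (1 - t / n_iter) * h[:, None] * (x - W)
    return W, coords
```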


2014 ◽  
Vol 1014 ◽  
pp. 375-378 ◽  
Author(s):  
Ri Sheng Huang

To effectively improve performance on speech emotion recognition, it is necessary to perform nonlinear dimensionality reduction on speech feature data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. This paper proposes an improved SLLE algorithm, which enhances the discriminating power of the low-dimensional embedded data and possesses optimal generalization ability. The proposed algorithm is used to perform nonlinear dimensionality reduction on 48-dimensional speech emotional feature data, including prosody, so as to recognize three emotions: anger, joy, and neutral. Experimental results on a natural speech emotional database demonstrate that the proposed algorithm obtains the highest accuracy of 90.97% with fewer than 9 embedded features, an 11.64% improvement over the SLLE algorithm.
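
A minimal sketch of the basic supervised LLE (SLLE) idea that the improved algorithm builds on: class labels inflate between-class distances before neighbour selection, then standard LLE reconstruction weights and the bottom eigenvectors give the low-dimensional embedding. The distance-inflation constant alpha and the regulariser are illustrative choices, not the paper's tuned values:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def slle(X, y, n_neighbors=10, n_components=8, alpha=0.3, reg=1e-3):
    n = len(X)
    D = cdist(X, X)
    D += alpha * D.max() * (y[:, None] != y[None, :])      # penalise cross-class pairs
    np.fill_diagonal(D, np.inf)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[:n_neighbors]
        Z = X[nbrs] - X[i]                                  # local coordinates
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)        # regularise for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs] = w / w.sum()                            # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = eigh(M)
    # discard the constant eigenvector, keep the next n_components
    return vecs[:, 1:n_components + 1]
```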


Author(s):  
Sang-Il Choi ◽  
Sang Tae Choi ◽  
Haanju Yoo

We propose a method that generates input features to effectively classify low-dimensional data. To do this, we first generate high-order terms for the input features of the original low-dimensional data to form a candidate set of new input features. Then, the discrimination power of the candidate input features is quantitatively evaluated by calculating the ‘discrimination distance’ for each candidate feature. As a result, only candidates with a large amount of discriminative information are selected to create a new input feature vector, and the discriminant features that are to be used as input to the classifier are extracted from the new input feature vectors by using a subspace discriminant analysis. Experiments on low-dimensional data sets in the UCI machine learning repository and several kinds of low-resolution facial image data show that the proposed method improves the classification performance of low-dimensional data by generating features.
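
A minimal sketch of the pipeline described above, assuming the 'discrimination distance' can be approximated by a per-feature ANOVA F-score and the subspace discriminant analysis by plain LDA: high-order terms are generated as candidates, the most discriminative candidates are kept, and LDA extracts the final features for the classifier:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def high_order_discriminant_features(X, y, degree=2, n_keep=20):
    # candidate set: original features plus their high-order terms
    candidates = PolynomialFeatures(degree=degree, include_bias=False).fit_transform(X)
    # keep only candidates carrying a large amount of discriminative information
    selector = SelectKBest(f_classif, k=min(n_keep, candidates.shape[1])).fit(candidates, y)
    selected = selector.transform(candidates)
    # subspace discriminant analysis on the new input feature vectors
    lda = LinearDiscriminantAnalysis().fit(selected, y)
    return lda.transform(selected)
```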


Author(s):  
JING CHEN ◽  
ZHENGMING MA

The goal of nonlinear dimensionality reduction is to find the meaningful low-dimensional structure of a nonlinear manifold from high-dimensional data. As a classic method of nonlinear dimensionality reduction, locally linear embedding (LLE) has become increasingly attractive to researchers due to its ability to deal with large amounts of high-dimensional data and its non-iterative way of finding the embeddings. However, several problems in the LLE algorithm remain open, such as its sensitivity to noise, inevitable ill-conditioned eigenproblems, and the inability to deal with novel data. In this paper, the existing extensions are comprehensively reviewed, discussed, and classified into different categories. Their strategies, advantages/disadvantages, and performances are elaborated. By generalizing the tactics used in various extensions at the different stages of LLE and evaluating their performances, several promising directions for future research are suggested.
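
As an illustration of the novel-data problem mentioned above, a minimal sketch of the standard out-of-sample trick for LLE: a new point is reconstructed from its training neighbours and the same weights are applied to the training embedding. It assumes the training data X and its precomputed LLE embedding Y are available:

```python
import numpy as np

def lle_out_of_sample(x_new, X, Y, n_neighbors=10, reg=1e-3):
    d = np.linalg.norm(X - x_new, axis=1)
    nbrs = np.argsort(d)[:n_neighbors]
    Z = X[nbrs] - x_new
    C = Z @ Z.T
    C += reg * np.trace(C) * np.eye(n_neighbors)   # regularised local Gram matrix
    w = np.linalg.solve(C, np.ones(n_neighbors))
    w /= w.sum()                                   # reconstruction weights
    return w @ Y[nbrs]                             # embedding of the novel point
```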


2020 ◽  
Vol 10 (5) ◽  
pp. 1797 ◽  
Author(s):  
Mera Kartika Delimayanti ◽  
Bedy Purnama ◽  
Ngoc Giang Nguyen ◽  
Mohammad Reza Faisal ◽  
Kunti Robiatul Mahmudah ◽  
...  

Manual classification of sleep stages is a time-consuming but necessary step in the diagnosis and treatment of sleep disorders, and its automation has been an area of active study. Previous works have applied low-dimensional fast Fourier transform (FFT) features together with many machine learning algorithms. In this paper, we demonstrate the use of features extracted from EEG signals via FFT to improve the performance of automated sleep stage classification with machine learning methods. Unlike previous works using FFT, we incorporate thousands of FFT features in order to classify the sleep stages into 2–6 classes. Using the expanded version of the Sleep-EDF dataset with 61 recordings, our method outperformed other state-of-the-art methods. This result indicates that high-dimensional FFT features combined with a simple feature selection are effective for improving automated sleep stage classification.
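
A minimal sketch of this kind of pipeline: per-epoch FFT magnitudes as a high-dimensional feature vector, a simple univariate feature selection, and an off-the-shelf classifier. The epoch layout, the number of selected features, and the random-forest classifier are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def fft_features(epochs):
    """epochs: array of shape (n_epochs, n_samples) of raw EEG per 30-s epoch."""
    return np.abs(np.fft.rfft(epochs, axis=1))   # thousands of spectral magnitudes per epoch

def build_classifier(n_selected=1000):
    # simple feature selection followed by a standard classifier
    return make_pipeline(SelectKBest(f_classif, k=n_selected),
                         RandomForestClassifier(n_estimators=200))

# usage (train_epochs / train_stages are placeholder names for your data):
# clf = build_classifier()
# clf.fit(fft_features(train_epochs), train_stages)
```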


Algorithms ◽  
2019 ◽  
Vol 12 (9) ◽  
pp. 186
Author(s):  
Fayeem Aziz ◽  
Aaron S.W. Wong ◽  
Stephan Chalup

The aim of manifold learning is to extract low-dimensional manifolds from high-dimensional data. Manifold alignment is a variant of manifold learning that uses two or more datasets that are assumed to represent different high-dimensional representations of the same underlying manifold. Manifold alignment can be successful in detecting latent manifolds in cases where one version of the data alone is not sufficient to extract and establish a stable low-dimensional representation. The present study proposes a parallel deep autoencoder neural network architecture for manifold alignment and conducts a series of experiments using a protein-folding benchmark dataset and a suite of new datasets generated by simulating double-pendulum dynamics with underlying manifolds of dimensions 2, 3 and 4. The dimensionality and topological complexity of these latent manifolds are above those occurring in most previous studies. Our experimental results demonstrate that the parallel deep autoencoder performs in most cases better than the tested traditional methods of semi-supervised manifold alignment. We also show that the parallel deep autoencoder can process datasets of different input domains by aligning the manifolds extracted from kinematics parameters with those obtained from corresponding image data.
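
A minimal sketch of a parallel autoencoder for manifold alignment, assuming paired samples from two input domains: each branch reconstructs its own input while an alignment term pulls corresponding latent codes together. Layer sizes, latent dimension, and the alignment weight are illustrative assumptions rather than the architecture used in the study:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def branch(in_dim, latent):
    enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent))
    dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, in_dim))
    return enc, dec

class ParallelAE(nn.Module):
    def __init__(self, dim_a, dim_b, latent=3):
        super().__init__()
        self.enc_a, self.dec_a = branch(dim_a, latent)   # e.g. kinematics-parameter branch
        self.enc_b, self.dec_b = branch(dim_b, latent)   # e.g. image-feature branch

    def loss(self, xa, xb, align_weight=1.0):
        za, zb = self.enc_a(xa), self.enc_b(xb)
        rec = F.mse_loss(self.dec_a(za), xa) + F.mse_loss(self.dec_b(zb), xb)
        align = F.mse_loss(za, zb)                        # pull paired latent codes together
        return rec + align_weight * align
```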


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Mengwan Wei ◽  
Yongzhao Du ◽  
Xiuming Wu ◽  
Qichen Su ◽  
Jianqing Zhu ◽  
...  

The classification of benign and malignant tumors from ultrasound images is of great value because breast cancer is an enormous threat to women's health worldwide. Although both texture and morphological features are crucial representations of ultrasound breast tumor images, their straightforward combination does little to improve benign/malignant classification, since the high-dimensional texture features are so dominant that they drown out the effect of the low-dimensional morphological features. To address this, an efficient method for combining texture and morphological features is proposed to improve benign/malignant classification. Firstly, both texture features (i.e., local binary patterns (LBP), histograms of oriented gradients (HOG), and gray-level co-occurrence matrices (GLCM)) and morphological features (i.e., shape complexities) of breast ultrasound images are extracted. Secondly, a support vector machine (SVM) classifier is trained on the texture features and a naive Bayes (NB) classifier is designed for the morphological features, in order to exert the discriminative power of each feature type. Thirdly, the classification scores of the two classifiers (i.e., SVM and NB) are fused with weights to obtain the final classification result. The low-dimensional, non-parameterized NB classifier effectively controls the parameter complexity of the entire classification system when combined with the high-dimensional, parametric SVM classifier; consequently, texture and morphological features are combined efficiently. Comprehensive experimental analyses are presented, and the proposed method obtains 91.11% accuracy, 94.34% sensitivity, and 86.49% specificity, outperforming many related benign and malignant breast tumor classification methods.
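
A minimal sketch of the weighted score fusion described above, using scikit-learn classifiers: an SVM on the texture features, a Gaussian naive Bayes on the morphological features, and a weighted sum of their class probabilities. The fusion weight w and the Gaussian NB variant are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def fused_predict(X_tex_tr, X_mor_tr, y_tr, X_tex_te, X_mor_te, w=0.6):
    svm = SVC(probability=True).fit(X_tex_tr, y_tr)    # high-dimensional texture branch
    nb = GaussianNB().fit(X_mor_tr, y_tr)              # low-dimensional morphology branch
    # weighted fusion of the two classifiers' probability scores
    scores = w * svm.predict_proba(X_tex_te) + (1 - w) * nb.predict_proba(X_mor_te)
    return svm.classes_[np.argmax(scores, axis=1)]     # final benign/malignant decision
```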


2011 ◽  
Vol 38 (10) ◽  
pp. 13472-13474 ◽  
Author(s):  
J.M. Nichols ◽  
F. Bucholtz ◽  
B. Nousain
