Feature Dimension Reduction and Graph Based Ranking Based Image Classification

2013 ◽  
Vol 380-384 ◽  
pp. 4035-4038 ◽  
Author(s):  
Nan Yao ◽  
Feng Qian ◽  
Zuo Lei Sun

Dimensionality reduction (DR) of image features plays an important role in image retrieval and classification tasks. Recently, two types of methods have been proposed to improve both the accuracy and efficiency of dimensionality reduction. One uses non-negative matrix factorization (NMF) to describe the image distribution in the space spanned by the basis matrix. The other trains a subspace projection matrix that maps the original data space into low-dimensional subspaces with a deep architecture, so that low-dimensional codes can be learned. In parallel, graph-based similarity learning algorithms, which exploit contextual information to improve the effectiveness of image rankings, have been proposed for image classification and retrieval. In this paper, after the two methods mentioned above are applied to reduce the high-dimensional image features respectively, we learn a graph-based similarity for the image classification problem. The proposed approach is compared with other approaches on an image database.
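The NMF-based reduction described above can be sketched with the standard multiplicative update rules; this is a generic illustration in numpy, not the paper's implementation, and the data here is synthetic:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Multiplicative-update NMF: X ~ W @ H with non-negative factors.
    Each row of W is a k-dimensional code for the corresponding sample."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# 30 hypothetical "images" with 40 non-negative features each
X = np.abs(np.random.default_rng(1).normal(size=(30, 40)))
W, H = nmf(X, k=5)  # W: low-dimensional image codes, H: basis matrix
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The rows of `W` would then feed the graph-based similarity step in place of the raw features.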

2013 ◽  
Vol 645 ◽  
pp. 192-195 ◽  
Author(s):  
Xiao Zhou Chen

Dimension reduction is an important issue in understanding microarray data. In this study, we propose an efficient approach for dimensionality reduction of microarray data. Our method applies manifold learning algorithms to the analysis of microarray data. The intra-/inter-category distances were used as criteria to quantitatively evaluate the effects of dimensionality reduction. Colon cancer and leukaemia gene expression datasets were selected for our investigation. When the neighborhood parameter was set effectively, the intrinsic dimension of every dataset was low. Therefore, manifold learning can be used to study microarray data in a low-dimensional projection space. Our results indicate that manifold learning performs better than linear methods in the analysis of microarray data, making it suitable for clinical diagnosis and other medical applications.
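The intra-/inter-category distance criterion mentioned above is straightforward to compute; the following sketch uses synthetic 2-D embeddings rather than the gene expression datasets:

```python
import numpy as np

def intra_inter(Z, labels):
    """Mean within-class vs. between-class pairwise Euclidean distances
    of an embedding Z; a good reduction makes inter much larger than intra."""
    labels = np.asarray(labels)
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(Z), dtype=bool)  # exclude self-distances
    return D[same & off_diag].mean(), D[~same].mean()

# Two well-separated synthetic "categories" in a 2-D projection space
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
y = [0] * 20 + [1] * 20
intra, inter = intra_inter(Z, y)
```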


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Ziqiang Wang ◽  
Xia Sun ◽  
Lijun Sun ◽  
Yuchun Huang

In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple-view data is to concatenate these feature vectors into a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views but also suffers from the “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP), for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary properties of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.


2020 ◽  
Vol 49 (3) ◽  
pp. 421-437
Author(s):  
Genggeng Liu ◽  
Lin Xie ◽  
Chi-Hua Chen

Dimensionality reduction plays an important role in data processing for machine learning and data mining, making the processing of high-dimensional data more efficient. Dimensionality reduction extracts a low-dimensional feature representation of high-dimensional data; an effective method not only retains most of the useful information in the original data but also removes useless noise. Dimensionality reduction methods can be applied to all types of data, especially image data. Although supervised learning methods have achieved good results in dimensionality reduction, their performance depends on the number of labeled training samples. With the growth of information on the Internet, labeling data requires more resources and becomes more difficult. Therefore, using unsupervised learning to learn data features has great research value. In this paper, an unsupervised multilayered variational auto-encoder model is studied on text data, so that the mapping from high-dimensional to low-dimensional features becomes efficient and the low-dimensional features retain as much of the main information as possible. Low-dimensional features obtained by different dimensionality reduction methods are compared with the results of the variational auto-encoder (VAE), and the method shows significant improvement over the comparison methods.
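Two ingredients make the VAE's latent codes usable for dimensionality reduction: the reparameterization trick and the KL regularizer toward a standard normal prior. A minimal numpy sketch of just those pieces, with a hypothetical linear encoder and random data in place of the paper's multilayered model:

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps, so the sampling step is differentiable
    with respect to the encoder outputs mu and logvar."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Per-sample KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 100))                # 4 high-dimensional samples
W_mu = rng.normal(scale=0.1, size=(100, 8))  # hypothetical linear "encoder"
W_logvar = rng.normal(scale=0.1, size=(100, 8))
mu, logvar = x @ W_mu, x @ W_logvar
z = reparameterize(mu, logvar, rng)          # 8-D latent codes
kl = kl_to_standard_normal(mu, logvar)       # regularization term of the ELBO
```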


Author(s):  
Akira Imakura ◽  
Momo Matsuda ◽  
Xiucai Ye ◽  
Tetsuya Sakurai

Dimensionality reduction methods that project high-dimensional data to a low-dimensional space by matrix trace optimization are widely used for clustering and classification. The matrix trace optimization problem leads to an eigenvalue problem for constructing a low-dimensional subspace that preserves certain properties of the original data. However, most existing methods use only a few eigenvectors to construct the low-dimensional space, which may lead to a loss of information useful for successful classification. Herein, to overcome this information loss, we propose a novel complex moment-based supervised eigenmap that includes multiple eigenvectors for dimensionality reduction. Furthermore, the proposed method provides a general formulation for matrix trace optimization methods to incorporate ridge regression, which models the linear dependency between covariate variables and univariate labels. To reduce the computational complexity, we also propose an efficient parallel implementation of the proposed method. Numerical experiments indicate that the proposed method is competitive with existing dimensionality reduction methods in recognition performance. Additionally, it exhibits high parallel efficiency.
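The link between trace optimization and an eigenvalue problem is standard: maximizing tr(VᵀAV) under VᵀV = I is solved by the top eigenvectors of the symmetric matrix A. A generic sketch (not the complex moment-based method itself):

```python
import numpy as np

def trace_opt_subspace(A, k):
    """argmax_V tr(V.T @ A @ V) s.t. V.T @ V = I for symmetric A:
    the eigenvectors of the k largest eigenvalues."""
    w, V = np.linalg.eigh(A)              # eigenvalues in ascending order
    return V[:, ::-1][:, :k], w[::-1][:k]

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 10))
A = M @ M.T                               # symmetric positive semidefinite
V, top_w = trace_opt_subspace(A, 3)
val = np.trace(V.T @ A @ V)               # sum of the 3 largest eigenvalues
```

Using more eigenvectors (larger k) retains more of the trace, which is the information-loss concern the abstract raises about methods that keep only a few.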


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Pei Heng Li ◽  
Taeho Lee ◽  
Hee Yong Youn

Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into a low-dimensional representation. Existing schemes usually preserve either only the global structure or only the local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. To effectively balance complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weight matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph that is more robust to noise. In addition, the projection matrix and the sparse similarity matrix can be learned simultaneously. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.
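The local-DR branch relies on an intrinsic graph over the samples. The paper builds this graph parameter-free via sparse representation; as a simpler stand-in, the common heat-kernel k-NN affinity (which, by contrast, does need parameters k and sigma) looks like this:

```python
import numpy as np

def knn_weight_matrix(X, k=5, sigma=1.0):
    """Heat-kernel k-NN affinity matrix: a conventional (parameterized)
    stand-in for the sparse-representation graph used by SR-LPP."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    W = np.zeros_like(D2)
    for i in range(len(X)):
        nbrs = np.argsort(D2[i])[1 : k + 1]              # skip the point itself
        W[i, nbrs] = np.exp(-D2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                            # symmetrize

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
W = knn_weight_matrix(X)
```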


Author(s):  
Xiaofeng Zhu ◽  
Cong Lei ◽  
Hao Yu ◽  
Yonggang Li ◽  
Jiangzhang Gan ◽  
...  

In this paper, we propose conducting Robust Graph Dimensionality Reduction (RGDR) by learning a transformation matrix that maps the original high-dimensional data into their low-dimensional intrinsic space without the influence of outliers. To do this, we propose simultaneously 1) adaptively learning three variables, i.e., a reverse graph embedding of the original data, a transformation matrix, and a graph matrix preserving the local similarity of the original data in their low-dimensional intrinsic space; and 2) employing robust estimators to prevent outliers from affecting the optimization of these three matrices. As a result, the original data are cleaned by two strategies, i.e., a prediction of the original data based on the three resulting variables and robust estimators, so that the transformation matrix can be learned from an accurately estimated intrinsic space with the help of the reverse graph embedding and the graph matrix. Moreover, we propose a new optimization algorithm for the resulting objective function and theoretically prove its convergence. Experimental results indicate that our proposed method outperforms all the comparison methods on different classification tasks.


Author(s):  
I. Sharif ◽  
S. Khare

With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing it. Conventional classification methods may not be useful without dimension reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields more separable classes and better or comparable classification accuracy. For the dimensionality reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
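Haar-based spectral reduction keeps only the approximation (pairwise-average) coefficients at each level, halving the number of channels per level. A minimal sketch on a hypothetical 256-channel pixel spectrum:

```python
import numpy as np

def haar_reduce(spectrum, levels=1):
    """Keep only Haar approximation coefficients (scaled pairwise averages),
    halving the spectral dimension at each level."""
    s = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    return s

band = np.linspace(0.0, 1.0, 256)          # hypothetical 256-channel spectrum
reduced = haar_reduce(band, levels=3)      # 256 -> 128 -> 64 -> 32 features
```

Because each level is an averaging filter, the overall shape of the spectral signature (and hence class separability) is largely preserved while the dimensionality drops eightfold here.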


Author(s):  
XIAN'EN QIU ◽  
ZHONG ZHAO ◽  
GUOCAN FENG ◽  
PATRICK S. P. WANG

Recently, many dimensionality reduction (DR) algorithms have been developed and successfully applied to feature extraction and representation in pattern classification. However, many applications need to re-project the features to the original space. Unfortunately, most DR algorithms cannot perform this reconstruction. Based on the manifold assumption, this paper proposes a General Manifold Reconstruction Framework (GMRF) to reconstruct the original data from low-dimensional DR results. Compared with existing reconstruction algorithms, the framework has two significant advantages. First, the proposed framework is independent of the DR algorithm; that is, no matter what DR algorithm is used, the framework can recover the structure of the original data from the DR results. Second, the framework is space saving: it does not need to store any training sample after training, and the storage GMRF needs for reconstruction is far less than that of the training samples. Experiments on different datasets demonstrate that the framework performs well in reconstruction.
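GMRF handles arbitrary DR algorithms; the linear special case below only illustrates the general idea that reconstruction can be done from the low-dimensional codes plus a small stored model (here a projection basis and a mean, no training samples):

```python
import numpy as np

# For a linear DR with orthonormal projection rows P, codes Z = Xc @ P.T
# can be mapped back by X_hat = Z @ P + mean. Only P and the mean are
# stored, echoing (in the linear case) the space-saving property above.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:5]                               # 5-D projection (here: a PCA basis)
Z = (X - mean) @ P.T                     # low-dimensional DR result
X_hat = Z @ P + mean                     # reconstruction from the codes alone
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```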


Recently, the demand for computer vision techniques has risen continuously because of developments in decision-making techniques pertaining to the health sector. Image processing is a subset of computer vision that uses algorithms to emulate vision and recognize objects. In this study, a novel convolutional neural network based on deep learning is configured to classify chest X-ray images into five major classes. It addresses the insufficiency of medical images for employing deep learning in image classification. A new augmentation technique, superimposing of images, helps to generate new samples from the available images using label-preserving transformations. Data augmentation can generate new sample data from the original data using various transformation strategies, and thus helps accumulate enough data to obtain better performance. The main objective of superimposing two images is to minimize redundancy and uncertainty in the output image. Therefore, superimposing is carried out with the original image and a set of variously augmented images to obtain better accuracy. The results of the different superimposing techniques are then compared and evaluated. It is concluded that the proposed techniques obtain better performance in medical image classification.
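One plausible reading of "superimposing" is a pixel-wise weighted blend of the original image with a label-preserving transform of itself; the sketch below assumes that interpretation, with a random array standing in for a chest X-ray:

```python
import numpy as np

def superimpose(img_a, img_b, alpha=0.5):
    """Pixel-wise weighted blend of two equally sized images; with a
    label-preserving img_b, the blend keeps the original class label."""
    return alpha * img_a + (1.0 - alpha) * img_b

rng = np.random.default_rng(0)
xray = rng.random((64, 64))              # stand-in for a chest X-ray image
flipped = np.fliplr(xray)                # a simple label-preserving transform
new_sample = superimpose(xray, flipped)  # one additional training sample
```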


2008 ◽  
Vol 65 (6) ◽  
pp. 1941-1954 ◽  
Author(s):  
Illia Horenko

Abstract A problem of simultaneous dimension reduction and identification of hidden attractive manifolds in multidimensional data with noise is considered. The problem is approached in two consecutive steps: (i) embedding the original data in a sufficiently high-dimensional extended space in the way proposed by Takens in his embedding theorem, followed by (ii) a minimization of the residual functional. The residual functional is constructed to measure the distance between the original data in extended space and their reconstruction based on a low-dimensional description. The reduced representation of the analyzed data results from projection onto a fixed number of unknown low-dimensional manifolds. Two specific forms of the residual functional are proposed, defining two different types of essential coordinates: (i) localized essential orthogonal functions (EOFs) and (ii) localized functions called principal original components (POCs). The application of the framework is exemplified both on a Lorenz attractor model with measurement noise and on historical air temperature data. It is demonstrated how the new method can be used for the elimination of noise and identification of the seasonal low-frequency components in meteorological data. An application of the proposed POCs in the context of low-dimensional predictive model construction is presented.
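Step (i), the Takens delay embedding, maps a scalar time series into delay vectors in an extended space. A minimal sketch on a noisy sine wave (illustrative values for the embedding dimension and delay):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: each row is (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau})."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0.0, 20.0 * np.pi, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)  # noisy signal
E = delay_embed(x, dim=3, tau=25)   # extended-space trajectory for step (i)
```

Step (ii) would then fit low-dimensional manifolds to the rows of `E` by minimizing the residual functional.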

