ADAPTIVE KERNEL DISCRIMINANT ANALYSIS AND ITS APPLICATIONS ON PATTERN RECOGNITION

2006 ◽  
Vol 03 (04) ◽  
pp. 329-337
Author(s):  
JUN-BAO LI ◽  
JENG-SHYANG PAN

In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. In this paper, we present an extension of the kernel Fisher discriminant (KFD, also known as kernel discriminant analysis, KDA) method based on a data-dependent kernel, called adaptive kernel discriminant analysis (AKDA), for feature extraction and pattern classification. AKDA is more adaptive to the input data than KDA because the projection from the input space to the feature space is optimized with the data-dependent kernel, which enhances the performance of KDA. Experimental results on the ORL, Yale, and MNIST databases show that the proposed AKDA gives higher performance than KDA.
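
A minimal sketch of the idea, not the authors' implementation: a two-class kernel Fisher discriminant computed on a data-dependent (conformally scaled) kernel. The RBF base kernel, the choice of expansion vectors, the fixed combination coefficients alpha, and the regularization value are illustrative assumptions; in AKDA these coefficients would be optimized rather than fixed.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian RBF base kernel k0(x, y) = exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def data_dependent_kernel(X, expansion_vectors, alpha, gamma=0.5):
    # Conformal scaling k(x, y) = q(x) q(y) k0(x, y), with
    # q(x) = 1 + sum_i alpha_i * k1(x, x_i) over the chosen expansion vectors.
    q = 1.0 + rbf_kernel(X, expansion_vectors, gamma) @ alpha
    return np.outer(q, q) * rbf_kernel(X, X, gamma)

def kfd_direction(K, y, reg=1e-3):
    # Two-class kernel Fisher discriminant: maximize (a^T M a) / (a^T N a).
    n = len(y)
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]
    M = np.outer(m[0] - m[1], m[0] - m[1])
    N = sum(K[:, y == c] @ (np.eye((y == c).sum()) - 1.0 / (y == c).sum())
            @ K[:, y == c].T for c in (0, 1)) + reg * np.eye(n)
    vals, vecs = np.linalg.eig(np.linalg.solve(N, M))
    return vecs[:, np.argmax(vals.real)].real

# Toy usage with synthetic two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(2, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
alpha = np.full(5, 0.1)                       # fixed here; AKDA would optimize these
K = data_dependent_kernel(X, X[:5], alpha)
projection = K @ kfd_direction(K, y)          # 1-D discriminant projections
```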

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 114
Author(s):  
Tiziano Zarra ◽  
Mark Gino K. Galang ◽  
Florencio C. Ballesteros ◽  
Vincenzo Belgiorno ◽  
Vincenzo Naddeo

Instrumental odour monitoring systems (IOMS) are intelligent electronic sensing tools whose primary application is the generation of odour metrics that are indicators of odour as perceived by human observers. The quality of the odour sensor signal, the mathematical treatment of the acquired data, and the validation of the correlation of the odour metric are key topics to control in order to ensure a robust and reliable measurement. The research presents and discusses the use of different pattern recognition and feature extraction techniques in the elaboration and effectiveness of the odour classification monitoring model (OCMM). The effects of the rise, intermediate, and peak periods of the original response curve, combined with Linear Discriminant Analysis (LDA) and Artificial Neural Networks (ANN) as pattern recognition algorithms, were investigated. Laboratory analyses were performed with real odour samples collected in a complex industrial plant, using an advanced smart IOMS. The results demonstrate the influence of the choice of method on the quality of the OCMM produced. The peak period in combination with the ANN gave the best classification rates. The paper provides information to develop a solution to optimize the performance of IOMS.
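
As a rough illustration of the peak-period/ANN combination reported as best, the sketch below extracts one peak feature per sensor from synthetic response curves and trains a small multilayer perceptron. The array shapes, sensor count, window definition, and network size are assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def peak_features(responses):
    # responses: (n_samples, n_sensors, n_timesteps) baseline-corrected curves.
    # The peak-period feature used here is simply each sensor's maximum response.
    return responses.max(axis=2)

rng = np.random.default_rng(1)
curves = rng.random((120, 8, 200))            # synthetic response curves
labels = rng.integers(0, 3, 120)              # 3 hypothetical odour classes

X = peak_features(curves)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("classification rate:", clf.score(X_te, y_te))
```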


2003 ◽  
Vol 15 (3) ◽  
pp. 278-285
Author(s):  
Daigo Misaki ◽  
Shigeru Aomura ◽  
Noriyuki Aoyama

We discuss effective pattern recognition for contour images by hierarchical feature extraction. When pattern recognition is performed on an unrestricted set of objects, it is effective to view an object globally first and then examine it in detail: general features are used for rough classification and local features for more detailed classification. Dynamic programming (D-P) matching against a typical contour image of each class, which contains selected points called "landmarks", is applied to perform the rough classification. The features between these landmarks are then analyzed and used as input to neural networks for more detailed classification. To verify the proposed method, we apply it to an illustrated reference book of insects in which much of the information is organized hierarchically. By introducing landmarks, a neural network can be used effectively for pattern recognition of contour images.
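
A compact sketch of the rough-classification stage, assuming a simple one-dimensional feature sequence per contour as a stand-in for the landmark-based description: dynamic-programming (D-P) matching of a query against per-class templates. The templates and features here are purely illustrative.

```python
import numpy as np

def dp_match(a, b):
    # Classic dynamic-programming alignment cost between two 1-D feature sequences.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rough_classify(query, templates):
    # templates: {class_name: landmark feature sequence of the class's typical contour}
    return min(templates, key=lambda c: dp_match(query, templates[c]))

templates = {"class_A": np.sin(np.linspace(0, 2 * np.pi, 30)),
             "class_B": np.cos(np.linspace(0, 2 * np.pi, 30))}
query = np.sin(np.linspace(0, 2 * np.pi, 25)) + 0.05
print(rough_classify(query, templates))       # expected: "class_A"
```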


Author(s):  
David Zhang ◽  
Fengxi Song ◽  
Yong Xu ◽  
Zhizhen Liang

This chapter is a brief introduction to the biometric discriminant analysis technologies covered in Section I of the book. Section 2.1 describes two kinds of linear discriminant analysis (LDA) approaches: classification-oriented LDA and feature-extraction-oriented LDA. Section 2.2 discusses LDA for solving small sample size (SSS) pattern recognition problems. Section 2.3 outlines the organization of Section I.
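
As a minimal illustration of the two LDA roles mentioned above (not taken from the chapter), the same scikit-learn model can be used in a classification-oriented way via score/predict and in a feature-extraction-oriented way via transform, which yields at most C-1 discriminant features for C classes.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)

print(lda.score(X, y))         # classification-oriented use
print(lda.transform(X).shape)  # feature-extraction-oriented use: (150, 2)
```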


2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
Zhangjing Yang ◽  
Chuancai Liu ◽  
Pu Huang ◽  
Jianjun Qian

In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to the particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and the membership degrees are then incorporated into the definitions of the between-class scatter and the within-class scatter. Feature extraction is performed by maximizing the ratio of the between-class scatter to the within-class scatter. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
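
A sketch under stated assumptions: FKNN membership degrees computed with Keller's rule and then used to weight the between-class and within-class scatters, with the projection obtained from the resulting generalized eigenproblem. The exact scatter definitions used in MPDA may differ from this simplified version.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def fknn_membership(X, y, k=5):
    # Keller's fuzzy k-NN rule: membership to the labeled class gets a 0.51 floor,
    # the remainder is shared according to the neighbours' class counts.
    n, C = len(y), int(y.max()) + 1
    idx = np.argsort(cdist(X, X), axis=1)[:, 1:k + 1]     # k nearest neighbours
    U = np.zeros((n, C))
    for i in range(n):
        counts = np.bincount(y[idx[i]], minlength=C) / k
        U[i] = 0.49 * counts
        U[i, y[i]] += 0.51
    return U

def mpda_like_projection(X, y, k=5, n_components=2):
    U = fknn_membership(X, y, k)
    mean = X.mean(axis=0)
    centers = np.array([X[y == c].mean(axis=0) for c in range(int(y.max()) + 1)])
    # Membership-weighted scatters (simplified; not necessarily the MPDA definitions).
    Sb = sum(U[y == c, c].sum() * np.outer(centers[c] - mean, centers[c] - mean)
             for c in range(len(centers)))
    Sw = sum(U[i, y[i]] * np.outer(X[i] - centers[y[i]], X[i] - centers[y[i]])
             for i in range(len(y)))
    # Maximize the between/within scatter ratio via the generalized eigenproblem.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(X.shape[1]))
    return vecs[:, ::-1][:, :n_components]

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 1.0, (30, 10)) for c in range(3)])
y = np.repeat(np.arange(3), 30)
W = mpda_like_projection(X, y)
features = X @ W                              # extracted discriminant features
```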


2014 ◽  
Vol 937 ◽  
pp. 351-356 ◽  
Author(s):  
Shi Yin Qiu ◽  
Rui Bo Yuan

Wavelet packet decomposition can be used to extract the frequency band containing the bearing fault feature, because the fault signal can be decomposed into different frequency bands; the task is therefore to find the optimal wavelet packet decomposition node. A method that applies the average Euclidean distance to find the optimal wavelet packet decomposition node is presented. First, the bearing fault signals were decomposed into three layers of wavelet packet coefficients, from which the signals were reconstructed. The peak values extracted from the spectra of the reconstructed signals formed a feature space. The minimum average Euclidean distance calculated over this feature space then indicated the optimal wavelet packet node. The optimal feature space was constructed from the feature points extracted from the signals reconstructed at the optimal wavelet packet nodes and was finally used for K-means clustering. Feature extraction and pattern recognition tests on four bearing conditions under four rotation speeds are detailed. The test results show that this method extracts the bearing fault feature efficiently, gives the fault feature space the lowest within-class scatter, and achieves high pattern recognition accuracy.
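
A simplified sketch of the node-selection and clustering steps, not the authors' exact procedure: for each level-3 wavelet packet node, the band-limited signal is reconstructed, a two-value spectral-peak feature is extracted, the node with the smallest average intra-condition Euclidean distance is selected, and the corresponding features are clustered with K-means. The placeholder signals, the 'db4' wavelet, and the feature definition are assumptions.

```python
import numpy as np
import pywt
from scipy.spatial.distance import pdist
from sklearn.cluster import KMeans

def node_feature(signal, path, wavelet="db4", level=3):
    # Reconstruct the band associated with one wavelet packet node and take the
    # spectral peak (amplitude and frequency bin) of the reconstructed signal.
    wp = pywt.WaveletPacket(signal, wavelet, mode="symmetric", maxlevel=level)
    rec = pywt.WaveletPacket(None, wavelet, mode="symmetric", maxlevel=level)
    rec[path] = wp[path].data
    band = rec.reconstruct(update=False)
    spectrum = np.abs(np.fft.rfft(band))
    return np.array([spectrum.max(), float(spectrum.argmax())])

rng = np.random.default_rng(3)
signals = rng.normal(size=(16, 1024))          # placeholder vibration signals
groups = np.repeat(np.arange(4), 4)            # 4 hypothetical bearing conditions

paths = [node.path for node in
         pywt.WaveletPacket(signals[0], "db4", mode="symmetric", maxlevel=3).get_level(3)]

def avg_intra_condition_distance(path):
    # Average pairwise Euclidean distance of the node's features within each condition.
    per_group = []
    for g in range(4):
        feats = np.array([node_feature(s, path) for s in signals[groups == g]])
        per_group.append(pdist(feats).mean())
    return float(np.mean(per_group))

best_path = min(paths, key=avg_intra_condition_distance)   # selected "optimal" node
features = np.array([node_feature(s, best_path) for s in signals])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
```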


2011 ◽  
Vol 179-180 ◽  
pp. 1254-1259
Author(s):  
Yu Rong Lin ◽  
Qiang Wang

Several orthogonal feature extraction algorithms based on locality preserving projections have recently been proposed. However, these methods do not address the singularity problem in the high-dimensional feature space, which means that the eigen-equation of the orthogonal feature extraction algorithms cannot be solved directly. In this paper, we present a new method called Direct Orthogonal Neighborhood Preserving Discriminant Analysis (DONPDA), which extracts all the orthogonal discriminant vectors simultaneously in the high-dimensional feature space and does not suffer from the singularity problem. Experimental results on the ORL database indicate that the proposed DONPDA method achieves a higher recognition rate than ONPDA and some other existing orthogonal feature extraction algorithms.
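
A tiny numeric illustration (with random placeholder data) of the singularity problem referred to above: when there are far fewer samples than feature dimensions, the within-class scatter matrix is rank-deficient, so eigen-equations that require its inverse cannot be solved directly.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 100))                 # 20 samples, 100-dimensional features
y = np.repeat(np.arange(4), 5)

centers = np.array([X[y == c].mean(axis=0) for c in range(4)])
Sw = sum(np.outer(x - centers[c], x - centers[c]) for x, c in zip(X, y))

# Rank is at most n_samples - n_classes = 16, far below 100, so Sw is singular.
print(Sw.shape, "rank:", np.linalg.matrix_rank(Sw))
```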


Computation ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. 39 ◽  
Author(s):  
Laura Sani ◽  
Riccardo Pecori ◽  
Monica Mordonini ◽  
Stefano Cagnoni

The so-called Relevance Index (RI) metrics are a set of recently introduced indicators, based on information theory principles, that can be used to analyze complex systems by detecting the main interacting structures within them. Such structures can be described as subsets of the variables describing the system status that are strongly statistically correlated with one another and mostly independent of the rest of the system. The goal of the work described in this paper is to apply the same principles to pattern recognition and to check whether the RI metrics can also identify, in a high-dimensional feature space, attribute subsets from which it is possible to build new features that can be used effectively for classification. Preliminary results indicating that this is possible have been obtained by using the RI metrics in a supervised way, i.e., by applying such metrics separately to homogeneous datasets whose data instances all belong to the same class and iterating the procedure over all classes taken into consideration. In this work, we checked whether this is also possible in a totally unsupervised way, i.e., by considering all available data at the same time, independently of the class to which they belong, under the hypothesis that the peculiarities of the variable sets identified by the RI metrics correspond to the peculiarities by which data belonging to a certain class are distinguishable from data belonging to different classes. The results obtained in experiments with some publicly available real-world datasets show that, especially when coupled with tree-based classifiers, an RI metrics-based unsupervised feature extraction method can perform comparably to or better than other classical supervised or unsupervised feature selection or extraction methods.
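
A hedged sketch of a relevance-style index for a candidate variable subset, computed from discrete data as the subset's integration divided by its mutual information with the rest of the system. This follows the general idea described above; the exact RI formulation and statistical normalization used by the authors (e.g., against a homogeneous reference system) are not reproduced here.

```python
import itertools
import numpy as np

def entropy(columns):
    # Joint Shannon entropy (bits) of the given discrete columns.
    _, counts = np.unique(columns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def relevance_style_index(data, subset):
    # integration(S) / mutual_information(S; rest); a simplified, unnormalized index.
    rest = [j for j in range(data.shape[1]) if j not in subset]
    integration = sum(entropy(data[:, [j]]) for j in subset) - entropy(data[:, subset])
    mutual = entropy(data[:, subset]) + entropy(data[:, rest]) - entropy(data)
    return integration / mutual if mutual > 0 else 0.0

rng = np.random.default_rng(5)
data = rng.integers(0, 2, size=(500, 6))
data[:, 1] = data[:, 0]                        # make variables 0 and 1 strongly coupled

subsets = [list(s) for s in itertools.combinations(range(6), 2)]
best = max(subsets, key=lambda s: relevance_style_index(data, s))
print(best)                                    # expected: [0, 1]
```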

