Optimized Mahalanobis–Taguchi System for High-Dimensional Small Sample Data Classification

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Xinping Xiao ◽  
Dian Fu ◽  
Yu Shi ◽  
Jianghui Wen

The Mahalanobis–Taguchi system (MTS) is a multivariate diagnosis and prediction technique that is widely used on large-sample or unbalanced data but rarely applied to high-dimensional small sample data. In this paper, the MTS is optimized for the classification of high-dimensional small sample data from two aspects: the instability of the covariance matrix inverse and the instability of feature selection. First, based on regularization and smoothing techniques, a modified Mahalanobis metric is proposed for computing the Mahalanobis distance, with the aim of reducing the influence of inverse-matrix instability under small-sample conditions. Second, the minimum redundancy–maximum relevance (mRMR) algorithm is introduced into the MTS to address the instability of feature selection. Combining the mRMR algorithm with the signal-to-noise ratio (SNR), a two-stage feature selection method is proposed: the mRMR algorithm first removes noisy and redundant variables; an orthogonal table and the SNR then screen for the combination of variables that contributes most to classification. The feasibility and simplicity of the optimized MTS are demonstrated on five datasets from the UCI repository. The Mahalanobis distance based on regularization and smoothing techniques (RS-MD) is more robust than the traditional Mahalanobis distance, and the two-stage method improves the effectiveness of feature selection for MTS. Finally, the optimized MTS is applied to email classification on the Spambase dataset, where it outperforms the classical MTS and three other machine learning algorithms.
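The abstract does not give the exact RS-MD construction, but the idea of stabilizing the covariance inverse under small-sample conditions can be sketched with a common shrinkage regularizer; the `lam` weight and the identity shrinkage target here are illustrative assumptions, not the authors' formula:

```python
import numpy as np

def regularized_mahalanobis(x, X_ref, lam=0.1):
    """Mahalanobis distance of x from the reference group X_ref, with
    the covariance shrunk toward the identity so that the inverse stays
    stable when there are fewer samples than dimensions."""
    mu = X_ref.mean(axis=0)
    cov = np.cov(X_ref, rowvar=False)
    cov_reg = (1 - lam) * cov + lam * np.eye(cov.shape[0])  # ridge-style shrinkage
    diff = x - mu
    return float(diff @ np.linalg.solve(cov_reg, diff))

# 3 samples in 5 dimensions: the raw covariance matrix is singular,
# but the regularized metric remains well defined.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
d = regularized_mahalanobis(X[0], X, lam=0.2)
```

Any `lam` in (0, 1] makes the shrunk matrix positive definite, so the distance is finite and nonnegative even when the plain sample covariance cannot be inverted.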

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Chengyuan Huang

With the rapid development of artificial intelligence in recent years, research on image processing, text mining, and genome informatics has deepened, and the mining of large-scale databases has received increasing attention. The objects of data mining have become more complex and their dimensionality ever higher. When the number of samples available for analysis is small compared with the very high data dimensionality, the result is high-dimensional small sample data, which exposes the mining process to a severe curse of dimensionality. Feature selection can effectively eliminate redundant and noisy features in such data, avoiding the curse of dimensionality and improving the practical efficiency of mining algorithms. However, existing feature selection methods emphasize the classification or clustering performance of the selected features while ignoring their stability, which leads to unstable selection results from which it is difficult to recover genuine, interpretable features. Building on traditional feature selection, this paper proposes an ensemble method, the Random Bits Forest Recursive Clustering Eliminate (RBF-RCE) feature selection method, which trains multiple sets of base classifiers in parallel and screens out the best feature classification results, improving both the classification performance and the stability of traditional feature selection. The paper then analyzes the causes of feature selection instability and introduces a stability measure, the Intersection Measurement (IM), to evaluate whether the selection process is stable. The effectiveness of the proposed method is verified by experiments on several groups of high-dimensional small sample datasets.
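The abstract does not spell out the Intersection Measurement formula; one plausible reading, sketched below, scores stability as the average pairwise overlap of the feature subsets selected on perturbed copies of the data. The Jaccard-style ratio is an assumption for illustration, not necessarily the paper's exact definition:

```python
from itertools import combinations

def intersection_measure(subsets):
    """Average pairwise overlap (|A & B| / |A | B|) of the feature
    subsets chosen across repeated selection runs; 1.0 means the same
    features were selected every time."""
    scores = [len(a & b) / len(a | b) for a, b in combinations(subsets, 2)]
    return sum(scores) / len(scores)

# Three selection runs: one exact repeat and one near miss.
runs = [{1, 2, 3}, {1, 2, 4}, {1, 2, 3}]
im = intersection_measure(runs)  # (0.5 + 1.0 + 0.5) / 3 = 2/3
```

A score near 1 indicates a stable selector; a score near 0 means the chosen features change almost completely between runs.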


2019 ◽  
Vol 9 (7) ◽  
pp. 1516-1523 ◽  
Author(s):  
Jinke Wang ◽  
Congcong Zhao ◽  
Changfa Shi ◽  
Shinichi Tamura ◽  
Noriyuki Tomiyama

Author(s):  
Nor Idayu Mahat ◽  
Maz Jamilah Masnan ◽  
Ali Yeon Md Shakaff ◽  
Ammar Zakaria ◽  
Muhd Khairulzaman Abdul Kadir

This chapter overviews the issue of multicollinearity in electronic nose (e-nose) classification and investigates some analytical solutions to the problem. The multicollinearity effect may prevent classification analysis from producing good parameter estimates during construction of the classification rule. The common approach to dealing with multicollinearity is feature extraction. However, extracting raw features by a variance-based criterion may not serve the ultimate goal of classification accuracy. Alternatively, feature selection is advisable, as it keeps only valuable features. Two distance-based criteria for determining the right features for classification, Wilks' Lambda and the bounded Mahalanobis distance, are applied. Classification with features determined by the bounded Mahalanobis distance performs statistically better than with Wilks' Lambda. The chapter suggests that e-nose classification with feature selection is a good way to limit the cost of experiments while maintaining good classification performance.

