Feature selection of the armature winding broken coils in synchronous motor using genetic algorithm and Mahalanobis distance

2012, Vol. 57 (3), pp. 829-835
Author(s): Z. Głowacz, J. Kozik

The paper describes a procedure for the automatic selection of symptoms accompanying broken coils in the armature winding of a synchronous motor. This procedure, called feature selection, chooses from the full set of features describing the problem the subset that best distinguishes the healthy state from the damaged one. The amplitudes of the spectral components of the motor current signals are used as features. The full current spectra are treated as multidimensional feature spaces whose subspaces are examined. Candidate subspaces are generated with the aid of a genetic algorithm and their quality is evaluated with the Mahalanobis distance measure; the algorithm searches for the subspace in which this distance is the greatest. The algorithm is efficient and, as confirmed by the experiments, yields good results. The same technique has been applied successfully in many other fields of science and technology, including medical diagnostics.
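The abstract does not include an implementation, so the following is only a minimal Python sketch of the described idea: a genetic algorithm searches over binary masks of the current-spectrum amplitudes, and each candidate subset is scored by the Mahalanobis distance between the means of the healthy and faulty recordings. The function names, GA parameters, and the pooled-covariance estimate are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mahalanobis_fitness(X_healthy, X_faulty, mask):
    """Mahalanobis distance between the class means in the selected feature subspace."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    a, b = X_healthy[:, idx], X_faulty[:, idx]          # assumes several recordings per class
    diff = a.mean(axis=0) - b.mean(axis=0)
    # pooled within-class covariance, slightly regularised so it stays invertible
    cov = (np.atleast_2d(np.cov(a.T)) + np.atleast_2d(np.cov(b.T))) / 2.0
    cov += 1e-6 * np.eye(idx.size)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def ga_feature_select(X_healthy, X_faulty, pop=40, gens=100, p_mut=0.02, seed=0):
    """Tiny genetic algorithm over binary feature masks (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = X_healthy.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([mahalanobis_fitness(X_healthy, X_faulty, m) for m in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]       # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = elite[rng.integers(0, len(elite), 2)]
            cut = rng.integers(1, n)                                   # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child = child ^ (rng.random(n) < p_mut)                    # bit-flip mutation
            children.append(child)
        population = np.vstack([elite, np.array(children)])
    scores = np.array([mahalanobis_fitness(X_healthy, X_faulty, m) for m in population])
    return np.flatnonzero(population[np.argmax(scores)])               # indices of the chosen spectral lines
```

Here `X_healthy` and `X_faulty` would hold the spectrum amplitudes of current signals recorded on healthy and damaged machines, one recording per row.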

2011
Author(s): Ahmed Kharrat, Nacéra Benamrane, Mohamed B. Messaoud, Mohamed Abid

2019, Vol. 104 (1-4), pp. 1051-1063
Author(s): Xiaoping Liao, Gang Zhou, Zhenkun Zhang, Juan Lu, Junyan Ma

DYNA, 2015, Vol. 82 (191), pp. 11-19
Author(s): Juan Aguarón-Joven, María Teresa Escobar-Urmeneta, Jorge Luis García-Alcaraz, José María Moreno-Jiménez, Alberto Vega-Bonilla

Vega et al. [1] analyzed the influence of attribute dependence when ranking a set of alternatives in a multicriteria decision-making problem with TOPSIS. They also proposed using the Mahalanobis distance to incorporate the correlations among the attributes into TOPSIS. Even in situations where the dependence among attributes is very slight, the results obtained with the Mahalanobis distance differ significantly from those obtained with the Euclidean distance traditionally used in TOPSIS, and from the results obtained with any other distance of the Minkowski family. This raises serious doubts about which distance should be employed in each case. To deal with the problem of attribute dependence and the question of selecting the most appropriate distance measure, this paper proposes a new method for synthesizing the distances to the ideal and the anti-ideal in TOPSIS. The new procedure is based on the Analytic Hierarchy Process and is able to capture the relative importance of both distances in the context given by the measure under consideration; it also yields rankings that are more consistent across the distances employed in TOPSIS, regardless of the dependence among the attributes. The new technique has been applied to the illustrative example employed in Vega et al. [1].
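To make the distance-dependence issue concrete, here is a minimal TOPSIS sketch in Python in which the distance to the ideal and anti-ideal solutions is pluggable (classical Euclidean versus Mahalanobis). It is not the AHP-based synthesis proposed in the paper; the all-benefit-criteria simplification, the covariance estimate, and all names are assumptions for illustration.

```python
import numpy as np

def topsis_ranking(decision_matrix, weights, distance="euclidean"):
    """Rank alternatives with TOPSIS; the distance to the (anti-)ideal is pluggable."""
    X = np.asarray(decision_matrix, dtype=float)
    V = weights * X / np.linalg.norm(X, axis=0)        # vector normalisation and weighting
    ideal, anti_ideal = V.max(axis=0), V.min(axis=0)   # all criteria treated as benefits

    if distance == "mahalanobis":
        # covariance estimated from the weighted normalised matrix (one possible choice)
        VI = np.linalg.pinv(np.cov(V.T))
        dist = lambda a, b: float(np.sqrt(max((a - b) @ VI @ (a - b), 0.0)))
    else:
        dist = lambda a, b: float(np.linalg.norm(a - b))

    d_plus = np.array([dist(v, ideal) for v in V])
    d_minus = np.array([dist(v, anti_ideal) for v in V])
    closeness = d_minus / (d_plus + d_minus)           # relative closeness to the ideal
    return np.argsort(closeness)[::-1]                 # indices of alternatives, best first

# Compare the orderings produced by the two distances on a toy decision matrix.
X = [[7.0, 9.0, 9.0], [8.0, 7.0, 8.0], [9.0, 6.0, 7.0]]
w = np.array([0.5, 0.3, 0.2])
print(topsis_ranking(X, w), topsis_ranking(X, w, distance="mahalanobis"))
```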


Author(s): Pooja Rani, Rajneesh Kumar, Anurag Jain, Sunil Kumar Chawla

Machine learning has become an integral part of everyday life. When applied to real-world problems, however, it suffers from high-dimensional data: data can contain unnecessary and redundant features, and these features degrade the performance of the classification systems used for prediction. Selecting the important features is therefore the first step in developing any decision support system. In this paper, the authors propose a hybrid feature selection method, GARFE, that integrates the GA (genetic algorithm) and RFE (recursive feature elimination) algorithms. The efficiency of the proposed method is analyzed with a support vector machine classifier in terms of accuracy, sensitivity, specificity, precision, F-measure, and execution time. The proposed GARFE method is also compared with eight other feature selection methods. Results demonstrate that GARFE improves the performance of classification systems by removing irrelevant and redundant features.
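The abstract does not spell out how GA and RFE are combined in GARFE, so the sketch below is only one plausible reading, not the authors' implementation: scikit-learn's RFE with a linear SVM first trims the feature set, and a small genetic algorithm then refines the remaining features, scoring each chromosome by cross-validated SVM accuracy. The breast-cancer data set, population size, and all other parameters are stand-in assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Stage 1: recursive feature elimination with a linear SVM keeps 15 candidate features.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=15).fit(X, y)
kept = np.flatnonzero(rfe.support_)

# Stage 2: a small genetic algorithm refines the RFE subset,
# scoring each chromosome by cross-validated SVM accuracy.
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, kept.size))

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    cols = kept[mask.astype(bool)]
    return cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=3).mean()

for _ in range(10):                                     # a handful of generations
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]          # keep the better half
    children = []
    while len(children) < 10:
        p1, p2 = elite[rng.integers(0, 10, 2)]
        cut = rng.integers(1, kept.size)
        child = np.concatenate([p1[:cut], p2[cut:]])    # one-point crossover
        child = child ^ (rng.random(kept.size) < 0.05)  # bit-flip mutation
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", kept[best.astype(bool)])
```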


2012, Vol. 3 (3), pp. 359-364
Author(s): Manish Rai, Rekha Pandit, Vineet Richhariya

The multi-class miner addresses the problems of feature evaluation, data drift, and concept evolution in stream data classification. Stream data classification in the multi-class miner is based on an ensemble of clustering and classification built on a feature evaluation technique. This feature evaluation step faces the problem of selecting the correct cluster-centre points for grouping the data. For the proper selection of feature points, we use an optimization technique in the feature selection process based on an advanced genetic algorithm (AGA). The advanced genetic algorithm uses the feature points for neighbour-class detection in order to find the correct points for classification. The proposed algorithm was tested on well-known data sets from the UCI Machine Learning Repository. The empirical evaluation shows better results than the multi-class miner for stream data classification.
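The abstract is terse about how the advanced genetic algorithm picks the cluster-centre points, so the sketch below encodes only one possible reading: a generic GA over binary feature masks whose fitness is the purity of k-means clusters, i.e. how well the cluster centres in the selected subspace recover the class labels. The digits data set stands in for a UCI stream snapshot; all names and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

def cluster_purity(X, y, mask, k):
    """How well k-means centres, fitted on the selected features, recover the class labels."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    labels = KMeans(n_clusters=k, n_init=2, random_state=0).fit_predict(X[:, idx])
    pred = np.empty_like(y)
    for c in range(k):
        members = labels == c
        if members.any():
            pred[members] = np.bincount(y[members]).argmax()   # majority class of the cluster
    return float((pred == y).mean())

def tiny_ga(fitness, n_features, pop=16, gens=10, p_mut=0.05, seed=0):
    """Generic genetic algorithm over binary masks; a stand-in for the paper's advanced GA."""
    rng = np.random.default_rng(seed)
    population = rng.integers(0, 2, size=(pop, n_features))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        elite = population[np.argsort(scores)[::-1][: pop // 2]]
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = elite[rng.integers(0, len(elite), 2)]
            cut = rng.integers(1, n_features)
            children.append(np.concatenate([p1[:cut], p2[cut:]]) ^ (rng.random(n_features) < p_mut))
        population = np.vstack([elite, np.array(children)])
    return population[np.argmax([fitness(m) for m in population])]

X, y = load_digits(return_X_y=True)          # stand-in for a UCI stream snapshot
best = tiny_ga(lambda m: cluster_purity(X, y, m, k=10), X.shape[1])
print("selected features:", np.flatnonzero(best))
```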


2019, Vol. 24 (2), pp. 119-127
Author(s): Tariq Ali, Asif Nawaz, Hafiza Ayesha Sadia

High dimensionality is a well-known problem in which the data contain a huge number of features, yet not all of them are helpful for a particular data mining task such as classification or clustering. Feature selection is therefore frequently used to reduce the dimensionality of the data set. Feature selection is a multi-objective task: it reduces dataset dimensionality, decreases running time, and also improves the expected accuracy. In this study, our goal is to reduce the number of features of electroencephalography (EEG) data for eye-state classification while achieving the same or even better classification accuracy with the smallest number of features. We propose a genetic algorithm-based feature selection technique with a KNN classifier. With the feature subset selected by the proposed technique, accuracy is improved compared to the full feature set. Results show that the classification accuracy of the proposed strategy is improved by 3% on average compared with the accuracy obtained without feature selection.
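A minimal version of the described pipeline (GA-based channel selection scored by cross-validated KNN accuracy) can be sketched as follows. The synthetic data set merely stands in for the 14-channel EEG eye-state data, and the GA parameters are assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the EEG eye-state data (14 electrode channels, binary eye state).
X, y = make_classification(n_samples=600, n_features=14, n_informative=6, random_state=0)

def knn_fitness(mask):
    """Cross-validated KNN accuracy on the selected channels."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, idx], y, cv=5).mean()

rng = np.random.default_rng(1)
pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(20):                                   # generations
    scores = np.array([knn_fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]        # survivors
    kids = []
    while len(kids) < 10:
        p1, p2 = elite[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])             # one-point crossover
        kids.append(np.concatenate([p1[:cut], p2[cut:]]) ^ (rng.random(X.shape[1]) < 0.05))
    pop = np.vstack([elite, np.array(kids)])

best = pop[np.argmax([knn_fitness(m) for m in pop])]
print("channels kept:", np.flatnonzero(best), "accuracy:", knn_fitness(best))
```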

