Electrocardiogram Beat Classification Using BAT-Optimized Fuzzy KNN Classifier

Author(s):  
Atul Kumar Verma ◽  
Indu Saini ◽  
Barjinder Singh Saini

In this chapter, the BAT-optimized fuzzy k-nearest neighbor (FKNN-BAT) algorithm is proposed for discrimination of electrocardiogram (ECG) beats. Five types of beats (i.e., normal [N], right bundle branch block [RBBB], left bundle branch block [LBBB], atrial premature contraction [APC], and premature ventricular contraction [PVC]) are taken from the MIT-BIH arrhythmia database for the experimentation. Features are then extracted from the five types of beats and fed to the proposed BAT-tuned fuzzy KNN classifier. The proposed classifier achieves an overall accuracy of 99.88%.
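A minimal sketch of the fuzzy-KNN decision rule used by such a classifier is given below, assuming beat features are already extracted; the neighborhood size k and fuzzifier m stand in for the parameters a BAT-style optimizer would tune.

```python
# Minimal fuzzy-KNN decision rule; k and m are the parameters a BAT-style
# optimizer would tune. Feature extraction from ECG beats is not shown.
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=5, m=2.0):
    """Predict the class of one feature vector x (numpy arrays assumed)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]                     # k nearest training beats
    classes = np.unique(y_train)
    # Closer neighbors receive larger fuzzy weights (exponent 2/(m-1)).
    w = 1.0 / np.maximum(dists[nn], 1e-12) ** (2.0 / (m - 1.0))
    memberships = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(memberships / memberships.sum())]
```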

Author(s):  
Amal A. Moustafa ◽  
Ahmed Elnakib ◽  
Nihal F. F. Areed

This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features via transfer learning from the unprocessed face images. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from his/her facial images over different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated, i.e., Correlation, Euclidean, Cosine, and Manhattan distance metrics. Experimental results using a Manhattan-distance KNN classifier achieve the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared to state-of-the-art methods, our proposed method needs no preprocessing stages. In addition, the experiments show its advantage over other related methods.
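The metric comparison can be sketched as follows with scikit-learn; the deep features are assumed to be pre-extracted, and the GA selection step is represented only by a boolean mask.

```python
# Compare KNN distance metrics on GA-selected deep features.
# `features`, `labels`, and the boolean mask `selected` are assumed inputs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def evaluate_metrics(features, labels, selected, k=1):
    X = features[:, selected]                      # GA-selected feature subset
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                              stratify=labels, random_state=0)
    scores = {}
    for metric in ("correlation", "euclidean", "cosine", "manhattan"):
        clf = KNeighborsClassifier(n_neighbors=k, metric=metric,
                                   algorithm="brute")
        clf.fit(X_tr, y_tr)
        scores[metric] = clf.score(X_te, y_te)     # Rank-1 recognition rate
    return scores
```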


2020 ◽  
Author(s):  
Aras Masood Ismael ◽  
Ömer F. Alçin ◽  
Karmand H. Abdalla ◽  
Abdulkadir K. Sengur

Abstract: In this paper, a novel approach based on two-step majority voting is proposed for efficient EEG-based emotion classification. Emotion recognition is important for human-machine interaction. Approaches based on facial features and body gestures have generally been proposed for emotion recognition; recently, EEG-based approaches have become more popular. In the proposed approach, the raw EEG signals are first low-pass filtered for noise removal, and band-pass filters are used to extract the rhythms. For each rhythm, the best-performing EEG channels are determined based on wavelet-based entropy features and fractal-dimension-based features. The k-nearest neighbor (KNN) classifier is used for classification. The five best EEG channels are combined by majority voting to obtain the prediction for each EEG rhythm. In the second majority-voting step, the predictions from all rhythms are combined to obtain a final prediction. The DEAP dataset is used in the experiments, and classification accuracy, sensitivity, and specificity are used as performance evaluation metrics. The experiments address two binary problems: high valence (HV) vs. low valence (LV) and high arousal (HA) vs. low arousal (LA). They show that 86.3% HV vs. LV discrimination accuracy and 85.0% HA vs. LA discrimination accuracy are obtained. The obtained results are also compared with some existing methods; the comparisons show that the proposed method has potential for EEG-based emotion classification.
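The two-step voting scheme can be sketched as follows, assuming per-channel KNN predictions for each rhythm are already available as binary label arrays.

```python
# Two-step majority voting over channels and then rhythms (binary labels).
import numpy as np

def majority_vote(preds):
    # preds: (n_voters, n_trials) array of 0/1 labels; ties go to class 1.
    return (np.asarray(preds).mean(axis=0) >= 0.5).astype(int)

def two_step_vote(channel_preds_per_rhythm):
    # Step 1: vote across the five best channels within each rhythm.
    rhythm_preds = np.vstack([majority_vote(p) for p in channel_preds_per_rhythm])
    # Step 2: vote across the per-rhythm predictions for the final label.
    return majority_vote(rhythm_preds)
```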


2016 ◽  
Vol 13 (5) ◽  
Author(s):  
Malik Yousef ◽  
Waleed Khalifa ◽  
Loai AbdAllah

Summary: The performance of many learning and data mining algorithms depends critically on a suitable metric over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. A clustering ensemble is used to define the distance between points with respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named the ensemble-clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved the highest accuracy, whereas SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. The averaged results show that EC-kNN outperforms all other methods employed here as well as previously published results for the same data. In conclusion, this study shows that the chosen classifier achieves high performance when the distance metric is carefully chosen.
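A co-association distance of this kind can be sketched as follows; the ensemble here is built from repeated k-means runs, which is an assumption and may differ from the exact EC-kNN construction.

```python
# Co-association (clustering-ensemble) distance built from repeated k-means
# runs; the resulting matrix can feed a kNN with metric="precomputed".
import numpy as np
from sklearn.cluster import KMeans

def ensemble_distance_matrix(X, n_runs=20, n_clusters=5, seed=0):
    n = X.shape[0]
    co = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed + r).fit_predict(X)
        co += labels[:, None] == labels[None, :]   # 1 where two points co-cluster
    return 1.0 - co / n_runs                       # distance = 1 - co-cluster rate
```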


Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 290 ◽  
Author(s):  
Xiong Gan ◽  
Hong Lu ◽  
Guangyou Yang

This paper proposes a new method named composite multiscale fluctuation dispersion entropy (CMFDE), which measures the complexity of a time series under different scale factors and synthesizes the information of multiple coarse-grained sequences. A simulation validates that CMFDE improves the stability of entropy estimation. Meanwhile, a fault recognition method for rolling bearings based on CMFDE, the minimum redundancy maximum relevance (mRMR) method, and the k-nearest neighbor (kNN) classifier (CMFDE-mRMR-kNN) is developed. In the CMFDE-mRMR-kNN method, CMFDE is introduced to extract the fault characteristics of the rolling bearings. Then, the sensitive features are obtained by utilizing the mRMR method. Finally, the kNN classifier is used to recognize the different conditions of the rolling bearings. The effectiveness of the proposed CMFDE-mRMR-kNN method is verified by analyzing a standard experimental dataset. The experimental results show that the proposed fault diagnosis method can effectively classify the conditions of rolling bearings.
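The composite coarse-graining step underlying CMFDE can be sketched as follows, with the fluctuation dispersion entropy itself left as an assumed callable.

```python
# Composite multiscale coarse-graining; `entropy_fn` stands for the
# fluctuation dispersion entropy, which is not implemented here.
import numpy as np

def composite_multiscale_entropy(x, max_scale, entropy_fn):
    x = np.asarray(x, dtype=float)
    values = []
    for tau in range(1, max_scale + 1):
        ents = []
        for start in range(tau):                   # the tau shifted coarse-grainings
            n = (len(x) - start) // tau
            cg = x[start:start + n * tau].reshape(n, tau).mean(axis=1)
            ents.append(entropy_fn(cg))
        values.append(np.mean(ents))               # average entropy over the shifts
    return np.array(values)
```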


Foods ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 38 ◽  
Author(s):  
Xiaohong Wu ◽  
Jin Zhu ◽  
Bin Wu ◽  
Chao Zhao ◽  
Jun Sun ◽  
...  

The detection of liquor quality is an important process in the liquor industry, and the quality of Chinese liquors is partly determined by their aromas. The electronic nose (e-nose) is an artificial olfactory technology, and an e-nose system can quickly detect different types of Chinese liquors according to their aromas. In this study, an e-nose system was designed to identify six types of Chinese liquors, and a novel feature extraction algorithm, called fuzzy discriminant principal component analysis (FDPCA), was developed for feature extraction from e-nose signals by combining discriminant principal component analysis (DPCA) and fuzzy set theory. In addition, principal component analysis (PCA), DPCA, the K-nearest neighbor (KNN) classifier, a leave-one-out (LOO) strategy, and k-fold cross-validation (k = 5, 10, 20, 25) were employed in the e-nose system. The maximum classification accuracy for Chinese liquors was 98.378% using FDPCA features, showing this algorithm to be extremely effective. The experimental results indicate that an e-nose system coupled with FDPCA is a feasible method for classifying Chinese liquors.
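A baseline version of this pipeline, with ordinary PCA standing in for FDPCA (which is not reproduced here), evaluated with LOO and k-fold cross-validation, can be sketched as follows.

```python
# Baseline PCA + KNN pipeline with leave-one-out and k-fold cross-validation;
# ordinary PCA is used as a stand-in for FDPCA.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score, LeaveOneOut, StratifiedKFold

def evaluate_enose(X, y, n_components=5, k=3):
    pipe = make_pipeline(StandardScaler(), PCA(n_components=n_components),
                         KNeighborsClassifier(n_neighbors=k))
    scores = {"loo": cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()}
    for folds in (5, 10, 20, 25):                  # k-fold settings from the paper
        cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
        scores[f"{folds}-fold"] = cross_val_score(pipe, X, y, cv=cv).mean()
    return scores
```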


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 413 ◽  
Author(s):  
Chris Lytridis ◽  
Anna Lekova ◽  
Christos Bazinas ◽  
Michail Manios ◽  
Vassilis G. Kaburlasos

Our interest is in time series classification regarding cyber–physical systems (CPSs), with emphasis on human-robot interaction. We propose an extension of the k-nearest neighbor (kNN) classifier to time series classification using intervals' numbers (INs). More specifically, we partition a time series into windows of equal length, and from the data of each window we induce a distribution which is represented by an IN. This preserves the time dimension in the representation. All-order data statistics, represented by an IN, are employed implicitly as features; moreover, parametric non-linearities are introduced in order to tune the geometrical relationship (i.e., the distance) between signals and consequently tune classification performance. In conclusion, we introduce the windowed IN kNN (WINkNN) classifier, whose application is demonstrated comparatively on two benchmark datasets regarding, first, electroencephalography (EEG) signals and, second, audio signals. The results by WINkNN are superior in both problems; in addition, no ad hoc data preprocessing is required. Potential future work is discussed.
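The windowing idea can be sketched as follows, with empirical quantiles used as a crude stand-in for the intervals' numbers that WINkNN actually induces.

```python
# Equal-length windowing with a quantile summary per window, a crude
# stand-in for inducing an intervals' number (IN) from each window.
import numpy as np

def window_representation(signal, n_windows, n_levels=16):
    windows = np.array_split(np.asarray(signal, dtype=float), n_windows)
    q = np.linspace(0.0, 1.0, n_levels)
    # One row of quantiles per window, so the time dimension is preserved.
    return np.vstack([np.quantile(w, q) for w in windows])
```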


2021 ◽  
Vol 11 (1) ◽  
pp. 7-19
Author(s):  
Ibrahima Bah

Machine learning, a branch of artificial intelligence, has become more accurate than human medical professionals in predicting the incidence of heart attack or death in patients at risk of coronary artery disease. In this paper, we attempt to employ Artificial Intelligence (AI) to predict heart attack. For this purpose, we employ the popular classification technique named the K-Nearest Neighbor (KNN) algorithm to predict the probability of having a heart attack (HA). The dataset used is the cardiovascular dataset publicly available on Kaggle, noting that someone suffering from cardiovascular disease is likely to succumb to a heart attack. The research was conducted in two stages: the KNN classifier is first applied with a correlation matrix used to select the best features manually and to speed up computation, and its parameters are then optimized with the k-fold cross-validation technique. This improvement led to an accuracy of 72.37% on the test set.
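A sketch of this pipeline is given below; it assumes the Kaggle cardiovascular data are loaded into a pandas DataFrame with a binary "cardio" target, and the correlation threshold is an illustrative assumption.

```python
# Correlation-based feature selection plus KNN with k chosen by k-fold CV;
# the "cardio" target column and the 0.05 threshold are assumptions.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

def train_heart_knn(df, target="cardio", corr_threshold=0.05):
    corr = df.corr()[target].drop(target).abs()
    features = corr[corr >= corr_threshold].index.tolist()   # manual-style selection
    X_tr, X_te, y_tr, y_te = train_test_split(df[features], df[target],
                                              test_size=0.2, random_state=0)
    grid = GridSearchCV(KNeighborsClassifier(),
                        {"n_neighbors": list(range(3, 31, 2))}, cv=5)
    grid.fit(X_tr, y_tr)                           # 5-fold CV tunes the neighborhood
    return grid.best_params_, grid.best_estimator_.score(X_te, y_te)
```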


Energies ◽  
2019 ◽  
Vol 12 (8) ◽  
pp. 1472 ◽  
Author(s):  
Thang Bui Quy ◽  
Sohaib Muhammad ◽  
Jong-Myon Kim

This paper proposes a reliable leak detection method for water pipelines under different operating conditions. The approach segments acoustic emission (AE) signals into short frames based on the Hanning window, with an overlap of 50%. After segmentation, an intermediate quantity is calculated from each frame; this quantity contains the symptoms of a leak and remains adequately stable even when the environmental conditions change. Finally, a k-nearest neighbor (KNN) classifier is trained using features extracted from the transformed signals to identify leaks in the pipeline. Experiments are conducted under different conditions to confirm the effectiveness of the proposed method. The results indicate that this method offers better quality and more reliability than using features extracted directly from the AE signals to train the KNN classifier. Moreover, the proposed method requires less training data than existing techniques. The transformation method is highly accurate and works well even when only a small amount of data is used to train the classifier, whereas the direct AE-based method returns misclassifications in some cases. In addition, robustness is tested by adding Gaussian noise to the AE signals; the proposed method is more resistant to noise than the direct AE-based method.
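The framing step can be sketched as follows; the intermediate leak-indicating quantity computed from each frame is not reproduced here.

```python
# Hanning-windowed framing of an AE signal with 50% overlap.
import numpy as np

def frame_signal(signal, frame_len, overlap=0.5):
    signal = np.asarray(signal, dtype=float)
    hop = int(frame_len * (1.0 - overlap))
    win = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop:i * hop + frame_len] * win
                     for i in range(n_frames)])    # shape (n_frames, frame_len)
```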


2012 ◽  
Vol 532-533 ◽  
pp. 1455-1459
Author(s):  
Xiang Dong Li ◽  
Han Jia ◽  
Li Huang

K-Nearest Neighbor (kNN) is a commonly used text categorization algorithm. Previous studies mainly focused on improving the algorithm by modifying feature selection and the choice of k. This research investigates the possibility of using the Jensen-Shannon divergence as the similarity measure in the kNN classifier and compares the performance in terms of classification accuracy. The experiments show that the kNN algorithm based on the Jensen-Shannon divergence outperforms the one based on cosine similarity, although performance also depends largely on the number of categories and the number of documents per category.
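A kNN text classifier using the Jensen-Shannon divergence as its distance can be sketched as follows; the weighting and preprocessing of the original experiments may differ.

```python
# kNN over term-frequency distributions with the Jensen-Shannon divergence
# as the distance; weighting and preprocessing are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier

def build_js_knn(train_docs, train_labels, k=5):
    vec = CountVectorizer()
    X = vec.fit_transform(train_docs).toarray().astype(float)
    X /= np.maximum(X.sum(axis=1, keepdims=True), 1e-12)   # per-document distributions
    clf = KNeighborsClassifier(n_neighbors=k, metric=jensenshannon,
                               algorithm="brute")
    clf.fit(X, train_labels)
    return vec, clf
```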

