ECG signal classification based on temporal and spectral features using SVM classifier
2017 ◽ Vol 10 (11) ◽ pp. 4116
Author(s): A. Mohamed Syed Ali

2021 ◽ Vol 105 ◽ pp. 282-290
Author(s): Vijay Anant Athavale ◽ Suresh Chand Gupta ◽ Deepak Kumar ◽ Savita

In this paper, a pre-trained VGG16 CNN model combined with an SVM classifier is presented for the human activity recognition (HAR) task. Deep features are learned with the pre-trained VGG16 model, a network originally developed for image classification, and applied here to classify human activity signals recorded by a mobile phone's accelerometer. The UniMiB dataset contains 11,771 samples of daily human activity recorded by a smartphone accelerometer. Features are taken from the fifth max-pooling layer of the VGG16 model and fed to the SVM classifier, which replaces the fully connected layers of VGG16. The proposed VGG16-SVM model achieves effective and efficient results and is compared with previously used schemes. Classification accuracy and F-score are the evaluation metrics, and the proposed method provided 79.55% accuracy and a 71.63% F-score.
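The pipeline described above (VGG16 as a fixed feature extractor, with an SVM replacing the fully connected layers) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes TensorFlow/Keras for the VGG16 backbone, scikit-learn's SVC for the classifier, and that each accelerometer sample has already been converted to a 224x224x3 array; the random arrays are hypothetical stand-ins for the UniMiB data.

```python
# Sketch: VGG16 features from the fifth max-pooling layer, fed to an SVM.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

# VGG16 without its fully connected layers; include_top=False makes the
# output of the fifth max-pooling block (block5_pool) the feature map.
backbone = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

def extract_features(x):
    """Run samples through VGG16 and flatten the block5_pool output."""
    feats = backbone.predict(preprocess_input(x), verbose=0)
    return feats.reshape(feats.shape[0], -1)

# Hypothetical placeholders standing in for accelerometer samples rendered
# as 224x224x3 arrays, with integer activity labels.
X_train = np.random.rand(8, 224, 224, 3).astype("float32") * 255
y_train = np.array([0, 1, 0, 1, 2, 2, 0, 1])
X_test = np.random.rand(4, 224, 224, 3).astype("float32") * 255
y_test = np.array([0, 1, 2, 0])

# The SVM takes the place of VGG16's fully connected layers.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(extract_features(X_train), y_train)
pred = clf.predict(extract_features(X_test))

print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```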


2020 ◽
Author(s): Thamba Meshach W ◽ Hemajothi S ◽ Mary Anita E A

Human affect recognition (HAR) using facial expression images and the electrocardiogram (ECG) signal plays an important role in predicting human intention and improves performance in applications such as security systems, learning technologies and health care systems. The primary goal of our work is to recognize individual affect states automatically using the multilayered binary structured support vector machine (MBSVM), which classifies the input into one of four affect classes: relax, happy, sad and angry. Classification is performed by a support vector machine (SVM) classifier operating in multilayer mode. The classifier is trained with 8-fold cross-validation, which improves its learning and thus its efficiency. Classification and recognition accuracy is enhanced, and the drawback of 'facial mimicry' is overcome, by using hybrid features extracted from both facial images (visual features) and the physiological ECG signal (signal features). The reliability of the input database is improved by acquiring the face images and ECG signals experimentally and by inducing emotions through image stimuli. The performance of the affect recognition system is evaluated using the confusion matrix, obtaining a classification accuracy of 96.88%.
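A minimal sketch of the hybrid-feature SVM stage described above, under stated assumptions: the facial and ECG features are already extracted into fixed-length vectors (the random arrays below are hypothetical placeholders), the multilayered binary structure is approximated with scikit-learn's one-vs-rest combination of binary SVMs, and 8-fold cross-validation is done with cross_val_score. This is an illustration of the idea, not the authors' MBSVM implementation.

```python
# Sketch: hybrid face + ECG features, binary SVMs one-vs-rest, 8-fold CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical placeholders for the extracted feature vectors.
n_samples = 160
face_features = rng.normal(size=(n_samples, 64))   # visual features from face images
ecg_features = rng.normal(size=(n_samples, 32))    # signal features from the ECG
labels = rng.integers(0, 4, size=n_samples)        # 0=relax, 1=happy, 2=sad, 3=angry

# Hybrid feature vector: concatenate visual and physiological features.
X = np.hstack([face_features, ecg_features])

# Binary SVMs combined one-vs-rest, standing in for the multilayered binary structure.
model = make_pipeline(StandardScaler(), OneVsRestClassifier(SVC(kernel="rbf", C=1.0)))

# 8-fold cross-validation, as described for training the classifier.
scores = cross_val_score(model, X, labels, cv=8)
print("mean CV accuracy:", scores.mean())
```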

