Epileptic EEG Identification via LBP Operators on Wavelet Coefficients

2018 ◽  
Vol 28 (08) ◽  
pp. 1850010 ◽  
Author(s):  
Qi Yuan ◽  
Weidong Zhou ◽  
Fangzhou Xu ◽  
Yan Leng ◽  
Dongmei Wei

The automatic identification of epileptic electroencephalogram (EEG) signals can assist doctors in the diagnosis of epilepsy and provide greater safety and quality of life for people with epilepsy. Feature extraction from EEG signals determines the performance of the whole recognition system. In this paper, a novel method using the local binary pattern (LBP) based on the wavelet transform (WT) is proposed to characterize the behavior of EEG activities. First, the WT is employed for time–frequency decomposition of the EEG signals. The “uniform” LBP operator is then applied to the wavelet-based time–frequency representation, and the generated histogram is used as the EEG feature vector to quantify the textural information of the wavelet coefficients. The LBP features coupled with a support vector machine (SVM) classifier yield satisfactory recognition accuracies of 98.88% for interictal versus ictal EEG classification and 98.92% for normal, interictal, and ictal EEG classification on a publicly available EEG dataset. Moreover, numerical results on another large EEG dataset demonstrate that the proposed method can also effectively detect seizure events in multi-channel raw EEG data. Compared with the standard LBP, the “uniform” LBP produces a much shorter histogram, which greatly reduces the computational burden of classification and enables ictal EEG signals to be detected in real time.
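As a sketch of the “uniform” LBP step, the code below (an illustrative pure-Python implementation, not the authors’ code) computes the uniform-LBP histogram of a 2-D array such as a wavelet time–frequency map. With 8 neighbours there are 58 uniform patterns (at most two 0/1 transitions around the circle), so the feature vector has 59 bins, the last collecting all non-uniform codes — this shortening of the histogram is what reduces the classification cost:

```python
def uniform_lbp_hist(img):
    """Uniform-LBP histogram of a 2-D array (e.g. wavelet coefficients).

    img: list of lists of numbers. Returns a normalised 59-bin histogram:
    58 bins for the 8-bit "uniform" patterns (<= 2 bit transitions on the
    circle) plus one bin collecting all non-uniform patterns.
    """
    # 8 neighbours in circular order around the centre pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]

    def transitions(code):
        bits = [(code >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    uniform = [c for c in range(256) if transitions(c) <= 2]
    bin_of = {c: i for i, c in enumerate(uniform)}   # 58 uniform bins
    hist = [0.0] * (len(uniform) + 1)                # +1 non-uniform bin

    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for k, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= img[y][x]:
                    code |= 1 << k
            hist[bin_of.get(code, len(uniform))] += 1

    total = sum(hist)
    return [v / total for v in hist]
```

The standard LBP would need a 256-bin histogram; the uniform variant keeps the texture information that matters while shrinking the feature vector to 59 dimensions.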

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6300
Author(s):  
Ala Hag ◽  
Dini Handayani ◽  
Thulasyammal Pillai ◽  
Teddy Mantoro ◽  
Mun Hou Kit ◽  
...  

Exposure to mental stress over a long period leads to serious accidents and health problems. To avoid negative consequences for health and safety, it is very important to detect mental stress in its early stages, i.e., while it is still limited to acute or episodic stress. In this study, we developed an experimental protocol to induce two different levels of stress, using a mental arithmetic task with time pressure and negative feedback as the stressors. We assessed the stress levels of 22 healthy subjects using frontal electroencephalogram (EEG) signals, the salivary alpha-amylase level (AAL), and multiple machine learning (ML) classifiers. The EEG signals were analyzed using a fusion of functional connectivity networks estimated by the Phase Locking Value (PLV) and features from the temporal and spectral domains. A total of 210 features were extracted across all domains, and only the optimum multi-domain features were used for classification. We then quantified stress levels using statistical analysis and seven ML classifiers. Our results showed that the AAL was significantly increased (p < 0.01) under the stress condition in all subjects. Likewise, the functional connectivity network showed a significant decrease under stress (p < 0.05). Moreover, we achieved the highest stress classification accuracy of 93.2% using the Support Vector Machine (SVM) classifier; other classifiers produced relatively similar results.
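The PLV itself is simple to state: given two instantaneous-phase series, it is the magnitude of the mean complex phase-difference vector, equal to 1 for perfect locking and near 0 for unrelated phases. A minimal sketch (illustrative, not the study’s code; obtaining the phases, e.g. via the Hilbert transform, is assumed done upstream):

```python
import cmath

def plv(phase_a, phase_b):
    """Phase Locking Value of two instantaneous-phase series (radians).

    Returns 1.0 when the phase difference is constant (perfect locking)
    and approaches 0.0 when it is uniformly spread over the circle.
    """
    n = len(phase_a)
    s = sum(cmath.exp(1j * (pa - pb)) for pa, pb in zip(phase_a, phase_b))
    return abs(s) / n
```

Computing this for every electrode pair yields the functional connectivity network whose decrease under stress the study reports.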


Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 187
Author(s):  
Shingchern D. You

In this paper, we study the use of EEG (electroencephalography) to classify between concentrated and relaxed mental states. Most EEG recording systems in the literature are expensive, medical-grade devices, which limits their availability in the consumer market. Here, the EEG signals are obtained from a toy-grade EEG device with one channel of output data. The experiments are conducted in two runs, with 7 and 10 subjects, respectively. Each subject is asked to silently recite, backwards, a five-digit number given by the tester. The recorded EEG signals are converted to time-frequency representations by the software accompanying the device, and a simple average is used to aggregate multiple spectral components into EEG bands, such as the α, β, and γ bands. The chosen classifiers are an SVM (support vector machine) and a multi-layer feedforward network, trained individually for each subject. Experimental results show that, with features from the α+β+γ bands and a bandwidth of 4 Hz, the average accuracy over all subjects in both runs can exceed 80%, and some subjects reach above 90% with the SVM classifier. The results suggest that a brain-machine interface could be implemented based on the mental states of the user, even with a cheap EEG device.
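The band-aggregation step described above amounts to averaging the spectral components whose frequencies fall inside each band. A minimal sketch (the band edges below are the conventional ones, assumed for illustration rather than taken from the paper):

```python
def band_average(freqs, mags, lo, hi):
    """Average the spectral magnitudes whose frequency lies in [lo, hi)."""
    vals = [m for f, m in zip(freqs, mags) if lo <= f < hi]
    return sum(vals) / len(vals) if vals else 0.0

# Conventional EEG band edges in Hz (an assumption, not from the paper)
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
```

A per-subject feature vector is then just `[band_average(freqs, mags, lo, hi) for lo, hi in BANDS.values()]`, optionally subdivided into 4 Hz-wide sub-bands as in the paper.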


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 540 ◽  
Author(s):  
Qiang Guo ◽  
Xin Yu ◽  
Guoqing Ruan

Low Probability of Intercept (LPI) radar waveform recognition is not only an important branch of the electronic reconnaissance field, but also an important means of obtaining non-cooperative radar information. To address the problems of low LPI radar waveform recognition rates, difficult feature extraction, and the large number of samples required, an automatic classification and recognition system based on the Choi-Williams distribution (CWD) and deep convolutional neural network transfer learning is proposed in this paper. First, the system performs a CWD time-frequency transform on the LPI radar waveform to obtain a 2-D time-frequency image. The system then preprocesses the original time-frequency image and sends the preprocessed image to a pre-trained deep convolutional network (Inception-v3 or ResNet-152) for feature extraction. Finally, the extracted features are sent to a Support Vector Machine (SVM) classifier to realize offline training and online recognition of radar waveforms. The simulation results show that the overall recognition rate of the eight LPI radar signals (LFM, BPSK, Costas, Frank, and T1–T4) reaches 97.8% for the ResNet-152-SVM system and 96.2% for the Inception-v3-SVM system when the SNR is −2 dB.
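The CWD smooths the ambiguity function of the signal with the exponential kernel φ(θ, τ) = exp(−θ²τ²/σ), which equals 1 along the θ and τ axes (preserving auto-terms) and decays away from them (suppressing cross-terms). A sketch of the kernel only, not a full CWD implementation:

```python
import math

def cw_kernel(theta, tau, sigma=1.0):
    """Choi-Williams kernel phi(theta, tau) = exp(-(theta*tau)^2 / sigma).

    Equals 1 on the theta and tau axes (auto-terms preserved) and decays
    away from them (cross-terms suppressed); smaller sigma suppresses more.
    """
    return math.exp(-(theta * tau) ** 2 / sigma)
```

In the full distribution this kernel multiplies the ambiguity function before the 2-D Fourier transform back to the time-frequency plane; σ trades cross-term suppression against auto-term resolution.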


2011 ◽  
Vol 21 (04) ◽  
pp. 335-350 ◽  
Author(s):  
WEI-YEN HSU

In this study, we propose a two-stage recognition system for the continuous analysis of electroencephalogram (EEG) signals. Independent component analysis (ICA) and correlation coefficients are used to automatically eliminate electrooculography (EOG) artifacts. Based on the continuous wavelet transform (CWT) and Student's two-sample t-statistics, active segment selection then detects the location of the active segment in the time-frequency domain. Next, multiresolution fractal feature vectors (MFFVs) are extracted from the wavelet data with the proposed modified fractal dimension. Finally, a support vector machine (SVM) is adopted for the robust classification of the MFFVs. The EEG signals are continuously analyzed in 1-s segments, advancing by 0.5 s at a time to simulate asynchronous BCI operation in the two-stage recognition architecture. In the first stage, a segment is recognized as containing a finger lift or not; if so, it is classified as a left or right finger lift in the second stage. Several statistical analyses are used to evaluate the performance of the proposed system. The results indicate that it is a promising system for asynchronous BCI applications.
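The active-segment selection rests on Student's two-sample t-statistic, which scores how strongly a time-frequency cell differs between two conditions. A minimal pooled-variance sketch (illustrative, not the author's code; equal variances assumed):

```python
import math

def two_sample_t(x, y):
    """Pooled-variance two-sample t-statistic (equal variances assumed)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))
```

Cells whose |t| exceeds a threshold mark the active segment in the time-frequency plane.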


2013 ◽  
Vol 23 (06) ◽  
pp. 1350028 ◽  
Author(s):  
YU WANG ◽  
WEIDONG ZHOU ◽  
QI YUAN ◽  
XUELI LI ◽  
QINGFANG MENG ◽  
...  

The feature analysis of epileptic EEG is of great significance in the diagnosis of epilepsy. This paper introduces two nonlinear features derived from fractal geometry for epileptic EEG analysis. The blanket dimension and fractal intercept features are extracted to characterize the behavior of EEG activities, and their discriminatory power for ictal and interictal EEGs is then compared by means of statistical methods. A significant difference in both the blanket dimension and the fractal intercept is found between interictal and ictal EEGs, with the fractal intercept discriminating more clearly than the blanket dimension. Furthermore, these two fractal features at multiple scales are combined with a support vector machine (SVM) to achieve accuracies of 97.58% for ictal versus interictal EEG classification and 97.13% for normal, ictal, and interictal EEG classification.
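Both features fall out of the morphological-covering ("blanket") method: the signal is covered by an upper and a lower blanket grown iteratively, the blanket area A(ε) is measured at each scale ε, and a least-squares line is fitted to log A versus log ε. The slope gives the blanket dimension and the intercept gives the fractal intercept. The code below is a rough 1-D sketch under stated assumptions (unit vertical dilation per step, scale range, and regression convention are illustrative choices, not the authors' exact algorithm):

```python
import math

def blanket_features(x, max_eps=8):
    """Blanket-method sketch: dilate/erode the signal, fit log A vs log eps.

    Returns (dimension, intercept) from the least-squares line
    log A(eps) = intercept + slope * log eps, with D = 2 - slope
    (since A(eps) ~ eps^(2 - D) for a fractal graph).
    """
    u, b = list(x), list(x)
    n = len(x)
    logs_e, logs_a = [], []
    for eps in range(1, max_eps + 1):
        # grow the upper blanket up and the lower blanket down by one unit,
        # taking the max/min over the immediate neighbourhood
        u = [max(u[i] + 1, u[max(i - 1, 0)], u[min(i + 1, n - 1)])
             for i in range(n)]
        b = [min(b[i] - 1, b[max(i - 1, 0)], b[min(i + 1, n - 1)])
             for i in range(n)]
        area = sum(ui - bi for ui, bi in zip(u, b)) / (2 * eps)
        logs_e.append(math.log(eps))
        logs_a.append(math.log(area))
    m = len(logs_e)
    me, ma = sum(logs_e) / m, sum(logs_a) / m
    slope = (sum((e - me) * (a - ma) for e, a in zip(logs_e, logs_a))
             / sum((e - me) ** 2 for e in logs_e))
    intercept = ma - slope * me
    return 2 - slope, intercept
```

For EEG use, the pair (dimension, intercept) computed per segment and per scale range forms the feature vector fed to the SVM.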


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Hasan Mahmud ◽  
Md. Kamrul Hasan ◽  
Abdullah-Al-Tariq ◽  
Md. Hasanul Kabir ◽  
M. A. Mottalib

Symbolic gestures are hand postures with conventionalized meanings. They are static gestures that can be performed without voice in very complex environments containing variations in rotation and scale, and they may be produced under different illumination conditions or against occluding backgrounds. Any hand gesture recognition system should find sufficiently discriminative features, such as hand-finger contextual information. However, existing approaches make only limited use of the depth information of hand fingers, which represents finger shape, when extracting discriminative finger features. If finger-bending information (i.e., a finger overlapping the palm) extracted from the depth map is used as a local feature, static gestures that vary only slightly become distinguishable. Our work corroborates this idea: we generated depth silhouettes with contrast variation to obtain more discriminative keypoints, which improved recognition accuracy to 96.84%. We applied the Scale-Invariant Feature Transform (SIFT) algorithm, which takes the generated depth silhouettes as input and produces robust feature descriptors as output. These features, after conversion into unified-dimensional feature vectors, are fed into a multiclass Support Vector Machine (SVM) classifier to measure accuracy. We tested our approach on a standard dataset containing 10 symbolic gestures representing the 10 numeric symbols (0-9), and then verified and compared our results among depth images, binary images, and images consisting of the hand-finger edge information generated from the same dataset. Our results show higher accuracy when applying SIFT features to depth images. Accurately recognizing numeric symbols performed through hand gestures has a large impact on many Human-Computer Interaction (HCI) applications, including augmented reality and virtual reality.
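The contrast-variation step can be illustrated with a simple linear stretch that remaps a depth map's value range onto the full output range, so that small finger-bending depth differences become more pronounced before keypoint detection (an illustrative operation standing in for the paper's preprocessing, not the authors' exact code):

```python
def stretch_contrast(depth, out_lo=0.0, out_hi=255.0):
    """Linearly remap a 2-D depth map (list of lists) onto [out_lo, out_hi]."""
    flat = [v for row in depth for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                        # flat image: map everything to out_lo
        return [[out_lo for _ in row] for row in depth]
    scale = (out_hi - out_lo) / (hi - lo)
    return [[out_lo + (v - lo) * scale for v in row] for row in depth]
```

The stretched silhouette is then handed to SIFT, which benefits from the amplified local gradients around bent fingers.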


2017 ◽  
Vol 27 (08) ◽  
pp. 1750033 ◽  
Author(s):  
Alborz Rezazadeh Sereshkeh ◽  
Robert Trott ◽  
Aurélien Bricout ◽  
Tom Chau

Brain–computer interfaces (BCIs) for communication can be nonintuitive, often requiring the performance of hand motor imagery or some other conversation-irrelevant task. In this paper, electroencephalography (EEG) was used to develop two intuitive online BCIs based solely on covert speech. The goal of the first BCI was to differentiate between 10 s of mental repetition of the word “no” and an equivalent duration of unconstrained rest. The second BCI was designed to discern between 10 s each of covert repetition of the words “yes” and “no”. Twelve participants used these two BCIs to answer yes-or-no questions. Each participant completed four sessions: two offline training sessions and two online sessions, one for testing each BCI. With a support vector machine and a combination of spectral and time-frequency features, an average accuracy of [Formula: see text] was reached across participants in the online classification of no versus rest, with 10 out of 12 participants surpassing the chance level (60.0% for [Formula: see text]). The online classification of yes versus no yielded an average accuracy of [Formula: see text], with eight participants exceeding the chance level. Task-specific changes in EEG beta and gamma power in language-related brain areas tended to provide discriminatory information. To our knowledge, this is the first report of online EEG classification of covert speech. Our findings support further study of covert speech as a BCI activation task, potentially leading to more intuitive BCIs for communication.
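The spectral features here rest on band power, e.g. beta and gamma power over language-related channels. A naive-DFT sketch of band power (illustrative only; a real pipeline would use an FFT with windowing rather than this O(N²) loop):

```python
import cmath

def band_power(x, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes for bins with f_lo <= f < f_hi (Hz)."""
    n = len(x)
    total = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f < f_hi:
            coef = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t in range(n))
            total += abs(coef) ** 2
    return total
```

Per-channel beta (13-30 Hz) and gamma (30-45 Hz) powers computed this way over each 10 s trial would form one slice of the SVM feature vector; the band edges are the conventional ones, assumed rather than taken from the paper.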


2021 ◽  
Author(s):  
Rejith K.N ◽  
Kamalraj Subramaniam ◽  
Ayyem Pillai Vasudevan Pillai ◽  
Roshini T V ◽  
Renjith V. Ravi ◽  
...  

Abstract In this work, PD patients and healthy individuals were categorized with machine-learning algorithms. EEG signals associated with six different emotions, happiness (E1), sadness (E2), fear (E3), anger (E4), surprise (E5), and disgust (E6), were used to categorize PD patients and healthy controls (HC). EEG data were collected from 20 PD patients and 20 normal controls using multimodal stimuli, and different features were used to categorize the emotional data. Emotion recognition in Parkinson's disease (PD) was investigated in three domains, namely time, frequency, and time-frequency, using Entropy, Energy-Entropy, and Teager Energy-Entropy features. Three classifiers, namely the K-Nearest Neighbor (KNN) algorithm, the Support Vector Machine (SVM), and the Probabilistic Neural Network (PNN), were used to obtain the classification results. For each EEG signal, frequency features corresponding to the alpha, beta, and gamma bands were obtained for nine feature extraction methods (Entropy, Energy-Entropy, Teager Energy-Entropy, Spectral Entropy, Spectral Energy-Entropy, Spectral Teager Energy-Entropy, STFT Entropy, STFT Energy-Entropy, and STFT Teager Energy-Entropy). The analysis shows that the entropy feature in the frequency domain performs consistently well (above 80%) for all six emotions with KNN. The classification results show that the selected energy-entropy combination feature in the frequency domain provides the highest accuracy for all emotions except E1 and E2 with the KNN and SVM classifiers, whereas the other features give accuracies above 60% for most emotions. Emotion E1 gives above 90% classification accuracy for all classifiers in the time domain, and in the frequency domain it also gives above 90% accuracy with the PNN classifier.
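All of the entropy-family features above reduce to Shannon entropy computed over some normalised distribution (sample energies, a power spectrum, or STFT magnitudes). A minimal sketch of the common core (illustrative; the paper's exact normalisation per feature is not specified here):

```python
import math

def shannon_entropy(values):
    """Shannon entropy (bits) of non-negative values, normalised to sum 1."""
    total = sum(values)
    probs = [v / total for v in values if v > 0]
    return -sum(p * math.log2(p) for p in probs)
```

Applying this to a band-limited power spectrum gives spectral entropy; applying it to per-frame STFT energies gives the STFT-entropy variant.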


2019 ◽  
Vol 19 (03) ◽  
pp. 1950008
Author(s):  
MONALISA MOHANTY ◽  
PRADYUT BISWAL ◽  
SUKANTA SABUT

Ventricular tachycardia (VT) and ventricular fibrillation (VF) are life-threatening ventricular arrhythmias that require emergency treatment. Detection of VT and VF at an early stage is crucial to the success of defibrillation treatment, so an automatic computer-aided diagnosis tool is helpful for detecting ventricular arrhythmias in the electrocardiogram (ECG) signal. In this paper, a discrete wavelet transform (DWT) was used to denoise the ECG signals and decompose them into consecutive frequency bands. The methodology was tested using ECG data from the standard CU ventricular tachyarrhythmia database (CUDB) and the MIT-BIH malignant ventricular ectopy database (VFDB) of the PhysioNet databases. A set of time-frequency features, comprising temporal, spectral, and statistical measures, was extracted and ranked by correlation attribute evaluation with the ranker search method to improve detection accuracy. The ranked features were classified for VT and VF conditions using a support vector machine (SVM) and a decision tree (C4.5) classifier. The proposed DWT-based features yielded an average sensitivity of 98%, specificity of 99.32%, and accuracy of 99.23% with the decision tree (C4.5) classifier, better than the SVM classifier's average accuracy of 92.43%. These results indicate that DWT-based time-frequency features with a decision tree (C4.5) classifier can be one of the best choices for clinicians for the precise detection of ventricular arrhythmias.
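One DWT level splits a signal into a low-frequency approximation half and a high-frequency detail half; repeating on the approximation yields the consecutive frequency bands used above. A Haar-wavelet sketch of a single level (the paper does not state its mother wavelet, so Haar is an assumption for illustration):

```python
import math

def haar_dwt_level(x):
    """One level of the orthonormal Haar DWT (len(x) must be even).

    Returns (approximation, detail); orthonormality preserves energy,
    so sum of squares in equals sum of squares out (Parseval).
    """
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail
```

Denoising then amounts to thresholding the detail coefficients before reconstruction, and the temporal/spectral/statistical features are computed per sub-band.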


2020 ◽  
Author(s):  
Thamba Meshach W ◽  
Hemajothi S ◽  
Mary Anita E A

Abstract Human affect recognition (HAR) using facial expression images and the electrocardiogram (ECG) signal plays an important role in predicting human intention, and improves performance in applications such as security systems, learning technologies, and health care. The primary goal of our work is to recognize individual affect states automatically using a multilayered binary-structured support vector machine (MBSVM), which efficiently classifies the input into one of four affect classes: relax, happy, sad, and angry. Classification is performed by an efficient support vector machine (SVM) classifier operating in multilayer mode. The classifier is trained using 8-fold cross-validation, which improves its learning and thus its efficiency. Classification and recognition accuracy are enhanced, and the drawback of 'facial mimicry' is overcome, by using hybrid features extracted from both facial images (visual features) and the physiological ECG signal (signal features). The reliability of the input database is improved by acquiring the face images and ECG signals experimentally and by inducing emotions through image stimuli. The performance of the affect recognition system is evaluated using the confusion matrix, obtaining a classification accuracy of 96.88%.
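The reported accuracy follows directly from the confusion matrix: the sum of the diagonal (correct predictions) divided by the total count. A minimal sketch, with a hypothetical 4-class matrix whose counts are made up for illustration (rows = true class relax/happy/sad/angry, columns = predicted):

```python
def accuracy(confusion):
    """Overall accuracy from a square confusion matrix (rows = true class)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total
```

Per-class precision and recall come from the same matrix by dividing each diagonal entry by its column sum or row sum, respectively.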

