Wavelet-Based Emotion Recognition Using Single Channel EEG Device

Author(s):  
Tie Hua Zhou ◽  
Wen Long Liang ◽  
Hang Yu Liu ◽  
Wei Jian Pu ◽  
Ling Wang


2021 ◽
Vol 11 (3) ◽  
pp. 1338
Author(s):  
Ling Wang ◽  
Hangyu Liu ◽  
Tiehua Zhou ◽  
Wenlong Liang ◽  
Minglei Shan

The electroencephalogram (EEG), as a biomedical signal, is widely applied in the medical field, for example in the detection of Alzheimer's disease and Parkinson's disease. Moreover, by analyzing EEG-based emotions, the mental status of an individual can be revealed for further analysis of the psychological causes of some diseases, such as cancer, for which emotional state is considered a vital factor in disease induction. Therefore, once emotional status can be correctly analyzed from the EEG signal, more healthcare-oriented applications can be carried out. Currently, in order to achieve efficiency and accuracy, most EEG-based emotion recognition methods extract features by analyzing the overall characteristics of the signal, together with channel-selection strategies that minimize information redundancy. These methods have proved effective; however, a major challenge remains when only single-channel information is available for the emotion recognition task. Therefore, in order to recognize multidimensional emotions from single-channel information, an emotion quantification analysis (EQA) method is proposed to objectively analyze the semantic similarity between emotions in the valence-arousal domain, and a multidimensional emotion recognition (EMER) model is proposed to recognize multidimensional emotions from partial fluctuation pattern (PFP) features based on single-channel information. Results show that although semantically similar emotions are proved to have similar change patterns in EEG signals, each single channel in each of the 4 frequency bands can efficiently recognize 20 different emotions separately, with an average accuracy above 93%.
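For context, the band split that single-channel wavelet approaches of this kind typically rely on can be sketched as follows. This is a generic illustration only: the 128 Hz sampling rate, the 'db4' wavelet, the band edges, and the energy summary are assumptions, and the paper's PFP features are not reproduced here.

    # Sketch: split one EEG channel into 4 frequency bands with a discrete
    # wavelet transform (PyWavelets). Sampling rate, wavelet, and the energy
    # feature are illustrative assumptions, not the paper's exact method.
    import numpy as np
    import pywt

    FS = 128  # assumed sampling rate in Hz

    def eeg_bands(signal):
        """Decompose a single channel; detail levels halve the band each step."""
        coeffs = pywt.wavedec(signal, "db4", level=5)
        _, _cD5, cD4, cD3, cD2, cD1 = coeffs
        # Approximate band edges for FS = 128 Hz.
        return {
            "theta (4-8 Hz)": cD4,
            "alpha (8-16 Hz)": cD3,
            "beta (16-32 Hz)": cD2,
            "gamma (32-64 Hz)": cD1,
        }

    def band_energy(coeffs):
        # Band energy, a common summary feature over wavelet coefficients.
        return float(np.sum(np.square(coeffs)))

    if __name__ == "__main__":
        x = np.random.randn(FS * 10)  # 10 s of synthetic single-channel EEG
        for name, c in eeg_bands(x).items():
            print(name, band_energy(c))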


2021 ◽  
Vol 38 (6) ◽  
pp. 1689-1698
Author(s):  
Suat Toraman ◽  
Ömer Osman Dursun

Human emotion recognition from electroencephalographic (EEG) signals with machine learning methods has become a highly interesting subject for researchers. Although emotions expressed physically, such as speech, facial expressions, and gestures, are simple to define, psychological emotions that are expressed internally are more difficult to define. The most important stimuli for revealing inner emotions are aural and visual. In this study, EEG signals elicited by both aural and visual stimuli were examined, and emotions were evaluated with both binary and multi-class emotion recognition models. A general emotion recognition model was proposed for non-subject-based classification, and, unlike in previous studies, subject-based testing was also performed, for the first time in the literature. Capsule networks, a recent neural network model, were developed for binary and multi-class emotion recognition. In the proposed method, a novel fusion strategy was introduced for binary-class emotion recognition, and the model was tested on the GAMEEMO dataset. Binary-class emotion recognition achieved a classification accuracy 10% higher than the classification performance reported in other studies in the literature. Based on these findings, we suggest that the proposed method will bring a different perspective to emotion recognition.
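As background on the model class used above, the following is a minimal NumPy sketch of dynamic routing by agreement, the step that defines capsule networks (after Sabour et al., 2017). It is a generic illustration, not the authors' architecture or their binary-class fusion strategy; the shapes and iteration count are assumptions.

    # Dynamic routing by agreement between a layer of input capsules and a
    # layer of output capsules, in plain NumPy.
    import numpy as np

    def squash(s, eps=1e-9):
        # Non-linearity that keeps vector orientation, shrinks norm into [0, 1).
        norm = np.linalg.norm(s, axis=-1, keepdims=True)
        return (norm ** 2 / (1.0 + norm ** 2)) * s / (norm + eps)

    def dynamic_routing(u_hat, iterations=3):
        """u_hat: prediction vectors of shape (n_in, n_out, d_out)."""
        n_in, n_out, _ = u_hat.shape
        b = np.zeros((n_in, n_out))                 # routing logits
        for _ in range(iterations):
            c = np.exp(b - b.max(axis=1, keepdims=True))
            c /= c.sum(axis=1, keepdims=True)       # softmax coupling coefficients
            s = np.einsum("ij,ijd->jd", c, u_hat)   # weighted sum per output capsule
            v = squash(s)                           # output capsule vectors
            b += np.einsum("ijd,jd->ij", u_hat, v)  # agreement update
        return v

    # Toy usage: 8 input capsules voting for 2 output capsules of dimension 4.
    votes = np.random.default_rng(0).normal(size=(8, 2, 4))
    out = dynamic_routing(votes)
    print(out.shape, np.linalg.norm(out, axis=-1))  # capsule norms act like scores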


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5218 ◽  
Author(s):  
Muhammad Adeel Asghar ◽  
Muhammad Jamil Khan ◽  
Fawad ◽  
Yasar Amin ◽  
Muhammad Rizwan ◽  
...  

Much attention has been paid to the recognition of human emotions from electroencephalogram (EEG) signals using machine learning technology. Recognizing emotions is a challenging task due to the non-linear properties of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition based on EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. A pre-trained AlexNet model is used to extract raw features from the 2D spectrogram of each channel. To reduce feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary consisting of 10 cluster centers per class is computed using the k-means clustering algorithm. Lastly, the emotion of each subject is represented as a histogram over the vocabulary set, collected from the raw features of a single channel. Features extracted with the proposed BoDF model have considerably smaller dimensions. The proposed model achieves better classification accuracy than recently reported work when validated on the SJTU SEED and DEAP data sets. For optimal classification performance, a support vector machine (SVM) and a k-nearest neighbor (k-NN) classifier are used to classify the extracted features for the different emotional states of the two data sets. The BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, which is more accurate than other state-of-the-art methods of human emotion recognition.
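The encoding pipeline described above can be sketched as follows. The deep_features() stub is a hypothetical placeholder for the AlexNet-on-spectrogram extractor, and the toy data, feature dimension, and linear SVM kernel are assumptions; only the per-class 10-center k-means vocabulary and the histogram encoding follow the abstract.

    # Bag-of-deep-features (BoDF) sketch: per-class k-means vocabularies,
    # histogram encoding over the stacked vocabulary, SVM classification.
    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def deep_features(recording):
        # Placeholder: one feature vector per EEG channel (in the paper, from
        # a pretrained CNN applied to each channel's 2D spectrogram).
        return np.asarray(recording)

    def build_vocabulary(features_by_class, centers_per_class=10):
        # One k-means vocabulary per emotion class, stacked together.
        vocab = [KMeans(n_clusters=centers_per_class, n_init=10, random_state=0)
                 .fit(np.vstack(feats)).cluster_centers_
                 for feats in features_by_class]
        return np.vstack(vocab)

    def encode(recording, vocab):
        # Histogram of nearest-vocabulary-center assignments (the BoDF code).
        words = np.argmin(cdist(deep_features(recording), vocab), axis=1)
        return np.bincount(words, minlength=len(vocab)).astype(float)

    # Toy usage: 2 classes, 32 channels per recording, 8-dim "deep" features.
    rng = np.random.default_rng(0)
    train = [rng.normal(c, 1.0, size=(20, 32, 8)) for c in (0.0, 3.0)]
    vocab = build_vocabulary([r.reshape(-1, 8) for r in train])
    X = np.array([encode(rec, vocab) for cls in train for rec in cls])
    y = np.repeat([0, 1], 20)
    clf = SVC(kernel="linear").fit(X, y)
    print("train accuracy:", clf.score(X, y))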


Author(s):  
P. Trebbia ◽  
P. Ballongue ◽  
C. Colliex

An effective use of electron energy loss spectroscopy for chemical characterization of selected areas in the electron microscope can only be achieved with the development of quantitative measurement capabilities. The experimental assembly, sketched in Fig. 1, has therefore been built. It comprises four main elements. The analytical transmission electron microscope is a conventional microscope fitted with a Castaing and Henry dispersive unit (magnetic prism and electrostatic mirror). Recent modifications include the improvement of the vacuum in the specimen chamber (below 10⁻⁶ torr) and the adaptation of a new electrostatic mirror. The detection system, similar to the one described by Hermann et al. (1), is located in a separate chamber below the fluorescent screen which visualizes the energy loss spectrum. Variable apertures select the electrons that have lost an energy ΔE, within an energy window smaller than 1 eV, in front of a surface barrier solid state detector RTC BPY 52 100 S.Q. The sawtooth signal delivered by a charge-sensitive preamplifier (decay time of 5·10⁻⁵ s) is amplified, shaped into a Gaussian profile by an active filter, and counted by a single channel analyser.
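The pulse chain described above (decaying preamplifier steps, Gaussian shaping, threshold counting) can be illustrated with a small simulation. The sampling rate, shaping width, and event count are assumed values for illustration; only the 5·10⁻⁵ s decay time comes from the text.

    # Simulate the described detection chain: exponentially decaying
    # preamplifier steps, Gaussian shaping, and threshold-based counting.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    FS = 1e6            # assumed sampling rate, 1 MHz
    TAU = 5e-5          # preamplifier decay time from the text, 50 us
    t = np.arange(int(FS * 0.01)) / FS   # 10 ms of signal

    # Preamplifier output: decaying unit steps at random electron arrivals.
    rng = np.random.default_rng(1)
    arrivals = rng.choice(len(t), size=30, replace=False)
    preamp = np.zeros_like(t)
    for i in arrivals:
        preamp[i:] += np.exp(-np.arange(len(t) - i) / (TAU * FS))

    # Active-filter stage approximated by Gaussian shaping of the derivative.
    shaped = gaussian_filter1d(np.diff(preamp, prepend=0.0), sigma=20)

    # Single-channel-analyser style counting: pulses crossing a threshold.
    th = 0.5 * shaped.max()
    crossings = np.sum((shaped[1:] >= th) & (shaped[:-1] < th))
    print("counted pulses:", crossings)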


1968 ◽  
Vol 11 (1) ◽  
pp. 189-193 ◽  
Author(s):  
Lois Joan Sanders

A tongue pressure unit for measurement of lingual strength and patterns of tongue pressure is described. It consists of a force-displacement transducer; a single-channel, direct-writing recording system; and a specially designed tongue pressure disk, head stabilizer, and pressure unit holder. Calibration with known weights indicated an essentially linear and consistent response. An evaluation of subject reliability, in which 17 young adults were tested on two occasions, revealed no significant difference in the maximum pressure exerted during the two test trials. Suggestions for clinical and research use of the instrumentation are noted.
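The calibration check mentioned above amounts to a linear fit of recorder output against known weights; a minimal sketch follows, with made-up illustrative readings rather than data from the study.

    # Linear calibration fit: recorder output vs. known weights.
    import numpy as np

    weights_g = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # known weights (g)
    readings = np.array([0.02, 0.51, 1.01, 1.49, 2.00])      # recorder output

    slope, intercept = np.polyfit(weights_g, readings, 1)
    predicted = slope * weights_g + intercept
    ss_res = np.sum((readings - predicted) ** 2)
    ss_tot = np.sum((readings - readings.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot                          # linearity check
    print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r_squared:.5f}")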


2013 ◽  
Vol 61 (1) ◽  
pp. 7-15 ◽  
Author(s):  
Daniel Dittrich ◽  
Gregor Domes ◽  
Susi Loebel ◽  
Christoph Berger ◽  
Carsten Spitzer ◽  
...  

The present study examines the hypothesis of an alexithymia-associated deficit in recognizing emotional facial expressions in a clinical population. In addition, hypotheses on the relevance of specific emotion qualities and on gender differences are tested. 68 psychiatric outpatients and inpatients (44 women and 24 men) were assessed with the Toronto Alexithymia Scale (TAS-20), the Montgomery-Åsberg Depression Rating Scale (MADRS), the Symptom Checklist (SCL-90-R), and the Emotional Expression Multimorph Task (EEMT). The stimuli of the face recognition paradigm were facial expressions of basic emotions according to Ekman and Friesen, arranged in sequences of gradually increasing expression intensity. Multiple regression analysis was used to examine the association between TAS-20 score and facial emotion recognition (FER). While no significant relationship between TAS-20 score and FER emerged for the total sample or the male subsample, in the female subsample the TAS-20 score significantly predicted the total number of errors (β = .38, t = 2.055, p < 0.05) and the errors in recognizing anger and disgust (anger: β = .40, t = 2.240, p < 0.05; disgust: β = .41, t = 2.214, p < 0.05). The TAS-20 score explained 13.3% of the variance for angry faces and 19.7% for disgusted faces. There was no association between alexithymia and the time at which participants stopped the emotional sequences to give their rating (response latency). The results support an alexithymia-associated deficit in recognizing emotional facial expressions in female subjects in a heterogeneous clinical sample. This deficit could at least partly account for the difficulties highly alexithymic individuals have in social interactions and thus explain a predisposition to psychological and psychosomatic disorders.
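The reported analysis is a regression of facial-emotion-recognition errors on TAS-20 score; a hedged sketch of such a model, runnable with statsmodels on simulated data, is shown below. The simulated scores and error counts are illustrative only and do not reproduce the study's data or covariates.

    # Regression of FER errors on TAS-20 score, illustrated on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 44                                            # female subsample size
    tas20 = rng.uniform(30, 80, size=n)               # simulated TAS-20 scores
    errors = 0.2 * tas20 + rng.normal(0, 5, size=n)   # simulated FER error counts

    X = sm.add_constant(tas20)                        # intercept + predictor
    model = sm.OLS(errors, X).fit()
    print(model.summary())                            # beta, t, p, R^2 as in the text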

