Emotion Classification through Nonlinear EEG Analysis Using Machine Learning Methods

2018 ◽  
Vol 5 (4) ◽  
pp. 135-149 ◽  
Author(s):  
Morteza Zangeneh Soroush ◽  
Keivan Maghooli ◽  
Seyed Kamaledin Setarehdan ◽  
Ali Motie Nasrabadi

Background: Emotion recognition, as a subset of affective computing, has received considerable attention in recent years. Emotions are key to human-computer interaction. The electroencephalogram (EEG) is considered a valuable physiological source of information for classifying emotions; however, it exhibits complex and chaotic behavior. Methods: In this study, we extract important nonlinear features from EEGs with the aim of emotion recognition. We also take advantage of machine learning methods such as evolutionary feature selection and committee machines to enhance classification performance. Classification was performed with respect to both the arousal and valence dimensions. Results: The results suggest that the proposed method is successful and comparable to previous work. A recognition rate of 90% was achieved, and the most significant features are reported. We applied the final classification scheme to two different databases, our recorded EEGs and a benchmark dataset, to evaluate the suggested approach. Conclusion: Our findings confirm the effectiveness of using nonlinear features and a combination of classifiers. The results are also discussed from different points of view to better understand brain dynamics during emotion changes. This study offers useful insights into emotion classification and brain behavior during emotion elicitation.
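The abstract does not list the specific nonlinear features or classifiers used, so the following is only a minimal sketch of the general idea: compute a few illustrative nonlinear EEG features (Petrosian fractal dimension and a simplified sample entropy, chosen here as assumptions rather than the paper's actual feature set), select a subset of them, and combine several classifiers into a soft-voting committee with scikit-learn. The evolutionary (e.g., genetic-algorithm) feature selection mentioned in the abstract is replaced by a simple univariate selector for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def petrosian_fd(x):
    """Petrosian fractal dimension of a 1-D signal (one simple nonlinear feature)."""
    diff = np.diff(x)
    n_sign_changes = np.sum(diff[1:] * diff[:-1] < 0)
    n = len(x)
    return np.log10(n) / (np.log10(n) + np.log10(n / (n + 0.4 * n_sign_changes)))


def sample_entropy(x, m=2, r_factor=0.2):
    """Simplified sample entropy: a measure of signal irregularity."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_similar(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        total = 0
        for i in range(len(templates)):
            dists = np.max(np.abs(templates - templates[i]), axis=1)
            total += np.sum(dists <= r) - 1  # exclude the self-match
        return total

    b, a = count_similar(m), count_similar(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan


def eeg_features(epoch):
    """Per-channel nonlinear features for one EEG epoch of shape (channels, samples)."""
    feats = []
    for channel in epoch:
        feats.extend([petrosian_fd(channel), sample_entropy(channel)])
    return feats


# Placeholder data: 100 epochs, 4 channels, 256 samples, with binary arousal labels.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 4, 256))
arousal = rng.integers(0, 2, size=100)

X = np.array([eeg_features(ep) for ep in epochs])

# Committee machine: a soft-voting ensemble of three heterogeneous classifiers.
committee = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",
)

model = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=4), committee)
model.fit(X, arousal)
print(model.predict(X[:5]))
```

The same pipeline would be trained twice in practice, once for arousal and once for valence labels.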

2017 ◽  
Author(s):  
Zeshan Peng

With the advancement of machine learning methods, audio sentiment analysis has become an active research area in recent years. For example, business organizations are interested in persuasion tactics derived from vocal cues and acoustic measures in speech. A typical approach is to find a set of acoustic features from audio data that can indicate or predict a customer's attitude, opinion, or emotional state. Acoustic features have been widely used in many machine learning applications, such as music classification, language recognition, and emotion recognition. For emotion recognition, previous work shows that pitch and speech-rate features are important. This thesis focuses on determining sentiment from call center audio records, each containing a conversation between a sales representative and a customer. The sentiment of an audio record is considered positive if the conversation ended with an appointment being made, and negative otherwise. In this project, a data processing and machine learning pipeline for this problem has been developed. It consists of three major steps: 1) an audio record is split into segments by speaker turns; 2) acoustic features are extracted from each segment; and 3) classification models are trained on the acoustic features to predict sentiment. Different sets of features have been used, and different machine learning methods, including classical machine learning algorithms and deep neural networks, have been implemented in the pipeline. In the deep neural network method, the feature vectors of the audio segments are stacked in temporal order into a feature matrix, which is fed to deep convolutional neural networks as input. Experimental results based on real data show that acoustic features such as Mel-frequency cepstral coefficients, timbre, and chroma features are good indicators of sentiment. Temporal information in an audio record can be captured by deep convolutional neural networks for improved prediction accuracy.
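As a minimal sketch of steps 2) and 3) of the pipeline described above, the snippet below extracts MFCC, chroma, and a timbre-related spectral feature per segment with librosa, pools them over time, and trains a classical classifier. Speaker-turn segmentation (step 1) is assumed to have been done already; the synthetic segments, the label convention, and the choice of classifier are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def segment_features(y, sr):
    """Pool frame-level acoustic features of one audio segment into a fixed-length vector."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral envelope
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # pitch-class energy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # timbre-related brightness
    frames = np.vstack([mfcc, chroma, centroid])
    # Mean and standard deviation over time collapse the variable-length segment.
    return np.concatenate([frames.mean(axis=1), frames.std(axis=1)])


# Placeholder segments: in practice each would be one speaker turn loaded with librosa.load,
# with the call-level label (1 = appointment made, 0 = otherwise) copied to every segment.
rng = np.random.default_rng(0)
sr = 16000
segments = [rng.standard_normal(sr * 2) for _ in range(20)]  # twenty 2-second clips
labels = rng.integers(0, 2, size=20)

X = np.array([segment_features(seg, sr) for seg in segments])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In the deep-learning variant described in the abstract, the per-frame feature matrices would be stacked in temporal order and passed to a convolutional network rather than pooled into means and standard deviations.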


2016 ◽  
Vol 15 (01) ◽  
pp. 215-234 ◽  
Author(s):  
Changqin Quan ◽  
Fuji Ren

Research on blog emotion analysis and recognition has become increasingly important in recent years. In this study, based on the Chinese blog emotion corpus (Ren-CECps), we analyze and compare blog emotion visualization at different text levels: word, sentence, and paragraph. A blog emotion visualization system is then designed for practical applications. Machine learning methods are applied to implement blog emotion recognition at the different textual levels. Built on the emotion recognition engine, the blog emotion visualization interface provides a more intuitive display of emotions in blogs, detecting bloggers' emotions and capturing emotional changes rapidly. In addition, we evaluated the performance of sentence emotion recognition by comparing five classification algorithms under different schemas, which demonstrates the effectiveness of the Complement Naive Bayes model for sentence emotion recognition. The system can recognize multi-label emotions in blogs, which provides a richer and more detailed emotion expression.
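The abstract names Complement Naive Bayes as the best-performing sentence-level classifier; the sketch below shows one plausible way to set that up with scikit-learn, wrapping ComplementNB in a one-vs-rest scheme so that a sentence can carry several emotion labels at once. The tiny example corpus, the English placeholder tokens (real Ren-CECps sentences are Chinese and would need word segmentation, e.g., with jieba), and the TF-IDF features are assumptions for illustration, not details from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Toy stand-in for segmented blog sentences and their (possibly multiple) emotion labels.
sentences = [
    "today the weather is wonderful and I feel great",
    "the news made me angry and anxious at the same time",
    "saying goodbye to my old friend left me sad",
    "what a pleasant surprise to meet you here",
]
labels = [["joy"], ["anger", "anxiety"], ["sorrow"], ["joy", "surprise"]]

# Binary indicator matrix so each sentence can have several emotions at once.
binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(labels)

# One ComplementNB classifier per emotion, on top of shared TF-IDF features.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(ComplementNB()),
)
model.fit(sentences, Y)

predicted = model.predict(["meeting my friend again filled me with joy"])
print(binarizer.inverse_transform(predicted))
```

The one-vs-rest wrapper is what turns the single-label ComplementNB into a multi-label recognizer; the visualization layer described in the abstract would sit on top of predictions like these.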

