DEEPHER: Human Emotion Recognition Using an EEG-Based DEEP Learning Network Model

2021 ◽  
Vol 10 (1) ◽  
pp. 32
Author(s):  
Akhilesh Kumar ◽  
Awadhesh Kumar

Emotion identification and categorization have been emerging topics in brain-machine interfaces in the current era. Audio, visual, and electroencephalography (EEG) data have all been shown to be useful for automated emotion identification in a number of studies. EEG-based emotion detection is a critical component of psychiatric health assessment for individuals. If EEG sensor data are collected from multiple experimental sessions or participants, the underlying signals are invariably non-stationary. Because EEG signals are noisy, non-stationary, and non-linear, creating an intelligent system that can identify emotions with good accuracy is challenging. Many researchers have shown evidence that EEG brain waves may be used to determine feelings. This study introduces a novel automated emotion identification system that employs deep learning principles to recognize emotions from EEG signals recorded during computer games. EEG data were obtained from 28 distinct participants using the 14-channel Emotiv EPOC+ portable and wearable EEG headset. Participants played four distinct emotional computer games for five minutes each, yielding a total of 20 minutes of EEG data per participant. The suggested framework is simple and categorizes four classes of emotions during game play. The results demonstrate that the suggested model-based emotion detection framework is a viable method for recognizing emotions from EEG data. The network achieves 99.99% accuracy along with low computational time.
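The abstract does not specify the network architecture, so the following is only a minimal PyTorch sketch of a 1-D convolutional classifier over 14-channel EEG windows with four output classes; the window length, layer widths, and the use of a plain CNN are illustrative assumptions rather than the authors' model.

```python
import torch
import torch.nn as nn

class EEGEmotionNet(nn.Module):
    """Minimal 1-D CNN over 14-channel EEG windows with 4 emotion classes.
    Layer sizes and the 128-sample window are illustrative assumptions."""
    def __init__(self, n_channels: int = 14, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 14, time)
        return self.classifier(self.features(x).squeeze(-1))

# One batch of 1-second windows at an assumed 128 Hz sampling rate
logits = EEGEmotionNet()(torch.randn(8, 14, 128))
print(logits.shape)                    # torch.Size([8, 4])
```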

2021 ◽  
Vol 11 (21) ◽  
pp. 9948
Author(s):  
Amira Echtioui ◽  
Ayoub Mlaouah ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  
...  

Recently, electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because they can encode a person’s intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Therefore, classifying the motor imagery tasks in these signals is important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult due to the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified CNN1 merged with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, which uses spatial and frequency characteristics, outperforms state-of-the-art machine/deep learning techniques for EEG classification, reaching an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left/right hand, feet, and tongue). The experimental results demonstrate the feasibility of the proposed method for classifying MI-EEG signals, and it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
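The summary does not detail how the CNN1 and CNN2 branches are fused; the sketch below shows one hedged way such a two-branch merged CNN could be wired in PyTorch. The 22-channel/4-class shapes follow the BCI Competition IV-2a setup, while the branch depths, kernel sizes, and pooling are assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class MergedCNN(nn.Module):
    """Two parallel convolutional branches over raw MI-EEG, concatenated
    before the classifier. Purely illustrative; not the authors' CNN1/CNN2."""
    def __init__(self, n_channels: int = 22, n_classes: int = 4):
        super().__init__()
        def branch(kernel):            # temporal conv -> spatial conv -> pool
            return nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=(1, kernel), padding=(0, kernel // 2)),
                nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
                nn.BatchNorm2d(32),
                nn.ELU(),
                nn.AdaptiveAvgPool2d((1, 8)),
            )
        self.branch_a = branch(15)     # shorter temporal receptive field
        self.branch_b = branch(45)     # longer temporal receptive field
        self.classifier = nn.Linear(2 * 32 * 8, n_classes)

    def forward(self, x):              # x: (batch, 1, channels, time)
        a = self.branch_a(x).flatten(1)
        b = self.branch_b(x).flatten(1)
        return self.classifier(torch.cat([a, b], dim=1))

logits = MergedCNN()(torch.randn(4, 1, 22, 1000))   # ~4 s at 250 Hz (assumed)
print(logits.shape)                                  # torch.Size([4, 4])
```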


2020 ◽  
Vol 32 (4) ◽  
pp. 731-737
Author(s):  
Akinari Onishi

A brain-computer interface (BCI) enables us to interact with the external world via electroencephalography (EEG) signals. Recently, deep learning methods have been applied to BCIs to reduce the time required for recording training data. However, more evidence is required because comparative studies are scarce. To provide such evidence, this study proposed a deep learning method named the time-wise convolutional neural network (TWCNN), which was applied to a BCI dataset. In the evaluation, EEG data from one subject were classified using previously recorded EEG data from other subjects. As a result, TWCNN showed the highest accuracy, which was significantly higher than that of the typically used classifier. The results suggest that the deep learning method may be useful for reducing the recording time of training data.
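The distinctive part of this evaluation is classifying one subject's EEG using a model trained only on other subjects' data. A minimal leave-one-subject-out loop might look like the sketch below, with a toy feature matrix and a linear discriminant classifier standing in for TWCNN (all shapes and the classifier choice are assumptions).

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in: 5 subjects, 100 epochs each, 64-dim features (all shapes assumed)
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))
y = rng.integers(0, 2, 500)
subjects = np.repeat(np.arange(5), 100)

# Each fold trains on the other subjects' EEG and tests on the held-out subject,
# mirroring the cross-subject evaluation described above (classifier is a stand-in).
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
    acc = clf.score(X[test_idx], y[test_idx])
    print(f"subject {subjects[test_idx][0]}: accuracy {acc:.2f}")
```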


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 5035 ◽  
Author(s):  
Son ◽  
Jeong ◽  
Lee

When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, such systems cannot convey the fault self-diagnosis information and the instrument cluster information that indicate the current state of the vehicle while driving. This paper proposes an audification and visualization system (AVS) for autonomous vehicles, based on deep learning, that solves this problem for blind and deaf people. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and lays out the visualized data according to the size of the vehicle’s display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times shorter than in a cloud server. In addition, the overall computational time of the AVS was approximately 2 ms faster than that of the existing instrument cluster. Because the proposed AVS enables blind and deaf people to select only what they want to hear and see, it reduces transmission overload and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents involving disabled and other passengers in advance.
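As a rough illustration of the three-module decomposition described above, a hedged Python skeleton is given below; all class names, method names, and data fields are assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataCollectionModule:          # DCMM: stores vehicle/OBD sensor data
    records: List[Dict] = field(default_factory=list)
    def store(self, sample: Dict) -> None:
        self.records.append(sample)

class AudificationModule:            # ACM: STS (speech -> text) and TWS (text -> voice)
    def speech_to_text(self, audio: bytes) -> str:
        raise NotImplementedError("hook up an STT engine here")
    def text_to_wave(self, text: str) -> bytes:
        raise NotImplementedError("hook up a TTS engine here")

class VisualizationModule:           # DVM: lays out visual components per display size
    def render(self, data: Dict, display_size: tuple) -> None:
        print(f"rendering {list(data)} on {display_size[0]}x{display_size[1]} display")

# A passenger-facing loop would route DCMM data either to the ACM (blind users)
# or to the DVM (deaf users), letting each passenger choose what to hear or see.
```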


Author(s):  
Ahmed Fares ◽  
Sheng-hua Zhong ◽  
Jianmin Jiang

Abstract Background As a physiological signal, EEG data cannot be subjectively changed or hidden. Compared with other physiological signals, EEG signals are directly related to human cortical activities, with excellent temporal resolution. With the rapid development of machine learning and artificial intelligence, the analysis and computation of EEG has made great progress, leading to a significant boost in performance for content understanding and pattern recognition of brain activities across the areas of both neural science and computer vision. While this enormous advance has attracted a wide range of interest among relevant research communities, EEG-based classification of brain activities evoked by images still demands further improvement with respect to its accuracy, generalization, and interpretation, and some characteristics of the human brain remain relatively unexplored. Methods We propose a region-level stacked bi-directional deep learning framework for EEG-based image classification. Inspired by the hemispheric lateralization of the human brain, we propose to extract additional information at the regional level to strengthen and emphasize the differences between the two hemispheres. Stacked bi-directional long short-term memories (LSTMs) are used to capture the dynamic correlations hidden from both the past and the future to the current state in EEG sequences. Results Extensive experiments are carried out, and our results demonstrate the effectiveness of the proposed framework. Compared with existing state-of-the-art methods, our framework achieves outstanding performance in EEG-based classification of brain activities evoked by images. In addition, we find that gamma-band signals are not only useful for achieving good performance in EEG-based image classification but also play a significant role in capturing relationships between the neural activations and specific emotional states. Conclusions Our proposed framework provides an improved solution to the problem of identifying, by analyzing the EEG signals evoked by a stimulus image, which class that image belongs to. The region-level information is extracted to preserve and emphasize the hemispheric lateralization of neural functions or cognitive processes in the human brain. Further, stacked bi-directional LSTMs are used to capture the dynamic correlations hidden in the EEG data. Extensive experiments on a standard EEG-based image classification dataset validate that our framework outperforms existing state-of-the-art methods under various contexts and experimental setups.
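To make the region-level, stacked bi-directional idea concrete, here is a minimal PyTorch sketch that splits the channels into left/right hemisphere groups and feeds a stacked BiLSTM; the channel split, hidden sizes, and 40-way output are illustrative assumptions, not the paper's exact framework.

```python
import torch
import torch.nn as nn

class RegionBiLSTM(nn.Module):
    """Stacked bidirectional LSTM over EEG sequences, with channels split into
    left/right hemisphere groups to echo the lateralization idea. The split,
    hidden sizes, and class count are illustrative assumptions."""
    def __init__(self, n_channels_per_side: int = 32, hidden: int = 64, n_classes: int = 40):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * n_channels_per_side, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, left, right):        # each: (batch, time, channels_per_side)
        diff = left - right                 # emphasize inter-hemispheric differences
        x = torch.cat([left + right, diff], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])        # classify from the last time step

left = torch.randn(4, 200, 32)
right = torch.randn(4, 200, 32)
print(RegionBiLSTM()(left, right).shape)    # torch.Size([4, 40])
```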


2019 ◽  
Vol 89 (6) ◽  
pp. 903-909 ◽  
Author(s):  
Ji-Hoon Park ◽  
Hye-Won Hwang ◽  
Jun-Ho Moon ◽  
Youngsung Yu ◽  
Hansuk Kim ◽  
...  

ABSTRACT Objective: To compare the accuracy and computational efficiency of two of the latest deep-learning algorithms for automatic identification of cephalometric landmarks. Materials and Methods: A total of 1028 cephalometric radiographic images were selected as learning data to train the You-Only-Look-Once version 3 (YOLOv3) and Single Shot Multibox Detector (SSD) methods. The number of target labels was 80 landmarks. After the deep-learning process, the algorithms were tested using a new test data set composed of 283 images. Accuracy was determined by measuring the point-to-point error and success detection rate and was visualized by drawing scattergrams. The computational time of both algorithms was also recorded. Results: The YOLOv3 algorithm outperformed SSD in accuracy for 38 of the 80 landmarks. The other 42 of the 80 landmarks did not show a statistically significant difference between YOLOv3 and SSD. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time spent per image was 0.05 seconds for YOLOv3 and 2.89 seconds for SSD. YOLOv3 showed approximately 5% higher accuracy compared with the top benchmarks in the literature. Conclusions: Of the two latest deep-learning methods applied, YOLOv3 seemed the more promising as a fully automated cephalometric landmark identification system for use in clinical practice.
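The two accuracy metrics used here, point-to-point error and success detection rate, are straightforward to compute; a small NumPy sketch follows (the 2 mm threshold and the sample coordinates are illustrative assumptions).

```python
import numpy as np

def point_to_point_error(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Euclidean distance between predicted and ground-truth landmarks.
    pred, truth: (n_landmarks, 2) arrays, here assumed to be in millimetres."""
    return np.linalg.norm(pred - truth, axis=1)

def success_detection_rate(errors: np.ndarray, threshold_mm: float = 2.0) -> float:
    """Fraction of landmarks detected within the given radius."""
    return float(np.mean(errors < threshold_mm))

pred = np.array([[10.2, 33.9], [55.1, 80.4], [70.0, 12.5]])
truth = np.array([[10.0, 34.0], [54.0, 82.0], [75.0, 12.0]])
err = point_to_point_error(pred, truth)
print(err.round(2), success_detection_rate(err, 2.0))   # errors ~ [0.22 1.94 5.02], SDR ~ 0.67
```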


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 102
Author(s):  
Sirajdin Olagoke Adeshina ◽  
Haidi Ibrahim ◽  
Soo Siang Teoh ◽  
Seng Chun Hoo

Face detection by electronic systems has been leveraged by private and government establishments to enhance the effectiveness of a wide range of applications in our day-to-day activities, security, and businesses. Most face detection algorithms that can reduce the problems posed by constrained and unconstrained environmental conditions, such as unbalanced illumination, weather conditions, distance from the camera, and background variations, are highly computationally intensive and therefore largely unusable in real-time applications. This paper developed face detectors by utilizing selected Haar-like and local binary pattern (LBP) features, based on their number of uses at each stage of training, using MATLAB’s trainCascadeObjectDetector function. We used 2577 positive face samples and 37,206 negative samples to train Haar-like and LBP face detectors for a range of False Alarm Rate (FAR) values (i.e., 0.01, 0.05, and 0.1). The study shows that the Haar cascade face detector with few stages (i.e., six stages) at a FAR of 0.1 is the most efficient when tested on a classroom image dataset, achieving 100% True Positive Rate (TPR). However, the deep learning models ResNet101 and ResNet50 outperformed the average Haar cascade performance by 9.09% and 0.76% in TPR, respectively. The simplicity and relatively low computational time of our approach (i.e., 1.09 s) give it an edge over deep learning (139.5 s) in online classroom applications. The TPR of the proposed algorithm is 92.71% when tested on images in the synthetic Labeled Faces in the Wild (LFW) dataset and 98.55% for images in MUCT face dataset “a”, a slight improvement in average TPR over the conventional face identification system.
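The detectors above were trained with MATLAB’s trainCascadeObjectDetector; as a hedged stand-in, the sketch below runs a pretrained OpenCV Haar cascade in Python to show how such a cascade is applied at detection time (the cascade file and the image path are assumptions, not the authors' trained models or data).

```python
import cv2

# Detection with a pretrained OpenCV Haar cascade as a stand-in for the
# MATLAB-trained detectors described above. "classroom.jpg" is a hypothetical
# input image; scaleFactor/minNeighbors roughly trade recall vs. false alarms.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("classroom.jpg", cv2.IMREAD_GRAYSCALE)
faces = cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
print(f"{len(faces)} face(s) detected")
for (x, y, w, h) in faces:
    print(f"  box at ({x}, {y}) size {w}x{h}")
```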


2021 ◽  
Author(s):  
Puja A. Chavan ◽  
Sharmishta Desai

Emotion awareness is one of the most important subjects in the field of affective computing. Human emotion can be predicted using nonverbal behavioral methods such as facial expression recognition, verbal behavioral methods such as speech emotion recognition, or physiological-signal-based methods such as emotion recognition from the electroencephalogram (EEG). However, data obtained from either nonverbal or verbal behaviors are indirect emotional signals that only suggest brain activity. Unlike nonverbal or verbal actions, EEG signals are recorded directly from the human cortex and thus may be more effective in representing the brain's inner emotional states. Consequently, EEG data can be more accurate than behavioral data when used to measure human emotion. For this reason, identifying human emotion from EEG signals has become a very important research subject in current emotional brain-computer interfaces (BCIs), which aim to infer human emotional states from recorded EEG signals. In this paper, a hybrid deep learning approach combining a CNN with a long short-term memory (LSTM) network is proposed and investigated for the automatic classification of epileptic disease from EEG signals. The CNN extracts features from the signals at runtime, while the LSTM classifies the resulting data. Finally, the system labels each EEG data file as normal or epileptic. This research describes a state-of-the-art approach for effective epileptic disease detection, prediction, and classification using hybrid deep learning algorithms, and demonstrates a combination of CNN and LSTM for end-to-end classification of EEG signals across numerous existing systems.
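A minimal sketch of the CNN-plus-LSTM idea, with the CNN extracting features from raw EEG windows and the LSTM producing a normal/epileptic decision, is given below; the channel count, window length, and layer widths are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """CNN front-end extracts features from raw EEG; an LSTM classifies the
    resulting sequence as normal vs. epileptic. Channel count (23), window
    length, and layer sizes are illustrative assumptions."""
    def __init__(self, n_channels: int = 23, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = self.cnn(x).permute(0, 2, 1)   # -> (batch, time', 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # logits for normal / epileptic

print(CNNLSTM()(torch.randn(8, 23, 1024)).shape)   # torch.Size([8, 2])
```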


Author(s):  
Aladdin Ayesh ◽  
Miguel Arevalillo-Herra´ez ◽  
Pablo Arnau-González

This paper investigates the possibility of identifying classes by clustering. The study employs Self-Organizing Maps (SOM) to identify clusters in EEG signals that can then be mapped to emotional classes. We begin by training SOMs of varying sizes on EEG data from the public DEAP dataset. The resulting graphs showing neighbor distances, sample hits, and weight positions are examined. Following that, the ground-truth labels provided in DEAP are tested in order to identify correlations between the labels and the clusters produced by the SOM. The results show that there is potential for class discovery using SOM-based clustering. We conclude by evaluating the implications of this work and the difficulties in evaluating its outcome.
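A minimal sketch of the SOM-based clustering step is shown below, using the third-party MiniSom package and a toy feature matrix in place of the DEAP features; the package choice, map size, and feature dimensionality are assumptions rather than the authors' setup.

```python
import numpy as np
from minisom import MiniSom   # assumes the MiniSom package is installed

# Toy stand-in for per-trial EEG feature vectors from DEAP (shapes assumed):
rng = np.random.default_rng(0)
features = rng.standard_normal((1280, 32))      # e.g. 1280 trials, 32-dim features

# Train a small SOM and map each trial to its best-matching unit; those unit
# coordinates act as cluster labels that can be compared against the DEAP
# valence/arousal ground truth.
som = MiniSom(x=8, y=8, input_len=32, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=5000)

cluster_ids = [som.winner(v) for v in features]  # (row, col) of each trial's BMU
print(cluster_ids[:5])
```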

