Emotion Recognition Based on EEG Using Generative Adversarial Nets and Convolutional Neural Network

2021, Vol 2021, pp. 1-11
Author(s): Bo Pan, Wei Zheng

Emotion recognition plays an important role in the field of human-computer interaction (HCI). Automatic emotion recognition based on EEG is an important topic in brain-computer interface (BCI) applications. Currently, deep learning is widely used for EEG emotion recognition and has achieved remarkable results. However, because data collection is costly, most EEG datasets contain only a small amount of EEG data, and the sample categories in these datasets are unbalanced. These problems make it difficult for a deep learning model to predict emotional states. In this paper, we propose a new sample generation method using generative adversarial networks to address EEG sample shortage and sample category imbalance. In experiments, we explore the emotion recognition performance of frequency-band-correlation and frequency-band-separation computational models before and after data augmentation on standard EEG-based emotion datasets. Our experimental results show that data augmentation with generative adversarial networks can effectively improve the performance of deep-learning-based emotion recognition. We also find that the frequency-band-correlation deep learning model is more conducive to emotion recognition.
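
The abstract does not include code; a minimal PyTorch sketch of the augmentation idea, generating synthetic feature vectors for an underrepresented emotion class, might look as follows. The feature dimension, network sizes, and training schedule are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch of GAN-based augmentation for EEG feature vectors.
# All dimensions and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn

n_features, latent_dim = 160, 64  # e.g., 32 channels x 5 bands (assumed)

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, n_features),
)
discriminator = nn.Sequential(
    nn.Linear(n_features, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.randn(32, n_features)  # placeholder minority-class batch

for step in range(1000):
    # Discriminator step: real samples -> 1, generated samples -> 0.
    z = torch.randn(real.size(0), latent_dim)
    fake = generator(z).detach()
    loss_d = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(real.size(0), 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to fool the discriminator.
    fake = generator(torch.randn(real.size(0), latent_dim))
    loss_g = bce(discriminator(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Generated samples can then rebalance the minority emotion class.
augmented = generator(torch.randn(500, latent_dim)).detach()
```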

2021, Vol 13 (20), pp. 4044
Author(s): Étienne Clabaut, Myriam Lemelin, Mickaël Germain, Yacine Bouroubi, Tony St-Pierre

Training a deep learning model requires highly variable data to permit reasonable generalization. If the variability of the data to be processed is low, such generalization is of limited interest, and it can instead be worthwhile to specialize the model to a particular theme. Enhanced super-resolution generative adversarial networks (ESRGAN), a specific type of deep learning architecture, increase the spatial resolution of remote sensing images by "hallucinating" non-existent details. In this study, we show that ESRGAN creates better quality images when trained on thematically classified images than when trained on a wide variety of examples. All things being equal, we further show that the algorithm performs better on some themes than on others. Texture analysis shows that this performance is correlated with the inverse difference moment and entropy of the images.
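
The two texture measures named above can be computed from a gray-level co-occurrence matrix; a small scikit-image sketch follows, with a random placeholder image and assumed GLCM parameters rather than the study's settings.

```python
# Sketch of the two texture measures via a gray-level co-occurrence matrix.
# Image and GLCM parameters are placeholders, not the study's configuration.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # placeholder

glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

# scikit-image's "homogeneity" is the inverse difference moment (IDM).
idm = graycoprops(glcm, 'homogeneity')[0, 0]

# GLCM entropy: -sum(p * log2(p)) over nonzero entries.
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(f"IDM={idm:.3f}, entropy={entropy:.3f}")
```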


Author(s): Ajeet Ram Pathak, Somesh Bhalsing, Shivani Desai, Monica Gandhi, Pranathi Patwardhan

Complexity, 2020, Vol 2020, pp. 1-15
Author(s): Hao Chao, Liang Dong, Yongli Liu, Baoyun Lu

Emotion recognition based on multichannel electroencephalogram (EEG) signals is a key research area in the field of affective computing. Traditional methods extract EEG features from each channel based on extensive domain knowledge and ignore the spatial characteristics and global synchronization information across all channels. This paper proposes a global feature extraction method that encapsulates the multichannel EEG signals into gray images. The maximal information coefficient (MIC) was first measured for every pair of channels. Subsequently, an MIC matrix was constructed according to the electrode arrangement rules and represented as an MIC gray image. Finally, a deep learning model designed with two principal component analysis convolutional layers and a nonlinear transformation operation extracted the spatial characteristics and global interchannel synchronization features from the constructed feature images, which were then input to support vector machines to perform the emotion recognition tasks. Experiments were conducted on the benchmark dataset for emotion analysis using EEG, physiological, and video signals. The experimental results demonstrated that the global synchronization features and spatial characteristics are beneficial for recognizing emotions and that the proposed deep learning model effectively mines and utilizes these two salient features.
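
As a sketch of this feature-image construction, MIC can be estimated with the minepy package; the channel count, segment length, and MINE parameters below are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of the MIC gray-image construction from an EEG segment.
import numpy as np
from minepy import MINE

n_channels, n_samples = 32, 512
eeg = np.random.randn(n_channels, n_samples)  # placeholder EEG segment

mine = MINE(alpha=0.6, c=15)  # default MINE estimator parameters
mic = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    for j in range(i, n_channels):
        mine.compute_score(eeg[i], eeg[j])
        mic[i, j] = mic[j, i] = mine.mic()

# Scale the symmetric MIC matrix to [0, 255] to obtain a gray image that a
# convolutional feature extractor (and then an SVM) can consume.
gray = (255 * (mic - mic.min()) / (mic.max() - mic.min())).astype(np.uint8)
```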


Sensors, 2020, Vol 20 (10), pp. 2972
Author(s): Qinghua Gao, Shuo Jiang, Peter B. Shull

Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and the finger angle estimation R² was 0.922, both significantly higher than those of existing shallow learning approaches applied in isolation. The proposed method could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
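
A minimal PyTorch sketch of such a dual-output architecture, with a shared feature extractor feeding a classification head and a regression head, is shown below; the window length, sensor count, class count, and layer sizes are illustrative assumptions, not the authors' network.

```python
# Sketch of a dual-output network: joint gesture classification and
# finger-angle regression. All dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class DualOutputNet(nn.Module):
    def __init__(self, n_sensors=10, n_gestures=11, n_fingers=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.gesture_head = nn.Linear(32, n_gestures)  # classification
        self.angle_head = nn.Linear(32, n_fingers)     # regression

    def forward(self, x):                  # x: (batch, sensors, time)
        h = self.features(x)
        return self.gesture_head(h), self.angle_head(h)

model = DualOutputNet()
x = torch.randn(8, 10, 100)                # placeholder sensor windows
logits, angles = model(x)
# Joint loss: cross-entropy for gestures plus MSE for finger angles.
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 11, (8,))) + \
       nn.MSELoss()(angles, torch.randn(8, 5))
```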


Author(s): Wei Zhang, Gaoliang Peng, Chuanhao Li, Yuanhang Chen, Zhujun Zhang

Intelligent fault diagnosis techniques have replaced time-consuming and unreliable manual analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis thanks to their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs) and uses wide kernels in the first convolutional layer to extract features and suppress high-frequency noise. Small convolutional kernels in the subsequent layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs currently applied to fault diagnosis is not very high. WDCNN not only achieves 100% classification accuracy on normal signals but also outperforms the state-of-the-art DNN model based on frequency features under different working loads and in noisy environments.
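
The wide-first-kernel idea can be sketched as follows in PyTorch; the kernel sizes and channel counts are assumptions in the spirit of the description, and AdaBN (recomputing batch-norm statistics on the target domain) is omitted.

```python
# Sketch of the WDCNN wide-first-kernel idea; dimensions are assumptions.
import torch
import torch.nn as nn

class WDCNNSketch(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            # Wide first-layer kernel: a large receptive field that helps
            # suppress high-frequency noise in the raw vibration signal.
            nn.Conv1d(1, 16, kernel_size=64, stride=16, padding=24),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            # Small kernels in subsequent layers for nonlinear mapping.
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):        # x: (batch, 1, signal_length)
        return self.net(x)

model = WDCNNSketch()
out = model(torch.randn(4, 1, 2048))  # raw vibration segments (placeholder)
```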


2020
Author(s): Jiarui Feng, Amanda Zeng, Yixin Chen, Philip Payne, Fuhai Li

Uncovering signaling links or cascades among proteins that potentially regulate tumor development and drug response is one of the most critical and challenging tasks in cancer molecular biology. Inhibiting targets on the core signaling cascades can be effective as a novel cancer treatment regimen. However, signaling cascade inference remains an open problem, and there is a lack of effective computational models. The widely used gene co-expression network (which contains no direct signaling cascades) and shortest-path-based protein-protein interaction (PPI) network analysis (which includes too many interactions and ignores the sparsity of signaling cascades) were not specifically designed to predict direct and sparse signaling cascades. To address these challenges, we propose a novel deep learning model, deepSignalingLinkNet, which predicts signaling cascades by integrating transcriptomics data and copy number data from a large set of cancer samples with protein-protein interactions (PPIs) via a novel deep graph neural network. Unlike existing models, the proposed model is trained on the curated KEGG signaling pathways to identify the informative omics and PPI topology features in a data-driven manner and predict potential signaling cascades. The validation results indicated the feasibility of signaling cascade prediction using the proposed deep learning models. Moreover, the trained model can potentially predict signaling cascades among new proteins by transferring the patterns learned on the curated signaling pathways. The code is available at: https://github.com/fuhaililab/deepSignalingPathwayPrediction.
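
The authors' implementation is in the repository above; purely as a generic illustration of GNN-based link prediction over a PPI graph, a PyTorch Geometric sketch might look like this. The feature dimensions, scoring head, and random inputs are assumptions, not the deepSignalingLinkNet architecture.

```python
# Generic sketch of GNN link prediction on a PPI graph (not the authors' code).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class LinkPredictor(nn.Module):
    def __init__(self, n_omics_features=8, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(n_omics_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)

    def forward(self, x, edge_index, pairs):
        # x: per-protein omics features; edge_index: PPI edges, shape (2, E).
        h = torch.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        # Score candidate protein pairs by embedding dot product.
        return (h[pairs[0]] * h[pairs[1]]).sum(dim=-1)

model = LinkPredictor()
x = torch.randn(100, 8)                       # 100 proteins (placeholder)
edge_index = torch.randint(0, 100, (2, 400))  # placeholder PPI edges
pairs = torch.randint(0, 100, (2, 20))        # candidate signaling links
scores = model(x, edge_index, pairs)          # trained against KEGG labels
```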


Electronics, 2020, Vol 9 (6), pp. 1048
Author(s): Muhammad Ather Iqbal Hussain, Babar Khan, Zhijie Wang, Shenyi Ding

The weave pattern (texture) of woven fabric is considered an important factor in the design and production of high-quality fabric. Traditionally, recognition of woven fabric relies on manual visual inspection, which poses many challenges. Moreover, approaches based on early machine learning algorithms depend directly on handcrafted features, which are time-consuming and error-prone to engineer. Hence, an automated system is needed for the classification of woven fabric to improve productivity. In this paper, we propose a deep learning model based on data augmentation and a transfer learning approach for the classification and recognition of woven fabrics. The model uses a residual network (ResNet), in which the fabric texture features are extracted and classified automatically in an end-to-end fashion. We evaluated the results of our model using metrics such as accuracy, balanced accuracy, and F1-score. The experimental results show that the proposed model is robust and achieves state-of-the-art accuracy even when the physical properties of the fabric are changed. We compared our results with other baseline approaches and with a pretrained VGGNet deep learning model; the comparison showed that the proposed method achieves higher accuracy when rotational orientations in fabric and proper lighting effects are considered.
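
A minimal transfer-learning sketch with a pretrained torchvision ResNet follows; the class count and layer-freezing policy are assumptions rather than the paper's setup.

```python
# Sketch of transfer learning for fabric classification with a pretrained
# ResNet. The class count and freezing policy are illustrative assumptions.
import torch.nn as nn
from torchvision import models

n_weave_classes = 3  # e.g., plain, twill, satin (assumed)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False          # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, n_weave_classes)  # new head

# The model can now be fine-tuned on (augmented) fabric images as usual.
```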


Sensors, 2020, Vol 20 (21), pp. 6126
Author(s): Tae Hyong Kim, Ahnryul Choi, Hyun Mu Heo, Hyunggun Kim, Joung Hwan Mun

Pre-impact fall detection can detect a fall before a body segment hits the ground. When integrated with a protective system, it can directly prevent injury from hitting the ground. The peak magnitude of impact acceleration is one of the key measurement factors affecting the severity of an injury, and it can be used as a design parameter for wearable protective devices. In our study, a novel method is proposed to predict the impact acceleration magnitude after loss of balance using a single inertial measurement unit (IMU) sensor and a sequential deep learning model. Twenty-four healthy participants took part in fall experiments. Each participant wore a single IMU sensor on the waist to collect tri-axial accelerometer and angular velocity data. A deep learning method, bi-directional long short-term memory (LSTM) regression, was applied to predict a fall's impact acceleration magnitude prior to impact, for falls in five directions. To improve prediction performance, data augmentation techniques were applied to enlarge the dataset. Our proposed model showed a mean absolute percentage error (MAPE) of 6.69 ± 0.33% with an r value of 0.93 when all three types of data augmentation techniques were applied. Additionally, MAPE was significantly reduced, by 45.2%, when the amount of training data was increased fourfold. These results show that impact acceleration magnitude can be used as an activation parameter for fall prevention, such as in a wearable airbag system, by optimizing the deployment process to minimize fall injury in real time.
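
A minimal PyTorch sketch of a bidirectional LSTM regressor for this prediction step follows; the window length and layer sizes are illustrative assumptions, not the study's model.

```python
# Sketch of a bidirectional LSTM regressor for pre-impact prediction.
# Window length and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    def __init__(self, n_inputs=6, hidden=64):
        super().__init__()
        # 6 inputs: tri-axial acceleration + tri-axial angular velocity.
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # impact acceleration magnitude

    def forward(self, x):             # x: (batch, time, 6)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # regress from the last time step

model = BiLSTMRegressor()
pred = model(torch.randn(8, 100, 6))  # IMU windows before impact (placeholder)
```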


Diagnostics, 2021, Vol 11 (7), pp. 1182
Author(s): Cheng-Yi Kao, Chiao-Yun Lin, Cheng-Chen Chao, Han-Sheng Huang, Hsing-Yu Lee, ...

We aimed to set up an Automated Radiology Alert System (ARAS) for the detection of pneumothorax in chest radiographs by a deep learning model, and to compare its efficiency and diagnostic performance with the existing Manual Radiology Alert System (MRAS) at a tertiary medical center. This study retrospectively collected 1235 chest radiographs labeled for pneumothorax from 2013 to 2019 and 337 chest radiographs with negative findings from 2019, which were separated into training and validation datasets for the deep learning model of the ARAS. The efficiency before and after using the model was compared in terms of alert time and report time. During parallel running of the two systems from September to October 2020, chest radiographs prospectively acquired in the emergency department from patients older than 6 years served as the testing dataset for the comparison of diagnostic performance. Efficiency improved after the model was deployed: the mean alert time decreased from 8.45 min to 0.69 min and the mean report time from 2.81 days to 1.59 days. The comparison of the diagnostic performance of both systems on 3739 chest radiographs acquired during parallel running showed that the ARAS was better than the MRAS in terms of sensitivity (recall), area under the receiver operating characteristic curve, and F1 score (0.837 vs. 0.256, 0.914 vs. 0.628, and 0.754 vs. 0.407, respectively), but worse in terms of positive predictive value (PPV, precision) (0.686 vs. 1.000). This study successfully designed a deep learning model for pneumothorax detection on chest radiographs and set up an ARAS with improved efficiency and overall diagnostic performance.
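
For reference, these comparison metrics can be computed from predictions with scikit-learn; the labels and scores in this sketch are random placeholders, not study data.

```python
# Sketch of the reported comparison metrics; inputs are placeholders.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score)

y_true = np.random.randint(0, 2, 200)   # 1 = pneumothorax (placeholder)
y_score = np.random.rand(200)           # model probabilities (placeholder)
y_pred = (y_score >= 0.5).astype(int)   # alert threshold (assumed)

print("sensitivity (recall):", recall_score(y_true, y_pred))
print("PPV (precision):     ", precision_score(y_true, y_pred))
print("F1 score:            ", f1_score(y_true, y_pred))
print("AUC:                 ", roc_auc_score(y_true, y_score))
```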

