Acrophobia Quantified by EEG Based on CNN Incorporating Granger Causality

Author(s):  
Fo Hu ◽  
Hong Wang ◽  
Qiaoxiu Wang ◽  
Naishi Feng ◽  
Jichi Chen ◽  
...  

The aim of this study is to quantify acrophobia and provide safety advice for high-altitude workers. Because acrophobia is a fuzzy quantity that conventional detection methods cannot evaluate accurately, we propose a comprehensive solution to quantify it. Specifically, this study simulates a virtual reality environment called the High-altitude Plank Walking Challenge, which provides a safe and controlled experimental environment for subjects. In addition, a method named Granger Causality Convolutional Neural Network (GCCNN), which combines a convolutional neural network with a Granger causality functional brain network, is proposed to analyze the subjects’ noninvasive scalp EEG signals. The GCCNN method is used to distinguish subjects with severe acrophobia, moderate acrophobia, and no acrophobia in a three-class classification task, or acrophobia versus no acrophobia in a two-class classification task. Compared with mainstream methods, GCCNN achieves better classification performance, with an accuracy of 98.74% for the two-class task (no acrophobia versus acrophobia) and 98.47% for the three-class task (no acrophobia versus moderate acrophobia versus severe acrophobia). Consequently, the proposed GCCNN method provides more accurate quantitative results than the comparative methods, making it more competitive in practical applications.
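The abstract does not give the GCCNN details; as an illustrative sketch only (the lag order, the regression-based Granger estimator, and the channel layout are assumptions, not the paper's settings), pairwise Granger causality between EEG channels can be assembled into a directed adjacency matrix that a CNN could then consume as an image:

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Bivariate Granger causality y -> x: compare the residual sum of squares
    of an AR model of x on its own past against one that also uses y's past."""
    n = len(x)
    own = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    full = np.column_stack(
        [x[order - k - 1:n - k - 1] for k in range(order)]
        + [y[order - k - 1:n - k - 1] for k in range(order)])
    target = x[order:]

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return r @ r

    return np.log(rss(own) / rss(full))  # > 0 when y's past helps predict x

def gc_adjacency(eeg, order=2):
    """eeg: (channels, samples) -> directed GC matrix (a brain-network image)."""
    c = eeg.shape[0]
    A = np.zeros((c, c))
    for i in range(c):
        for j in range(c):
            if i != j:
                A[i, j] = granger_causality(eeg[i], eeg[j], order)
    return A
```

On synthetic data where one channel drives another, the driven direction yields the larger GC value, which is the asymmetry the functional brain network encodes.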

Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. Extracting such features is difficult because of the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG data collected from 20 subjects and on the existing 2b EEG dataset from “BCI Competition IV”. Overall, the deep learning models achieved better classification performance than state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
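The spectrogram-based CNN model implies a time-frequency front end. As a minimal numpy sketch (the window length, hop, and sampling rate below are assumptions, not the paper's parameters), one EEG channel can be turned into the log-power spectrogram image that such a CNN would take as input:

```python
import numpy as np

def stft_spectrogram(signal, win=64, hop=32):
    """Log-power STFT spectrogram of one EEG channel; stacking channels
    gives the image-like tensor a 2D CNN can consume."""
    window = np.hanning(win)
    frames = [signal[s:s + win] * window
              for s in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log(spec + 1e-12).T  # shape: (freq_bins, time_frames)
```

For a 4 s segment at 250 Hz this yields a 33 x 30 image per channel, and a pure tone shows up as a bright horizontal band at its frequency bin.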


2021 ◽  
Vol 11 (13) ◽  
pp. 6085
Author(s):  
Jesus Salido ◽  
Vanesa Lomas ◽  
Jesus Ruiz-Santaquiteria ◽  
Oscar Deniz

There is a great need to implement preventive mechanisms against shootings and terrorist acts in public spaces with a large influx of people. While surveillance cameras have become common, the need for 24/7 monitoring and real-time response calls for automatic detection methods. This paper presents a study of three convolutional neural network (CNN) models applied to the automatic detection of handguns in video surveillance images. It investigates whether false positives can be reduced by including, in the training dataset, pose information associated with the way the handguns are held. The results highlighted the best average precision (96.36%) and recall (97.23%), obtained by RetinaNet fine-tuned with an unfrozen ResNet-50 backbone, and the best precision (96.23%) and F1 score (93.36%), obtained by YOLOv3 trained on the dataset including pose information. The latter architecture was the only one that showed a consistent improvement, around 2%, when pose information was expressly considered during training.
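The paper injects pose information at training time; purely to illustrate why hand pose constrains plausible handgun locations (this post-hoc proximity filter is not the paper's method, and the box format and distance threshold are assumptions), one could gate detections on closeness to a wrist keypoint:

```python
def near_hand(gun_box, wrist_xy, max_dist=50.0):
    """Keep a handgun detection (x1, y1, x2, y2) only if its box centre lies
    within max_dist pixels of a wrist keypoint from a pose estimator."""
    cx = (gun_box[0] + gun_box[2]) / 2.0
    cy = (gun_box[1] + gun_box[3]) / 2.0
    dist = ((cx - wrist_xy[0]) ** 2 + (cy - wrist_xy[1]) ** 2) ** 0.5
    return dist <= max_dist
```

A detection far from any wrist is a likely false positive (e.g., a gun-shaped background object), which is the intuition behind pose-aware training.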


Author(s):  
Xu Chen ◽  
Shibo Wang ◽  
Houguang Liu ◽  
Jianhua Yang ◽  
Songyong Liu ◽  
...  

Many data-driven coal gangue recognition (CGR) methods based on the vibration or sound of collapsing coal and gangue have been proposed to achieve automatic CGR, which is important for realizing intelligent top-coal caving. However, the strong background noise and complex environment in underground coal mines make this task challenging in practice. Inspired by the fact that workers distinguish coal and gangue from underground noise by listening to the hydraulic support sound, we propose an auditory-model-based CGR method that simulates human auditory recognition by combining an auditory spectrogram with a convolutional neural network (CNN). First, we adjust the characteristic frequency (CF) distribution of the auditory peripheral model (APM) based on the spectral characteristics of the collapse sound signals of coal and gangue, and then process the sound signals using the adjusted APM to obtain inferior colliculus auditory signals with multiple CFs. Subsequently, the auditory signals of all CFs are converted into gray images separately and concatenated into a multichannel auditory spectrum along the channel dimension. Finally, we input the multichannel auditory spectrum as a feature map to a two-dimensional CNN, whose convolutional layers automatically extract features, while the fully connected layer and softmax layer flatten the features and predict the recognition result, respectively. The CNN structure is optimized for CGR through a comparison of four typical CNN architectures under different training hyperparameters. Experimental results show that this method achieves accurate CGR, with a recognition accuracy of 99.5%. Moreover, it offers excellent noise immunity compared with commonly used CGR methods under various noisy conditions.
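The step of converting per-CF auditory signals into gray images and stacking them along the channel dimension can be sketched as follows (the image size and min-max normalization are assumptions; the paper's APM front end is not reproduced here):

```python
import numpy as np

def to_gray(signal, h=32, w=32):
    """Min-max normalise one CF's auditory signal and fold it into a gray image."""
    s = np.asarray(signal[:h * w], dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)
    return s.reshape(h, w)

def multichannel_spectrum(cf_signals):
    """Stack one gray image per characteristic frequency along the channel
    axis, giving the (n_CFs, H, W) feature map fed to the 2D CNN."""
    return np.stack([to_gray(s) for s in cf_signals], axis=0)
```

With, say, 8 CFs this produces an 8-channel 32x32 input, analogous to how RGB channels feed an image classifier.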


Author(s):  
Hongguo Su ◽  
Mingyuan Zhang ◽  
Shengyuan Li ◽  
Xuefeng Zhao

In the last couple of years, advancements in deep learning, especially in convolutional neural networks, have proved to be a boon for image classification and recognition tasks. One important practical application of object detection and image classification is security enhancement: if dangerous objects or scenes can be identified automatically, many accidents can be prevented. For this purpose, this paper uses a state-of-the-art implementation of the Faster Region-based Convolutional Neural Network (Faster R-CNN), trained on monitoring video of hoisting sites, to detect the dangerous object and the worker. By extracting their locations, object-human interactions during hoisting, mainly changes in their spatial relationship, can be understood, whereby it can be estimated whether the scene is safe or dangerous. Experimental results showed that the pre-trained model achieved good performance, with a high mean average precision of 97.66% on object detection, and that the proposed method fulfilled the goal of dangerous-scene recognition.
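The spatial-relationship reasoning on top of the detector's boxes could look like the toy rule below (the overlap and "directly beneath the load" criteria are illustrative assumptions; the paper does not specify its decision rule):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def scene_dangerous(load_box, worker_box, min_iou=0.0):
    """Flag the scene when the worker's box overlaps the hoisted load's box,
    or lies directly beneath it (image y grows downward)."""
    if iou(load_box, worker_box) > min_iou:
        return True
    beneath = worker_box[1] >= load_box[3] and not (
        worker_box[2] < load_box[0] or worker_box[0] > load_box[2])
    return beneath
```

Per frame, the rule consumes the two detected boxes and emits a safe/dangerous decision; tracking the boxes over time captures the "changes in spatial relationship" the abstract mentions.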


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Xin Wang ◽  
Yanshuang Ren ◽  
Wensheng Zhang

The study of functional brain networks (FBNs) based on functional magnetic resonance imaging (fMRI) has proved successful in depression disorder classification. One popular way to construct an FBN is Pearson correlation; however, it captures only the pairwise relationship between brain regions and ignores the influence of the other regions. Another common issue in many depression classification methods is relying on only a single local feature extracted from the constructed FBN. To address these issues, we develop a new method to classify fMRI data of patients with depression and healthy controls. First, we construct the FBN using a sparse low-rank model, which considers the relationship between two brain regions conditioned on all the others; moreover, it can automatically remove weak relationships and retain the modular structure of the FBN. Second, the FBN is characterized by eight graph-based features capturing different aspects of its topology. Tested on fMRI data from 31 patients with depression and 29 healthy controls, our method achieves 95% accuracy, 96.77% sensitivity, and 93.10% specificity, outperforming both the Pearson correlation FBN and the sparse FBN. In addition, combining the graph-based features further improves classification performance. Finally, we explore the discriminative brain regions that contribute to the classification, which can help in understanding the pathogenesis of depression disorder.
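The abstract does not list the eight graph-based features; as a hedged sketch of the kind of measures typically read off an FBN adjacency matrix (the binarization threshold and this particular trio of metrics are assumptions, not the paper's feature set):

```python
import numpy as np

def graph_features(W, thresh=0.0):
    """Three example graph measures from an FBN weight matrix W:
    node degrees, edge density, and transitivity (a clustering measure)."""
    A = (np.abs(W) > thresh).astype(float)   # binarise the network
    np.fill_diagonal(A, 0)
    n = len(A)
    degree = A.sum(axis=1)
    density = A.sum() / (n * (n - 1))
    A2 = A @ A
    closed = np.trace(A2 @ A)                # each triangle counted 6 times
    triplets = A2.sum() - np.trace(A2)       # open + closed length-2 paths
    transitivity = closed / triplets if triplets else 0.0
    return degree, density, transitivity
```

Concatenating several such measures per subject yields the multi-feature vector the classifier consumes, rather than a single local feature.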


2019 ◽  
Vol 14 (1) ◽  
pp. 124-134 ◽  
Author(s):  
Shuai Zhang ◽  
Yong Chen ◽  
Xiaoling Huang ◽  
Yishuai Cai

Online feedback is an effective channel of communication between government departments and citizens. However, the high daily volume of public feedback has increased the burden on government administrators. Deep learning methods are good at automatically analyzing data and extracting deep features, thereby improving the accuracy of classification prediction. In this study, we aim to use a text classification model to classify public feedback automatically and thus reduce administrators' workload. In particular, we adopt a convolutional neural network model combined with word embeddings and optimized by a differential evolution algorithm. We compared it with seven common text classification models, and the results show that the explored model achieves good classification performance under different evaluation metrics, including accuracy, precision, recall, and F1-score.
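The distinctive piece here is using differential evolution to optimize the CNN. A minimal DE/rand/1/bin loop looks like the sketch below (the population size, F, and CR values are assumptions; in the paper's setting the fitness would be the CNN's validation error over its hyperparameters, whereas here it is an arbitrary function):

```python
import numpy as np

def differential_evolution(fitness, bounds, pop=20, gens=50, F=0.8, CR=0.9, seed=0):
    """Minimise `fitness` over box `bounds` with the classic DE/rand/1/bin scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, len(bounds)))
    f = np.array([fitness(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutate: combine three distinct individuals other than i
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, then greedy selection
            cross = rng.random(len(bounds)) < CR
            trial = np.where(cross, mutant, X[i])
            ft = fitness(trial)
            if ft < f[i]:
                X[i], f[i] = trial, ft
    return X[f.argmin()], f.min()
```

Because DE needs only fitness evaluations, not gradients, it suits discrete or non-differentiable choices such as filter counts and kernel sizes.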


2021 ◽  
Vol 14 ◽  
Author(s):  
Jingjing Gao ◽  
Mingren Chen ◽  
Yuanyuan Li ◽  
Yachun Gao ◽  
Yanling Li ◽  
...  

Autism spectrum disorder (ASD) is a range of neurodevelopmental disorders with behavioral and cognitive impairments and places a heavy burden on patients’ families and society. Accurately distinguishing patients with ASD from typical controls is important for early detection and early intervention. However, almost all existing classification methods for ASD based on structural MRI (sMRI) use independent local morphological features and do not consider the covariance patterns of these features between regions. In this study, by combining a convolutional neural network (CNN) with individual structural covariance networks, we propose a new framework to classify ASD patients using sMRI data from the ABIDE consortium. Moreover, gradient-weighted class activation mapping (Grad-CAM) is applied to characterize the weight of the features contributing to the classification. The experimental results show that the proposed method outperforms currently used methods for classifying ASD patients on the ABIDE data, achieving a classification accuracy of 71.8% across different sites. Furthermore, the discriminative features were found mainly in the prefrontal cortex and cerebellum, which may serve as early biomarkers for the diagnosis of ASD. Our study demonstrates that a CNN built on individual structural covariance brain networks is an effective tool for the diagnosis of ASD.
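As a rough sketch of the covariance idea (the paper's individual-network construction is not specified in the abstract; using Pearson correlation between regional morphological feature vectors is an assumption for illustration), one subject's network can be built as:

```python
import numpy as np

def individual_scn(features):
    """features: (n_regions, n_morph) matrix of morphological measures
    (e.g., thickness, volume, area) per region for ONE subject.
    Returns the (n_regions, n_regions) correlation matrix used as the
    structural covariance network image fed to the CNN."""
    z = features - features.mean(axis=1, keepdims=True)
    z /= (z.std(axis=1, keepdims=True) + 1e-12)
    return (z @ z.T) / features.shape[1]
```

The resulting symmetric matrix captures between-region covariance patterns, which is exactly the information independent local features discard.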


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Qiang Cai ◽  
Fenghai Li ◽  
Yifan Chen ◽  
Haisheng Li ◽  
Jian Cao ◽  
...  

Along with the strong representational power of the convolutional neural network (CNN), image classification tasks have achieved considerable progress. However, the majority of works focus on designing complicated and redundant architectures to extract informative features and improve classification performance. In this study, we concentrate instead on rectifying the incomplete outputs of a CNN. Concretely, we propose an image classification method based on Label Rectification Learning (LRL) through a kernel extreme learning machine (KELM). It consists of two steps: (1) preclassification, extracting incomplete labels through a pretrained CNN, and (2) label rectification, in which the KELM rectifies the generated incomplete labels to obtain the final labels. Experiments on publicly available datasets demonstrate the effectiveness of the method. Notably, it is extensible and can easily be integrated with off-the-shelf networks to improve performance.
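A KELM admits a closed-form ridge solution, which is what makes it a cheap rectification stage on top of a frozen CNN. The sketch below (kernel choice, regularization C, and using the CNN's soft outputs as KELM inputs are assumptions, not the paper's exact configuration) shows the core computation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

class KELM:
    """Kernel extreme learning machine: beta = (K + I/C)^-1 T,
    predictions = K(x, X_train) @ beta. In an LRL-style pipeline the inputs
    would be the pretrained CNN's (incomplete) label vectors and the targets
    the rectified one-hot labels."""
    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, T):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.beta
```

Because training is one linear solve, the rectifier adds negligible cost next to the CNN it corrects.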

