EEG and Deep Learning Based Brain Cognitive Function Classification

Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 104
Author(s):  
Saraswati Sridhar ◽  
Vidya Manian

Electroencephalogram signals are used to assess neurodegenerative diseases and develop sophisticated brain–machine interfaces for rehabilitation and gaming. Most applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory motor paradigm (auditory, olfactory, movement, and motor imagery) that employs a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) network is developed to assess cognitive functions and identify their relationship with brain signal features, which are hypothesized to consistently indicate cognitive decline. Testing occurred with healthy subjects aged 20–40, 40–60, and >60, and with mildly cognitively impaired (MCI) subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and performed movements of each arm while Electroencephalogram (EEG)/Electromyogram (EMG) signals were recorded. A deep BLSTM neural network is trained with principal component features from the evoked signals and assesses their corresponding pathways. Wavelet analysis is used to decompose the evoked signals and calculate the band power of the component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most features studied peaked in the 40–60 age range and were lower for the MCI group than for any other group tested. Detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); here, the mean classification accuracy per age group declined from 91.93% to 81.64%, and was 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in the gamma bands, also strongly indicated cognitive aging (p = 0.007). Overall, the classification accuracy of the evoked potentials most effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power.
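
As a rough illustration of the wavelet step described above, the sketch below decomposes a synthetic single-channel evoked signal into sub-bands and computes their band power. PyWavelets, the db4 wavelet, the decomposition level, and the 256 Hz sampling rate are all assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch: discrete wavelet decomposition of an evoked EEG
# signal and mean power per sub-band, a common proxy for band power.
import numpy as np
import pywt

def wavelet_band_power(signal, wavelet="db4", level=5):
    """Return mean power of each wavelet sub-band (approximation + details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [float(np.mean(np.square(c))) for c in coeffs]

fs = 256                               # assumed sampling rate (Hz)
eeg = np.random.randn(2 * fs)          # 2 s of synthetic single-channel EEG
print(wavelet_band_power(eeg))         # one power value per sub-band
```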

2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text that determines the viewpoint of users with respect to sentiment-bearing topics commonly found on social networking websites. Twitter is one such site, where people express their opinions on any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to determine the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. Manual feature extraction is a complicated task since it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data; hence, they provide better performance and richer representation capability than traditional methods. Objective: The main aim of this paper is to improve sentiment classification accuracy and reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory neural network is introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, the efficacy of the proposed method is validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating veracious hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
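
The hybrid architecture described in the Method section can be sketched in PyTorch roughly as follows: a convolutional layer extracts local n-gram features from word embeddings, and a bidirectional LSTM summarizes them for classification. All sizes (vocabulary, embedding width, channels, classes) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, vocab=10000, emb=128, conv_ch=64, hidden=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, conv_ch, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)          # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, conv_ch)
        _, (h, _) = self.lstm(x)                      # h: (2, batch, hidden)
        return self.fc(torch.cat([h[0], h[1]], dim=1))

logits = CnnBiLstm()(torch.randint(0, 10000, (4, 50)))
print(logits.shape)                                   # torch.Size([4, 2])
```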


2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification on hyperspectral imagery utilizing deep learning algorithms has attained good results. Spurred by that finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-stage architecture: first, the spectral information of the hyperspectral image is reduced to a two-dimensional tensor utilizing a principal component analysis (PCA) scheme; then, the constructed low-dimensional image is input to the proposed MRA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and attention mechanisms. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that the overall classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than the corresponding accuracy of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
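
A minimal sketch of the PCA preprocessing stage described above, assuming a synthetic hyperspectral cube and an arbitrary component count; the actual datasets and dimensions used in the paper may differ.

```python
# Reduce the spectral dimension of a hyperspectral cube with PCA
# before feeding patches to the classification network.
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(145, 145, 200)           # (height, width, bands), synthetic
pixels = cube.reshape(-1, cube.shape[-1])      # one row per pixel spectrum
reduced = PCA(n_components=30).fit_transform(pixels)
low_dim = reduced.reshape(145, 145, 30)        # spectrally reduced image
print(low_dim.shape)                           # (145, 145, 30)
```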


2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Yinghao Chu ◽  
Chen Huang ◽  
Xiaodan Xie ◽  
Bohai Tan ◽  
Shyam Kamal ◽  
...  

This study proposes a multilayer hybrid deep-learning system (MHS) to automatically sort waste disposed of by individuals in urban public areas. The system deploys a high-resolution camera to capture waste images and sensors to detect other useful feature information. The MHS uses a CNN-based algorithm to extract image features and a multilayer perceptron (MLP) to consolidate the image features with the other feature information, classifying waste as recyclable or other. The MHS is trained and validated against manually labelled items, achieving an overall classification accuracy higher than 90% under two different testing scenarios, which significantly outperforms a reference CNN-based method relying on image-only inputs.
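
A hedged PyTorch sketch of the fusion idea: a CNN backbone embeds the waste image, and an MLP consolidates the embedding with the extra sensor features to produce the recyclable/other decision. The torchvision backbone and all feature sizes are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridSorter(nn.Module):
    def __init__(self, sensor_dim=8):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()              # expose 512-d image features
        self.backbone = backbone
        self.mlp = nn.Sequential(
            nn.Linear(512 + sensor_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, image, sensors):
        feats = self.backbone(image)             # (batch, 512)
        return self.mlp(torch.cat([feats, sensors], dim=1))

out = HybridSorter()(torch.randn(2, 3, 224, 224), torch.randn(2, 8))
print(out.shape)                                 # torch.Size([2, 2])
```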


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_4) ◽  
Author(s):  
ChienYu Chi ◽  
Yen-Pin Chen ◽  
Adrian Winkler ◽  
Kuan-Chun Fu ◽  
Fie Xu ◽  
...  

Introduction: Predicting rare catastrophic events is challenging due to the lack of targets. Here we employed a multi-task learning method and demonstrated that substantial gains in accuracy and generalizability were achieved by sharing representations between related tasks. Methods: Starting from the Taiwan National Health Insurance Research Database, we selected adults (>20 years) who experienced in-hospital cardiac arrest but not out-of-hospital cardiac arrest over 8 years (2003-2010), and built a dataset using de-identified claims from Emergency Department (ED) visits and hospitalizations. The final dataset had 169,287 patients, randomly split into three sections: train 70%, validation 15%, and test 15%. Two outcomes, 30-day readmission and 30-day mortality, were chosen. We constructed the deep learning system in two steps. We first used a taxonomy mapping system, Text2Node, to generate a distributed representation for each concept. We then applied a multilevel hierarchical model based on a long short-term memory (LSTM) architecture. Multi-task models used gradient similarity to prioritize the desired task over auxiliary tasks. Single-task models were trained for each desired task. All models share the same architecture and are trained with the same input data. Results: Each model was optimized to maximize AUROC on the validation set, with the final metrics calculated on the held-out test set. We demonstrated that multi-task deep learning models outperform single-task deep learning models on both tasks. While readmission had roughly 30% positives and showed minuscule improvements, the mortality task saw more improvement between models. We hypothesize that this is a result of the data imbalance: mortality had roughly 5% positives, and the auxiliary tasks help the model interpret the data and generalize better. Conclusion: Multi-task deep learning models outperform single-task deep learning models in predicting 30-day readmission and mortality in in-hospital cardiac arrest patients.
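
A minimal PyTorch sketch of the shared-representation idea: one LSTM encoder over per-visit feature vectors feeds two task heads (readmission and mortality). The gradient-similarity task weighting and the Text2Node embeddings from the paper are omitted; all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskLSTM(nn.Module):
    def __init__(self, feat=64, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(feat, hidden, batch_first=True)
        self.readmission = nn.Linear(hidden, 1)   # one sigmoid head per task
        self.mortality = nn.Linear(hidden, 1)

    def forward(self, visits):                    # (batch, n_visits, feat)
        _, (h, _) = self.encoder(visits)
        shared = h[-1]                            # shared patient representation
        return self.readmission(shared), self.mortality(shared)

readm, mort = MultiTaskLSTM()(torch.randn(4, 10, 64))
bce = nn.functional.binary_cross_entropy_with_logits
loss = bce(readm, torch.rand(4, 1)) + bce(mort, torch.rand(4, 1))
print(loss.item())
```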


Viruses ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 769 ◽  
Author(s):  
Ahmed Sedik ◽  
Abdullah M Iliyasu ◽  
Basma Abd El-Rahiem ◽  
Mohammed E. Abdel Samea ◽  
Asmaa Abdel-Raheem ◽  
...  

This generation faces existential threats because of the global assault of the novel coronavirus 2019 (i.e., COVID-19). With more than thirteen million infected and nearly 600,000 fatalities in 188 countries/regions, COVID-19 is the worst calamity since World War II. These misfortunes are traced to various causes, including late detection of latent or asymptomatic carriers, migration, and inadequate isolation of infected people. This makes detection, containment, and mitigation global priorities, pursued via quarantine, lockdowns, work/stay-at-home orders, and social distancing focused on "flattening the curve". While medical and healthcare givers are at the frontline in the battle against COVID-19, it is a crusade for all of humanity. Meanwhile, machine and deep learning models have been revolutionary across numerous domains and applications, and their potency has been exploited to birth numerous state-of-the-art technologies utilised in disease detection, diagnosis, and treatment. Despite this potential, machine and, particularly, deep learning models are data sensitive, because their effectiveness depends on the availability and reliability of data. The unavailability of such data hinders efforts of engineers and computer scientists to fully contribute to the ongoing assault against COVID-19. Faced with a calamity on one side and an absence of reliable data on the other, this study presents two data-augmentation models to enhance the learnability of Convolutional Neural Network (CNN)- and Convolutional Long Short-Term Memory (ConvLSTM)-based deep learning models (DADLMs) and, by doing so, boost the accuracy of COVID-19 detection. Experimental results reveal improvements in detection accuracy, logarithmic loss, and testing time relative to DLMs devoid of such data augmentation. Furthermore, average increases of 4% to 11% in COVID-19 detection accuracy are reported in favour of the proposed data-augmented deep learning models relative to the machine learning techniques. Therefore, the proposed algorithm is effective in performing rapid and consistent coronavirus diagnosis, primarily aimed at assisting clinicians in making accurate identifications of the virus.
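
As an illustration of the general data-augmentation idea (not the paper's specific augmentation models), the sketch below expands a single synthetic scan into several training views with standard torchvision transforms.

```python
# Expand a small medical-image training set with simple geometric
# transforms before CNN training; parameters are illustrative.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomHorizontalFlip(),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
])

image = torch.rand(3, 256, 256)                 # synthetic stand-in scan
augmented = [augment(image) for _ in range(5)]  # five extra training views
print(augmented[0].shape)                       # torch.Size([3, 224, 224])
```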


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Mamunur Rashid ◽  
Minarul Islam ◽  
Norizam Sulaiman ◽  
Bifta Sama Bari ◽  
Ripon Kumar Saha ◽  
...  

2019 ◽  
Vol 3 (2) ◽  
pp. 29 ◽  
Author(s):  
Saraswati Sridhar ◽  
Vidya Manian

Cognitive deterioration caused by illness or aging often occurs before symptoms arise, and its timely diagnosis is crucial to reducing its medical, personal, and societal impacts. Brain–computer interfaces (BCIs) stimulate and analyze key cerebral rhythms, enabling reliable cognitive assessment that can accelerate diagnosis. The BCI system presented here analyzes steady-state visually evoked potentials (SSVEPs) elicited in subjects of varying age to detect cognitive aging, predict its magnitude, and identify its relationship with SSVEP features (band power and frequency detection accuracy), which were hypothesized to indicate cognitive decline due to aging. Rectangular stimuli flickering at theta, alpha, and beta frequencies were presented to the subjects, and frontal and occipital electroencephalographic (EEG) responses were recorded. These were processed to calculate each subject's frequency detection accuracy and SSVEP band power. A neural network was trained on these features to predict cognitive age. The results revealed potential cognitive deterioration through age-related variations in SSVEP features. Frequency detection accuracy declined after the 20–40 age group, and band power declined across all age groups. SSVEPs generated at theta and alpha frequencies, especially 7.5 Hz, were the best indicators of cognitive deterioration; here, frequency detection accuracy consistently declined after the 20–40 age group from an average of 96.64% to 69.23%. The presented system can be used as an effective diagnostic tool for age-related cognitive decline.
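
A sketch of the band-power feature described above: estimating power in a narrow band around the 7.5 Hz flicker frequency with Welch's method, on a synthetic SSVEP-like signal. The 250 Hz sampling rate and window length are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import welch

fs = 250                                        # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
band = (freqs >= 7.0) & (freqs <= 8.0)          # narrow band around 7.5 Hz
band_power = psd[band].sum() * (freqs[1] - freqs[0])
print(band_power)                               # SSVEP band-power feature
```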


Author(s):  
Muhammad Fawaz Saputra ◽  
Noor Akhmad Setiawan ◽  
Igi Ardiyanto

EEG signals are obtained from an EEG device that records the user's brain signals. EEG signals can be generated by the user after performing motor movements or imagery tasks. Motor Imagery (MI) is the task of imagining motor movements that resemble the original motor movements. A Brain Computer Interface (BCI) bridges the interaction between users and applications in performing tasks. The BCI Competition IV 2a dataset was used in this study. A fully automated correction method for EOG artifacts in EEG recordings was applied to remove artifacts, and Common Spatial Patterns (CSP) were used to obtain features that can distinguish motor imagery tasks. A comparative study of two deep learning methods was conducted, namely the Deep Belief Network (DBN) and Long Short-Term Memory (LSTM). The usability of both deep learning methods was evaluated using the BCI Competition IV-2a dataset. The experimental results of these two deep learning methods show an average accuracy of 50.35% for DBN and 49.65% for LSTM.
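
A rough sketch of the CSP feature-extraction step on synthetic motor-imagery epochs, assuming MNE's CSP implementation; a simple logistic regression stands in here for the DBN/LSTM classifiers compared in the paper.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 22, 500))          # epochs x channels x samples
y = np.repeat([0, 1], 20)                       # two imagery classes

features = CSP(n_components=4).fit_transform(X, y)  # log-variance features
clf = LogisticRegression().fit(features, y)
print(clf.score(features, y))                   # training accuracy on toy data
```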


2020 ◽  
Vol 36 (12) ◽  
pp. 3856-3862
Author(s):  
Di Jin ◽  
Peter Szolovits

Motivation: In evidence-based medicine, defining a clinical question in terms of the specific patient problem helps physicians efficiently identify appropriate resources and search for the best available evidence for medical treatment. To formulate a well-defined, focused clinical question, a framework called PICO is widely used, which identifies the sentences in a given medical text that belong to the four components typically reported in clinical trials: Participants/Problem (P), Intervention (I), Comparison (C), and Outcome (O). In this work, we propose a novel deep learning model for recognizing PICO elements in biomedical abstracts. Building on the previous state-of-the-art bidirectional long short-term memory (bi-LSTM) plus conditional random field architecture, we add another layer of bi-LSTM on top of the sentence representation vectors so that contextual information from surrounding sentences can be gathered to help infer the interpretation of the current one. In addition, we propose two methods to further generalize and improve the model: adversarial training and unsupervised pre-training over large corpora. Results: We tested our proposed approach on two benchmark datasets. One is the PubMed-PICO dataset, where our best results outperform the previous best by 5.5%, 7.9%, and 5.8% in F1 score for the P, I, and O elements, respectively. For the other dataset, NICTA-PIBOSO, the improvements for the P/I/O elements are 3.9%, 15.6%, and 1.3% in F1 score, respectively. Overall, our proposed deep learning model obtains unprecedented PICO element detection accuracy while avoiding the need for any manual feature selection. Availability and implementation: Code is available at https://github.com/jind11/Deep-PICO-Detection.
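
The hierarchical idea can be sketched in PyTorch as follows: a word-level bi-LSTM produces one vector per sentence, and a second bi-LSTM over those vectors adds context from surrounding sentences before per-sentence PICO labels are predicted. The CRF layer, adversarial training, and pre-training are omitted; all sizes are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    def __init__(self, vocab=5000, emb=100, hidden=64, labels=4):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.word_lstm = nn.LSTM(emb, hidden, batch_first=True,
                                 bidirectional=True)
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True,
                                 bidirectional=True)
        self.out = nn.Linear(2 * hidden, labels)

    def forward(self, abstract):                 # (n_sents, n_words) token ids
        _, (h, _) = self.word_lstm(self.emb(abstract))
        sent_vecs = torch.cat([h[0], h[1]], dim=1).unsqueeze(0)  # (1, n_sents, 2h)
        ctx, _ = self.sent_lstm(sent_vecs)       # context from neighboring sents
        return self.out(ctx.squeeze(0))          # one logit row per sentence

logits = HierarchicalTagger()(torch.randint(0, 5000, (6, 20)))
print(logits.shape)                              # torch.Size([6, 4])
```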


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Zhenbo Lu ◽  
Wei Zhou ◽  
Shixiang Zhang ◽  
Chen Wang

Quick and accurate crash detection is important for saving lives and for improved traffic incident management. In this paper, a feature-fusion-based deep learning framework was developed for the video-based urban traffic crash detection task, aiming to achieve a balance between detection speed and accuracy with limited computing resources. In this framework, a residual neural network (ResNet) combined with attention modules was proposed to extract crash-related appearance features from urban traffic videos (i.e., a crash appearance feature extractor), which were further fed to a spatiotemporal feature fusion model, Conv-LSTM (Convolutional Long Short-Term Memory), to simultaneously capture appearance (static) and motion (dynamic) crash features. The proposed model was trained on a set of video clips covering 330 crash and 342 noncrash events. Overall, the proposed model achieved an accuracy of 87.78% on the testing dataset and an acceptable detection speed (FPS > 30 with a GTX 1060). Thanks to the attention module, the proposed model can capture localized appearance features of crashes (e.g., vehicle damage and fallen pedestrians) better than conventional convolutional neural networks. The Conv-LSTM module outperformed a conventional LSTM in capturing motion features of crashes, such as roadway congestion and pedestrians gathering after crashes. Compared to a traditional motion-based crash detection model, the proposed model achieved higher detection accuracy. Moreover, it could detect crashes much faster than other feature fusion-based models (e.g., C3D). The results show that the proposed model is a promising video-based urban traffic crash detection algorithm that could be used in practice in the future.
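
Since PyTorch has no built-in ConvLSTM, the sketch below shows a standard minimal ConvLSTM cell of the kind such a framework uses to fuse appearance and motion cues; channel counts and spatial sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution computes all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):                 # x: (batch, in_ch, H, W)
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g                        # convolutional cell update
        h = o * c.tanh()
        return h, (h, c)

cell = ConvLSTMCell(64, 32)
state = (torch.zeros(1, 32, 28, 28), torch.zeros(1, 32, 28, 28))
for frame_feats in torch.randn(8, 1, 64, 28, 28):   # 8 video time steps
    h, state = cell(frame_feats, state)
print(h.shape)                                   # torch.Size([1, 32, 28, 28])
```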

