A convolutional-recurrent neural network approach to resting-state EEG classification in Parkinson's disease

Author(s):  
Soojin Lee ◽  
Ramy Hussein ◽  
Rabab Ward ◽  
Jane Z Wang ◽  
Martin J. McKeown

Background: Parkinson's disease (PD) is expected to become more common, particularly with an aging population. Diagnosis and monitoring of the disease typically rely on the laborious examination of physical symptoms by medical experts, which is necessarily limited and may not detect the prodromal stages of the disease. New Method: We propose a lightweight (20K parameters) deep learning model to discriminate between resting-state EEG recorded from people with PD and healthy controls. The proposed CRNN model consists of convolutional neural networks (CNN) and a recurrent neural network (RNN) with gated recurrent units (GRUs). The 1D CNN layers are designed to extract spatiotemporal features across EEG channels, which are subsequently supplied to the GRUs to discover temporal features pertinent to the classification. Results: The CRNN model achieved 99.2% accuracy, 98.9% precision, and 99.4% recall in classifying PD from healthy controls (HC). Interrogating the model, we further demonstrate that it is sensitive to dopaminergic medication effects and predominantly uses phase information of the EEG signals. Comparison with Existing Methods: The CRNN model achieves superior performance compared to baseline machine learning methods and other recently proposed deep learning models. Conclusion: The approach proposed in this study adequately extracts the spatial and temporal features in multi-channel EEG signals that enable accurate differentiation between PD and HC. It has excellent potential for use as an oscillatory biomarker to assist in the diagnosis and monitoring of people with PD. Future studies to further improve and validate the model's performance in clinical practice are warranted.
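
The abstract does not include implementation details, but a minimal PyTorch sketch of a CRNN of the kind described (1D convolutions over the multichannel EEG window feeding a GRU) might look as follows; the channel count, window length, and layer sizes are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal CNN+GRU sketch for binary EEG classification (PD vs. HC).

    Assumed input shape: (batch, n_channels, n_samples), e.g. 32 EEG
    channels and 2-second windows at 250 Hz (500 samples).
    """
    def __init__(self, n_channels=32, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        feats = self.cnn(x)               # (batch, 32, time')
        feats = feats.permute(0, 2, 1)    # (batch, time', 32) for the GRU
        _, h = self.gru(feats)            # h: (1, batch, 32)
        return self.fc(h.squeeze(0))      # logits: (batch, n_classes)

# Example: one batch of 8 windows, 32 channels, 500 samples each
logits = CRNN()(torch.randn(8, 32, 500))
```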

2021 ◽  
Vol 11 (10) ◽  
pp. 2618-2625
Author(s):  
R. T. Subhalakshmi ◽  
S. Appavu Alias Balamurugan ◽  
S. Sasikala

In recent times, the COVID-19 epidemic has spread at an extreme rate, compounded by the limited availability of rapid testing kits. Consequently, it is essential to develop automated techniques for COVID-19 detection that recognize the presence of the disease from radiological images. The most common symptoms of COVID-19 are sore throat, fever, and dry cough. Symptoms can progress to a severe form of pneumonia with serious complications. As medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided diagnosis systems might aid in the early detection of COVID-19 abnormalities, help monitor disease progression, and potentially reduce mortality rates. In this approach, a deep learning based design for feature extraction and classification is employed for automatic COVID-19 diagnosis from computed tomography (CT) images. The proposed model operates in three main stages: pre-processing, feature extraction, and classification. The design incorporates the fusion of deep features extracted using GoogLeNet models. Finally, a multi-scale recurrent neural network (RNN) based classifier is applied to identify and classify the test CT images into distinct class labels. The experimental validation of the proposed model uses the open-source COVID-CT dataset, which comprises a total of 760 CT images. The experimental outcomes demonstrated superior performance with maximum sensitivity, specificity, and accuracy.
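
As a rough illustration of the described pipeline (GoogLeNet deep-feature extraction followed by an RNN classifier), the PyTorch sketch below uses a frozen torchvision GoogLeNet and a plain GRU standing in for the paper's multi-scale RNN; the feature-chunking scheme, layer sizes, and class labels are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class GoogLeNetRNNClassifier(nn.Module):
    """Sketch: frozen GoogLeNet features followed by a recurrent classifier.

    The paper's multi-scale RNN is not reproduced; a plain GRU run over
    chunks of the 1024-d GoogLeNet feature vector stands in for it here.
    """
    def __init__(self, n_classes=2, chunk=64):
        super().__init__()
        backbone = models.googlenet(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()            # keep the pooled 1024-d features
        backbone.eval()                        # frozen feature extractor
        for p in backbone.parameters():
            p.requires_grad = False
        self.backbone = backbone
        self.chunk = chunk
        self.rnn = nn.GRU(chunk, 128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)    # e.g. COVID vs. non-COVID

    def forward(self, x):                      # x: (batch, 3, 224, 224) CT slices
        with torch.no_grad():
            feats = self.backbone(x)           # (batch, 1024)
        seq = feats.view(x.size(0), -1, self.chunk)  # (batch, 16, 64)
        _, h = self.rnn(seq)
        return self.fc(h.squeeze(0))

logits = GoogLeNetRNNClassifier()(torch.randn(2, 3, 224, 224))
```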


2021 ◽  
Vol 15 ◽  
Author(s):  
Karun Thanjavur ◽  
Dionissios T. Hristopulos ◽  
Arif Babul ◽  
Kwang Moo Yi ◽  
Naznin Virji-Babul

Artificial neural networks (ANNs) are showing increasing promise as decision support tools in medicine, particularly in neuroscience and neuroimaging. Recently, there has been increasing work on using neural networks to classify individuals with concussion using electroencephalography (EEG) data. However, to date, the need for research-grade equipment has limited the application of these methods in clinical environments. We recently developed a deep learning long short-term memory (LSTM) based recurrent neural network to classify concussion from raw, resting-state data using 64 EEG channels and achieved high accuracy in classifying concussion. Here, we report on our efforts to develop a clinically practical system using a minimal subset of EEG sensors. EEG data from 23 athletes who had suffered a sport-related concussion and 35 non-concussed, control athletes were used for this study. We tested and ranked each of the original 64 channels based on its contribution toward the concussion classification performed by the original LSTM network. The top-scoring channels were used to train and test a network with the same architecture as the previously trained network. We found that with only the six top-scoring channels, the classifier identified concussions with an accuracy of 94%. These results show that it is possible to classify concussion using raw, resting-state data from a small number of EEG sensors, constituting a first step toward developing portable, easy-to-use EEG systems that can be used in a clinical setting.
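
The abstract does not specify how channel contributions were scored; a generic occlusion-style ranking, sketched below in NumPy, is one common way to approximate such a per-channel importance analysis. The `model.predict` interface is an assumption, not the authors' code.

```python
import numpy as np

def rank_channels_by_occlusion(model, X, y, n_channels=64):
    """Rank EEG channels by the accuracy drop observed when each channel
    is zeroed out (occluded). `model.predict` is assumed to map an array
    of shape (n_trials, n_channels, n_samples) to class labels.

    This is a generic occlusion heuristic, not the paper's exact method.
    """
    baseline = np.mean(model.predict(X) == y)
    drops = []
    for ch in range(n_channels):
        X_occ = X.copy()
        X_occ[:, ch, :] = 0.0                 # silence one channel
        acc = np.mean(model.predict(X_occ) == y)
        drops.append(baseline - acc)          # larger drop = more important
    return np.argsort(drops)[::-1]            # channels, most important first

# e.g. keep the six top-ranked channels and retrain a smaller network on them:
# top6 = rank_channels_by_occlusion(trained_lstm, X_val, y_val)[:6]
# X_small = X[:, top6, :]
```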


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Karun Thanjavur ◽  
Arif Babul ◽  
Brandon Foran ◽  
Maya Bielecki ◽  
Adam Gilchrist ◽  
...  

Concussion is a global health concern. Despite its high prevalence, a sound understanding of the mechanisms underlying this type of diffuse brain injury remains elusive. It is, however, well established that concussions cause significant functional deficits; that children and youths are disproportionately affected and have longer recovery times than adults; and that individuals suffering from a concussion are more prone to experience additional concussions, with each successive injury increasing the risk of long-term neurological and mental health complications. Currently, the most significant challenge in concussion management is the lack of objective, clinically accepted, brain-based approaches for determining whether an athlete has suffered a concussion. Here, we report on our efforts to address this challenge. Specifically, we introduce a deep learning long short-term memory (LSTM)-based recurrent neural network that is able to distinguish between non-concussed and acute post-concussed adolescent athletes using only short (i.e. 90 s long) samples of resting-state EEG data as input. The athletes were neither required to perform a specific task nor expected to respond to a stimulus during data collection. The acquired EEG data were neither filtered, cleaned of artefacts, nor subjected to explicit feature extraction. The LSTM network was trained and validated using data from 27 male adolescent athletes with sports-related concussion, benchmarked against 35 non-concussed adolescent athletes. During rigorous testing, the classifier consistently identified concussions with an accuracy of > 90% and achieved an ensemble median area under the receiver operating characteristic curve (ROC/AUC) equal to 0.971. This is the first instance of a high-performing classifier that relies only on easy-to-acquire, resting-state, raw EEG data. Our concussion classifier represents a promising first step towards the development of an easy-to-use, objective, brain-based, automatic classification of concussion at an individual level.
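
For illustration, a minimal PyTorch sketch of an LSTM classifier operating on raw, multichannel resting-state EEG is given below; the channel count, sampling rate, hidden size, and depth are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class ConcussionLSTM(nn.Module):
    """Sketch of an LSTM classifier operating on raw multichannel EEG.

    Assumed input: (batch, time, channels); channel count, hidden size,
    and depth are illustrative, not the authors' configuration.
    """
    def __init__(self, n_channels=64, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, dropout=0.2)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.fc(out[:, -1])     # classify from the last time step

# e.g. 4 short raw excerpts of 10 s at an assumed 128 Hz -> 1280 time steps
logits = ConcussionLSTM()(torch.randn(4, 1280, 64))
```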


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security and others. In addition to their useful applications, an alarming concern regarding physical infrastructure security, safety and privacy has arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone’s acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and drone audio samples artificially generated using a state-of-the-art deep learning technique known as the Generative Adversarial Network. Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network and the Convolutional Recurrent Neural Network, in drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
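
A minimal PyTorch sketch of a convolutional-recurrent classifier for drone audio of the kind compared in the paper is shown below; the log-mel spectrogram front end, layer sizes, and two-class setup (drone vs. background) are assumptions.

```python
import torch
import torch.nn as nn

class DroneCRNN(nn.Module):
    """Sketch of a convolutional-recurrent classifier for drone audio.

    Assumed input: log-mel spectrograms of shape (batch, 1, n_mels, frames);
    the mel front end, layer sizes and class count are illustrative only.
    """
    def __init__(self, n_mels=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(32 * (n_mels // 4), 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)    # e.g. drone vs. background

    def forward(self, x):                     # (batch, 1, n_mels, frames)
        f = self.conv(x)                      # (batch, 32, n_mels/4, frames/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (batch, frames/4, 32*n_mels/4)
        _, h = self.gru(f)
        return self.fc(h.squeeze(0))

logits = DroneCRNN()(torch.randn(8, 1, 64, 128))  # 8 spectrogram clips
```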


Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 81
Author(s):  
Jianbin Xiong ◽  
Dezheng Yu ◽  
Shuangyin Liu ◽  
Lei Shu ◽  
Xiaochan Wang ◽  
...  

Plant phenotypic image recognition (PPIR) is an important branch of smart agriculture. In recent years, deep learning has achieved significant breakthroughs in image recognition. Consequently, PPIR technology that is based on deep learning is becoming increasingly popular. First, this paper introduces the development and application of PPIR technology, followed by its classification and analysis. Second, it presents the theory of four types of deep learning methods and their applications in PPIR. These methods include the convolutional neural network, deep belief network, recurrent neural network, and stacked autoencoder, and they are applied to identify plant species, diagnose plant diseases, etc. Finally, the difficulties and challenges of deep learning in PPIR are discussed.
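
As a toy illustration of one of the surveyed methods (a CNN applied to plant disease diagnosis), the PyTorch sketch below fine-tunes a pretrained backbone; the backbone choice, class count, and dataset are assumptions not taken from the review.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: a pretrained CNN with a new classifier
# head for an assumed 10-class plant disease problem.
def build_plant_disease_cnn(n_classes=10):
    model = models.resnet18(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False             # freeze the pretrained features
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new head
    return model

model = build_plant_disease_cnn()
logits = model(torch.randn(4, 3, 224, 224))  # a batch of leaf images
```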


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 931
Author(s):  
Kecheng Peng ◽  
Xiaoqun Cao ◽  
Bainian Liu ◽  
Yanan Guo ◽  
Wenlong Tian

The intensity variation of the South Asian high (SAH) plays an important role in the formation and extinction of many kinds of mesoscale systems, including tropical cyclones and southwest vortices in the Asian summer monsoon (ASM) region, as well as in precipitation across the whole Asia-Europe region; the SAH has a symmetric vortex structure, and its dynamic field likewise exhibits this symmetry. Few previous studies have focused on the variation of daily SAH intensity. The purpose of this study is to establish a day-to-day prediction model of SAH intensity that can accurately predict not only the interannual variation but also the day-to-day variation of the SAH. Focusing on the summer period when the SAH is strongest, this paper selects geopotential height data between 1948 and 2020 from NCEP to construct SAH intensity datasets. After comparison with classical deep learning methods and various efficient time-series prediction models, we ultimately combine the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) method, which can handle nonlinear and non-stationary single systems, with the Permutation Entropy (PE) method, which can extract the SAH intensity features of the IMFs decomposed by CEEMDAN; the Convolution-based Gated Recurrent Neural Network (ConvGRU) model is then used to train, test, and predict the intensity of the SAH. The prediction results show that the combination of CEEMDAN and ConvGRU achieves higher accuracy and more stable prediction than traditional deep learning models. After removing the redundant features in the time series, the prediction accuracy of the SAH intensity is higher than that of the classical models, which demonstrates that the method is well suited to predicting nonlinear systems in the atmosphere.
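
A condensed sketch of the described pipeline (CEEMDAN decomposition, permutation-entropy-based filtering of IMFs, then a recurrent forecaster) is given below using PyEMD and PyTorch; the input file name, entropy threshold, and the Conv1d+GRU stand-in for a true ConvGRU are assumptions.

```python
import math
import numpy as np
import torch
import torch.nn as nn
from PyEMD import CEEMDAN   # pip install EMD-signal

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D series."""
    n = len(x) - (order - 1) * delay
    patterns = [tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)]
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

# 1) Decompose the daily SAH intensity series into IMFs with CEEMDAN.
intensity = np.loadtxt("sah_intensity.txt")        # hypothetical input file
imfs = CEEMDAN()(intensity)                        # (n_imfs, n_days)

# 2) Drop the noisiest IMFs (highest PE) as redundant features; the 0.9
#    threshold is illustrative only.
keep = [imf for imf in imfs if permutation_entropy(imf) < 0.9]
denoised = np.sum(keep, axis=0)                    # series fed to the forecaster

# 3) Forecast with a small Conv1d + GRU model (stands in for ConvGRU proper).
class ConvGRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=3, padding=1)
        self.gru = nn.GRU(8, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)             # next-day intensity

    def forward(self, x):                          # x: (batch, 1, window)
        f = torch.relu(self.conv(x)).permute(0, 2, 1)
        _, h = self.gru(f)
        return self.fc(h.squeeze(0))
```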


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6460
Author(s):  
Dae-Yeon Kim ◽  
Dong-Sik Choi ◽  
Jaeyun Kim ◽  
Sung Wan Chun ◽  
Hyo-Wook Gil ◽  
...  

In this study, we propose a personalized glucose prediction model using deep learning for hospitalized patients with Type-2 diabetes. We aim for our model to assist the medical personnel who check blood glucose and control insulin doses. Herein, we employed a deep learning algorithm, specifically a recurrent neural network (RNN), that consists of a sequence-processing layer and a classification layer for glucose prediction. We tested a simple RNN, a gated recurrent unit (GRU), and long short-term memory (LSTM), and varied the architectures to determine the one with the best performance. For that, we collected data for a week using a continuous glucose monitoring device. Type-2 inpatients are usually in poor health and exhibit high variability in glucose levels. However, there are few studies on glucose prediction models for Type-2 diabetes, whereas many studies have addressed Type-1 glucose prediction. This work contributes a model whose performance is comparable to previous works on Type-1 patients. For 20 in-hospital patients, we achieved an average root mean squared error (RMSE) of 21.5 and a mean absolute percentage error (MAPE) of 11.1%. A GRU with a single RNN layer and two dense layers was found to be sufficient to predict the glucose level. Moreover, to build a personalized model, at most 50% of the data are required for training.
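
A minimal PyTorch sketch of the kind of model reported as sufficient (a single GRU layer followed by two dense layers mapping a CGM history window to a future glucose value) is shown below; the window length, hidden sizes, and prediction horizon are assumptions.

```python
import torch
import torch.nn as nn

class GlucoseGRU(nn.Module):
    """Sketch: one GRU layer followed by two dense layers, mapping a window
    of CGM readings to a future glucose value.

    Window length, hidden sizes and the prediction horizon are assumptions.
    """
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, 16), nn.ReLU(),
            nn.Linear(16, 1),               # predicted glucose at the horizon
        )

    def forward(self, x):                   # x: (batch, window, 1) CGM values
        _, h = self.gru(x)
        return self.head(h.squeeze(0))

# e.g. 5-minute CGM samples, a 2-hour history window of 24 readings
pred = GlucoseGRU()(torch.randn(16, 24, 1))
```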


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory (LSTM) network; (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on an existing dataset known as the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
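
As an illustration of the second of the three models (a spectrogram-based CNN), a minimal PyTorch sketch is given below; the per-channel spectrogram input shape and layer sizes are assumptions, with the three-channel montage taken from the BCI Competition IV-2b setup.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Sketch of a spectrogram-based CNN for two-class motor imagery.

    Assumed input: per-channel spectrograms stacked as image planes,
    shape (batch, 3, freq_bins, time_frames) for the three channels of
    the BCI Competition IV-2b montage; sizes are illustrative only.
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, n_classes)   # left- vs. right-hand imagery

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

# e.g. spectrograms with 64 frequency bins and 100 time frames per trial
logits = SpectrogramCNN()(torch.randn(32, 3, 64, 100))
```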

