Deep learning of early brain imaging to predict post-arrest electroencephalography

Author(s):  
Jonathan Elmer ◽  
Chang Liu ◽  
Matthew Pease ◽  
Dooman Arefan ◽  
Patrick J. Coppler ◽  
...  

2018 ◽  
Author(s):  
Chentao Wen ◽  
Takuya Miura ◽  
Yukako Fujie ◽  
Takayuki Teramoto ◽  
Takeshi Ishihara ◽  
...  

Abstract: The brain is a complex system that operates through coordinated neuronal activity. Brain-wide cellular calcium imaging techniques have advanced rapidly in recent years and have become powerful tools for understanding the neuronal activity of small animal models. Whole-brain imaging generally requires extracting neuronal activities from three-dimensional (3D) image series. Unfortunately, these image series are acquired under imaging conditions that differ among laboratories, and extracting neuronal activities from the data requires multiple processing steps, so researchers have had to develop their own software; this has prevented the adoption of whole-brain imaging experiments in more laboratories. Here, we combined traditional image processing techniques with a powerful deep-learning method that can be flexibly adapted to 3D image data of the nematode Caenorhabditis elegans obtained under different conditions. We first trained a 3D U-Net to classify each pixel into cell and non-cell categories. Cells merged into a single region were further separated into individual cells by watershed segmentation. The cells were then tracked in 3D space over time by combining a feedforward network and a point set registration method, which exploit the local and global relative positions of the cells, respectively. Remarkably, a single manually annotated 3D image combined with data augmentation was sufficient to train the deep networks to satisfactory tracking performance. Our method correctly tracked more than 98% of neurons in three different image datasets and successfully extracted brain-wide neuronal activities. The method remained robust when the sampling rate was reduced (86% correct with 4 out of 5 frames removed) and when artificial noise was added to the raw images (91% correct with noise at 35 times the background level).
Our results demonstrate that deep learning is applicable across different datasets and can help establish a flexible pipeline for extracting whole-brain activity.
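The segmentation step described in this abstract, splitting a merged cell region from the pixel classifier into individual cells, can be illustrated with a minimal NumPy/SciPy sketch. This is not the authors' pipeline: the two-sphere mask, the 0.6 threshold, and the use of distance-transform peaks as seeds are illustrative stand-ins for the markers a watershed segmentation would grow from.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 3D binary mask: two overlapping spheres, as a U-Net
# cell/non-cell prediction might produce for two adjacent nuclei.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
sphere_a = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 13) ** 2 <= 8 ** 2
sphere_b = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 27) ** 2 <= 8 ** 2
mask = sphere_a | sphere_b

# Plain connected-component labelling sees the merged pair as ONE region.
_, n_merged = ndimage.label(mask)

# The distance transform peaks near each cell centre; thresholding it
# yields one seed per cell, the same idea watershed markers rely on.
dist = ndimage.distance_transform_edt(mask)
seeds, n_cells = ndimage.label(dist > 0.6 * dist.max())

print(n_merged, n_cells)  # merged count vs. separated count
```

In a full pipeline the seeds would then be passed to a watershed routine so that every mask voxel is assigned to its nearest cell.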


2020 ◽  
Author(s):  
Bin Lu ◽  
Hui-Xian Li ◽  
Zhi-Kai Chang ◽  
Le Li ◽  
Ning-Xuan Chen ◽  
...  

Abstract: Beyond detecting brain damage or tumors, little success has been achieved in identifying individual differences and brain disorders with magnetic resonance imaging (MRI). Here, we sought to build industrial-grade brain-imaging-based classifiers to infer two such inter-individual differences, sex and Alzheimer's disease (AD), using deep learning and transfer learning on big data. We pooled brain structural data from 217 sites/scanners to constitute the largest brain MRI sample to date (85,721 samples from 50,876 participants), and applied a state-of-the-art deep convolutional neural network, Inception-ResNet-V2, to build a sex classifier with high generalizability. In cross-dataset validation, the sex classification model classified the sex of any participant, with brain structural imaging data from any scanner, with 94.9% accuracy. We then applied transfer learning based on this model to objectively diagnose AD, achieving 88.4% accuracy in cross-site validation on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and 91.2% / 86.1% accuracy in direct tests on two unseen independent datasets (AIBL / OASIS). When this AD classifier was tested directly on brain images of unseen mild cognitive impairment (MCI) patients, it correctly predicted 63.2% of those who eventually converted to AD, while predicting only 22.1% of the patients who did not convert to AD during follow-up. Predicted scores from the AD classifier correlated significantly with illness severity. By contrast, the transfer learning framework was unable to achieve practical accuracy for psychiatric disorders. To improve the interpretability of the deep learning models, occlusion tests revealed that the hypothalamus, superior vermis, thalamus, amygdala and limbic system areas were critical for predicting sex, while the hippocampus, parahippocampal gyrus, putamen and insula played key roles in predicting AD.
Our trained model, code, preprocessed data and an online prediction website have been openly shared to advance the clinical utility of brain imaging.
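The transfer-learning step this abstract describes, reusing a network trained on one task (sex classification) as a frozen feature extractor for another (AD diagnosis), can be sketched in miniature. The sketch below is purely illustrative and is not the authors' Inception-ResNet-V2 code: a fixed random projection stands in for the frozen backbone, the data are synthetic, and only a new logistic-regression head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random projection
# plus ReLU. In the paper this role is played by the trained sex
# classifier; here everything is a toy assumption.
W_frozen = rng.normal(size=(64, 32))

def backbone(x):
    # Frozen feature extractor: W_frozen is never updated.
    return np.maximum(x @ W_frozen / 8.0, 0.0)

# Synthetic "subjects": two classes, class 1 shifted in input space.
n = 200
x = rng.normal(size=(n, 64))
y = (rng.random(n) > 0.5).astype(float)
x[y == 1] += 0.5

feats = backbone(x)

# Transfer-learning step: train ONLY a new logistic-regression head on
# the frozen features (binary cross-entropy, full-batch gradient descent).
w = np.zeros(32)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid predictions
    grad = p - y                                 # d(loss)/d(logits)
    w -= 0.5 * (feats.T @ grad) / n
    b -= 0.5 * grad.mean()

acc = (((feats @ w + b) > 0) == (y == 1)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The design point is that the backbone's weights stay fixed, so the small target dataset only has to fit the low-dimensional head rather than the whole network.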


2021 ◽  
Vol 15 ◽  
Author(s):  
Laura Tomaz Da Silva ◽  
Nathalia Bianchini Esper ◽  
Duncan D. Ruiz ◽  
Felipe Meneguzzi ◽  
Augusto Buchweitz

Problem: Brain imaging studies of mental health and neurodevelopmental disorders have recently included machine learning approaches that identify patients based solely on their brain activation. The goal is to identify brain-related features that generalize from smaller samples of data to larger ones; in the case of neurodevelopmental disorders, finding these patterns can help in understanding differences in brain function and development that underpin early signs of risk for developmental dyslexia. The success of machine learning classification algorithms on neurofunctional data has been limited to typically homogeneous datasets of a few dozen participants. More recently, larger brain imaging datasets have allowed deep learning techniques to classify brain states and clinical groups solely from neurofunctional features. Indeed, deep learning techniques can provide helpful tools for classification in healthcare applications, including the classification of structural 3D brain images. The adoption of deep learning approaches allows for incremental improvements in the classification performance of larger functional brain imaging datasets, but still lacks diagnostic insight into the underlying brain mechanisms associated with disorders; a related challenge is providing more clinically relevant explanations of the neural features that inform classification.

Methods: We target this challenge by leveraging two network visualization techniques in the convolutional neural network layers responsible for learning high-level features. Using these techniques, we are able to provide meaningful images for expert-backed insights into the condition being classified. We address this challenge using a dataset that includes children diagnosed with developmental dyslexia and typical reader children.

Results: Our results show accurate classification of developmental dyslexia (94.8%) from the brain imaging alone, while providing automatic visualizations of the features involved that match contemporary neuroscientific knowledge (brain regions involved in the reading process for the dyslexic reader group, and brain regions associated with strategic control and attention processes for the typical reader group).

Conclusions: Our visual explanations of deep learning models turn the accurate yet opaque conclusions of the models into evidence about the condition being studied.
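The visualization idea running through this abstract (and the occlusion tests in the Lu et al. abstract above) has a simple mechanical core: slide a blanking patch over the input and record how much the classifier's score drops, so that large drops mark regions the model relies on. A minimal NumPy sketch, with a toy linear scorer standing in for a trained CNN; the 8x8 image, 2x2 patch, and corner-concentrated weights are all illustrative assumptions:

```python
import numpy as np

# Toy "classifier": a fixed linear scorer over an 8x8 image whose
# weights sit in the top-left 3x3 corner. A trained CNN would be
# occluded in exactly the same way.
weights = np.zeros((8, 8))
weights[:3, :3] = 1.0

def score(img):
    return float((img * weights).sum())

img = np.ones((8, 8))
base = score(img)

# Occlusion test: zero out each 2x2 patch in turn and record the
# score drop at that position.
drop = np.zeros((7, 7))
for i in range(7):
    for j in range(7):
        occluded = img.copy()
        occluded[i:i + 2, j:j + 2] = 0.0
        drop[i, j] = base - score(occluded)

peak = np.unravel_index(drop.argmax(), drop.shape)
print(peak)  # the sensitivity map should peak in the weighted corner
```

The resulting `drop` map is the kind of saliency image that can then be overlaid on anatomy for expert interpretation.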


2020 ◽  
Vol 13 (10) ◽  
Author(s):  
Rayyan Manwar ◽  
Xin Li ◽  
Sadreddin Mahmoodkalayeh ◽  
Eishi Asano ◽  
Dongxiao Zhu ◽  
...  
