Learning a cortical parcellation of the brain robust to the MRI segmentation with convolutional neural networks

2020
Vol 61
pp. 101639
Author(s):  
Benjamin Thyreau ◽  
Yasuyuki Taki
2019
Vol 35 (17)
pp. 3208-3210
Author(s):  
Yangzhen Wang ◽  
Feng Su ◽  
Shanshan Wang ◽  
Chaojuan Yang ◽  
Yonglu Tian ◽  
...  

Abstract
Motivation: Functional imaging at single-neuron resolution offers a highly efficient tool for studying functional connectomics in the brain. However, mainstream neuron-detection methods focus on either the morphology or the activity of neurons, which can yield incomplete information and relies heavily on the experimenter's experience.
Results: We developed a convolutional neural network- and fluctuation-based toolbox (ImageCN) to increase the processing power of calcium imaging data. To evaluate the performance of ImageCN, nine different imaging datasets were recorded from awake mouse brains. ImageCN demonstrated superior neuron-detection performance compared with other algorithms. Furthermore, ImageCN does not require sophisticated training for users.
Availability and implementation: ImageCN is implemented in MATLAB. The source code and documentation are available at https://github.com/ZhangChenLab/ImageCN.
Supplementary information: Supplementary data are available at Bioinformatics online.
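The fluctuation cue that activity-based detection relies on can be illustrated with a few lines of code. This is a minimal Python sketch on synthetic data, not ImageCN itself (which is implemented in MATLAB and combines this cue with a convolutional neural network); all array shapes and the threshold rule are assumptions for illustration.

```python
import numpy as np

# Pixels inside active neurons vary over time, so their temporal standard
# deviation stands out against a static background -- the "fluctuation" cue.
rng = np.random.default_rng(0)
movie = rng.normal(100.0, 1.0, size=(200, 32, 32))        # frames x height x width
movie[:, 10:14, 10:14] += 20.0 * rng.random((200, 1, 1))  # one synthetic active cell

fluct = movie.std(axis=0)                              # per-pixel temporal std
candidates = fluct > fluct.mean() + 3.0 * fluct.std()  # crude activity mask
```

A real pipeline would follow this map with segmentation of the candidate pixels into neuron footprints; here the thresholded mask simply flags the active patch.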


Author(s):  
Alexey Sulavko ◽  
Pavel Lozhnikov ◽  
Adil Choban ◽  
Denis Stadnikov ◽  
Alexey Nigrey ◽  
...  

Introduction: Electroencephalograms (EEGs) contain information about the individual characteristics of brain activity and the psychophysiological state of a subject.
Purpose: To evaluate the identification potential of EEG, and to develop methods that identify users, their psychophysiological states, and the activities they perform on a computer from their EEGs using convolutional neural networks.
Results: The information content of EEG rhythms was assessed from the viewpoint of identifying a person and his or her state. High identification accuracy (98.5–99.99% for 10 electrodes; 96.47% for the two electrodes Fp1 and Fp2) was achieved with a short recording time (2–2.5 s). A significant decrease in accuracy was detected when the person was in different psychophysiological states during training and testing, an aspect that earlier studies did not give enough attention. A method is proposed for increasing the robustness of identity recognition under altered psychophysiological states. Accuracies of 82–94% were achieved in recognizing states of alcohol intoxication, drowsiness, or physical fatigue, and of 77.8–98.72% in recognizing the user's activity (reading, typing, or watching video).
Practical relevance: The results can be applied in security and remote-monitoring applications.
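Feeding 2–2.5 s samples to a classifier presupposes splitting the continuous recording into fixed-length windows. A minimal Python sketch of that preprocessing step follows; the sampling rate, window length, and function name are assumptions, not details from the study.

```python
import numpy as np

def epoch(x, fs, win_s=2.5):
    """Split a channels x samples recording into fixed-length windows.

    Returns an array of shape (n_windows, n_channels, win_samples);
    trailing samples that do not fill a whole window are dropped.
    """
    step = int(win_s * fs)
    n = x.shape[1] // step
    return x[:, :n * step].reshape(x.shape[0], n, step).transpose(1, 0, 2)

fs = 200                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
recording = rng.standard_normal((2, 10 * fs))  # stand-in Fp1 and Fp2, 10 s
epochs = epoch(recording, fs)                  # -> (4, 2, 500)
```

Each window is then a self-contained input for a convolutional network, which matches the short identification times reported above.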


2020
Vol 9 (5)
pp. 1890-1898
Author(s):  
Esmeralda C. Djamal ◽  
Rizkia I. Ramadhan ◽  
Miranti I. Mandasari ◽  
Deswara Djajasasmita

Post-stroke patients need ongoing rehabilitation to restore function lost to the attack, so a monitoring device is required. EEG signals reflect electrical activity in the brain and thus also inform on the recovery of post-stroke patients, but an EEG signal-processing model is needed to extract that information. The development of deep learning allows it to be applied to the identification of post-stroke patients. This study proposed a method for identifying post-stroke patients using convolutional neural networks (CNNs). Wavelet decomposition is used to extract information from the EEG signal as machine-learning features that reflect the condition of post-stroke patients: the Delta, Theta, Alpha, Beta, and Mu waves. In addition to these five waves, an amplitude feature is added to match the characteristics of the post-stroke EEG signal. The results showed that the feature configuration is essential for discrimination: the accuracy on the testing data was 90% with the amplitude and Beta features, compared to 70% without them. The experimental results also showed that the adaptive moment estimation (Adam) optimizer was more stable than stochastic gradient descent (SGD), although SGD could provide higher accuracy than Adam.
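The study extracts the rhythm features by wavelet decomposition; as a rough FFT-based stand-in, the sketch below computes per-band spectral power for the five named rhythms plus the amplitude feature. The band edges are conventional values and the sampling rate is an assumption, neither taken from the paper.

```python
import numpy as np

# Frequency bands (Hz); edges are conventional EEG values, not from the paper.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "mu": (8, 12), "beta": (13, 30)}

def band_features(signal, fs):
    """Per-band spectral power plus a peak-amplitude feature for one channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    feats = {name: power[(freqs >= lo) & (freqs < hi)].sum()
             for name, (lo, hi) in BANDS.items()}
    feats["amplitude"] = float(np.max(np.abs(signal)))
    return feats

fs = 128.0                                            # assumed sampling rate
t = np.arange(0, 4, 1 / fs)
feats = band_features(np.sin(2 * np.pi * 10 * t), fs) # pure 10 Hz alpha-band tone
```

A 10 Hz test tone concentrates its power in the Alpha (and overlapping Mu) band, which is the sanity check one would want before feeding such features to a CNN.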


2021
Author(s):  
Giulia Maria Mattia ◽  
Federico Nemmi ◽  
Edouard Villain ◽  
Marie-Véronique Le Lann ◽  
Xavier Franceries ◽  
...  

Convolutional neural networks are gradually being recognized in the neuroimaging community as a powerful tool for image analysis. In the present study, we tested the ability of 3D convolutional neural networks to discriminate between whole-brain parametric maps obtained from diffusion-weighted magnetic resonance imaging. Original parametric maps were subjected to intensity-based region-specific alterations, to create altered maps. To analyze how position, size and intensity of altered regions affected the networks’ learning process, we generated monoregion and biregion maps by systematically modifying the size and intensity of one or two brain regions in each image. We assessed network performance over a range of intensity increases and combinations of maps, carrying out 10-fold cross-validation and using a hold-out set for testing. We then tested the networks trained with monoregion images on the corresponding biregion images and vice versa. Results showed an inversely proportional link between size and intensity for the monoregion networks, in that the larger the region, the smaller the increase in intensity needed to achieve good performance. Accuracy was better for biregion networks than for their monoregion counterparts, showing that altering more than one region in the brain can improve discrimination. Monoregion networks correctly detected their target region in biregion maps, whereas biregion networks could only detect one of the two target regions at most. Biregion networks therefore learned a more complex pattern that was absent from the monoregion images. This deep learning approach could be tailored to explore the behavior of other convolutional neural networks for other regions of interest.
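The monoregion/biregion data generation described above amounts to scaling intensities inside one or two masks of a base map. A minimal Python sketch, assuming cubic stand-in regions and a 20% increase (the real regions and percentages come from the study's parametric maps):

```python
import numpy as np

def alter(base, mask, pct):
    """Return a copy of a 3-D map with intensity raised by pct % inside mask."""
    out = base.copy()
    out[mask] *= 1.0 + pct / 100.0
    return out

rng = np.random.default_rng(0)
base = rng.random((16, 16, 16))   # stand-in for a whole-brain parametric map

# Two hypothetical target regions (cubes here; atlas regions in the study).
region_a = np.zeros(base.shape, dtype=bool); region_a[2:6, 2:6, 2:6] = True
region_b = np.zeros(base.shape, dtype=bool); region_b[9:13, 9:13, 9:13] = True

mono = alter(base, region_a, 20)  # monoregion map: one altered region
bi = alter(mono, region_b, 20)    # biregion map: both regions altered
```

Training one network on `mono`-style maps and another on `bi`-style maps, then swapping the test sets, reproduces the cross-testing design of the study.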


2018
Vol 11 (3)
pp. 1457-1461
Author(s):  
J. Seetha ◽  
S. Selvakumar Raja

Brain tumors are among the most common and aggressive diseases, leading to a very short life expectancy at their highest grade. Treatment planning is therefore a key stage in improving patients' quality of life. Various imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, are used to evaluate tumors in the brain, lung, liver, breast, prostate, etc.; in this work, MRI images are used to diagnose tumors in the brain. However, the huge amount of data generated by MRI scans thwarts manual tumor-vs-non-tumor classification in reasonable time, and accurate quantitative measurements can be provided only for a limited number of images. Reliable automatic classification schemes are therefore essential to reduce the death rate. Automatic brain tumor classification is a very challenging task owing to the large spatial and structural variability of the region surrounding a brain tumor. In this work, automatic brain tumor detection is proposed using convolutional neural network (CNN) classification. A deeper architecture is designed using small kernels, and the neuron weights are kept small. Experimental results show that the CNN achieves 97.5% accuracy with low complexity compared with all other state-of-the-art methods.
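The "deeper architecture with small kernels" idea rests on a well-known trade-off: two stacked 3×3 convolutions cover the same 5×5 receptive field as one large kernel but with fewer weights. A naive numpy sketch (an illustration of the principle, not the paper's implementation):

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation with a small kernel."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(49, dtype=float).reshape(7, 7)
k3 = np.full((3, 3), 1.0 / 9.0)   # small 3x3 averaging kernel

once = conv2d(img, k3)            # 7x7 -> 5x5
twice = conv2d(once, k3)          # 5x5 -> 3x3; receptive field is now 5x5
# Two 3x3 kernels carry 18 weights versus 25 for a single 5x5 kernel,
# which is what lets a deeper design stay cheap.
```

In a real CNN each convolution would be followed by a nonlinearity, which is the second advantage of stacking small kernels: more nonlinear stages per receptive field.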


Author(s):  
Vandana Mohindru ◽  
Ashutosh Sharma ◽  
Apurv Mathur ◽  
Anuj Kumar Gupta

Background: Determining tumor extent is a major challenge in brain tumor planning and quantitative evaluation. Magnetic resonance imaging (MRI), a non-invasive technique using non-ionizing radiation, has emerged as a front-line diagnostic tool for brain tumors.
Objectives: Among brain tumors, gliomas are the most basic; they may be less or more aggressive, in the latter case with a patient life expectancy of no more than 2 years. Manual segmentation is time-consuming and its quality depends heavily on the operator's experience, so we use a deep convolutional neural network to improve performance.
Methods: This paper proposes fully automatic segmentation of brain tumors using deep convolutional neural networks, applied to high-grade glioma images from the BRATS 2016 database. The suggested work performs brain tumor segmentation with TensorFlow, using the Anaconda framework to execute high-level mathematical functions.
Results: The research segments brain tumors into four classes: edema, non-enhancing tumor, enhancing tumor, and necrotic tumor. Brain tumor segmentation must separate healthy tissue from tumor regions such as advancing tumor, necrotic core, and surrounding edema. We present a process to segment a 3D MRI image of a brain tumor into healthy areas and tumor areas, including their separate sub-areas. Categorization is completed using a soft-margin SVM classifier.
Conclusion: We use deep convolutional neural networks for brain tumor segmentation. Outcomes of the BRATS 2016 online evaluation encourage us to improve the performance, accuracy, and speed of our best model. The fuzzy c-means algorithm provides better accuracy for training the SVM-based classifier. The best performance and accuracy can be achieved with the novel two-pathway architecture, i.e. an encoder and decoder, together with local label modeling based on stacking two CNNs.
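The soft-margin SVM used for the final categorization can be sketched from scratch with sub-gradient descent on the hinge loss. This is a toy stand-in on synthetic 2-D data, not the paper's classifier; the class centers, hyperparameters, and function name are all assumptions.

```python
import numpy as np

def train_soft_margin_svm(X, y, C=1.0, lr=0.01, epochs=300):
    """Batch sub-gradient descent on 0.5*||w||^2 + C * sum of hinge losses.

    X is n x d, y is in {-1, +1}; C controls how softly margin
    violations are penalized.
    """
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1.0                 # points inside the margin
        w -= lr * (w - C * (y[viol, None] * X[viol]).sum(axis=0))
        b -= lr * (-C * y[viol].sum())
    return w, b

rng = np.random.default_rng(0)
pos = rng.normal([2.0, 2.0], 0.3, size=(20, 2))      # toy "tumor" features
neg = rng.normal([-2.0, -2.0], 0.3, size=(20, 2))    # toy "healthy" features
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(20), -np.ones(20)])

w, b = train_soft_margin_svm(X, y)
preds = np.sign(X @ w + b)
```

In the paper's pipeline the inputs to such a classifier would be features derived from the CNN segmentation rather than raw coordinates.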


2021
Vol 12 (1)
Author(s):  
Yaoda Xu ◽  
Maryam Vaziri-Pashkam

Abstract
Convolutional neural networks (CNNs) are increasingly used to model human vision due to their high object-categorization capabilities and general correspondence with human brain responses. Here we evaluate the performance of 14 different CNNs compared with human fMRI responses to natural and artificial images using representational similarity analysis. Despite the presence of some CNN-brain correspondence and CNNs’ impressive ability to fully capture lower-level visual representations of real-world objects, we show that CNNs do not fully capture higher-level visual representations of real-world objects, nor those of artificial objects at either lower or higher levels of visual representation. The latter is particularly critical, as the processing of both real-world and artificial visual stimuli engages the same neural circuits. We report similar results regardless of differences in CNN architecture, training, or the presence of recurrent processing. This indicates that some fundamental differences exist in how the brain and CNNs represent visual information.
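Representational similarity analysis reduces both brain and network responses to condition-by-condition dissimilarity matrices (RDMs) and then correlates those matrices. A minimal sketch on synthetic data, using Pearson correlation throughout for brevity (rank correlation is the more common choice in the RSA literature):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between rows
    (one row per stimulus condition, one column per voxel or unit)."""
    return 1.0 - np.corrcoef(patterns)

def rsa(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

rng = np.random.default_rng(0)
brain = rng.standard_normal((6, 50))                 # 6 conditions x 50 voxels
model = brain + 0.3 * rng.standard_normal((6, 50))   # layer sharing the brain's code
score = rsa(rdm(brain), rdm(model))                  # high: same representation
noise = rsa(rdm(brain), rdm(rng.standard_normal((6, 50))))  # unrelated baseline
```

Comparing a CNN layer's RDM against a brain region's RDM in this way is what "fully capture a visual representation" operationalizes in the study.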

