3D U-Net improves automatic brain extraction for isotropic rat brain MRI data

2021 ◽  
Author(s):  
Li-Ming Hsu ◽  
Shuai Wang ◽  
Lindsay Walton ◽  
Tzu-Wen Winnie Wang ◽  
Sung-Ho Lee ◽  
...  

Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, this is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher-resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts, creating a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation (RATS), Pulse-Coupled Neural Network (PCNN), SHape descriptor selected External Regions after Morphologically filtering (SHERM), and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, center-of-mass distance, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated the optimal training sample size, and disseminated all source code publicly, in the hope that this approach will benefit the rodent MRI research community.
Significant methodological contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance compared to current automatic brain extraction methods, as shown in several quantitative metrics (Dice, Jaccard, PPV, SEN, and Hausdorff distance).
We trust that this tool will reduce human bias and streamline pre-processing during high-resolution 3D rodent brain MRI data analysis. The software developed herein has been disseminated freely to the community.
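The overlap metrics reported above (Dice, Jaccard) reduce to simple set arithmetic on binary voxel masks. A minimal pure-Python sketch, using toy voxel sets rather than real MRI volumes:

```python
# Illustrative only: Dice and Jaccard overlap between two binary brain masks,
# represented here as sets of (x, y, z) voxel coordinates. Real pipelines
# operate on full 3D arrays (e.g. with NumPy), but the arithmetic is the same.

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

def jaccard(mask_a, mask_b):
    """Jaccard index: |A∩B| / |A∪B|."""
    return len(mask_a & mask_b) / len(mask_a | mask_b)

# Toy predicted and ground-truth masks
pred  = {(0, 0, 0), (0, 1, 0), (1, 0, 0)}
truth = {(0, 0, 0), (0, 1, 0), (1, 1, 0)}
print(dice(pred, truth))     # 2*2 / (3+3)
print(jaccard(pred, truth))  # 2 / 4
```

Hausdorff distance, the third reported metric, instead measures the worst-case boundary disagreement and needs voxel geometry rather than set sizes.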



2019 ◽  
Vol 9 (3) ◽  
pp. 569 ◽  
Author(s):  
Hyunho Hwang ◽  
Hafiz Zia Ur Rehman ◽  
Sungon Lee

Skull stripping in brain magnetic resonance imaging (MRI) is an essential step in analyzing images of the brain. Although manual segmentation has the highest accuracy, it is a time-consuming task. Therefore, various automatic brain segmentation algorithms for MRI have been devised and proposed. However, no method yet solves the entire brain extraction problem satisfactorily for diverse datasets in a generic and robust way. To address these shortcomings of existing methods, we propose the use of a 3D-UNet for skull stripping in brain MRI. The 3D-UNet was recently proposed and has been widely used for volumetric segmentation of medical images due to its outstanding performance. It is an extended version of the previously proposed 2D-UNet, which is based on a deep learning network, specifically a convolutional neural network. We evaluated 3D-UNet skull stripping using a publicly available brain MRI dataset and compared the results with three existing methods (BSE, ROBEX, and Kleesiek's method; BSE and ROBEX are conventional methods, and Kleesiek's method is based on deep learning). The 3D-UNet outperforms the two conventional methods and shows results comparable with the deep learning-based algorithm, exhibiting a mean Dice coefficient of 0.9903, a sensitivity of 0.9853, and a specificity of 0.9953.
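The Dice, sensitivity, and specificity figures above all derive from voxel-wise confusion counts. A small illustrative sketch (the counts and function name are invented, not taken from the paper's code):

```python
# Illustrative only: how skull-stripping quality metrics follow from the
# voxel-wise confusion matrix (TP/FP/FN/TN against the ground-truth mask).

def segmentation_metrics(tp, fp, fn, tn):
    dice        = 2 * tp / (2 * tp + fp + fn)   # overlap with ground truth
    sensitivity = tp / (tp + fn)                # brain voxels correctly kept
    specificity = tn / (tn + fp)                # non-brain voxels correctly removed
    return dice, sensitivity, specificity

# Toy counts for a small volume
d, sen, spe = segmentation_metrics(tp=980, fp=10, fn=20, tn=8990)
print(round(d, 4), round(sen, 4), round(spe, 4))
```

Note that with heavily imbalanced voxel counts (far more non-brain than brain), specificity is easy to saturate, which is why Dice is usually the headline number.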


2021 ◽  
Vol 23 (07) ◽  
pp. 516-529
Author(s):  
Reshma L ◽  
Sai Priya Nalluri ◽  
Priya R Sankpal ◽  
...  

In this paper, a user-friendly system has been developed that provides the results of medical analysis of digital images, such as magnetic resonance imaging (MRI) scans of the brain, for the detection and classification of dementia. Small structural differences in the brain can slowly and gradually develop into a major disease such as dementia. The progression of dementia can be slowed when it is identified early. Hence, this paper aims at developing a robust system for classifying and identifying dementia at the earliest stage. Learning-based methods were chosen for the initial disclosure and diagnosis of dementia since they can give important results in a shorter period of time. Machine learning methods such as K-means clustering, pattern recognition, and a multi-class Support Vector Machine (SVM) have been used to classify different stages of dementia. The goal of this study is to provide a user interface for learning-based dementia classification using brain MRI data. The results show that the developed method has an accuracy of 96% and may be utilized to detect people who have dementia or are in its early stages.
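Of the methods listed, K-means clustering is simple enough to sketch directly. A toy 1-D version on scalar values (the real system would cluster MRI voxel intensities or features; `kmeans_1d` and its inputs are illustrative):

```python
# Illustrative only: minimal 1-D k-means. Alternately assign each value to its
# nearest center, then move each center to the mean of its assigned values.

def kmeans_1d(values, centers, iters=20):
    for _ in range(iters):
        # assignment step: nearest center wins
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # update step: center = cluster mean (keep old center if cluster empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious intensity groups
vals = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
print(sorted(kmeans_1d(vals, centers=[0.0, 5.0])))
```

In a pipeline like the one described, clustering would produce candidate tissue groups whose features are then fed to the SVM classifier.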


2021 ◽  
Author(s):  
Zilong Zeng ◽  
Tengda Zhao ◽  
Lianglong Sun ◽  
Yihe Zhang ◽  
Mingrui Xia ◽  
...  

Precise segmentation of infant brain MR images into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) is essential for studying neuroanatomical hallmarks of early brain development. However, for 6-month-old infants, the extremely low intensity contrast caused by inherent myelination hinders accurate tissue segmentation. Existing convolutional neural network (CNN)-based segmentation models for this task generally employ single-scale symmetric convolutions, which are inefficient for encoding the isointense tissue boundaries given the limited samples of baby brain images. Here, we propose a 3D mixed-scale asymmetric convolutional segmentation network (3D-MASNet) framework for brain MR images of 6-month-old infants. During the training phase, we replaced the traditional convolutional layers of an existing to-be-trained network with 3D mixed-scale convolution blocks consisting of asymmetric kernels (MixACB) and then equivalently converted the result back into the original network. Five canonical CNN segmentation models were evaluated using both T1- and T2-weighted images of 23 6-month-old infants from the iSeg-2019 dataset, which contains manual labels as ground truth. MixACB significantly enhanced the average accuracy of all five models, obtained the largest improvement in the fully convolutional network model (CC-3D-FCN), and reached the highest performance in the Dense U-Net model. This approach further obtained Dice coefficients of 0.931, 0.912, and 0.961 in GM, WM, and CSF, respectively, ranking first among 30 teams on the validation dataset of the iSeg-2019 Grand Challenge. Thus, the proposed 3D-MASNet can improve the accuracy of existing CNN-based segmentation models as a plug-and-play solution, offering a promising technique for future infant brain MRI studies.
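The "train with asymmetric kernels, then equivalently convert back" step rests on convolution being additive: parallel square and asymmetric branches applied to the same input can be folded into a single square kernel after training. A hedged 2-D toy of that fusion (kernel values are arbitrary, and the paper works with 3D kernels):

```python
# Illustrative only: folding parallel 3x3, 1x3, and 3x1 convolution branches
# into one 3x3 kernel. Because conv(x, k1) + conv(x, k2) == conv(x, k1 + k2),
# the asymmetric kernels can be zero-padded to 3x3 and summed element-wise.

def fuse_kernels(k3x3, k1x3, k3x1):
    """Zero-pad the asymmetric kernels to 3x3 and sum all three branches."""
    fused = [row[:] for row in k3x3]
    for j in range(3):              # 1x3 kernel occupies the middle row
        fused[1][j] += k1x3[j]
    for i in range(3):              # 3x1 kernel occupies the middle column
        fused[i][1] += k3x1[i]
    return fused

square     = [[1, 0, 1], [0, 2, 0], [1, 0, 1]]
horizontal = [1, -1, 1]
vertical   = [2, 0, 2]
print(fuse_kernels(square, horizontal, vertical))
```

This is why the converted network is exactly equivalent at inference time: the fused kernel computes the same output as the three branches combined, at no extra cost.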


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Adekanmi Adeyinka Adegun ◽  
Serestina Viriri ◽  
Roseline Oluwaseun Ogundokun

Localization of the region of interest (ROI) is paramount in the analysis of medical images to assist in the identification and detection of diseases. In this research, we explore the application of a deep learning approach to the analysis of several types of medical images. Traditional methods have been restricted by the coarse and granulated appearance of most of these images. Recently, deep learning techniques have produced promising results in the segmentation of medical images for the diagnosis of diseases. This research experiments on medical images using a robust deep learning architecture based on the Fully Convolutional Network (FCN)-UNET method for the segmentation of three types of medical images: skin lesion, retinal, and brain Magnetic Resonance Imaging (MRI) images. The proposed method can efficiently identify the ROI in these images to assist in the diagnosis of diseases such as skin cancer, eye defects and diabetes, and brain tumors. The system was evaluated on publicly available databases, including the International Symposium on Biomedical Imaging (ISBI) skin lesion images, retina images, and brain tumor datasets, achieving over 90% accuracy and Dice coefficient.


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Liyao Song ◽  
Quan Wang ◽  
Ting Liu ◽  
Haiwei Li ◽  
Jiancun Fan ◽  
...  

Spatial resolution is a key factor in quantitatively evaluating the quality of magnetic resonance imagery (MRI). Super-resolution (SR) approaches can improve spatial resolution by reconstructing high-resolution (HR) images from low-resolution (LR) ones to meet clinical and scientific requirements. To increase the quality of brain MRI, we study a robust residual-learning SR network (RRLSRN) to generate a sharp HR brain image from an LR input. Because the Charbonnier loss can handle outliers well and the Gradient Difference Loss (GDL) can sharpen an image, we combined the two to improve the robustness of the model and enhance the texture information of the SR results. Two adult brain MRI datasets, Kirby 21 and NAMIC, were used to train and verify the effectiveness of our model. To further verify the generalizability and robustness of the proposed model, we collected eight sets of clinical 2D fetal brain MRI data for evaluation. The experimental results show that the proposed deep residual-learning network achieved superior performance and high efficiency over the other compared methods.
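Both loss terms are easy to state concretely: Charbonnier is a smooth, outlier-tolerant variant of the L1 loss, and the gradient-difference term compares adjacent-pixel differences so that blurred edges are penalized. A 1-D pure-Python sketch (signal values are invented; the paper applies these to full 2-D images):

```python
# Illustrative only: Charbonnier loss sqrt(diff^2 + eps^2) plus a 1-D
# gradient-difference term, the two components the RRLSRN combines.
import math

def charbonnier(pred, target, eps=1e-3):
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

def gradient_difference(pred, target):
    # compare adjacent-pixel differences (discrete gradients), not raw values
    gp = [pred[i + 1] - pred[i] for i in range(len(pred) - 1)]
    gt = [target[i + 1] - target[i] for i in range(len(target) - 1)]
    return sum(abs(a - b) for a, b in zip(gp, gt)) / len(gp)

sr = [0.1, 0.5, 0.9, 0.8]   # toy super-resolved signal
hr = [0.0, 0.5, 1.0, 0.8]   # toy ground-truth high-resolution signal
loss = charbonnier(sr, hr) + gradient_difference(sr, hr)
print(loss)
```

In practice the two terms would be weighted relative to each other; the weighting is a tunable hyperparameter not shown here.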


2019 ◽  
Vol 9 (18) ◽  
pp. 3849 ◽  
Author(s):  
Hiroyuki Sugimori ◽  
Masashi Kawakami

Recently, deep learning technology has been applied to medical images. This study aimed to create a detector able to automatically detect an anatomical structure in a brain magnetic resonance imaging (MRI) scan in order to draw a standard line. A total of 1200 sagittal brain MRI scans were used for training and validation. Two sizes of regions of interest (ROIs) were drawn on each anatomical structure, measuring 64 × 64 pixels and 32 × 32 pixels, respectively. Data augmentation was applied to these ROIs. The Faster Region-based Convolutional Neural Network was used as the network model for training. The trained detectors were validated to evaluate detection precision. Anatomical structures detected by the model were then processed to draw the standard line. The average precision of anatomical detection, the detection rate of the standard line, and the accuracy rate of achieving a correct drawing were evaluated. For the 64 × 64-pixel ROI, the mean average precision reached 0.76 ± 0.04, higher than the outcome achieved with the 32 × 32-pixel ROI. Moreover, the detection and accuracy rates at an angle difference of 10 degrees for the orbitomeatal line were 93.3 ± 5.2% and 76.7 ± 11.0%, respectively. The automatic detection of a reference line for brain MRI can help technologists improve this examination.
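Average-precision evaluation of such a detector rests on the intersection-over-union (IoU) between predicted and ground-truth ROI boxes. A minimal sketch (box coordinates are illustrative, not taken from the study):

```python
# Illustrative only: IoU between two axis-aligned boxes given as
# (x1, y1, x2, y2). A detection typically counts as correct when IoU
# exceeds a threshold such as 0.5.

def box_iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred_box  = (10, 10, 74, 74)   # a 64 x 64 predicted ROI
truth_box = (20, 20, 84, 84)   # ground-truth ROI, shifted by 10 px
print(box_iou(pred_box, truth_box))
```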


2014 ◽  
Vol 221 ◽  
pp. 175-182 ◽  
Author(s):  
Ipek Oguz ◽  
Honghai Zhang ◽  
Ashley Rumple ◽  
Milan Sonka

2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Si Zhang ◽  
Hanghang Tong ◽  
Jiejun Xu ◽  
Ross Maciejewski

Graphs naturally appear in numerous application domains, ranging from social analysis and bioinformatics to computer vision. The unique capability of graphs to capture the structural relations among data allows harvesting more insights compared to analyzing data in isolation. However, it is often very challenging to solve learning problems on graphs, because (1) many types of data, such as images and text, are not originally structured as graphs, and (2) for graph-structured data, the underlying connectivity patterns are often complex and diverse. On the other hand, representation learning has achieved great success in many areas. Thereby, a potential solution is to learn representations of graphs in a low-dimensional Euclidean space such that the graph properties are preserved. Although tremendous effort has been made to address the graph representation learning problem, many approaches still suffer from shallow learning mechanisms. Deep learning models on graphs (e.g., graph neural networks) have recently emerged in machine learning and related areas and demonstrated superior performance in various problems. In this survey, among the numerous types of graph neural networks, we conduct a comprehensive review specifically of the emerging field of graph convolutional networks, one of the most prominent graph deep learning models. First, we group existing graph convolutional network models into two categories based on the types of convolutions and highlight some models in detail. Then, we categorize different graph convolutional networks according to their areas of application. Finally, we present several open challenges in this area and discuss potential directions for future research.
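At its core, a single graph-convolution layer amounts to "average each node's neighborhood features (including its own), then apply a shared weight." A dependency-free toy with scalar node features (the adjacency, features, and weight are invented for illustration):

```python
# Illustrative only: one mean-aggregation graph-convolution layer on a tiny
# graph. Real GCNs use feature matrices, learned weight matrices, and a
# nonlinearity; the propagation pattern is the same.

def gcn_layer(adj, feats, weight):
    out = []
    for i, row in enumerate(adj):
        # gather neighbor features plus a self-loop, then mean-aggregate
        neigh = [feats[j] for j, e in enumerate(row) if e] + [feats[i]]
        out.append(weight * sum(neigh) / len(neigh))
    return out

# Triangle graph: edges 0-1, 1-2, 0-2
adj   = [[0, 1, 1],
         [1, 0, 1],
         [1, 1, 0]]
feats = [1.0, 2.0, 3.0]
print(gcn_layer(adj, feats, weight=0.5))
```

Note how aggregation smooths features across connected nodes, which is exactly the structural-relation capture the survey describes; stacking layers widens each node's receptive field over the graph.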

