3D U-Net Improves Automatic Brain Extraction for Isotropic Rat Brain Magnetic Resonance Imaging Data

2021, Vol. 15
Author(s): Li-Ming Hsu, Shuai Wang, Lindsay Walton, Tzu-Wen Winnie Wang, Sung-Ho Lee, ...

Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, this is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher-spatial-resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts, creating a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation (RATS), Pulse-Coupled Neural Network (PCNN), SHape descriptor selected External Regions after Morphologically filtering (SHERM), and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, center-of-mass distance, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated the optimal training sample sizes, and disseminated all source code publicly, with the hope that this approach will benefit the rodent MRI research community.

Significant methodological contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance over current automatic brain extraction methods, as shown by several quantitative metrics (Dice, Jaccard, positive predictive value, sensitivity, and Hausdorff distance). We trust that this tool will avoid human bias and streamline pre-processing steps during analysis of 3D high-resolution rodent brain MRI data. The software developed herein has been disseminated freely to the community.
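For readers implementing a similar comparison, the sketch below shows how the overlap and distance metrics named above (Dice, Jaccard, sensitivity, center-of-mass distance, and Hausdorff distance) can be computed from binary brain masks. It is a minimal NumPy/SciPy illustration, not the authors' released code; the mask arrays, voxel-unit distances, and the use of all foreground voxels for the Hausdorff computation are assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import directed_hausdorff


def brain_mask_metrics(pred, truth):
    """Overlap and distance metrics between two binary 3D brain masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)

    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()

    dice = 2 * tp / (2 * tp + fp + fn)     # overlap between the two masks
    jaccard = tp / (tp + fp + fn)          # intersection over union
    sensitivity = tp / (tp + fn)           # fraction of true brain recovered

    # Distance between the centers of mass of the two masks (in voxels).
    com_distance = np.linalg.norm(
        np.array(ndimage.center_of_mass(pred))
        - np.array(ndimage.center_of_mass(truth))
    )

    # Symmetric Hausdorff distance, approximated here over all foreground
    # voxel coordinates for brevity (surface voxels are typically used).
    pred_pts = np.argwhere(pred)
    truth_pts = np.argwhere(truth)
    hausdorff = max(directed_hausdorff(pred_pts, truth_pts)[0],
                    directed_hausdorff(truth_pts, pred_pts)[0])

    return {"dice": dice, "jaccard": jaccard, "sensitivity": sensitivity,
            "com_distance": com_distance, "hausdorff": hausdorff}
```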

2019, Vol. 9 (18), pp. 3849
Author(s): Hiroyuki Sugimori, Masashi Kawakami

Recently, deep learning technology has been applied to medical images. This study aimed to create a detector able to automatically detect an anatomical structure in a brain magnetic resonance imaging (MRI) scan in order to draw a standard line. A total of 1200 sagittal brain MRI scans were used for training and validation. Two sizes of regions of interest (ROIs), 64 × 64 pixels and 32 × 32 pixels, were drawn on each anatomical structure, and data augmentation was applied to these ROIs. A faster region-based convolutional neural network (Faster R-CNN) was used as the network model for training. The resulting detectors were validated to evaluate detection precision, and the anatomical structures they detected were processed to draw the standard line. The average precision of anatomical detection, the detection rate of the standard line, and the rate of achieving a correct drawing were evaluated. For the 64 × 64-pixel ROI, the mean average precision was 0.76 ± 0.04, higher than that obtained with the 32 × 32-pixel ROI. Moreover, at an angular difference threshold of 10 degrees for the orbitomeatal line, the detection and accuracy rates were 93.3 ± 5.2% and 76.7 ± 11.0%, respectively. Automatic detection of a reference line for brain MRI can help technologists improve this examination.
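As a purely illustrative sketch of the final step described above (turning detected anatomical structures into a standard line and scoring the angular difference at a 10-degree threshold), the snippet below assumes each detection is a bounding box and that two hypothetical landmarks define the orbitomeatal line; the landmark choice, box format, and tolerance logic are assumptions, not the authors' implementation.

```python
import numpy as np


def box_center(box):
    """Center (x, y) of a bounding box given as (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = box
    return np.array([(x_min + x_max) / 2.0, (y_min + y_max) / 2.0])


def line_angle_deg(p1, p2):
    """Angle of the line through p1 and p2, in degrees relative to the x-axis."""
    dx, dy = np.asarray(p2, float) - np.asarray(p1, float)
    return np.degrees(np.arctan2(dy, dx))


def within_tolerance(detected_boxes, reference_points, tol_deg=10.0):
    """True if the drawn line is within tol_deg of the reference line."""
    drawn = line_angle_deg(box_center(detected_boxes[0]),
                           box_center(detected_boxes[1]))
    reference = line_angle_deg(reference_points[0], reference_points[1])
    diff = abs(drawn - reference) % 180.0   # lines have no direction
    return min(diff, 180.0 - diff) <= tol_deg


# Example: boxes around two hypothetical landmarks (e.g., orbit and external
# auditory meatus) versus a manually annotated reference line.
boxes = [(100, 120, 132, 152), (260, 150, 292, 182)]
reference = [(116, 138), (278, 168)]
print(within_tolerance(boxes, reference))
```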


2021, Vol. 11 (1)
Author(s): Jae-Young Kim, Dongwook Kim, Kug Jin Jeon, Hwiyoung Kim, Jong-Ki Huh

The goal of this study was to develop a deep learning-based algorithm to predict temporomandibular joint (TMJ) disc perforation from magnetic resonance imaging (MRI) findings and to validate its performance through comparison with previously reported results. Study subjects were obtained by reviewing medical records from January 2005 to June 2018; 299 joints from 289 patients were divided into perforated and non-perforated groups based on whether disc perforation was confirmed during surgery. Experienced observers interpreted the TMJ MRI images to extract features. These features were used to build and validate prediction models with random forest and multilayer perceptron (MLP) techniques, the latter implemented with Keras, a recent deep learning framework. The area under the receiver operating characteristic (ROC) curve (AUC) was used to compare model performance. MLP produced the best performance (AUC 0.940), followed by random forest (AUC 0.918) and disc shape alone (AUC 0.791). The MLP and random forest models were also superior to previously reported results using MRI (AUC 0.808) and an MRI-based nomogram (AUC 0.889). Deep learning showed superior performance in predicting TMJ disc perforation compared with conventional methods and previous reports.
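As a rough sketch of the modelling pipeline described above, the example below trains a random forest and a small Keras MLP on a tabular feature matrix and compares them by ROC AUC. It is not the authors' code: the feature matrix is a random placeholder, and the layer sizes, train/test split, and training settings are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from tensorflow import keras

# X: one row per joint, columns = MRI findings coded numerically (placeholder);
# y: 1 = disc perforation confirmed at surgery, 0 = no perforation.
rng = np.random.default_rng(0)
X = rng.normal(size=(299, 8))
y = rng.integers(0, 2, size=299)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Random forest baseline.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rf_auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])

# Small MLP with a sigmoid output for binary prediction.
mlp = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
mlp.compile(optimizer="adam", loss="binary_crossentropy")
mlp.fit(X_tr, y_tr, epochs=50, batch_size=16, verbose=0)
mlp_auc = roc_auc_score(y_te, mlp.predict(X_te, verbose=0).ravel())

print(f"random forest AUC: {rf_auc:.3f}   MLP AUC: {mlp_auc:.3f}")
```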

