A deep learning toolbox for automatic segmentation of subcortical limbic structures from MRI images

NeuroImage ◽  
2021 ◽  
Vol 244 ◽  
pp. 118610
Author(s):  
Douglas N. Greve ◽  
Benjamin Billot ◽  
Devani Cordero ◽  
Andrew Hoopes ◽  
Malte Hoffmann ◽  
...  


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series, followed by a graph-search method, to generate a boundary for the orbit. When compared to human performance on both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with manual segmentation by a human expert. Given the volumetric nature of these imaging modalities and the complexity and time-consuming nature of segmenting the orbital region of the human skull, manual segmentation of these images is often impractical. The proposed method thus provides a valid clinical and research tool that performs comparably to a human observer.
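The Dice coefficients reported above measure overlap between the automatic and expert segmentations. A minimal sketch of the metric, assuming binary masks represented as sets of pixel coordinates (the representation and function name are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks.

    Masks are sets of (row, col) pixel coordinates;
    1.0 means perfect overlap, 0.0 means no overlap.
    """
    if not pred and not truth:
        return 1.0  # two empty masks agree trivially
    return 2 * len(pred & truth) / (len(pred) + len(truth))


# Toy example: two 2-pixel masks sharing one pixel -> Dice = 0.5
score = dice_coefficient({(0, 0), (0, 1)}, {(0, 1), (1, 1)})
```

Reporting Dice separately for orbit and background, as the abstract does, guards against a trivially high score when one class dominates the image.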


2021 ◽  
Vol 11 (12) ◽  
pp. 5488
Author(s):  
Wei Ping Hsia ◽  
Siu Lun Tse ◽  
Chia Jen Chang ◽  
Yu Len Huang

The purpose of this article is to evaluate the accuracy of optical coherence tomography (OCT) measurement of choroidal thickness in healthy eyes using a deep-learning method with the Mask R-CNN model. Thirty EDI-OCT scans of thirty patients were enrolled. A mask region-based convolutional neural network (Mask R-CNN), composed of a deep residual network (ResNet) and feature pyramid networks (FPNs) with standard convolutional and fully connected heads for mask and box prediction, respectively, was used to automatically delineate the choroid layer. The average choroidal thickness and subfoveal choroidal thickness were measured. Models using ResNet 50 layers deep (R50) and ResNet 101 layers deep (R101) were compared. The R101 ∪ R50 (OR model) demonstrated the best accuracy, with average errors of 4.85 pixels and 4.86 pixels, respectively. The R101 ∩ R50 (AND model) took the least time, with an average execution time of 4.6 s. The Mask R-CNN models predicted the choroidal layer well, with accuracy rates of 90% and 89.9% for average choroidal thickness and average subfoveal choroidal thickness, respectively. In conclusion, the deep-learning method using the Mask R-CNN model provides a fast and accurate measurement of choroidal thickness. Compared with manual delineation, it is more effective and is feasible for clinical application and larger-scale research on the choroid.
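The OR and AND models above combine the R50 and R101 predictions by union and intersection of their output masks. A minimal sketch, assuming masks are sets of pixel coordinates (an illustrative representation, not the authors' implementation):

```python
def combine_masks(mask_r50, mask_r101, mode):
    """Combine two predicted binary masks, given as sets of pixel coordinates.

    mode="or"  -> union: a pixel is choroid if either model says so
    mode="and" -> intersection: both models must agree
    """
    if mode == "or":
        return mask_r50 | mask_r101
    if mode == "and":
        return mask_r50 & mask_r101
    raise ValueError(f"unknown mode: {mode!r}")
```

The OR model trades a larger (more inclusive) mask for accuracy, while the AND model's smaller consensus mask is what makes it the fastest of the variants compared.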


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi202-vi203
Author(s):  
Alvaro Sandino ◽  
Ruchika Verma ◽  
Yijiang Chen ◽  
David Becerra ◽  
Eduardo Romero ◽  
...  

Abstract PURPOSE Glioblastoma is a highly heterogeneous brain tumor. Primary treatment for glioblastoma involves maximally safe surgical resection. After surgery, resected tissue slides are visually analyzed by neuropathologists to identify the distinct histological hallmarks characterizing glioblastoma, including high cellularity, necrosis, and vascular proliferation. In this work, we present a hierarchical deep learning-based strategy to automatically segment distinct glioblastoma niches, including necrosis, cellular tumor, and hyperplastic blood vessels, on digitized histopathology slides. METHODS We employed the IvyGap cohort, for which hematoxylin and eosin (H&E) slides (digitized at 20X magnification) from n=41 glioblastoma patients were available, along with expert-driven segmentations of cellular tumor, necrosis, and hyperplastic blood vessels (among other histological attributes). We randomly assigned n=120 slides from 29 patients for training, n=38 slides from 6 cases for validation, and n=30 slides from 6 patients for testing of our deep learning model, which is based on the Residual Network architecture (ResNet-50). ~2,000 patches of 224x224 pixels were sampled from every slide. Our hierarchical model first segments necrosis from non-necrotic (i.e. cellular tumor) regions, and then, within the regions segmented as non-necrotic, identifies hyperplastic blood vessels from the rest of the cellular tumor. RESULTS Our model achieved a training accuracy of 94% and a testing accuracy of 88%, with an area under the curve (AUC) of 92%, in distinguishing necrosis from non-necrotic (i.e. cellular tumor) regions. Similarly, we obtained a training accuracy of 78% and a testing accuracy of 87% (with an AUC of 94%) in identifying hyperplastic blood vessels from the rest of the cellular tumor.
CONCLUSION We developed a reliable hierarchical model for automatic segmentation of necrotic, cellular tumor, and hyperplastic blood vessel regions on digitized H&E-stained glioblastoma tissue images. Future work will extend the model to segmentation of pseudopalisading patterns and microvascular proliferation.
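The two-stage hierarchy described above can be sketched as a cascade of per-patch classifiers. This is a schematic illustration only: the function names and the callable-classifier interface are assumptions, standing in for the trained ResNet-50 stages:

```python
def classify_patch(patch, is_necrosis, is_vessel):
    """Hierarchical patch labelling.

    Stage 1 separates necrosis from non-necrotic tissue; stage 2 runs
    only on non-necrotic patches and separates hyperplastic blood
    vessels from the rest of the cellular tumor. `is_necrosis` and
    `is_vessel` stand in for the trained ResNet-50 classifiers.
    """
    if is_necrosis(patch):   # stage 1: necrosis vs non-necrotic
        return "necrosis"
    if is_vessel(patch):     # stage 2: only reached by non-necrotic patches
        return "hyperplastic_blood_vessel"
    return "cellular_tumor"
```

Cascading the two binary decisions means the vessel classifier never sees necrotic tissue, which keeps each stage's decision boundary simpler than a single three-way classifier would face.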


2021 ◽  
Author(s):  
Sang-Heon Lim ◽  
Young Jae Kim ◽  
Yeon-Ho Park ◽  
Doojin Kim ◽  
Kwang Gi Kim ◽  
...  

Abstract Pancreas segmentation is necessary for observing lesions, analyzing anatomical structures, and predicting patient prognosis. Various studies have therefore designed segmentation models based on convolutional neural networks for pancreas segmentation. However, the deep learning approach is limited by a lack of data, and studies conducted on large computed tomography datasets are scarce. This study therefore aims to perform deep-learning-based semantic segmentation on 1,006 participants and evaluate the automatic segmentation performance of the pancreas via four individual three-dimensional segmentation networks. We performed internal validation with the 1,006 patients and external validation using The Cancer Imaging Archive (TCIA) pancreas dataset. The best-performing of the four deep learning networks obtained mean precision, recall, and Dice similarity coefficients of 0.869, 0.842, and 0.842, respectively, for internal validation. On the external dataset, the deep learning network achieved mean precision, recall, and Dice similarity coefficients of 0.779, 0.749, and 0.735, respectively. We expect that generalized deep-learning-based systems can assist clinical decisions by providing accurate pancreatic segmentation and quantitative information about the pancreas from abdominal computed tomography.
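Precision and recall, reported above alongside the Dice coefficient, can be computed directly from a predicted and a reference mask. A minimal sketch, again assuming masks as sets of voxel coordinates (an illustrative representation, not the study's code):

```python
def precision_recall(pred, truth):
    """Voxel-wise precision and recall for binary segmentation masks
    given as sets of voxel coordinates (both assumed non-empty)."""
    true_positives = len(pred & truth)
    precision = true_positives / len(pred)   # predicted voxels that are correct
    recall = true_positives / len(truth)     # reference voxels recovered
    return precision, recall


# Toy example: 3 predicted voxels, 3 reference voxels, 2 overlap
p, r = precision_recall({1, 2, 3}, {2, 3, 4})
```

The gap between the internal (0.869/0.842) and external (0.779/0.749) scores is the usual signature of domain shift, which is why the external TCIA validation matters for the study's generalization claim.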


2021 ◽  
Author(s):  
Wing Keung Cheung ◽  
Robert Bell ◽  
Arjun Nair ◽  
Leon Menezies ◽  
Riyaz Patel ◽  
...  

Abstract A fully automatic two-dimensional U-Net model is proposed to segment the aorta and coronary arteries in computed tomography images. Two models are trained to segment two regions of interest: (1) the aorta and the coronary arteries, or (2) the coronary arteries alone. Our method achieves Dice similarity coefficients of 91.20% and 88.80% on regions of interest 1 and 2, respectively. Compared with a semi-automatic segmentation method, our model performs better when segmenting the coronary arteries alone. The performance of the proposed method is comparable to that of existing published two-dimensional and three-dimensional deep learning models. Furthermore, the algorithm and graphics processing unit memory are efficient enough that the model can be deployed within hospital computer networks, where graphics processing units are typically not available.

