Automated segmentation framework of lung gross tumor volumes on 3D planning CT images using dense V-Net deep learning

Author(s):  
Risa Nakano ◽  
Hidetaka Arimura ◽  
Mohammad Haekal ◽  
Saiji Ohga
2020 ◽  
Vol 41 (6) ◽  
pp. 1061-1069
Author(s):  
L. Umapathy ◽  
B. Winegar ◽  
L. MacKinnon ◽  
M. Hill ◽  
M.I. Altbach ◽  
...  

2021 ◽  
Author(s):  
Evropi Toulkeridou ◽  
Carlos Enrique Gutierrez ◽  
Daniel Baum ◽  
Kenji Doya ◽  
Evan P Economo

Three-dimensional (3D) imaging, such as micro-computed tomography (micro-CT), is increasingly being used by organismal biologists for precise and comprehensive anatomical characterization. However, the segmentation of anatomical structures remains a bottleneck in research, often requiring tedious manual work. Here, we propose a pipeline for the fully automated segmentation of anatomical structures in micro-CT images utilizing state-of-the-art deep learning methods, selecting the ant brain as a test case. We implemented the U-Net architecture for 2D image segmentation for our convolutional neural network (CNN), combined with pixel-island detection. For training and validation of the network, we assembled a dataset of semi-manually segmented brain images of 94 ant species. The trained network predicted the brain area in ant images quickly and accurately; its performance tested on validation sets showed good agreement between the prediction and the target, scoring 80% Intersection over Union (IoU) and 90% Dice coefficient (F1) accuracy. While manual segmentation usually takes many hours for each brain, the trained network takes only a few minutes. Furthermore, our network is generalizable for segmenting the whole neural system in full-body scans, and works in tests on distantly related and morphologically divergent insects (e.g., fruit flies). The latter suggests that methods like the one presented here generally apply across diverse taxa. Our method makes the construction of segmented maps and the morphological quantification of different species more efficient and scalable to large datasets, a step toward a big-data approach to organismal anatomy.
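
The reported overlap metrics and the pixel-island post-processing step lend themselves to a short illustration. The following is a minimal NumPy/SciPy sketch, assuming a simple 0.5 threshold and largest-connected-component filtering; the function names, threshold, and toy arrays are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: post-process a 2D U-Net probability map with
# "pixel-island" (connected-component) filtering, then score it with IoU
# and Dice (F1). Thresholds and array shapes are assumptions.
import numpy as np
from scipy import ndimage


def largest_island(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component ("pixel island") of a binary mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)


def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Intersection over Union and Dice coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice


# Toy usage: binarize a stand-in CNN output slice, keep the largest island, score it.
prob_map = np.random.rand(256, 256)          # stand-in for a U-Net output slice
pred_mask = largest_island(prob_map > 0.5)   # threshold + keep largest island
target_mask = np.zeros((256, 256), bool)     # stand-in for the manual label
print(iou_and_dice(pred_mask, target_mask))
```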


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mahmood Nazari ◽  
Luis David Jiménez-Franco ◽  
Michael Schroeder ◽  
Andreas Kluge ◽  
Marcus Bronzel ◽  
...  

Abstract
Purpose: In this work, we address image segmentation in the scope of dosimetry using deep learning and make three main contributions: (a) to extend and optimize the architecture of an existing convolutional neural network (CNN) in order to obtain a fast, robust and accurate computed tomography (CT)-based organ segmentation method for kidneys and livers; (b) to train the CNN with an inhomogeneous set of CT scans and validate the CNN for daily dosimetry; and (c) to evaluate dosimetry results obtained using automated organ segmentation in comparison with manual segmentation done by two independent experts.
Methods: We adapted a performant deep learning approach that uses CT images to delineate organ boundaries with sufficiently high accuracy and adequate processing time. The segmented organs were subsequently used as binary masks that were convolved with a point spread function to retrieve the activity values from quantitatively reconstructed SPECT images for "volumetric"/3D dosimetry. The resulting activities were used to perform dosimetry calculations with the kidneys as source organs.
Results: The computational expense of the algorithm was sufficient for daily clinical routine, required minimal pre-processing and achieved acceptable accuracy, with Dice coefficients of 93% for liver segmentation and 94% for kidney segmentation. In addition, kidney self-absorbed doses calculated using automated segmentation differed by 7% from dosimetry performed by two medical physicists in 8 patients.
Conclusion: The proposed approach may accelerate volumetric dosimetry of kidneys in molecular radiotherapy with 177Lu-labelled radiopharmaceuticals such as 177Lu-DOTATOC. However, even though a fully automated segmentation methodology based on CT images accelerates organ segmentation and performs with high accuracy, it does not remove the need for supervision and corrections by experts, mostly due to misalignments in the co-registration between SPECT and CT images.
Trial registration: EudraCT, 2016-001897-13. Registered 26.04.2016, www.clinicaltrialsregister.eu/ctr-search/search?query=2016-001897-13.
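
As a rough illustration of the mask-plus-PSF activity-recovery step described in the Methods, the sketch below blurs a binary organ mask with a Gaussian approximation of the SPECT point spread function and uses it to weight a quantitative SPECT volume. The kernel width, calibration, function names, and toy data are assumptions, not the study's implementation.

```python
# Illustrative sketch: convolve a CT-derived organ mask with a Gaussian PSF
# approximation and use it to weight a quantitative SPECT volume so that
# counts spilling over the sharp organ boundary are still attributed to it.
# All parameters here are assumptions, not the published pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter


def organ_activity(spect_bq_ml: np.ndarray,
                   organ_mask: np.ndarray,
                   voxel_volume_ml: float,
                   psf_sigma_vox: float = 2.0) -> float:
    """Estimate total organ activity (Bq) from a quantitative SPECT volume."""
    # Blur the binary mask to approximate the system point spread function.
    weights = gaussian_filter(organ_mask.astype(float), sigma=psf_sigma_vox)
    # Weighted sum of activity concentration times voxel volume.
    return float((spect_bq_ml * weights).sum() * voxel_volume_ml)


# Toy usage: a 64^3 SPECT volume (Bq/ml) and a cubic "kidney" mask.
spect = np.random.poisson(5.0, size=(64, 64, 64)).astype(float)
kidney = np.zeros_like(spect, dtype=bool)
kidney[20:40, 20:40, 20:40] = True
print(f"kidney activity ~ {organ_activity(spect, kidney, voxel_volume_ml=0.064):.1f} Bq")
```

The blurred mask acts as a soft weighting rather than a hard cut, which is one simple way to account for partial-volume spill-out at organ boundaries.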


2020 ◽  
Vol 197 ◽  
pp. 105685
Author(s):  
João Otávio Bandeira Diniz ◽  
Jonnison Lima Ferreira ◽  
Pedro Henrique Bandeira Diniz ◽  
Aristófanes Corrêa Silva ◽  
Anselmo Cardoso de Paiva

2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A.C. Chandrashekar ◽  
A.H. Handa ◽  
N.S. Shivakumar ◽  
P.L. Lapolla ◽  
V.G. Grau ◽  
...  

Abstract
Background: Existing methods to reconstruct vascular structures from a computed tomography (CT) angiogram rely on injection of intravenous contrast to enhance the radio-density within the vessel lumen. Pathological changes present within the blood lumen, the vessel wall, or a combination of both prevent accurate 3D reconstruction. In the example of aortic aneurysmal (AAA) disease, a blood clot or thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in 95% of cases. These deformations prevent the automatic extraction of vital, clinically relevant information by current methods.
Objectives: In this study, we utilised deep learning segmentation methods to establish a high-throughput and automated segmentation pipeline for pathological blood vessels (e.g., aortic aneurysm) in CT images acquired with or without the use of a contrast agent.
Methods: Twenty-six patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ethically approved ongoing study (Ethics Ref 13/SC/0250), manually annotated and used for model training and evaluation (13/13). Data augmentation methods were implemented to diversify the training data set in a ratio of 10:1. We utilised a 3D U-Net with attention gating for both the aortic region-of-interest (ROI) detection and segmentation tasks. Trained architectures were evaluated using the Dice similarity score.
Results: Inter- and intra-observer analysis supports the accuracy of the manual segmentations used for model training (intra-class correlation coefficient, "ICC" = 0.995 and 1.00, respectively; p<0.001 for both). The performance of our attention-based U-Net (Dice score: 94.8±0.5%) in extracting both the inner lumen and the outer wall of the aortic aneurysm from CT angiograms (CTA) was compared against a generic 3D U-Net (Dice score: 89.5±0.6%) and displayed superior results (p<0.01). Fig 1A depicts the implementation of this network architecture within the aortic segmentation pipeline (automated ROI detection and aortic segmentation). This pipeline has allowed accurate and efficient extraction of the entire aortic volume from both contrast-enhanced CTA (Dice score: 95.3±0.6%) and non-contrast CT (Dice score: 93.2±0.7%) images. Fig 1B illustrates the model output alongside the labelled ground-truth segmentation for the pathological aneurysmal region; only minor differences are visually discernible (coloured boxes).
Conclusion: We developed a novel automated pipeline for high-resolution reconstruction of blood vessels using deep learning approaches. This pipeline enables automatic extraction of morphological features of blood vessels and can be applied for research and potentially for clinical use.
Figure 1: Automated Segmentation of Blood Vessels.
Funding Acknowledgement: Type of funding source: Foundation. Main funding source(s): University of Oxford Medical Research Fund, John Fell Fund.
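
For readers unfamiliar with attention gating, the sketch below shows a minimal additive attention gate of the kind used in attention-gated 3D U-Nets, written in PyTorch. The abstract does not disclose the authors' exact architecture, so the class name, channel sizes, and wiring are illustrative assumptions.

```python
# Illustrative sketch of an additive attention gate for a 3D U-Net skip
# connection. This is a generic construction, not the authors' model.
import torch
import torch.nn as nn


class AttentionGate3D(nn.Module):
    """Gate skip-connection features x with a coarser decoder gating signal g."""

    def __init__(self, x_channels: int, g_channels: int, inter_channels: int):
        super().__init__()
        self.theta_x = nn.Conv3d(x_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv3d(g_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv3d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # Upsample the gating signal to the spatial size of the skip features.
        g_up = nn.functional.interpolate(g, size=x.shape[2:], mode="trilinear",
                                         align_corners=False)
        # Additive attention: a per-voxel weight alpha in [0, 1].
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta_x(x) + self.phi_g(g_up))))
        return x * alpha  # suppress background, keep the region of interest


# Toy usage: gate 32-channel skip features with a 64-channel decoder signal.
gate = AttentionGate3D(x_channels=32, g_channels=64, inter_channels=16)
x = torch.randn(1, 32, 64, 64, 64)   # skip-connection features
g = torch.randn(1, 64, 32, 32, 32)   # coarser gating signal from the decoder
print(gate(x, g).shape)              # torch.Size([1, 32, 64, 64, 64])
```

Weighting the skip connection per voxel lets the decoder suppress surrounding anatomy before feature concatenation, which is broadly consistent with the ROI-focused behaviour the pipeline describes.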

