Automated and robust organ segmentation for 3D-based internal dose calculation

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mahmood Nazari ◽  
Luis David Jiménez-Franco ◽  
Michael Schroeder ◽  
Andreas Kluge ◽  
Marcus Bronzel ◽  
...  

Abstract Purpose In this work, we address image segmentation in the scope of dosimetry using deep learning and make three main contributions: (a) to extend and optimize the architecture of an existing convolutional neural network (CNN) in order to obtain a fast, robust and accurate computed tomography (CT)-based organ segmentation method for kidneys and livers; (b) to train the CNN with an inhomogeneous set of CT scans and validate the CNN for daily dosimetry; and (c) to evaluate dosimetry results obtained using automated organ segmentation in comparison with manual segmentation done by two independent experts. Methods We adapted a performant deep learning approach using CT images to delineate organ boundaries with sufficiently high accuracy and adequate processing time. The segmented organs were consequently used as binary masks for further convolution with a point spread function to retrieve the activity values from quantitatively reconstructed SPECT images for "volumetric"/3D dosimetry. The resulting activities were used to perform dosimetry calculations with the kidneys as source organs. Results The computational cost of the algorithm was low enough for daily clinical routine; it required minimal pre-processing and performed with acceptable accuracy, achieving Dice coefficients of 93% for liver segmentation and 94% for kidney segmentation. In addition, kidney self-absorbed doses calculated using automated segmentation differed by 7% from dosimetry performed by two medical physicists in 8 patients. Conclusion The proposed approach may accelerate volumetric dosimetry of kidneys in molecular radiotherapy with 177Lu-labelled radiopharmaceuticals such as 177Lu-DOTATOC.
However, even though a fully automated segmentation methodology based on CT images accelerates organ segmentation and performs with high accuracy, it does not remove the need for supervision and corrections by experts, mostly due to misalignments in the co-registration between SPECT and CT images. Trial registration: EudraCT 2016-001897-13. Registered 26.04.2016, www.clinicaltrialsregister.eu/ctr-search/search?query=2016-001897-13.
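The core dosimetry step described above, convolving the binary organ mask with a point spread function (PSF) and using the result to weight the quantitatively reconstructed SPECT counts, can be sketched in NumPy. The Gaussian PSF, array shapes and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.5):
    """Isotropic 3D Gaussian kernel normalised to unit sum (stand-in PSF)."""
    ax = np.arange(size) - size // 2
    xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
    k = np.exp(-(xx**2 + yy**2 + zz**2) / (2.0 * sigma**2))
    return k / k.sum()

def fft_convolve_same(vol, kernel):
    """'Same'-size FFT convolution of a volume with a small kernel."""
    shape = [a + b - 1 for a, b in zip(vol.shape, kernel.shape)]
    out = np.fft.irfftn(np.fft.rfftn(vol, shape) * np.fft.rfftn(kernel, shape), shape)
    start = [(b - 1) // 2 for b in kernel.shape]
    crop = tuple(slice(s, s + n) for s, n in zip(start, vol.shape))
    return out[crop]

def organ_activity(spect, mask, psf):
    """Weight the SPECT counts by the PSF-blurred binary organ mask and sum."""
    weights = fft_convolve_same(mask.astype(float), psf)
    return float((spect * weights).sum())
```

With a delta-function kernel the blurred mask equals the mask itself, so the result reduces to the plain sum of SPECT counts inside the organ.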



2020 ◽  
Vol 41 (6) ◽  
pp. 1061-1069
Author(s):  
L. Umapathy ◽  
B. Winegar ◽  
L. MacKinnon ◽  
M. Hill ◽  
M.I. Altbach ◽  
...  

2021 ◽  
Author(s):  
Evropi Toulkeridou ◽  
Carlos Enrique Gutierrez ◽  
Daniel Baum ◽  
Kenji Doya ◽  
Evan P Economo

Three-dimensional (3D) imaging, such as micro-computed tomography (micro-CT), is increasingly being used by organismal biologists for precise and comprehensive anatomical characterization. However, the segmentation of anatomical structures remains a bottleneck in research, often requiring tedious manual work. Here, we propose a pipeline for the fully automated segmentation of anatomical structures in micro-CT images utilizing state-of-the-art deep learning methods, selecting the ant brain as a test case. We implemented the U-Net architecture for 2D image segmentation for our convolutional neural network (CNN), combined with pixel-island detection. For training and validation of the network, we assembled a dataset of semi-manually segmented brain images of 94 ant species. The trained network predicted the brain area in ant images quickly and accurately; its performance tested on validation sets showed good agreement between the prediction and the target, scoring 80% Intersection over Union (IoU) and 90% Dice coefficient (F1) accuracy. While manual segmentation usually takes many hours for each brain, the trained network takes only a few minutes. Furthermore, our network is generalizable for segmenting the whole neural system in full-body scans, and works in tests on distantly related and morphologically divergent insects (e.g., fruit flies). The latter suggests that methods like the one presented here generally apply across diverse taxa. Our method makes the construction of segmented maps and the morphological quantification of different species more efficient and scalable to large datasets, a step toward a big data approach to organismal anatomy.
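The two accuracy metrics quoted above, Intersection over Union and the Dice coefficient, have standard definitions on binary masks; a minimal NumPy sketch (function names are ours, not from the paper):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union (Jaccard index) of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def dice(pred, target):
    """Dice coefficient (F1 score) of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())
```

The two scores are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why an 80% IoU and a 90% Dice describe the same level of agreement.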


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
A.C Chandrashekar ◽  
A.H Handa ◽  
N.S Shivakumar ◽  
P.L Lapolla ◽  
V.G Grau ◽  
...  

Abstract Background Existing methods to reconstruct vascular structures from a computed tomography (CT) angiogram rely on injection of intravenous contrast to enhance the radio-density within the vessel lumen. Pathological changes present within the blood lumen, the vessel wall, or a combination of both prevent accurate 3D reconstruction. In the example of aortic aneurysmal (AAA) disease, a blood clot or thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in 95% of cases. These deformations prevent the automatic extraction of vital clinically relevant information by current methods. Objectives In this study, we utilised deep learning segmentation methods to establish a high-throughput and automated segmentation pipeline for pathological blood vessels (e.g., aortic aneurysm) in CT images acquired with or without the use of a contrast agent. Methods Twenty-six patients with paired non-contrast and contrast-enhanced CT images were randomly selected from an ethically-approved ongoing study (Ethics Ref 13/SC/0250), manually annotated and used for model training and evaluation (13/13). Data augmentation methods were implemented to diversify the training data set in a ratio of 10:1. We utilised a 3D U-Net with attention gating for both the aortic region-of-interest (ROI) detection and segmentation tasks. Trained architectures were evaluated using the DICE similarity score. Results Inter- and intra-observer analysis supports the accuracy of the manual segmentations used for model training (intra-class correlation coefficient, "ICC" = 0.995 and 1.00, respectively; P<0.001 for both). The performance of our attention-based U-Net (DICE score: 94.8±0.5%) in extracting both the inner lumen and the outer wall of the aortic aneurysm from CT angiograms (CTA) was compared against a generic 3D U-Net (DICE score: 89.5±0.6%) and displayed superior results (p<0.01).
Fig 1A depicts the implementation of this network architecture within the aortic segmentation pipeline (automated ROI detection and aortic segmentation). This pipeline has allowed accurate and efficient extraction of the entire aortic volume from both contrast-enhanced CTA (DICE score: 95.3±0.6%) and non-contrast CT (DICE score: 93.2±0.7%) images. Fig 1B illustrates the model output alongside the labelled ground truth segmentation for the pathological aneurysmal region; only minor differences are visually discernible (coloured boxes). Conclusion We developed a novel automated pipeline for high resolution reconstruction of blood vessels using deep learning approaches. This pipeline enables automatic extraction of morphologic features of blood vessels and can be applied for research and potentially for clinical use. Automated Segmentation of Blood Vessels Funding Acknowledgement Type of funding source: Foundation. Main funding source(s): University of Oxford Medical Research Fund, John Fell Fund
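The attention gating mentioned in the pipeline above follows the general additive attention-gate pattern used in U-Nets: skip-connection features are rescaled by coefficients in (0, 1) computed from a coarser gating signal. The sketch below is a generic illustration with arbitrary shapes and random weights, not the authors' network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi, bias=0.0):
    """Additive attention gate: per-position coefficients alpha in (0, 1)
    rescale the skip features x using the gating signal g.
    x, g: (channels, positions); Wx, Wg: (features, channels); psi: (features,)."""
    q = np.maximum(Wx @ x + Wg @ g, 0.0)   # ReLU of the combined projections
    alpha = sigmoid(psi @ q + bias)        # one attention coefficient per position
    return x * alpha, alpha
```

In a full network the weights are learned end to end, so the gate suppresses skip features outside the region of interest.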


2017 ◽  
Vol 1 (3) ◽  
pp. 54
Author(s):  
BOUKELLOUZ Wafa ◽  
MOUSSAOUI Abdelouahab

Background: Over the last decades, research has been oriented towards MRI-alone radiation treatment planning (RTP), where MRI is used as the primary modality for imaging, delineation and dose calculation by assigning to it the needed electron density (ED) information. The idea is to create a computed tomography (CT) image, or so-called pseudo-CT, from MRI data. In this paper, we review and classify methods for creating pseudo-CT images from MRI data. Each class of methods is explained, and a group of works from the literature is presented in detail with statistical performance. We discuss the advantages, drawbacks and limitations of each class of methods. Methods: We classified the most recent works on deriving a pseudo-CT from MR images into four classes: segmentation-based, intensity-based, atlas-based and hybrid methods. We based the classification on the general technique applied in the approach. Results: Most research focused on the brain and pelvis regions. The mean absolute error (MAE) ranged from 80 HU to 137 HU for the brain and from 36.4 HU to 74 HU for the pelvis. In addition, interest in the Dixon MR sequence is increasing, since it has the advantage of producing multiple contrast images with a single acquisition. Conclusion: The radiation therapy field is moving towards the generalization of MRI-only RT thanks to advances in techniques for the generation of pseudo-CT images. However, a benchmark is needed to establish common performance metrics to assess the quality of the generated pseudo-CT and judge the efficiency of a given method.
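The MAE figures quoted above are computed in Hounsfield units (HU) between the pseudo-CT and the reference CT. A minimal sketch of the metric; restricting it to a body (or region) mask is a common convention assumed here, not something the review prescribes:

```python
import numpy as np

def mae_hu(pseudo_ct, real_ct, region_mask):
    """Mean absolute error in Hounsfield units inside a region of interest."""
    diff = np.abs(pseudo_ct.astype(float) - real_ct.astype(float))
    return float(diff[region_mask].mean())
```

Because HU values span roughly -1000 (air) to over +1000 (bone), masking out the air surrounding the patient keeps the metric from being dominated by trivially easy background voxels.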


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has explosively spread worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to quickly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. For the test dataset, the diagnostic performance to diagnose COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers. 
RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
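The 8:2 training/testing split described above can be sketched with a seeded shuffle over the sample indices; the seed and function name are illustrative, and the paper does not state how its split was randomised:

```python
import random

def split_8_2(items, seed=42):
    """Shuffle a dataset reproducibly and split it 80/20 into train and test."""
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)           # seeded, so the split is repeatable
    cut = int(round(0.8 * len(items)))
    train = [items[i] for i in idx[:cut]]
    test = [items[i] for i in idx[cut:]]
    return train, test
```

For a multi-class dataset like this one (COVID-19, other pneumonia, non-pneumonia), the same split is often applied per class so that the 8:2 ratio holds within each diagnosis.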


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
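A graph-search step like the one following the CNNs above typically resolves a boundary as a minimum-cost path through an image. A common formulation is a column-wise dynamic program over a 2D cost map; the sketch below assumes a one-row step constraint per column, details the paper does not specify:

```python
import numpy as np

def trace_boundary(cost):
    """Minimum-cost left-to-right path through a 2D cost image,
    moving at most one row per column (dynamic programming)."""
    H, W = cost.shape
    acc = cost.astype(float).copy()            # accumulated cost table
    back = np.zeros((H, W), dtype=int)         # backpointers to previous column
    for j in range(1, W):
        for i in range(H):
            lo, hi = max(0, i - 1), min(H, i + 2)
            k = lo + int(np.argmin(acc[lo:hi, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = [int(np.argmin(acc[:, -1]))]        # cheapest endpoint, then backtrack
    for j in range(W - 1, 0, -1):
        path.append(int(back[path[-1], j]))
    return path[::-1]                          # boundary row index per column
```

In practice the cost map would come from the CNN's probability output (e.g., low cost where the boundary is likely), so the path snaps to the network's predicted edge.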


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created and eight car parts were marked in the images. Then, the neural network was trained to detect each part. The results show that YOLOv5s is able to successfully detect the parts in real time video streams, with high accuracy, thus being useful as an aid to train professionals learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
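A YOLOv5-style detector's raw outputs are usually filtered by non-maximum suppression (NMS) so that each car part is reported once. The generic greedy sketch below illustrates the idea; it is not the YOLOv5 code itself, and box/score values are hypothetical:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:                            # best-scoring boxes first
        if all(box_iou(boxes[i], boxes[j]) <= iou_thr for j in kept):
            kept.append(i)
    return kept
```

Keeping only one box per detected part is what makes the real-time overlay usable on augmented reality glasses: each engine component gets a single stable label instead of a cluster of overlapping ones.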

