Automated and Robust Organ Segmentation for 3D-based Internal Dose Calculation

Author(s):  
Mahmood Nazari ◽  
Luis David Jimenez-Franco ◽  
Michael Schroeder ◽  
Andreas Kluge ◽  
Marcus Bronzel ◽  
...  

Abstract Purpose: In this work we address image segmentation within dosimetry using deep learning and make three main contributions: (a) to extend and optimize the architecture of an existing convolutional neural network (CNN) in order to obtain a fast, robust and accurate computed tomography (CT)-based organ segmentation method for kidneys and livers; (b) to train the CNN with an inhomogeneous set of CT scans and validate the CNN for daily dosimetry; and (c) to evaluate dosimetry results obtained using automated organ segmentation in comparison with manual segmentation done by two independent experts. Methods: We adapted a performant deep learning approach using CT images to delineate organ boundaries with sufficiently high accuracy and adequate processing time. The segmented organs were subsequently used as binary masks for convolution with a point spread function to retrieve the activity values from quantitatively reconstructed SPECT images for "volumetric"/3D dosimetry. The retrieved activities were used to perform dosimetry calculations with the kidneys as source organs. Results: The computational expense of the algorithm was low enough for use in the daily clinical routine, required minimal pre-processing and performed with an acceptable accuracy of 93.4% for liver segmentation and 94.1% for kidney segmentation. Additionally, kidney self-absorbed doses calculated using automated segmentation differed by 6.3% from dosimetry performed by two medical physicists in 8 patients. Conclusion: The proposed approach may accelerate volumetric dosimetry of kidneys in molecular radiotherapy with 177Lu-labelled radiopharmaceuticals such as 177Lu-DOTATOC. However, even though a fully automated segmentation methodology based on CT images accelerates organ segmentation and performs with high accuracy, it does not remove the need for supervision and corrections by experts, mostly due to misalignments in the co-registration between SPECT and CT images. Trial registration: EudraCT, 2016-001897-13. Registered 26.04.2016, www.clinicaltrialsregister.eu/ctr-search/search?query=2016-001897-13
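The mask-and-PSF step described above can be sketched in a few lines (a simplified illustration, not the authors' code; the Gaussian kernel, array sizes, and function names are our own assumptions standing in for the measured system PSF):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def organ_activity(spect, organ_mask, psf_sigma=2.0):
    """Retrieve organ activity from a quantitative SPECT volume.

    The binary CT-derived organ mask is convolved with a Gaussian
    point spread function (a stand-in for the measured system PSF),
    so counts blurred across the organ boundary are weighted in.
    """
    smoothed = gaussian_filter(organ_mask.astype(float), sigma=psf_sigma)
    return float(np.sum(spect * smoothed))

# toy example: uniform activity inside a cubic "kidney"
spect = np.zeros((32, 32, 32))
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:20, 10:20, 10:20] = True
spect[mask] = 1.0  # 1 unit of activity per voxel inside the organ

print(organ_activity(spect, mask))
```

With `psf_sigma=0` the call reduces to a plain masked sum; a nonzero sigma down-weights boundary voxels, which is the effect the convolution is meant to model.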

2021 ◽  
Vol 11 (1)


Author(s):  
P. Jagadeesh, et al.

The detection of tumor pixels in lung images is a complex task due to their low contrast. Hence, this paper uses deep learning architectures for both the detection and diagnosis of lung tumors in Computed Tomography (CT) images. Tumors are detected in lung CT images using a Convolutional Neural Network (CNN) architecture with the help of data augmentation methods. The proposed CNN architecture classifies lung images into two categories: tumor images and normal images. A segmentation method is then used to segment the tumor pixels in the lung CT images, and the segmented tumor regions are classified as either mild or severe using the proposed CNN architecture.
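Data augmentation of the kind mentioned can be sketched with simple array operations (a minimal illustration; the paper does not state its exact augmentation pipeline, so the transforms below are assumptions):

```python
import numpy as np

def augment_slice(ct_slice):
    """Generate simple augmented variants of a 2D CT slice:
    horizontal/vertical flips and 90-degree rotations."""
    variants = [ct_slice]
    variants.append(np.fliplr(ct_slice))   # horizontal flip
    variants.append(np.flipud(ct_slice))   # vertical flip
    for k in (1, 2, 3):                    # 90/180/270-degree rotations
        variants.append(np.rot90(ct_slice, k))
    return variants

slice_ = np.arange(16.0).reshape(4, 4)
augmented = augment_slice(slice_)
print(len(augmented))  # 6 variants including the original
```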


2017 ◽  
Vol 1 (3) ◽  
pp. 54
Author(s):  
BOUKELLOUZ Wafa ◽  
MOUSSAOUI Abdelouahab

Background: Over the last decades, research has been oriented towards MRI-alone radiation treatment planning (RTP), where MRI is used as the primary modality for imaging, delineation and dose calculation by assigning to it the needed electron density (ED) information. The idea is to create a computed tomography (CT) image, or so-called pseudo-CT, from MRI data. In this paper, we review and classify methods for creating pseudo-CT images from MRI data. Each class of methods is explained, and a group of works from the literature is presented in detail with statistical performance. We discuss the advantages, drawbacks and limitations of each class of methods. Methods: We classified the most recent works on deriving a pseudo-CT from MR images into four classes: segmentation-based, intensity-based, atlas-based and hybrid methods. We based the classification on the general technique applied in each approach. Results: Most research focused on the brain and pelvis regions. The mean absolute error (MAE) ranged from 80 HU to 137 HU for the brain and from 36.4 HU to 74 HU for the pelvis. In addition, interest in the Dixon MR sequence is increasing, since it has the advantage of producing multiple contrast images with a single acquisition. Conclusion: The radiation therapy field is moving towards the generalization of MRI-only RT thanks to advances in techniques for generating pseudo-CT images. However, a benchmark with common performance metrics is needed to assess the quality of the generated pseudo-CT and judge the efficiency of a given method.
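The MAE figures above are typically computed voxel-wise in Hounsfield units between the pseudo-CT and the reference CT; a minimal sketch (function and variable names are our own):

```python
import numpy as np

def mae_hu(pseudo_ct, real_ct, body_mask=None):
    """Mean absolute error in Hounsfield units between a pseudo-CT
    and the reference CT, optionally restricted to a body mask."""
    diff = np.abs(pseudo_ct.astype(float) - real_ct.astype(float))
    if body_mask is not None:
        diff = diff[body_mask]
    return float(diff.mean())

real = np.array([[0.0, 100.0], [-1000.0, 40.0]])    # reference HU values
pseudo = np.array([[10.0, 80.0], [-1000.0, 100.0]])
print(mae_hu(pseudo, real))  # (10 + 20 + 0 + 60) / 4 = 22.5
```

Restricting the average to a body mask (excluding air, typically below about -500 HU) is a common convention, which is one reason published MAE values are hard to compare without a shared benchmark.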


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of artificial intelligence (AI) diagnostic tools to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID-19 pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance in detecting COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external test dataset extracted from low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers. 
RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
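The sensitivity, specificity, and accuracy values reported above follow from a standard confusion-matrix computation; a minimal sketch (our own helper, not the FCONet code):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred, positive=1):
    """Sensitivity, specificity, and accuracy for a binary label
    (e.g. COVID-19 pneumonia vs. everything else)."""
    y_true = np.asarray(y_true) == positive
    y_pred = np.asarray(y_pred) == positive
    tp = np.sum(y_true & y_pred)    # true positives
    tn = np.sum(~y_true & ~y_pred)  # true negatives
    fp = np.sum(~y_true & y_pred)   # false positives
    fn = np.sum(y_true & ~y_pred)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }

m = diagnostic_metrics([1, 1, 0, 0, 0], [1, 1, 0, 0, 1])
print(m)  # sensitivity 1.0, specificity ~0.667, accuracy 0.8
```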


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series, followed by a graph-search method, to generate a boundary for the orbit. When compared to human performance on segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of segmenting the orbital region in the human skull, it is often impractical to segment these images manually. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
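The Dice coefficients quoted above measure the overlap between automatic and manual masks as 2|A∩B|/(|A|+|B|); a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True  # 16 pixels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True  # 16 pixels, 9 shared
print(dice(a, b))  # 2*9 / 32 = 0.5625
```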


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is essential but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as a modified U-Net model, were applied to the extracted input features. The models’ performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. The feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and Dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively. 
In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples such as shales, which are significant unconventional reservoirs for oil recovery.
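The pixel-level clustering pass described above can be sketched with a tiny K-means on per-pixel features (here intensity plus a 3×3 local mean as a stand-in for the study's filter bank; this is our own minimal implementation, not the study's pipeline):

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Plain k-means on an (n_samples, n_features) array with a
    deterministic init: centers spread along the first feature axis."""
    order = np.argsort(features[:, 0])
    idx = order[np.linspace(0, len(order) - 1, k).astype(int)]
    centers = features[idx].astype(float)
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each sample to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def segment_phases(image, k=2):
    """Cluster pixels on (intensity, 3x3 local mean) features."""
    pad = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    local_mean = sum(
        pad[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    feats = np.stack([image.ravel().astype(float), local_mean.ravel()], axis=1)
    return kmeans(feats, k).reshape(image.shape)

# toy slice: a bright grain phase on a dark pore background
img = np.zeros((16, 16)); img[4:12, 4:12] = 200.0
labels = segment_phases(img, k=2)
```

The local-mean column is what "considering more features" buys: a pixel is labeled by its neighborhood as well as its own attenuation, which suppresses isolated noisy pixels.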


2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key diagnostic technique alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms, owing to their success in chest radiography image classification, their cost efficiency, the lack of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in these studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. First, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Second, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% for CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. 
Finally, we showed that the correct-class probability of a test image, which should ideally be 1, can drop for both models as the perturbation increases: for the VGG16 model, it fell to 0.24 and 0.17 for X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires more robustness.
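The FGSM perturbation itself is a one-line computation: shift the input by ε times the sign of the loss gradient with respect to the input. A minimal sketch on a toy logistic-regression classifier (a stand-in for the CNNs studied; in a deep model the gradient would come from backpropagation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y_true, eps):
    """Fast gradient sign method for a logistic-regression classifier.

    For binary cross-entropy loss L with prediction p = sigmoid(w.x + b),
    dL/dx = (p - y) * w, so the adversarial example is
    x + eps * sign((p - y) * w): each feature moves by exactly eps
    in the direction that increases the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5])          # correctly classified as class 1
p_clean = sigmoid(np.dot(w, x) + b)
x_adv = fgsm(x, w, b, y_true=1.0, eps=0.5)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean, p_adv)  # correct-class confidence drops after the attack
```

The same bounded, sign-only step is what makes the image-domain attack hard to see: every pixel changes by at most ε, yet the loss increases as fast as any such bounded change allows.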

