Clinically applicable deep learning framework for organs at risk delineation in CT images

2019 ◽  
Vol 1 (10) ◽  
pp. 480-491 ◽  
Author(s):  
Hao Tang ◽  
Xuming Chen ◽  
Yang Liu ◽  
Zhipeng Lu ◽  
Junhua You ◽  
...  
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Weijun Chen ◽  
Cheng Wang ◽  
Wenming Zhan ◽  
Yongshi Jia ◽  
Fangfang Ruan ◽  
...  

Abstract Radiotherapy requires the target area and the organs at risk (OARs) to be contoured on the patient's CT images. When delineating OARs of the chest and abdomen, the physician must contour on each CT slice; delineating these large and varied structures is time-consuming and laborious. This study aims to evaluate the performance of two automatic contouring software packages on OAR delineation in CT images of lung cancer and rectal cancer patients. CT images of 15 patients with rectal cancer and 15 patients with lung cancer were selected, and the organs at risk were manually contoured by experienced physicians as reference structures. The same datasets were then automatically contoured with AiContour (version 3.1.8.0, Linking MED, Beijing, China) and Raystation (version 4.7.5.4, RaySearch, Stockholm, Sweden), respectively: deep learning auto-segmentation was performed with AiContour and atlas-based auto-segmentation with Raystation. Overlap index (OI), Dice similarity coefficient (DSC), and volume difference (Dv) were computed from the auto-contours, and an independent-samples t-test was applied to the results. Deep learning auto-segmentation achieved significantly better OI and DSC than the atlas-based method; there was no significant difference in Dv between the two software packages. With deep learning auto-segmentation, the auto-contouring results for most organs in the chest and abdomen are good and, with slight modification, can meet clinical planning requirements. With the atlas-based method, the auto-contouring results for most OARs are inferior to those of deep learning auto-segmentation, and only the contours of some organs can be used clinically after modification.
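To make the reported metrics concrete, the sketch below shows how DSC, OI, and Dv could be computed from binary segmentation masks. This is a minimal illustration, not the authors' code: common formulations of the overlap index (intersection over reference volume) and relative volume difference are assumed, and the study's exact definitions may differ.

```python
# Minimal sketch (assumed definitions, not the study's code): overlap metrics
# between an auto-contour and a reference contour, as boolean voxel masks.
import numpy as np

def dice(auto: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(auto, ref).sum()
    return 2.0 * inter / (auto.sum() + ref.sum())

def overlap_index(auto: np.ndarray, ref: np.ndarray) -> float:
    """Overlap index, assumed here as |A∩B| / |B|, i.e. the fraction of
    the reference contour covered by the auto-contour."""
    return np.logical_and(auto, ref).sum() / ref.sum()

def volume_difference(auto: np.ndarray, ref: np.ndarray,
                      voxel_volume: float = 1.0) -> float:
    """Relative volume difference, assumed as (V_auto - V_ref) / V_ref."""
    v_auto = auto.sum() * voxel_volume
    v_ref = ref.sum() * voxel_volume
    return (v_auto - v_ref) / v_ref

# Toy 3D masks: a cube and a slightly shifted copy.
ref = np.zeros((64, 64, 64), dtype=bool); ref[20:40, 20:40, 20:40] = True
auto = np.zeros_like(ref);                auto[22:42, 20:40, 20:40] = True
print(dice(auto, ref), overlap_index(auto, ref), volume_difference(auto, ref))
```

Per-organ samples of each metric from the two tools could then be compared with an independent-samples t-test, e.g. scipy.stats.ttest_ind.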


2018 ◽  
Vol 4 (5) ◽  
pp. 055003 ◽  
Author(s):  
Samaneh Kazemifar ◽  
Anjali Balagopal ◽  
Dan Nguyen ◽  
Sarah McGuire ◽  
Raquibul Hannan ◽  
...  

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia on CT and differentiate it from non-COVID-19 pneumonia and non-pneumonia diseases. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance of the four pre-trained FCONet models in detecting COVID-19 pneumonia was compared. In addition, we tested the FCONet models on an external testing dataset extracted from low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers. RESULTS Of the four pre-trained FCONet models, the one based on ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three on the testing dataset. On the external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was again the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework operating on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.
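The transfer learning setup described above can be sketched as follows. This is a minimal Keras illustration, not the published FCONet code; the input size, classifier head, and optimizer are assumptions, and any of the four named backbones could be substituted in the same way.

```python
# Minimal sketch (assumptions, not the published FCONet code): a transfer-
# learning classifier with an ImageNet-pretrained ResNet50 backbone for three
# classes (COVID-19 pneumonia, other pneumonia, non-pneumonia).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(256, 256, 3), n_classes=3):
    # Any of VGG16 / ResNet50 / InceptionV3 / Xception from
    # tf.keras.applications could be swapped in here.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dense(128, activation="relu")(x)   # illustrative head size
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(backbone.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```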


2019 ◽  
Vol 104 (3) ◽  
pp. 677-684 ◽  
Author(s):  
Ward van Rooij ◽  
Max Dahele ◽  
Hugo Ribeiro Brandao ◽  
Alexander R. Delaney ◽  
Berend J. Slotman ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 268 ◽  
Author(s):  
Yeganeh Jalali ◽  
Mansoor Fateh ◽  
Mohsen Rezvani ◽  
Vahid Abolghasemi ◽  
Mohammad Hossein Anisi

Lung CT image segmentation is a key step in many applications such as lung cancer detection. It is considered a challenging problem because of similar image densities across pulmonary structures and differences between scanner types and scanning protocols. Most current semi-automatic segmentation methods rely on human input and may therefore lack accuracy; another shortcoming of these methods is their high false-positive rate. In recent years, several approaches based on deep learning frameworks have been applied effectively to medical image segmentation, and among existing deep neural networks, the U-Net has enjoyed great success in this field. In this paper, we propose a deep neural network architecture that performs automatic lung CT image segmentation. In the proposed method, several extensive preprocessing techniques are applied to the raw CT images. Ground truths corresponding to these images are then extracted via morphological operations and manual refinement. Finally, all the prepared images with their corresponding ground truths are fed into a modified U-Net in which the encoder is replaced with a pre-trained ResNet-34 network (referred to as Res BCDU-Net). In this architecture, we employ BConvLSTM (Bidirectional Convolutional Long Short-Term Memory) as an advanced integrator module instead of simple concatenation, merging the feature maps extracted from the corresponding level of the contracting path with the output of the previous up-convolutional layer in the expansion path. Finally, a densely connected convolutional block is utilized in the contracting path. The results of our extensive experiments on lung CT images (the LIDC-IDRI database) confirm the effectiveness of the proposed method, which achieves a Dice coefficient of 97.31%.
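A simplified sketch of the encoder replacement is given below, using PyTorch and the torchvision ResNet-34. This is an assumption-laden reduction of Res BCDU-Net, not the authors' implementation: the BConvLSTM skip fusion and the densely connected block are replaced by plain concatenation skips to keep the example short.

```python
# Minimal sketch (a simplification, not the authors' Res BCDU-Net): a U-Net
# whose encoder is a pre-trained ResNet-34; skips use plain concatenation.
import torch
import torch.nn as nn
from torchvision.models import resnet34, ResNet34_Weights

class UpBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)  # double spatial size, then fuse with the skip
        return self.conv(torch.cat([x, skip], dim=1))

class ResNet34UNet(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        r = resnet34(weights=ResNet34_Weights.DEFAULT)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)   # H/2, 64 ch
        self.pool = r.maxpool                               # H/4
        self.enc1, self.enc2 = r.layer1, r.layer2           # 64, 128 ch
        self.enc3, self.enc4 = r.layer3, r.layer4           # 256, 512 ch
        self.up3 = UpBlock(512, 256, 256)
        self.up2 = UpBlock(256, 128, 128)
        self.up1 = UpBlock(128, 64, 64)
        self.up0 = UpBlock(64, 64, 32)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, n_classes, 1))

    def forward(self, x):
        s0 = self.stem(x)              # H/2
        s1 = self.enc1(self.pool(s0))  # H/4
        s2 = self.enc2(s1)             # H/8
        s3 = self.enc3(s2)             # H/16
        s4 = self.enc4(s3)             # H/32
        x = self.up3(s4, s3)
        x = self.up2(x, s2)
        x = self.up1(x, s1)
        x = self.up0(x, s0)
        return self.head(x)            # per-pixel lung mask logits

print(ResNet34UNet()(torch.randn(1, 3, 256, 256)).shape)  # (1, 1, 256, 256)
```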


10.2196/26151 ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. e26151 ◽  
Author(s):  
Stanislav Nikolov ◽  
Sam Blackwell ◽  
Alexei Zverovitch ◽  
Ruheena Mendes ◽  
Michelle Livne ◽  
...  

Background Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. Objective Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. Methods The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. Results We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets representing different centers and countries from those used in model training. Conclusions Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
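A simplified sketch of the surface Dice idea follows. The authors have released an open-source reference implementation alongside the paper; the version below is only an approximation that assumes isotropic voxels and measures voxel-to-surface distances with SciPy distance transforms.

```python
# Minimal sketch (an assumed simplification, not the paper's reference code):
# surface Dice at tolerance tau, the fraction of each mask's surface lying
# within tau voxels of the other mask's surface.
import numpy as np
from scipy import ndimage

def surface_dice(a: np.ndarray, b: np.ndarray, tau: float) -> float:
    # Surface voxels: the mask minus its binary erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of each mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a)
    dist_to_b = ndimage.distance_transform_edt(~surf_b)
    overlap_a = (dist_to_b[surf_a] <= tau).sum()   # A's surface near B
    overlap_b = (dist_to_a[surf_b] <= tau).sum()   # B's surface near A
    return (overlap_a + overlap_b) / (surf_a.sum() + surf_b.sum())

# Toy example: two cubes offset by one voxel agree almost perfectly at tau=2.
a = np.zeros((64, 64, 64), dtype=bool); a[20:40, 20:40, 20:40] = True
b = np.zeros_like(a);                   b[21:41, 20:40, 20:40] = True
print(surface_dice(a, b, tau=2.0))
```

Unlike the volumetric Dice, this metric rewards contours whose surfaces lie within a clinically tolerable distance of each other, which matches the task of checking and correcting automated delineations.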


2021 ◽  
Author(s):  
Weijun Chen ◽  
Cheng Wang ◽  
Wenming Zhan ◽  
Yongshi Jia ◽  
Fangfang Ruan ◽  
...  

Abstract Background: Radiotherapy requires the target area and the organs at risk to be contoured on the patient's CT images. When delineating organs at risk (OARs) of the chest and abdomen, the physician must contour on each CT slice; delineating these large and varied structures is time-consuming and laborious. This study aims to evaluate the performance of two automatic contouring software packages on OAR delineation in CT images of lung cancer and rectal cancer patients. Methods: CT images of 15 patients with rectal cancer and 15 patients with lung cancer were selected, and the organs at risk were contoured by the same experienced physician as references. The same datasets were then automatically contoured with AiContour® (Linking MED, China) and Raystation® (RaySearch, Sweden), respectively. Overlap index (OI), Dice similarity coefficient (DSC), and volume difference (DV) were computed from the auto-contours, and an independent-samples t-test was applied to the results. Results: AiContour® achieved significantly better OI and DSC than Raystation®. There was no significant difference in DV between the two software packages. Conclusions: With AiContour®, the auto-contouring results for most organs in the chest and abdomen are good and, with slight modification, can meet clinical planning requirements. With Raystation®, the auto-contouring results for most OARs are inferior to those of AiContour®, and only the contours of some organs can be used clinically after modification.

