Progressive Generative Adversarial Networks: Deep Learning in Head and Neck Cancer CT Images to Synthesized PET Images Generation for Hybrid PET/CT Application

Author(s): Bin HUANG, Zhe-wei CHEN, Martin LAW, Shi-ting FENG, Qiao-liang LI, ...
2018, Vol. 2018, pp. 1-12

Author(s): Bin Huang, Zhewei Chen, Po-Man Wu, Yufeng Ye, Shi-Ting Feng, ...

Purpose. In this study, we proposed an automated deep learning (DL) method for gross tumor volume (GTV) contouring of head and neck cancer (HNC) on positron emission tomography-computed tomography (PET-CT) images. Materials and Methods. PET-CT images were collected from 22 newly diagnosed HNC patients, 17 from one center (Database 1) and 5 from another (Database 2). An oncologist and a radiologist determined the gold-standard GTV manually by consensus. We developed a deep convolutional neural network (DCNN) and trained it on the two-dimensional PET-CT images and the gold-standard GTV in the training dataset. We performed two experiments: Experiment 1 used Database 1 only, and Experiment 2 used both Databases 1 and 2. In both experiments, the proposed method was evaluated with a leave-one-out cross-validation strategy. The median results of Experiment 2 (GTVa) were compared with the performance of other methods in the literature and with the gold standard (GTVm). Results. Segmenting the tumor of one patient on coregistered PET-CT images took less than one minute. The Dice similarity coefficient (DSC) of the proposed method ranged from 0.481 to 0.872 in Experiment 1 and from 0.482 to 0.868 in Experiment 2. The DSC of GTVa was better than those reported in previous studies. GTVa correlated highly with GTVm (R = 0.99, P < 0.001), and the median volume difference between GTVm and GTVa was 10.9%. The median DSC, sensitivity, and precision of GTVa were 0.785, 0.764, and 0.789, respectively. Conclusion. A fully automatic GTV contouring method for HNC, based on a DCNN and dual-center PET-CT data, was developed with high accuracy and efficiency. The proposed method can assist clinicians in HNC management.
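For readers unfamiliar with the overlap metrics quoted above, the following is a minimal sketch, not the authors' implementation, of how the Dice similarity coefficient, sensitivity, and precision can be computed from a predicted GTV mask and a gold-standard mask; the function name and NumPy-based binary masks are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): the evaluation metrics named in the
# abstract -- Dice similarity coefficient (DSC), sensitivity, and precision --
# computed from a predicted GTV mask and the gold-standard mask.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gold: np.ndarray) -> dict:
    """pred and gold are binary masks of the same shape (1 = tumor voxel)."""
    pred = pred.astype(bool)
    gold = gold.astype(bool)
    tp = np.logical_and(pred, gold).sum()   # true-positive voxels
    fp = np.logical_and(pred, ~gold).sum()  # false-positive voxels
    fn = np.logical_and(~pred, gold).sum()  # false-negative voxels
    dsc = 2 * tp / (2 * tp + fp + fn)       # Dice similarity coefficient
    sensitivity = tp / (tp + fn)            # fraction of the gold GTV recovered
    precision = tp / (tp + fp)              # fraction of the prediction that is correct
    return {"DSC": dsc, "sensitivity": sensitivity, "precision": precision}
```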


Author(s): Yngve Mardal Moe, Aurora Rosvoll Groendahl, Oliver Tomic, Einar Dale, Eirik Malinen, ...

Abstract Purpose Identification and delineation of the gross tumour and malignant nodal volume (GTV) in medical images are vital in radiotherapy. We assessed the applicability of convolutional neural networks (CNNs) for fully automatic delineation of the GTV from FDG-PET/CT images of patients with head and neck cancer (HNC). CNN models were compared to manual GTV delineations made by experienced specialists. New structure-based performance metrics were introduced to enable in-depth assessment of auto-delineation of multiple malignant structures in individual patients. Methods U-Net CNN models were trained and evaluated on images and manual GTV delineations from 197 HNC patients. The dataset was split into training, validation and test cohorts (n = 142, n = 15 and n = 40, respectively). The Dice score, surface distance metrics and the new structure-based metrics were used for model evaluation. Additionally, auto-delineations were manually assessed by an oncologist for 15 randomly selected patients in the test cohort. Results The mean Dice scores of the auto-delineations were 55%, 69% and 71% for the CT-based, PET-based and PET/CT-based CNN models, respectively. The PET signal was essential for delineating all structures. Models based on PET/CT images identified 86% of the true GTV structures, whereas models built solely on CT images identified only 55% of the true structures. The oncologist reported very high-quality auto-delineations for 14 out of the 15 randomly selected patients. Conclusions CNNs provided high-quality auto-delineations for HNC using multimodality PET/CT. The introduced structure-wise evaluation metrics provided valuable information on CNN model strengths and weaknesses for multi-structure auto-delineation.
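The "structures identified" figures above imply a per-structure rather than per-voxel view of performance. As a rough illustration only, and not the metric definition used in the paper, the sketch below labels connected components ("structures") in the gold-standard GTV mask with SciPy and counts how many of them are touched by the auto-delineation; the function name and the overlap criterion are hypothetical assumptions.

```python
# Hedged sketch of a structure-wise detection rate: label connected components
# in the gold-standard GTV mask and count how many are overlapped by the
# predicted mask. The paper's exact structure-based metrics may differ.
import numpy as np
from scipy import ndimage

def structure_detection_rate(pred: np.ndarray, gold: np.ndarray) -> float:
    """Fraction of gold-standard GTV structures that the prediction touches."""
    labels, n_structures = ndimage.label(gold.astype(bool))
    if n_structures == 0:
        return float("nan")
    detected = 0
    for structure_id in range(1, n_structures + 1):
        # A structure counts as detected if any predicted voxel falls inside it.
        if np.logical_and(labels == structure_id, pred.astype(bool)).any():
            detected += 1
    return detected / n_structures
```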


2014, Vol. 17 (2), pp. 139-144
Author(s): F. Arias, V. Chicata, M. J. García-Velloso, G. Asín, M. Uzcanga, ...

2012, Vol. 53 (11), pp. 1730-1735
Author(s): S.-C. Chan, H.-M. Wang, S.-H. Ng, C.-L. Hsu, Y.-J. Lin, ...
