Tumor Segmentation in Patients with Head and Neck Cancers Using Deep Learning Based on Multi-modality PET/CT Images

Author(s):  
Mohamed A. Naser ◽  
Lisanne V. van Dijk ◽  
Renjie He ◽  
Kareem A. Wahid ◽  
Clifton D. Fuller
2021 ◽  
Vol 161 ◽  
pp. S1374-S1376
Author(s):  
B.N. Huynh ◽  
A.R. Groendahl ◽  
Y.M. Moe ◽  
O. Tomic ◽  
E. Dale ◽  
...  

2019 ◽  
Vol 133 ◽  
pp. S557
Author(s):  
A. Rosvoll Groendahl ◽  
M. Mulstad ◽  
Y. Mardal Moe ◽  
I. Skjei Knudtsen ◽  
T. Torheim ◽  
...  

2007 ◽  
Vol 68 (3) ◽  
pp. 763-770 ◽  
Author(s):  
Stephen L. Breen ◽  
Julia Publicover ◽  
Shiroma De Silva ◽  
Greg Pond ◽  
Kristy Brock ◽  
...  

2021 ◽  
Author(s):  
Mohamed A. Naser ◽  
Kareem A. Wahid ◽  
Abdallah Sherif Radwan Mohamed ◽  
Moamen Abobakr Abdelaal ◽  
Renjie He ◽  
...  

Determining progression-free survival (PFS) for head and neck squamous cell carcinoma (HNSCC) patients is a challenging but pertinent task that could help stratify patients for improved overall outcomes. PET/CT images provide a rich source of anatomical and metabolic data for potential clinical biomarkers that would inform treatment decisions and could help improve PFS. In this study, we participate in the 2021 HECKTOR Challenge to predict PFS in a large dataset of HNSCC PET/CT images using deep learning approaches. We develop a series of deep learning models based on the DenseNet architecture using a negative log-likelihood loss function that utilizes PET/CT images and clinical data as separate input channels to predict PFS in days. Internal model validation based on 10-fold cross-validation using the training data (N=224) yielded C-index values of up to 0.622 when censoring status was not considered in the C-index computation and 0.842 when it was. We then implemented model ensembling approaches based on the training data cross-validation folds to predict the PFS of the test set patients (N=101). External validation on the test set for the best ensembling method yielded a C-index value of 0.694. Our results are a promising example of how deep learning approaches can effectively utilize imaging and clinical data for medical outcome prediction in HNSCC, but further work in optimizing these processes is needed.
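The C-index reported in this abstract measures how well the predicted PFS values rank patients relative to their observed outcomes. As a point of reference, a minimal sketch of Harrell's concordance index with censoring handling is shown below; this is the standard formulation, not necessarily the exact variant used by the HECKTOR Challenge evaluation:

```python
def concordance_index(times, preds, events):
    """Harrell's C-index: the fraction of comparable patient pairs in
    which the patient with the shorter observed time also has the
    shorter predicted time. A pair (i, j) is comparable only when the
    earlier observed time i corresponds to an actual event
    (events[i] == 1), not a censored follow-up. Ties in the
    predictions count as half-concordant."""
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # i must have the strictly earlier time and an observed event
            if times[i] < times[j] and events[i]:
                comparable += 1
                if preds[i] < preds[j]:
                    concordant += 1
                elif preds[i] == preds[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Perfectly ranked predictions give C = 1.0
print(concordance_index([5, 10, 20], [4, 11, 25], [1, 1, 1]))  # -> 1.0
```

Censoring enters only through the comparability rule: a censored patient's time is a lower bound on survival, so pairs where the earlier time is censored carry no ranking information and are skipped.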


2019 ◽  
Vol 133 ◽  
pp. S526
Author(s):  
A. Rosvoll Groendahl ◽  
A.D. Midtfjord ◽  
G.S. Elvatun Rakh Langberg ◽  
O. Tomic ◽  
U.G. Indahl ◽  
...  

2021 ◽  
Author(s):  
Kareem A. Wahid ◽  
Renjie He ◽  
Cem Dede ◽  
Abdallah Sherif Radwan Mohamed ◽  
Moamen Abobakr Abdelaal ◽  
...  

PET/CT images provide a rich data source for clinical prediction models in head and neck squamous cell carcinoma (HNSCC). Deep learning models often use images in an end-to-end fashion with clinical data or no additional input for predictions. However, in the context of HNSCC, the tumor region of interest may be an informative prior in the generation of improved prediction performance. In this study, we utilize a deep learning framework based on a DenseNet architecture to combine PET images, CT images, primary tumor segmentation masks, and clinical data as separate channels to predict progression-free survival (PFS) in days for HNSCC patients. Through internal validation (10-fold cross-validation) based on a large set of training data provided by the 2021 HECKTOR Challenge, we achieve a mean C-index of 0.855 ± 0.060 and 0.650 ± 0.074 when observed events are and are not included in the C-index calculation, respectively. Ensemble approaches applied to cross-validation folds yield C-index values up to 0.698 in the independent test set (external validation). Importantly, the value of the added segmentation mask is underscored in both internal and external validation by an improvement of the C-index when compared to models that do not utilize the segmentation mask. These promising results highlight the utility of including segmentation masks as additional input channels in deep learning pipelines for clinical outcome prediction in HNSCC.
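The multi-channel input described in this abstract — PET, CT, and the segmentation mask fed as separate channels — amounts to stacking co-registered volumes along a channel axis before they reach the network. A minimal NumPy sketch with hypothetical array names and sizes (the actual HECKTOR preprocessing and resolutions differ):

```python
import numpy as np

# Hypothetical co-registered volumes, all resampled to the same voxel grid
depth, height, width = 8, 64, 64
pet  = np.random.rand(depth, height, width).astype(np.float32)  # normalized SUV
ct   = np.random.rand(depth, height, width).astype(np.float32)  # normalized HU
mask = np.zeros((depth, height, width), dtype=np.float32)       # binary GTV mask
mask[2:5, 20:40, 20:40] = 1.0

# Stack as separate channels: shape (channels, depth, height, width),
# the channels-first layout a 3D DenseNet-style CNN would consume
x = np.stack([pet, ct, mask], axis=0)
print(x.shape)  # -> (3, 8, 64, 64)
```

Because the mask occupies its own channel, the network can learn to weight tumor voxels differently from the surrounding anatomy, which is the mechanism the abstract credits for the C-index improvement.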


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Bin Huang ◽  
Zhewei Chen ◽  
Po-Man Wu ◽  
Yufeng Ye ◽  
Shi-Ting Feng ◽  
...  

Purpose. In this study, we proposed an automated deep learning (DL) method for head and neck cancer (HNC) gross tumor volume (GTV) contouring on positron emission tomography-computed tomography (PET-CT) images. Materials and Methods. PET-CT images were collected from 22 newly diagnosed HNC patients, of whom 17 (Database 1) and 5 (Database 2) were from two centers, respectively. An oncologist and a radiologist decided the gold standard of GTV manually by consensus. We developed a deep convolutional neural network (DCNN) and trained the network based on the two-dimensional PET-CT images and the gold standard of GTV in the training dataset. We did two experiments: Experiment 1, with Database 1 only, and Experiment 2, with both Databases 1 and 2. In both Experiment 1 and Experiment 2, we evaluated the proposed method using a leave-one-out cross-validation strategy. We compared the median results in Experiment 2 (GTVa) with the performance of other methods in the literature and with the gold standard (GTVm). Results. A tumor segmentation task for a patient on coregistered PET-CT images took less than one minute. The Dice similarity coefficient (DSC) of the proposed method was 0.481–0.872 in Experiment 1 and 0.482–0.868 in Experiment 2. The DSC of GTVa was better than that in previous studies. A high correlation was found between GTVa and GTVm (R = 0.99, P < 0.001). The median volume difference (%) between GTVm and GTVa was 10.9%. The median values of DSC, sensitivity, and precision of GTVa were 0.785, 0.764, and 0.789, respectively. Conclusion. A fully automatic GTV contouring method for HNC based on DCNN and PET-CT images from dual centers has been successfully proposed with high accuracy and efficiency. Our proposed method can assist clinicians in HNC management.
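The Dice similarity coefficient (DSC) reported throughout this abstract compares the predicted contour against the gold-standard GTV. A minimal sketch of the standard definition, DSC = 2|A ∩ B| / (|A| + |B|), on binary masks (illustrative toy masks, not the study's data):

```python
import numpy as np

def dice_coefficient(pred, gold):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no
    overlap) to 1 (identical masks)."""
    pred, gold = pred.astype(bool), gold.astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    total = pred.sum() + gold.sum()
    # Convention: two empty masks are treated as a perfect match
    return 2.0 * intersection / total if total else 1.0

a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # 4 "tumor" voxels
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # 6 voxels, 4 overlapping
print(dice_coefficient(a, b))  # -> 0.8
```

The sensitivity and precision figures in the abstract are the same overlap statistics normalized by the gold-standard volume and the predicted volume, respectively.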


2009 ◽  
Vol 36 (9) ◽  
pp. 1417-1424 ◽  
Author(s):  
Keisuke Yoshida ◽  
Akiko Suzuki ◽  
Toshiyuki Nagashima ◽  
Jin Lee ◽  
Choichi Horiuchi ◽  
...  

2020 ◽  
Vol 15 (1) ◽  
Author(s):  
Wen Chen ◽  
Yimin Li ◽  
Brandon A. Dyer ◽  
Xue Feng ◽  
Shyam Rao ◽  
...  
