EP-1890: Accurate organs at risk contour propagation in head and neck adaptive radiotherapy

2016, Vol. 119, pp. S893-S894
Author(s): T.T. Zhai, H.P. Bijl, J.A. Langendijk, R.J. Steenbakkers, C.L. Brouwer, ...

2021, Vol. 3
Author(s): Wen Chen, Yimin Li, Nimu Yuan, Jinyi Qi, Brandon A. Dyer, ...

Purpose: To assess image quality and uncertainty in organ-at-risk (OAR) segmentation on cone-beam computed tomography (CBCT) enhanced by a deep convolutional neural network (DCNN) for head and neck cancer.

Methods: An in-house DCNN was trained using forty post-operative head and neck cancer patients with their planning CT and first-fraction CBCT images. An additional fifteen patients with a repeat simulation CT (rCT) and a CBCT scan acquired on the same day (oCBCT) were used for validation and clinical utility assessment. Enhanced CBCT (eCBCT) images were generated from the oCBCT using the in-house DCNN. Quantitative image quality improvement was evaluated using HU accuracy, signal-to-noise ratio (SNR), and the structural similarity index measure (SSIM). OARs were delineated on oCBCT and eCBCT and compared with manual structures on the same-day rCT. Contour accuracy was assessed using the Dice similarity coefficient (DSC), Hausdorff distance (HD), and center-of-mass (COM) displacement. Users' confidence in manually segmenting OARs on both eCBCT and oCBCT was assessed qualitatively by visual scoring.

Results: eCBCT OARs showed significant improvements in mean pixel values, SNR (p < 0.05), and SSIM (p < 0.05) compared to oCBCT images. Mean DSC of eCBCT-to-rCT (0.83 ± 0.06) was higher than that of oCBCT-to-rCT (0.70 ± 0.13). Mean HD improved for eCBCT-to-rCT (0.42 ± 0.13 cm) vs. oCBCT-to-rCT (0.72 ± 0.25 cm). Mean COM displacement was smaller for eCBCT-to-rCT (0.28 ± 0.19 cm) than for oCBCT-to-rCT (0.44 ± 0.22 cm). Visual scores showed that OAR segmentation was easier to perform on eCBCT than on oCBCT images.

Conclusion: The DCNN improved fast-scan low-dose CBCT in terms of HU accuracy, image contrast, and OAR delineation accuracy, demonstrating the potential of eCBCT for adaptive radiotherapy.
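The contour-accuracy metrics reported in this abstract (DSC, HD, and COM displacement) can all be computed directly from pairs of binary OAR masks. The sketch below is an illustrative NumPy/SciPy implementation, not the authors' code; the mask shapes, voxel spacing, and variable names (ecbct_oar, rct_oar) are assumptions made for the example.

```python
# Illustrative (not the authors') implementation of the three contour-agreement
# metrics reported above, computed on pairs of binary OAR masks with NumPy/SciPy.
import numpy as np
from scipy.ndimage import center_of_mass
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b, spacing):
    """Symmetric Hausdorff distance between foreground voxels, in physical units."""
    pa = np.argwhere(a) * np.asarray(spacing)  # voxel indices -> physical coordinates
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

def com_shift(a, b, spacing):
    """Euclidean displacement between the two mask centers of mass, in physical units."""
    ca = np.array(center_of_mass(a)) * np.asarray(spacing)
    cb = np.array(center_of_mass(b)) * np.asarray(spacing)
    return float(np.linalg.norm(ca - cb))

# Hypothetical example: an OAR mask delineated on eCBCT vs. the same-day rCT reference.
spacing = (0.3, 0.1, 0.1)                       # assumed (z, y, x) voxel size in cm
ecbct_oar = np.zeros((64, 128, 128), bool); ecbct_oar[20:40, 40:80, 40:80] = True
rct_oar   = np.zeros((64, 128, 128), bool); rct_oar[22:42, 42:82, 42:82] = True
print(dice(ecbct_oar, rct_oar), hausdorff(ecbct_oar, rct_oar, spacing),
      com_shift(ecbct_oar, rct_oar, spacing))
```

With the spacing expressed in centimeters, the HD and COM values come out in the same units (cm) used in the abstract.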


Author(s): Xianjin Dai, Yang Lei, Tonghe Wang, Anees Dhabaan, Mark McDonald, ...

2019, Vol. 104 (3), pp. 677-684
Author(s): Ward van Rooij, Max Dahele, Hugo Ribeiro Brandao, Alexander R. Delaney, Berend J. Slotman, ...

DOI: 10.2196/26151, 2021, Vol. 23 (7), pp. e26151
Author(s): Stanislav Nikolov, Sam Blackwell, Alexei Zverovitch, Ruheena Mendes, Michelle Livne, ...

Background: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires manual time to delineate radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain.

Objective: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice.

Methods: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with segmentations taken from clinical practice as well as segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ-at-risk definitions.

Results: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ-at-risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, from centers and countries different from those used for model training.

Conclusions: Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
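The surface Dice similarity coefficient mentioned above measures, at a chosen tolerance, the fraction of the two organ surfaces that lie within that tolerance of each other. The sketch below is a simplified voxel-boundary approximation of that idea, not the paper's implementation (which uses a surface-area weighting); the function name, tolerance value, voxel spacing, and test masks are illustrative assumptions.

```python
# Simplified sketch of a surface Dice coefficient at tolerance tau (mm): the share of
# each mask's boundary lying within tau of the other mask's boundary. Not the paper's
# implementation, which weights by surface area rather than counting boundary voxels.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_dice(a, b, spacing, tol_mm):
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)          # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    # Distance (mm) from every voxel to the nearest boundary voxel of the other mask.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    near_b = (dist_to_b[surf_a] <= tol_mm).sum()   # A-boundary voxels close to B
    near_a = (dist_to_a[surf_b] <= tol_mm).sum()   # B-boundary voxels close to A
    return (near_a + near_b) / (surf_a.sum() + surf_b.sum())

# Hypothetical example: automated vs. expert mask at a 2 mm tolerance
# (in practice the tolerance is chosen per organ at risk).
spacing = (2.5, 1.0, 1.0)                    # assumed (z, y, x) voxel size in mm
auto = np.zeros((40, 96, 96), bool); auto[10:30, 30:60, 30:60] = True
expert = np.zeros((40, 96, 96), bool); expert[11:31, 31:61, 31:61] = True
print(surface_dice(auto, expert, spacing, tol_mm=2.0))
```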

