Automatic Segmentation to Define Organs at Risk (OARs) for Function Sparing Head and Neck IMRT

2014 ◽  
Vol 90 (1) ◽  
pp. S876-S877
Author(s):  
D. Thomson ◽  
C. Boylan ◽  
T. Liptrot ◽  
A. Aitkenhead ◽  
L. Lee ◽  
...  

2021 ◽  
Vol 32 ◽  
pp. S793
Author(s):  
S. Datta ◽  
A. Traverso ◽  
S. Mehrkanoon ◽  
A. Briassouli

2014 ◽  
Vol 9 (1) ◽  
pp. 173 ◽  
Author(s):  
David Thomson ◽  
Chris Boylan ◽  
Tom Liptrot ◽  
Adam Aitkenhead ◽  
Lip Lee ◽  
...  

2021 ◽  
Vol 9 ◽  
Author(s):  
Wei Wang ◽  
Qingxin Wang ◽  
Mengyu Jia ◽  
Zhongqiu Wang ◽  
Chengwen Yang ◽  
...  

Purpose: A novel deep learning model, the Siamese Ensemble Boundary Network (SEB-Net), was developed to improve the accuracy of automatic organs-at-risk (OARs) segmentation in CT images of the head and neck (HaN), including small organs, and was evaluated for use in radiation oncology practice.

Methods: SEB-Net was designed to transform CT slices into probability maps for HaN OAR segmentation. Two key contributions were made to the network design to improve the accuracy and reliability of automatic segmentation of difficult organs (e.g., relatively tiny or irregularly shaped ones) without sacrificing the field of view. The first is an ensemble learning strategy with shared weights that aggregates pixel-probability predictions across the three orthogonal CT planes to preserve 3D information; the second is a boundary loss, which takes the form of a distance metric on the space of contours and mitigates the difficulties that conventional region-based regularization faces in highly unbalanced segmentation scenarios. By combining the two techniques, segmentation is enhanced through comprehensive use of both inter- and intra-slice information. In total, 188 patients with HaN cancer were included in the study, of whom 133 were randomly selected for training and 55 for validation. An additional 50 untreated cases were used for clinical evaluation.

Results: With the proposed method, the average volumetric Dice similarity coefficient (DSC) for HaN OARs (and small organs) was 0.871 (0.900), significantly higher than the results from Ua-Net, Anatomy-Net, and SRM by 4.94% (26.05%), 7.80% (24.65%), and 12.97% (40.19%), respectively. The average 95% Hausdorff distance (95% HD) for HaN OARs (and small organs) was 2.87 mm (0.81 mm), improving on the same three methods by 50.94% (75.45%), 88.41% (79.07%), and 5.59% (67.98%), respectively. After delineation by SEB-Net, 81.92% of all organs in the 50 untreated HaN cancer cases required no modification on clinical evaluation.

Conclusions: Compared with several state-of-the-art methods, including Ua-Net, Anatomy-Net, and SRM, the proposed method substantially improves segmentation accuracy for HaN and small organs in CT imaging in terms of efficiency, feasibility, and applicability.
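The two evaluation metrics reported above, volumetric DSC and 95% HD, can be computed from binary voxel masks roughly as follows. This is a minimal NumPy/SciPy sketch, not the authors' implementation; the surface extraction via erosion and the voxel-spacing handling are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, gt):
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance (in mm, given the
    voxel spacing) between the surfaces of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: the mask minus its erosion.
    surf_pred = pred & ~binary_erosion(pred)
    surf_gt = gt & ~binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of each mask.
    dist_to_gt = distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    # Symmetric surface-to-surface distances, 95th percentile.
    distances = np.concatenate([dist_to_gt[surf_pred], dist_to_pred[surf_gt]])
    return np.percentile(distances, 95)
```

For identical masks the DSC is 1.0 and the 95% HD is 0 mm; higher DSC and lower HD indicate better agreement, matching the direction of the comparisons in the Results.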


2019 ◽  
Vol 46 (5) ◽  
pp. 2204-2213 ◽  
Author(s):  
Jason W. Chan ◽  
Vasant Kearney ◽  
Samuel Haaf ◽  
Susan Wu ◽  
Madeleine Bogdanov ◽  
...  

F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 2104 ◽  
Author(s):  
Carlton Chu ◽  
Jeffrey De Fauw ◽  
Nenad Tomasev ◽  
Bernardino Romera Paredes ◽  
Cían Hughes ◽  
...  

Radiotherapy is one of the main treatments for head and neck cancers; radiation is used to kill cancerous cells and prevent their recurrence. Complex treatment planning is required to ensure that enough radiation reaches the tumour while little reaches sensitive nearby structures (known as organs at risk), such as the eyes and nerves, which might otherwise be damaged. This is especially difficult in the head and neck, where multiple at-risk structures often lie extremely close to the tumour. It can take radiotherapy experts four hours or more to pick out the important areas on planning scans (a process known as segmentation). This research will focus on applying machine learning algorithms to the automatic segmentation of head and neck planning computed tomography (CT) and magnetic resonance imaging (MRI) scans of patients at University College London Hospital NHS Foundation Trust. Through analysis of the images used in radiotherapy, DeepMind Health will investigate improvements in the efficiency of cancer treatment pathways.


2019 ◽  
Vol 104 (3) ◽  
pp. 677-684 ◽  
Author(s):  
Ward van Rooij ◽  
Max Dahele ◽  
Hugo Ribeiro Brandao ◽  
Alexander R. Delaney ◽  
Berend J. Slotman ◽  
...  

10.2196/26151 ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. e26151
Author(s):  
Stanislav Nikolov ◽  
Sam Blackwell ◽  
Alexei Zverovitch ◽  
Ruheena Mendes ◽  
Michelle Livne ◽  
...  

Background: Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment and introduces interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, challenges remain in defining, quantifying, and achieving expert performance.

Objective: Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice.

Methods: The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with segmentations both taken from clinical practice and created by experienced radiographers as part of this research, all in accordance with consensus organ-at-risk definitions.

Results: We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for comparing organ delineations that quantifies the deviation between organ-at-risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets from centers and countries different from those used in model training.

Conclusions: Deep learning is an effective and clinically applicable technique for segmenting the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
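The surface Dice similarity coefficient described in the Results can be approximated on voxel grids as below. This is a hedged sketch under simplifying assumptions: the published metric operates on surface meshes with organ-specific tolerances, whereas this version extracts surface voxels by erosion and uses Euclidean distance transforms.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_dice(pred, gt, tolerance_mm, spacing=(1.0, 1.0, 1.0)):
    """Fraction of the two segmentation surfaces lying within
    `tolerance_mm` of each other (voxel-based approximation)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface voxels: the mask minus its erosion.
    surf_pred = pred & ~binary_erosion(pred)
    surf_gt = gt & ~binary_erosion(gt)
    # Distance maps to each surface, scaled by voxel spacing (mm).
    dist_to_gt = distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    # Count surface voxels of each mask within tolerance of the other surface.
    within = ((dist_to_gt[surf_pred] <= tolerance_mm).sum()
              + (dist_to_pred[surf_gt] <= tolerance_mm).sum())
    return within / (surf_pred.sum() + surf_gt.sum())
```

Unlike the volumetric Dice, this metric rewards contours that lie everywhere within a clinically acceptable tolerance of the expert's, which better reflects the editing effort needed to correct an automated segmentation.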

