Deep learning vs. atlas-based models for fast auto-segmentation of the masticatory muscles on head and neck CT images

2020 ◽  
Author(s):  
Wen Chen ◽  
Brandon A Dyer ◽  
Xue Feng ◽  
Yimin Li ◽  
Shyam Rao ◽  
...  

Abstract Background: Trismus is caused by impaired function of the masticatory muscles. Routine delineation of these muscles during planning may improve dose tracking and facilitate dose reduction, resulting in decreased radiation-related trismus. This study aimed to compare a deep learning model vs. a commercial atlas-based model for fast auto-segmentation of the masticatory muscles on head and neck computed tomography (CT) images. Material and methods: Paired masseter (M), temporalis (T), and medial and lateral pterygoid (MP, LP) muscles were manually segmented on 56 CT images. CT images were randomly divided into training (n=27) and validation (n=29) cohorts. Two methods were used for automatic delineation of masticatory muscles (MMs): deep learning auto-segmentation (DLAS) and atlas-based auto-segmentation (ABAS). Quantitative assessment of automatic versus manually segmented contours was performed using the Dice similarity coefficient (DSC), recall, precision, Hausdorff distance (HD), HD95, and mean surface distance (MSD). The interobserver variability in manual segmentation of MMs was also evaluated. Differences in dose (∆Dose) to MMs for DLAS and ABAS segmentations were assessed. A paired t-test was used to compare the geometric and dosimetric differences between the DLAS and ABAS methods. Results: DLAS outperformed ABAS in delineating all MMs (p < 0.05). The DLAS mean DSC for M, T, MP, and LP ranged from 0.83±0.03 to 0.89±0.02, while the ABAS mean DSC ranged from 0.79±0.05 to 0.85±0.04. The mean values for recall, precision, HD, HD95, and MSD also improved with DLAS and were close to the mean interobserver variation. With few exceptions, ∆D99%, ∆D95%, ∆D50%, and ∆D1% for all structures were below 10% for both DLAS and ABAS, with no detectable statistical difference (p > 0.05). DLAS-based contours had dose endpoints that matched those of the manually segmented contours more closely than ABAS-based contours did. Conclusions: DLAS auto-segmentation of the masticatory muscles for head and neck radiotherapy had improved segmentation accuracy compared with ABAS, with no qualitative difference in dosimetric endpoints compared to manually segmented contours.
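For reference, the geometric metrics reported here have standard definitions. The following is a minimal NumPy/SciPy sketch (not the study's evaluation code) computing them from two binary masks; array names are illustrative, and isotropic voxel spacing is assumed for the surface distances.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def overlap_metrics(auto, manual):
    """DSC, recall, and precision for two binary masks."""
    a, m = auto.astype(bool), manual.astype(bool)
    tp = np.logical_and(a, m).sum()          # voxels in both contours
    return {
        "DSC": 2.0 * tp / (a.sum() + m.sum()),
        "recall": tp / m.sum(),              # fraction of the manual contour recovered
        "precision": tp / a.sum(),           # fraction of the auto contour that is correct
    }

def surface_metrics(auto, manual):
    """HD, HD95, and MSD from boundary voxels (isotropic spacing assumed)."""
    a, m = auto.astype(bool), manual.astype(bool)
    edge_a = np.argwhere(a & ~binary_erosion(a))   # boundary = mask minus its erosion
    edge_m = np.argwhere(m & ~binary_erosion(m))
    d = cdist(edge_a, edge_m)                      # pairwise boundary distances
    all_d = np.concatenate([d.min(axis=1), d.min(axis=0)])  # both directed distances
    return {"HD": all_d.max(),                     # worst-case disagreement
            "HD95": np.percentile(all_d, 95),      # robust to outlier voxels
            "MSD": all_d.mean()}                   # mean surface distance
```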

2020 ◽  
Author(s):  
Wen Chen ◽  
Yimin Li ◽  
Brandon A Dyer ◽  
Xue Feng ◽  
Shyam Rao ◽  
...  

Abstract Background: Impaired function of the masticatory muscles can lead to trismus. Routine delineation of these muscles during planning may improve dose tracking and facilitate dose reduction, resulting in decreased radiation-related trismus. This study aimed to compare a deep learning model with a commercial atlas-based model for fast auto-segmentation of the masticatory muscles on head and neck computed tomography (CT) images. Material and methods: Paired masseter (M), temporalis (T), and medial and lateral pterygoid (MP, LP) muscles were manually segmented on 56 CT images. CT images were randomly divided into training (n=27) and validation (n=29) cohorts. Two methods were used for automatic delineation of masticatory muscles (MMs): deep learning auto-segmentation (DLAS) and atlas-based auto-segmentation (ABAS). The automatic algorithms were evaluated using the Dice similarity coefficient (DSC), recall, precision, Hausdorff distance (HD), HD95, and mean surface distance (MSD). A consolidated score was calculated by normalizing the metrics against interobserver variability and averaging over all patients. Differences in dose (∆Dose) to MMs for DLAS and ABAS segmentations were assessed. A paired t-test was used to compare the geometric and dosimetric differences between the DLAS and ABAS methods. Results: DLAS outperformed ABAS in delineating all MMs (p < 0.05). The DLAS mean DSC for M, T, MP, and LP ranged from 0.83±0.03 to 0.89±0.02, while the ABAS mean DSC ranged from 0.79±0.05 to 0.85±0.04. The mean values for recall, HD, HD95, and MSD also improved with DLAS. Interobserver variation revealed the highest variability in DSC and MSD for both T and MP, and the highest scores were achieved for T by both automatic algorithms. With few exceptions, the mean ∆D98%, ∆D95%, ∆D50%, and ∆D2% for all structures were below 10% for both DLAS and ABAS, with no detectable statistical difference (p > 0.05). DLAS-based contours had dose endpoints that matched those of the manually segmented contours more closely than ABAS-based contours did. Conclusions: DLAS auto-segmentation of the masticatory muscles for head and neck radiotherapy had improved segmentation accuracy compared with ABAS, with no qualitative difference in dosimetric endpoints compared to manually segmented contours.
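The "consolidated score" is described only briefly above; one plausible reading is sketched below, in which each metric is normalized against its interobserver value (ratios oriented so that higher is always better) and the normalized scores are averaged. This is speculative, not the paper's published formula.

```python
import numpy as np

def consolidated_score(metrics, interobserver):
    """One plausible consolidation: normalize each metric by its
    interobserver value, then average. Inputs are dicts such as
    {"DSC": ..., "HD95": ..., "MSD": ...}."""
    scores = []
    for name, value in metrics.items():
        ref = interobserver[name]
        if name == "DSC":              # higher is better: ratio to observer agreement
            scores.append(value / ref)
        else:                          # distances: lower is better, so invert the ratio
            scores.append(ref / value)
    return float(np.mean(scores))      # averaged over metrics; averaging over
                                       # patients would follow the same pattern
```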


2020 ◽  
Vol 15 (1) ◽  
Author(s):  
Wen Chen ◽  
Yimin Li ◽  
Brandon A. Dyer ◽  
Xue Feng ◽  
Shyam Rao ◽  
...  

2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Bin Huang ◽  
Zhewei Chen ◽  
Po-Man Wu ◽  
Yufeng Ye ◽  
Shi-Ting Feng ◽  
...  

Purpose. In this study, we proposed an automated deep learning (DL) method for head and neck cancer (HNC) gross tumor volume (GTV) contouring on positron emission tomography-computed tomography (PET-CT) images. Materials and Methods. PET-CT images were collected from 22 newly diagnosed HNC patients, of whom 17 (Database 1) and 5 (Database 2) came from two different centers. An oncologist and a radiologist established the gold-standard GTV manually by consensus. We developed a deep convolutional neural network (DCNN) and trained it on the two-dimensional PET-CT images and the gold-standard GTV in the training dataset. We performed two experiments: Experiment 1, with Database 1 only, and Experiment 2, with both Databases 1 and 2. In both experiments, we evaluated the proposed method using a leave-one-out cross-validation strategy. We compared the median results in Experiment 2 (GTVa) with the performance of other methods in the literature and with the gold standard (GTVm). Results. A tumor segmentation task for one patient on coregistered PET-CT images took less than one minute. The Dice similarity coefficient (DSC) of the proposed method ranged from 0.481 to 0.872 in Experiment 1 and from 0.482 to 0.868 in Experiment 2. The DSC of GTVa was better than that reported in previous studies. A high correlation was found between GTVa and GTVm (R = 0.99, P < 0.001). The median volume difference between GTVm and GTVa was 10.9%. The median DSC, sensitivity, and precision of GTVa were 0.785, 0.764, and 0.789, respectively. Conclusion. A fully automatic GTV contouring method for HNC based on a DCNN and dual-center PET-CT data was developed with high accuracy and efficiency. The proposed method may assist clinicians in HNC management.
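The correlation and volume-difference analysis above is straightforward to reproduce on any pair of volume lists. The following is a hedged sketch with made-up numbers (not study data) using SciPy's Pearson correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative volumes only - these are NOT values from the study.
gtv_m = np.array([12.4, 35.1, 8.7, 51.0, 22.3])   # manual GTV volumes (cm^3)
gtv_a = np.array([13.6, 33.8, 9.9, 48.5, 24.1])   # automatic GTV volumes (cm^3)

r, p = pearsonr(gtv_m, gtv_a)                      # agreement between GTVa and GTVm
vol_diff_pct = 100.0 * np.abs(gtv_a - gtv_m) / gtv_m
print(f"R = {r:.2f}, p = {p:.3g}, "
      f"median volume difference = {np.median(vol_diff_pct):.1f}%")
```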


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 702
Author(s):  
Nalee Kim ◽  
Jaehee Chun ◽  
Jee Suk Chang ◽  
Chang Geol Lee ◽  
Ki Chang Keum ◽  
...  

This study investigated the feasibility of deep learning-based segmentation (DLS) and continual training for adaptive radiotherapy (RT) of head and neck (H&N) cancer. One hundred patients treated with definitive RT were included. Based on 23 organs-at-risk (OARs) manually segmented on the initial planning computed tomography (CT), a modified FC-DenseNet was trained for DLS: (i) using data from 60 patients, with 20 matched patients in the test set (DLSm); (ii) using data from the same 60 patients, with 20 unmatched patients in the test set (DLSu). Manually contoured OARs on adaptive planning CTs of an independent 20 patients were provided as test sets. Deformable image registration (DIR) was also performed. All 23 OARs were compared using quantitative measurements, and nine OARs were also evaluated via subjective assessment by 26 observers using a Turing test. DLSm achieved better performance than both DLSu and DIR (mean Dice similarity coefficient: 0.83 vs. 0.80 vs. 0.70), mainly for glandular structures, whose volumes significantly reduced during RT. Based on the subjective measurements, DLS was often perceived as human (49.2%). Furthermore, DLSm was preferred over DLSu (67.2%) and DIR (96.7%), with a rate of required revision similar to that of manual segmentation (28.0% vs. 29.7%). In conclusion, DLS was effective and preferred over DIR. Additionally, continual DLS training is required for effective optimization and robustness in personalized adaptive RT.
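The continual-training idea, resuming optimization of the already trained segmentation network on newly contoured adaptive-planning CTs, can be sketched in a few lines of PyTorch. The model, data loader, loss, and hyperparameters below are placeholders, not the study's configuration:

```python
import torch
import torch.nn as nn

def continue_training(model: nn.Module, adaptive_loader, epochs: int = 5):
    """Fine-tune an existing segmentation network on adaptive-planning CTs.
    `adaptive_loader` is a hypothetical DataLoader yielding (ct, oar_labels)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR limits forgetting
    loss_fn = nn.CrossEntropyLoss()                            # placeholder segmentation loss
    model.train()
    for _ in range(epochs):
        for ct, oar_labels in adaptive_loader:                 # newly contoured cases
            optimizer.zero_grad()
            loss = loss_fn(model(ct), oar_labels)
            loss.backward()
            optimizer.step()
    return model
```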


2020 ◽  
Vol 22 (Supplement_3) ◽  
pp. iii359-iii359
Author(s):  
Lydia Tam ◽  
Edward Lee ◽  
Michelle Han ◽  
Jason Wright ◽  
Leo Chen ◽  
...  

Abstract BACKGROUND Brain tumors are the most common solid malignancies in childhood, many of which develop in the posterior fossa (PF). Manual tumor measurements are frequently required to optimize registration into surgical navigation systems or for surveillance of nonresectable tumors after therapy. With recent advances in artificial intelligence (AI), automated MRI-based tumor segmentation is now feasible without requiring manual measurements. Our goal was to create a deep learning model for automated PF tumor segmentation that can register into navigation systems and provide volume output. METHODS 720 pre-surgical MRI scans from five pediatric centers were divided into training, validation, and testing datasets. The study cohort comprised four PF tumor types: medulloblastoma, diffuse midline glioma, ependymoma, and brainstem or cerebellar pilocytic astrocytoma. Manual segmentation of the tumors by an attending neuroradiologist served as "ground truth" labels for model training and evaluation. We used a 2D U-Net, an encoder-decoder convolutional neural network architecture, with a pre-trained ResNet50 encoder. We assessed tumor segmentation accuracy on a held-out test set using the Dice similarity coefficient (0-1) and compared tumor volume calculation between manual and model-derived segmentations using linear regression. RESULTS Compared to the ground-truth expert human segmentation, the overall Dice score for model accuracy was 0.83 for automatic delineation of the four tumor types. CONCLUSIONS In this multi-institutional study, we present a deep learning algorithm that automatically delineates PF tumors and outputs volumetric information. Our results demonstrate applied AI that is clinically applicable, potentially augmenting radiologists, neuro-oncologists, and neurosurgeons for tumor evaluation, surveillance, and surgical planning.
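A 2D U-Net with a pretrained ResNet50 encoder, as described above, can be assembled with the segmentation_models_pytorch package; this is one common way to build such an architecture and is an assumption, not the authors' implementation:

```python
import torch
import segmentation_models_pytorch as smp

# U-Net decoder on top of an ImageNet-pretrained ResNet50 encoder.
model = smp.Unet(
    encoder_name="resnet50",       # encoder backbone, as named in the abstract
    encoder_weights="imagenet",    # pretrained encoder weights
    in_channels=1,                 # single-channel MRI slice (assumed)
    classes=2,                     # tumor vs. background (assumed)
)

logits = model(torch.randn(1, 1, 256, 256))   # one hypothetical 256x256 slice
print(logits.shape)                           # torch.Size([1, 2, 256, 256])
```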


2021 ◽  
Vol 87 (4) ◽  
pp. 283-293
Author(s):  
Wei Wang ◽  
Yuan Xu ◽  
Yingchao Ren ◽  
Gang Wang

Recently, performance improvements in facade parsing from 3D point clouds have been achieved by designing ever more complex network structures, which consume substantial computing resources and do not take full advantage of prior knowledge of facade structure. Instead, from the perspective of data distribution, we construct a new hierarchical mesh multi-view data domain based on the characteristics of facade objects to fuse deep-learning models with prior knowledge, thereby significantly improving segmentation accuracy. We comprehensively evaluate current mainstream methods on the RueMonge 2014 dataset and demonstrate the superiority of our method. The mean intersection-over-union index on the facade-parsing task reached 76.41%, which is 2.75% higher than the previous best result. In addition, through comparative experiments, we further analyze the reasons for the performance improvement of the proposed method.
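The mean intersection-over-union index reported above is the per-class IoU averaged over all facade classes; a short NumPy sketch (with illustrative label arrays, not the paper's code) follows:

```python
import numpy as np

def mean_iou(pred: np.ndarray, truth: np.ndarray, num_classes: int) -> float:
    """Mean IoU over classes for integer-labeled prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```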


1987 ◽  
Vol 101 (8) ◽  
pp. 819-822 ◽  
Author(s):  
Harbans Lal ◽  
H. C. Madan ◽  
G. S. Kohli ◽  
S. P. S. Yadav

Abstract Serum aliesterase levels were estimated in 38 patients with head and neck cancer. The mean value was significantly lower than in controls. The decrease in activity was greater in patients with ulcerative growths and progressed with advancing stage of cancer. With radiotherapy, a progressive and significant increase in serum aliesterase activity was observed. In patients with non-malignant growths, the activity was comparable with that in controls.


2021 ◽  
Author(s):  
Hoon Ko ◽  
Jimi Huh ◽  
Kyung Won Kim ◽  
Heewon Chung ◽  
Yousun Ko ◽  
...  

BACKGROUND Detection and quantification of intraabdominal free fluid (i.e., ascites) on computed tomography (CT) are essential for identifying emergent or urgent conditions in patients. In an emergency department, automatic detection and quantification of ascites would be beneficial. OBJECTIVE We aimed to develop an artificial intelligence (AI) algorithm for the simultaneous automatic detection and quantification of ascites using a single deep learning model (DLM). METHODS 2D deep learning models (DLMs) based on a deep residual U-Net, U-Net, bi-directional U-Net, and recurrent residual U-Net were developed to segment areas of ascites on abdominopelvic CT. Based on the segmentation results, the DLMs detected ascites by classifying CT images into ascites and non-ascites images. The AI algorithms were trained using 6,337 CT images from 160 subjects (80 with ascites and 80 without) and tested using 1,635 CT images from 40 subjects (20 with ascites and 20 without). Performance was evaluated in terms of the diagnostic accuracy of ascites detection and the segmentation accuracy of ascites areas. Of these DLMs, we propose the AI algorithm with the best performance. RESULTS Segmentation accuracy was highest for the deep residual U-Net, with a mean intersection over union (mIoU) of 0.87, followed by the U-Net, bi-directional U-Net, and recurrent residual U-Net (mIoU values of 0.80, 0.77, and 0.67, respectively). Detection accuracy was also highest for the deep residual U-Net (0.96), followed by the U-Net, bi-directional U-Net, and recurrent residual U-Net (0.90, 0.88, and 0.82, respectively). The deep residual U-Net also achieved high sensitivity (0.96) and high specificity (0.96). CONCLUSIONS We propose a deep residual U-Net-based AI algorithm for the automatic detection and quantification of ascites on abdominopelvic CT scans, which provides excellent performance.
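Deriving both detection and quantification from a single segmentation output, as described above, can be as simple as thresholding the predicted mask size and converting the voxel count to volume. The following is a hedged sketch; the threshold and voxel volume are illustrative, not the study's values:

```python
import numpy as np

def detect_and_quantify(pred_mask: np.ndarray,
                        voxel_volume_ml: float = 0.001,
                        min_voxels: int = 50):
    """Classify a CT image as ascites/non-ascites and estimate fluid volume
    from one predicted binary mask (threshold and spacing are assumptions)."""
    n = int(pred_mask.astype(bool).sum())
    has_ascites = n >= min_voxels          # tiny blobs treated as noise
    volume_ml = n * voxel_volume_ml        # quantification from the same mask
    return has_ascites, volume_ml
```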


Author(s):  
Mostafa El Habib Daho ◽  
Amin Khouani ◽  
Mohammed El Amine Lazouni ◽  
Sidi Ahmed Mahmoudi

2020 ◽  
Vol 10 (4) ◽  
pp. 211 ◽  
Author(s):  
Yong Joon Suh ◽  
Jaewon Jung ◽  
Bum-Joo Cho

Mammography plays an important role in screening for breast cancer in females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model for detecting breast cancer in digital mammograms of various densities and to evaluate its performance against previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. Performance was tested using 301 merged images from 284 subjects and compared with a meta-analysis of 12 previous deep learning studies. The mean area under the receiver-operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 with DenseNet-169 and 0.954 ± 0.020 with EfficientNet-B5. Performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 with DenseNet-169). When patients' age was used as a covariate for malignancy detection, performance changed little (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of DenseNet-169 (87% and 88%, respectively) surpassed the mean values (81% and 82%, respectively) obtained in the meta-analysis. Deep learning can thus screen efficiently for breast cancer in digital mammograms of various densities, with the greatest benefit in breasts of lower parenchymal density.
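The view-merging and AUC evaluation described above can be sketched with NumPy and scikit-learn; the arrays below are synthetic stand-ins for the mammograms and network outputs, not the study's data or pipeline:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic craniocaudal (CC) and mediolateral view images, 4 breasts.
cc = np.random.rand(4, 512, 512)
mlo = np.random.rand(4, 512, 512)
merged = np.concatenate([cc, mlo], axis=2)     # side-by-side 512x1024 merged images

labels = np.array([0, 1, 0, 1])                # hypothetical malignancy ground truth
scores = np.random.rand(4)                     # stand-in for per-image CNN outputs
print(f"AUC = {roc_auc_score(labels, scores):.3f}")
```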

