Deep learning-based auto-segmentation of swallowing and chewing structures

2019 ◽  
Author(s):  
Aditi Iyer ◽  
Maria Thor ◽  
Rabia Haq ◽  
Joseph O. Deasy ◽  
Aditya P. Apte

Abstract

Purpose: Delineating the swallowing and chewing structures in Head and Neck (H&N) CT scans is necessary for radiotherapy treatment (RT) planning to reduce the incidence of radiation-induced dysphagia, trismus, and speech dysfunction. Automating this process would decrease the manual input required and yield reproducible segmentations, but generating accurate segmentations is challenging due to the complex morphology of swallowing and chewing structures and limited soft tissue contrast in CT images.

Methods: We trained deep learning models using 194 H&N CT scans from our institution to segment the masseters (left and right), medial pterygoids (left and right), larynx, and pharyngeal constrictor muscle using DeepLabV3+ with the ResNet-101 backbone. Models were trained in a sequential manner to guide the localization of each structure group based on prior segmentations. Additionally, an ensemble of models was developed using contextual information from three different views (axial, coronal, and sagittal) for robustness to occasional failures of the individual models. Output probability maps were averaged, and voxels were assigned labels corresponding to the class with the highest combined probability.

Results: The median Dice similarity coefficients (DSC) computed on a hold-out set of 24 CT scans were 0.87±0.02 for the masseters, 0.80±0.03 for the medial pterygoids, 0.81±0.04 for the larynx, and 0.69±0.07 for the constrictor muscle. The corresponding 95th percentile Hausdorff distances were 0.32±0.08 cm (masseters), 0.42±0.2 cm (medial pterygoids), 0.53±0.3 cm (larynx), and 0.36±0.15 cm (constrictor muscle). Dose-volume histogram (DVH) metrics previously found to correlate with each toxicity were extracted from manual and auto-generated contours and compared between the two sets of contours to assess clinical utility. Differences in DVH metrics were not statistically significant (p>0.05) for any of the structures. Further, inter-observer variability in contouring was studied in 10 CT scans. Automated segmentations agreed better with each of the observers, measured in terms of DSC, than the observers agreed with each other.

Conclusions: We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT. The resulting segmentations can be included in treatment planning to limit complications following RT for H&N cancer. The segmentation models developed in this work are distributed for research use through the open-source platform CERR, accessible at https://github.com/cerr/CERR.
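The multi-view ensembling described above — averaging the per-class probability maps from the axial, coronal, and sagittal models and assigning each voxel the class with the highest combined probability — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the array names, shapes, and toy values are assumed.

```python
import numpy as np

def ensemble_labels(prob_maps):
    """Average per-class probability maps from several views and
    assign each voxel the class with the highest mean probability.

    prob_maps: list of arrays, each of shape (n_classes, H, W),
    e.g. softmax outputs resampled into a common grid.
    """
    mean_probs = np.mean(prob_maps, axis=0)  # (n_classes, H, W)
    return np.argmax(mean_probs, axis=0)     # (H, W) label map

# Toy example: two classes, a 2x2 slice, three "views"
axial    = np.array([[[0.9, 0.2], [0.4, 0.1]],
                     [[0.1, 0.8], [0.6, 0.9]]])
coronal  = np.array([[[0.8, 0.3], [0.2, 0.2]],
                     [[0.2, 0.7], [0.8, 0.8]]])
sagittal = np.array([[[0.7, 0.4], [0.3, 0.3]],
                     [[0.3, 0.6], [0.7, 0.7]]])
labels = ensemble_labels([axial, coronal, sagittal])  # → [[0, 1], [1, 1]]
```

Because the maps are averaged before the argmax, a single view's spurious high-confidence error can be outvoted by the other two views, which is the stated motivation for the ensemble.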

Author(s):  
Aditi Iyer ◽  
Maria Thor ◽  
Ifeanyirochukwu Onochie ◽  
Jennifer Hesse ◽  
Kaveh Zakeri ◽  
...  

Abstract

Objective: Delineating swallowing and chewing structures aids in radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process.

Approach: CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded architecture was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models combining contextual information from axial, coronal, and sagittal views was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired from February to May 2021.

Main results: Medians and inter-quartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (N=24) were 0.87 (0.85–0.89) for the masseters, 0.80 (0.79–0.81) for the medial pterygoids, 0.81 (0.79–0.84) for the larynx, and 0.69 (0.67–0.71) for the constrictor. In 10 randomly selected scans, auto-segmentations agreed better with each observer (by DSC) than the observers agreed with each other. Prospective analysis showed that most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models.

Significance: We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated their potential for use in treatment planning to limit complications post-RT. To the best of our knowledge, this is the only prospectively validated deep learning-based model for segmenting chewing and swallowing structures in CT. Additionally, the segmentation models have been made open-source to facilitate reproducibility and multi-institutional research.
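Both abstracts above evaluate with the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of the metric (the mask shapes and toy values are assumed for illustration):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 4x4 slice: two 4-voxel masks overlapping in 2 voxels
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:3] = True
auto   = np.zeros((4, 4), dtype=bool); auto[2:4, 1:3] = True
score = dice_coefficient(manual, auto)  # 2*2 / (4+4) = 0.5
```

The same function applied to one observer's contour versus another's gives the inter-observer DSC used as the comparison baseline in both studies.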


Author(s):  
Trung Minh Nguyen ◽  
Thien Huu Nguyen

Previous work on event extraction has mainly focused on predicting event triggers and argument roles, treating entity mentions as provided by human annotators. This is unrealistic, as entity mentions are usually predicted by existing toolkits whose errors can propagate to event trigger and argument role recognition. A few recent studies have addressed this problem by jointly predicting entity mentions, event triggers, and arguments. However, such work is limited to discrete, hand-engineered features to represent contextual information for the individual tasks and their interactions. In this work, we propose a novel model that jointly performs predictions for entity mentions, event triggers, and arguments based on shared hidden representations from deep learning. The experiments demonstrate the benefits of the proposed method, which achieves state-of-the-art performance for event extraction.
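The shared-representation idea can be illustrated with a toy sketch: one hidden encoding of the sentence feeds three task heads, so the tasks are trained against the same features rather than separate discrete feature sets. This is not the authors' architecture; all layer names, dimensions, and random weights below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed): 6 tokens, 8-dim embeddings, 16-dim hidden layer
seq_len, emb_dim, hid_dim = 6, 8, 16
n_entity_tags, n_trigger_tags, n_arg_roles = 5, 4, 3

x = rng.normal(size=(seq_len, emb_dim))   # word embeddings for one sentence

# A single shared hidden representation underlies all three tasks
W_shared = rng.normal(size=(emb_dim, hid_dim))
h = np.tanh(x @ W_shared)                 # (seq_len, hid_dim)

# Per-token heads for entity mentions and event triggers
entity_logits  = h @ rng.normal(size=(hid_dim, n_entity_tags))   # (6, 5)
trigger_logits = h @ rng.normal(size=(hid_dim, n_trigger_tags))  # (6, 4)

# Argument roles score (trigger token, entity token) pairs
pairs = np.concatenate([np.repeat(h, seq_len, axis=0),
                        np.tile(h, (seq_len, 1))], axis=1)       # (36, 32)
arg_logits = pairs @ rng.normal(size=(2 * hid_dim, n_arg_roles)) # (36, 3)
```

Because gradients from all three losses would flow back through `W_shared`, errors and evidence in one task can inform the others — the interaction the abstract contrasts with discrete-feature pipelines.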


10.29007/j56f ◽  
2018 ◽  
Author(s):  
Hooman Esfandiari ◽  
Carolyn Anglin ◽  
Pierre Guy ◽  
Antony J. Hodgson

Pedicle screw fixation is a common yet technically demanding procedure. Due to the proximity of the inserted implant to the spinal column, a misplaced screw can cause neurological injury and subsequent postoperative complications. A common surgical routine starts with preoperative volumetric image acquisition (e.g. computed tomography, CT), on which the surgeon highlights the planned trajectory. This process is generally done manually, which is error-prone and time-consuming.

The primary purpose of this paper is to develop automatic pedicle region localization based on preoperative CTs. This system can automatically annotate CT scans to identify the regions corresponding to the pedicles, providing important information about the anatomical placement of the scan that can be useful for intraoperative implant position assessment (e.g. to initialize the 2D-3D registration). The pedicle localization can also be exploited for preoperative planning.

We designed and evaluated a fully convolutional neural network for the task of pedicle localization. Large training, validation, and testing datasets (5000, 1000, and 1000 images, respectively) were created using a custom data augmentation process that could generate unique vertebral morphologies for each image. The Dice similarity coefficients between the pedicle regions detected by the trained network and the ground truth were 0.85 on the validation data and 0.83 on the test data.

The proposed deep-learning-based algorithm was capable of automatically localizing the regions corresponding to the pedicles in preoperative CT scans. Therefore, a reliable initial guess for the 2D-3D registration process needed for intraoperative implant position assessment can be achieved. This system also has potential use in automating preoperative planning.
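One simple way a localized pedicle region could seed the 2D-3D registration mentioned above is via its centroid in voxel coordinates, used as a crude initial translation estimate. This is a hedged sketch of that idea, not the authors' pipeline; the mask and its geometry are invented for illustration.

```python
import numpy as np

def region_centroid(mask):
    """Centroid (in voxel coordinates) of a binary region mask,
    usable as a rough initial translation guess for 2D-3D registration."""
    coords = np.argwhere(mask)
    if coords.size == 0:
        raise ValueError("empty mask")
    return coords.mean(axis=0)

# Toy 3D "pedicle" mask: a 2x2x2 block of voxels in a 10^3 volume
mask = np.zeros((10, 10, 10), dtype=bool)
mask[3:5, 4:6, 5:7] = True
centroid = region_centroid(mask)  # → array([3.5, 4.5, 5.5])
```

In practice the centroid would be mapped through the scanner's voxel-to-world transform before being handed to an intensity-based registration as its starting pose.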


2003 ◽  
Vol 25 (2) ◽  
pp. 165-169
Author(s):  
Paul R. J. Duffy ◽  
Olivia Lelong

Summary

An archaeological excavation was carried out at Graham Street, Leith, Edinburgh by Glasgow University Archaeological Research Division (GUARD) as part of the Historic Scotland Human Remains Call-off Contract following the discovery of human remains during machine excavation of a foundation trench for a new housing development. Excavation demonstrated that the burial was that of a young adult male who had been interred in a supine position with his head orientated towards the north. Radiocarbon dates obtained from a right tibia suggest the individual died between the 15th and 17th centuries AD. Little contextual information exists in documentary or cartographic sources to supplement this scant physical evidence. Accordingly, it is difficult to further refine the context of burial, although a possible link with a historically attested siege or a plague cannot be discounted.


2020 ◽  
Vol 152 ◽  
pp. S949
Author(s):  
L. Bokhorst ◽  
M.H.F. Savenije ◽  
M.P.W. Intven ◽  
C.A.T. Van den Berg

Author(s):  
Vlad Vasilescu ◽  
Ana Neacsu ◽  
Emilie Chouzenoux ◽  
Jean-Christophe Pesquet ◽  
Corneliu Burileanu

2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires stand as one of the most relevant natural disasters worldwide, all the more so due to the effect of climate change and its impact at various societal and environmental levels. A significant amount of research has addressed this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Notably, computer vision has played a fundamental role: it can be used to extract and combine information from several imaging modalities for fire detection, characterization, and wildfire spread forecasting. In recent years, Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the most impact on fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top performers and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work to evaluate the impact of the architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
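One loss commonly included in segmentation loss comparisons of this kind is the soft Dice loss, which optimizes region overlap directly rather than per-pixel accuracy. The abstract does not name the specific losses it benchmarks, so the sketch below is an assumed, generic example; the toy masks are invented.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation:
    1 - (2 * sum(P*G) + eps) / (sum(P) + sum(G) + eps),
    where pred holds foreground probabilities and target a binary mask."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

target = np.array([[0, 1], [1, 1]], dtype=float)   # 3 fire pixels out of 4
perfect = soft_dice_loss(target, target)           # ≈ 0.0
uniform = soft_dice_loss(np.full((2, 2), 0.5), target)  # higher (worse) loss
```

Because the loss is driven by the overlap ratio, it is less sensitive to the foreground/background imbalance typical of fire images than plain per-pixel cross-entropy, which is one reason such comparisons include it.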


Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Abstract

Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g. blood or inflammation), and lens diseases. An automatic tool for the segmentation of anterior segment eye lesions would greatly improve the efficiency of clinical care. As research on artificial intelligence has progressed in recent years, deep learning models have shown their superiority in image classification and segmentation. The training and evaluation of deep learning models should be based on a large amount of expertly annotated data; however, such data are relatively scarce in the domain of medicine. Herein, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.


Author(s):  
Anne-Sophie van der Post ◽  
Sjoerd Jens ◽  
Frank F. Smithuis ◽  
Miryam C. Obdeijn ◽  
Roelof-Jan Oostra ◽  
...  

Abstract

Objective: The objective of the study is to provide a reference for morphology, homogeneity, and signal intensity of the triangular fibrocartilage complex (TFCC) and TFCC-related MRI features in adolescents.

Materials and methods: Prospectively collected data on asymptomatic participants aged 12–18 years, acquired between June 2015 and November 2017, were retrospectively analyzed. A radiograph was obtained in all participants to determine skeletal age and ulnar variance. A 3-T MRI followed to assess TFCC components and TFCC-related features. A standardized scoring form, based on MRI definitions used in the literature on adults, was used for individual assessment of all participants by four observers. Results per item were expressed as frequencies (percentages) of observations by all observers for all participants combined (n = 92). Inter-observer agreement was determined by the unweighted Fleiss' kappa with 95% confidence intervals (95% CI).

Results: The cohort consisted of 23 asymptomatic adolescents (12 girls and 11 boys). Median age was 13.5 years (range 12.0–17.0). Median ulnar variance was −0.7 mm (range −2.7 to 1.4). Median triangular fibrocartilage (TFC) thickness was 1.4 mm (range 0.1–2.9). Diffuse increased TFC signal intensity not reaching the articular surface was observed in 30 (33%) observations, and a vertical linear increased signal intensity with TFC discontinuation in 19 (20%) observations. Discontinuation between the volar radioulnar ligament and the TFC in the sagittal plane was seen in 23 (25%) observations. The extensor carpi ulnaris was completely dislocated in 10 (11%) observations, more frequently in supinated wrists (p = 0.031). Inter-observer agreement ranged from poor to fair for scoring items on the individual TFCC components.

Conclusion: MRI findings, whether normal variation or asymptomatic abnormality, can be observed in TFCC and TFCC-related features of asymptomatic adolescents. The rather low inter-observer agreement underscores the challenges in interpreting these small structures on MRI. This should be taken into consideration when interpreting clinical MRIs and deciding upon arthroscopy.
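The unweighted Fleiss' kappa used above generalizes inter-rater agreement to more than two observers: it compares the observed pairwise agreement per subject with the agreement expected from the overall category frequencies. A minimal sketch (the rating matrix below is an invented toy example, not study data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from a (subjects x categories) matrix of rating
    counts; every subject must be rated by the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                      # raters per subject
    assert np.all(counts.sum(axis=1) == n)
    p_j = counts.sum(axis=0) / counts.sum()        # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs chance
    return (P_bar - P_e) / (1.0 - P_e)

# Four observers, two subjects, perfect agreement → kappa = 1.0
ratings = np.array([[4, 0],
                    [0, 4]])
kappa = fleiss_kappa(ratings)  # → 1.0
```

Values near 0 indicate chance-level agreement, which is how the "poor to fair" agreement on the individual TFCC scoring items would manifest numerically.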

