Deep learning-based GTV contouring modeling inter- and intra-observer variability in sarcomas

Author(s):  
Thibault Marin ◽  
Yue Zhuo ◽  
Rita Maria Lahoud ◽  
Fei Tian ◽  
Xiaoyue Ma ◽  
...  


2020 ◽  
Vol 10 (4) ◽  
pp. 141
Author(s):  
Rasoul Sali ◽  
Nazanin Moradinasab ◽  
Shan Guleria ◽  
Lubaina Ehsan ◽  
Philip Fernandes ◽  
...  

The gold standard of histopathology for the diagnosis of Barrett’s esophagus (BE) is hindered by inter-observer variability among gastrointestinal pathologists. Deep learning-based approaches have shown promising results in the analysis of whole-slide tissue histopathology images (WSIs). We performed a comparative study to elucidate the characteristics and behaviors of different deep learning-based feature representation approaches for the WSI-based diagnosis of diseased esophageal architectures, namely, dysplastic and non-dysplastic BE. The results showed that if appropriate settings are chosen, the unsupervised feature representation approach is capable of extracting more relevant image features from WSIs to classify and locate the precursors of esophageal cancer compared to weakly supervised and fully supervised approaches.


2020 ◽  
Author(s):  
Jennifer P. Kieselmann ◽  
Clifton D. Fuller ◽  
Oliver J. Gurney-Champion ◽  
Uwe Oelfke

Abstract Adaptive online MRI-guided radiotherapy of head and neck cancer requires reliable segmentation of the parotid glands, important organs at risk, in clinically acceptable time frames. This can hardly be achieved by manual contouring. We therefore designed deep learning-based algorithms which perform this task automatically. Imaging data comprised two datasets: 27 patient MR images (T1-weighted and T2-weighted) and eight healthy volunteer MR images (T2-weighted), together with contours manually drawn by an expert. We used four different convolutional neural network (CNN) designs that each processed the data differently, varying the dimensionality of the input. We assessed segmentation accuracy by calculating the Dice similarity coefficient (DSC), Hausdorff distance (HD) and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the developed methods against the inter-observer variability and against atlas-based segmentation. Additionally, we assessed the generalisability, strengths and limitations of deep learning-based compared to atlas-based methods on the independent volunteer test dataset. With a mean DSC of 0.85 ± 0.11 and mean MSD of 1.82 ± 1.94 mm, a 2D CNN achieved an accuracy comparable to that of an atlas-based method (DSC: 0.85 ± 0.05, MSD: 1.67 ± 1.21 mm) and to the inter-observer variability (DSC: 0.84 ± 0.06, MSD: 1.50 ± 0.77 mm), while being considerably faster (<1 s vs. 45 min). Adding information (adjacent slices, fully 3D or multi-modality input) did not further improve the accuracy. With additional preprocessing steps, the 2D CNN generalised well to the fully independent volunteer dataset (DSC: 0.79 ± 0.10, MSD: 1.72 ± 0.96 mm). We demonstrated the enormous potential of CNNs for segmenting the parotid glands for online MRI-guided radiotherapy. The short computation times render deep learning-based methods suitable for online treatment planning workflows.
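The DSC and MSD used above to compare manual and auto-generated contours can be sketched for binary masks as follows. This is a minimal illustration on hypothetical toy masks, not the authors' evaluation code; the function names and the toy squares are our own.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=1.0):
    """Symmetric mean surface distance between two boolean masks."""
    # Surface voxels: mask minus its morphological erosion
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    # Average the surface-to-surface distances in both directions
    return np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]]).mean()

# Toy example: two overlapping 10x10 squares on a 2D grid
manual = np.zeros((20, 20), dtype=bool)
auto = np.zeros((20, 20), dtype=bool)
manual[5:15, 5:15] = True
auto[6:16, 6:16] = True
print(round(dice_coefficient(manual, auto), 3))  # → 0.81
```

The `spacing` argument stands in for the voxel size, so that distances come out in millimetres as in the abstract.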


2021 ◽  
Vol 7 (2) ◽  
pp. 879-882
Author(s):  
Elmer Jeto Gomes Ataide ◽  
Shubham Agrawal ◽  
Aishwarya Jauhari ◽  
Axel Boese ◽  
Alfredol Illanes ◽  
...  

Abstract Ultrasound (US) imaging is used as a preliminary diagnostic tool for the detection, risk stratification and classification of thyroid nodules. In order to risk-stratify nodules in US images, physicians first need to detect them effectively. This process is affected by inter-observer and intra-observer variability and subjectivity. Computer-aided diagnostic tools are a step in the right direction towards reducing subjectivity and observer variability. Several segmentation techniques have been proposed; of these, deep learning techniques have yielded promising results. This work presents a comparison between four state-of-the-art (SOTA) deep learning segmentation algorithms (UNet, SUMNet, ResUNet and Attention UNet). Each network was trained on the same dataset, and the results were compared using performance metrics such as accuracy, Dice coefficient and Intersection over Union (IoU) to determine the most effective for thyroid nodule segmentation in US images. ResUNet performed best, with an accuracy of 89.2%, a Dice coefficient of 0.857 and an IoU of 0.767. The aim is to use the trained algorithm in the development of a computer-aided diagnostic system for the detection, risk stratification and classification of thyroid nodules in US images, reducing subjectivity and observer variability.
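The accuracy and IoU metrics that rank the four networks above can be sketched like this; the masks are hypothetical toy data and the helper names are our own, not part of the study.

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

def pixel_accuracy(pred, target):
    """Fraction of pixels labelled identically in both masks."""
    return np.mean(pred == target)

# Toy nodule masks on a 16x16 grid: two 8x8 squares offset by one pixel
target = np.zeros((16, 16), dtype=bool)
pred = np.zeros((16, 16), dtype=bool)
target[4:12, 4:12] = True
pred[5:13, 5:13] = True
print(iou(pred, target))           # intersection 49 / union 79
print(pixel_accuracy(pred, target))  # 226 of 256 pixels agree
```

Note that accuracy counts background pixels, so it tends to look flattering for small structures like nodules; IoU and Dice are the stricter overlap measures.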


2020 ◽  
Vol 10 (17) ◽  
pp. 5984
Author(s):  
Rasha M. Al-Eidan ◽  
Hend Al-Khalifa ◽  
AbdulMalik Al-Salman

Traditional standards employed for pain assessment have many limitations, one of which is reliability, owing to inter-observer variability. Therefore, there have been many approaches to automate the task of pain recognition. Recently, deep-learning methods have emerged to address challenges such as feature selection and small datasets. This study provides a systematic review of pain-recognition systems based on deep-learning models from the last two years. Furthermore, it presents the major deep-learning methods used in the reviewed papers. Finally, it discusses the challenges and open issues.


2021 ◽  
Vol 23 (1) ◽  
Author(s):  
Ricardo A. Gonzales ◽  
Felicia Seemann ◽  
Jérôme Lamy ◽  
Hamid Mojibian ◽  
Dan Atar ◽  
...  

Abstract
Background: Mitral annular plane systolic excursion (MAPSE) and left ventricular (LV) early diastolic velocity (e') are key metrics of systolic and diastolic function, but are not often measured by cardiovascular magnetic resonance (CMR). Their derivation is possible with manual, precise annotation of the mitral valve (MV) insertion points along the cardiac cycle in both two- and four-chamber long-axis cines, but this process is highly time-consuming, laborious, and prone to errors. A fully automated, consistent, fast, and accurate method for MV plane tracking is lacking. In this study, we propose MVnet, a deep learning approach for MV point localization and tracking capable of deriving such clinical metrics with human expert-level performance, and validate it in a multi-vendor, multi-center clinical population.
Methods: The proposed pipeline first performs a coarse MV point annotation in a given cine, accurately enough to apply an automated linear transformation that standardizes the size, cropping, resolution, and heart orientation, and second, tracks the MV points with high accuracy. The model was trained and evaluated on 38,854 cine images from 703 patients with diverse cardiovascular conditions, scanned on equipment from 3 main vendors, 16 centers, and 7 countries, and manually annotated by 10 observers. Agreement was assessed by the intra-class correlation coefficient (ICC) for both clinical metrics and by the distance error in the MV plane displacement. For the inter-observer variability analysis, an additional pair of observers performed manual annotations in a randomly chosen set of 50 patients.
Results: MVnet achieved fast segmentation (<1 s/cine) with excellent ICCs of 0.94 (MAPSE) and 0.93 (LV e') and a MV plane tracking error of −0.10 ± 0.97 mm. Similarly, the inter-observer variability analysis yielded ICCs of 0.95 and 0.89 and a tracking error of −0.15 ± 1.18 mm, respectively.
Conclusion: A dual-stage deep learning approach for automated annotation of MV points for systolic and diastolic evaluation in CMR long-axis cine images was developed. The method is able to track these points accurately and in a timely manner. This will improve the feasibility of CMR methods which rely on valve tracking and increase their utility in a clinical setting.


2021 ◽  
Author(s):  
Sekeun Kim ◽  
Hyung-Bok Park ◽  
Jaeik Jeon ◽  
Reza Arsanjani ◽  
Ran Heo ◽  
...  

Abstract
Objectives: We aimed to compare the segmentation performance of current prominent deep learning (DL) algorithms against ground-truth segmentations, and to validate the reproducibility of the manually created 2D echocardiographic four-cardiac-chamber ground-truth annotation.
Background: Recently emerged DL-based fully automated chamber segmentation and function assessment methods have shown great potential for future application in aiding image acquisition, quantification, and suggestion for diagnosis. However, the performances of current DL algorithms have not previously been compared with each other. In addition, the reproducibility of the ground-truth annotations on which these algorithms are based has not yet been fully validated.
Methods: We retrospectively enrolled 500 consecutive patients who underwent transthoracic echocardiography (TTE) from December 2019 to December 2020. Simple U-net, Res-U-net, and Dense-U-net algorithms were compared for segmentation performance, and clinical indices such as left atrial volume (LAV), left ventricular end-diastolic volume (LVEDV), LV end-systolic volume (LVESV), LV mass, and ejection fraction (EF) were evaluated. The inter- and intra-observer variability analysis was performed by two expert sonographers for a randomly selected echocardiographic view in 100 patients (apical 2-chamber, apical 4-chamber, and parasternal short-axis views).
Results: The overall performance of all DL methods was excellent (average Dice similarity coefficient (DSC) 0.91 to 0.95 and average Intersection over Union (IoU) 0.83 to 0.90), with the exception of the LV wall area on the parasternal short-axis view (average DSC 0.83, IoU 0.72). In addition, there were no significant differences in clinical indices between ground-truth and automated DL measurements. In the inter- and intra-observer variability analysis, the overall intra-observer reproducibility was excellent: LAV (ICC = 0.995), LVEDV (ICC = 0.996), LVESV (ICC = 0.997), LV mass (ICC = 0.991) and EF (ICC = 0.984). The inter-observer reproducibility was slightly lower than the intra-observer agreement: LAV (ICC = 0.976), LVEDV (ICC = 0.982), LVESV (ICC = 0.970), LV mass (ICC = 0.971), and EF (ICC = 0.899).
Conclusions: The three current prominent DL-based fully automated methods are able to reliably perform four-chamber segmentation and quantification of clinical indices. Furthermore, we were able to validate the four-cardiac-chamber ground-truth annotation and demonstrate overall excellent reproducibility, though still with some degree of inter-observer variability.
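The intra-class correlation coefficients used in the reproducibility analyses above can be sketched with a one-way random-effects ICC(1,1); note this is only one of several ICC variants, and the observer measurements below are hypothetical toy numbers, not data from the study.

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).

    ratings: (n_subjects, k_raters) array of measurements.
    """
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # Between-subject and within-subject mean squares (one-way ANOVA)
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical LV volumes (mL) measured by two observers in five patients
obs = np.array([
    [110.0, 112.0],
    [ 95.0,  97.0],
    [130.0, 128.0],
    [ 88.0,  90.0],
    [142.0, 141.0],
])
print(round(icc_oneway(obs), 3))
```

Because the two observers differ by only 1 to 2 mL on subjects spanning roughly 90 to 140 mL, the ICC here comes out close to 1, mirroring the excellent agreement reported above.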


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yiyu Hong ◽  
You Jeong Heo ◽  
Binnari Kim ◽  
Donghwan Lee ◽  
Soomin Ahn ◽  
...  

Abstract The tumor–stroma ratio (TSR) determined by pathologists is subject to intra- and inter-observer variability. We aimed to develop a computational method to quantify TSR using deep learning-based virtual cytokeratin staining algorithms. A total of 373 patients with advanced (stage III [n = 171] and IV [n = 202]) gastric cancer were analyzed for TSR. Moderate agreement was observed between the deep learning metric (dTSR) and visual measurement by pathologists (vTSR), with a kappa value of 0.623 and an area under the receiver operating characteristic curve of 0.907. Moreover, dTSR was significantly associated with the overall survival of the patients (P = 0.0024). In conclusion, we developed a virtual cytokeratin staining and deep learning-based TSR measurement method, which may aid in the diagnosis of TSR in gastric cancer.
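The kappa statistic quantifying dTSR–vTSR agreement above is Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal label frequencies. A minimal sketch on hypothetical stroma-rich/stroma-poor calls (the labels below are invented, not from the study):

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two raters on categorical labels."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    categories = np.union1d(labels_a, labels_b)
    # Observed agreement: fraction of items with identical labels
    p_observed = np.mean(labels_a == labels_b)
    # Expected agreement under independent marginal distributions
    p_expected = sum(
        np.mean(labels_a == c) * np.mean(labels_b == c) for c in categories
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical stroma-rich (1) vs stroma-poor (0) calls on ten tumours
dtsr = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
vtsr = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(round(cohens_kappa(dtsr, vtsr), 3))  # → 0.583
```

Here eight of ten calls agree (p_observed = 0.8), but chance agreement is already 0.52, so kappa lands near the "moderate" band, as with the 0.623 reported above.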

