Deep learning-based virtual cytokeratin staining of gastric carcinomas to measure tumor–stroma ratio

2021 ◽ Vol 11 (1)
Author(s): Yiyu Hong, You Jeong Heo, Binnari Kim, Donghwan Lee, Soomin Ahn, ...

Abstract: The tumor–stroma ratio (TSR) determined by pathologists is subject to intra- and inter-observer variability. We aimed to develop a computational method of quantifying TSR using deep learning-based virtual cytokeratin staining algorithms. A total of 373 patients with advanced (stage III [n = 171] and IV [n = 202]) gastric cancer were analyzed for TSR. Moderate agreement was observed between the deep learning metric (dTSR) and visual measurement by pathologists (vTSR), with a kappa value of 0.623 and an area under the receiver operating characteristic curve of 0.907. Moreover, dTSR was significantly associated with the overall survival of the patients (P = 0.0024). In conclusion, we developed a virtual cytokeratin staining and deep learning-based TSR measurement, which may aid in the assessment of TSR in gastric cancer.


2020
Author(s): Brian J. Park, Vlasios S. Sotirchos, Jason Adleberg, S. William Stavropoulos, Tessa S. Cook, ...

Abstract
Purpose: This study assesses the feasibility of deep learning detection and classification of 3 retrievable inferior vena cava filters with similar radiographic appearances and emphasizes the importance of visualization methods to confirm proper detection and classification.
Materials and Methods: The fast.ai library with ResNet-34 architecture was used to train a deep learning classification model. A total of 442 fluoroscopic images (N = 144 patients) from inferior vena cava filter placement or removal were collected. Following image preprocessing, the training set included 382 images (110 Celect, 149 Denali, 123 Günther Tulip), of which 80% were used for training and 20% for validation. Data augmentation was performed for regularization. A random test set of 60 images (20 images of each filter type), not included in the training or validation set, was used for evaluation. Total accuracy and area under the receiver operating characteristic curve were used to evaluate performance. Feature heatmaps were visualized using guided backpropagation and gradient-weighted class activation mapping.
Results: The overall accuracy was 80.2% with a mean area under the receiver operating characteristic curve of 0.96 for the validation set (N = 76), and 85.0% with a mean area under the curve of 0.94 for the test set (N = 60). Two visualization methods were used to assess correct filter detection and classification.
Conclusions: A deep learning model can be used to automatically detect and accurately classify inferior vena cava filters on radiographic images. Visualization techniques should be utilized to ensure deep learning models function as intended.
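The area under the receiver operating characteristic curve reported above can be computed directly from scores and labels via its rank interpretation: the probability that a random positive outscores a random negative. The sketch below does this one-vs-rest for a single filter class; the scores and labels are invented for illustration and are not the study's data.

```python
# Sketch: AUROC via the Mann-Whitney pairwise-comparison formulation,
# counting ties as half a win.

def auroc(labels, scores):
    """labels: 1 for the positive class, 0 otherwise; scores: model confidence."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up scores for six images of one filter type (1) vs the rest (0).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auroc(labels, scores))  # 8 of 9 positive-negative pairs won -> 0.888...
```

For a three-class problem like the filter classifier, the per-class AUROCs computed this way would then be averaged to give the mean value quoted in the abstract.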


2020 ◽ Vol 10 (4) ◽ pp. 141
Author(s): Rasoul Sali, Nazanin Moradinasab, Shan Guleria, Lubaina Ehsan, Philip Fernandes, ...

The gold standard of histopathology for the diagnosis of Barrett’s esophagus (BE) is hindered by inter-observer variability among gastrointestinal pathologists. Deep learning-based approaches have shown promising results in the analysis of whole-slide tissue histopathology images (WSIs). We performed a comparative study to elucidate the characteristics and behaviors of different deep learning-based feature representation approaches for the WSI-based diagnosis of diseased esophageal architectures, namely, dysplastic and non-dysplastic BE. The results showed that if appropriate settings are chosen, the unsupervised feature representation approach is capable of extracting more relevant image features from WSIs to classify and locate the precursors of esophageal cancer compared to weakly supervised and fully supervised approaches.


2014 ◽ Vol 11 (96) ◽ pp. 20140303
Author(s): E. C. Pegg, B. J. L. Kendrick, H. G. Pandit, H. S. Gill, D. W. Murray

The assessment of radiolucency around an implant is qualitative, poorly defined and has low agreement between clinicians. Accurate and repeatable assessment of radiolucency is essential to prevent misdiagnosis, minimize cases of unnecessary revision, and to correctly monitor and treat patients at risk of loosening and implant failure. The purpose of this study was to examine whether a semi-automated imaging algorithm could improve repeatability and enable quantitative assessment of radiolucency. Six surgeons assessed 38 radiographs of knees after unicompartmental knee arthroplasty for radiolucency, and results were compared with assessments made by the semi-automated program. Large variation was found between the surgeon results, with total agreement in only 9.4% of zones and a kappa value of 0.602; whereas the automated program had total agreement in 81.6% of zones and a kappa value of 0.802. The software had a 'fair to excellent' prediction of the presence or the absence of radiolucency, where the area under the receiver operating characteristic curves was 0.82 on average. The software predicted radiolucency equally well for cemented and cementless implants (p = 0.996). The identification of radiolucency using an automated method is feasible, and these results indicate that it could aid the definition and quantification of radiolucency.
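The kappa values quoted above are Cohen's chance-corrected agreement statistic. As a minimal sketch of the computation, the two rating lists below are invented zone readings, not the study's data.

```python
# Sketch: Cohen's kappa between two raters labelling the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (observed - expected) / (1 - expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labelled independently at their own rates.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented per-zone readings from two observers.
a = ["lucent", "lucent", "none", "none", "none", "lucent", "none", "none"]
b = ["lucent", "none",   "none", "none", "none", "lucent", "none", "lucent"]
print(round(cohens_kappa(a, b), 3))  # 0.75 raw agreement corrects to 0.467
```

Raw percent agreement alone overstates reliability when one label dominates, which is why the study reports kappa rather than agreement percentages alone.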


Author(s): Sushma, Roshny Jacob

Background: Despite the Bethesda system 2001 (TBS 2001) formulating strict guidelines for reporting cervical smears, intra-observer and inter-observer variations are unavoidable and can be considered an inherent part of the reporting system. This variation has implications for the quality of performance of the reporting laboratory and for patient management. Rescreening is a tool to reduce the variations and improve the quality of both the laboratory staff and the laboratory as such. Rescreening by two or more experienced observers has helped in identifying new cases better. The present study aims to rescreen cervical smears by two independent observers, to compare the results of the two observers, and to understand the implications of this variability on the quality of cervical smear reporting.
Methods: 1000 consecutive cervical smears were rescreened by two experienced cytopathologists independently. Their findings were charted and analyzed statistically for the kappa value.
Results: Initial reporting had identified 20 cases of neoplastic nature. The first observer identified, in addition, 6 new cases, and the second observer identified 12 new cases. The inter-observer variability of 6 cases showed a kappa value of 0.89.
Conclusions: Rescreening is a safe way of picking up missed cases. Rescreening by two or more observers is better at identifying new cases. This helps in improving the quality of the reporting personnel and the laboratory, as well as in improving patient care.


2020
Author(s): Jennifer P. Kieselmann, Clifton D. Fuller, Oliver J. Gurney-Champion, Uwe Oelfke

Abstract: Adaptive online MRI-guided radiotherapy of head and neck cancer requires the reliable segmentation of the parotid glands, as important organs at risk, in clinically acceptable time frames. This can hardly be achieved by manual contouring. We therefore designed deep learning-based algorithms which automatically perform this task. Imaging data comprised two datasets: 27 patient MR images (T1-weighted and T2-weighted) and eight healthy volunteer MR images (T2-weighted), together with contours manually drawn by an expert. We used four different convolutional neural network (CNN) designs that each processed the data differently, varying the dimensionality of the input. We assessed segmentation accuracy by calculating the Dice similarity coefficient (DSC), Hausdorff distance (HD) and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the developed methods by comparing to the inter-observer variability and to atlas-based segmentation. Additionally, we assessed the generalisability, strengths and limitations of deep learning-based compared to atlas-based methods in the independent volunteer test dataset. With a mean DSC of 0.85 ± 0.11 and mean MSD of 1.82 ± 1.94 mm, a 2D CNN could achieve an accuracy comparable to that of an atlas-based method (DSC: 0.85 ± 0.05, MSD: 1.67 ± 1.21 mm) and the inter-observer variability (DSC: 0.84 ± 0.06, MSD: 1.50 ± 0.77 mm), but considerably faster (<1 s vs. 45 min). Adding information (adjacent slices, fully 3D or multi-modality) did not further improve the accuracy. With additional preprocessing steps, the 2D CNN was able to generalise well to the fully independent volunteer dataset (DSC: 0.79 ± 0.10, MSD: 1.72 ± 0.96 mm). We demonstrated the enormous potential for the application of CNNs to segment the parotid glands for online MRI-guided radiotherapy. The short computation times render deep learning-based methods suitable for online treatment planning workflows.
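Two of the metrics above, DSC and MSD, can be sketched on toy voxel sets. The sets below are invented coordinates standing in for real contours, and for simplicity every voxel is treated as a surface point, which a real MSD implementation would not do.

```python
# Sketch: Dice similarity coefficient and a simplified mean surface distance
# between two segmentations given as sets of (x, y) voxel coordinates.

def dice(a, b):
    """DSC = 2|A intersect B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def mean_surface_distance(a, b):
    """Symmetric mean of nearest-neighbour Euclidean distances."""
    def one_way(src, dst):
        return sum(min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for u, v in dst)
                   for x, y in src) / len(src)
    return (one_way(a, b) + one_way(b, a)) / 2

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}   # expert contour (toy)
auto   = {(0, 0), (0, 1), (1, 0), (2, 0)}   # CNN output (toy)
print(dice(manual, auto))                    # 2*3 / (4+4) = 0.75
print(mean_surface_distance(manual, auto))   # one stray voxel each way -> 0.25
```

DSC is sensitive to overlap volume while MSD penalises boundary deviations, which is why segmentation studies such as this one report both.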


2020 ◽ Vol 10 (17) ◽ pp. 5984
Author(s): Rasha M. Al-Eidan, Hend Al-Khalifa, AbdulMalik Al-Salman

Traditional standards employed for pain assessment have many limitations. One such limitation is reliability, owing to inter-observer variability. Therefore, there have been many approaches to automate the task of pain recognition. Recently, deep-learning methods have emerged that address challenges such as feature selection and small data sets. This study provides a systematic review of pain-recognition systems based on deep-learning models from the last two years. Furthermore, it presents the major deep-learning methods used in the reviewed papers. Finally, it provides a discussion of the challenges and open issues.


Leukemia ◽ 2021
Author(s): Jan-Niklas Eckardt, Jan Moritz Middeke, Sebastian Riechert, Tim Schmittmann, Anas Shekh Sulaiman, ...

Abstract: The evaluation of bone marrow morphology by experienced hematopathologists is essential in the diagnosis of acute myeloid leukemia (AML); however, it suffers from a lack of standardization and inter-observer variability. Deep learning (DL) can process medical image data and provide data-driven class predictions. Here, we apply a multi-step DL approach to automatically segment cells from bone marrow images, distinguish between AML samples and healthy controls with an area under the receiver operating characteristic curve (AUROC) of 0.9699, and predict the mutation status of Nucleophosmin 1 (NPM1), one of the most common mutations in AML, with an AUROC of 0.92 using only image data from bone marrow smears. Utilizing occlusion sensitivity maps, we observed previously unreported morphologic cell features, such as a pattern of condensed chromatin and perinuclear lightening zones in myeloblasts of NPM1-mutated AML and prominent nucleoli in wild-type NPM1 AML, that enabled the DL model to provide accurate class predictions.
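Occlusion sensitivity maps work by masking one image region at a time and recording how much the model's score drops. The sketch below shows the mechanics on a toy 3x3 "image"; the stand-in scoring function (a fixed weight map) is an assumption for illustration, not the study's classifier.

```python
# Sketch: an occlusion sensitivity map. A patch is slid over the image and
# the drop in the model's score marks the regions the model relies on.

def occlusion_map(image, score_fn, patch=1, fill=0.0):
    """Return a map of score drops when each patch-sized region is occluded."""
    base = score_fn(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * w for _ in range(h)]
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = [row[:] for row in image]   # copy, then blank a patch
            for di in range(patch):
                for dj in range(patch):
                    occluded[i + di][j + dj] = fill
            heat[i][j] = base - score_fn(occluded)  # large drop = important
    return heat

# Stand-in "classifier": only the centre pixel contributes to the score.
weights = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
score = lambda img: sum(w * p for wr, pr in zip(weights, img)
                        for w, p in zip(wr, pr))

image = [[1.0] * 3 for _ in range(3)]
heat = occlusion_map(image, score)
print(heat[1][1])  # occluding the centre removes all of the score: 5.0
```

Applied to a CNN, `score_fn` would be the class probability for the predicted label, and peaks in the resulting heatmap point at features like the chromatin patterns described above.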

