A deep learning system can classify primary and metastatic cancers using passenger mutation patterns, but how does it really compare to current pathological diagnosis in a well-designed diagnostic accuracy study?

2021 ◽  
Author(s):  
Wendy A Cooper ◽  
Laveniya Satgunaseelan ◽  
Ruta Gupta

In a recent study published in Nature Communications by Jiao W et al, a deep learning classifier was trained to predict cancer type based on somatic passenger mutations identified by whole genome sequencing (WGS) as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium. The data show that patterns of somatic passenger mutations differ between tumours with different cells of origin. Overall, the system had an accuracy of 91% in a cross-validation setting using the training set, and 88% and 83% using external validation sets of primary and metastatic tumours, respectively. Surprisingly, this is claimed to be twice as accurate as trained pathologists, a comparison based on a 27-year-old reference from 1993, prior to the availability and routine use of immunohistochemistry (IHC) in diagnostic pathology, and therefore not a reflection of current diagnostic standards. We discuss the vital role of pathology in patient care and the importance of using international standards if deep learning methods are to be used in the clinical setting.

2017 ◽  
Author(s):  
Wei Jiao ◽  
Gurnit Atwal ◽  
Paz Polak ◽  
Rosa Karlic ◽  
Edwin Cuppen ◽  
...  

In cancer, the primary tumour's organ of origin and histopathology are the strongest determinants of its clinical behaviour, but in about 3% of cases a cancer patient presents with a metastatic tumour and no obvious primary. Challenges also arise when distinguishing a metastatic recurrence of a previously treated cancer from the emergence of a new one. Here we train a deep learning classifier to predict cancer type based on patterns of somatic passenger mutations detected in whole genome sequencing (WGS) of 2606 tumours representing 24 common cancer types. Our classifier achieves an accuracy of 91% on held-out tumour samples and 82% and 85%, respectively, on independent primary and metastatic samples, roughly double the accuracy of trained pathologists when presented with a metastatic tumour without knowledge of the primary. Surprisingly, adding information on driver mutations reduced classifier accuracy. Our results have immediate clinical applicability, underscoring how patterns of somatic passenger mutations encode the state of the cell of origin, and can inform future strategies to detect the source of cell-free circulating tumour DNA.
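
A minimal sketch of the kind of classifier described above, assuming the WGS data have already been summarised as a fixed-length feature vector per tumour (for example, passenger-mutation counts in genomic bins plus mutation-type spectra). The feature size, layer widths and file names are illustrative assumptions, not the published model.

    # Hedged sketch: multilayer perceptron mapping per-tumour passenger-mutation
    # features to one of 24 cancer types. Layer sizes and inputs are assumptions.
    import numpy as np
    import tensorflow as tf

    NUM_FEATURES = 3000   # assumed: genomic-bin mutation counts + mutation-type spectra
    NUM_CLASSES = 24      # cancer types, as in the study

    def build_tumour_type_classifier():
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(NUM_FEATURES,)),
            tf.keras.layers.Dense(512, activation="relu"),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(256, activation="relu"),
            tf.keras.layers.Dropout(0.5),
            tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # X: (n_tumours, NUM_FEATURES) feature matrix; y: integer cancer-type labels.
    X = np.load("passenger_mutation_features.npy")   # hypothetical file
    y = np.load("cancer_type_labels.npy")            # hypothetical file
    model = build_tumour_type_classifier()
    model.fit(X, y, validation_split=0.2, epochs=50, batch_size=32)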


2020 ◽  
Vol 9 (7) ◽  
pp. 2167
Author(s):  
Ko Eun Kim ◽  
Joon Mo Kim ◽  
Ji Eun Song ◽  
Changwon Kee ◽  
Jong Chul Han ◽  
...  

This study aimed to develop and validate a deep learning system for diagnosing glaucoma using optical coherence tomography (OCT). A training set of 1822 eyes (332 control, 1490 glaucoma) with 7288 OCT images, an internal validation set of 425 eyes (104 control, 321 glaucoma) with 1700 images, and an external validation set of 355 eyes (108 control, 247 glaucoma) with 1420 images were included. Deviation and thickness maps from retinal nerve fiber layer (RNFL) and ganglion cell–inner plexiform layer (GCIPL) analyses were used to develop the deep learning system for glaucoma diagnosis, based on the Visual Geometry Group deep convolutional neural network (VGG-19) model. The diagnostic abilities of deep learning models using different OCT maps were evaluated, and the best model was compared with the diagnoses made by two glaucoma specialists. The glaucoma-diagnostic ability was highest when the deep learning system used the RNFL thickness map alone (area under the receiver operating characteristic curve (AUROC) 0.987), followed by the RNFL deviation map (AUROC 0.974), the GCIPL thickness map (AUROC 0.966), and the GCIPL deviation map (AUROC 0.903). Among the combination sets, the RNFL and GCIPL deviation maps together showed the highest diagnostic ability, with similar results on the external validation dataset. Including the axial length did not significantly affect the diagnostic performance of the deep learning system. The locations of glaucomatous damage showed a generally high level of agreement between the heatmaps and the assessments of the glaucoma specialists, with 90.0% agreement for the RNFL thickness map and 88.0% for the GCIPL thickness map. In conclusion, our deep learning system showed high glaucoma-diagnostic ability using OCT thickness and deviation maps. It also showed detection patterns similar to those of glaucoma specialists, suggesting promise for future clinical application as an interpretable computer-aided diagnostic tool.
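
A minimal sketch of a VGG-19-based classifier for OCT map images of the kind described, assuming the maps have been exported as RGB images into class-labelled folders. The input size, classification head and directory layout are illustrative assumptions, not the published configuration.

    # Hedged sketch: VGG-19 backbone fine-tuned to classify OCT RNFL thickness
    # maps as glaucoma vs. control. All hyperparameters here are assumptions.
    import tensorflow as tf

    base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False   # first stage: train only the classification head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # glaucoma probability
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auroc")])

    # Hypothetical directory of exported map images, one subfolder per class.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "oct_rnfl_thickness_maps/train", image_size=(224, 224),
        batch_size=32, label_mode="binary")
    model.fit(train_ds, epochs=10)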


2021 ◽  
Author(s):  
Seongwon Na ◽  
Yusub Sung ◽  
Yousun Ko ◽  
Youngbin Shin ◽  
Junghyun Lee ◽  
...  

BACKGROUND Despite the dramatic increase in the use of medical imaging in various therapeutic fields of clinical trials, image quality checks are still performed manually by image analysts, which requires substantial manpower and time. OBJECTIVE This study aimed to develop a deep learning model that simultaneously identifies anatomical locations and contrast enhancement on medical images, with validation of its accuracy and clinical effectiveness, to support an automated image quality check. METHODS In this retrospective study, 1,669 computed tomography (CT) images covering five specific anatomical locations were collected from Asan Medical Center and Kangdong Sacred Heart Hospital. To generate the ground truth, two radiologists reviewed the anatomical locations and presence of contrast enhancement in the collected data. A deep learning framework called ImageQC-net (Image Quality Check-network) was developed with transfer learning using an InceptionResNetV2 model. To evaluate clinical effectiveness, the overall accuracy and the time spent on image quality checks were compared between the conventional workflow and ImageQC-net. RESULTS The ImageQC-net body part classification showed excellent performance in both the internal (precision, 100%; recall, 100%; accuracy, 100%) and external validation sets (precision, 99.34%; recall, 99.33%; accuracy, 99.33%). In addition, the contrast-enhancement classification achieved 100% precision, recall, and accuracy in the internal validation set and near-perfect performance in the external set (precision, 99.76%; recall, 99.79%; accuracy, 99.78%). When the best-performing models were integrated, the overall accuracy was 99.1%. For clinical effectiveness, the time reduction achieved with artificial intelligence (AI)-aided quality checks was statistically significant for both analysts 1 and 2 (49.7% and 48.3% decreases, respectively; p < 0.001). CONCLUSIONS Comprehensive AI techniques to identify body parts and contrast enhancement on CT images are highly accurate and can significantly reduce the time spent on image quality checks.
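
A minimal sketch of an ImageQC-net-style model, assuming an InceptionResNetV2 backbone with two heads: one for the five anatomical locations and one for contrast enhancement. The head design, input size and label names are illustrative assumptions, not the published architecture.

    # Hedged sketch: multi-task transfer learning for CT quality checks.
    import tensorflow as tf

    backbone = tf.keras.applications.InceptionResNetV2(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))

    inputs = tf.keras.Input(shape=(299, 299, 3))
    features = tf.keras.layers.GlobalAveragePooling2D()(backbone(inputs))
    body_part = tf.keras.layers.Dense(5, activation="softmax",
                                      name="body_part")(features)   # 5 locations
    contrast = tf.keras.layers.Dense(1, activation="sigmoid",
                                     name="contrast")(features)     # enhanced or not

    model = tf.keras.Model(inputs, [body_part, contrast])
    model.compile(optimizer="adam",
                  loss={"body_part": "sparse_categorical_crossentropy",
                        "contrast": "binary_crossentropy"},
                  metrics={"body_part": "accuracy", "contrast": "accuracy"})
    # model.fit(ct_slices, {"body_part": location_labels, "contrast": contrast_labels})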


Face recognition plays a vital role in security applications. In recent years, researchers have focused on challenges such as pose and illumination variation in face recognition. Traditional face recognition methods rely on OpenCV's Fisherfaces, which analyse facial expressions and attributes. The deep learning method used in this proposed system is a Convolutional Neural Network (CNN). The proposed work includes the following modules: [1] face detection, [2] gender recognition, and [3] age prediction. The results obtained from this work show that real-time age and gender detection using a CNN provides better accuracy than other existing approaches.
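
A minimal sketch of such a pipeline, assuming OpenCV's bundled Haar cascade for face detection and a small CNN with separate gender and age outputs. The network architecture and the use of eight age bins are illustrative assumptions, not the system described in the abstract.

    # Hedged sketch: face detection with OpenCV, then a two-head CNN for
    # gender recognition and age prediction. Architecture is an assumption.
    import cv2
    import tensorflow as tf

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    inputs = tf.keras.Input(shape=(64, 64, 3))
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    gender = tf.keras.layers.Dense(1, activation="sigmoid", name="gender")(x)
    age = tf.keras.layers.Dense(8, activation="softmax", name="age_group")(x)  # assumed 8 bins
    model = tf.keras.Model(inputs, [gender, age])
    model.compile(optimizer="adam",
                  loss={"gender": "binary_crossentropy",
                        "age_group": "sparse_categorical_crossentropy"})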


2020 ◽  
pp. bjophthalmol-2020-317825
Author(s):  
Yonghao Li ◽  
Weibo Feng ◽  
Xiujuan Zhao ◽  
Bingqian Liu ◽  
Yan Zhang ◽  
...  

Background/aims: To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in high myopia patients based on optical coherence tomography (OCT) macular images. Methods: In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResNetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal Loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden Index. Results: In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or even better than those of retina specialists as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps. Conclusions: We used OCT macular images for the development of CNN models to identify vision-threatening conditions in high myopia patients. Our models achieved reliable sensitivities and high specificities, comparable to those of retina specialists, and may be applied for large-scale high myopia screening and patient follow-up.
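
A minimal sketch of the two methodological details mentioned above: a binary focal loss to address class imbalance, and a Youden-Index operating threshold chosen from the ROC curve. The gamma and alpha values are illustrative assumptions, not those used in the study.

    # Hedged sketch: binary focal loss and Youden-Index threshold selection.
    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import roc_curve

    def binary_focal_loss(gamma=2.0, alpha=0.25):   # assumed default parameters
        def loss(y_true, y_pred):
            y_true = tf.cast(y_true, tf.float32)
            y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
            p_t = tf.where(tf.equal(y_true, 1.0), y_pred, 1.0 - y_pred)
            alpha_t = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
            # down-weights well-classified examples by (1 - p_t)^gamma
            return -tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))
        return loss

    def youden_threshold(y_true, y_score):
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        return thresholds[np.argmax(tpr - fpr)]   # maximises sensitivity + specificity - 1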


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm’s performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results obtained using 120,000 frames exhibited 93% accuracy. The separate CE case exhibited substantial agreement between the deep learning algorithm scores and clinicians’ assessments (Cohen’s kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively, p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
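
A minimal sketch of how a per-case cleansing score could be derived from frame-level predictions as described: each frame receives a 1 to 5 score from the model, the case score is the mean across frames, and the reported 2.95 cut-off flags clinically adequate preparation. The model variable and frame preprocessing are illustrative assumptions.

    # Hedged sketch: aggregate frame-level 5-point predictions into a case score.
    import numpy as np

    ADEQUACY_CUTOFF = 2.95   # cut-off reported in the study

    def case_cleansing_score(model, frames):
        """frames: preprocessed CE frames, shape (n_frames, H, W, 3)."""
        probs = model.predict(frames)               # (n_frames, 5) class probabilities
        frame_scores = probs.argmax(axis=1) + 1     # map class index 0-4 to score 1-5
        return float(frame_scores.mean())

    def is_adequately_prepared(score):
        return score >= ADEQUACY_CUTOFF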


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on frozen tissue sections from intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based, were compared to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models (0.897 and 0.919, respectively), while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results support the feasibility of transfer learning to enhance model performance on frozen section datasets with limited numbers of samples.
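
A minimal sketch of the three initialisations compared above: random (scratch), ImageNet, and weights from a model previously trained on CAMELYON16 patches, each fine-tuned on the frozen-section patches. The backbone choice, patch size and checkpoint path are illustrative assumptions, not the published setup.

    # Hedged sketch: patch-level metastasis classifier with three initialisations.
    import tensorflow as tf

    def build_patch_classifier(init="camelyon16",
                               camelyon16_ckpt="camelyon16_pretrained.h5"):  # hypothetical path
        weights = "imagenet" if init == "imagenet" else None   # None = scratch
        backbone = tf.keras.applications.ResNet50(
            weights=weights, include_top=False,
            input_shape=(256, 256, 3), pooling="avg")
        model = tf.keras.Sequential([
            backbone,
            tf.keras.layers.Dense(1, activation="sigmoid"),   # metastasis probability per patch
        ])
        if init == "camelyon16":
            # hypothetical checkpoint: same architecture trained on CAMELYON16 patches
            model.load_weights(camelyon16_ckpt)
        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy",
                      metrics=[tf.keras.metrics.AUC()])
        return model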


2020 ◽  
Vol 101 ◽  
pp. 209
Author(s):  
R. Baskaran ◽  
B. Ajay Rajasekaran ◽  
V. Rajinikanth

2021 ◽  
pp. 136943322098663
Author(s):  
Diana Andrushia A ◽  
Anand N ◽  
Eva Lubloy ◽  
Prince Arulraj G

Health monitoring of concrete, including the detection of defects such as cracking and spalling on fire-affected concrete structures, plays a vital role in the maintenance of reinforced cement concrete structures. However, this process mostly relies on human inspection and the subjective knowledge of the inspectors. To overcome this limitation, a deep learning based automatic crack detection method is proposed. Deep learning is an active strategy in the computer vision field. The proposed method uses a U-Net architecture with an encoder and decoder framework and performs pixel-wise classification to detect thermal cracks accurately. A Binary Cross Entropy (BCE) based loss function is selected as the evaluation function. The trained U-Net is capable of detecting both major and minor thermal cracks under various heating durations. The proposed U-Net crack detection is a novel method for detecting the thermal cracks that develop on fire-exposed concrete structures. Compared with other state-of-the-art methods, the proposed method is found to be accurate, with an Intersection over Union (IoU) of 78.12%.
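
A minimal sketch of a small U-Net-style encoder-decoder for pixel-wise crack segmentation trained with binary cross-entropy, as described. The depth, filter counts and input size are illustrative assumptions, not the paper's configuration.

    # Hedged sketch: compact U-Net-style segmentation network with skip connections,
    # BCE loss and an IoU metric for the crack class.
    import tensorflow as tf

    def conv_block(x, filters):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    inputs = tf.keras.Input(shape=(256, 256, 3))
    c1 = conv_block(inputs, 32)
    p1 = tf.keras.layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = tf.keras.layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                         # bottleneck
    u2 = tf.keras.layers.UpSampling2D()(b)
    c3 = conv_block(tf.keras.layers.Concatenate()([u2, c2]), 64)    # skip connection
    u1 = tf.keras.layers.UpSampling2D()(c3)
    c4 = conv_block(tf.keras.layers.Concatenate()([u1, c1]), 32)    # skip connection
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(c4)  # crack mask

    unet = tf.keras.Model(inputs, outputs)
    unet.compile(optimizer="adam", loss="binary_crossentropy",
                 metrics=[tf.keras.metrics.BinaryIoU(target_class_ids=[1])])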

