Deep Learning Model Based on Optical Coherence Tomography for Automated Detection of Pathologic Myopia: Prediction Model Development Study (Preprint)

2021 ◽  
Author(s):  
So Jin Park ◽  
Tae Hoon Ko ◽  
Chan Kee Park ◽  
Yong Chan Kim ◽  
In Young Choi

BACKGROUND Pathologic myopia causes vision impairment and blindness, so timely diagnosis is essential. However, there is no standardized definition of pathologic myopia, and its interpretation on optical coherence tomography (OCT) is subjective, time-consuming, and costly. A diagnostic tool that can automatically detect pathologic myopia in a timely manner is therefore needed.

OBJECTIVE The purpose of this study was to develop an algorithm that uses OCT to automatically diagnose patients with pathologic myopia who require treatment.

METHODS This study used data from patients who underwent OCT at the Ophthalmology Departments of Incheon St. Mary's Hospital and Seoul St. Mary's Hospital from January 2012 to May 2020. A deep learning model for the automatic diagnosis of pathologic myopia was developed from 3D OCT images using transfer learning based on four pre-trained convolutional neural networks (ResNet18, ResNeXt50, EfficientNetB0, and EfficientNetB4). The performance of each model was evaluated and compared in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC).

RESULTS The four models were evaluated and compared on the test dataset. The EfficientNetB4-based model showed the best performance (95% accuracy, 93% sensitivity, 96% specificity, and 98% AUROC).

CONCLUSIONS We developed a deep learning model that automatically diagnoses pathologic myopia without segmentation of 3D OCT images. Our EfficientNetB4-based model demonstrated excellent performance in identifying pathologic myopia.
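The evaluation metrics named in the abstract (accuracy, sensitivity, specificity, AUROC) can be computed with a minimal sketch in plain Python. This is illustrative only, not the authors' code; the example labels and scores below are invented:

```python
from typing import List, Tuple

def confusion_counts(labels: List[int], preds: List[int]) -> Tuple[int, int, int, int]:
    """Return (TP, FP, TN, FN) for binary labels and predictions (1 = pathologic myopia)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return tp, fp, tn, fn

def accuracy(labels: List[int], preds: List[int]) -> float:
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

def sensitivity_specificity(labels: List[int], preds: List[int]) -> Tuple[float, float]:
    tp, fp, tn, fn = confusion_counts(labels, preds)
    return tp / (tp + fn), tn / (tn + fp)

def auroc(labels: List[int], scores: List[float]) -> float:
    """AUROC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative case."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Thresholding the model's output score (e.g. at 0.5) yields the binary predictions used for accuracy, sensitivity, and specificity, while AUROC is threshold-free.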

Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 12
Author(s):  
Jose M. Castillo T. ◽  
Muhammad Arif ◽  
Martijn P. A. Starmans ◽  
Wiro J. Niessen ◽  
Chris H. Bangma ◽  
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant-prostate-cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of these methods, evaluation on various external data sets is crucial. While deep-learning and radiomics approaches have been compared on a single-center data set, a comparison of both approaches across data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets: one publicly available patient cohort (n = 195 patients) and one containing data from patients at two hospitals (n = 79 patients). For all patients, mpMRI scans, radiologist tumor delineations, and pathology reports were collected. One of our patient cohorts (n = 271 patients) was used to develop both the deep-learning and radiomics models, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. Model performance was assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91, and 0.65 on the independent test sets, compared to AUCs of 0.70, 0.73, and 0.44 for the deep-learning model.
Our radiomics model, based on delineated regions, proved a more accurate tool for significant-PCa classification in the three unseen test sets than the fully automated deep-learning model.
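A radiomics model of the kind compared above typically computes hand-crafted features from the voxel intensities inside a delineated region of interest and feeds them to a classifier. The following is a minimal, hypothetical sketch of a few first-order intensity features, not the study's pipeline (which is not specified here):

```python
import math
from statistics import mean, pstdev
from collections import Counter
from typing import Dict, List

def first_order_features(roi_intensities: List[float], n_bins: int = 16) -> Dict[str, float]:
    """Illustrative first-order radiomics features computed from the voxel
    intensities inside a delineated region of interest (ROI)."""
    mu = mean(roi_intensities)
    sigma = pstdev(roi_intensities)
    # Population skewness; zero for a symmetric intensity distribution.
    skew = (sum((x - mu) ** 3 for x in roi_intensities)
            / (len(roi_intensities) * sigma ** 3)) if sigma > 0 else 0.0
    # Shannon entropy of a fixed-bin intensity histogram.
    lo, hi = min(roi_intensities), max(roi_intensities)
    width = (hi - lo) / n_bins or 1.0
    bins = Counter(min(int((x - lo) / width), n_bins - 1) for x in roi_intensities)
    n = len(roi_intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in bins.values())
    return {"mean": mu, "std": sigma, "skewness": skew, "entropy": entropy}
```

Such features are extracted per lesion and per mpMRI sequence, then passed to a conventional classifier; the reliance on radiologist delineations is exactly what distinguishes this approach from the fully automated deep-learning model.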


Retina ◽  
2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Gerardo Ledesma-Gil ◽  
Zaixing Mao ◽  
Jonathan Liu ◽  
Richard F. Spaide

2021 ◽  
Author(s):  
Ivan Potapenko ◽  
Mads Kristensen ◽  
Bo Thiesson ◽  
Tomas Ilginis ◽  
Torben Lykke Sørensen ◽  
...  

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Shu-Hui Wang ◽  
Xin-Jun Han ◽  
Jing Du ◽  
Zhen-Chang Wang ◽  
Chunwang Yuan ◽  
...  

Abstract

Background: The imaging features of focal liver lesions (FLLs) are diverse and complex. Diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared the differential diagnosis between the proposed model and radiologists.

Methods: In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrix of the model and individual radiologists were compared. Saliency maps were generated to highlight the activation region based on the model perspective.

Results: The AUCs of the two- and seven-way classifications of the model were 0.969 (95% CI 0.944–0.994) and from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000), respectively. The model accuracy (79.6%) of the seven-way classification was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic errors for the model and individual radiologists for each disease. Saliency maps detected the activation regions associated with each predicted class.

Conclusion: This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI. The analysis principle contributing to the predictions can be explained via saliency maps.
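The accuracy and confusion-matrix comparison between model and readers described above can be sketched in plain Python. This is illustrative only, not the study's code; the lesion categories and predictions below are invented:

```python
from collections import Counter
from typing import Dict, List, Tuple

def confusion_matrix(truth: List[str], preds: List[str]) -> Dict[Tuple[str, str], int]:
    """Count (true class, predicted class) pairs; the off-diagonal cells
    reveal which diagnoses a reader or model confuses with which."""
    return Counter(zip(truth, preds))

def accuracy(truth: List[str], preds: List[str]) -> float:
    """Fraction of lesions assigned their correct class."""
    return sum(t == p for t, p in zip(truth, preds)) / len(truth)
```

Applying the same two functions to the model's predictions and to each radiologist's reads on the same test lesions yields directly comparable accuracies and error patterns, which is the comparison the confusion matrices in the study support.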

