Development and Validation of a Deep Learning System for Diagnosing Glaucoma Using Optical Coherence Tomography

2020 ◽  
Vol 9 (7) ◽  
pp. 2167
Author(s):  
Ko Eun Kim ◽  
Joon Mo Kim ◽  
Ji Eun Song ◽  
Changwon Kee ◽  
Jong Chul Han ◽  
...  

This study aimed to develop and validate a deep learning system for diagnosing glaucoma using optical coherence tomography (OCT). A training set of 1822 eyes (332 control, 1490 glaucoma) with 7288 OCT images, an internal validation set of 425 eyes (104 control, 321 glaucoma) with 1700 images, and an external validation set of 355 eyes (108 control, 247 glaucoma) with 1420 images were included. Deviation and thickness maps from retinal nerve fiber layer (RNFL) and ganglion cell–inner plexiform layer (GCIPL) analyses were used to develop the deep learning system for glaucoma diagnosis based on the visual geometry group deep convolutional neural network (VGG-19) model. The diagnostic abilities of deep learning models using different OCT maps were evaluated, and the best model was compared with the diagnostic results produced by two glaucoma specialists. The glaucoma-diagnostic ability was highest when the deep learning system used the RNFL thickness map alone (area under the receiver operating characteristic curve (AUROC) 0.987), followed by the RNFL deviation map (AUROC 0.974), the GCIPL thickness map (AUROC 0.966), and the GCIPL deviation map (AUROC 0.903). Among the combination sets, use of the RNFL and GCIPL deviation maps showed the highest diagnostic ability, with similar results when tested on the external validation dataset. The inclusion of axial length did not significantly affect the diagnostic performance of the deep learning system. The location of glaucomatous damage showed a generally high level of agreement between the heatmap and the diagnosis of the glaucoma specialists, with 90.0% agreement when using the RNFL thickness map and 88.0% when using the GCIPL thickness map. In conclusion, our deep learning system showed high glaucoma-diagnostic ability using OCT thickness and deviation maps. It also showed detection patterns similar to those of glaucoma specialists, showing promise for future clinical application as an interpretable computer-aided diagnostic tool.
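The classification pipeline described above follows a standard transfer-learning pattern: an ImageNet-pretrained VGG-19 with its final layer replaced by a single glaucoma logit, trained on OCT-derived maps and scored by AUROC. The snippet below is a minimal sketch of that pattern under assumed data loaders yielding (image, label) batches; it is not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# VGG-19 backbone with its 1000-class head replaced by a single glaucoma logit.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 1)
model = model.to(device)

def train_one_epoch(model, loader, optimizer):
    """One pass over (map image, binary label) batches with a BCE objective."""
    criterion = nn.BCEWithLogitsLoss()
    model.train()
    for images, targets in loader:          # images: (B, 3, H, W) thickness/deviation maps
        optimizer.zero_grad()
        logits = model(images.to(device)).squeeze(1)
        loss = criterion(logits, targets.float().to(device))
        loss.backward()
        optimizer.step()

def evaluate_auroc(model, loader):
    """Collect sigmoid scores over a validation loader and compute AUROC."""
    model.eval()
    scores, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            logits = model(images.to(device)).squeeze(1)
            scores.extend(torch.sigmoid(logits).cpu().tolist())
            labels.extend(targets.tolist())
    return roc_auc_score(labels, scores)
```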

2021 ◽  
Author(s):  
Viney Gupta ◽  
Shweta Birla ◽  
Toshit Varshney ◽  
Bindu I Somarajan ◽  
Shikha Gupta ◽  
...  

Abstract Objective: To predict the presence of Angle Dysgenesis on Anterior Segment Optical Coherence Tomography (ADoA) using deep learning and to correlate ADoA with mutations in known glaucoma genes. Design: A cross-sectional observational study. Participants: Eight hundred high-definition anterior segment optical coherence tomography (ASOCT) B-scans were included, of which 340 images (one scan per eye) were used to build the machine learning (ML) model and the rest were used for validation of ADoA. Of the 340 images, 170 scans included PCG (n=27), JOAG (n=86) and POAG (n=57) eyes, and the rest were controls. The genetic validation dataset consisted of another 393 images of patients with known mutations compared with 320 images of healthy controls. Methods: ADoA was defined as the absence of Schlemm's canal (SC), the presence of extensive hyper-reflectivity over the region of the trabecular meshwork, or a hyper-reflective membrane (HM) over the region of the trabecular meshwork. Deep learning was used to classify a given ASOCT image as either having angle dysgenesis or not. ADoA was then specifically looked for on ASOCT images of patients with mutations in the known glaucoma genes (MYOC, CYP1B1, FOXC1 and LTBP2). Main Outcome Measures: Use of deep learning to identify ADoA in patients with known gene mutations. Results: Our three optimized deep learning models showed an accuracy >95%, specificity >97% and sensitivity >96% in detecting angle dysgenesis on ASOCT in the internal test dataset. The areas under the receiver operating characteristic curve (AUROC), based on the external validation cohort, were 0.91 (95% CI, 0.88 to 0.95), 0.80 (95% CI, 0.75 to 0.86) and 0.86 (95% CI, 0.80 to 0.91) for the three models. Among the patients with known gene mutations, ADoA was observed in all patients with MYOC mutations, as well as in those with CYP1B1, FOXC1 and LTBP2 mutations, compared with only 5% of healthy controls (with no glaucoma mutations). Conclusions: Three deep learning models were developed for a consensus-based outcome to objectively identify ADoA among glaucoma patients. All patients with MYOC mutations had ADoA as predicted by the models.
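As an illustration of the consensus-based outcome and the confidence intervals reported above, here is a minimal sketch. It assumes each of the three models exposes a per-scan ADoA probability and that the intervals come from a percentile bootstrap; both are assumptions for illustration, not details stated by the authors.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def consensus_adoa(probabilities, threshold=0.5):
    """Majority vote over three per-model ADoA probabilities for one ASOCT scan."""
    votes = np.asarray(probabilities) >= threshold
    return int(votes.sum()) >= 2            # call ADoA if at least 2 of 3 models agree

def bootstrap_auroc_ci(y_true, y_score, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for AUROC on an external validation set."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return tuple(np.percentile(aucs, [2.5, 97.5]))
```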


2020 ◽  
pp. bjophthalmol-2020-317825
Author(s):  
Yonghao Li ◽  
Weibo Feng ◽  
Xiujuan Zhao ◽  
Bingqian Liu ◽  
Yan Zhang ◽  
...  

Background/aims: To apply deep learning technology to develop an artificial intelligence (AI) system that can identify vision-threatening conditions in high myopia patients based on optical coherence tomography (OCT) macular images. Methods: In this cross-sectional, prospective study, a total of 5505 qualified OCT macular images obtained from 1048 high myopia patients admitted to Zhongshan Ophthalmic Centre (ZOC) from 2012 to 2017 were selected for the development of the AI system. The independent test dataset included 412 images obtained from 91 high myopia patients recruited at ZOC from January 2019 to May 2019. We adopted the InceptionResnetV2 architecture to train four independent convolutional neural network (CNN) models to identify the following four vision-threatening conditions in high myopia: retinoschisis, macular hole, retinal detachment and pathological myopic choroidal neovascularisation. Focal loss was used to address class imbalance, and optimal operating thresholds were determined according to the Youden index. Results: In the independent test dataset, the areas under the receiver operating characteristic curves were high for all conditions (0.961 to 0.999). Our AI system achieved sensitivities equal to or better than those of retina specialists, as well as high specificities (greater than 90%). Moreover, our AI system provided a transparent and interpretable diagnosis with heatmaps. Conclusions: We used OCT macular images to develop CNN models that identify vision-threatening conditions in high myopia patients. Our models achieved reliable sensitivities and high specificities comparable to those of retina specialists and may be applied for large-scale high myopia screening and patient follow-up.
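Two of the training details named above lend themselves to a short sketch: a binary focal loss to handle class imbalance, and Youden-index selection of the operating threshold on a validation ROC curve. The snippet below is a generic PyTorch/scikit-learn rendering under assumed tensor shapes, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_curve

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss; logits and targets are 1-D float tensors of equal length."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()   # down-weight easy examples

def youden_threshold(y_true, y_score):
    """Operating point maximizing sensitivity + specificity - 1 on a validation set."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]
```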


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Reza Mirshahi ◽  
Pasha Anvari ◽  
Hamid Riazi-Esfahani ◽  
Mahsa Sardarinia ◽  
Masood Naseripour ◽  
...  

Abstract The purpose of this study was to introduce a new deep learning (DL) model for segmentation of the foveal avascular zone (FAZ) in en face optical coherence tomography angiography (OCTA) and to compare the results with those of the device's built-in software and manual measurements in healthy subjects and diabetic patients. In this retrospective study, FAZ borders were delineated in the inner retinal slab of 3 × 3 mm en face OCTA images of 131 eyes of 88 diabetic patients and 32 eyes of 18 healthy subjects. To train a deep convolutional neural network (CNN) model, 126 en face OCTA images (104 eyes with diabetic retinopathy and 22 normal eyes) were used as the training/validation dataset. The accuracy of the model was then evaluated using a dataset consisting of OCTA images of 10 normal eyes and 27 eyes with diabetic retinopathy. The CNN model was based on Detectron2, an open-source modular object detection library. In addition, automated FAZ measurements were conducted using the device's built-in commercial software, and manual FAZ delineation was performed using ImageJ software. Bland–Altman analysis was used to show the 95% limits of agreement (95% LoA) between the different methods. The mean Dice similarity coefficient of the DL model was 0.94 ± 0.04 in the testing dataset. There was excellent agreement between the automated, DL model and manual measurements of the FAZ in healthy subjects (95% LoA of −0.005 to 0.026 mm2 between automated and manual measurements, and 0.000 to 0.009 mm2 between DL and manual FAZ area). In diabetic eyes, the agreement between DL and manual measurements was excellent (95% LoA of −0.063 to 0.095); however, there was poor agreement between the automated and manual methods (95% LoA of −0.186 to 0.331). The presence of diabetic macular edema and intraretinal cysts at the fovea was associated with erroneous FAZ measurements by the device's built-in software. In conclusion, the DL model showed excellent accuracy in detecting the FAZ border in en face OCTA images of both diabetic patients and healthy subjects. The DL and manual measurements outperformed the automated measurements of the built-in software.
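The two evaluation measures used above, the Dice similarity coefficient between a predicted and a manual FAZ mask and the Bland–Altman 95% limits of agreement between paired FAZ-area measurements, can be written compactly as follows. This is a generic sketch with assumed NumPy array inputs, not the study's code.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity between two boolean segmentation masks of the same shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum() + eps)

def bland_altman_loa(a, b):
    """95% limits of agreement between paired area measurements a and b (e.g., mm^2)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_diff, sd = diff.mean(), diff.std(ddof=1)
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd
```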


2021 ◽  
Author(s):  
Adrit Rao ◽  
Harvey A. Fishman

Identifying diseases in optical coherence tomography (OCT) images using deep learning models is emerging as a powerful technique to enhance clinical diagnosis. Identifying macular diseases in the eye at an early stage and preventing misdiagnosis is crucial. The current methods developed for OCT image analysis have not yet been integrated into an accessible form factor that can be used in real-life settings by ophthalmologists, and they do not employ robust multiple-metric feedback. This paper proposes a highly accurate smartphone-based deep learning system, OCTAI, that allows a user to take an OCT picture and receive real-time feedback through on-device inference. OCTAI analyzes the input OCT image in three different ways: (1) full-image analysis, (2) quadrant-based analysis, and (3) disease-detection-based analysis. With these three analysis methods, along with an ophthalmologist's interpretation, a robust diagnosis can potentially be made. The ultimate goal of OCTAI is to assist ophthalmologists by providing a digital second opinion, enabling them to cross-check their diagnosis before making a decision based purely on manual analysis of OCT images. OCTAI has the potential to help ophthalmologists improve their diagnoses and may reduce misdiagnosis rates, leading to faster treatment of diseases.
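The quadrant-based analysis mode can be illustrated with a minimal sketch: the OCT image is split into four sub-images and the same classifier is applied to each, so that localized pathology is scored separately rather than being diluted by the rest of the image. The classify callable and array shapes below are hypothetical stand-ins, not OCTAI's actual interface.

```python
import numpy as np

def quadrant_predictions(image, classify):
    """Run a classifier on each quadrant of an OCT image.

    image: (H, W) or (H, W, C) NumPy array; classify: callable applied to a sub-image.
    Returns a dict mapping quadrant name to the classifier's output for that patch.
    """
    h, w = image.shape[0] // 2, image.shape[1] // 2
    quadrants = {
        "top_left": image[:h, :w], "top_right": image[:h, w:],
        "bottom_left": image[h:, :w], "bottom_right": image[h:, w:],
    }
    return {name: classify(patch) for name, patch in quadrants.items()}
```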


2021 ◽  
Author(s):  
Fangyao Tang ◽  
Xi Wang ◽  
An-ran Ran ◽  
Carmen KM Chan ◽  
Mary Ho ◽  
...  

Objective: Diabetic macular edema (DME) is the primary cause of vision loss among individuals with diabetes mellitus (DM). We developed, validated, and tested a deep-learning (DL) system for classifying DME using images from three common commercially available optical coherence tomography (OCT) devices. Research Design and Methods: We trained and validated two versions of a multi-task convolutional neural network (CNN) to classify DME (center-involved DME [CI-DME], non-CI-DME, or absence of DME) using three-dimensional (3D) volume scans and two-dimensional (2D) B-scans, respectively. For both the 3D and 2D CNNs, we employed the residual network (ResNet) as the backbone. For the 3D CNN, we used a 3D version of ResNet-34 with the last fully connected layer removed as the feature extraction module. A total of 73,746 OCT images were used for training and primary validation. External testing was performed using 26,981 images across seven independent datasets from Singapore, Hong Kong, the US, China, and Australia. Results: In classifying the presence or absence of DME, the DL system achieved areas under the receiver operating characteristic curve (AUROCs) of 0.937 (95% CI 0.920–0.954), 0.958 (0.930–0.977), and 0.965 (0.948–0.977) for the primary dataset obtained from Cirrus, Spectralis, and Triton OCT devices, respectively, in addition to AUROCs greater than 0.906 for the external datasets. For the further classification of the CI-DME and non-CI-DME subgroups, the AUROCs were 0.968 (0.940–0.995), 0.951 (0.898–0.982), and 0.975 (0.947–0.991) for the primary dataset and greater than 0.894 for the external datasets. Conclusion: We demonstrated excellent performance with a DL system for the automated classification of DME, highlighting its potential as a promising second-line screening tool for patients with DM, which may create a more effective triaging mechanism for eye clinics.
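The 3D branch described above, a 3D ResNet backbone with the last fully connected layer removed feeding a three-class head, can be approximated with torchvision's stock video model. Note that torchvision ships a 3D ResNet-18 (r3d_18) rather than the ResNet-34 variant used in the paper, so the sketch below is an approximation of the architecture under stated assumptions, not the authors' network.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# 3D ResNet backbone with its classification layer removed, used as a feature extractor.
backbone = r3d_18(weights=None)
feature_dim = backbone.fc.in_features      # 512 for r3d_18
backbone.fc = nn.Identity()                # drop the last fully connected layer

# Small head mapping pooled volume features to the three DME classes:
# CI-DME, non-CI-DME, and absence of DME.
head = nn.Linear(feature_dim, 3)

def classify_volume(volume: torch.Tensor) -> torch.Tensor:
    """volume: (B, 3, D, H, W) OCT volume scan, grayscale slices repeated to 3 channels."""
    features = backbone(volume)            # (B, 512) pooled features
    return head(features)                  # (B, 3) class logits
```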


2020 ◽  
pp. bjophthalmol-2019-315715
Author(s):  
Dong Hyun Kang ◽  
Young Hoon Hwang

Purpose: To evaluate the effect of baseline test selection on progression detection of circumpapillary retinal nerve fibre layer (RNFL) and macular ganglion cell-inner plexiform layer (GCIPL) in glaucomatous eyes by optical coherence tomography (OCT)-guided progression analysis (GPA). Methods: A total of 53 eyes with either RNFL or GCIPL progression determined using OCT-GPA were included. Three different baseline conditions were created by dividing eight serial OCT tests from each eye into three sets. Specifically, these sets used baseline tests at exams 1–2 (1st set), 2–3 (2nd set) and 3–4 (3rd set), respectively. Agreement on progression detection was defined as the presence of 'Possible Loss' or 'Likely Loss' in the 2nd or 3rd set at the same location as in the 1st set. Results: The proportion of eyes with agreement on progression detection was 47.1%, 20.0% and 31.0% for RNFL 'thickness map progression', 'thickness profiles progression' and 'average thickness progression', respectively. For GCIPL 'thickness map progression' and 'average thickness progression', 53.8% and 62.8% of eyes showed agreement, respectively. Eyes with disagreement showed a greater change in thickness (slope of change in the 3rd set minus that in the 1st set) than eyes with agreement (p<0.05), with the exception of RNFL 'thickness profiles progression' (p=0.064). Conclusion: Glaucoma progression detection by OCT-GPA was affected by baseline test selection, especially in eyes with a greater rate of thickness reduction. GCIPL thickness was less influenced by baseline test selection than RNFL thickness.


2019 ◽  
Vol 203 ◽  
pp. 37-45 ◽  
Author(s):  
Huazhu Fu ◽  
Mani Baskaran ◽  
Yanwu Xu ◽  
Stephen Lin ◽  
Damon Wing Kee Wong ◽  
...  
