Validation of Automated Screening for Referable Diabetic Retinopathy With an Autonomous Diagnostic Artificial Intelligence System in a Spanish Population

2020, pp. 193229682090621
Author(s): Abhay Shah, Warren Clarida, Ryan Amelon, Maria C. Hernaez-Ortega, Amparo Navea, ...

Purpose: The purpose of this study is to compare the diagnostic performance of an autonomous artificial intelligence (AI) system for the diagnosis of referable diabetic retinopathy (RDR) with manual grading by Spanish ophthalmologists. Methods: Subjects with type 1 and type 2 diabetes participated in a diabetic retinopathy (DR) screening program in Valencia (Spain) in 2011 to 2012, and two images per eye were collected according to the program's standard protocol. Mydriatic drops were used in all patients. Retinal images—one disc centered and one fovea centered—were obtained under Medical Research Ethics Committee approval and de-identified. Exams were graded by the autonomous AI system (IDx-DR, Coralville, Iowa, United States) and manually by masked ophthalmologists using adjudication. The outputs of the AI system and manual adjudicated grading were compared using sensitivity and specificity for the diagnosis of both RDR and vision-threatening diabetic retinopathy (VTDR). Results: A total of 2680 subjects were included in the study. According to manual grading, the prevalence of RDR was 111/2680 (4.14%) and of VTDR was 69/2680 (2.57%). Against manual grading, the AI system had 100% (95% confidence interval [CI]: 97%-100%) sensitivity and 81.82% (95% CI: 80%-83%) specificity for RDR, and 100% (95% CI: 95%-100%) sensitivity and 94.64% (95% CI: 94%-95%) specificity for VTDR. Conclusion: Compared with manual grading by ophthalmologists, the autonomous diagnostic AI system had high sensitivity (100%) and specificity (82%) for diagnosing RDR and macular edema in people with diabetes in a screening program. Because it provides an immediate, point-of-care diagnosis, autonomous diagnostic AI has the potential to increase the accessibility of RDR screening in primary care settings.
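For readers who want to reproduce the headline figures, the minimal sketch below (Python, using SciPy) shows how sensitivity, specificity, and exact Clopper-Pearson 95% confidence intervals follow from 2x2 confusion-matrix counts. The counts used are approximate reconstructions from the reported totals (111/2680 RDR-positive, 100% sensitivity, ~82% specificity), not the study's raw data.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def screening_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity with exact 95% CIs from confusion-matrix counts."""
    sens, sens_ci = tp / (tp + fn), clopper_pearson(tp, tp + fn)
    spec, spec_ci = tn / (tn + fp), clopper_pearson(tn, tn + fp)
    return {"sensitivity": (sens, sens_ci), "specificity": (spec, spec_ci)}

# Illustrative reconstruction only (not the study's raw data): 111 RDR-positive and
# 2569 RDR-negative subjects, all positives detected, roughly 82% specificity.
print(screening_metrics(tp=111, fp=467, fn=0, tn=2102))
```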

2020, pp. 193229682091428
Author(s): Jorge Cuadros

The study by Shah et al published in this issue of the Journal of Diabetes Science and Technology validates the IDx autonomous diabetic retinopathy (DR) screening program in a real-world setting. The study found high sensitivity (100%) but low specificity (82%) for referable DR. The resulting positive predictive value of 19% means that roughly four out of five patients referred to ophthalmology would not have referable DR, imposing a significant burden on ophthalmologists, primary care clinics, and patients. Artificial intelligence programs that provide better specificity, multiple levels of DR, and annotations of where lesions are located in the retina may function better than a simple refer/no-refer output. This would allow for better engagement of patients through the difficult process of adhering to treatment recommendations and controlling their diabetes.
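The commentary's point about predictive value follows directly from Bayes' rule: at low prevalence, even a modest false-positive rate dominates the referral stream. A minimal sketch of that arithmetic, using the rounded figures quoted above:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value from sensitivity, specificity, and prevalence (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Figures quoted above: 100% sensitivity, ~82% specificity, ~4% RDR prevalence.
print(round(ppv(1.00, 0.82, 0.04), 2))  # ~0.19: roughly 4 of 5 referrals lack referable DR
```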


2021, Vol 6, pp. 12-12
Author(s): Alauddin Bhuiyan, Arun Govindaiah, Sharmina Alauddin, Oscar Otero-Marquez, R. Theodore Smith

2020, Vol 8 (1), pp. e000892
Author(s): Bhavana Sosale, Sosale Ramachandra Aravind, Hemanth Murthy, Srikanth Narayana, Usha Sharma, ...

Introduction: The aim of this study is to evaluate the performance of the offline smartphone-based Medios artificial intelligence (AI) algorithm in the diagnosis of diabetic retinopathy (DR) using non-mydriatic (NM) retinal images. Methods: This cross-sectional study prospectively enrolled 922 individuals with diabetes mellitus. NM retinal images (disc and macula centered) from each eye were captured using the Remidio NM fundus-on-phone (FOP) camera. The images were analyzed offline by the AI algorithm, and its diagnosis (DR present or absent) was recorded. The AI diagnosis was compared with the image diagnosis of five retina specialists (majority diagnosis considered the ground truth). Results: The analysis included images from 900 individuals (252 had DR). For any DR, the sensitivity and specificity of the AI algorithm were 83.3% (95% CI 80.9% to 85.7%) and 95.5% (95% CI 94.1% to 96.8%), respectively. The sensitivity and specificity of the AI algorithm in detecting referable DR (RDR) were 93% (95% CI 91.3% to 94.7%) and 92.5% (95% CI 90.8% to 94.2%), respectively. Conclusion: The Medios AI has high sensitivity and specificity in the detection of RDR using NM retinal images.
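The reference standard in this study is the majority diagnosis of five retina specialists. A minimal, hypothetical sketch of that aggregation step; the data layout is an assumption for illustration, not the authors' pipeline:

```python
from collections import Counter

def majority_label(grades):
    """Reference-standard label as the majority call among an odd number of graders."""
    return Counter(grades).most_common(1)[0][0]

# Hypothetical grades from five retina specialists for one exam (1 = DR present, 0 = absent).
print(majority_label([1, 1, 0, 1, 0]))  # -> 1
```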


2021, Vol 10 (11), pp. 2352
Author(s): Andrzej Grzybowski, Piotr Brona

Background: The prevalence of diabetic retinopathy (DR) is expected to increase, which will put an increasing strain on health care resources. Recently, artificial intelligence-based, autonomous DR screening systems have been developed. A direct comparison between different systems is often difficult, and only two such comparisons have been published so far. As different screening solutions are now available commercially, with more in the pipeline, choosing a system is not a simple matter. Based on the images gathered in a local DR screening program, we performed a retrospective comparison of IDx-DR and Retinalyze. Methods: We chose a non-representative sample consisting of all referable-DR-positive screening subjects (n = 60) and a random selection of DR-negative patient images (n = 110). Only subjects with four good-quality, 45-degree field-of-view images (a macula-centered and a disc-centered image from each eye) were chosen for comparison. The images were captured by a Topcon NW-400 fundus camera, without mydriasis, and had previously been graded by a single ophthalmologist. For the purpose of this comparison, we assumed two screening strategies for Retinalyze, in which either one or two out of the four images needed to be marked positive by the system for an overall positive result at the patient level. Results: Percentage agreement with the single reader in DR-positive and DR-negative cases, respectively, was 93.3% and 95.5% for IDx-DR; 89.7% and 71.8% for Retinalyze under strategy 1; and 74.1% and 93.6% for Retinalyze under strategy 2. Conclusions: Both systems were able to analyse the vast majority of images, and both were easy to set up and use. There are several limitations to the current pilot study, concerning sample choice and the reference grading, that need to be addressed before attempting a more robust future study.
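The two Retinalyze strategies amount to a threshold on the number of positive per-image outputs for a patient. A minimal sketch under that reading; the data layout is illustrative, not the vendor's API:

```python
def patient_positive(image_flags, min_positive):
    """Patient-level referral decision from per-image binary outputs."""
    return sum(image_flags) >= min_positive

# Hypothetical outputs for one patient's four images (disc- and macula-centered, both eyes).
flags = [True, False, False, False]
print(patient_positive(flags, min_positive=1))  # strategy 1 -> True (refer)
print(patient_positive(flags, min_positive=2))  # strategy 2 -> False (do not refer)
```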


2021, pp. 193229682199937
Author(s): Nikita Mokhashi, Julia Grachevskaya, Lorrie Cheng, Daohai Yu, Xiaoning Lu, ...

Introduction: Artificial intelligence (AI) diabetic retinopathy (DR) software has the potential to decrease the time clinicians spend on image interpretation and to expand the scope of DR screening. We performed a retrospective review comparing Eyenuk's EyeArt software (Woodland Hills, CA) to Temple Ophthalmology optometry grading using the International Classification of Diabetic Retinopathy scale. Methods: Two hundred and sixty consecutive diabetic patients from the Temple Faculty Practice Internal Medicine clinic underwent 2-field retinal imaging. Classifications of the images by the software and the optometrist were analyzed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and McNemar's test. Ungradable images were analyzed to identify relationships with HbA1c, age, and ethnicity. Disagreements and a sample of 20% of agreements were adjudicated by a retina specialist. Results: On patient-level comparison, sensitivity for the software was 100%, while specificity was 77.78%. PPV was 19.15%, and NPV was 100%. The 38 disagreements between the software and the optometrist occurred when the optometrist classified a patient's images as non-referable while the software classified them as referable. Of these disagreements, the retina specialist agreed with the optometrist 57.9% of the time (22/38). Of the adjudicated agreements, the retina specialist agreed with both the program and the optometrist 96.7% of the time (28/29). There was a significant difference in the number of ungradable photos in older patients (≥60) versus younger patients (<60) (p=0.003). Conclusions: The AI program showed high sensitivity with acceptable specificity for a screening algorithm. The high NPV indicates that the software is unlikely to miss DR but may refer patients unnecessarily.
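McNemar's test, used here, operates on the discordant cells of the paired patient-level 2x2 table (software vs optometrist). A hedged sketch using statsmodels: the 38 software-referable/optometrist-non-referable disagreements and the zero discordances in the other direction come from the text, while the concordant counts are illustrative placeholders summing to 260 patients.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Paired patient-level table: rows = optometrist (referable, non-referable),
# columns = EyeArt software (referable, non-referable).
# Off-diagonal cells are the disagreements; 0 and 38 come from the text,
# the diagonal counts are illustrative placeholders.
table = [[9, 0],
         [38, 213]]
result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(result.statistic, result.pvalue)
```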

