Screening Diabetic Retinopathy Using an Automated Retinal Image Analysis System in Mexico: Independent and Assistive Use Cases (Preprint)

2020 ◽  
Author(s):  
Alejandro Noriega ◽  
Daniela Meizner ◽  
Dalia Camacho ◽  
Jennifer Enciso ◽  
Hugo Quiroz-Mercado ◽  
...  

BACKGROUND The automated screening of patients at risk of developing diabetic retinopathy (DR) represents an opportunity to improve their mid-term outcome and lower the public expenditure associated with the direct and indirect costs of common sight-threatening complications of diabetes. OBJECTIVE The present study aims to develop and evaluate the performance of an automated deep learning–based system to classify retinal fundus images from international and Mexican patients as referable and non-referable DR cases. In particular, the performance of the automated retina image analysis (ARIA) system is evaluated under an independent scheme (i.e., only ARIA screening) and two assistive schemes (i.e., hybrid ARIA + ophthalmologist screening), using a web-based platform for remote image analysis to determine and compare the sensitivity and specificity of the three schemes. METHODS A randomized controlled experiment was performed in which seventeen ophthalmologists were asked to classify a series of retinal fundus images under three different conditions: 1) screening the fundus image by themselves (solo), 2) screening the fundus image after being exposed to the retina image classification of the ARIA system (ARIA answer), and 3) screening the fundus image after being exposed to the classification of the ARIA system, as well as its level of confidence and an attention map highlighting the most important areas of interest in the image according to the ARIA system (ARIA explanation). The ophthalmologists’ classification in each condition and the result from the ARIA system were compared against a gold standard generated by consulting and aggregating the opinions of three retina specialists for each fundus image. RESULTS The ARIA system was able to classify referable vs. non-referable cases with an area under the receiver operating characteristic curve (AUROC) of 98.0% and a sensitivity and specificity of 95.1% and 91.5%, respectively, for international patient cases, and an AUROC, sensitivity, and specificity of 98.3%, 95.2%, and 90.0%, respectively, for Mexican patient cases. These results outperformed the average performance of the seventeen ophthalmologists enrolled in the study. Additionally, they suggest that the ARIA system can be useful as an assistive tool, as significant sensitivity improvements were observed in the experimental condition where ophthalmologists were exposed to the ARIA system’s answer prior to their own classification (93.3%), compared to the sensitivity of the condition where participants assessed the images independently (87.3%). CONCLUSIONS These results demonstrate that both use cases of the ARIA system, independent and assistive, present a substantial opportunity for Latin American countries like Mexico toward an efficient expansion of monitoring capacity for the early detection of diabetes-related blindness.
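The abstract reports AUROC, sensitivity, and specificity against a retina-specialist gold standard. The sketch below is not the authors' code; it only illustrates, with scikit-learn and made-up labels and scores, how those three metrics are typically computed for a referable vs. non-referable classifier. The 0.5 decision threshold and the array contents are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): AUROC, sensitivity, and specificity
# for a binary referable / non-referable classifier. All values are made up.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                          # 1 = referable DR (gold standard)
y_score = np.array([0.92, 0.10, 0.85, 0.40, 0.05, 0.30, 0.77, 0.55])  # hypothetical ARIA scores

auroc = roc_auc_score(y_true, y_score)

y_pred = (y_score >= 0.5).astype(int)                 # illustrative operating point
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                          # referable cases correctly flagged
specificity = tn / (tn + fp)                          # non-referable cases correctly cleared

print(f"AUROC={auroc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```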

2020 ◽  
Author(s):  
Alejandro Noriega ◽  
Dalia Camacho ◽  
Daniela Meizner ◽  
Jennifer Enciso ◽  
Hugo Quiroz-Mercado ◽  
...  

Background: The automated screening of patients at risk of developing diabetic retinopathy (DR) represents an opportunity to improve their mid-term outcome and lower the public expenditure associated with the direct and indirect costs of a common sight-threatening complication of diabetes. Objective: In the present study, we aim to develop and evaluate the performance of an automated deep learning-based system to classify retinal fundus images from international and Mexican patients as referable and non-referable DR cases. In particular, we study the performance of the automated retina image analysis (ARIA) system under an independent scheme (i.e., only ARIA screening) and two assistive schemes (i.e., hybrid ARIA + ophthalmologist screening), using a web-based platform for remote image analysis. Methods: We ran a randomized controlled experiment in which 17 ophthalmologists were asked to classify a series of retinal fundus images under three different conditions: 1) screening the fundus image by themselves (solo), 2) screening the fundus image after being exposed to the opinion of the ARIA system (ARIA answer), and 3) screening the fundus image after being exposed to the opinion of the ARIA system, as well as its level of confidence and an attention map highlighting the most important areas of interest in the image according to the ARIA system (ARIA explanation). The ophthalmologists' opinion in each condition and the opinion of the ARIA system were compared against a gold standard generated by consulting and aggregating the opinions of three retina specialists for each fundus image. Results: The ARIA system was able to classify referable vs. non-referable cases with an area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 98%, 95.1%, and 91.5%, respectively, for international patient cases, and an AUROC, sensitivity, and specificity of 98.3%, 95.2%, and 90%, respectively, for Mexican patient cases. The results achieved on Mexican patient cases outperformed the average performance of the 17 ophthalmologist participants of the study. We also find that the ARIA system can be useful as an assistive tool, as significant specificity improvements were observed in the experimental condition where participants were exposed to the answer of the ARIA system as a second opinion (93.3%), compared to the specificity of the condition where participants assessed the images independently (87.3%). Conclusions: These results demonstrate that both use cases of the ARIA system, independent and assistive, present a substantial opportunity for Latin American countries like Mexico toward an efficient expansion of monitoring capacity for the early detection of diabetes-related blindness.
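The abstract reports a significant improvement in the assisted condition relative to solo screening but does not name the statistical test used. The sketch below assumes McNemar's test, a common choice for comparing paired per-image decisions; the correctness vectors and the use of statsmodels are illustrative assumptions, not the authors' analysis.

```python
# Hedged sketch: paired comparison of solo vs. ARIA-assisted screening decisions
# on the same images. McNemar's test is an assumption; the abstract does not
# state which significance test was applied.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Per-image correctness (1 = correct referral decision) in the two paired conditions (toy data).
solo_correct = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
aria_correct = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1])

# 2x2 table of (solo, assisted) agreement/disagreement counts.
table = np.zeros((2, 2), dtype=int)
for s, a in zip(solo_correct, aria_correct):
    table[s, a] += 1

result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"statistic={result.statistic}, p-value={result.pvalue:.4f}")
```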


2020 ◽  
Vol 14 ◽  
Author(s):  
Charu Bhardwaj ◽  
Shruti Jain ◽  
Meenakshi Sood

Diabetic retinopathy is the leading cause of vision impairment, and its early-stage diagnosis relies on regular monitoring and timely treatment of anomalies that exhibit only subtle distinctions among different severity grades. Existing diabetic retinopathy (DR) detection approaches are subjective, laborious, and time-consuming, and can only be carried out by skilled professionals. All patents related to DR detection and diagnosis applicable to our research problem were reviewed by the authors. The major limitation in classifying severities lies in the poor discrimination between actual lesions, background noise, and other anatomical structures. A robust and computationally efficient Two-Tier DR (2TDR) grading system is proposed in this paper to categorize the DR severities (mild, moderate, and severe) present in retinal fundus images. In the proposed 2TDR grading system, the input fundus image is subjected to background segmentation, and the foreground fundus image is used for anomaly identification followed by GLCM feature extraction, forming an image feature set. The novelty of our model lies in the exhaustive statistical analysis of the extracted feature set to obtain an optimal reduced image feature set employed further for classification. Classification outcomes are obtained for both the extracted and the reduced feature sets to validate the significance of the statistical analysis in severity classification and grading. For the single-tier classification stage, the proposed system achieves an overall accuracy of 100% with the k-Nearest Neighbour (kNN) and Artificial Neural Network (ANN) classifiers. For the second-tier classification stage, an overall accuracy of 95.3% with kNN and 98.0% with ANN is achieved for all stages using the optimal reduced feature set. The 2TDR system demonstrates an overall improvement in classification performance of 2% and 6% for kNN and ANN, respectively, after feature set reduction, and also outperforms the accuracy obtained by other state-of-the-art methods when applied to the MESSIDOR dataset. This application-oriented work aids in accurate DR classification for effective diagnosis and timely treatment of severe retinal ailments.
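As a rough illustration of the kind of pipeline the abstract describes, the sketch below extracts GLCM texture features and feeds them to a kNN classifier using scikit-image and scikit-learn. It omits background segmentation, anomaly identification, and the statistical feature-reduction step, and all parameters and toy data are assumptions rather than the authors' 2TDR configuration.

```python
# Minimal sketch, not the authors' 2TDR pipeline: GLCM texture features per
# grayscale image region, followed by a kNN classifier on toy data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19 naming
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray_img):
    """Extract a few common GLCM properties from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy arrays standing in for segmented fundus regions and their severity labels.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 3, 20)          # 0 = mild, 1 = moderate, 2 = severe

X = np.array([glcm_features(img) for img in images])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(knn.predict(X[:5]))
```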


2021 ◽  
Author(s):  
Abdullah Biran

Automatic Detection and Classification of Diabetic Retinopathy from Retinal Fundus Images by Abdullah Biran, Master of Applied Science, Electrical and Computer Engineering Department, Ryerson University, 2017. Diabetic Retinopathy (DR) is an eye disease that leads to blindness when it progresses to the proliferative level. The earliest signs of DR are the appearance of red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR can prevent blindness. In this thesis, an automatic algorithm for detecting diabetic retinopathy is presented. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. In addition, a Support Vector Machine (SVM) classifier is used to classify retinal images into normal or abnormal cases of DR, including non-proliferative (NPDR) or proliferative diabetic retinopathy (PDR). The proposed method has been tested on fundus images from the Standard Diabetic Retinopathy Database (DIARETDB). The presented methodology was implemented in MATLAB and is evaluated for sensitivity and accuracy.
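The thesis implements its pipeline in MATLAB; the sketch below is only a rough Python/OpenCV analogue of two of the named steps (CLAHE enhancement and a Circular Hough Transform), with a scikit-learn SVM stub for the classification stage. The synthetic image and all parameter values are assumptions, not the thesis settings.

```python
# Rough Python/OpenCV analogue of CLAHE + Circular Hough Transform, with an SVM stub.
import cv2
import numpy as np
from sklearn.svm import SVC

# Synthetic grayscale image with one bright disc, standing in for a fundus photograph.
img = np.full((256, 256), 40, dtype=np.uint8)
cv2.circle(img, (128, 128), 50, 200, -1)

# Contrast Limited Adaptive Histogram Equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Circular Hough Transform to locate circular structures such as the optic disc.
circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=60, param2=30, minRadius=30, maxRadius=120)

# An SVM would separate normal, NPDR, and PDR cases from extracted features;
# random features here merely illustrate the classifier interface.
X, y = np.random.rand(30, 10), np.random.randint(0, 3, 30)
clf = SVC(kernel="rbf").fit(X, y)
print(None if circles is None else circles.shape, clf.predict(X[:3]))
```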


Author(s):  
Juan Elisha Widyaya ◽  
Setia Budi

Diabetic retinopathy (DR) is an eye disease caused by diabetes mellitus. If DR is detected at an early stage, the blindness that follows can be prevented. Ophthalmologists or eye clinicians usually determine the stage of DR from retinal fundus images. Careful examination of retinal fundus images is a time-consuming task that requires experienced clinicians or ophthalmologists, whereas a computer trained to recognize the DR stages can diagnose and deliver results in real time. One approach to training a computer to recognize an image is the deep learning Convolutional Neural Network (CNN). A CNN allows a computer to learn the features of an image, in our case a retinal fundus image, automatically. Preprocessing is usually done before a CNN model is trained. In this study, four preprocessing methods were evaluated. Of the four methods tested, preprocessing with CLAHE and unsharp masking on the green channel of the retinal fundus image gave the best results, with an accuracy of 79.79%, 82.97% precision, 74.64% recall, and 95.81% AUC. The CNN architecture used is Inception v3.
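As a hedged illustration of the best-performing preprocessing the abstract reports, the sketch below applies CLAHE and unsharp masking to the green channel of a stand-in image using OpenCV. The clip limit, blur sigma, and sharpening weights are assumptions, not the parameters used in the study.

```python
# Sketch of the reported preprocessing: CLAHE + unsharp masking on the green channel.
import cv2
import numpy as np

rgb = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in fundus image
green = rgb[:, :, 1]                                            # green channel only

# Contrast Limited Adaptive Histogram Equalization on the green channel.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalized = clahe.apply(green)

# Unsharp masking: weighted difference between the image and a Gaussian-blurred copy.
blurred = cv2.GaussianBlur(equalized, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)

# The single-channel result would then be fed (replicated or stacked) to Inception v3.
print(sharpened.shape, sharpened.dtype)
```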


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
K. Somasundaram ◽  
P. Alli Rajendran

Retinal fundus images are widely used in diagnosing different types of eye disease. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detect the presence of exudation in fundus images and measure the true positive ratio of exudate detection, respectively, but they do not incorporate a more detailed feature selection technique for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding-window approach is developed in this work to select features of the optic cup in digital retinal fundus images. The DFIR feature selection uses a collection of sliding windows of varying range to obtain histogram-based features using a Group Sparsity Nonoverlapping Function. Using a support vector model in the second phase, the DFIR method, based on a Spiral Basis Function, ranks the diabetic retinopathy disease level. The ranking of disease level on each candidate set provides a promising basis for developing a practical automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images evaluates the DFIR method in terms of sensitivity, ranking efficiency, and feature selection time.
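The DFIR-specific components (the Group Sparsity Nonoverlapping Function and the Spiral Basis Function ranking) are not reproduced here; the sketch below only illustrates the generic sliding-window, histogram-based feature collection the abstract mentions, with window sizes and bin counts chosen arbitrarily as assumptions.

```python
# Hedged sketch of sliding-window histogram features, not the full DFIR method.
import numpy as np

def sliding_window_histograms(gray_img, window_sizes=(32, 64), bins=16):
    """Return intensity-histogram features from non-overlapping windows of each size."""
    features = []
    for w in window_sizes:
        for r in range(0, gray_img.shape[0] - w + 1, w):
            for c in range(0, gray_img.shape[1] - w + 1, w):
                window = gray_img[r:r + w, c:c + w]
                hist, _ = np.histogram(window, bins=bins, range=(0, 256), density=True)
                features.append(hist)
    return np.concatenate(features)

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # stand-in fundus image
print(sliding_window_histograms(img).shape)
```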

