Two-Tier Grading System for NPDR Severities of Diabetic Retinopathy in Retinal Fundus Images

2020 ◽  
Vol 14 ◽  
Author(s):  
Charu Bhardwaj ◽  
Shruti Jain ◽  
Meenakshi Sood

Diabetic Retinopathy (DR) is the leading cause of vision impairment, and its early-stage diagnosis relies on regular monitoring and timely treatment of anomalies that show only subtle distinctions among severity grades. Existing DR detection approaches are subjective, laborious, and time-consuming, and can only be carried out by skilled professionals. The authors reviewed all patents related to DR detection and diagnosis applicable to this research problem. The major limitation in severity classification lies in poor discrimination between actual lesions, background noise, and other anatomical structures. A robust and computationally efficient Two-Tier DR (2TDR) grading system is proposed in this paper to categorize the DR severities (mild, moderate, and severe) present in retinal fundus images. In the proposed 2TDR grading system, the input fundus image is subjected to background segmentation, and the foreground fundus image is used for anomaly identification, followed by Gray-Level Co-occurrence Matrix (GLCM) feature extraction to form an image feature set. The novelty of the model lies in the exhaustive statistical analysis of the extracted feature set to obtain an optimal reduced feature set, which is then used for classification. Classification outcomes are obtained for both the extracted and the reduced feature sets to validate the significance of the statistical analysis in severity classification and grading. For the single-tier classification stage, the proposed system achieves an overall accuracy of 100% with both k-Nearest Neighbour (kNN) and Artificial Neural Network (ANN) classifiers. In the second-tier classification stage, overall accuracies of 95.3% with kNN and 98.0% with ANN are achieved across all stages using the optimal reduced feature set. After feature set reduction, the 2TDR system improves classification performance by 2% for kNN and 6% for ANN, and it also outperforms the accuracy of other state-of-the-art methods on the MESSIDOR dataset. This application-oriented work aids accurate DR classification for effective diagnosis and timely treatment of severe retinal ailments.
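
As a rough illustration of the GLCM feature stage described in this abstract, the following Python sketch extracts Haralick-style texture features from a grayscale patch and feeds them to a kNN classifier. The distances, angles, property list, and k = 3 are illustrative assumptions, not the authors' exact 2TDR configuration, and the random patches merely stand in for segmented foreground fundus images.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(gray_img):
    """Haralick-style GLCM texture features from an 8-bit grayscale image."""
    glcm = graycomatrix(gray_img, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
    # Average each property over the four angles for a rotation-robust vector.
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Demo on random patches standing in for segmented fundus foregrounds;
# y holds hypothetical severity grades (0 = mild, 1 = moderate, 2 = severe).
rng = np.random.default_rng(0)
X = np.array([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
              for _ in range(30)])
y = rng.integers(0, 3, 30)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(X[:5]))
```

The paper's statistical feature reduction step would sit between `glcm_features` and the classifier; it is not reproduced here.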

Author(s):  
Juan Elisha Widyaya ◽  
Setia Budi

Diabetic retinopathy (DR) is an eye disease caused by diabetes mellitus. If DR is detected at an early stage, the blindness that follows can be prevented. Ophthalmologists or eye clinicians usually determine the stage of DR from retinal fundus images. Careful examination of retinal fundus images is a time-consuming task that requires experienced clinicians or ophthalmologists, whereas a computer trained to recognize the DR stages can diagnose and return results in real time. One deep learning approach for training a computer to recognize images is the Convolutional Neural Network (CNN). A CNN allows a computer to learn the features of an image, in our case a retinal fundus image, automatically. Preprocessing is usually performed before a CNN model is trained. In this study, four preprocessing methods were evaluated. Of the four, preprocessing with contrast-limited adaptive histogram equalization (CLAHE) and unsharp masking on the green channel of the retinal fundus image gave the best results, with an accuracy of 79.79%, a precision of 82.97%, a recall of 74.64%, and an AUC of 95.81%. The CNN architecture used is Inception v3.
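
A minimal sketch of the best-performing preprocessing described above, using OpenCV: CLAHE followed by unsharp masking, applied to the green channel of a fundus image. The clip limit, tile size, blur sigma, and sharpening amount are assumed values; the study's exact parameters are not stated here.

```python
import cv2

def preprocess_green_channel(bgr_img, clip_limit=2.0, tile=(8, 8),
                             sigma=10, amount=1.0):
    """CLAHE + unsharp masking on the green channel of a fundus image."""
    green = bgr_img[:, :, 1]                       # OpenCV loads images as BGR
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    enhanced = clahe.apply(green)                  # local contrast enhancement
    blurred = cv2.GaussianBlur(enhanced, (0, 0), sigma)
    # Unsharp mask: boost the difference between the image and its blur.
    return cv2.addWeighted(enhanced, 1 + amount, blurred, -amount, 0)

# Usage: img = cv2.imread("fundus.png"); proc = preprocess_green_channel(img)
```

The single-channel output would then be fed (typically replicated or stacked) into the Inception v3 model for training.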


2020 ◽  
Author(s):  
Alejandro Noriega ◽  
Dalia Camacho ◽  
Daniela Meizner ◽  
Jennifer Enciso ◽  
Hugo Quiroz-Mercado ◽  
...  

Background: The automated screening of patients at risk of developing diabetic retinopathy (DR) represents an opportunity to improve their mid-term outcome and lower the public expenditure associated with the direct and indirect costs of a common sight-threatening complication of diabetes. Objective: In the present study, we aim to develop and evaluate the performance of an automated deep learning-based system that classifies retinal fundus images from international and Mexican patients as referable or non-referable DR cases. In particular, we study the performance of the automated retina image analysis (ARIA) system under an independent scheme (i.e., ARIA-only screening) and two assistive schemes (i.e., hybrid ARIA + ophthalmologist screening), using a web-based platform for remote image analysis. Methods: We ran a randomized controlled experiment in which 17 ophthalmologists were asked to classify a series of retinal fundus images under three conditions: 1) screening the fundus image by themselves (solo); 2) screening the fundus image after being exposed to the opinion of the ARIA system (ARIA answer); and 3) screening the fundus image after being exposed to the opinion of the ARIA system, as well as its level of confidence and an attention map highlighting the most important areas of interest in the image according to the ARIA system (ARIA explanation). The ophthalmologists' opinion in each condition and the opinion of the ARIA system were compared against a gold standard generated by consulting and aggregating the opinions of three retina specialists for each fundus image. Results: The ARIA system classified referable vs. non-referable cases with an area under the Receiver Operating Characteristic curve (AUROC), sensitivity, and specificity of 98%, 95.1%, and 91.5%, respectively, for international patient cases, and an AUROC, sensitivity, and specificity of 98.3%, 95.2%, and 90%, respectively, for Mexican patient cases. The results achieved on Mexican patient cases outperformed the average performance of the 17 ophthalmologists participating in the study. We also find that the ARIA system can be useful as an assistive tool, as significant specificity improvements were observed in the experimental condition where participants were exposed to the answer of the ARIA system as a second opinion (93.3%), compared with the specificity of the condition where participants assessed the images independently (87.3%). Conclusions: These results demonstrate that both use cases of ARIA systems, independent and assistive, present a substantial opportunity for Latin American countries like Mexico to expand monitoring capacity efficiently for the early detection of diabetes-related blindness.
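
A sketch of how the screening metrics reported in this abstract (AUROC, sensitivity, specificity for referable vs. non-referable DR) can be computed from model scores against a specialist gold standard, using scikit-learn. The 0.5 decision threshold is an assumption; the study's operating point may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def screening_metrics(y_true, y_score, threshold=0.5):
    """y_true: 1 = referable DR, 0 = non-referable; y_score: model probability."""
    auroc = roc_auc_score(y_true, y_score)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate on referable cases
    specificity = tn / (tn + fp)   # true negative rate on non-referable cases
    return auroc, sensitivity, specificity

# Example with toy data:
print(screening_metrics([0, 0, 1, 1, 1], [0.1, 0.6, 0.8, 0.9, 0.4]))
```

The same function applies unchanged to the ophthalmologists' binary answers in each experimental condition (with `y_score` as 0/1 decisions, in which case AUROC degenerates to balanced accuracy of the single operating point).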


2020 ◽  
Author(s):  
Alejandro Noriega ◽  
Daniela Meizner ◽  
Dalia Camacho ◽  
Jennifer Enciso ◽  
Hugo Quiroz-Mercado ◽  
...  

BACKGROUND The automated screening of patients at risk of developing diabetic retinopathy (DR) represents an opportunity to improve their mid-term outcome and lower the public expenditure associated with the direct and indirect costs of common sight-threatening complications of diabetes. OBJECTIVE The present study aims to develop and evaluate the performance of an automated deep learning–based system that classifies retinal fundus images from international and Mexican patients as referable or non-referable DR cases. In particular, the performance of the automated retina image analysis (ARIA) system is evaluated under an independent scheme (i.e., ARIA-only screening) and two assistive schemes (i.e., hybrid ARIA + ophthalmologist screening), using a web-based platform for remote image analysis, to determine and compare the sensitivity and specificity of the three schemes. METHODS A randomized controlled experiment was performed in which seventeen ophthalmologists were asked to classify a series of retinal fundus images under three conditions: 1) screening the fundus image by themselves (solo); 2) screening the fundus image after being exposed to the retina image classification of the ARIA system (ARIA answer); and 3) screening the fundus image after being exposed to the classification of the ARIA system, as well as its level of confidence and an attention map highlighting the most important areas of interest in the image according to the ARIA system (ARIA explanation). The ophthalmologists' classification in each condition and the result from the ARIA system were compared against a gold standard generated by consulting and aggregating the opinions of three retina specialists for each fundus image. RESULTS The ARIA system classified referable vs. non-referable cases with an area under the Receiver Operating Characteristic curve (AUROC) of 98.0%, a sensitivity of 95.1%, and a specificity of 91.5% for international patient cases, and an AUROC, sensitivity, and specificity of 98.3%, 95.2%, and 90.0%, respectively, for Mexican patient cases. These results outperformed the average performance of the seventeen ophthalmologists enrolled in the study. Additionally, the results suggest that the ARIA system can be useful as an assistive tool, as significant sensitivity improvements were observed in the experimental condition where ophthalmologists were exposed to the ARIA system's answer prior to their own classification (93.3%), compared with the sensitivity of the condition where participants assessed the images independently (87.3%). CONCLUSIONS These results demonstrate that both use cases of the ARIA system, independent and assistive, present a substantial opportunity for Latin American countries like Mexico to expand monitoring capacity efficiently for the early detection of diabetes-related blindness.


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
K. Somasundaram ◽  
P. Alli Rajendran

Retinal fundus images are widely used in diagnosing different types of eye diseases. Existing methods such as Feature Based Macular Edema Detection (FMED) and the Optimally Adjusted Morphological Operator (OAMO) effectively detect the presence of exudation in fundus images and identify the true positive ratio of exudate detection, respectively. However, these methods do not incorporate a more detailed feature selection technique for the detection of diabetic retinopathy. To categorize the exudates, a Diabetic Fundus Image Recuperation (DFIR) method based on a sliding-window approach is developed in this work to select the features of the optic cup in digital retinal fundus images. DFIR feature selection uses a collection of sliding windows of varying range to obtain features from histogram values using a Group Sparsity Nonoverlapping Function. In the second phase, using a support vector model with a spiral basis function, the DFIR method ranks the diabetic retinopathy disease level. The ranking of the disease level on each candidate set provides a promising basis for a practical automated and assisted diabetic retinopathy diagnosis system. Experimental work on digital fundus images evaluates the DFIR method in terms of sensitivity, ranking efficiency, and feature selection time.
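
A heavily hedged sketch of the generic sliding-window idea behind the DFIR feature stage: intensity histograms are collected from windows of varying size across the image. The window sizes, stride, and bin count are invented for illustration, and the paper's Group Sparsity Nonoverlapping Function and spiral-basis SVM ranking are not reproduced here.

```python
import numpy as np

def sliding_window_histograms(gray_img, window_sizes=(32, 64), stride=32, bins=16):
    """Collect per-window intensity histograms over a grayscale fundus image."""
    feats = []
    h, w = gray_img.shape
    for win in window_sizes:                      # windows of "varying range"
        for y in range(0, h - win + 1, stride):
            for x in range(0, w - win + 1, stride):
                patch = gray_img[y:y + win, x:x + win]
                hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
                feats.append(hist / hist.sum())   # normalized histogram feature
    return np.vstack(feats)

# Usage: feats = sliding_window_histograms(gray_fundus_array)
```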


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3922
Author(s):  
Sheeba Lal ◽  
Saeed Ur Rehman ◽  
Jamal Hussain Shah ◽  
Talha Meraj ◽  
Hafiz Tayyab Rauf ◽  
...  

Due to the rapid growth of artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples cause DL models to misclassify instances that humans would consider benign, and such threats carry over to practical applications in physical scenarios. Thus, adversarial attacks and defenses, including their implications for machine learning reliability, have drawn growing interest and have been a hot research topic in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, combining adversarial training and a feature fusion strategy that preserves classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, which is considered a state-of-the-art endeavor. Results obtained on the retinal fundus images, which are prone to adversarial attacks, reach 99% accuracy and demonstrate that the proposed defensive model is robust.
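
A minimal sketch of the speckle-noise perturbation at the core of such an attack, plus the adversarial-training idea behind the defense: each training batch is augmented with perturbed copies so the model learns to classify them correctly. The noise strength sigma is an assumed value, and this simplified sketch omits the paper's feature fusion strategy.

```python
import numpy as np

def speckle_attack(images, sigma=0.1, rng=None):
    """Multiplicative (speckle) noise: x' = x + x * n, with n ~ N(0, sigma^2)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=images.shape)
    return np.clip(images * (1.0 + noise), 0.0, 1.0)   # images assumed in [0, 1]

def adversarial_batch(images, labels, sigma=0.1):
    """Adversarial-training step: mix clean and attacked copies of a batch."""
    attacked = speckle_attack(images, sigma)
    return np.concatenate([images, attacked]), np.concatenate([labels, labels])

# Usage inside a training loop: X_aug, y_aug = adversarial_batch(X_batch, y_batch)
```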


Author(s):  
Mohammad Shorfuzzaman ◽  
M. Shamim Hossain ◽  
Abdulmotaleb El Saddik

Diabetic retinopathy (DR) is one of the most common causes of vision loss in people who have had diabetes for a prolonged period. Convolutional neural networks (CNNs) have become increasingly popular for computer-aided DR diagnosis using retinal fundus images. While these CNNs are highly reliable, their lack of sufficient explainability prevents them from being widely used in medical practice. In this article, we propose a novel explainable deep learning ensemble model in which weights from different models are fused into a single model to extract salient features from the various retinal lesions found on fundus images. The extracted features are then fed to a custom classifier for the final diagnosis of DR severity level. The model is trained on the APTOS dataset, containing retinal fundus images of various DR grades, using a cyclical learning rate strategy with an automatic learning rate finder to decay the learning rate and improve model accuracy. We develop an explainability approach by leveraging gradient-weighted class activation mapping (Grad-CAM) and Shapley additive explanations (SHAP) to highlight the areas of fundus images that are most indicative of different DR stages, allowing ophthalmologists to inspect the model's decisions in a way they can understand. Evaluation results using three different datasets (APTOS, MESSIDOR, IDRiD) show the effectiveness of our model, achieving superior classification rates with high precision (0.970), sensitivity (0.980), and AUC (0.978). We believe that the proposed model, which jointly offers state-of-the-art diagnostic performance and explainability, will address the black-box nature of deep CNN models in the robust detection of DR grading.
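
A minimal Grad-CAM sketch in PyTorch, illustrating the gradient-weighted class activation mapping that underlies the highlighted fundus regions described above. The choice of target layer is an assumption for a generic CNN backbone (e.g., `model.layer4[-1]` on a ResNet); the authors' ensemble-specific details are not reproduced.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """image: (1, C, H, W) float tensor; returns an (H, W) heatmap in [0, 1]."""
    acts, grads = {}, {}
    fh = target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    bh = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))   # explain the predicted DR grade
    model.zero_grad()
    logits[0, class_idx].backward()
    fh.remove(); bh.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # pooled channel gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted activation map
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0].detach()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The returned heatmap can be overlaid on the fundus image so that a clinician can verify whether the model attends to actual lesions rather than artifacts.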


2020 ◽  
Vol 9 (2) ◽  
pp. 34
Author(s):  
Adrian Galdran ◽  
Jihed Chelbi ◽  
Riadh Kobi ◽  
José Dolz ◽  
Hervé Lombaert ◽  
...  
