Pengaruh Preprocessing Terhadap Klasifikasi Diabetic Retinopathy dengan Pendekatan Transfer Learning Convolutional Neural Network (The Effect of Preprocessing on Diabetic Retinopathy Classification with a Transfer Learning Convolutional Neural Network Approach)

Author(s):  
Juan Elisha Widyaya ◽  
Setia Budi

Diabetic retinopathy (DR) is an eye disease caused by diabetes mellitus. If DR is detected at an early stage, the blindness that follows can be prevented. Ophthalmologists or eye clinicians usually determine the stage of DR from retinal fundus images. Careful examination of retinal fundus images is a time-consuming task that requires experienced clinicians or ophthalmologists, whereas a computer trained to recognize the DR stages can diagnose and return results in real time. One algorithmic approach to training a computer to recognize images is the deep learning Convolutional Neural Network (CNN). A CNN allows a computer to learn the features of an image, in our case a retinal fundus image, automatically. Preprocessing is usually performed before a CNN model is trained. In this study, four preprocessing methods were evaluated. Of the four, preprocessing with CLAHE and unsharp masking on the green channel of the retinal fundus image gave the best results, with 79.79% accuracy, 82.97% precision, 74.64% recall, and 95.81% AUC. The CNN architecture used is Inception v3.
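The best-performing pipeline above (green-channel extraction, contrast enhancement, unsharp masking) can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the authors' code: it substitutes global histogram equalization for CLAHE, which applies the same idea per tile with contrast clipping (e.g. via `cv2.createCLAHE` in OpenCV), and uses a simple 3x3 box blur for the unsharp mask.

```python
import numpy as np

def equalize_hist(channel):
    # Global histogram equalization; CLAHE is the tile-based,
    # contrast-limited variant of this same remapping idea.
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # intensity lookup table
    return lut[channel]

def unsharp_mask(channel, amount=1.0):
    # Sharpen by adding back the difference from a blurred copy.
    img = channel.astype(np.float64)
    pad = np.pad(img, 1, mode="edge")
    # 3x3 box blur as a simple smoothing kernel
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    sharp = img + amount * (img - blur)
    return np.clip(sharp, 0, 255).astype(np.uint8)

def preprocess_fundus(rgb):
    # Green channel carries the highest vessel/lesion contrast in fundus
    # images; enhance its contrast, then sharpen.
    green = rgb[..., 1]
    return unsharp_mask(equalize_hist(green))
```

In practice the preprocessed single-channel image would be resized and fed to the Inception v3 input; the enhancement choices here (box blur, global equalization) are assumptions for brevity.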

2020 ◽  
Vol 14 ◽  
Author(s):  
Charu Bhardwaj ◽  
Shruti Jain ◽  
Meenakshi Sood

Diabetic Retinopathy (DR) is the leading cause of vision impairment, and its early-stage diagnosis relies on regular monitoring and timely treatment of anomalies exhibiting subtle distinctions among different severity grades. Existing DR detection approaches are subjective, laborious, and time consuming, and can only be carried out by skilled professionals. All patents related to DR detection and diagnosis applicable to our research problem were reviewed by the authors. The major limitation in classifying severities lies in poor discrimination between actual lesions, background noise, and other anatomical structures. A robust and computationally efficient Two-Tier DR (2TDR) grading system is proposed in this paper to categorize the DR severities (mild, moderate, and severe) present in retinal fundus images. In the proposed 2TDR grading system, the input fundus image is subjected to background segmentation, and the foreground fundus image is used for anomaly identification followed by GLCM feature extraction, forming an image feature set. The novelty of our model lies in the exhaustive statistical analysis of the extracted feature set to obtain an optimal reduced image feature set, which is then employed for classification. Classification outcomes are obtained for both the extracted and the reduced feature set to validate the significance of statistical analysis in severity classification and grading. For the single-tier classification stage, the proposed system achieves an overall accuracy of 100% with both k-Nearest Neighbour (kNN) and Artificial Neural Network (ANN) classifiers. In the second-tier classification stage, an overall accuracy of 95.3% with kNN and 98.0% with ANN is achieved for all stages using the optimal reduced feature set.
The 2TDR system demonstrates an overall improvement in classification performance of 2% and 6% for kNN and ANN respectively after feature-set reduction, and also outperforms the accuracy obtained by other state-of-the-art methods when applied to the MESSIDOR dataset. This application-oriented work aids accurate DR classification for effective diagnosis and timely treatment of severe retinal ailments.
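The GLCM feature-extraction step central to the 2TDR pipeline can be illustrated with a minimal NumPy sketch. The offset (horizontal neighbour), quantization level, and the three Haralick-style features below are illustrative choices, not the authors' exact configuration; production code would typically use `skimage.feature.graycomatrix`.

```python
import numpy as np

def glcm(img, levels=8):
    # Grey-Level Co-occurrence Matrix for horizontal neighbours, offset (0, 1):
    # m[a, b] counts how often grey level a sits immediately left of b.
    q = (img.astype(np.float64) / 256 * levels).astype(int)  # quantize to `levels` bins
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()  # normalize to a joint probability

def glcm_features(m):
    # Three classic texture descriptors computed from the normalized GLCM.
    i, j = np.indices(m.shape)
    return {
        "contrast": float(np.sum(m * (i - j) ** 2)),       # local intensity variation
        "energy": float(np.sum(m ** 2)),                   # texture uniformity
        "homogeneity": float(np.sum(m / (1.0 + np.abs(i - j)))),
    }
```

A feature set built this way (over several offsets and angles) would then be pruned by the statistical analysis the abstract describes before being handed to the kNN or ANN classifier.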


Diabetic Retinopathy (DR) is a microvascular complication of diabetes that can lead to blindness if severe. Microaneurysms (MAs) are the initial and main symptom of DR. In this paper, automatic detection of DR from retinal fundus images of a publicly available dataset is proposed using transfer learning with the pre-trained VGG16 model, based on a Convolutional Neural Network (CNN). Our method improves accuracy in MA detection from retinal fundus images for the prediction of Diabetic Retinopathy.
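The transfer-learning recipe used here (keep a pre-trained convolutional base frozen, train only a new classification head) can be illustrated without a deep-learning framework. In the sketch below, a fixed random projection stands in for VGG16's frozen convolutional features and a logistic-regression head is the new trainable classifier; the data, dimensions, and learning rate are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained convolutional base (e.g. VGG16 without its top):
# a fixed, frozen projection from raw inputs to a feature vector.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen ReLU features, never updated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

# Toy training data: two well-separated clusters standing in for
# "MA absent" (label 0) vs "MA present" (label 1) images.
X = np.vstack([rng.normal(-1, 0.3, size=(50, 64)),
               rng.normal(+1, 0.3, size=(50, 64))])
y = np.array([0] * 50 + [1] * 50)

# New trainable head: logistic regression, fit by gradient descent.
w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(200):
    f = features(X)
    p = sigmoid(f @ w + b)
    w -= lr * f.T @ (p - y) / len(y)  # only the head's parameters update;
    b -= lr * np.mean(p - y)          # W_frozen stays fixed throughout

acc = np.mean((sigmoid(features(X) @ w + b) > 0.5) == y)
```

With a real framework the same pattern is `base.trainable = False` on the VGG16 layers plus a small dense head trained on the fundus images.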


2018 ◽  
Vol 7 (4.33) ◽  
pp. 110
Author(s):  
Ahmad Firdaus Ahmad Fadzil ◽  
Zaaba Ahmad ◽  
Noor Elaiza Abd Khalid ◽  
Shafaf Ibrahim

Retinal fundus images are a crucial tool for ophthalmologists to diagnose eye-related diseases. These images provide visual information about the interior layers of retinal structures, such as the optic disc, optic cup, blood vessels, and macula, that can assist ophthalmologists in determining the health of an eye. Segmentation of blood vessels in fundus images is one of the most fundamental phases in detecting diseases such as diabetic retinopathy. However, the ambiguity of the retinal structures in fundus images presents a challenge for researchers segmenting the blood vessels. Extensive pre-processing and training on the images is necessary for precise segmentation, which is intricate and laborious. This paper proposes the implementation of object-oriented-based metadata (OOM) structures for each pixel in the retinal fundus images. These structures comprise additional metadata beyond the conventional red, green, and blue data for each pixel within the images. The segmentation of the blood vessels in the retinal fundus images is performed by considering this additional metadata, which captures the location, color spaces, and neighboring pixels of each individual pixel. The results show that accurate segmentation of retinal fundus blood vessels can be achieved by employing a straightforward thresholding method via the OOM structures, without extensive pre-processing or data training.
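One way to read the OOM idea is that each pixel carries a record of derived attributes (location, colour-space values, neighbourhood statistics) against which a simple threshold is applied. The sketch below is a hypothetical interpretation, not the paper's actual metadata fields or thresholds: it attaches a neighbourhood mean to each pixel and flags pixels that are markedly darker than their surroundings in the green channel, where vessels show the most contrast.

```python
import numpy as np

def pixel_metadata(rgb):
    # Per-pixel metadata beyond raw RGB: location, the green-channel value,
    # and the 3x3 neighbourhood mean (illustrative fields only).
    h, w, _ = rgb.shape
    ys, xs = np.indices((h, w))
    green = rgb[..., 1].astype(np.float64)
    pad = np.pad(green, 1, mode="edge")
    neigh = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return {"y": ys, "x": xs, "green": green, "neigh_mean": neigh}

def segment_vessels(meta, delta=10.0):
    # Vessels appear darker than their surroundings in the green channel:
    # threshold directly on the metadata, with no training step.
    return meta["green"] < meta["neigh_mean"] - delta
```

The point of the sketch is the workflow the abstract claims: once the metadata is in place, segmentation reduces to a plain threshold rather than a trained model.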


2020 ◽  
Author(s):  
Alejandro Noriega ◽  
Dalia Camacho ◽  
Daniela Meizner ◽  
Jennifer Enciso ◽  
Hugo Quiroz-Mercado ◽  
...  

Background: The automated screening of patients at risk of developing diabetic retinopathy (DR) represents an opportunity to improve their mid-term outcomes and lower the public expenditure associated with the direct and indirect costs of a common sight-threatening complication of diabetes. Objective: In the present study, we aim to develop and evaluate the performance of an automated deep learning-based system that classifies retinal fundus images from international and Mexican patients as referable or non-referable DR cases. In particular, we study the performance of the automated retina image analysis (ARIA) system under an independent scheme (i.e. ARIA screening only) and two assistive schemes (i.e. hybrid ARIA + ophthalmologist screening), using a web-based platform for remote image analysis. Methods: We ran a randomized controlled experiment in which 17 ophthalmologists were asked to classify a series of retinal fundus images under three conditions: 1) screening the fundus image by themselves (solo); 2) screening the fundus image after being exposed to the opinion of the ARIA system (ARIA answer); and 3) screening the fundus image after being exposed to the opinion of the ARIA system, as well as its level of confidence and an attention map highlighting the areas of the image most important to the ARIA system (ARIA explanation). The ophthalmologists' opinion in each condition and the opinion of the ARIA system were compared against a gold standard generated by consulting and aggregating the opinions of three retina specialists for each fundus image. Results: The ARIA system classified referable vs. non-referable cases with an area under the Receiver Operating Characteristic curve (AUROC), sensitivity, and specificity of 98%, 95.1%, and 91.5%, respectively, for international patient cases, and an AUROC, sensitivity, and specificity of 98.3%, 95.2%, and 90%, respectively, for Mexican patient cases.
The results achieved on Mexican patient cases outperformed the average performance of the 17 ophthalmologist participants in the study. We also found that the ARIA system can be useful as an assistive tool: significant specificity improvements were observed in the experimental condition where participants were exposed to the answer of the ARIA system as a second opinion (93.3%), compared with the condition where participants assessed the images independently (87.3%). Conclusions: These results demonstrate that both use cases of ARIA systems, independent and assistive, present a substantial opportunity for Latin American countries like Mexico to efficiently expand monitoring capacity for the early detection of diabetes-related blindness.
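The three evaluation metrics reported above (AUROC, sensitivity, specificity) can be computed from referral scores and gold-standard labels as follows. This is a generic NumPy sketch, not the authors' evaluation code; the 0.5 referral threshold is an assumed default.

```python
import numpy as np

def screening_metrics(scores, labels, threshold=0.5):
    # AUROC via the rank-comparison (Mann-Whitney U) formulation, plus
    # sensitivity and specificity at a fixed referral threshold.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # P(random referable case scores above random non-referable), ties = 1/2
    auroc = np.mean((pos[:, None] > neg[None, :])
                    + 0.5 * (pos[:, None] == neg[None, :]))
    pred = scores >= threshold
    sens = np.mean(pred[labels == 1])    # true-positive rate on referable cases
    spec = np.mean(~pred[labels == 0])   # true-negative rate on non-referable cases
    return float(auroc), float(sens), float(spec)
```

For example, a model that scores every referable case above every non-referable one attains an AUROC of 1.0 regardless of the threshold, which only affects sensitivity and specificity.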

