Lesion Detection and Classification Techniques for Diabetic Retinopathy

Diabetes is a rapidly growing disease found worldwide and across all age groups. Diabetic Retinopathy is a retinal abnormality caused by diabetes that can lead to permanent vision loss or blindness. Because Diabetic Retinopathy damages the retina without any early symptoms, regular retinal screening and detection of Retinopathy are essential. Ophthalmologists identify Retinopathy manually, which is time consuming and error prone; hence, there is a need for early and accurate automatic detection of Diabetic Retinopathy. Much research on detection has been carried out using Image Processing, Artificial Intelligence, Neural Networks and Machine Learning. This paper presents a review of Diabetic Retinopathy detection systems. It highlights the public datasets available for evaluating detection systems and analyzes the different segmentation and classification techniques used in DR detection.

Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 816
Author(s):  
Pingping Liu ◽  
Xiaokang Yang ◽  
Baixin Jin ◽  
Qiuzhan Zhou

Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM), and it is important to diagnose DR at an early stage so that treatment can begin promptly. With the rapid development of convolutional neural networks in the field of image processing, deep learning methods have achieved great success in medical image processing, and various lesion detection systems have been proposed to detect fundus lesions. At present, image classification of diabetic retinopathy typically ignores the fine-grained properties of diseased images, and most retinopathy image datasets suffer from severely imbalanced class distributions, which greatly limits a network's ability to classify lesions. We propose a new non-homologous bilinear pooling convolutional neural network model and combine it with an attention mechanism to further improve the network's ability to extract image-specific features. The experimental results show that, compared with the most popular fundus image classification models, the proposed network model greatly improves prediction accuracy while maintaining computational efficiency.
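The core idea here, two non-homologous feature streams combined by bilinear pooling with channel attention applied to each stream, can be illustrated with a minimal PyTorch sketch. The backbone choices (ResNet-18 and VGG-16), layer sizes, and five-class output are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of non-homologous bilinear pooling with channel attention (PyTorch).
# Backbones and sizes are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn
import torchvision.models as models

class BilinearAttentionNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        # Two different (non-homologous) feature extractors, both ending in 512 channels.
        self.stream_a = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
        self.stream_b = models.vgg16(weights=None).features
        # Squeeze-and-excitation style channel attention for each stream.
        def se_block():
            return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(512, 64), nn.ReLU(),
                                 nn.Linear(64, 512), nn.Sigmoid())
        self.att_a, self.att_b = se_block(), se_block()
        self.classifier = nn.Linear(512 * 512, num_classes)

    def forward(self, x):
        fa = self.stream_a(x)                                   # (B, 512, Ha, Wa)
        fb = self.stream_b(x)                                   # (B, 512, Hb, Wb)
        fa = fa * self.att_a(fa).unsqueeze(-1).unsqueeze(-1)    # channel reweighting
        fb = fb * self.att_b(fb).unsqueeze(-1).unsqueeze(-1)
        fb = nn.functional.adaptive_avg_pool2d(fb, fa.shape[-2:])  # match spatial size
        B, C, H, W = fa.shape
        fa, fb = fa.view(B, C, H * W), fb.view(B, C, H * W)
        bilinear = torch.bmm(fa, fb.transpose(1, 2)) / (H * W)  # (B, 512, 512) outer product
        feat = bilinear.view(B, -1)
        feat = torch.sign(feat) * torch.sqrt(torch.abs(feat) + 1e-8)  # signed square root
        feat = nn.functional.normalize(feat)                          # L2 normalization
        return self.classifier(feat)
```

The signed square root and L2 normalization are the usual post-processing steps for bilinear features; they keep the very large pooled vector numerically well behaved before classification.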


When the pancreas fails to secrete sufficient insulin in the human body, the glucose level in the blood becomes either too high or too low. This fluctuation in glucose level affects different body organs such as the kidney, brain, and eye. When complications start appearing in the eyes due to Diabetes Mellitus (DM), the condition is called Diabetic Retinopathy (DR). Based on severity, DR lesions can be categorized into several classes: Microaneurysms (ME), Haemorrhages (HE), and Hard and Soft Exudates (EX and SE). DR is a slowly progressing condition that starts with very mild symptoms, becomes moderate over time, and results in complete vision loss if not detected in time. Early-stage detection can greatly help in preventing vision loss. However, the symptoms of DR cannot be detected with the naked eye. Ophthalmologists resort to several approaches and algorithms that make use of different Machine Learning (ML) methods and classifiers to address this disease. The growing prominence of Convolutional Neural Networks (CNN) and their advances in extracting features from fundus images have motivated several researchers to work on them. Transfer Learning (TL) techniques make it possible to use pre-trained CNNs on datasets with limited training data, a situation especially common in developing countries. In this work, we propose several CNN architectures along with distinct classifiers that segregate the different lesions (ME and EX) in DR images with very promising accuracies.
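As a hedged illustration of the transfer-learning setup described above, a pre-trained CNN reused as a frozen feature extractor with a separate classifier on top, the following sketch pairs an ImageNet-pretrained ResNet-50 with an SVM. The backbone, the SVM, and the fundus_patches directory layout are assumptions for illustration, not the specific architectures proposed in the paper.

```python
# Sketch: frozen pretrained CNN features + separate classifier for lesion patches (ME vs EX).
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pretrained backbone with the final fully connected layer removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def extract_features(folder):
    """Return (features, labels) for an ImageFolder of lesion patches."""
    ds = datasets.ImageFolder(folder, transform=preprocess)
    loader = DataLoader(ds, batch_size=32, shuffle=False)
    feats, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            feats.append(backbone(x.to(device)).cpu())
            labels.append(y)
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

# Hypothetical directory layout: fundus_patches/train/{ME,EX}, fundus_patches/test/{ME,EX}.
X_train, y_train = extract_features("fundus_patches/train")
X_test, y_test = extract_features("fundus_patches/test")

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```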


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6549
Author(s):  
Roberto Romero-Oraá ◽  
María García ◽  
Javier Oraá-Pérez ◽  
María I. López-Gálvez ◽  
Roberto Hornero

Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the proprietary database. Using the public database, 91.80% ACCi, 98.59% PPVp, and 91.65% SEp were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the attention to diabetic patients.
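For reference, the per-pixel figures quoted above (PPVp and SEp) follow directly from pixel-level confusion counts between a predicted lesion mask and the ground-truth mask; a minimal sketch, assuming binary NumPy masks of equal shape:

```python
# Minimal sketch of the per-pixel metrics quoted above, assuming binary NumPy masks.
import numpy as np

def per_pixel_metrics(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Return (PPV, sensitivity) for a predicted lesion mask vs. a ground-truth mask."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    ppv = tp / (tp + fp) if tp + fp else 0.0          # positive predictive value (PPVp)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # sensitivity over lesion pixels (SEp)
    return ppv, sensitivity
```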


2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. A419-A420
Author(s):  
Zack Dvey-Aharon ◽  
Petri Huhtinen

Abstract According to estimates of the World Health Organization (WHO), there are almost 500M people in the world who suffer from diabetes. Projections suggest this number will surpass 700M by 2045, with global prevalence surpassing 7%. This huge population, alongside people with pre-diabetes, is prone to develop diabetic retinopathy, the leading cause of vision loss in the working-age population. While early screening can help prevent most cases of vision loss caused by diabetic retinopathy, the vast majority of patients are not screened periodically as the guidelines instruct. The challenge is to find a reliable and convenient method to screen patients so that efficacy in detecting referable diabetic retinopathy is sufficient while integration with the flow of care is smooth, easy, simple, and cost-efficient. In this research, we describe a screening process for more-than-mild diabetic retinopathy (mtmDR) through the application of artificial intelligence (AI) algorithms on images obtained by a portable, handheld fundus camera. 156 patients were screened for mtmDR indication. Four images were taken per patient, two macula-centered and two optic-disc-centered. The 624 images were taken using the Optomed Aurora fundus camera and were uploaded using Optomed Direct-Upload. Fully blinded and independently, a certified, experienced ophthalmologist (contracted by Optomed and based in Finland) reviewed each patient to determine ground truth. Indications other than mtmDR were also documented by the ophthalmologist to apply the exclusion criteria. Data were obtained from anonymized images uploaded to the cloud-based AEYE-DS system, and analysis results from the AI algorithm were promptly returned to the users. Of the 156 patients, a certified ophthalmologist determined that 100% reached sufficient image quality for grading, and 36 had existing retinal diseases that fall under the exclusion criteria; thus, 77% of the participants met the participation criteria. Of the remaining 120 patients, the AEYE-DS system determined that 2 patients had at least one image of insufficient quality. AEYE-DS provided readings for each of the 118 remaining patients (98.3% of the 120 included patients). These were statistically compared to the output of the ground-truth arm. The patient ground truth was defined as the most severe diagnosis from the four patient images; the ophthalmologist diagnosed 54 patients as mtmDR+ (45% prevalence). Of the 54 patients with referable DR, 50 were diagnosed correctly, and of the 64 mtmDR- patients, 61 were correctly diagnosed by the AI. In summary, the results of the study in terms of sensitivity and specificity were 92.6% and 95.3%, respectively. The results indicated accurate classification of diabetic patients who required referral to the ophthalmologist and those who did not. The results also demonstrated the potential for efficient screening and easy workflow integration into points of care such as endocrinology clinics.
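The reported sensitivity and specificity follow directly from the patient counts in the study (50 of 54 mtmDR+ detected, 61 of 64 mtmDR- correctly ruled out); a small worked check:

```python
# Worked check of the reported figures: 54 mtmDR+ patients (50 flagged by the AI)
# and 64 mtmDR- patients (61 correctly ruled out).
tp, fn = 50, 54 - 50   # referable DR correctly flagged / missed
tn, fp = 61, 64 - 61   # non-referable correctly ruled out / over-called

sensitivity = tp / (tp + fn)   # 50 / 54 ≈ 0.926
specificity = tn / (tn + fp)   # 61 / 64 ≈ 0.953
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```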


2020 ◽  
Vol 11 (SPL4) ◽  
pp. 503-511
Author(s):  
Rajesh S R ◽  
Kanniga E ◽  
Sundararajan M

Diabetic retinopathy (DR) is a significant complication of diabetes, and the microaneurysm (MA) is one of the earliest diabetic retinopathy lesions, making early detection of MAs a key factor in diabetic retinopathy screening. DR is a direct or indirect effect on human vision caused by chronic diabetes. During its early stages DR is asymptomatic, and late diagnosis leads to irreversible vision loss. Computer-assisted diagnosis, with the aid of medical images, helps provide prompt and effective care. MAs mark the beginning of DR, making their detection a vital screening stage for this disorder. Diabetic retinopathy is a chronic disease of the eye that can cause blindness unless it is diagnosed and treated in due course. Early detection and analysis of diabetic retinopathy are vital to preserving the patient's vision. Precise recognition of MAs is crucial for early diagnosis of DR, since they occur as the first symptom of the disease. The segmentation of MAs is performed using the Fuzzy C-means algorithm, and feature extraction is performed with the Gray Level Co-occurrence Matrix (GLCM), which provides the feature set for a KNN classifier. This technique aims to improve classification accuracy within an ensemble. A procedure is proposed here that recognizes the first DR sign, the MA, using retinal fundus images. Effective diagnosis of DR is very critical in protecting patients' sight. The proposed procedure is tested using publicly available retinal image databases and achieves greater accuracy.
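A minimal sketch of the feature-and-classification stage described above, GLCM texture descriptors fed to a KNN classifier, is given below. The fuzzy C-means segmentation step is assumed to have already produced candidate patches, the patches here are synthetic stand-ins, and the GLCM properties and k value are illustrative choices (requires scikit-image ≥ 0.19, where the functions are spelled graycomatrix/graycoprops).

```python
# Sketch: GLCM texture features from candidate patches classified with KNN.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch_gray):
    """GLCM contrast, homogeneity, energy and correlation for one 8-bit grayscale patch."""
    glcm = graycomatrix(patch_gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")])

# Toy stand-ins for candidate patches produced by the fuzzy C-means segmentation stage.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, 40)   # 1 = microaneurysm candidate, 0 = background (synthetic)

X = np.stack([glcm_features(p) for p in patches])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:30], labels[:30])
print("toy accuracy:", knn.score(X[30:], labels[30:]))
```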


Author(s):  
Ryan Sadjadi

Diabetic retinopathy is the most common microvascular complication of diabetes mellitus and one of the leading causes of blindness globally. Due to the progressive nature of the disease, earlier detection and timely treatment can lead to substantial reductions in the incidence of irreversible vision loss. Artificial intelligence (AI) screening systems have offered clinically acceptable and quicker results in detecting diabetic retinopathy from retinal fundus and optical coherence tomography (OCT) images. Thus, this systematic review and meta-analysis of relevant investigations was performed to document the performance of AI screening systems that were applied to fundus and OCT images of patients from diverse geographic locations including North America, Europe, Africa, Asia, and Australia. A systematic literature search on Medline, Global Health, and PubMed was performed, and studies published between October 2015 and January 2020 were included. The search strategy was based on the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) reporting guidelines, and AI-based investigations were mandatory for study inclusion. The abstracts, titles, and full texts of potentially eligible studies were screened against inclusion and exclusion criteria. Twenty-one studies were included in this systematic review; 18 met inclusion criteria for the meta-analysis. The pooled sensitivity of the evaluated AI screening systems in detecting diabetic retinopathy was 0.93 (95% CI: 0.92-0.94) and the specificity was 0.88 (95% CI: 0.86-0.89). The included studies detailed training and external validation datasets, criteria for diabetic retinopathy case ascertainment, imaging modalities, DR-grading scales, and compared AI results to those of human graders (e.g., ophthalmologists, retinal specialists, trained nurses, and other healthcare providers) as a reference standard. The findings of this study showed that the majority of AI screening systems demonstrated clinically acceptable levels of sensitivity and specificity for detecting referable diabetic retinopathy from retinal fundus and OCT photographs. Further improvement depends on the continual development of novel algorithms with large and gradable sets of images for training and validation. If cost-effectiveness ratios can be optimized, AI can become a financially sustainable and clinically effective intervention that can be incorporated into the healthcare systems of low-to-middle income countries (LMICs) and geographically remote locations. Combining screening technologies with treatment interventions such as anti-VEGF therapy, acellular capillary laser treatment, and vitreoretinal surgery can lead to substantial reductions in the incidence of irreversible vision loss due to proliferative diabetic retinopathy.
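As a rough illustration of how pooled sensitivity and specificity can be derived, the sketch below simply sums confusion-matrix counts across studies (a fixed-effect-style pooling; the review itself may have used a different meta-analytic model, and the per-study counts shown are hypothetical):

```python
# Illustrative pooling of per-study confusion counts into overall sensitivity/specificity.
# The counts are hypothetical; a bivariate random-effects model is often preferred in practice.
studies = [
    {"tp": 180, "fn": 12, "tn": 540, "fp": 70},
    {"tp": 95,  "fn": 8,  "tn": 310, "fp": 45},
    {"tp": 260, "fn": 20, "tn": 880, "fp": 110},
]
tp = sum(s["tp"] for s in studies)
fn = sum(s["fn"] for s in studies)
tn = sum(s["tn"] for s in studies)
fp = sum(s["fp"] for s in studies)
print(f"pooled sensitivity = {tp / (tp + fn):.2f}, pooled specificity = {tn / (tn + fp):.2f}")
```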


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5283 ◽  
Author(s):  
Muhammad Tariq Sadiq ◽  
Xiaojun Yu ◽  
Zhaohui Yuan ◽  
Muhammad Zulkifal Aziz

The development of fast and robust brain–computer interface (BCI) systems requires non-complex and efficient computational tools. The modern procedures adopted for this purpose are complex, which limits their use in practical applications. In this study, for the first time, and to the best of our knowledge, a successive decomposition index (SDI)-based feature extraction approach is utilized for the classification of motor and mental imagery electroencephalography (EEG) tasks. First, the public datasets IVa, IVb, and V from BCI competition III were denoised using multiscale principal component analysis (MSPCA), and then an SDI feature was calculated for each trial of the data. Finally, six benchmark machine learning and neural network classifiers were used to evaluate the performance of the proposed method. All the experiments were performed for motor and mental imagery datasets in binary and multiclass applications using a 10-fold cross-validation method. Furthermore, computerized automatic detection of motor and mental imagery using SDI (CADMMI-SDI) is developed to describe the proposed approach practically. The experimental results suggest that the highest classification accuracies of 97.46% (Dataset IVa), 99.52% (Dataset IVb), and 99.33% (Dataset V) were obtained using a feedforward neural network classifier. Moreover, a series of experiments, namely statistical analysis, channel variation, classifier parameter variation, processed and unprocessed data, and computational complexity, were performed, and it was concluded that SDI is robust to noise and is a non-complex, efficient biomarker for the development of fast and accurate motor and mental imagery BCI systems.
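The evaluation protocol described above, per-trial features scored with a feedforward neural network under 10-fold cross-validation, can be sketched as follows. The SDI computation itself follows the paper and is not reproduced here; the feature matrix below is a synthetic stand-in, and the MLP size is an illustrative assumption.

```python
# Sketch: 10-fold cross-validation of a feedforward network on per-trial features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(42)
X = rng.normal(size=(280, 18))   # synthetic stand-in for per-trial, per-channel SDI features
y = rng.integers(0, 2, 280)      # binary motor-imagery labels (synthetic)

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```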


2021 ◽  
Author(s):  
Sunmi Lee ◽ 
Yunhwan Kim

BACKGROUND Hashtag movements have become one of the major forms of online activism, but few studies have examined how social media photos are used in such movements. It has also not been actively investigated how photo features relate to the public's responses in hashtag movements. OBJECTIVE The aim of the present research was to explore Instagram photos with the #ShoutYourAbortion hashtag, as an example of a photo-based hashtag movement, in terms of their visual representation and the relationships between photo features and the public's responses to the photos. METHODS Instagram photos with the #ShoutYourAbortion hashtag, 11,176 in total, were downloaded, and their content and embedded texts were analyzed using online artificial intelligence services. The photos were clustered into subgroups based on features extracted using a pretrained convolutional neural network model. The resulting clusters were compared in terms of their content tags, embedded texts, and photo features, which were manually extracted at the content and pixel levels. The public's responses were measured by engagement and comment sentiment. Correlational analysis and predictive analytics were conducted to examine the relationships between photo features and the public's responses. RESULTS It was found that photos in the text category took the largest share (57.19%), and the embedded texts were mainly stories told from a first-person point of view as a woman. Possible evidence of hashtag hijacking was observed. The photos were grouped into two clusters: the first cluster comprised photos exhibiting text materials, while the second cluster consisted of photos containing human faces with texts. The photos in the first cluster were brighter, while the photos in the second cluster were more colorful than the others. Public responses were found to be related to photo features such as the size of faces, happy emotion, and the share of warm colors. Engagement was predicted from the photo features with an acceptable level of accuracy, while comment sentiment was not. CONCLUSIONS This study has shown the visual representation of the #ShoutYourAbortion hashtag movement. It has also shown how photo features at the content and pixel levels relate to the public's responses to the photos. The results are expected to contribute to the understanding of photo-based hashtag movements and to making photos in hashtag movements more appealing to the public. CLINICALTRIAL Not Applicable
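A hedged sketch of the photo-clustering step, features from a pretrained convolutional network grouped with k-means into two clusters, is shown below. The ResNet-50 backbone, the k-means choice, and the local instagram_photos directory are assumptions for illustration; the study's exact pipeline may differ.

```python
# Sketch: pretrained-CNN feature extraction followed by k-means clustering of photos.
import glob
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import KMeans

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()   # keep the 2048-d pooled features, drop the classifier
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

paths = glob.glob("instagram_photos/*.jpg")   # hypothetical local copies of the photos
with torch.no_grad():
    feats = torch.stack([
        backbone(preprocess(Image.open(p).convert("RGB")).unsqueeze(0)).squeeze(0)
        for p in paths
    ])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats.numpy())
```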

