fundus images
Recently Published Documents

Prakruthi Mandya Krishnegowda ◽  
Komarasamy Ganesan

Diabetic retinopathy (DR) is a complication of diabetes and a leading cause of vision loss in middle-aged people. Timely screening and diagnosis can reduce the risk of blindness. Fundus imaging is the preferred modality for clinical analysis of DR. However, raw fundus images are often degraded by artifacts, noise, and low or uneven contrast, which makes them hard to process for both human observers and automated systems. The existing literature offers many solutions for enhancing fundus images, but each is tailored to a specific objective and cannot address the full variety of fundus image defects. This paper presents an on-demand preprocessing framework that integrates different techniques to address geometrical issues, random noise, and comprehensive contrast enhancement. The performance of each preprocessing step is evaluated against peak signal-to-noise ratio (PSNR), and the brightness of the enhanced image is quantified. The aim of this paper is to offer a flexible preprocessing mechanism that can meet image enhancement needs arising from different preprocessing requirements, improving the quality of fundus imaging for early-stage diabetic retinopathy identification.
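The two quality measures named in the abstract are straightforward to compute. Below is a minimal numpy sketch of PSNR and mean brightness as they might be used to score each preprocessing step; the function names and the 8-bit intensity range are assumptions, not taken from the paper.

```python
import numpy as np

def psnr(original: np.ndarray, enhanced: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((original.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def mean_brightness(img: np.ndarray) -> float:
    """Mean intensity: a simple proxy for the brightness quantified per step."""
    return float(np.mean(img))
```

A higher PSNR indicates the enhanced image stays closer to the reference, while brightness tracks how strongly a contrast step has shifted overall intensity.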

2022 ◽  
Vol 3 (1) ◽  
pp. 1-15
Divya Jyothi Gaddipati ◽  
Jayanthi Sivaswamy

Early detection and treatment of glaucoma is of interest as it is a chronic eye disease leading to an irreversible loss of vision. Existing automated systems rely largely on fundus images for assessment of glaucoma due to their fast acquisition and cost-effectiveness. Optical Coherence Tomography (OCT) images provide vital and unambiguous information about nerve fiber loss and optic cup morphology, which are essential for disease assessment. However, the high cost of OCT is a deterrent to large-scale screening deployment. In this article, we present a novel CAD solution wherein both OCT and fundus modality images are leveraged to learn a model that maps fundus features to the OCT feature space. We show how this model can subsequently be used to detect glaucoma given an image from only one modality (fundus). The proposed model has been validated extensively on four public and two private datasets. It attained an AUC/Sensitivity of 0.9429/0.9044 on a diverse set of 568 images, which is superior to the figures obtained by a model trained only on fundus features. Cross-validation was also done on nearly 1,600 images drawn from a private (OD-centric) and a public (macula-centric) dataset, and the proposed model was found to outperform the state-of-the-art method by 8% (public) to 18% (private). Thus, we conclude that fundus-to-OCT feature space mapping is an attractive option for glaucoma detection.
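The core idea of learning a mapping from fundus features to the OCT feature space can be illustrated with a simple linear stand-in. The article's actual model is not specified here, so this sketch uses closed-form ridge regression purely to show the train-then-map workflow; all names and the linearity assumption are hypothetical.

```python
import numpy as np

def fit_linear_mapping(F: np.ndarray, O: np.ndarray, reg: float = 1e-3) -> np.ndarray:
    """Fit W minimizing ||F W - O||^2 + reg ||W||^2 (ridge regression).

    F: (n, d_f) fundus feature vectors; O: (n, d_o) paired OCT feature vectors.
    """
    d = F.shape[1]
    # Normal equations with Tikhonov regularization for stability.
    return np.linalg.solve(F.T @ F + reg * np.eye(d), F.T @ O)

def map_fundus_to_oct(F: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Predict OCT-space features from fundus features alone."""
    return F @ W
```

At test time, only the fundus image is needed: its features are pushed through the learned mapping, and the downstream glaucoma classifier operates on the predicted OCT-space representation.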

2022 ◽  
Zhuoting Zhu ◽  
Yifan Chen ◽  
Wei Wang ◽  
Yueye Wang ◽  
Wenyi Hu ◽  

Background: Retinal parameters can reflect systemic vascular changes. With advances in deep learning, we have recently developed an algorithm to predict retinal age from fundus images, which could serve as a novel biomarker for ageing and mortality. Objective: To investigate associations of the retinal age gap with arterial stiffness index (ASI) and incident cardiovascular disease (CVD). Methods: A deep learning (DL) model was trained on 19,200 fundus images of 11,052 participants without any past medical history at baseline to predict retinal age. The retinal age gap (predicted retinal age minus chronological age) was generated for the remaining 35,917 participants. Regression models were used to assess the association between retinal age gap and ASI. Cox proportional hazards regression models and restricted cubic splines were used to explore the association between retinal age gap and incident CVD. Results: Each one-year increase in retinal age gap was associated with increased ASI (β=0.002, 95% confidence interval [CI]: 0.001-0.003, P<0.001). After a median follow-up of 5.83 years (interquartile range [IQR]: 5.73-5.97), 675 participants (2.00%) developed CVD. In the fully adjusted model, each one-year increase in retinal age gap was associated with a 3% increase in the risk of incident CVD (hazard ratio [HR]=1.03, 95% CI: 1.01-1.06, P=0.012). In the restricted cubic splines analysis, the risk of incident CVD increased significantly once the retinal age gap reached 1.21 (HR=1.05; 95% CI: 1.00-1.10; P-overall <0.0001; P-nonlinear=0.0681). Conclusion: The retinal age gap was significantly associated with ASI and incident CVD events, supporting the potential of this novel biomarker for identifying individuals at high risk of future CVD events.
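The biomarker itself and the interpretation of the reported hazard ratio are simple arithmetic. This sketch shows both; the multiplicative-risk step assumes the standard Cox proportional hazards interpretation (risk scales as HR per unit of the covariate), and the function names are illustrative, not from the study.

```python
def retinal_age_gap(predicted_age: float, chronological_age: float) -> float:
    """Retinal age gap: predicted retinal age minus chronological age (years)."""
    return predicted_age - chronological_age

def relative_risk(gap_years: float, hr_per_year: float = 1.03) -> float:
    """Risk multiplier implied by a Cox model with the reported HR per year.

    Under proportional hazards, risk scales multiplicatively: HR ** gap.
    """
    return hr_per_year ** gap_years
```

For example, a participant whose retina "looks" five years older than their chronological age would carry roughly a 1.03^5 ≈ 16% higher hazard of incident CVD under the fully adjusted model.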

2022 ◽  
Vol 15 (2) ◽  
pp. 027001
Yang Cui ◽  
Taiki Takamatsu ◽  
Koichi Shimizu ◽  
Takeo Miyake

Abstract For the diagnosis and treatment of eye diseases, an ideal fundus imaging system should be portable, low-cost, and high-resolution. Here, we demonstrate a non-mydriatic near-infrared fundus imaging system with light illumination from an electronic contact lens (E-lens). The E-lens illuminates the retinal and choroidal structures for capturing fundus images when voltage is applied wirelessly to the lens. We also reconstruct the images with a depth-dependent point-spread function to suppress the scattering effect, which yields clear fundus images.
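Reconstruction with a known point-spread function is classically done in the frequency domain. As a hedged illustration only (the paper's depth-dependent method is not specified here), this numpy sketch shows a basic Wiener deconvolution with a single PSF; the regularization constant `k` and all names are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, k: float = 0.01) -> np.ndarray:
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + k) * G.

    blurred: degraded image; psf: point-spread function (origin at [0, 0]);
    k: noise-to-signal regularization constant.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # PSF transfer function
    G = np.fft.fft2(blurred)                # degraded image spectrum
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

A depth-dependent scheme would apply a different PSF per depth layer and combine the reconstructions; the single-PSF version above shows only the core deblurring step.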

Abdelali Elmoufidi ◽  
Ayoub Skouta ◽  
Said Jai-Andaloussi ◽  
Ouail Ouchetto

In ophthalmology, glaucoma affects an increasing number of people and is a major cause of blindness. Early detection avoids severe ocular complications such as glaucoma, cystoid macular edema, or proliferative diabetic retinopathy. Artificial intelligence has been confirmed beneficial for glaucoma assessment. In this paper, we describe an approach to automating glaucoma diagnosis using fundus images. The proposed framework proceeds in order: the Bi-dimensional Empirical Mode Decomposition (BEMD) algorithm decomposes the Regions of Interest (ROI) into components (BIMFs + residue). The VGG19 CNN architecture extracts features from the decomposed BEMD components. The features of the same ROI are then fused into a bag of features. These bags are very long; therefore, Principal Component Analysis (PCA) is used to reduce the feature dimensions. The resulting bags of features are the input to a classifier based on the Support Vector Machine (SVM). To train the models, we used two public datasets, ACRIMA and REFUGE. For testing, we used held-out parts of ACRIMA and REFUGE plus four other public datasets: RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF. Using the model trained on REFUGE, an overall precision of 98.31%, 98.61%, 96.43%, 96.67%, 95.24%, and 98.60% is obtained on the ACRIMA, REFUGE, RIM-ONE, ORIGA-light, Drishti-GS1, and sjchoi86-HRF datasets, respectively. Using the model trained on ACRIMA, an accuracy of 98.92%, 99.06%, 98.27%, 97.10%, 96.97%, and 96.36% is obtained on the same datasets, respectively. The experimental results across datasets demonstrate the efficiency and robustness of the proposed approach, and a comparison with recent work in the literature shows a significant advance.
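The dimensionality-reduction stage of the pipeline (PCA over the fused feature bags, before the SVM) can be sketched in a few lines of numpy. This is not the authors' implementation; the SVD-based PCA below and its function name are illustrative stand-ins.

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project feature bags onto the top principal components.

    X: (n_samples, n_features) matrix of fused feature bags.
    Returns the (n_samples, n_components) reduced representation.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores on top components
```

The reduced vectors would then be fed to the SVM classifier; keeping only the leading components shortens the very long feature bags while preserving most of their variance.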
