Weakly-Supervised Simultaneous Evidence Identification and Segmentation for Automated Glaucoma Diagnosis

Author(s):  
Rongchang Zhao ◽  
Wangmin Liao ◽  
Beiji Zou ◽  
Zailiang Chen ◽  
Shuo Li

Evidence identification, optic disc segmentation and automated glaucoma diagnosis are the most clinically significant tasks for clinicians assessing fundus images. However, delivering the three tasks simultaneously is extremely challenging due to the high variability of fundus structure and the lack of datasets with complete annotations. In this paper, we propose an innovative Weakly-Supervised Multi-Task Learning method (WSMTL) for accurate evidence identification, optic disc segmentation and automated glaucoma diagnosis. The WSMTL method uses only weakly labeled data with binary diagnostic labels (normal/glaucoma) for training, while producing a pixel-level segmentation mask and a diagnosis at test time. The WSMTL consists of a skip- and densely-connected CNN to capture multi-scale discriminative representations of fundus structure; a well-designed pyramid integration structure to generate a high-resolution evidence map for evidence identification, in which pixels with higher values indicate higher confidence in highlighting abnormalities; a constrained clustering branch for optic disc segmentation; and a fully-connected discriminator for automated glaucoma diagnosis. Experimental results show that our proposed WSMTL effectively and simultaneously delivers evidence identification, optic disc segmentation (89.6% TP Dice), and accurate glaucoma diagnosis (92.4% AUC). This endows our WSMTL with great potential for effective clinical assessment of glaucoma.
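The pyramid-integration idea described above — fusing coarse, multi-scale activation maps into one high-resolution evidence map whose higher values mark higher abnormality confidence — can be sketched in a few lines of numpy. This is a minimal illustration with nearest-neighbour upsampling, not the authors' actual CNN architecture:

```python
import numpy as np

def upsample_nearest(fm, scale):
    # Enlarge a coarse map by repeating rows and columns (nearest neighbour).
    return np.repeat(np.repeat(fm, scale, axis=0), scale, axis=1)

def pyramid_evidence_map(feature_maps, out_size):
    """Fuse multi-scale activation maps into one high-resolution evidence
    map; higher pixel values mean higher confidence of an abnormality."""
    fused = np.zeros((out_size, out_size))
    for fm in feature_maps:
        fused += upsample_nearest(fm, out_size // fm.shape[0])
    fused -= fused.min()
    if fused.max() > 0:
        fused /= fused.max()  # rescale to [0, 1] confidence scores
    return fused
```

In the paper the fused map comes from learned feature pyramids; here the maps are arbitrary arrays, and only the resolution-matching and normalization steps are shown.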

2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Vijay M Mane

This paper presents an automatic optic disc and optic cup detection technique, an important step in developing systems for computer-aided eye disease diagnosis. An algorithm for localization and segmentation of the optic disc from digital retinal images is described. OD localization is achieved by a circular Hough transform with morphological preprocessing, and segmentation is achieved by the watershed transformation. Optic cup segmentation is achieved by marker-controlled watershed transformation. The cup-to-disc ratio (CDR), an important parameter for glaucoma diagnosis, is then calculated. The presented algorithm is evaluated against the publicly available DRIVE dataset. The presented methodology achieved 88% average sensitivity and 80% average overlap. The average CDR detected is 0.1983.
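Once disc and cup masks are available, the CDR step reduces to a ratio of diameters. A minimal numpy sketch using vertical diameters (the abstract does not specify vertical versus area-based measurement, so this variant is illustrative):

```python
import numpy as np

def vertical_diameter(mask):
    # Number of image rows that contain at least one foreground pixel.
    rows = np.any(mask > 0, axis=1)
    return int(rows.sum())

def cup_to_disc_ratio(disc_mask, cup_mask):
    """CDR = vertical cup diameter / vertical disc diameter."""
    d = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / d if d else 0.0
```

A cup mask spanning 4 rows inside a disc mask spanning 10 rows yields a CDR of 0.4.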


Author(s):  
Guangmin Sun ◽  
Zhongxiang Zhang ◽  
Junjie Zhang ◽  
Meilong Zhu ◽  
Xiao-rong Zhu ◽  
...  

Automatic segmentation of the optic disc (OD) and optic cup (OC) is an essential task for analysing colour fundus images. In clinical practice, accurate OD and OC segmentation assists ophthalmologists in diagnosing glaucoma. In this paper, we propose a unified convolutional neural network, named ResFPN-Net, which learns the boundary features and the inner relation between the OD and OC for automatic segmentation. The proposed ResFPN-Net is mainly composed of a multi-scale feature extractor, a multi-scale segmentation transition and an attention pyramid architecture. The multi-scale feature extractor encodes the fundus images and captures boundary representations. The multi-scale segmentation transition is employed to retain features at different scales. Moreover, an attention pyramid architecture is proposed to learn rich representations and the mutual connection between the OD and OC. To verify the effectiveness of the proposed method, we conducted extensive experiments on two public datasets. On the Drishti-GS database, we achieved Dice coefficients of 97.59% and 89.87%, accuracies of 99.21% and 98.77%, and average Hausdorff distances of 0.099 and 0.882 for OD and OC segmentation, respectively. On the RIM-ONE database, we achieved Dice coefficients of 96.41% and 83.91%, accuracies of 99.30% and 99.24%, and average Hausdorff distances of 0.166 and 1.210 for OD and OC segmentation, respectively. Comprehensive results show that the proposed method outperforms other competitive OD and OC segmentation methods and is more adaptable in cross-dataset scenarios. The introduced multi-scale loss function achieved significantly lower training loss and higher accuracy than other loss functions. Furthermore, the proposed method was validated on the OC-to-OD ratio calculation task and achieved the best MAE of 0.0499 and 0.0630 on the Drishti-GS and RIM-ONE datasets, respectively. Finally, we evaluated the effectiveness of glaucoma screening on the Drishti-GS and RIM-ONE datasets, achieving AUCs of 0.8947 and 0.7964. These results show that the proposed ResFPN-Net is effective in analysing fundus images for glaucoma screening and can be applied to other related biomedical image segmentation tasks.
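The Dice coefficient reported above is a standard overlap measure between a predicted and a ground-truth mask; it can be computed directly from binary arrays:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0
```

Dice ranges from 0 (no overlap) to 1 (perfect agreement); two identical masks score exactly 1.0.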


2019 ◽  
Vol 9 (15) ◽  
pp. 3064 ◽  
Author(s):  
Mijung Kim ◽  
Jong Chul Han ◽  
Seung Hyup Hyun ◽  
Olivier Janssens ◽  
Sofie Van Hoecke ◽  
...  

Glaucoma is a leading eye disease, causing vision loss by gradually affecting peripheral vision if left untreated. Current diagnosis of glaucoma is performed by ophthalmologists, human experts who typically need to analyze different types of medical images generated by different types of medical equipment: fundus, Retinal Nerve Fiber Layer (RNFL), Optical Coherence Tomography (OCT) disc, OCT macula, perimetry, and/or perimetry deviation. Capturing and analyzing these medical images is labor intensive and time consuming. In this paper, we present a novel approach for glaucoma diagnosis and localization that relies only on fundus images analyzed with state-of-the-art deep learning techniques. Specifically, our approach leverages Convolutional Neural Networks (CNNs) for diagnosis and Gradient-weighted Class Activation Mapping (Grad-CAM) for localization. We built and evaluated different predictive models using a large set of fundus images, collected and labeled by ophthalmologists at Samsung Medical Center (SMC). Our experimental results demonstrate that our most effective predictive model achieves a diagnosis accuracy of 96%, a sensitivity of 96%, and a specificity of 100% on Dataset-Optic Disc (OD), a set of center-cropped fundus images highlighting the optic disc. Furthermore, we present Medinoid, a publicly available prototype web application for computer-aided diagnosis and localization of glaucoma, integrating our most effective predictive model in its back-end.
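The Grad-CAM localization step has a compact formulation: each convolutional feature map is weighted by its spatially averaged gradient with respect to the class score, the weighted maps are summed, and a ReLU keeps only the positive evidence. A minimal numpy sketch with synthetic arrays (a real implementation hooks the activations and gradients out of the trained CNN):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer.
    activations, gradients: arrays of shape (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_c per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over C
    cam = np.maximum(cam, 0)                          # ReLU: positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

The resulting H×W map is then upsampled to the input resolution and overlaid on the fundus image to highlight the regions driving the glaucoma prediction.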


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Guangzhou An ◽  
Kazuko Omodaka ◽  
Kazuki Hashimoto ◽  
Satoru Tsuda ◽  
Yukihiro Shiga ◽  
...  

This study aimed to develop a machine-learning-based algorithm for glaucoma diagnosis in patients with open-angle glaucoma, based on three-dimensional optical coherence tomography (OCT) data and color fundus images. In this study, 208 glaucomatous and 149 healthy eyes were enrolled, and color fundus images and volumetric OCT data from the optic disc and macular area of these eyes were captured with a spectral-domain OCT (3D OCT-2000, Topcon). Thickness and deviation maps were created with a segmentation algorithm. Transfer learning of a convolutional neural network (CNN) was used with the following types of input images: (1) fundus image of the optic disc in grayscale format, (2) disc retinal nerve fiber layer (RNFL) thickness map, (3) macular ganglion cell complex (GCC) thickness map, (4) disc RNFL deviation map, and (5) macular GCC deviation map. Data augmentation and dropout were used to train the CNNs. To combine the results of the individual CNN models, a random forest (RF) was trained to classify healthy and glaucomatous eyes using the feature-vector representation of each input image, obtained by removing the second fully connected layer of each CNN. The area under the receiver operating characteristic curve (AUC) of a 10-fold cross validation (CV) was used to evaluate the models. The 10-fold CV AUCs of the CNNs were 0.940 for color fundus images, 0.942 for RNFL thickness maps, 0.944 for macular GCC thickness maps, 0.949 for disc RNFL deviation maps, and 0.952 for macular GCC deviation maps. The RF combining the five separate CNN models improved the 10-fold CV AUC to 0.963. Therefore, the machine learning system described here can accurately differentiate between healthy and glaucomatous subjects based on images extracted from OCT data and color fundus images. This system should help to improve diagnostic accuracy in glaucoma.
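The model-combination step — concatenating each CNN's feature vector and feeding the result to a random forest — can be sketched with numpy and scikit-learn. The features below are random stand-ins for the penultimate-layer vectors of the five trained CNNs, and the 64-dimension size is assumed for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical penultimate-layer features from five CNNs (64-dim each)
# for 40 eyes; real features would come from the trained networks.
n_eyes, dim = 40, 64
per_model = [rng.normal(size=(n_eyes, dim)) for _ in range(5)]
labels = np.array([0, 1] * (n_eyes // 2))     # 0 = healthy, 1 = glaucoma

stacked = np.concatenate(per_model, axis=1)   # one 320-dim vector per eye
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(stacked, labels)
probs = rf.predict_proba(stacked)[:, 1]       # glaucoma probability per eye
```

Stacking features rather than averaging class scores lets the forest learn which imaging modality is most informative for each decision boundary.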


2020 ◽  
Vol 10 (11) ◽  
pp. 3833 ◽  
Author(s):  
Haidar Almubarak ◽  
Yakoub Bazi ◽  
Naif Alajlan

In this paper, we propose a method for localizing the optic nerve head and segmenting the optic disc/cup in retinal fundus images. The approach is based on a simple two-stage Mask-RCNN, in contrast to the sophisticated methods that represent the state of the art in the literature. In the first stage, we detect and crop around the optic nerve head, then feed the cropped image as input to the second stage. The second-stage network is trained with a weighted loss to produce the final segmentation. To further improve the first-stage detection, we propose a new fine-tuning strategy that combines the cropping output of the first stage with the original training image to train a new detection network using different scales for the region-proposal-network anchors. We evaluate the method on the Retinal Fundus Images for Glaucoma Analysis (REFUGE), Magrabi, and MESSIDOR datasets, using the REFUGE training subset to train the models. Our method achieved a mean absolute error in the vertical cup-to-disc ratio (MAE vCDR) of 0.0430 on the REFUGE test set, compared to 0.0414 obtained by complex multiple-ensemble-network methods. The models trained with the proposed method transfer well to datasets outside REFUGE, achieving an MAE vCDR of 0.0785 and 0.077 on the MESSIDOR and Magrabi datasets, respectively, without being retrained. In terms of detection accuracy, the proposed fine-tuning strategy improved the detection rate from 96.7% to 98.04% on MESSIDOR and from 93.6% to 100% on Magrabi, compared to the detection rates reported in the literature.
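The hand-off between the two stages amounts to a margin-padded crop around the detected optic-nerve-head box. A minimal numpy sketch (the box format and the 20% margin are illustrative assumptions, not values from the paper):

```python
import numpy as np

def crop_around_box(image, box, margin=0.2):
    """Crop the image around a detected box (x0, y0, x1, y1),
    enlarged on each side by a relative margin, clipped to the image."""
    x0, y0, x1, y1 = box
    my, mx = int((y1 - y0) * margin), int((x1 - x0) * margin)
    ys = slice(max(0, y0 - my), min(image.shape[0], y1 + my))
    xs = slice(max(0, x0 - mx), min(image.shape[1], x1 + mx))
    return image[ys, xs]
```

The margin gives the second-stage segmentation network context around the optic nerve head while keeping the input small enough for fine-grained disc/cup boundaries.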

