A robust algorithm for optic disc segmentation and fovea detection in retinal fundus images

2017 ◽  
Vol 3 (2) ◽  
pp. 533-537 ◽  
Author(s):  
Caterina Rust ◽  
Stephanie Häger ◽  
Nadine Traulsen ◽  
Jan Modersitzki

Abstract: Accurate optic disc (OD) segmentation and fovea detection in retinal fundus images are crucial for ophthalmological diagnosis. We propose a robust, broadly applicable algorithm for automated, reliable, and consistent fovea detection based on OD segmentation. The OD segmentation is performed with morphological operations and Fuzzy C-Means clustering combined with iterative thresholding on a foreground segmentation. The fovea detection is based on a vessel segmentation via morphological operations and uses the resulting OD segmentation to determine multiple regions of interest; the fovea is then located in the largest vessel-free candidate region. We tested the new method on a total of 190 images from three publicly available databases: DRIONS, DRIVE, and HRF. Compared to the results of two human experts on the DRIONS database, our OD segmentation yielded a Dice coefficient of 0.83; note that missing ground truth and inter-expert variability remain an issue. The new scheme achieved an overall success rate of 99.44% for OD detection and 96.25% for fovea detection, which is superior to state-of-the-art approaches.
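The Fuzzy C-Means clustering step mentioned in this abstract can be sketched as follows. This is a minimal illustration on a toy image, not the authors' implementation; the function `fuzzy_c_means`, the parameter choices, and the 0.5 membership cutoff are assumptions for demonstration only, and the subsequent morphological cleanup and iterative thresholding are omitted.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=30):
    """Minimal Fuzzy C-Means on 1-D intensity values.

    Returns the cluster centres and the fuzzy membership matrix
    (shape: n_clusters x n_values).
    """
    # Initialise centres spread across the intensity range (deterministic).
    centres = np.linspace(values.min(), values.max(), n_clusters)
    for _ in range(n_iter):
        # Distance of every value to every centre (epsilon avoids div-by-zero).
        d = np.abs(values[None, :] - centres[:, None]) + 1e-12
        # Standard FCM membership update for fuzzifier m.
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
        um = u ** m
        # Centres are membership-weighted means of the values.
        centres = (um @ values) / um.sum(axis=1)
    return centres, u

# Toy "fundus" intensities: dark background plus one bright optic-disc blob.
img = np.zeros((32, 32))
img[10:18, 10:18] = 1.0

centres, u = fuzzy_c_means(img.ravel())
bright = int(np.argmax(centres))
mask = (u[bright] > 0.5).reshape(img.shape)     # candidate OD region
```

In the paper's pipeline this fuzzy mask would then be refined with morphological operations and iterative thresholding on the foreground segmentation.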

2020 ◽  
Vol 10 (11) ◽  
pp. 3833 ◽  
Author(s):  
Haidar Almubarak ◽  
Yakoub Bazi ◽  
Naif Alajlan

In this paper, we propose a method for localizing the optic nerve head and segmenting the optic disc/cup in retinal fundus images. The approach is based on a simple two-stage Mask-RCNN, in contrast to the sophisticated methods that represent the state of the art in the literature. In the first stage, we detect and crop around the optic nerve head, then feed the cropped image as input to the second stage. The second-stage network is trained with a weighted loss to produce the final segmentation. To further improve the detection in the first stage, we propose a new fine-tuning strategy that combines the cropping output of the first stage with the original training image to train a new detection network using different scales for the region proposal network anchors. We evaluate the method on the Retinal Fundus Images for Glaucoma Analysis (REFUGE), Magrabi, and MESSIDOR datasets, using the REFUGE training subset to train the models. Our method achieved a mean absolute error in the vertical cup-to-disc ratio (MAE vCDR) of 0.0430 on the REFUGE test set, compared to 0.0414 obtained by complex, multi-network ensemble methods. The models trained with the proposed method transfer well to datasets outside REFUGE, achieving an MAE vCDR of 0.0785 and 0.077 on the MESSIDOR and Magrabi datasets, respectively, without retraining. In terms of detection accuracy, the proposed fine-tuning strategy improved the detection rate from 96.7% to 98.04% on MESSIDOR and from 93.6% to 100% on Magrabi, compared to the detection rates reported in the literature.
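The evaluation metric used above, the vertical cup-to-disc ratio, can be computed from binary cup and disc masks as sketched below. This is a plausible reading of the metric, not the authors' code; `vertical_cdr` and the toy masks are hypothetical, and the Mask-RCNN stages that produce the masks are not shown.

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary masks (rows = vertical axis)."""
    def vertical_extent(mask):
        rows = np.flatnonzero(mask.any(axis=1))     # rows containing the region
        return rows[-1] - rows[0] + 1 if rows.size else 0
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else 0.0

# Toy segmentation: disc spans 40 rows, cup spans 16 rows.
disc = np.zeros((100, 100), bool); disc[30:70, 30:70] = True
cup = np.zeros((100, 100), bool); cup[42:58, 42:58] = True

vcdr = vertical_cdr(cup, disc)   # 16 / 40 = 0.4
```

The reported MAE vCDR would then be the mean absolute difference between such predicted ratios and the ground-truth ratios over a test set.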


Author(s):  
D. N. H. Thanh ◽  
D. Sergey ◽  
V. B. Surya Prasath ◽  
N. H. Hai

Abstract: Diabetes is a common disease in modern life. According to WHO data, in 2018, 8.3% of the adult population had diabetes. Many countries around the world spend considerable financial and human resources treating this disease. One of the most dangerous complications diabetes can cause is blood vessel lesions, which can occur in organs, limbs, eyes, etc. In this paper, we propose an adaptive principal curvature and three blood vessel segmentation methods for retinal fundus images based on the adaptive principal curvature and image derivatives: the central difference, the Sobel operator, and the Prewitt operator. These methods are useful for assessing the lesion level of ocular blood vessels so that doctors can specify a suitable treatment regimen; they can also be extended to blood vessel segmentation in other organs and other parts of the human body. In experiments, we apply the proposed methods and compare their segmentation results on the DRIVE dataset. Segmentation quality is assessed with the Sørensen-Dice similarity, the Jaccard similarity, and the contour matching score against ground truth segmented manually by a human.
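Two of the quality measures named in this abstract, the Sørensen-Dice and Jaccard similarities, have standard definitions on binary masks and can be sketched as follows. This is a generic illustration, not the paper's evaluation code; the toy masks are hypothetical.

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice similarity: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard similarity: |A∩B| / |A∪B|."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

# Toy vessel masks: prediction shifted two rows against the ground truth.
gt = np.zeros((8, 8), bool);   gt[0:4, 0:4] = True    # 16 pixels
pred = np.zeros((8, 8), bool); pred[2:6, 0:4] = True  # 16 pixels, 8 overlap

d = dice(pred, gt)       # 2*8 / (16+16) = 0.5
j = jaccard(pred, gt)    # 8 / 24 ≈ 0.333
```

On DRIVE, these scores would be computed between each method's vessel mask and the manually segmented ground truth.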


2021 ◽  
Author(s):  
Indu Ilanchezian ◽  
Dmitry Kobak ◽  
Hanna Faber ◽  
Focke Ziemssen ◽  
Philipp Berens ◽  
...  

Deep neural networks (DNNs) are able to predict a person's gender from retinal fundus images with high accuracy, even though this task is usually considered hardly possible by ophthalmologists. It has therefore been an open question which features allow reliable discrimination between male and female fundus images. To study this question, we used a particular DNN architecture called BagNet, which extracts local features from small image patches and then averages the class evidence across all patches. The BagNet performed on par with the more sophisticated Inception-v3 model, showing that the gender information can be read out from local features alone. BagNets also naturally provide saliency maps, which we used to highlight the most informative patches in fundus images. We found that most evidence was provided by patches from the optic disc and the macula, with optic disc patches providing mostly male and macula patches providing mostly female evidence. Although further research is needed to clarify the exact nature of this evidence, our results suggest that there are localized structural differences in fundus images between genders. Overall, we believe that BagNets may provide a compelling alternative to standard DNN architectures in other medical image analysis tasks as well, as they do not require post-hoc explainability methods.
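The BagNet idea of averaging local class evidence can be sketched as below. This is a toy stand-in, not the trained model: `patch_logit` here is a dummy intensity-based score replacing the learned local classifier, and the patch size, stride, and function names are assumptions. The point is that the per-patch logits form a saliency map for free, while their average is the global prediction.

```python
import numpy as np

def bagnet_style_score(image, patch_size=8, stride=8, patch_logit=None):
    """Average class evidence over local patches (BagNet-style).

    `patch_logit` maps one patch to a scalar logit; the default is a
    toy stand-in (mean intensity minus 0.5) for a learned classifier.
    """
    if patch_logit is None:
        patch_logit = lambda p: p.mean() - 0.5
    h, w = image.shape
    heat = []
    for i in range(0, h - patch_size + 1, stride):
        row = [patch_logit(image[i:i + patch_size, j:j + patch_size])
               for j in range(0, w - patch_size + 1, stride)]
        heat.append(row)
    heat = np.array(heat)        # per-patch evidence doubles as a saliency map
    return heat.mean(), heat     # global logit = average of local logits

# Toy image: one bright patch contributes positive evidence, the rest negative.
img = np.zeros((32, 32))
img[0:8, 0:8] = 1.0
score, heat = bagnet_style_score(img)
```

Because the global score is a plain average of spatially localized logits, the map `heat` directly shows which regions (here, which patches) drove the decision, with no post-hoc attribution step.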

