Evolutionary Algorithm With Memetic Search Capability for Optic Disc Localization in Retinal Fundus Images

Author(s): B. Vinoth Kumar, G.R. Karpagam, Yanjun Zhao
2019, Vol 19 (1)
Author(s): Muhammad Naseer Bajwa, Muhammad Imran Malik, Shoaib Ahmed Siddiqui, Andreas Dengel, Faisal Shafait, ...


2020, Vol 10 (11), pp. 3833
Author(s): Haidar Almubarak, Yakoub Bazi, Naif Alajlan

In this paper, we propose a method for localizing the optic nerve head and segmenting the optic disc/cup in retinal fundus images. In contrast to the sophisticated methods that represent the state of the art in the literature, the approach is based on a simple two-stage Mask R-CNN. In the first stage, we detect and crop around the optic nerve head, then feed the cropped image as input to the second stage. The second-stage network is trained with a weighted loss to produce the final segmentation. To further improve the detection in the first stage, we propose a new fine-tuning strategy that combines the cropped output of the first stage with the original training images to train a new detection network using different scales for the region proposal network anchors. We evaluate the method on the Retinal Fundus Images for Glaucoma Analysis (REFUGE), Magrabi, and MESSIDOR datasets, using the REFUGE training subset to train the models. Our method achieved a mean absolute error in the vertical cup-to-disc ratio (MAE vCDR) of 0.0430 on the REFUGE test set, compared to 0.0414 obtained by complex methods that ensemble multiple networks. The models trained with the proposed method transfer well to datasets outside REFUGE, achieving an MAE vCDR of 0.0785 and 0.077 on the MESSIDOR and Magrabi datasets, respectively, without being retrained. In terms of detection accuracy, the proposed fine-tuning strategy improved the detection rate from 96.7% to 98.04% on MESSIDOR and from 93.6% to 100% on Magrabi, compared to the detection rates reported in the literature.
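To make the reported evaluation metric concrete, here is a minimal sketch (not the authors' code) of how the vertical cup-to-disc ratio and its mean absolute error can be computed from binary cup and disc segmentation masks. The function names and mask conventions are illustrative assumptions.

```python
# Minimal sketch: vCDR from binary masks, and the MAE vCDR metric.
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent (in pixels) of the foreground region of a binary mask."""
    rows = np.any(mask > 0, axis=1)              # rows containing foreground pixels
    if not rows.any():
        return 0
    top = np.argmax(rows)                        # first foreground row
    bottom = len(rows) - np.argmax(rows[::-1]) - 1  # last foreground row
    return bottom - top + 1

def vcdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: cup height divided by disc height."""
    disc_h = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc_h if disc_h else 0.0

def mae_vcdr(pred_pairs, true_pairs) -> float:
    """Mean absolute error between predicted and ground-truth vCDR values.

    Each argument is a list of (cup_mask, disc_mask) pairs.
    """
    preds = np.array([vcdr(c, d) for c, d in pred_pairs])
    trues = np.array([vcdr(c, d) for c, d in true_pairs])
    return float(np.mean(np.abs(preds - trues)))
```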


2021
Author(s): Indu Ilanchezian, Dmitry Kobak, Hanna Faber, Focke Ziemssen, Philipp Berens, ...

Deep neural networks (DNNs) can predict a person's gender from retinal fundus images with high accuracy, even though ophthalmologists usually consider this task hardly possible. It has therefore been an open question which features allow reliable discrimination between male and female fundus images. To study this question, we used a particular DNN architecture called BagNet, which extracts local features from small image patches and then averages the class evidence across all patches. The BagNet performed on par with the more sophisticated Inception-v3 model, showing that the gender information can be read out from local features alone. BagNets also naturally provide saliency maps, which we used to highlight the most informative patches in fundus images. We found that most evidence was provided by patches from the optic disc and the macula, with patches from the optic disc providing mostly male evidence and patches from the macula providing mostly female evidence. Although further research is needed to clarify the exact nature of this evidence, our results suggest that there are localized structural differences between male and female fundus images. Overall, we believe that BagNets may provide a compelling alternative to standard DNN architectures in other medical image analysis tasks as well, since they do not require post-hoc explainability methods.
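As a rough illustration of the BagNet idea described above (assumed PyTorch; this is not the authors' implementation), the sketch below computes class logits from local patches through convolutions with a deliberately small receptive field, then averages the class evidence spatially to form the image-level prediction. The per-location logits double as a saliency map.

```python
# Minimal BagNet-style sketch: local patch evidence, spatially averaged.
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Three 3x3 convolutions give a 7x7 receptive field, so each
        # per-location logit depends only on a small local image patch.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=1), nn.ReLU(),
        )
        # 1x1 convolution maps local features to per-patch class logits.
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x):
        patch_logits = self.classifier(self.features(x))  # (B, C, H', W')
        image_logits = patch_logits.mean(dim=(2, 3))      # average class evidence
        return image_logits, patch_logits                 # prediction + saliency map

model = TinyBagNet()
img = torch.randn(1, 3, 224, 224)        # dummy fundus image
logits, saliency = model(img)
```

Because the image-level logit is just the mean of the per-patch logits, visualizing `saliency` directly shows which patches contributed most evidence for each class, with no post-hoc attribution method required.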

