Evaluation of a New Neural Network Classifier for Diabetic Retinopathy

2021 ◽  
pp. 193229682110426
Author(s):  
Or Katz ◽  
Dan Presil ◽  
Liz Cohen ◽  
Roi Nachmani ◽  
Naomi Kirshner ◽  
...  

Background: Medical image segmentation is a well-studied subject within the field of image processing. The goal of this research is to create an AI retinal screening grading system that is both accurate and fast. We introduce a new segmentation network which achieves state-of-the-art results on semantic segmentation of color fundus photographs. By applying the network to identify anatomical markers of diabetic retinopathy (DR) and diabetic macular edema (DME), we collect sufficient information to classify patients by grades R0 and R1 or above, M0 and M1. Methods: The AI grading system was trained on screening data to evaluate the presence of DR and DME. The core algorithm of the system is a deep learning network that segments relevant anatomical features in a retinal image. Patients were graded according to the standard NHS Diabetic Eye Screening Program feature-based grading protocol. Results: The algorithm's performance was evaluated on a series of 6,981 patient retinal images from routine diabetic eye screenings. It correctly predicted 98.9% of retinopathy events and 95.5% of maculopathy events. The prediction rate for non-disease events was 68.6% for retinopathy and 81.2% for maculopathy. Conclusion: This novel deep learning model, trained and tested on patient data from annual diabetic retinopathy screenings, can classify with high accuracy the DR and DME status of a person with diabetes. The system can be easily reconfigured according to any grading protocol without running a long AI training procedure. Incorporating the AI grading system can increase graders' productivity and improve the final accuracy of the screening process.
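The feature-based grading step described above can be sketched as a simple rule layer on top of the segmentation output. The lesion names and thresholds below are illustrative assumptions, not the NHS protocol's actual criteria:

```python
# Illustrative rule layer: map segmented lesion counts to screening grades.
# Lesion names and thresholds are hypothetical stand-ins for the NHS
# feature-based grading protocol, not its actual criteria.

def grade_retinopathy(lesions: dict) -> str:
    """Return 'R0' (no retinopathy) or 'R1+' (retinopathy present)."""
    dr_markers = ("microaneurysm", "haemorrhage", "exudate")
    present = any(lesions.get(name, 0) > 0 for name in dr_markers)
    return "R1+" if present else "R0"

def grade_maculopathy(lesions: dict) -> str:
    """Return 'M0' or 'M1' based on lesions near the macula."""
    return "M1" if lesions.get("exudate_within_macula", 0) > 0 else "M0"

segmented = {"microaneurysm": 3, "exudate_within_macula": 0}
print(grade_retinopathy(segmented), grade_maculopathy(segmented))  # R1+ M0
```

Because the grades derive from counted features rather than an end-to-end classifier, swapping the grading protocol only means changing this rule layer, which is what makes reconfiguration possible without retraining.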

2020 ◽  
Vol 45 (12) ◽  
pp. 1550-1555
Author(s):  
Xiang-Ning Wang ◽  
Ling Dai ◽  
Shu-Ting Li ◽  
Hong-Yu Kong ◽  
Bin Sheng ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Dominik Müller ◽  
Frank Kramer

Abstract Background The increased availability and usage of modern medical imaging has induced a strong need for automatic medical image segmentation. Still, current image segmentation platforms do not provide the required functionalities for a plain setup of medical image segmentation pipelines. Already implemented pipelines are commonly standalone software, optimized for a specific public data set. Therefore, this paper introduces the open-source Python library MIScnn. Implementation The aim of MIScnn is to provide an intuitive API allowing fast building of medical image segmentation pipelines, including data I/O, preprocessing, data augmentation, patch-wise analysis, metrics, a library with state-of-the-art deep learning models, and model utilization such as training, prediction, and fully automatic evaluation (e.g. cross-validation). Similarly, high configurability and multiple open interfaces allow full pipeline customization. Results Running a cross-validation with MIScnn on the Kidney Tumor Segmentation Challenge 2019 data set (multi-class semantic segmentation with 300 CT scans) resulted in a powerful predictor based on the standard 3D U-Net model. Conclusions With this experiment, we show that the MIScnn framework enables researchers to rapidly set up a complete medical image segmentation pipeline using just a few lines of code. The source code for MIScnn is available in the Git repository: https://github.com/frankkramer-lab/MIScnn.
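The cross-validation scheme MIScnn automates can be illustrated with a minimal stdlib sketch. This is a generic k-fold loop with a placeholder train/evaluate step, not MIScnn's actual API:

```python
# Minimal k-fold cross-validation loop, illustrating the evaluation scheme
# MIScnn automates. The train/evaluate step is a placeholder, not MIScnn code.

def kfold_indices(n_samples: int, k: int):
    """Yield (train, validation) index lists for k roughly equal folds."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, val

scores = []
for train_idx, val_idx in kfold_indices(n_samples=300, k=3):
    # Placeholder for: model.train(train_idx); score = model.evaluate(val_idx)
    scores.append(len(val_idx) / 300)  # dummy "score": fraction held out

print(len(scores))  # one score per fold
```

Each sample appears in exactly one validation fold, so the k per-fold scores together cover the whole data set once.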


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Paisan Ruamviboonsuk ◽  
Jonathan Krause ◽  
Peranut Chotcomwongse ◽  
Rory Sayres ◽  
Rajiv Raman ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2215
Author(s):  
Athanasios Voulodimos ◽  
Eftychios Protopapadakis ◽  
Iason Katsamenis ◽  
Anastasios Doulamis ◽  
Nikolaos Doulamis

Recent studies indicate that detecting radiographic patterns on chest CT scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small number of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for dynamic fine-tuning of the network weights as new samples are fed into the U-Net. Experimental results indicate an improvement in segmentation accuracy when identifying COVID-19 infected regions. In particular, using 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% across all test data for the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained with the proposed few-shot U-Net architecture over the traditional U-Net model was confirmed with the Kruskal-Wallis test (p-value = 0.026).
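The IoU and F1 metrics reported above can be computed directly from binary masks. A minimal sketch follows; note that for binary segmentation, F1 equals the Dice coefficient and relates to IoU as F1 = 2·IoU / (1 + IoU), which is why the two improvements track each other so closely:

```python
# Compute IoU (Jaccard) and F1 (Dice) for flattened binary masks.

def iou(pred, target):
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def f1(pred, target):
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

p = [1, 1, 0, 1, 0]  # predicted mask (flattened)
t = [1, 0, 0, 1, 1]  # ground-truth mask (flattened)
print(round(iou(p, t), 3), round(f1(p, t), 3))  # 0.5 0.667
```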


2017 ◽  
Vol 1 (4) ◽  
pp. 69-85
Author(s):  
Evangelia Kotsiliti ◽  
Bashir Al-Diri ◽  
Andrew Hunter

Purpose: In the United Kingdom (UK), the NHS Diabetic Eye Screening Program offers an annual eye examination to all people with diabetes aged 12 or over, aiming at the early detection of people at high risk of visual loss due to diabetic retinopathy. The purpose of this study was to design a model to predict patients at risk of developing retinopathy using patient characteristics and clinical measures. Methods: We investigated data from 2011 to 2016 from the population-based Diabetic Eye Screening Program in East Anglia. The data comprised retinal eye screening results, patient characteristics, and routine biochemical measures of HbA1c, blood pressure, albumin-to-creatinine ratio (ACR), estimated glomerular filtration rate (eGFR), serum creatinine, cholesterol, and body mass index (BMI). Individuals were classified according to the presence or absence of retinopathy as indicated by their retinal eye examinations. A lasso regression, random forest, gradient boosting machine, and regularized gradient boosting model were built and cross-validated for their predictive ability. Results: A total of 6,375 subjects with recorded information for all available biochemical measures were identified from the cohorts. Of these, 5,969 individuals had no signs of diabetic retinopathy. Of the remaining 406 individuals with signs of diabetic retinopathy, 352 had background diabetic retinopathy and 54 had referable diabetic retinopathy. The highest 10-fold cross-validated area under the curve (AUC), 0.73 ± 0.03, was achieved by the gradient boosting machine, and the minimum set of variables required to yield this performance comprised four variables: duration of diabetes, HbA1c, ACR, and age. A subsequent analysis of the predictive power of the biochemical measures showed that when HbA1c and ACR measurements were available for longer time periods, the performance of the models was greatly enhanced.
When HbA1c and ACR measurements for a 5-year period prior to the event of study were available, the gradient boosting machine's cross-validated AUC was 0.77 ± 0.04, compared with 0.68 ± 0.04 when only information for the 1-year period was available for these variables. Similarly, an increment from 0.70 ± 0.02 to 0.75 ± 0.04 was observed with random forest. The dataset with the 1-year measurements comprised 4,857 subjects, of whom 4,572 had no retinopathy and the remaining 285 had signs of retinopathy. The dataset with the 5-year measurements comprised 757 subjects, of whom 696 had no retinopathy and the remaining 51 had signs of retinopathy. Conclusions: Patient information and routine biochemical measures can be used to identify patients at risk of developing retinopathy. Effective differentiation between patients with and without retinopathy could significantly reduce the number of screening visits without compromising patients' health.
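The AUC values quoted above are the rank-based (Mann-Whitney) statistic: the probability that a randomly chosen retinopathy case receives a higher risk score than a randomly chosen non-case. A minimal sketch of that computation:

```python
# Rank-based AUC: probability that a random positive outscores a random
# negative (ties count half). This is the statistic that the cross-validated
# AUCs in the study summarize.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

risk_scores = [0.9, 0.8, 0.4, 0.3, 0.2]  # illustrative model outputs
labels      = [1,   1,   0,   1,   0]    # 1 = retinopathy present
print(auc(risk_scores, labels))  # 5 of 6 positive/negative pairs are ordered correctly
```

An AUC of 0.73 therefore means that in roughly 73% of case/non-case pairs, the model ranks the case higher.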


2020 ◽  
Vol 45 (4) ◽  
pp. 1-8
Author(s):  
Doaa Elsawah ◽  
Ahmed Elnakib ◽  
Hossam El-Din Moustafa

2021 ◽  
Author(s):  
Mohammed Yousef Salem Ali ◽  
Mohamed Abdel-Nasser ◽  
Mohammed Jabreel ◽  
Aida Valls ◽  
Marc Baget

The optic disc (OD) is the point where the retinal vessels begin. The OD carries essential information linked to diabetic retinopathy (DR) and glaucoma, which may cause vision loss. Therefore, accurate segmentation of the optic disc from eye fundus images is essential for developing efficient automated DR and glaucoma detection systems. This paper presents a deep learning-based system for OD segmentation built on an ensemble of efficient semantic segmentation models. The aggregation of the different DL models was performed with ordered weighted averaging (OWA) operators. We propose a dynamically generated set of weights that can give a different contribution to each model according to its performance during the segmentation of the OD in the eye fundus images. The effectiveness of the proposed system was assessed on a fundus image dataset collected from the Hospital Sant Joan de Reus. We obtained Jaccard, Dice, precision, and recall scores of 95.40%, 95.10%, 96.70%, and 93.90%, respectively.
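An OWA operator aggregates the ensemble's per-pixel predictions by sorting them before weighting, so weights attach to rank positions rather than to fixed models. A minimal sketch, with fixed illustrative weights rather than the dynamically generated ones the paper proposes:

```python
# Ordered weighted averaging (OWA): sort the ensemble's predictions for a
# pixel in descending order, then take a weighted sum. The weights here are
# fixed and illustrative; the paper generates them from model performance.

def owa(values, weights):
    assert len(values) == len(weights)
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Three models' probabilities that one pixel belongs to the optic disc:
pixel_preds = [0.2, 0.9, 0.7]
weights = [0.5, 0.3, 0.2]         # favors the most confident prediction
print(round(owa(pixel_preds, weights), 3))  # 0.7
```

Because the weights apply to sorted values, the aggregation behaves like a tunable compromise between max, mean, and min of the ensemble outputs.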


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0247388
Author(s):  
Jingfei Hu ◽  
Hua Wang ◽  
Jie Wang ◽  
Yunqi Wang ◽  
Fang He ◽  
...  

Semantic segmentation of medical images provides an important cornerstone for subsequent tasks of image analysis and understanding. With rapid advancements in deep learning methods, conventional U-Net segmentation networks have been applied in many fields. Based on exploratory experiments, features at multiple scales have been found to be of great importance for the segmentation of medical images. In this paper, we propose a scale-attention deep learning network (SA-Net), which extracts features at different scales in a residual module and uses an attention module to enforce the scale-attention capability. SA-Net can better learn multi-scale features and achieve more accurate segmentation for different medical images. In addition, this work validates the proposed method across multiple datasets. The experimental results show that SA-Net achieves excellent performance in vessel detection in retinal images, lung segmentation, artery/vein (A/V) classification in retinal images, and blastocyst segmentation. To facilitate the use of SA-Net by the scientific community, the code implementation will be made publicly available.
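The scale-attention idea, in its simplest form, is to give each scale a learned score, turn the scores into weights with a softmax, and fuse the scales with those weights. The sketch below is a conceptual scalar illustration under that assumption, not SA-Net's actual architecture:

```python
# Conceptual sketch of attention-weighted fusion of multi-scale features:
# each scale's feature gets an attention logit; softmax turns the logits
# into weights that gate each scale's contribution. Names and values are
# illustrative, not SA-Net's actual modules.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_scales(features, logits):
    """Weighted sum of per-scale features using softmax attention."""
    weights = softmax(logits)
    return sum(w * f for w, f in zip(weights, features))

features = [0.8, 0.5, 0.1]  # one feature value per scale (fine to coarse)
logits = [2.0, 1.0, 0.0]    # hypothetical attention logits
print(round(fuse_scales(features, logits), 3))
```

In the real network the same gating is applied per channel on feature maps, letting the model emphasize whichever scale is most informative for a given structure.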


