Screening Referable Diabetic Retinopathy Using a Semi-automated Deep Learning Algorithm Assisted Approach

2021 ◽  
Vol 8 ◽  
Author(s):  
Yueye Wang ◽  
Danli Shi ◽  
Zachary Tan ◽  
Yong Niu ◽  
Yu Jiang ◽  
...  

Purpose: To assess the accuracy and efficacy of a semi-automated deep learning algorithm (DLA)-assisted approach to detect vision-threatening diabetic retinopathy (DR). Methods: We developed a two-step semi-automated DLA-assisted approach to grade fundus photographs for vision-threatening referable DR. Study images were obtained from the Lingtou Cohort Study and captured at participant enrollment in 2009–2010 ("baseline images") and at annual follow-up between 2011 and 2017. First, a validated DLA automatically graded baseline images for referable DR and classified them as positive, negative, or ungradable. Next, each positive image, all other available images from patients who had a positive image, and a 5% random sample of all negative images were selected and regraded by trained human graders. A reference standard diagnosis was assigned once all graders reached a consistent grading outcome, or by a senior ophthalmologist's final diagnosis. The semi-automated DLA-assisted approach thus combined initial DLA screening with subsequent human grading of images identified as high risk. This approach was further validated on the follow-up image datasets, and its time and economic costs were evaluated against fully human grading. Results: For the evaluation of baseline images, a total of 33,115 images were included and automatically graded by the DLA. 2,604 images (480 positive results, 624 other available images from participants with a positive result, and 1,500 random negative samples) were selected and regraded by human graders. The DLA achieved an area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy of 0.953, 0.970, 0.879, and 88.6%, respectively. In further validation on the follow-up image datasets, a total of 88,363 images were graded using this semi-automated approach, with human grading performed on 8,975 selected images. The DLA achieved an AUC, sensitivity, and specificity of 0.914, 0.852, and 0.853, respectively. Compared with fully human grading, the semi-automated DLA-assisted approach achieved an estimated 75.6% saving in time and 90.1% saving in economic cost. Conclusions: The DLA described in this study achieved high accuracy, sensitivity, and specificity in grading fundus images for referable DR. Validated against long-term follow-up datasets, the semi-automated DLA-assisted approach accurately identified suspect cases and minimized misdiagnosis while balancing safety, time, and economic cost.
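The two-step triage described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `dla_grade` is a hypothetical stand-in for the validated DLA, and the 5% negative-sampling rate is taken from the abstract.

```python
import random

def triage_for_human_review(images, dla_grade, negative_sample_rate=0.05):
    """Step 1: the DLA grades every image. Step 2: positive and
    ungradable images, plus a random sample of negatives, are routed
    to human graders. Sketch only; `dla_grade` returns 'positive',
    'negative', or 'ungradable'."""
    selected = []
    for image in images:
        grade = dla_grade(image)
        if grade in ("positive", "ungradable"):
            selected.append(image)
        elif random.random() < negative_sample_rate:
            selected.append(image)
    return selected
```

Under this scheme human effort scales with the DLA-positive rate plus the sampling fraction, which is what drives the reported time and cost savings.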

Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify its performance. A 5-point scoring system based on the clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results on the 120,000 held-out frames exhibited 93% accuracy. The separate CE cases exhibited substantial agreement between the deep learning algorithm's scores and the clinicians' assessments (Cohen's kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively; p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective, automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
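As a sketch of how the reported 2.95 cut-off might be applied in practice, assuming per-frame scores on the paper's 1-to-5 scale (the function name and inputs are hypothetical, not the authors' code):

```python
def classify_preparation(frame_scores, cutoff=2.95):
    """Average hypothetical per-frame cleansing scores (1-5) over a
    case and apply the reported 2.95 cut-off for clinically adequate
    small-bowel preparation."""
    average = sum(frame_scores) / len(frame_scores)
    return average, "adequate" if average >= cutoff else "inadequate"
```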


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Aan Chu ◽  
David Squirrell ◽  
Andelka M. Phillips ◽  
Ehsan Vaghefi

This systematic review was performed to identify the specifics of an optimal diabetic retinopathy deep learning algorithm by identifying the best exemplar research studies in the field, while highlighting potential barriers to clinical implementation of such an algorithm. Searching five electronic databases (Embase, MEDLINE, Scopus, PubMed, and the Cochrane Library) returned 747 unique records on 20 December 2019. Predetermined inclusion and exclusion criteria were applied to the search results, yielding the 15 highest-quality publications. A manual search through the reference lists of relevant review articles found in the database search was conducted, yielding no additional records. A validation dataset of the trained deep learning algorithms was used to derive a set of optimal properties for an ideal diabetic retinopathy classification algorithm. Potential limitations to the clinical implementation of such systems were identified as lack of generalizability, limited screening scope, and data sovereignty issues. It is concluded that deep learning algorithms in the context of diabetic retinopathy screening have reported impressive results. Despite this, the potential sources of limitation in such systems must be evaluated carefully. An ideal deep learning algorithm should be clinic-, clinician-, and camera-agnostic; it should comply with local regulations for data sovereignty, storage, privacy, and reporting, while requiring minimal human input.


2021 ◽  
Vol 8 (3) ◽  
pp. 619
Author(s):  
Candra Dewi ◽  
Andri Santoso ◽  
Indriati Indriati ◽  
Nadia Artha Dewi ◽  
Yoke Kusuma Arbawa

The increasing number of people with diabetes is one factor behind the high prevalence of diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is a retinal photograph. In this research, diabetic retinopathy is identified automatically using retinal fundus images and the Convolutional Neural Network (CNN) algorithm, a variant of deep learning. An obstacle found in the recognition process is that the colour of the retina tends toward yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, in this research, various color spaces were tested to obtain better results. In trials using 1,000 images, the RGB, HSI, YUV, and L*a*b* color spaces gave suboptimal results on balanced data, with the best accuracy still below 50%. On unbalanced data, however, accuracy was fairly high: 83.53% on training data in the YUV color space, and 74.40% on test data across all color spaces.
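A minimal sketch of the colour-space preprocessing step, converting an RGB pixel to YUV with the standard BT.601 coefficients. This is one plausible choice of matrix; the abstract does not state which YUV variant the authors used.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel (channels in 0-255) to YUV using BT.601
    coefficients, as a preprocessing step before CNN training.
    Y is luma; U and V are blue- and red-difference chroma."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v
```

Applying such a conversion image-wide decorrelates brightness from colour, which is one reason a yellowish-red retinal image can separate better in YUV than in raw RGB.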


Ophthalmology ◽  
2019 ◽  
Vol 126 (4) ◽  
pp. 552-564 ◽  
Author(s):  
Rory Sayres ◽  
Ankur Taly ◽  
Ehsan Rahimy ◽  
Katy Blumer ◽  
David Coz ◽  
...  

2020 ◽  
Vol 4 (12) ◽  
pp. 1197-1207
Author(s):  
Wanshan Ning ◽  
Shijun Lei ◽  
Jingjing Yang ◽  
Yukun Cao ◽  
Peiran Jiang ◽  
...  

Abstract Data from patients with coronavirus disease 2019 (COVID-19) are essential for guiding clinical decision making, for furthering the understanding of this viral disease, and for diagnostic modelling. Here, we describe an open resource containing data from 1,521 patients with pneumonia (including COVID-19 pneumonia) consisting of chest computed tomography (CT) images, 130 clinical features (from a range of biochemical and cellular analyses of blood and urine samples) and laboratory-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) clinical status. We show the utility of the database for prediction of COVID-19 morbidity and mortality outcomes using a deep learning algorithm trained with data from 1,170 patients and 19,685 manually labelled CT slices. In an independent validation cohort of 351 patients, the algorithm discriminated between negative, mild and severe cases with areas under the receiver operating characteristic curve of 0.944, 0.860 and 0.884, respectively. The open database may have further uses in the diagnosis and management of patients with COVID-19.


Author(s):  
Sarah Eskreis-Winkler ◽  
Natsuko Onishi ◽  
Katja Pinker ◽  
Jeffrey S Reiner ◽  
Jennifer Kaplan ◽  
...  

Abstract Objective To investigate the feasibility of using deep learning to identify tumor-containing axial slices on breast MRI images. Methods This IRB-approved retrospective study included consecutive patients with operable invasive breast cancer undergoing pretreatment breast MRI between January 1, 2014, and December 31, 2017. Axial tumor-containing slices from the first postcontrast phase were extracted. Each axial image was subdivided into two subimages: one of the ipsilateral cancer-containing breast and one of the contralateral healthy breast. Cases were randomly divided into training, validation, and testing sets. A convolutional neural network was trained to classify subimages into "cancer" and "no cancer" categories. Accuracy, sensitivity, and specificity of the classification system were determined using pathology as the reference standard. A two-reader study was performed to measure the time savings of the deep learning algorithm using descriptive statistics. Results Two hundred and seventy-three patients with unilateral breast cancer met study criteria. On the held-out test set, accuracy of the deep learning system for tumor detection was 92.8% (648/706; 95% confidence interval: 89.7%–93.8%). Sensitivity and specificity were 89.5% and 94.3%, respectively. Readers spent 3 to 45 seconds scrolling to the tumor-containing slices without use of the deep learning algorithm. Conclusion In breast MR exams containing breast cancer, deep learning can be used to identify the tumor-containing slices. This technology may be integrated into the picture archiving and communication system to bypass scrolling when viewing stacked images, which can be helpful during nonsystematic image viewing, such as during interdisciplinary tumor board meetings.
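The subimage step described above can be sketched as below. The fixed midline split and the mirroring of one half are illustrative assumptions; the abstract does not detail how the breasts were cropped.

```python
import numpy as np

def split_axial_slice(slice_2d):
    """Split an axial MR slice into left and right breast subimages,
    mirroring the right half so both share the same orientation.
    Sketch only; a real pipeline would crop to breast tissue rather
    than split at a fixed midline."""
    height, width = slice_2d.shape
    left = slice_2d[:, : width // 2]
    right = np.fliplr(slice_2d[:, width // 2:])
    return left, right
```

Mirroring one half means a single CNN sees both breasts in a consistent orientation, so it does not have to learn laterality separately.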


2021 ◽  
Author(s):  
Dong Chuang Guo ◽  
Jun Gu ◽  
Jian He ◽  
Hai Rui Chu ◽  
Na Dong ◽  
...  

Abstract Background: Hematoma expansion is an independent predictor of patient outcome and mortality. The early diagnosis of hematoma expansion is crucial for selecting clinical treatment options. This study aims to explore the value of a deep learning algorithm for predicting hematoma expansion from noncontrast computed tomography (NCCT) scans through external validation. Methods: 102 NCCT images of hypertensive intracerebral hemorrhage (HICH) patients diagnosed in our hospital were retrospectively reviewed. The initial computed tomography (CT) scan images were evaluated both by commercial artificial intelligence (AI) software using a deep learning algorithm and by radiologists to predict hematoma expansion, and the corresponding sensitivity and specificity of the two groups were calculated and compared. Pair-wise comparisons were conducted among the gold-standard hematoma expansion diagnosis time, the AI software's diagnosis time, and the doctors' reading time. Results: Among the 102 HICH patients, the sensitivity, specificity, and accuracy of predicting hematoma expansion in the AI group were higher than those in the doctor group (80.0% vs. 66.7%, 73.6% vs. 58.3%, and 75.5% vs. 60.8%, respectively), with statistically significant differences (p < 0.05). The AI diagnosis time (2.8 ± 0.3 s) and the doctors' diagnosis time (11.7 ± 0.3 s) were both significantly shorter than the gold-standard diagnosis time (14.5 ± 8.8 h) (p < 0.05), and the AI diagnosis time was significantly shorter than the doctors' (p < 0.05). Conclusions: The deep learning algorithm could effectively predict hematoma expansion at an early stage from the initial CT scan images of HICH patients after onset, with high sensitivity and specificity and a greatly shortened diagnosis time, providing a new, accurate, easy-to-use, and fast method for the early prediction of hematoma expansion.


2019 ◽  
Vol 137 (9) ◽  
pp. 987 ◽  
Author(s):  
Varun Gulshan ◽  
Renu P. Rajan ◽  
Kasumi Widner ◽  
Derek Wu ◽  
Peter Wubbels ◽  
...  

2020 ◽  
Vol 41 (46) ◽  
pp. 4400-4411 ◽  
Author(s):  
Shen Lin ◽  
Zhigang Li ◽  
Bowen Fu ◽  
Sipeng Chen ◽  
Xi Li ◽  
...  

Abstract Aims Facial features have been associated with increased risk of coronary artery disease (CAD). We developed and validated a deep learning algorithm for detecting CAD from facial photos. Methods and results We conducted a multicentre cross-sectional study of patients undergoing coronary angiography or computed tomography angiography at nine Chinese sites to train and validate a deep convolutional neural network for the detection of CAD (at least one stenosis ≥50%) from patient facial photos. Between July 2017 and March 2019, 5796 patients from eight sites were consecutively enrolled and randomly divided into training (90%, n = 5216) and validation (10%, n = 580) groups for algorithm development. Between April 2019 and July 2019, 1013 patients from nine sites were enrolled in the test group for algorithm testing. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated using radiologist diagnosis as the reference standard. Using an operating cut point with high sensitivity, the CAD detection algorithm had a sensitivity of 0.80 and a specificity of 0.54 in the test group; the AUC was 0.730 (95% confidence interval, 0.699–0.761). The AUC for the algorithm was higher than that for the Diamond–Forrester model (0.730 vs. 0.623, P < 0.001) and the CAD consortium clinical score (0.730 vs. 0.652, P < 0.001). Conclusion Our results suggest that a deep learning algorithm based on facial photos can assist in CAD detection in this Chinese cohort. This technique may hold promise for pre-test CAD probability assessment in outpatient clinics or CAD screening in the community. Further studies to develop a clinically available tool are warranted.

