Development and clinical deployment of a smartphone-based visual field deep learning system for glaucoma detection

2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Fei Li ◽  
Diping Song ◽  
Han Chen ◽  
Jian Xiong ◽  
Xingyi Li ◽  
...  

Abstract By 2040, ~100 million people will have glaucoma. To date, there is a lack of high-efficiency glaucoma diagnostic tools based on visual fields (VFs). Herein, we develop and evaluate the performance of ‘iGlaucoma’, a smartphone application-based deep learning system (DLS), in detecting glaucomatous VF changes. A total of 1,614,808 data points from 10,784 VFs (5542 patients) from seven centers in China were included in this study, divided over two phases. In Phase I, 1,581,060 data points from 10,135 VFs of 5105 patients were used to train (8424 VFs), validate (598 VFs) and test (three independent test sets of 200, 406 and 507 samples) the diagnostic performance of the DLS. In Phase II, using the same DLS, the iGlaucoma cloud-based application was further tested on 33,748 data points from 649 VFs of 437 patients from three glaucoma clinics. With reference to three experienced expert glaucomatologists, the diagnostic performance (area under the curve [AUC], sensitivity and specificity) of the DLS and of six ophthalmologists in detecting glaucoma was evaluated. In Phase I, the DLS outperformed all six ophthalmologists in the three test sets (AUC of 0.834–0.877, with a sensitivity of 0.831–0.922 and a specificity of 0.676–0.709). In Phase II, iGlaucoma had 0.99 accuracy in recognizing different patterns in the pattern deviation probability plot region, with a corresponding AUC, sensitivity and specificity of 0.966 (0.953–0.979), 0.954 (0.930–0.977) and 0.873 (0.838–0.908), respectively. ‘iGlaucoma’ is a clinically effective diagnostic tool for detecting glaucoma from Humphrey VFs, although the target population will need to be carefully identified with input from glaucoma expertise.
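A minimal sketch (not the authors' code) of how the reported diagnostic metrics are typically computed from a model's per-VF glaucoma scores; the arrays `y_true` and `y_score` below are hypothetical toy data, and the 0.5 operating point is an illustrative assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # 1 = glaucomatous VF (expert label)
y_score = np.array([0.91, 0.20, 0.67, 0.88, 0.35, 0.10, 0.75, 0.55])  # model output

auc = roc_auc_score(y_true, y_score)              # threshold-free ranking quality
y_pred = (y_score >= 0.5).astype(int)             # fixed operating point of 0.5
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                      # true positive rate
specificity = tn / (tn + fp)                      # true negative rate
print(f"AUC={auc:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```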

2021 ◽  
Author(s):  
YOONJE LEE ◽  
Yu-Seop KIM ◽  
Da-in Lee ◽  
Seri Jeong ◽  
Gu-Hyun Kang ◽  
...  

Abstract Reducing the time to diagnose COVID-19 helps to manage insufficient isolation-bed resources and to adequately accommodate critically ill patients in clinical settings. There is currently no alternative to RT-PCR, which requires 40 cycles to diagnose COVID-19. We propose a deep learning (DL) model to improve the speed of COVID-19 RT-PCR diagnosis. We developed and tested a DL model using the long short-term memory (LSTM) method with a dataset of fluorescence values measured in each cycle of 5,810 RT-PCR tests. Among the DL models developed here, the 21st model showed an area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 84.55%, 93.33%, and 75.72%, respectively. The 24th model showed an AUROC, sensitivity, and specificity of 91.27%, 90.00%, and 92.54%, respectively.
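Because the per-cycle fluorescence values form a time series, an LSTM is a natural fit. Below is a minimal PyTorch sketch under assumed shapes; the layer sizes and the idea of feeding only a truncated prefix of the 40 cycles are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class FluorescenceLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # single logit: positive vs negative

    def forward(self, x):                         # x: (batch, cycles, 1)
        _, (h_n, _) = self.lstm(x)                # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)     # (batch,) logits

model = FluorescenceLSTM()
curves = torch.rand(8, 21, 1)                     # hypothetical: first 21 cycles only
probs = torch.sigmoid(model(curves))              # probability each sample is positive
```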


2021 ◽  
pp. 20200513
Author(s):  
Su-Jin Jeon ◽  
Jong-Pil Yun ◽  
Han-Gyeol Yeom ◽  
Woo-Sang Shin ◽  
Jong-Hyun Lee ◽  
...  

Objective: The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Methods: Panoramic and cone beam CT (CBCT) images obtained from June 2018 to May 2020 were screened, and 1020 patients were selected. Our dataset of 2040 sound mandibular second molars comprised 887 C-shaped canals and 1153 non-C-shaped canals. To confirm the presence of a C-shaped canal, CBCT images were analyzed by a radiologist and set as the gold standard. A CNN-based deep-learning model for predicting C-shaped canals was built using Xception. The dataset was split into training and test sets at a ratio of 80% to 20%, respectively. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and precision. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were calculated. Further, gradient-weighted class activation maps (Grad-CAM) were generated to localize the anatomy that contributed to the predictions. Results: The accuracy, sensitivity, specificity, and precision of the CNN model were 95.1%, 92.7%, 97.0%, and 95.9%, respectively. Grad-CAM analysis showed that the CNN model mainly identified root canal shapes converging into the apex to predict the C-shaped canals, while the root furcation was predominantly used for predicting the non-C-shaped canals. Conclusions: The deep-learning system showed high accuracy in predicting C-shaped canals of mandibular second molars on panoramic radiographs.
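A minimal Keras sketch of an Xception-based binary classifier of the kind described; the ImageNet warm start, input size, and classification head are illustrative assumptions, not the study's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained Xception trunk without its ImageNet classification head.
base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                            # warm-start from ImageNet features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),        # P(C-shaped canal)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```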


2021 ◽  
pp. bjophthalmol-2020-316290
Author(s):  
Bing Li ◽  
Huan Chen ◽  
Bilei Zhang ◽  
Mingzhen Yuan ◽  
Xuemin Jin ◽  
...  

Aim: To explore and evaluate an appropriate deep learning system (DLS) for the detection of 12 major fundus diseases using colour fundus photography. Methods: The diagnostic performance of a DLS was tested on the detection of normal fundus and 12 major fundus diseases, including referable diabetic retinopathy, pathologic myopic retinal degeneration, retinal vein occlusion, retinitis pigmentosa, retinal detachment, wet and dry age-related macular degeneration, epiretinal membrane, macular hole, possible glaucomatous optic neuropathy, papilledema and optic nerve atrophy. The DLS was developed with 56 738 images and tested with 8176 images from one internal test set and two external test sets. A comparison with human doctors was also conducted. Results: The areas under the receiver operating characteristic curves of the DLS on the internal test set and the two external test sets were 0.950 (95% CI 0.942 to 0.957) to 0.996 (95% CI 0.994 to 0.998), 0.931 (95% CI 0.923 to 0.939) to 1.000 (95% CI 0.999 to 1.000) and 0.934 (95% CI 0.929 to 0.938) to 1.000 (95% CI 0.999 to 1.000), with sensitivities of 80.4% (95% CI 79.1% to 81.6%) to 97.3% (95% CI 96.7% to 97.8%), 64.6% (95% CI 63.0% to 66.1%) to 100% (95% CI 100% to 100%) and 68.0% (95% CI 67.1% to 68.9%) to 100% (95% CI 100% to 100%), respectively, and specificities of 89.7% (95% CI 88.8% to 90.7%) to 98.1% (95% CI 97.7% to 98.6%), 78.7% (95% CI 77.4% to 80.0%) to 99.6% (95% CI 99.4% to 99.8%) and 88.1% (95% CI 87.4% to 88.7%) to 98.7% (95% CI 98.5% to 99.0%), respectively. When compared with human doctors, the DLS obtained a higher diagnostic sensitivity but lower specificity. Conclusion: The proposed DLS is effective in diagnosing normal fundus and 12 major fundus diseases, and thus has much potential for fundus disease screening in the real world.
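One common way to let a single network flag any of 12 conditions is a multi-label head with one sigmoid output per disease, so a photograph can carry several diagnoses at once. The sketch below, with an assumed ResNet-50 backbone and abbreviated class names, only illustrates that pattern; it is not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

CONDITIONS = ["referable_DR", "pathologic_myopia", "RVO", "RP", "RD",
              "wet_AMD", "dry_AMD", "ERM", "macular_hole", "possible_GON",
              "papilledema", "optic_atrophy"]      # the 12 target diseases

backbone = models.resnet50(weights=None)           # assumed backbone choice
backbone.fc = nn.Linear(backbone.fc.in_features, len(CONDITIONS))
criterion = nn.BCEWithLogitsLoss()                 # independent per-disease losses

images = torch.rand(4, 3, 224, 224)                # toy batch of fundus photographs
labels = torch.randint(0, 2, (4, len(CONDITIONS))).float()
loss = criterion(backbone(images), labels)         # multi-label training objective
```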


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Dani Kiyasseh ◽  
Tingting Zhu ◽  
David Clifton

Abstract Deep learning algorithms trained on instances that violate the assumption of being independent and identically distributed (i.i.d.) are known to experience destructive interference, a phenomenon characterized by a degradation in performance. Such a violation, however, is ubiquitous in clinical settings, where data are streamed temporally from different clinical sites and from a multitude of physiological sensors. To mitigate this interference, we propose a continual learning strategy, entitled CLOPS, that employs a replay buffer. To guide the storage of instances into the buffer, we propose end-to-end trainable parameters, termed task-instance parameters, that quantify the difficulty with which data points are classified by a deep-learning system. We validate the interpretation of these parameters via clinical domain knowledge. To replay instances from the buffer, we exploit uncertainty-based acquisition functions. In three of the four continual learning scenarios, reflecting transitions across diseases, time, data modalities and healthcare institutions, we show that CLOPS outperforms the state-of-the-art methods GEM and MIR. We also conduct extensive ablation studies to demonstrate the necessity of the various components of our proposed strategy. Our framework has the potential to pave the way for diagnostic systems that remain robust over time.
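A rough sketch of the replay idea: each stored instance carries a difficulty score standing in for a learned task-instance parameter, and a subset is replayed while training on later tasks. The storage and sampling rules here are crude placeholders for the end-to-end trainable parameters and uncertainty-based acquisition functions the paper actually proposes.

```python
import random

class ReplayBuffer:
    """Toy buffer: keeps the hardest instances and replays a random subset."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []                            # (x, y, difficulty) triples

    def store(self, x, y, difficulty):
        self.items.append((x, y, difficulty))
        if len(self.items) > self.capacity:        # evict the easiest instances
            self.items.sort(key=lambda t: t[2], reverse=True)
            self.items = self.items[: self.capacity]

    def replay(self, k):
        # Placeholder for uncertainty-based acquisition: uniform sampling.
        return random.sample(self.items, min(k, len(self.items)))
```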


2020 ◽  
pp. 147592172091722
Author(s):  
Hyunjin Bae ◽  
Keunyoung Jang ◽  
Yun-Kyu An

This article proposes a new end-to-end deep super-resolution crack network (SrcNet) for improving computer vision–based automated crack detectability. The digital images acquired from large-scale civil infrastructure for crack detection using unmanned robots often suffer from motion blur and a lack of pixel resolution, which may degrade the corresponding crack detectability. The proposed SrcNet is able to significantly enhance crack detectability by augmenting the pixel resolution of the raw digital image through deep learning. SrcNet consists of two phases: phase I, deep learning–based super-resolution (SR) image generation, and phase II, deep learning–based automated crack detection. Once the raw digital images are obtained from a target bridge surface, phase I of SrcNet generates SR images corresponding to the raw digital images. Phase II then automatically detects cracks in the generated SR images, making it possible to remarkably improve crack detectability. SrcNet is experimentally validated using digital images obtained with a climbing robot and an unmanned aerial vehicle from in situ concrete bridges located in South Korea. The validation test results reveal that the proposed SrcNet shows 24% better crack detectability compared to crack detection on the raw digital images.
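Schematically, the two phases chain together as below. Both networks are stand-ins (a bicubic upsampler for the learned SR model, an untrained convolutional segmenter for the crack detector), meant only to show how the phase I output feeds phase II.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def phase1_super_resolve(raw, scale=2):
    # Placeholder for the learned SR network: upsample the raw image.
    return F.interpolate(raw, scale_factor=scale, mode="bicubic",
                         align_corners=False)

crack_detector = nn.Sequential(                    # placeholder phase-II network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                           # per-pixel crack logit
)

raw = torch.rand(1, 3, 128, 128)                   # blurred, low-resolution patch
sr = phase1_super_resolve(raw)                     # phase I: 256x256 SR image
crack_map = torch.sigmoid(crack_detector(sr))      # phase II: crack probability map
```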


2020 ◽  
Vol 10 (14) ◽  
pp. 4716 ◽  
Author(s):  
Mohamed Ramzy Ibrahim ◽  
Karma M. Fathalla ◽  
Sherin M. Youssef

Optical Coherence Tomography (OCT) imaging has major advantages in effectively identifying the presence of various ocular pathologies and detecting a wide range of macular diseases. OCT examinations can aid in the detection of many retinal disorders at early stages that could not be detected in traditional retina images. In this paper, a new hybrid computer-aided OCT diagnostic system (HyCAD) is proposed for the classification of Diabetic Macular Edema (DME), Choroidal Neovascularization (CNV) and drusen disorders, while separating them from normal OCT images. The proposed HyCAD hybrid learning system integrates the segmentation of the Region of Interest (RoI), based on central serous chorioretinopathy (CSC) in Spectral Domain Optical Coherence Tomography (SD-OCT) images, with deep learning architectures for effective diagnosis of retinal disorders. The proposed system assimilates a range of techniques including RoI localization and feature extraction, followed by classification and diagnosis. An efficient feature fusion phase has been introduced for combining the OCT image features, extracted by a deep Convolutional Neural Network (CNN), with the features extracted from the RoI segmentation phase. This fused feature set is used to predict multiclass OCT retinal disorders. The proposed segmentation phase for retinal RoI regions adds a substantial contribution, as it draws attention to the most significant areas that are candidates for diagnosis. A new modified deep learning architecture (Norm-VGG16) integrating a kernel regularizer is introduced. Norm-VGG16 is trained from scratch on a large benchmark dataset and used for RoI localization and segmentation. Various experiments have been carried out to illustrate the performance of the proposed system. The Large Dataset of Labeled Optical Coherence Tomography (OCT) v3 benchmark is used to validate the efficiency of the model compared with others in the literature. The experimental results show that the proposed model achieves relatively high performance in terms of accuracy, sensitivity and specificity, with averages of 98.8%, 99.4% and 98.2%, respectively. This remarkable performance reflects that the fusion phase can effectively improve the identification rate of urgent patients from their diagnostic images and clinical data. In addition, outstanding performance is achieved compared to others in the literature.
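The fusion step amounts to concatenating the deep CNN feature vector with the RoI-derived features before a shared classifier. A minimal sketch, with assumed feature dimensions and the four classes named in the abstract (CNV, DME, drusen, normal):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Toy fusion classifier; dimensions are assumptions, not HyCAD's."""

    def __init__(self, cnn_dim=512, roi_dim=32, n_classes=4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(cnn_dim + roi_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),             # CNV / DME / drusen / normal
        )

    def forward(self, cnn_feat, roi_feat):
        fused = torch.cat([cnn_feat, roi_feat], dim=1)   # fusion by concatenation
        return self.classifier(fused)

head = FusionHead()
logits = head(torch.rand(2, 512), torch.rand(2, 32))     # batch of 2 scans
```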


Author(s):  
Sarah Eskreis-Winkler ◽  
Natsuko Onishi ◽  
Katja Pinker ◽  
Jeffrey S Reiner ◽  
Jennifer Kaplan ◽  
...  

Abstract Objective: To investigate the feasibility of using deep learning to identify tumor-containing axial slices on breast MRI images. Methods: This IRB-approved retrospective study included consecutive patients with operable invasive breast cancer undergoing pretreatment breast MRI between January 1, 2014, and December 31, 2017. Axial tumor-containing slices from the first postcontrast phase were extracted. Each axial image was subdivided into two subimages: one of the ipsilateral cancer-containing breast and one of the contralateral healthy breast. Cases were randomly divided into training, validation, and testing sets. A convolutional neural network was trained to classify subimages into “cancer” and “no cancer” categories. The accuracy, sensitivity, and specificity of the classification system were determined using pathology as the reference standard. A two-reader study was performed to measure the time savings offered by the deep learning algorithm, using descriptive statistics. Results: Two hundred seventy-three patients with unilateral breast cancer met the study criteria. On the held-out test set, the accuracy of the deep learning system for tumor detection was 92.8% (648/706; 95% confidence interval: 89.7%–93.8%). Sensitivity and specificity were 89.5% and 94.3%, respectively. Readers spent 3 to 45 seconds scrolling to the tumor-containing slices when not using the deep learning algorithm. Conclusion: In breast MR exams containing breast cancer, deep learning can be used to identify the tumor-containing slices. This technology may be integrated into the picture archiving and communication system to bypass scrolling when viewing stacked images, which can be helpful during nonsystematic image viewing, such as during interdisciplinary tumor board meetings.
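A small sketch of the subimage preprocessing described above; splitting each axial slice down the vertical midline is an assumption about how the subdivision was done, and the array shape is arbitrary.

```python
import numpy as np

def split_breasts(axial_slice: np.ndarray):
    """axial_slice: 2D array (rows, cols); returns (left_half, right_half)."""
    mid = axial_slice.shape[1] // 2
    return axial_slice[:, :mid], axial_slice[:, mid:]

slice_ = np.random.rand(256, 256)                  # toy postcontrast axial slice
left, right = split_breasts(slice_)                # one subimage per breast
# Each half is then labeled "cancer" / "no cancer" per side for CNN training.
```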


2021 ◽  
Vol 10 (19) ◽  
pp. 4508
Author(s):  
Yoshitaka Kise ◽  
Chiaki Kuwada ◽  
Yoshiko Ariji ◽  
Munetaka Naitoh ◽  
Eiichiro Ariji

This study was performed to evaluate the diagnostic performance of deep learning systems using ultrasonography (USG) images of the submandibular glands (SMGs) in three different conditions: obstructive sialoadenitis, Sjögren’s syndrome (SjS), and normal glands. Fifty USG images with a confirmed diagnosis of obstructive sialoadenitis, 50 USG images with a confirmed diagnosis of SjS, and 50 USG images with no SMG abnormalities were included in the study. For the deep learning analysis, the training group comprised 40 obstructive sialoadenitis images, 40 SjS images, and 40 control images, and the test group comprised 10 obstructive sialoadenitis images, 10 SjS images, and 10 control images. The performance of the deep learning system was calculated and compared with that of two experienced radiologists. The sensitivity of the deep learning system in the obstructive sialoadenitis group, SjS group, and control group was 55.0%, 83.0%, and 73.0%, respectively, and the total accuracy was 70.3%. The corresponding sensitivities of the two radiologists were 64.0%, 72.0%, and 86.0%, respectively, and their total accuracy was 74.0%. This study revealed that, across USG images of the two case groups and the group of healthy subjects, the deep learning system was more sensitive than experienced radiologists in diagnosing SjS of the SMGs.
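The per-group sensitivities above are simply class-wise recalls over the 10-image test groups. A toy illustration with hypothetical labels and predictions:

```python
import numpy as np

CLASSES = ["obstructive_sialoadenitis", "SjS", "normal"]
y_true = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 2])  # ground-truth class indices
y_pred = np.array([0, 1, 1, 1, 2, 0, 0, 1, 2, 2])  # model predictions

for c, name in enumerate(CLASSES):
    mask = y_true == c
    sens = (y_pred[mask] == c).mean()              # recall within this condition
    print(f"{name}: sensitivity={sens:.2f}")
print(f"total accuracy={(y_pred == y_true).mean():.3f}")
```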


Neurology ◽  
2021 ◽  
pp. 10.1212/WNL.0000000000012226
Author(s):  
Caroline Vasseneix ◽  
Raymond P Najjar ◽  
Xinxing Xu ◽  
Zhiqun Tang ◽  
Jing Liang Loo ◽  
...  

Objective: To evaluate the performance of a deep learning system (DLS) in classifying the severity of papilledema associated with increased intracranial pressure on standard retinal fundus photographs. Methods: A DLS was trained to automatically classify papilledema severity in 965 patients (2103 mydriatic fundus photographs), representing a multiethnic cohort of patients with confirmed elevated intracranial pressure. Training was performed on 1052 photographs with mild/moderate papilledema (MP) and 1051 photographs with severe papilledema (SP) classified by a panel of experts. The performance of the DLS and that of three independent neuro-ophthalmologists were tested in 111 patients (214 photographs, 92 with MP and 122 with SP) by calculating the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity and specificity. Kappa agreement scores between the DLS and each of the three graders, and among the three graders, were calculated. Results: The DLS successfully discriminated between photographs of MP and SP, with an AUC of 0.93 (95% CI: 0.89-0.96) and an accuracy, sensitivity and specificity of 87.9%, 91.8% and 86.2%, respectively. This performance was comparable with that of the three neuro-ophthalmologists (84.1%, 91.8% and 73.9%; P=0.19, P=1, P=0.09, respectively). Misclassification by the DLS was mainly observed for moderate papilledema (Frisén grade 3). The agreement score between the DLS and the neuro-ophthalmologists’ evaluations was 0.62 (95% CI 0.57-0.68), whereas the inter-grader agreement among the three neuro-ophthalmologists was 0.54 (95% CI 0.47-0.62). Conclusions: Our DLS accurately classified the severity of papilledema on an independent set of mydriatic fundus photographs, achieving performance comparable with that of independent neuro-ophthalmologists. Classification of Evidence: This study provides Class II evidence that a deep learning system using mydriatic retinal fundus photographs accurately classified the severity of papilledema in patients with a diagnosis of increased intracranial pressure.
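Kappa agreement scores of the kind reported above can be computed with scikit-learn; the severity labels below (0 = mild/moderate, 1 = severe) are hypothetical toy data, not the study's gradings.

```python
from sklearn.metrics import cohen_kappa_score

dls_grades    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]    # DLS severity calls
grader_grades = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]    # one neuro-ophthalmologist
kappa = cohen_kappa_score(dls_grades, grader_grades)
print(f"kappa = {kappa:.2f}")                     # 1.0 = perfect agreement
```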


2019 ◽  
Vol 48 (6) ◽  
pp. 20190019 ◽  
Author(s):  
Yoshitaka Kise ◽  
Haruka Ikeda ◽  
Takeshi Fujii ◽  
Motoki Fukuda ◽  
Yoshiko Ariji ◽  
...  

Objectives: This study estimated the diagnostic performance of a deep learning system for the detection of Sjögren's syndrome (SjS) on CT, and compared it with the performance of radiologists. Methods: CT images were assessed from 25 patients confirmed to have SjS based on both the Japanese criteria and the American-European Consensus Group criteria, and from 25 control subjects with no parotid gland abnormalities who were examined for other diseases. Ten CT slices were obtained for each patient. Of the total of 500 CT images, 400 images (200 from 20 SjS patients and 200 from 20 control subjects) were employed as the training data set and 100 images (50 from 5 SjS patients and 50 from 5 control subjects) were used as the test data set. The performance of a deep learning system for diagnosing SjS from the CT images was compared with the diagnoses made by six radiologists (three experienced and three inexperienced). Results: The accuracy, sensitivity, and specificity of the deep learning system were 96.0%, 100% and 92.0%, respectively. The corresponding values of the experienced radiologists were 98.3%, 99.3% and 97.3%, equivalent to those of the deep learning system, while those of the inexperienced radiologists were 83.5%, 77.9% and 89.2%. The area under the curve of the inexperienced radiologists was significantly different from those of the deep learning system and the experienced radiologists. Conclusions: The deep learning system showed a high diagnostic performance for SjS, suggesting that it could be used for diagnostic support when interpreting CT images.
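The abstract does not state which statistical test was used to compare the areas under the curve; one generic option is a paired bootstrap over the 100 test images, sketched below with toy scores in place of the real reader and model outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)                       # 1 = SjS slice, 0 = control
scores_a = y * 0.6 + rng.random(100) * 0.6        # stand-in: deep learning system
scores_b = y * 0.3 + rng.random(100) * 0.8        # stand-in: inexperienced reader

diffs = []
for _ in range(2000):                             # paired bootstrap over images
    idx = rng.integers(0, 100, 100)
    if len(np.unique(y[idx])) < 2:                # resample needs both classes
        continue
    diffs.append(roc_auc_score(y[idx], scores_a[idx]) -
                 roc_auc_score(y[idx], scores_b[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```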

