Detecting Preperimetric Glaucoma with Standard Automated Perimetry Using a Deep Learning Classifier

Ophthalmology ◽  
2016 ◽  
Vol 123 (9) ◽  
pp. 1974-1980 ◽  
Author(s):  
Ryo Asaoka ◽  
Hiroshi Murata ◽  
Aiko Iwase ◽  
Makoto Araie
2020 ◽  
pp. 112067212097734
Author(s):  
Delaram Mirzania ◽  
Atalie C Thompson ◽  
Kelly W Muir

Glaucoma is the leading cause of irreversible blindness and disability worldwide. Nevertheless, the majority of patients do not know they have the disease, and detecting glaucoma progression with standard technology remains a challenge in clinical practice. Artificial intelligence (AI) is an expanding field that offers the potential to improve diagnosis and screening for glaucoma with minimal reliance on human input. Deep learning (DL) algorithms have risen to the forefront of AI by providing nearly human-level performance, at times exceeding that of humans, in detecting glaucoma on structural and functional tests. A succinct summary of present studies and of the challenges still to be addressed in this field is needed. Following PRISMA guidelines, we conducted a systematic review of studies that applied DL methods to the detection of glaucoma using color fundus photographs, optical coherence tomography (OCT), or standard automated perimetry (SAP). In this review article, we describe recent advances in DL as applied to the diagnosis of glaucoma and glaucoma progression for application in screening and clinical settings, as well as the challenges that remain in applying this novel technique to glaucoma.
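For readers unfamiliar with how such classifiers are typically built, the following is a minimal, hypothetical sketch of a transfer-learning glaucoma classifier for color fundus photographs. The Keras framework, ResNet50 backbone, layer sizes, and image size are illustrative assumptions, not the pipeline of any specific reviewed study.

```python
# Illustrative sketch: transfer-learning binary classifier for
# glaucoma detection on color fundus photographs. All choices
# (backbone, layer widths, input size) are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fundus_classifier(input_shape=(224, 224, 3)):
    # Pre-trained ImageNet backbone, frozen for feature extraction.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        # Single sigmoid unit: probability of glaucoma vs. healthy.
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```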


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Jinho Lee ◽  
Yong Woo Kim ◽  
Ahnul Ha ◽  
Young Kook Kim ◽  
Ki Ho Park ◽  
...  

Abstract
Visual field assessment is recognized as an important criterion for judging glaucomatous damage; however, it can show large test–retest variability. We developed a deep learning (DL) algorithm that quantitatively predicts the mean deviation (MD) of standard automated perimetry (SAP) from monoscopic optic disc photographs (ODPs). A total of 1200 image pairs (ODPs and SAP results) from 563 eyes of 327 participants were included. A DL model was built by combining a pre-trained DL network with subsequently trained fully connected layers. The correlation coefficient and mean absolute error (MAE) between the predicted and measured MDs were calculated. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the ability to detect glaucomatous visual field (VF) loss. The data were split into training/validation (1000 images) and testing (200 images) sets to evaluate the performance of the algorithm. The predicted MD showed a strong correlation and good agreement with the actual MD (correlation coefficient = 0.755; R² = 57.0%; MAE = 1.94 dB). The model also accurately predicted the presence of glaucomatous VF loss (AUC 0.953). The DL algorithm showed strong feasibility for predicting MD and detecting glaucomatous functional loss from ODPs.
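The architecture described, a pre-trained network followed by newly trained fully connected layers that regress a continuous MD value, can be sketched as below. This is a minimal illustration under assumed choices (Keras, an InceptionV3 backbone, layer sizes, image size); the authors' exact configuration is not reproduced here.

```python
# Sketch of the pattern in the abstract: a frozen pre-trained CNN
# backbone plus trained fully connected layers that regress SAP mean
# deviation (MD, in dB) from an optic disc photograph. Backbone,
# layer sizes, and image size are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_md_regressor(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False  # reuse pre-trained features
    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="linear"),  # predicted MD in dB
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.MeanAbsoluteError(name="mae")])
    return model

def evaluate_md_predictions(md_true, md_pred):
    # Agreement metrics reported in the abstract: Pearson correlation
    # coefficient and mean absolute error between predicted and
    # measured MD values.
    r = np.corrcoef(md_true, md_pred)[0, 1]
    mae = np.mean(np.abs(np.asarray(md_true) - np.asarray(md_pred)))
    return r, mae
```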


2014 ◽  
Vol 55 (12) ◽  
pp. 7814-7820 ◽  
Author(s):  
R. Asaoka ◽  
A. Iwase ◽  
K. Hirasawa ◽  
H. Murata ◽  
M. Araie

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Youngbin Na ◽  
Do-Kyeong Ko

Abstract
Structured light with spatial degrees of freedom (DoF) is considered a potential solution to the unprecedented demand for data traffic, but integer quantization of these modes limits how effectively communication capacity can be improved. We propose a data transmission system that uses fractional mode encoding and deep-learning decoding. Spatial modes of Bessel-Gaussian beams separated by fractional intervals are employed to represent 8-bit symbols. Data encoded by switching phase holograms are efficiently decoded by a deep-learning classifier that requires only the intensity profile of the transmitted modes. Our results show that the trained model can simultaneously recognize two independent DoF without any mode sorter and can precisely detect small differences between fractional modes. Moreover, the proposed scheme successfully achieves image transmission despite its densely packed mode space. This research presents a new approach to realizing higher data rates for advanced optical communication systems.
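The decoding stage amounts to an image classifier over received intensity profiles, with one class per 8-bit symbol (256 classes in total). A minimal sketch follows, assuming a small Keras CNN and an arbitrary camera image size; the paper's actual network is not reproduced here.

```python
# Illustrative sketch: CNN classifier mapping the intensity profile of
# a received Bessel-Gaussian mode to one of 256 classes (one per 8-bit
# symbol). Architecture and input size are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mode_decoder(input_shape=(128, 128, 1), num_symbols=256):
    model = models.Sequential([
        layers.Input(shape=input_shape),  # single-channel intensity image
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        # One softmax class per 8-bit symbol.
        layers.Dense(num_symbols, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```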


2021 ◽  
Vol 83 ◽  
pp. 184-193
Author(s):  
R. Ricciardi ◽  
G. Mettivier ◽  
M. Staffa ◽  
A. Sarno ◽  
G. Acampora ◽  
...  

2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract
Background: We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) in a cohort of subjects from two hospitals in Lombardy, Italy.
Methods: For training and validation we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested the system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard.
Results: At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, deep learning showed 0.80 sensitivity (59/74; 95% CI 0.72–0.86), 0.81 specificity (29/36; 95% CI 0.73–0.87), and 0.81 AUC (95% CI 0.73–0.87). The radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1, and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2.
Conclusions: This preliminary experience, based on ten CNNs trained on a limited dataset, shows the potential of deep learning for COVID-19 diagnosis. The tool is being trained with new CXRs to further increase its performance.
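The evaluation rests on two common ingredients: combining the ten CNNs into a single ensemble prediction, and computing sensitivity, specificity, and AUC against the RT-PCR reference. A minimal sketch follows; averaging the per-model probabilities is an assumed (typical) ensembling strategy, not necessarily the authors' exact method.

```python
# Sketch of the evaluation pattern: average the predicted COVID-19
# probabilities of an ensemble of CNNs, then report sensitivity,
# specificity, and AUC against the reference standard.
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_predict(models, x):
    # Mean of per-model probabilities; assumes Keras-style models whose
    # predict() returns one probability per sample.
    probs = np.stack([m.predict(x).ravel() for m in models], axis=0)
    return probs.mean(axis=0)

def sensitivity_specificity_auc(y_true, y_prob, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    auc = roc_auc_score(y_true, y_prob)
    return sensitivity, specificity, auc
```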

