Dense Optic Nerve Head Deformation Estimated using CNN as a Structural Biomarker of Glaucoma Progression

Author(s): Ali Salehi, Madhusudhanan Balasubramanian

Purpose: To present a new structural biomarker for detecting glaucoma progression based on structural transformation of the optic nerve head (ONH) region.

Methods: Dense ONH deformation was estimated using deep learning methods, namely DDCNet-Multires, FlowNet2, and FlowNet-Correlation, and legacy computational methods, namely topographic change analysis (TCA) and proper orthogonal decomposition (POD), using longitudinal confocal scans of the ONH for each study eye. A candidate structural biomarker of glaucoma progression in a study eye was estimated as the average magnitude of flow velocities within the ONH region. The biomarker was evaluated using longitudinal confocal scans of 12 laser-treated and 12 contralateral normal eyes of 12 primates from the LSU Experimental Glaucoma Study (LEGS), and 36 progressing eyes and 21 longitudinal normal eyes from the UCSD Diagnostic Innovations in Glaucoma Study (DIGS). Area under the receiver operating characteristic curve (AUC) was used to assess the diagnostic accuracy of the candidate biomarker.

Results: AUCs (95% CI) for LEGS were: 0.83 (0.79, 0.88) for DDCNet-Multires; 0.83 (0.78, 0.88) for FlowNet2; 0.83 (0.78, 0.88) for FlowNet-Correlation; 0.94 (0.91, 0.97) for POD; and 0.86 (0.82, 0.91) for TCA. For DIGS: 0.89 (0.80, 0.97) for DDCNet-Multires; 0.82 (0.71, 0.93) for FlowNet2; 0.93 (0.86, 0.99) for FlowNet-Correlation; 0.86 (0.76, 0.96) for POD; and 0.86 (0.77, 0.95) for TCA. The lower diagnostic accuracy of the learning-based methods for the LEGS eyes was due to image alignment errors in the confocal sequences.

Conclusion: Deep learning methods trained to estimate generic deformation were able to detect ONH deformation from confocal images and provided a higher diagnostic accuracy when compared to the classical optical-flow and legacy biomarkers of glaucoma progression. Because it is difficult to validate estimates of dense ONH deformation in a clinical population, our validation using ONH sequences acquired under controlled experimental conditions confirms the diagnostic accuracy of the biomarkers observed in the clinical population. Performance of these deep learning methods can be further improved by fine-tuning the networks using longitudinal ONH sequences instead of training them as general-purpose deformation estimators.
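
The biomarker itself is a simple reduction of a dense deformation field to one per-eye score. Below is a minimal sketch of that computation and its AUC evaluation, assuming NumPy flow fields, a boolean ONH mask, and scikit-learn; the array shapes and function names are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def onh_deformation_biomarker(flow: np.ndarray, onh_mask: np.ndarray) -> float:
    """Candidate biomarker: mean flow-vector magnitude within the ONH region.

    flow     -- dense deformation field, shape (H, W, 2): (dx, dy) per pixel
    onh_mask -- boolean array, shape (H, W): True inside the ONH region
    """
    magnitudes = np.linalg.norm(flow, axis=-1)  # per-pixel velocity magnitude
    return float(magnitudes[onh_mask].mean())

# Diagnostic accuracy: AUC over progressing (1) vs stable (0) study eyes.
# `flows`, `masks`, and `labels` are hypothetical per-eye inputs.
def evaluate_biomarker(flows, masks, labels) -> float:
    scores = [onh_deformation_biomarker(f, m) for f, m in zip(flows, masks)]
    return roc_auc_score(labels, scores)
```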

2020, Vol 10 (11), pp. 3833
Author(s): Haidar Almubarak, Yakoub Bazi, Naif Alajlan

In this paper, we propose a method for localizing the optic nerve head and segmenting the optic disc/cup in retinal fundus images. The approach is based on a simple two-stage Mask R-CNN, in contrast to the sophisticated methods that represent the state of the art in the literature. In the first stage, we detect and crop around the optic nerve head, then feed the cropped image as input to the second stage. The second-stage network is trained with a weighted loss to produce the final segmentation. To further improve the detection in the first stage, we propose a new fine-tuning strategy that combines the cropped output of the first stage with the original training images to train a new detection network using different scales for the region proposal network anchors. We evaluate the method on the Retinal Fundus Images for Glaucoma Analysis (REFUGE), Magrabi, and MESSIDOR datasets, using the REFUGE training subset to train the models. Our method achieved a mean absolute error in the vertical cup-to-disc ratio (MAE vCDR) of 0.0430 on the REFUGE test set, compared to 0.0414 obtained with complex, multi-network ensemble methods. The models trained with the proposed method transfer well to datasets outside REFUGE, achieving MAE vCDRs of 0.0785 and 0.077 on the MESSIDOR and Magrabi datasets, respectively, without retraining. In terms of detection accuracy, the proposed fine-tuning strategy improved the detection rate from 96.7% to 98.04% on MESSIDOR and from 93.6% to 100% on Magrabi, compared to the detection rates reported in the literature.
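
A hedged sketch of such a two-stage pipeline, built on torchvision's Mask R-CNN; the anchor scales, crop margin, and class counts below are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torchvision
from torchvision.models.detection.anchor_utils import AnchorGenerator

def build_onh_detector(anchor_sizes=((16,), (32,), (64,), (128,), (256,))):
    # Stage 1: ONH detector. Smaller RPN anchor scales (one tuple per FPN
    # level) suit the relatively small optic nerve head region.
    anchors = AnchorGenerator(
        sizes=anchor_sizes,
        aspect_ratios=((0.5, 1.0, 2.0),) * len(anchor_sizes))
    return torchvision.models.detection.maskrcnn_resnet50_fpn(
        num_classes=2,  # background + optic nerve head
        rpn_anchor_generator=anchors)

def crop_onh(image: torch.Tensor, box: torch.Tensor, margin: int = 32):
    # Crop around the stage-1 detection box, padded by a margin, to form
    # the input of the stage-2 segmentation network.
    _, h, w = image.shape
    x1, y1, x2, y2 = box.int().tolist()
    return image[:, max(0, y1 - margin):min(h, y2 + margin),
                    max(0, x1 - margin):min(w, x2 + margin)]

# Stage 2: disc/cup segmentation on the crop (background + disc + cup).
segmenter = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=3)
```

In this sketch, the fine-tuning strategy described above would retrain the stage-1 detector on a mix of original images and stage-1 crops, varying `anchor_sizes` between the two scales.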


2018, Vol 103 (10), pp. 1401-1405
Author(s): Lucas A Torres, Faisal Jarrar, Glen P Sharpe, Donna M Hutchison, Eduardo Ferracioli-Oda, et al.

Background/aims: Optical coherence tomography (OCT) imaging of the optic nerve head minimum rim width (MRW) has recently been shown to sometimes contain components besides the extended retinal nerve fibre layer (RNFL). This study was conducted to determine whether excluding these components, termed protruded retinal layers (PRLs), from MRW increases diagnostic accuracy for detecting glaucoma.

Methods: In this cross-sectional study, we included 123 patients with glaucoma and 123 age-similar normal controls with OCT imaging of the optic nerve head (24 radial scans) and RNFL (circle scan). When present, PRLs were manually segmented, and adjusted MRW measurements were computed. We compared the diagnostic accuracy of adjusted versus unadjusted MRW measurements. We also determined whether adjusted MRW correlates better with RNFL thickness than unadjusted MRW.

Results: The median (IQR) visual field mean deviation of patients and controls was −4.4 (−10.3 to −2.1) dB and 0.0 (−0.6 to 0.8) dB, respectively. In the 5904 individual B-scans, PRLs were identified less frequently in patients (448, 7.6%) than in controls (728, 12.3%; p<0.01) and were present most frequently in the temporal sector of both groups. Areas under the receiver operating characteristic curves and sensitivity values at 95% specificity indicated that PRL adjustment did not improve the diagnostic accuracy of MRW, globally or temporally. Furthermore, adjusting MRW for PRL did not improve its correlation with RNFL thickness in either group.

Conclusion: While layers besides the RNFL are sometimes included in OCT measurements of MRW, subtracting these layers does not impact clinical utility.
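
The adjustment and the headline comparison reduce to a per-sector subtraction and a fixed operating point on the ROC curve. A minimal sketch under assumed inputs (micron-valued arrays and binary glaucoma labels; not the study's analysis code):

```python
import numpy as np
from sklearn.metrics import roc_curve

def adjusted_mrw(mrw: np.ndarray, prl: np.ndarray) -> np.ndarray:
    """MRW with the manually segmented protruded retinal layer (PRL)
    thickness removed; prl is 0 wherever no PRL was identified."""
    return np.asarray(mrw) - np.asarray(prl)

def sensitivity_at_95_specificity(labels, mrw_values) -> float:
    # labels: 1 = glaucoma, 0 = control. MRW thins in glaucoma, so values
    # are negated to make higher scores indicate disease, as roc_curve expects.
    fpr, tpr, _ = roc_curve(labels, -np.asarray(mrw_values))
    at_spec = fpr <= 0.05  # operating points with specificity >= 95%
    return float(tpr[at_spec].max())
```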


2018, Vol 9 (7), pp. 3244
Author(s): Sripad Krishna Devalla, Prajwal K. Renukanand, Bharathwaj K. Sreedhar, Giridhar Subramanian, Liang Zhang, et al.

2019, Vol 9 (1)
Author(s): Sripad Krishna Devalla, Giridhar Subramanian, Tan Hung Pham, Xiaofei Wang, Shamira Perera, et al.

Abstract: Optical coherence tomography (OCT) has become an established clinical routine for the in vivo imaging of the optic nerve head (ONH) tissues, which is crucial in the diagnosis and management of various ocular and neuro-ocular pathologies. However, the presence of speckle noise affects the quality of OCT images and their interpretation. Although recent frame-averaging techniques have been shown to enhance OCT image quality, they require longer scanning durations, resulting in patient discomfort. Using a custom deep learning network trained with 2,328 'clean' B-scans (multi-frame, signal-averaged B-scans) and their corresponding 'noisy' B-scans (clean B-scans plus Gaussian noise), we were able to successfully denoise 1,552 unseen single-frame (without signal averaging) B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean signal-to-noise ratio (SNR) increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean contrast-to-noise ratio (CNR) increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The mean structural similarity index (MSSIM) increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework for obtaining superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
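
The training-pair construction and the SNR metric described above can be sketched as follows; the noise level, intensity range, and background-region SNR definition are illustrative assumptions rather than the authors' exact recipe:

```python
import numpy as np

def make_noisy(clean_bscan: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Simulate a single-frame B-scan from a multi-frame (signal-averaged)
    'clean' B-scan by adding Gaussian noise; intensities assumed in [0, 1]."""
    noisy = clean_bscan + np.random.normal(0.0, sigma, clean_bscan.shape)
    return np.clip(noisy, 0.0, 1.0)

def snr_db(image: np.ndarray, background_mask: np.ndarray) -> float:
    """SNR in dB: mean tissue signal over the standard deviation of a
    noise-only background region."""
    signal = image[~background_mask].mean()
    noise = image[background_mask].std()
    return float(20.0 * np.log10(signal / noise))
```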


Ophthalmology, 2020, Vol 127 (3), pp. 346-356
Author(s): Mark Christopher, Christopher Bowd, Akram Belghith, Michael H. Goldbaum, Robert N. Weinreb, et al.

2021
Author(s): Caroline Vasseneix, Simon Nusinovici, Xinxing Xu, Jeong Min Hwang, Steffen Hamann, et al.
