Real-time retinal layer segmentation of adaptive optics optical coherence tomography angiography with deep learning

Author(s):  
Yifan Jian ◽  
Svetlana Borkovkina ◽  
Worawee Janpongsri ◽  
Acner Camino ◽  
Marinko V. Sarunic
2020 ◽  
Vol 13 (8) ◽  
Author(s):  
Worawee Janpongsri ◽  
Joey Huang ◽  
Ringo Ng ◽  
Daniel J. Wahl ◽  
Marinko V. Sarunic ◽  
...  

In the field of ophthalmology, optical coherence tomography (OCT) has proven to be a powerful imaging technique for diagnosing various eye diseases. This research article introduces a real-time automatic retinal layer segmentation algorithm based on intensity variation in OCT images. The algorithm detects internal retinal layers such as the internal limiting membrane (ILM), the retinal pigment epithelium (RPE), and the retinal nerve fiber layer (RNFL) with micrometer-level precision, and uses OpenMP for parallelized computation to enable real-time visualization of the segmented layers. The total execution time of the algorithm was evaluated for various image sizes and compared with the OCT frame rate to demonstrate the efficiency of real-time segmentation.
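The per-A-scan boundary search described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the threshold rule and the layer definitions (ILM as the first above-threshold depth, RPE as the brightest pixel below it) are assumptions, and the paper parallelizes the per-column loop with OpenMP in compiled code, whereas this sketch only marks where that parallelism lives.

```python
import numpy as np

def segment_layers(bscan: np.ndarray) -> dict:
    """Toy per-A-scan boundary detection on a B-scan (rows = depth, cols = A-scans).

    Hypothetical simplification of intensity-based segmentation: the ILM is
    taken as the first axial position whose intensity exceeds a global
    threshold, and the RPE as the brightest pixel at or below the ILM.
    """
    n_rows, n_cols = bscan.shape
    ilm = np.zeros(n_cols, dtype=int)
    rpe = np.zeros(n_cols, dtype=int)
    thresh = bscan.mean() + bscan.std()
    for col in range(n_cols):  # columns are independent -> trivially parallel (OpenMP in the paper)
        ascan = bscan[:, col]
        above = np.nonzero(ascan > thresh)[0]
        ilm[col] = above[0] if above.size else 0
        rpe[col] = ilm[col] + int(np.argmax(ascan[ilm[col]:]))
    return {"ILM": ilm, "RPE": rpe}
```

Because each column is processed independently, the loop maps directly onto a parallel-for construct, which is what makes frame-rate segmentation feasible.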


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Reza Mirshahi ◽  
Pasha Anvari ◽  
Hamid Riazi-Esfahani ◽  
Mahsa Sardarinia ◽  
Masood Naseripour ◽  
...  

Abstract
The purpose of this study was to introduce a new deep learning (DL) model for segmentation of the foveal avascular zone (FAZ) in en face optical coherence tomography angiography (OCTA) and to compare the results with those of the device's built-in software and with manual measurements in healthy subjects and diabetic patients. In this retrospective study, FAZ borders were delineated in the inner retinal slab of 3 × 3 en face OCTA images of 131 eyes of 88 diabetic patients and 32 eyes of 18 healthy subjects. To train a deep convolutional neural network (CNN) model, 126 en face OCTA images (104 eyes with diabetic retinopathy and 22 normal eyes) were used as the training/validation dataset. The accuracy of the model was then evaluated on a dataset consisting of OCTA images of 10 normal eyes and 27 eyes with diabetic retinopathy. The CNN model was based on Detectron2, an open-source modular object detection library. In addition, automated FAZ measurements were conducted using the device's built-in commercial software, and manual FAZ delineation was performed using ImageJ software. Bland–Altman analysis was used to show the 95% limits of agreement (95% LoA) between the different methods. The mean Dice similarity coefficient of the DL model was 0.94 ± 0.04 in the testing dataset. There was excellent agreement between the automated, DL, and manual measurements of the FAZ in healthy subjects (95% LoA of −0.005 to 0.026 mm² between automated and manual measurements, and 0.000 to 0.009 mm² between DL and manual FAZ areas). In diabetic eyes, the agreement between DL and manual measurements was excellent (95% LoA of −0.063 to 0.095); however, there was poor agreement between the automated and manual methods (95% LoA of −0.186 to 0.331). The presence of diabetic macular edema and intraretinal cysts at the fovea was associated with erroneous FAZ measurements by the device's built-in software.
In conclusion, the DL model showed excellent accuracy in detecting the FAZ border in en face OCTA images of both diabetic patients and healthy subjects. The DL and manual measurements outperformed the automated measurements of the built-in software.
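The two evaluation metrics used in this study, the Dice similarity coefficient and Bland–Altman 95% limits of agreement, are standard and can be computed as below. This is a generic sketch of the definitions, not the study's own analysis code.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def bland_altman_loa(x, y):
    """95% limits of agreement between two paired measurement series.

    Returns (lower, upper) = mean difference -/+ 1.96 * SD of differences.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
```

A narrow LoA interval (as reported here between DL and manual FAZ areas) indicates the two methods can be used interchangeably; the wide interval for the built-in software in diabetic eyes indicates they cannot.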


PLoS ONE ◽  
2016 ◽  
Vol 11 (9) ◽  
pp. e0162001 ◽  
Author(s):  
Louise Terry ◽  
Nicola Cassels ◽  
Kelly Lu ◽  
Jennifer H. Acton ◽  
Tom H. Margrain ◽  
...  

2018 ◽  
Vol 7 (2.25) ◽  
pp. 56
Author(s):  
Mohandass G ◽  
Hari Krishnan G ◽  
Hemalatha R J

The optical coherence tomography (OCT) imaging technique is a precise and well-established approach to imaging the retinal layers for diagnosis. Pathological changes in the retina challenge the accuracy of computational segmentation approaches in evaluating and identifying defects at the layer boundaries, and layer segmentation and boundary detection are further distorted by noise. In this work, we propose a fully automated segmentation algorithm for human and mammal retinas using a denoising technique called the Boisterous Obscure Ratio (BOR). First, the BOR is derived using noise detection, i.e., from the Robust Outlyingness Ratio (ROR), and is then applied to edge and layer detection using a gradient-based deformable contour model. Second, the image is vectorised: a cluster and column intensity grid is applied to identify and determine the unsegmented layers. Using the layer intensities and a region-growing seed-point algorithm, segmentation of the prominent layers is achieved. The automated BOR method determines eight layers in retinal spectral-domain OCT images. Its results are accurate and effective, although the method is time-consuming.
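The Robust Outlyingness Ratio mentioned above is a median/MAD-based outlier score. The following is a hedged sketch of that idea only: the window size, the MAD normalisation constant 1.4826, and the edge handling are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def local_median(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Median over a win x win neighbourhood, with edge padding."""
    r = win // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + win, j:j + win])
    return out

def robust_outlyingness_ratio(img: np.ndarray, win: int = 3) -> np.ndarray:
    """ROR-style score: |x - local median| / (1.4826 * local MAD + eps).

    High values flag pixels that stand out from their neighbourhood
    (speckle/outliers); low values indicate agreement with neighbours.
    """
    med = local_median(img, win)
    mad = local_median(np.abs(img - med), win)
    return np.abs(img - med) / (1.4826 * mad + 1e-12)
```

Thresholding such a score separates noise-dominated pixels from structural edges before the contour model is applied.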

