Enhanced Visualization of Retinal Microvasculature in Optical Coherence Tomography Angiography Imaging via Deep Learning

2020 ◽  
Vol 9 (5) ◽  
pp. 1322 ◽  
Author(s):  
Shin Kadomoto ◽  
Akihito Uji ◽  
Yuki Muraoka ◽  
Tadamichi Akagi ◽  
Akitaka Tsujikawa

Background: To investigate the effects of deep learning denoising on quantitative vascular measurements and the quality of optical coherence tomography angiography (OCTA) images. Methods: U-Net-based deep learning denoising, trained with averaged OCTA data as teacher data, was used in this study. One hundred and thirteen patients with various retinal diseases were examined. An OCT HS-100 (Canon Inc., Tokyo, Japan) performed a 3 × 3 mm² superficial capillary plexus layer slab scan centered on the fovea 10 times. A single-shot image was defined as the original image; for the analyses, the 10-frame averaged image and a denoised image generated from the original image by deep learning denoising were also obtained. The main parameters measured were the OCTA image acquisition time, contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), vessel density (VD), vessel length density (VLD), vessel diameter index (VDI), and fractal dimension (FD) of the original, averaged, and denoised images. Results: One hundred and twelve eyes of 108 patients were studied. Deep learning denoising removed the background noise and smoothed the rough vessel surface. The image acquisition times for the original, averaged, and denoised images were 16.6 ± 2.4, 285 ± 38, and 22.1 ± 2.4 s, respectively (P < 0.0001). The CNR and PSNR of the denoised image were significantly higher than those of the original image (P < 0.0001). There were significant differences in the VLD, VDI, and FD (P < 0.0001) after deep learning denoising. Conclusions: The deep learning denoising method achieved high-speed, high-quality OCTA imaging. This method may be a viable alternative to the multiple-image-averaging technique.
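The image-quality metrics reported in this abstract (PSNR, CNR) can be illustrated with a minimal sketch. The function names, vessel/background masks, and image scaling below are assumptions for illustration only, not the authors' implementation or the OCT HS-100 software.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio of `test` against `reference`, in dB."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def cnr(image: np.ndarray, vessel_mask: np.ndarray, background_mask: np.ndarray) -> float:
    """Contrast-to-noise ratio between a vessel region and a background region."""
    fg = image[vessel_mask].astype(np.float64)
    bg = image[background_mask].astype(np.float64)
    return abs(fg.mean() - bg.mean()) / np.sqrt(bg.var() + 1e-12)

# Example with synthetic arrays standing in for OCTA slabs.
rng = np.random.default_rng(0)
averaged = rng.random((304, 304)) * 255            # stand-in for the 10-frame average
denoised = np.clip(averaged + rng.normal(0, 5, averaged.shape), 0, 255)
vessels = averaged > 128                           # placeholder vessel mask
print(f"PSNR = {psnr(averaged, denoised):.2f} dB, CNR = {cnr(denoised, vessels, ~vessels):.2f}")
```

In practice the masks would come from a vessel segmentation of the slab; here a simple intensity threshold stands in for it.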

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Reza Mirshahi ◽  
Pasha Anvari ◽  
Hamid Riazi-Esfahani ◽  
Mahsa Sardarinia ◽  
Masood Naseripour ◽  
...  

Abstract The purpose of this study was to introduce a new deep learning (DL) model for segmentation of the foveal avascular zone (FAZ) in en face optical coherence tomography angiography (OCTA) and compare the results with those of the device’s built-in software and manual measurements in healthy subjects and diabetic patients. In this retrospective study, FAZ borders were delineated in the inner retinal slab of 3 × 3 en face OCTA images of 131 eyes of 88 diabetic patients and 32 eyes of 18 healthy subjects. To train a deep convolutional neural network (CNN) model, 126 en face OCTA images (104 eyes with diabetic retinopathy and 22 normal eyes) were used as the training/validation dataset. Then, the accuracy of the model was evaluated using a dataset consisting of OCTA images of 10 normal eyes and 27 eyes with diabetic retinopathy. The CNN model was based on Detectron2, an open-source modular object detection library. In addition, automated FAZ measurements were conducted using the device’s built-in commercial software, and manual FAZ delineation was performed using ImageJ software. Bland–Altman analysis was used to show the 95% limits of agreement (95% LoA) between the different methods. The mean Dice similarity coefficient of the DL model was 0.94 ± 0.04 in the testing dataset. There was excellent agreement between the automated, DL model, and manual measurements of the FAZ in healthy subjects (95% LoA of − 0.005 to 0.026 mm² between automated and manual measurements, and 0.000 to 0.009 mm² between DL and manual FAZ areas). In diabetic eyes, the agreement between DL and manual measurements was excellent (95% LoA of − 0.063 to 0.095); however, there was poor agreement between the automated and manual methods (95% LoA of − 0.186 to 0.331). The presence of diabetic macular edema and intraretinal cysts at the fovea was associated with erroneous FAZ measurements by the device’s built-in software. In conclusion, the DL model showed excellent accuracy in detecting the FAZ border in en face OCTA images of both diabetic patients and healthy subjects. The DL and manual measurements outperformed the automated measurements of the built-in software.
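The two agreement statistics reported here, the Dice similarity coefficient between segmentation masks and the Bland–Altman 95% limits of agreement between paired FAZ area measurements, can be sketched as follows. This is not the authors' Detectron2 pipeline; the function names and the example data are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

def bland_altman_loa(a, b):
    """95% limits of agreement (mean difference ± 1.96 SD) for paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
    return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Example: hypothetical paired FAZ areas (mm²) from manual and model measurements.
manual = np.array([0.28, 0.31, 0.25, 0.40, 0.33])
model = np.array([0.27, 0.32, 0.24, 0.41, 0.34])
lower, upper = bland_altman_loa(model, manual)
print(f"95% LoA: {lower:.3f} to {upper:.3f} mm²")
```

Narrow limits of agreement (as reported for the DL model versus manual delineation) indicate that the two methods can be used interchangeably for FAZ area measurement.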


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Sripad Krishna Devalla ◽  
Giridhar Subramanian ◽  
Tan Hung Pham ◽  
Xiaofei Wang ◽  
Shamira Perera ◽  
...  

Abstract Optical coherence tomography (OCT) has become an established clinical routine for the in vivo imaging of the optic nerve head (ONH) tissues, which is crucial in the diagnosis and management of various ocular and neuro-ocular pathologies. However, the presence of speckle noise affects the quality of OCT images and their interpretation. Although recent frame-averaging techniques have been shown to enhance OCT image quality, they require longer scanning durations, resulting in patient discomfort. Using a custom deep learning network trained with 2,328 ‘clean B-scans’ (multi-frame B-scans; signal averaged), and their corresponding ‘noisy B-scans’ (clean B-scans + Gaussian noise), we were able to successfully denoise 1,552 unseen single-frame (without signal averaging) B-scans. The denoised B-scans were qualitatively similar to their corresponding multi-frame B-scans, with enhanced visibility of the ONH tissues. The mean signal-to-noise ratio (SNR) increased from 4.02 ± 0.68 dB (single-frame) to 8.14 ± 1.03 dB (denoised). For all the ONH tissues, the mean contrast-to-noise ratio (CNR) increased from 3.50 ± 0.56 (single-frame) to 7.63 ± 1.81 (denoised). The mean structural similarity index (MSSIM) increased from 0.13 ± 0.02 (single-frame) to 0.65 ± 0.03 (denoised) when compared with the corresponding multi-frame B-scans. Our deep learning algorithm can denoise a single-frame OCT B-scan of the ONH in under 20 ms, thus offering a framework to obtain superior-quality OCT B-scans with reduced scanning times and minimal patient discomfort.
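The abstract states that the noisy training B-scans were generated by adding Gaussian noise to the clean (multi-frame averaged) B-scans. A minimal sketch of that pairing step is shown below; the noise standard deviation, image scaling, and B-scan dimensions are assumptions, not values taken from the paper.

```python
import numpy as np

def make_noisy_pair(clean_bscan: np.ndarray, noise_sigma: float = 0.1, rng=None):
    """Return a (noisy, clean) training pair by adding Gaussian noise to a clean B-scan."""
    rng = rng or np.random.default_rng()
    clean = clean_bscan.astype(np.float32)
    noisy = clean + rng.normal(0.0, noise_sigma, size=clean.shape).astype(np.float32)
    return np.clip(noisy, 0.0, 1.0), clean

# Example: build one pair from a synthetic 'clean' (multi-frame averaged) B-scan.
clean = np.random.default_rng(1).random((496, 384), dtype=np.float32)
noisy, target = make_noisy_pair(clean, noise_sigma=0.05)
# During training the network maps `noisy` -> `target`; at inference it is
# applied directly to unseen single-frame B-scans.
```

Training on such synthetic pairs lets the denoiser be supervised without acquiring a matching multi-frame average for every single-frame input.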


Author(s):  
Martin Pfister ◽  
Hannes Stegmann ◽  
Kornelia Schützenberger ◽  
Bhavapriya Jasmin Schäfer ◽  
Christine Hohenadl ◽  
...  

2020 ◽  
Vol 25 (12) ◽  
Author(s):  
Qiangjiang Hao ◽  
Kang Zhou ◽  
Jianlong Yang ◽  
Yan Hu ◽  
Zhengjie Chai ◽  
...  
