Deep-Learning Based, Automated Segmentation of Macular Edema in Optical Coherence Tomography

2017 ◽  
Author(s):  
Cecilia S. Lee ◽  
Ariel J. Tyring ◽  
Nicolaas P. Deruyter ◽  
Yue Wu ◽  
Ariel Rokem ◽  
...  

Abstract Evaluation of clinical images is essential for diagnosis in many specialties, and the development of computer vision algorithms to analyze biomedical images will be important. In ophthalmology, optical coherence tomography (OCT) is critical for managing retinal conditions. We developed a convolutional neural network (CNN) that detects intraretinal fluid (IRF) on OCT in a manner indistinguishable from clinicians. Using 1,289 OCT images, the CNN segmented images with a 0.911 cross-validated Dice coefficient, compared with segmentations by experts. Additionally, the agreement between experts and between experts and the CNN was similar. Our results reveal that a CNN can be trained to perform automated segmentations.
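As a concrete illustration of the metric reported above, the Dice coefficient between a predicted and an expert segmentation mask can be computed as follows. This is a minimal NumPy sketch with toy masks, not data or code from the study:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 4x4 masks: 3 pixels overlap, 4 predicted and 4 true pixels
pred  = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3 / (4+4) = 0.75
```

A cross-validated figure such as the reported 0.911 would be the average of this quantity over held-out images.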

Proceedings ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 34
Author(s):  
Plácido Vidal ◽  
Joaquim Moura ◽  
Jorge Novo ◽  
Marcos Ortega

We present a methodology for detecting fluid accumulations between the retinal layers. The methodology uses a robust Densely Connected Neural Network to classify thousands of subsamples extracted from a given Optical Coherence Tomography image. The detected regions are then combined into a coherent and intuitive confidence map by means of a voting strategy.
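The voting strategy can be sketched as follows: each classified subsample casts its score as a vote over the pixels it covers, and a pixel's confidence is the mean of the votes covering it. The `confidence_map` helper, the patch geometry, and the scores below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def confidence_map(scores, image_shape, patch, stride):
    """Accumulate per-patch classification scores into a per-pixel
    confidence map: each pixel's confidence is the mean score of all
    patches (subsamples) that cover it."""
    votes = np.zeros(image_shape, dtype=float)
    counts = np.zeros(image_shape, dtype=float)
    idx = 0
    for y in range(0, image_shape[0] - patch + 1, stride):
        for x in range(0, image_shape[1] - patch + 1, stride):
            votes[y:y + patch, x:x + patch] += scores[idx]
            counts[y:y + patch, x:x + patch] += 1
            idx += 1
    return votes / np.maximum(counts, 1)  # avoid /0 at uncovered pixels

# 8x8 image, 4x4 patches, stride 4 -> 4 patches with hypothetical scores
cmap = confidence_map([1.0, 0.0, 0.5, 0.5], (8, 8), patch=4, stride=4)
```

With an overlapping stride (stride < patch), interior pixels receive several votes, which is what smooths the map into the "coherent and intuitive" output described above.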


EP Europace ◽  
2020 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
D Liang ◽  
A Haeberlin

Abstract Background The immediate effect of radiofrequency catheter ablation (RFA) on the tissue is not directly visualized. Optical coherence tomography (OCT) is an imaging technique that uses light to capture histology-like images with a penetration depth of 1-3 mm in cardiac tissue. Ablation lesions show two specific features in OCT images: the disappearance of birefringence artifacts in the lateral parts and a sudden decrease of signal at the bottom (Figure panel A and D). These features can not only be used to recognize ablation lesions in OCT images by eye, but also to train a machine learning model for automatic lesion segmentation. In recent years, deep learning methods, e.g. convolutional neural networks, have been used in medical image analysis and have greatly increased the accuracy of image segmentation. We hypothesize that a convolutional neural network, e.g. U-Net, can locate and segment ablation lesions in OCT images. Purpose To investigate whether a deep learning method, such as a convolutional neural network optimized for biomedical image processing, could be used to segment ablation lesions in OCT images automatically. Method Eight OCT datasets with ablation lesions were used to train the convolutional neural network (U-Net model). After training, the model was validated on two new OCT datasets. Dice coefficients (ranging from 0 to 1, where 1 means perfect segmentation) were calculated to evaluate the spatial overlap between the predictions and the ground truth segmentations, which were manually segmented by the researchers. Results The U-Net model could predict the central parts of lesions automatically and accurately (Dice coefficients of 0.933 and 0.934), compared with the ground truth segmentations (Figure panel B and E). These predictions correctly revealed the depths and diameters of the ablation lesions (Figure panel C and F).
Conclusions Our results showed that deep learning can facilitate ablation lesion identification and segmentation in OCT images. Deep learning methods integrated into an OCT system might enable automatic and precise ablation lesion visualization, which may help to assess ablation lesions during radiofrequency ablation procedures. Figure legend Panel A and D: the central OCT images of the ablation lesions. The blue arrows indicate the lesion bottom, where the image intensity suddenly decreases. The white arrows indicate the birefringence artifacts (the black bands in the grey regions). Panel B and E: the ground truth segmentations of the lesions in panels A and D. Panel C and F: the predictions by the U-Net model for the lesions in panels A and D. A scale bar representing 500 μm is shown in each panel.
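Once a lesion is segmented as a binary mask, reading off its depth and diameter (as the abstract reports from the U-Net predictions) reduces to measuring the mask's vertical and horizontal extents scaled by the pixel spacing. This is a minimal sketch under an assumed isotropic 10 µm/pixel spacing; the helper name and spacing are illustrative, not from the study:

```python
import numpy as np

def lesion_depth_diameter(mask, px_um=10.0):
    """Depth = vertical extent and diameter = widest horizontal extent
    of a binary lesion mask, converted to micrometres via the assumed
    pixel spacing px_um."""
    rows = np.flatnonzero(np.any(mask, axis=1))  # rows containing lesion
    cols = np.flatnonzero(np.any(mask, axis=0))  # columns containing lesion
    depth = (rows.max() - rows.min() + 1) * px_um
    diameter = (cols.max() - cols.min() + 1) * px_um
    return depth, diameter

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:8] = True                   # toy lesion: 4 px deep, 5 px wide
print(lesion_depth_diameter(mask))      # (40.0, 50.0)
```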


Author(s):  
Yu Shi Lau ◽  
Li Kuo Tan ◽  
Chow Khuen Chan ◽  
Kok Han Chee ◽  
Yih Miin Liew

Abstract Percutaneous Coronary Intervention (PCI) with stent placement is an effective treatment for coronary artery disease. Intravascular optical coherence tomography (OCT), with its high resolution, is used clinically to visualize stent deployment and restenosis, facilitating the PCI procedure and complication inspection. Automated stent strut segmentation in OCT images is necessary, as each OCT pullback can contain thousands of stent struts. In this paper, a deep learning framework is proposed and demonstrated for the automated segmentation of two major clinical stent types: metal stents and bioresorbable vascular scaffolds (BVS). U-Net, currently the most prominent deep learning network in biomedical segmentation, was implemented for segmentation with cropped input. The architectures of MobileNetV2 and DenseNet121 were also adapted into U-Net for improvements in speed and accuracy. The results suggest that the proposed automated algorithm's segmentation performance approaches the level of independent human observers and is feasible for both stent types despite their distinct appearance. U-Net with a DenseNet121 encoder (U-Dense) performed best, with a Dice coefficient of 0.86 for BVS segmentation and precision/recall of 0.92/0.92 for metal stent segmentation under an optimal crop window size of 256.
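The precision/recall figures quoted for metal stent segmentation are pixel-wise quantities over the predicted and reference masks. A minimal NumPy sketch with toy masks (not data from the paper):

```python
import numpy as np

def precision_recall(pred, truth):
    """Pixel-wise precision = TP/(TP+FP) and recall = TP/(TP+FN)
    between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return precision, recall

# Toy flattened masks: 2 true positives, 1 false positive, 1 false negative
pred  = np.array([1, 1, 1, 0, 0], dtype=bool)
truth = np.array([1, 1, 0, 1, 0], dtype=bool)
print(precision_recall(pred, truth))  # (2/3, 2/3)
```

Precision penalizes spurious strut pixels while recall penalizes missed ones, which is why the pair is reported together for sparse targets like struts.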


2020 ◽  
Vol 138 (10) ◽  
pp. 1017 ◽  
Author(s):  
Nihaal Mehta ◽  
Cecilia S. Lee ◽  
Luísa S. M. Mendonça ◽  
Khadija Raza ◽  
Phillip X. Braun ◽  
...  

2017 ◽  
Vol 8 (7) ◽  
pp. 3440 ◽  
Author(s):  
Cecilia S. Lee ◽  
Ariel J. Tyring ◽  
Nicolaas P. Deruyter ◽  
Yue Wu ◽  
Ariel Rokem ◽  
...  

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Jeewoo Yoon ◽  
Jinyoung Han ◽  
Junseo Ko ◽  
Seong Choi ◽  
Ji In Park ◽  
...  

Abstract Central serous chorioretinopathy (CSC) is the fourth most common retinopathy and can reduce quality of life. CSC is assessed using optical coherence tomography (OCT), but deep learning systems have not been used to classify CSC subtypes. This study aimed to build a deep learning model to distinguish CSC subtypes using a convolutional neural network (CNN). We enrolled 435 patients with CSC from a single tertiary center between January 2015 and January 2020. Spectral domain OCT (SD-OCT) images of the patients were analyzed using a deep CNN. Five-fold cross-validation was employed to evaluate the model's ability to discriminate acute, non-resolving, inactive, and chronic atrophic CSC. We compared the performance of the proposed model with Resnet-50, Inception-V3, and eight ophthalmologists. Overall, 3209 SD-OCT images were included. The proposed model showed an average cross-validation accuracy of 70.0% (95% confidence interval [CI], 0.676–0.718), and the highest test accuracy was 73.5%. Additional evaluation in an independent set of 104 patients demonstrated the reliable performance of the proposed model (accuracy: 76.8%). Our model could classify CSC subtypes with high accuracy. Thus, automated deep learning systems could be useful in the classification and management of CSC.
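An average cross-validation accuracy with a 95% CI, as reported above, is typically summarized from the per-fold accuracies. A minimal sketch using a normal-approximation interval over hypothetical fold accuracies (the fold values below are made up, not the study's):

```python
import numpy as np

def cv_accuracy_ci(fold_accuracies, z=1.96):
    """Mean cross-validated accuracy with a normal-approximation 95% CI
    computed over the per-fold accuracies."""
    acc = np.asarray(fold_accuracies, dtype=float)
    mean = acc.mean()
    half = z * acc.std(ddof=1) / np.sqrt(len(acc))  # standard error * z
    return mean, (mean - half, mean + half)

# Hypothetical accuracies from a 5-fold run
mean, (lo, hi) = cv_accuracy_ci([0.68, 0.70, 0.72, 0.69, 0.71])
print(mean, lo, hi)
```

With only five folds this interval is rough; the study's exact CI construction may differ.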


2020 ◽  
Vol 9 (2) ◽  
pp. 54
Author(s):  
Yukun Guo ◽  
Tristan T. Hormel ◽  
Honglian Xiong ◽  
Jie Wang ◽  
Thomas S. Hwang ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Daniel Duck-Jin Hwang ◽  
Seong Choi ◽  
Junseo Ko ◽  
Jeewoo Yoon ◽  
Ji In Park ◽  
...  

Abstract This cross-sectional study aimed to build a deep learning model for detecting neovascular age-related macular degeneration (AMD) and distinguishing retinal angiomatous proliferation (RAP) from polypoidal choroidal vasculopathy (PCV) using a convolutional neural network (CNN). Patients from a single tertiary center were enrolled from January 2014 to January 2020. Spectral-domain optical coherence tomography (SD-OCT) images of patients with RAP or PCV and a control group were analyzed with a deep CNN. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were used to evaluate the model's ability to distinguish RAP from PCV. The performances of the new model, VGG-16, Resnet-50, Inception-V3, and eight ophthalmologists were compared. A total of 3951 SD-OCT images from 314 participants (229 AMD, 85 normal controls) were analyzed. In distinguishing PCV and RAP cases, the proposed model showed an accuracy, sensitivity, and specificity of 89.1%, 89.4%, and 88.8%, respectively, with an AUROC of 95.3% (95% CI 0.727–0.852). The proposed model showed better diagnostic performance than VGG-16, Resnet-50, and Inception-V3, and performance comparable to the eight ophthalmologists. The novel model performed well in distinguishing between PCV and RAP. Thus, automated deep learning systems may support ophthalmologists in distinguishing RAP from PCV.
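The sensitivity/specificity/AUROC triple reported above can be computed from a classifier's scores and the reference labels. A minimal NumPy sketch using the rank-based (Mann-Whitney) form of AUROC; the labels and scores are hypothetical, not data from the study:

```python
import numpy as np

def sens_spec(pred_labels, true_labels):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)
    for binary predicted and true labels."""
    pred = np.asarray(pred_labels, dtype=bool)
    true = np.asarray(true_labels, dtype=bool)
    sens = np.logical_and(pred, true).sum() / true.sum()
    spec = np.logical_and(~pred, ~true).sum() / (~true).sum()
    return sens, spec

def auroc(scores, true_labels):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    true = np.asarray(true_labels, dtype=bool)
    pos, neg = scores[true], scores[~true]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

y_true  = [1, 1, 1, 0, 0, 0]              # hypothetical labels, 1 = RAP
y_score = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]  # hypothetical model scores
print(sens_spec(np.asarray(y_score) > 0.5, y_true))  # threshold at 0.5
print(auroc(y_score, y_true))             # 8 of 9 positive-negative pairs
```

Sensitivity and specificity depend on the chosen score threshold, while AUROC summarizes performance across all thresholds.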

