A Novel Deep Learning Method for Nuclear Cataract Classification Based on Anterior Segment Optical Coherence Tomography Images

Author(s): Xiaoqing Zhang, Zunjie Xiao, Risa Higashita, Wan Chen, Jin Yuan, ...
2021, Vol 11 (12), pp. 5488
Author(s): Wei Ping Hsia, Siu Lun Tse, Chia Jen Chang, Yu Len Huang

The purpose of this article is to evaluate the accuracy of optical coherence tomography (OCT) measurement of choroidal thickness in healthy eyes using a deep-learning method with the Mask R-CNN model. Thirty EDI-OCT scans from thirty patients were enrolled. A mask region-based convolutional neural network (Mask R-CNN) model, composed of a deep residual network (ResNet) and feature pyramid networks (FPNs) with standard convolution and fully connected heads for mask and box prediction, respectively, was used to automatically delineate the choroid layer. The average choroidal thickness and subfoveal choroidal thickness were measured. Models built on ResNet 50 layers deep (R50) and ResNet 101 layers deep (R101) backbones were evaluated. The R101 ∪ R50 (OR model) demonstrated the best accuracy, with average errors of 4.85 and 4.86 pixels for the average and subfoveal choroidal thickness, respectively. The R101 ∩ R50 (AND model) took the least time, with an average execution time of 4.6 s. The Mask R-CNN models showed a good prediction rate for the choroid layer, with accuracy rates of 90% and 89.9% for average choroidal thickness and average subfoveal choroidal thickness, respectively. In conclusion, the deep-learning method using the Mask R-CNN model provides a fast and accurate measurement of choroidal thickness. Compared with manual delineation, it is more effective and is feasible for clinical application and larger-scale research on the choroid.
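As a rough illustration of the pipeline described in this abstract, the sketch below adapts torchvision's off-the-shelf Mask R-CNN (ResNet-50 + FPN backbone) to a single "choroid" class and derives an average thickness in pixels from a predicted mask. This is a minimal sketch under stated assumptions, not the authors' implementation: the class count, input size, thickness heuristic and use of the torchvision API (version ≥ 0.13) are all assumptions.

```python
# Sketch: a Mask R-CNN (ResNet-50 + FPN) adapted for choroid segmentation in
# EDI-OCT B-scans, with average thickness estimated from the predicted mask.
# Hypothetical illustration only; class ids, sizes and heuristics are assumed.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 2  # background + choroid

def build_choroid_maskrcnn():
    # Off-the-shelf Mask R-CNN with a ResNet-50 + FPN backbone; an R101
    # backbone could be swapped in via resnet_fpn_backbone("resnet101", ...).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box and mask heads for our single foreground class.
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, NUM_CLASSES)
    in_feats_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feats_mask, 256, NUM_CLASSES)
    return model

def average_thickness_px(mask: torch.Tensor, threshold: float = 0.5) -> float:
    """Mean vertical extent of the choroid mask, in pixels per image column."""
    binary = (mask > threshold).float()   # (H, W) soft mask -> binary
    per_column = binary.sum(dim=0)        # choroid pixels in each A-scan column
    covered = per_column > 0
    return per_column[covered].mean().item() if covered.any() else 0.0

model = build_choroid_maskrcnn().eval()
with torch.no_grad():
    scan = [torch.rand(3, 512, 768)]      # stand-in for one EDI-OCT B-scan
    pred = model(scan)[0]                 # dict of boxes, labels, scores, masks
    if len(pred["masks"]) > 0:
        print("avg thickness (px):", average_thickness_px(pred["masks"][0, 0]))
```

In practice the heads would be fine-tuned on annotated B-scans before any thickness measurement is meaningful; the column-wise sum is just one plausible way to turn a mask into a thickness estimate.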


2021, pp. bjophthalmol-2020-318334
Author(s): Wei Wang, Jiaqing Zhang, Xiaoxun Gu, Xiaoting Ruan, Xiaoyun Chen, ...

Background/aims: The primary objective is to quantify lens nuclear opacity using swept-source anterior segment optical coherence tomography (SS-ASOCT) and to evaluate its correlations with the Lens Opacities Classification System III (LOCS-III) and surgical parameters. The secondary objective is to assess the diagnostic performance for hard nuclear cataract.
Methods: This cross-sectional study included 1222 patients eligible for cataract surgery (1222 eyes). The latest SS-ASOCT (CASIA-2) was used to obtain high-resolution lens images, and the average nuclear density (AND) and maximum nuclear density (MND) were measured with custom ImageJ software. Spearman's correlation analysis was used to assess associations of AND/MND with LOCS-III nuclear scores, visual acuity and surgical parameters. The subjects were then split randomly (9:1) into training and validation datasets. Receiver operating characteristic curves and calibration curves were constructed for the classification of hard nuclear cataract.
Results: The AND and MND from SS-ASOCT images were significantly correlated with nuclear colour scores (AND: r=0.716; MND: r=0.660; p<0.001) and nuclear opalescence scores (AND: r=0.712; MND: r=0.655; p<0.001). The AND from SS-ASOCT images had the highest Spearman's r for preoperative corrected distance visual acuity (r=0.3131), total ultrasonic time (r=0.3481) and cumulative dissipated energy (r=0.4265). Nuclear density performed well in classifying hard nuclear cataract, with areas under the curve of 0.859 (0.831–0.886) for AND and 0.796 (0.768–0.823) for MND.
Conclusion: Objective, quantitative evaluation of lens nuclear density using SS-ASOCT images enables accurate diagnosis of hard nuclear cataract.
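The statistical workflow in this abstract (Spearman correlations of a density metric with LOCS-III scores, a random 9:1 split, and ROC analysis for the hard-nucleus label) maps directly onto standard SciPy/scikit-learn calls. The sketch below uses synthetic stand-in data, since the study data are not available; the variable names and the NC ≥ 4 "hard" cut-off are assumptions for illustration.

```python
# Sketch of the described analysis on synthetic stand-in data: Spearman
# correlation of nuclear density with a LOCS-III score, then ROC analysis
# for a hypothetical "hard nucleus" label after a 9:1 train/validation split.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1222
and_density = rng.normal(100, 20, n)                                  # stand-in AND
locs_nc = np.clip(and_density / 20 + rng.normal(0, 1, n), 0.1, 6.9)   # stand-in NC score
hard_nucleus = (locs_nc >= 4).astype(int)                             # assumed cut-off

# Spearman's correlation of AND with the LOCS-III nuclear colour score.
rho, p = spearmanr(and_density, locs_nc)
print(f"Spearman r = {rho:.3f}, p = {p:.3g}")

# Random 9:1 split into training and validation sets, as in the study.
x_train, x_val, y_train, y_val = train_test_split(
    and_density, hard_nucleus, test_size=0.1, random_state=0)

# With a single continuous predictor, the ROC curve can be computed on the
# raw density values directly.
auc = roc_auc_score(y_val, x_val)
fpr, tpr, thresholds = roc_curve(y_val, x_val)
print(f"AUC = {auc:.3f}")
```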


2021, Vol 10 (1), pp. 7
Author(s): Boonsong Wanichwecharungruang, Natsuda Kaothanthong, Warisara Pattanapongpaiboon, Pantid Chantangphol, Kasem Seresirikachorn, ...

BMJ Open, 2019, Vol 9 (9), pp. e031313
Author(s): Kazutaka Kamiya, Yuji Ayatsuka, Yudai Kato, Fusako Fujimura, Masahide Takahashi, ...

Objective: To evaluate the diagnostic accuracy of keratoconus using deep learning of the colour-coded maps measured with swept-source anterior segment optical coherence tomography (AS-OCT).
Design: A diagnostic accuracy study.
Setting: A single-centre study.
Participants: A total of 304 keratoconic eyes (grade 1 (108 eyes), 2 (75 eyes), 3 (42 eyes) and 4 (79 eyes)) according to the Amsler-Krumeich classification, and 239 age-matched healthy eyes.
Main outcome measures: The diagnostic accuracy of keratoconus using deep learning of six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry).
Results: Deep learning of the arithmetical mean output of these six maps showed an accuracy of 0.991 in discriminating between normal and keratoconic eyes. For single-map analysis, the posterior elevation map (0.993) showed the highest accuracy in discriminating between normal and keratoconic eyes, followed by the posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976). This deep learning also showed an accuracy of 0.874 in classifying the stage of the disease. The posterior curvature map (0.869) showed the highest accuracy in classifying the stage, followed by the corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820).
Conclusions: Deep learning using the colour-coded maps obtained by AS-OCT effectively discriminates keratoconus from normal corneas and, furthermore, classifies the grade of the disease. This suggests it could become an aid for improving the diagnostic accuracy of keratoconus in daily practice.
Clinical trial registration number: 000034587.
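The "arithmetical mean output" of six per-map classifiers is a simple averaging ensemble. The sketch below shows one plausible realization in PyTorch: a separate CNN per colour-coded map whose softmax outputs are averaged for the final decision. The backbone choice (ResNet-18), class count and input format are assumptions; the paper does not specify its architecture in this abstract.

```python
# Sketch of an averaging ensemble over six colour-coded AS-OCT maps: one CNN
# per map, final decision = arithmetic mean of the six softmax outputs.
# Backbone and class count are hypothetical.
import torch
import torch.nn as nn
import torchvision

MAPS = ["anterior_elevation", "anterior_curvature", "posterior_elevation",
        "posterior_curvature", "total_refractive_power", "pachymetry"]
NUM_CLASSES = 2  # normal vs keratoconus (5 if also grading stages 1-4)

def make_map_classifier() -> nn.Module:
    model = torchvision.models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model

classifiers = {name: make_map_classifier().eval() for name in MAPS}

@torch.no_grad()
def predict(images: dict) -> torch.Tensor:
    """images maps each map name to a (1, 3, H, W) tensor; returns mean softmax."""
    probs = [torch.softmax(classifiers[m](images[m]), dim=1) for m in MAPS]
    return torch.stack(probs).mean(dim=0)  # arithmetic mean over the six maps

batch = {m: torch.rand(1, 3, 224, 224) for m in MAPS}
print(predict(batch))  # averaged class probabilities
```

Averaging softmax outputs is one standard way to combine per-modality models; it lets each map contribute equally, which matches the abstract's description of a mean over the six maps.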


2019, Vol 203, pp. 37-45
Author(s): Huazhu Fu, Mani Baskaran, Yanwu Xu, Stephen Lin, Damon Wing Kee Wong, ...

2021
Author(s): Viney Gupta, Shweta Birla, Toshit Varshney, Bindu I Somarajan, Shikha Gupta, ...

Abstract
Objective: To predict the presence of angle dysgenesis on anterior segment optical coherence tomography (ADoA) using deep learning, and to correlate ADoA with mutations in known glaucoma genes.
Design: A cross-sectional observational study.
Participants: Eight hundred high-definition anterior segment optical coherence tomography (ASOCT) B-scans were included, of which 340 images (one scan per eye) were used to build the machine learning (ML) model and the rest were used for validation of ADoA. Of the 340 images, 170 scans included PCG (n=27), JOAG (n=86) and POAG (n=57) eyes, and the rest were controls. The genetic validation dataset consisted of another 393 images of patients with known mutations, compared with 320 images of healthy controls.
Methods: ADoA was defined as the absence of Schlemm's canal (SC), the presence of extensive hyper-reflectivity over the region of the trabecular meshwork, or a hyper-reflective membrane (HM) over the region of the trabecular meshwork. Deep learning was used to classify a given ASOCT image as either having angle dysgenesis or not. ADoA was then specifically looked for on ASOCT images of patients with mutations in the known glaucoma genes (MYOC, CYP1B1, FOXC1 and LTBP2).
Main outcome measures: Using deep learning to identify ADoA in patients with known gene mutations.
Results: Our three optimized deep learning models showed an accuracy >95%, specificity >97% and sensitivity >96% in detecting angle dysgenesis on ASOCT in the internal test dataset. The areas under the receiver operating characteristic (AUROC) curve, based on the external validation cohort, were 0.91 (95% CI, 0.88 to 0.95), 0.80 (95% CI, 0.75 to 0.86) and 0.86 (95% CI, 0.80 to 0.91) for the three models. Among the patients with known gene mutations, ADoA was observed in all patients with MYOC mutations, as well as in those with CYP1B1, FOXC1 and LTBP2 mutations, compared with only 5% of healthy controls (with no glaucoma mutations).
Conclusions: Three deep learning models were developed for a consensus-based outcome to objectively identify ADoA among glaucoma patients. All patients with MYOC mutations had ADoA as predicted by the models.
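The "consensus-based outcome" over three models can be realized in several ways; the abstract does not say which rule was used. The sketch below assumes a simple majority vote over three binary classifiers, with tiny placeholder networks standing in for the authors' optimized models; every architectural detail here is hypothetical.

```python
# Sketch of a consensus (majority-vote) decision over three independently
# trained binary classifiers for ADoA detection. The vote rule and the
# placeholder networks are assumptions, not the authors' models.
import torch
import torch.nn as nn

def tiny_cnn() -> nn.Module:
    # Hypothetical stand-in for each optimized deep learning model.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(8, 1))

models = [tiny_cnn().eval() for _ in range(3)]

@torch.no_grad()
def consensus_adoa(scan: torch.Tensor, threshold: float = 0.5) -> bool:
    """Majority vote across the three models on one ASOCT B-scan (1, 1, H, W)."""
    votes = [(torch.sigmoid(m(scan)) > threshold).item() for m in models]
    return sum(votes) >= 2  # at least two of three models must agree

print(consensus_adoa(torch.rand(1, 1, 256, 256)))
```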


EP Europace, 2020, Vol 22 (Supplement_1)
Author(s): D Liang, A Haeberlin

Abstract
Background: The immediate effect of radiofrequency catheter ablation (RFA) on the tissue is not directly visualized. Optical coherence tomography (OCT) is an imaging technique that uses light to capture histology-like images with a penetration depth of 1-3 mm in cardiac tissue. Ablation lesions show two specific features in OCT images: the disappearance of birefringence artifacts laterally and a sudden decrease of signal at the bottom (Figure panels A and D). These features can be used not only to recognize ablation lesions in OCT images by eye, but also to train a machine learning model for automatic lesion segmentation. In recent years, deep learning methods, e.g. convolutional neural networks, have been used in medical image analysis and have greatly increased the accuracy of image segmentation. We hypothesize that a convolutional neural network, e.g. U-Net, can locate and segment ablation lesions in OCT images.
Purpose: To investigate whether a deep learning method such as a convolutional neural network optimized for biomedical image processing could be used to segment ablation lesions in OCT images automatically.
Methods: Eight OCT datasets with ablation lesions were used to train the convolutional neural network (U-Net model). After training, the model was validated on two new OCT datasets. Dice coefficients were calculated to evaluate the spatial overlap between the predictions and the ground truth segmentations, which were manually segmented by the researchers (the coefficient ranges from 0 to 1, where 1 means perfect segmentation).
Results: The U-Net model predicted the central parts of lesions automatically and accurately (Dice coefficients of 0.933 and 0.934) compared with the ground truth segmentations (Figure panels B and E). These predictions correctly revealed the depths and diameters of the ablation lesions (Figure panels C and F).
Conclusions: Our results showed that deep learning can facilitate ablation lesion identification and segmentation in OCT images. Deep learning methods, integrated in an OCT system, might enable automatic and precise ablation lesion visualization, which may help to assess ablation lesions during radiofrequency ablation procedures with great precision.
Figure legend: Panels A and D: the central OCT images of the ablation lesions. The blue arrows indicate the lesion bottom, where the image intensity suddenly decreases. The white arrows indicate the birefringence artifacts (the black bands in the grey regions). Panels B and E: the ground truth segmentations of the lesions in panels A and D. Panels C and F: the predictions by the U-Net model of the lesions in panels A and D. A scale bar representing 500 μm is shown in each panel.
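The Dice coefficient used to score the U-Net predictions is Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth). The sketch below implements it on NumPy masks; the toy lesion masks are illustrative stand-ins, not the study data.

```python
# Sketch of the Dice coefficient used above to compare U-Net lesion
# segmentations with manual ground truth: Dice = 2*|A ∩ B| / (|A| + |B|),
# ranging from 0 (no overlap) to 1 (perfect segmentation).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Both inputs are binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping square "lesions".
a = np.zeros((64, 64)); a[10:30, 10:30] = 1
b = np.zeros((64, 64)); b[15:35, 15:35] = 1
print(f"Dice = {dice_coefficient(a, b):.3f}")  # 2*225/800 = 0.562
```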

