Deep Learning-Based Classification of Inherited Retinal Diseases Using Fundus Autofluorescence

2020 ◽  
Vol 9 (10) ◽  
pp. 3303
Author(s):  
Alexandra Miere ◽  
Thomas Le Meur ◽  
Karen Bitton ◽  
Carlotta Pallone ◽  
Oudy Semoun ◽  
...  

Background. In recent years, deep learning has been increasingly applied to a vast array of ophthalmological diseases. Inherited retinal diseases (IRD) are rare genetic conditions with a distinctive phenotype on fundus autofluorescence (FAF) imaging. Our purpose was to automatically classify different IRDs from FAF images using a deep learning algorithm. Methods. In this study, FAF images of patients with retinitis pigmentosa (RP), Best disease (BD), and Stargardt disease (STGD), as well as of a comparable healthy group, were used to train a multilayer deep convolutional neural network (CNN) to differentiate between each type of IRD and normal FAF. The CNN was trained and validated with 389 FAF images, using established augmentation techniques and an Adam optimizer. The resulting classifiers were then tested on 94 previously unseen FAF images. Results. For the inherited retinal disease classifiers, global accuracy was 0.95. The precision-recall area under the curve (PRC-AUC) averaged 0.988 for BD, 0.999 for RP, 0.996 for STGD, and 0.989 for healthy controls. Conclusions. This study describes a deep learning-based algorithm that automatically detects and classifies inherited retinal disease on FAF, and the resulting classifiers showed excellent performance. With further development, this model may serve as a diagnostic tool and provide relevant information for future therapeutic approaches.
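With only 389 training images, the study above leans on "established augmentation techniques" to stretch a small dataset. As a minimal, hedged illustration (not the authors' actual pipeline; all function names here are hypothetical), the sketch below applies random horizontal flips and 90° rotations to a toy 2D image:

```python
import random

def horizontal_flip(image):
    """Mirror each row of a 2D image (list of lists of pixel values)."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate a 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def augment(image, rng=random):
    """Randomly flip and/or rotate one image to enlarge a small training set."""
    out = image
    if rng.random() < 0.5:
        out = horizontal_flip(out)
    for _ in range(rng.randrange(4)):  # 0-3 quarter turns
        out = rotate_90(out)
    return out

img = [[1, 2],
       [3, 4]]
print(horizontal_flip(img))  # [[2, 1], [4, 3]]
print(rotate_90(img))        # [[3, 1], [4, 2]]
```

In practice such transforms are applied on the fly each epoch, so the network rarely sees the exact same pixels twice.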

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Jason Charng ◽  
Di Xiao ◽  
Maryam Mehdizadeh ◽  
Mary S. Attia ◽  
Sukanya Arunachalam ◽  
...  

Abstract. Stargardt disease is one of the most common forms of inherited retinal disease and leads to permanent vision loss. A diagnostic feature of the disease is retinal flecks, which appear hyperautofluorescent on fundus autofluorescence (FAF) imaging. The size and number of these flecks increase with disease progression. Manual segmentation of flecks allows monitoring of disease, but is time-consuming. Herein, we developed and validated a deep learning approach for segmenting these Stargardt flecks (1750 training and 100 validation FAF patches from 37 eyes with Stargardt disease). Testing was done on 10 separate Stargardt FAF images, and we observed good overall agreement between manual and deep learning segmentation in both fleck count and fleck area. Longitudinal data were available for both eyes of 6 patients (average total follow-up time 4.2 years), with both manual and deep learning segmentation performed on all (n = 82) images. Both methods detected a similar upward trend in fleck number and area over time. In conclusion, we demonstrated the feasibility of using deep learning to segment and quantify FAF lesions, laying the foundation for future studies using fleck parameters as a trial endpoint.
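The endpoints above (fleck count and fleck area) are derived from the segmentation output. A minimal sketch of that quantification step, assuming the network has already produced a binary mask (illustrative pure-Python connected-component labeling with 4-connectivity, not the authors' implementation):

```python
def count_flecks(mask):
    """Count connected hyperautofluorescent regions (flecks) and their total
    pixel area in a binary segmentation mask (list of lists of 0/1)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count, area = 0, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1  # new fleck found; flood-fill it
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count, area

mask = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 1, 0, 1]]
print(count_flecks(mask))  # (3, 5): three flecks covering five pixels
```

Multiplying the pixel area by the square of the image scale (mm/pixel) would give a physical lesion area suitable for longitudinal comparison.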


2021 ◽  
Vol 22 (5) ◽  
pp. 2374
Author(s):  
Laura Kuehlewein ◽  
Ditta Zobor ◽  
Katarina Stingl ◽  
Melanie Kempf ◽  
Fadi Nasser ◽  
...  

In this retrospective, longitudinal, observational cohort study, we investigated the phenotypic and genotypic features of retinitis pigmentosa associated with variants in the PDE6B gene. Patients underwent clinical examination and genetic testing at a single tertiary referral center, including best-corrected visual acuity (BCVA), kinetic visual field (VF), full-field electroretinography, full-field stimulus threshold, spectral domain optical coherence tomography, and fundus autofluorescence imaging. The genetic testing comprised candidate gene sequencing, inherited retinal disease gene panel sequencing, whole-genome sequencing, and testing for familial variants by Sanger sequencing. Twenty-four patients with mutations in PDE6B from 21 families were included in the study (mean age at the first visit: 32.1 ± 13.5 years). The majority of variants were putative splicing defects (8/23) and missense (7/23) mutations. Seventy-nine percent (38/48) of eyes had no visual acuity impairment at the first visit. Visual acuity impairment was mild in 4% (2/48), moderate in 13% (6/48), and severe in 4% (2/48). BCVA was symmetrical between the right and left eyes. The kinetic VF measurements were highly symmetrical between the right and left eyes, as was the horizontal ellipsoid zone (EZ) width. Regarding the genetic findings, 43% of the PDE6B variants found in our patients were novel. Thus, this study contributes substantially to the PDE6B mutation spectrum. Visual acuity impairment was no more than mild in 83% of eyes, providing a window of opportunity for investigational new drugs. The EZ width was reduced in all patients and was highly symmetric between the eyes, making it a promising outcome measure. We expect these findings to have implications for the design of future PDE6B-related retinitis pigmentosa (RP) clinical trials.


2021 ◽  
pp. bjophthalmol-2021-319228
Author(s):  
Malena Daich Varela ◽  
Burak Esener ◽  
Shaima A Hashem ◽  
Thales Antonio Cabral de Guimaraes ◽  
Michalis Georgiou ◽  
...  

Ophthalmic genetics is a field that has been rapidly evolving over the last decade, mainly due to the flourishing of translational medicine for inherited retinal diseases (IRD). In this review, we address the different methods by which retinal structure can be objectively and accurately assessed in IRD. We review standard-of-care imaging for these patients: colour fundus photography, fundus autofluorescence imaging and optical coherence tomography (OCT), as well as higher-resolution and/or newer technologies including OCT angiography, adaptive optics imaging, fundus imaging using a range of wavelengths, magnetic resonance imaging, laser speckle flowgraphy and retinal oximetry, illustrating their utility using paradigm genotypes with ongoing therapeutic efforts/trials.


Genes ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 557 ◽  
Author(s):  
Siebren Faber ◽  
Ronald Roepman

The light-sensing outer segments of photoreceptors (PRs) are renewed every ten days due to their high photoactivity, especially that of the cones during daytime vision. This demands a tremendous amount of energy, as well as a high turnover of their main biosynthetic compounds, membranes, and proteins. Therefore, a refined proteostasis network (PN), regulating the protein balance, is crucial for PR viability. In many inherited retinal diseases (IRDs) this balance is disrupted, leading to protein accumulation in the inner segment and eventually the death of PRs. Various studies have focused on therapeutically targeting the different branches of the PR PN to restore the protein balance and ultimately to treat inherited blindness. This review first describes the different branches of the PN in detail. Subsequently, insights are provided on how therapeutic compounds directed against the different PN branches might slow down or even arrest these progressive blinding conditions. These insights are supported by findings on PN modulators in other research disciplines.


2021 ◽  
Author(s):  
Ayumi Koyama ◽  
Dai Miyazaki ◽  
Yuji Nakagawa ◽  
Yuji Ayatsuka ◽  
Hitomi Miyake ◽  
...  

Abstract. Corneal opacities are an important cause of blindness, and their major etiology is infectious keratitis. Slit-lamp examinations are commonly used to determine the causative pathogen; however, their diagnostic accuracy is low even for experienced ophthalmologists. To characterize the “face” of an infected cornea, we adapted a deep learning architecture used for facial recognition and applied it to determine a probability score for a specific pathogen causing keratitis. To capture diverse features and mitigate uncertainty, batches of probability scores from 4 serial images, taken from multiple angles or with fluorescence staining, were learned for score- and decision-level fusion using a gradient boosting decision tree. A total of 4306 slit-lamp images and 312 images obtained from internet publications on keratitis caused by bacteria, fungi, acanthamoeba, and herpes simplex virus (HSV) were studied. The created algorithm had high overall diagnostic accuracy by group K-fold validation, e.g., the accuracy/area under the curve (AUC) was 97.9%/0.995 for acanthamoeba, 90.7%/0.963 for bacteria, 95.0%/0.975 for fungi, and 92.3%/0.946 for HSV, and it was robust even to low-resolution web images. We suggest that our hybrid deep learning-based algorithm can be used as a simple and accurate method for computer-assisted diagnosis of infectious keratitis.
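The key design choice above is fusing per-image probability scores from 4 serial views of the same cornea rather than trusting any single photograph. The sketch below illustrates the fusion idea with simple score averaging, a deliberately simplified stand-in for the paper's gradient-boosting fusion; the pathogen scores are entirely hypothetical:

```python
def fuse_scores(serial_scores):
    """Fuse per-image pathogen probability scores from serial slit-lamp
    images by averaging, then return the top-scoring pathogen.
    (The paper learns this fusion with a gradient boosting decision tree;
    averaging is a minimal stand-in to show the data flow.)"""
    n = len(serial_scores)
    classes = serial_scores[0].keys()
    fused = {c: sum(s[c] for s in serial_scores) / n for c in classes}
    best = max(fused, key=fused.get)
    return best, fused

# Hypothetical per-image scores for 4 serial images of one cornea.
scores = [
    {"bacteria": 0.55, "fungi": 0.25, "acanthamoeba": 0.10, "HSV": 0.10},
    {"bacteria": 0.40, "fungi": 0.35, "acanthamoeba": 0.15, "HSV": 0.10},
    {"bacteria": 0.60, "fungi": 0.20, "acanthamoeba": 0.10, "HSV": 0.10},
    {"bacteria": 0.50, "fungi": 0.30, "acanthamoeba": 0.10, "HSV": 0.10},
]
label, fused = fuse_scores(scores)
print(label)  # bacteria
```

A learned fusion (as in the paper) can additionally exploit disagreement patterns across the serial images, which plain averaging cannot.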


Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 250
Author(s):  
Yejin Jeon ◽  
Kyeorye Lee ◽  
Leonard Sunwoo ◽  
Dongjun Choi ◽  
Dong Yul Oh ◽  
...  

Accurate image interpretation of the Waters’ and Caldwell view radiographs used for sinusitis screening is challenging. Therefore, we developed a deep learning algorithm for diagnosing frontal, ethmoid, and maxillary sinusitis on both Waters’ and Caldwell views. The datasets were split into a training and validation set (n = 1403, 34.3% sinusitis) and a test set (n = 132, 29.5% sinusitis) by temporal separation. The algorithm can simultaneously detect and classify each paranasal sinus on both Waters’ and Caldwell views without manual cropping, and single- and multi-view models were compared. The one-sided DeLong’s test was used to compare AUCs, and the Obuchowski–Rockette model was used to pool the AUCs of the radiologists. Our proposed algorithm satisfactorily diagnosed frontal, ethmoid, and maxillary sinusitis on both Waters’ and Caldwell views (area under the curve (AUC), 0.71 (95% confidence interval, 0.62–0.80), 0.78 (0.72–0.85), and 0.88 (0.84–0.92), respectively). The algorithm yielded a higher AUC than the radiologists for ethmoid and maxillary sinusitis (p = 0.012 and 0.013, respectively). The multi-view model also exhibited a higher AUC than the single Waters’ view model for maxillary sinusitis (p = 0.038). Therefore, our algorithm showed diagnostic performance comparable to that of radiologists and enhances the value of radiography as a first-line imaging modality in sinusitis screening.
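All of the comparisons above are expressed as AUCs. As a reminder of what is actually being compared, the sketch below computes a single ROC AUC via the Mann-Whitney U statistic (the probability that a random positive case outranks a random negative one, ties counting half); this is the standard definition, not code from the study, and the labels/scores are made up:

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 8/9 = 0.888...
```

DeLong's test, used in the study, then compares two such AUCs on the same cases while accounting for their correlation.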


2021 ◽  
Vol 130 ◽  
pp. 104198
Author(s):  
Alexandra Miere ◽  
Vittorio Capuano ◽  
Arthur Kessler ◽  
Olivia Zambrowski ◽  
Camille Jung ◽  
...  

2018 ◽  
Author(s):  
Anisha Keshavan ◽  
Jason D. Yeatman ◽  
Ariel Rokem

Abstract. Research in many fields has become increasingly reliant on large and complex datasets. “Big Data” holds untold promise to rapidly advance science by tackling new questions that cannot be answered with smaller datasets. While powerful, research with Big Data poses unique challenges, as many standard lab protocols rely on experts examining every sample. This is not feasible for large-scale datasets because manual approaches are time-consuming and hence difficult to scale. Meanwhile, automated approaches lack the accuracy of examination by highly trained scientists, which may introduce major errors, sources of noise, and unforeseen biases into these large and complex datasets. Our proposed solution is to 1) start with a small, expertly labeled dataset, 2) amplify labels through web-based tools that engage citizen scientists, and 3) train machine learning on the amplified labels to emulate expert decision making. As a proof of concept, we developed a system to quality-control a large dataset of three-dimensional magnetic resonance images (MRI) of human brains. An initial dataset of 200 brain images labeled by experts was amplified by citizen scientists to label 722 brains, with over 80,000 ratings collected through a simple web interface. A deep learning algorithm was then trained to predict data quality, based on a combination of the citizen scientist labels that accounts for differences in the quality of classification by different citizen scientists. In an ROC analysis (on held-out test data), the deep learning network performed as well as a state-of-the-art, specialized algorithm (MRIQC) for quality control of T1-weighted images, each with an area under the curve of 0.99. Finally, as a specific practical application of the method, we explore how brain image quality relates to the replicability of a well-established relationship between brain volume and age over development.
Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in emerging disciplines where specialized, automated tools do not already exist.
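The core of the pipeline above is step 2: combining many noisy citizen-scientist ratings into one label per image while "accounting for differences in the quality of classification by different citizen scientists". One simple way to do that, sketched here as a hedged illustration (not the paper's exact weighting scheme; rater names and data are hypothetical), is to weight each rater by their agreement with the expert-labeled seed set:

```python
def amplify_labels(expert, ratings):
    """Combine citizen-scientist pass/fail (1/0) ratings into one label per
    image, weighting each rater by agreement with an expert-labeled subset."""
    # Rater weight = fraction of expert-labeled images they rated correctly.
    weights = {}
    for rater, votes in ratings.items():
        overlap = [img for img in votes if img in expert]
        correct = sum(votes[img] == expert[img] for img in overlap)
        weights[rater] = correct / len(overlap) if overlap else 0.5

    # Weighted majority vote over every rated image.
    tallies = {}
    for rater, votes in ratings.items():
        for img, vote in votes.items():
            score, total = tallies.get(img, (0.0, 0.0))
            tallies[img] = (score + weights[rater] * vote,
                            total + weights[rater])
    return {img: int(score / total >= 0.5)
            for img, (score, total) in tallies.items()}

expert = {"a": 1, "b": 0}  # small expert-labeled seed set
ratings = {
    "r1": {"a": 1, "b": 0, "c": 1},  # agrees with experts -> weight 1.0
    "r2": {"a": 0, "b": 1, "c": 0},  # disagrees -> weight 0.0
}
print(amplify_labels(expert, ratings))  # {'a': 1, 'b': 0, 'c': 1}
```

Image "c" was never seen by an expert, yet it receives a label dominated by the rater who proved reliable on the seed set; this is what lets 200 expert-labeled brains amplify to 722.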


2021 ◽  
Vol 10 (24) ◽  
pp. 5742
Author(s):  
Alexandra Miere ◽  
Olivia Zambrowski ◽  
Arthur Kessler ◽  
Carl-Joe Mehanna ◽  
Carlotta Pallone ◽  
...  

(1) Background: Recessive Stargardt disease (STGD1) and multifocal pattern dystrophy simulating Stargardt disease (“pseudo-Stargardt pattern dystrophy”, PSPD) share phenotypic similarities, making clinical diagnosis difficult. Our aim was to assess whether a deep learning classifier pretrained on fundus autofluorescence (FAF) images can assist in distinguishing ABCA4-related STGD1 from PRPH2/RDS-related PSPD, and to compare its performance with that of retinal specialists. (2) Methods: We trained a convolutional neural network (CNN) using 729 FAF images from normal patients or patients with inherited retinal diseases (IRDs). Transfer learning was then used to update the weights of a ResNet50V2 used to classify the 370 FAF images into STGD1 and PSPD. Retina specialists evaluated the same dataset. The performance of the CNN and that of the retina specialists were compared in terms of accuracy, sensitivity, and precision. (3) Results: The CNN accuracy on the test dataset of 111 images was 0.882. The AUROC was 0.890, the precision was 0.883, and the sensitivity was 0.883. The accuracy for retina experts averaged 0.816, whereas for retina fellows it averaged 0.724. (4) Conclusions: This proof-of-concept study demonstrates that, even with small databases, a pretrained CNN is able to distinguish between STGD1 and PSPD with good accuracy.
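The study above reports three metrics for the binary STGD1-vs-PSPD task: accuracy, sensitivity, and precision. A minimal sketch of how those are computed from a confusion matrix (standard definitions, not code from the study; the example labels are hypothetical):

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, sensitivity (recall), and precision for a binary classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # recall on positives
    precision = tp / (tp + fp) if tp + fp else 0.0    # correctness of positives
    return accuracy, sensitivity, precision

# Hypothetical test labels: 1 = STGD1, 0 = PSPD.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(binary_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

Reporting all three together matters for a two-disease classifier: with a balanced test set, accuracy alone could hide a model that systematically confuses one genotype for the other.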

