Assessing Glaucoma Progression Using Machine Learning Trained on Longitudinal Visual Field and Clinical Data

Ophthalmology ◽  
2020 ◽  
Author(s):  
Avyuk Dixit ◽  
Jithin Yohannan ◽  
Michael V. Boland

PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249856
Author(s):  
Scott R. Shuldiner ◽  
Michael V. Boland ◽  
Pradeep Y. Ramulu ◽  
C. Gustavo De Moraes ◽  
Tobias Elze ◽  
...  

Objective: To assess whether machine learning algorithms (MLAs) can predict eyes that will undergo rapid glaucoma progression based on an initial visual field (VF) test.

Design: Retrospective analysis of longitudinal data.

Subjects: 175,786 VFs (22,925 initial VFs) from 14,217 patients who completed ≥5 reliable VFs at academic glaucoma centers were included.

Methods: Summary measures and reliability metrics from the initial VF, together with age, were used to train MLAs designed to predict the likelihood of rapid progression. The neural network model was additionally trained with point-wise threshold data alongside the summary measures, reliability metrics, and age. Eighty percent of eyes were used as a training set and 20% as a test set. Test set performance was assessed using the area under the receiver operating characteristic curve (AUC). Performance of models trained on initial VF data alone was compared with that of models trained on data from the first two VFs.

Main Outcome Measures: Accuracy in predicting future rapid progression, defined as mean deviation (MD) worsening by more than 1 dB/year.

Results: 1,968 eyes (8.6%) underwent rapid progression. The support vector machine model (AUC 0.72 [95% CI 0.70–0.75]) most accurately predicted rapid progression when trained on initial VF data. Artificial neural network, random forest, logistic regression, and naïve Bayes classifiers produced AUCs of 0.72, 0.70, 0.69, and 0.68, respectively. Models trained on data from the first two VFs performed no better than the top models trained on the initial VF alone. Based on the odds ratios (OR) from logistic regression and variable importance plots from the random forest model, older age (OR: 1.41 per 10-year increment [95% CI: 1.34 to 1.08]) and higher pattern standard deviation (OR: 1.31 per 5-dB increment [95% CI: 1.18 to 1.46]) were the variables in the initial VF most strongly associated with rapid progression.

Conclusions: MLAs can be used to predict eyes at risk for rapid progression with modest accuracy based on an initial VF test. Incorporating additional clinical data into the current model may offer opportunities to predict the patients most likely to progress rapidly with even greater accuracy.
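The workflow described above (several standard classifiers trained on baseline VF summary measures, reliability metrics, and age, an 80/20 train/test split, and AUC on the held-out set) maps onto a conventional scikit-learn pipeline. The sketch below is illustrative only: the synthetic data, the feature names, the sample size, and all hyperparameters are assumptions, not the study's actual implementation.

```python
# Illustrative sketch only: random synthetic data stands in for the real
# baseline-VF features, so the printed AUCs are not meaningful.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000  # small synthetic sample; the study used 22,925 baseline VFs
# Hypothetical baseline features: MD, PSD, VFI, fixation losses, false positives, age
X = rng.normal(size=(n, 6))
y = rng.binomial(1, 0.086, size=n)  # ~8.6% rapid progressors (MD loss > 1 dB/year)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "Neural net": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000)),
    "Random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "Naive Bayes": GaussianNB(),
}

# Fit each classifier on the 80% training split and report held-out AUC.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```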


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Alexandru Lavric ◽  
Valentin Popa ◽  
Hidenori Takahashi ◽  
Rossen M. Hazarbassanov ◽  
Siamak Yousefi

The main goal of this study is to identify the association between corneal shape, elevation, and thickness parameters and visual field damage using machine learning. A total of 676 eyes from 568 patients from the Jichi Medical University in Japan were included in this study. Corneal topography, pachymetry, and elevation images were obtained using anterior segment optical coherence tomography (OCT), and visual field tests were collected using standard automated perimetry with the 24-2 Swedish Interactive Threshold Algorithm. The association between corneal structural parameters and visual field damage was investigated using machine learning and evaluated through tenfold cross-validation using the area under the receiver operating characteristic curve (AUC). The average mean deviation was −8.0 dB and the average central corneal thickness (CCT) was 513.1 µm. Using an ensemble machine learning bagged trees classifier, we detected visual field abnormality from corneal parameters with an AUC of 0.83. Using a tree-based machine learning classifier, we detected four visual field severity levels from corneal parameters with an AUC of 0.74. Although CCT and corneal hysteresis have long been accepted as predictors of glaucoma development and future visual field loss, corneal shape and elevation parameters may also predict glaucoma-induced visual function loss.
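The binary task above (detecting visual field abnormality from corneal parameters with a bagged trees ensemble, evaluated by tenfold cross-validated AUC) can be sketched with standard library components. The data, feature names, and ensemble settings below are assumptions for illustration, not the authors' published pipeline.

```python
# Minimal sketch of the binary task (VF abnormality vs. normal) on synthetic
# stand-in data; the printed AUC is therefore not meaningful.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_eyes = 676
# Hypothetical corneal predictors: CCT, anterior/posterior elevation, curvature indices
X = rng.normal(size=(n_eyes, 8))
y = rng.binomial(1, 0.5, size=n_eyes)  # 1 = visual field abnormality

# Bagged decision trees: bootstrap-resampled trees whose votes are averaged.
bagged_trees = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=100,
    random_state=1,
)

# Tenfold cross-validation scored by AUC, mirroring the evaluation described above.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
aucs = cross_val_score(bagged_trees, X, y, cv=cv, scoring="roc_auc")
print(f"Tenfold cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```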


Large volumes of data are generated across many fields and stored as what is collectively called big data. In healthcare, big data comprises the clinical records of every patient, maintained in Electronic Health Records (EHRs). More than 80% of clinical data is unstructured and stored in hundreds of formats, so the central challenge for storage and analysis is handling such large datasets efficiently and at scale. The Hadoop MapReduce framework stores and processes data of any kind quickly; it is not only a storage system but also a platform for data processing, and it is scalable and fault-tolerant. Prediction on these datasets is then handled by machine learning. This work focuses on the Extreme Learning Machine (ELM) and combines it with a Cuckoo Search optimization-based Support Vector Machine (CS-SVM) to predict disease risk. The proposed approach also considers the scalability and accuracy of big data models, and the resulting algorithm performs the required computation with good results in both accuracy and efficiency.
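To make the ELM component concrete, the sketch below shows a minimal extreme learning machine classifier: a randomly initialized hidden layer whose output weights are solved in closed form. It covers only the ELM piece; the cuckoo-search-tuned SVM (CS-SVM) and the Hadoop/MapReduce storage layer described above are omitted, and the data, dimensions, and regularization value are hypothetical.

```python
# Minimal NumPy sketch of an Extreme Learning Machine (ELM) classifier:
# random hidden-layer weights, output weights fitted by regularized least squares.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features, n_hidden = 1000, 20, 200

# Hypothetical EHR-derived patient features and binary disease-risk labels.
X = rng.normal(size=(n_samples, n_features))
y = rng.binomial(1, 0.3, size=n_samples)

# 1. Randomly initialize hidden-layer weights and biases (never trained).
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)

# 2. Compute the hidden-layer activation matrix H with a sigmoid nonlinearity.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# 3. Solve for the output weights beta with a ridge-regularized least-squares fit.
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

# 4. Predict risk scores and threshold at 0.5 for a binary risk label.
scores = H @ beta
pred = (scores >= 0.5).astype(int)
print("Training accuracy on synthetic data:", (pred == y).mean())
```

Because the hidden weights are fixed at random, training reduces to one linear solve, which is what makes ELMs attractive for large-scale data of the kind discussed above.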


2022 ◽  
Vol 226 (1) ◽  
pp. S362-S363
Author(s):  
Matthew Hoffman ◽  
Wei Liu ◽  
Jade Tunguhan ◽  
Ghamar Bitar ◽  
Kaveeta Kumar ◽  
...  

2016 ◽  
Vol 17 (S15) ◽  
Author(s):  
Animesh Acharjee ◽  
Zsuzsanna Ament ◽  
James A. West ◽  
Elizabeth Stanley ◽  
Julian L. Griffin

2020 ◽  
pp. 799-810
Author(s):  
Matthew Nagy ◽  
Nathan Radakovich ◽  
Aziz Nazha

The volume and complexity of scientific and clinical data in oncology have grown markedly over recent years, including but not limited to the realms of electronic health data, radiographic and histologic data, and genomics. This growth holds promise for a deeper understanding of malignancy and, accordingly, more personalized and effective oncologic care. Such goals require, however, the development of new methods to fully make use of the wealth of available data. Improvements in computer processing power and algorithm development have positioned machine learning, a branch of artificial intelligence, to play a prominent role in oncology research and practice. This review provides an overview of the basics of machine learning and highlights current progress and challenges in applying this technology to cancer diagnosis, prognosis, and treatment recommendations, including a discussion of current takeaways for clinicians.


Eye ◽  
2014 ◽  
Vol 28 (8) ◽  
pp. 974-979 ◽  
Author(s):  
J F Kirwan ◽  
A Hustler ◽  
H Bobat ◽  
L Toms ◽  
D P Crabb ◽  
...  
