Detection of smoking status from retinal images; a Convolutional Neural Network study

2019, Vol 9 (1)
Author(s):  
Ehsan Vaghefi ◽  
Song Yang ◽  
Sophie Hill ◽  
Gayl Humphrey ◽  
Natalie Walker ◽  
...  
Author(s):  
Oguz Akbilgic ◽  
Liam Butler ◽  
Ibrahim Karabayir ◽  
Patricia P Chang ◽  
Dalane W Kitzman ◽  
...  

Abstract
Aims: Heart failure (HF) is a leading cause of death. Early intervention is key to reducing HF-related morbidity and mortality. This study assesses the utility of electrocardiograms (ECGs) in HF risk prediction.
Methods and results: Data from the baseline visits (1987–89) of the Atherosclerosis Risk in Communities (ARIC) study were used. Incident hospitalized HF events were ascertained by ICD codes. Participants with good-quality baseline ECGs were included; participants with prevalent HF were excluded. An ECG-artificial intelligence (AI) model to predict HF was created as a deep residual convolutional neural network (CNN) using the standard 12-lead ECG. The area under the receiver operating characteristic curve (AUC) was used to evaluate the prediction models, including the CNN, light gradient boosting machines (LGBM), and Cox proportional hazards regression. A total of 14 613 participants (45% male, 73% white, mean age ± standard deviation 54 ± 5) were eligible, and 803 (5.5%) of them developed HF within 10 years of baseline. The CNN using solely the ECG achieved an AUC of 0.756 (0.717–0.795) on the hold-out test data. The ARIC and Framingham Heart Study (FHS) HF risk calculators yielded AUCs of 0.802 (0.750–0.850) and 0.780 (0.740–0.830), respectively. The highest AUC of 0.818 (0.778–0.859) was obtained when the ECG-AI model output, age, gender, race, body mass index, smoking status, prevalent coronary heart disease, diabetes mellitus, systolic blood pressure, and heart rate were used as predictors of HF within LGBM. The ECG-AI model output was the most important predictor of HF.
Conclusions: An ECG-AI model based solely on information extracted from the ECG independently predicts HF with accuracy comparable to the existing FHS and ARIC risk calculators.
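The AUC used throughout this study can be understood as a rank statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann–Whitney U relation). A minimal sketch of this computation, for illustration only (the study would have used standard library implementations, not this function):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney U relation: the probability that a random
    positive case is scored above a random negative case (ties count 0.5)."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive ranked above negative
    ties = (pos[:, None] == neg[None, :]).sum()     # tied scores
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_score(y_true, y_score))  # 0.75
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation; the study's best LGBM model (0.818) sits between the ECG-only CNN (0.756) and perfect discrimination.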


Author(s):  
Noha A. El‐Hag ◽  
Ahmed Sedik ◽  
Walid El‐Shafai ◽  
Heba M. El‐Hoseny ◽  
Ashraf A. M. Khalaf ◽  
...  

Author(s):  
Aidan Lochbihler

Diabetic retinopathy (DR) is a complication of diabetes caused by damage to the blood vessels of the retina. DR affects 126.6 million people around the world and is the leading cause of blindness. Hard exudates are a type of lesion caused by the damaged blood vessels and are an early marker of DR. In this research, a fully automatic deep learning method was developed to delineate hard exudate lesions in retinal images. This allows the lesion volume to be calculated and thus the DR severity to be determined. This technology would remove the need for doctors in the diagnosis process, making diagnosis faster and more accessible to people around the world. Our dataset consisted of 58 images and was used to train a fully convolutional neural network with a U-net architecture. The U-net consists of a contracting path followed by a symmetric expansive path that was used to learn features of the images. These features were then used to differentiate hard exudates from regular tissue, allowing them to be segmented. After training the model, 26 images were used for testing. The U-net model achieved a Dice similarity coefficient of 67.23 ± 13.60%, a specificity of 99.74 ± 0.25%, and a precision of 75.87 ± 18.14% when comparing the algorithm-generated segmentations to the manually segmented ground truths. These results show that the model precisely delineates the hard exudates and is therefore a viable way to diagnose the severity of DR.
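The three segmentation metrics reported above are all derived from the pixel-wise confusion between a predicted mask and the ground-truth mask. A minimal sketch of how they are computed for binary masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Dice similarity, specificity, and precision for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # lesion pixels correctly found
    fp = np.logical_and(pred, ~truth).sum()   # background called lesion
    fn = np.logical_and(~pred, truth).sum()   # lesion pixels missed
    tn = np.logical_and(~pred, ~truth).sum()  # background correctly ignored
    dice = 2 * tp / (2 * tp + fp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return dice, specificity, precision
```

Because hard exudates occupy a tiny fraction of the retina, specificity is dominated by the large background and stays near 100% even for imperfect masks, which is why Dice and precision are the more informative numbers here.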


2021, pp. 221-227
Author(s):  
Asif Mohammad ◽  
Mahruf Zaman Utso ◽  
Shifat Bin Habib ◽  
Amit Kumar Das

Neural networks in image processing are becoming a more crucial and integral part of machine learning as computational technology and hardware systems advance. Deep learning is also getting attention from the medical sector, as it is a prominent approach for classifying diseases. There is a great deal of research on predicting retinal diseases using deep learning algorithms such as the Convolutional Neural Network (CNN), but there is still little research on predicting diseases such as choroidal neovascularization (CNV), Diabetic Macular Edema (DME), and DRUSEN. In our research paper, a CNN algorithm labeled a dataset of OCT retinal images into four classes: CNV, DME, DRUSEN, and natural retina. We also applied several preprocessing steps to the images before passing them to the neural network. We implemented different models for our algorithm, where the individual models have different hidden layers. At the end of our research, we found that our CNN achieves 93% accuracy.
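A four-class OCT classifier of the kind described can be sketched as a small convolutional network. The layer sizes, input resolution, and framework below are illustrative assumptions, not the authors' actual architecture:

```python
import torch
import torch.nn as nn

class OCTClassifier(nn.Module):
    """Minimal CNN sketch for 4-class OCT classification
    (CNV, DME, DRUSEN, natural retina). Layer sizes are illustrative."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # pool to one value per channel
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)        # (batch, 64)
        return self.classifier(x)              # (batch, num_classes) logits

model = OCTClassifier()
logits = model(torch.randn(2, 1, 128, 128))    # batch of 2 grayscale OCT scans
print(logits.shape)  # torch.Size([2, 4])
```

Training such a model would pair the logits with a cross-entropy loss over the four class labels; varying the number of hidden convolutional layers, as the paper describes, changes the depth of the `features` stack.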


2020, bjophthalmol-2020-317659
Author(s):  
C. Ellis Wisely ◽  
Dong Wang ◽  
Ricardo Henao ◽  
Dilraj S. Grewal ◽  
Atalie C. Thompson ◽  
...  

Background/Aims: To develop a convolutional neural network (CNN) to detect symptomatic Alzheimer's disease (AD) using a combination of multimodal retinal images and patient data.
Methods: Colour maps of ganglion cell-inner plexiform layer (GC-IPL) thickness, superficial capillary plexus (SCP) optical coherence tomography angiography (OCTA) images, and ultra-widefield (UWF) colour and fundus autofluorescence (FAF) scanning laser ophthalmoscopy images were captured in individuals with AD or healthy cognition. A CNN to predict AD diagnosis was developed using multimodal retinal images, OCT and OCTA quantitative data, and patient data.
Results: 284 eyes of 159 subjects (222 eyes from 123 cognitively healthy subjects and 62 eyes from 36 subjects with AD) were used to develop the model. Area under the receiver operating characteristic curve (AUC) values for the predicted probability of AD on the independent test set varied by input: UWF colour, AUC 0.450 (95% CI 0.282, 0.592); OCTA SCP, 0.582 (95% CI 0.440, 0.724); UWF FAF, 0.618 (95% CI 0.462, 0.773); GC-IPL maps, 0.809 (95% CI 0.700, 0.919). A model incorporating all images, quantitative data, and patient data (AUC 0.836 (95% CI 0.729, 0.943)) performed similarly to a model incorporating only all images (AUC 0.829 (95% CI 0.719, 0.939)); a model using GC-IPL maps, quantitative data, and patient data achieved an AUC of 0.841 (95% CI 0.739, 0.943).
Conclusion: Our CNN used multimodal retinal images to successfully predict the diagnosis of symptomatic AD in an independent test set. GC-IPL maps were the most useful single input for prediction. Models including only images performed similarly to models also including quantitative data and patient data.
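Combining image-derived features with quantitative and patient data, as this model does, is commonly implemented as late fusion: features extracted by a CNN are concatenated with the tabular variables before a final prediction head. A minimal sketch under that assumption (dimensions and layers are illustrative, not the paper's architecture):

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Sketch of late fusion: CNN image features concatenated with
    quantitative/patient data before a single-output prediction head."""
    def __init__(self, img_feat_dim=64, tab_dim=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, img_feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(img_feat_dim + tab_dim, 1)  # AD-probability logit

    def forward(self, image, tabular):
        feats = self.cnn(image)                            # (batch, img_feat_dim)
        return self.head(torch.cat([feats, tabular], dim=1))

model = LateFusionNet()
out = model(torch.randn(2, 3, 64, 64),   # batch of 2 colour retinal images
            torch.randn(2, 8))           # matching quantitative/patient data
print(out.shape)  # torch.Size([2, 2]) is wrong; shape is torch.Size([2, 1])
```

Dropping the `tabular` branch reduces this to an images-only model, which is the comparison the paper reports (AUC 0.836 with all inputs vs 0.829 with images alone).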


Author(s):  
Jarmila Pavlovicova ◽  
Slavomir Kajan ◽  
Martin Marko ◽  
Milos Oravec ◽  
Veronika Kurilova

2020
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev
