Juxtaposing Deep Learning Models Efficacy for Ocular Disorder Detection of Diabetic Retinopathy for Ophthalmoscopy

Author(s):  
Subhash Arun Dwivedi ◽  
Amit Attry
2019 ◽  
Vol 137 (3) ◽  
pp. 288 ◽  
Author(s):  
Stuart Keel ◽  
Jinrong Wu ◽  
Pei Ying Lee ◽  
Jane Scheetz ◽  
Mingguang He

2018 ◽  
Vol 11 (1) ◽  
pp. 99-106 ◽  
Author(s):  
Suvajit Dutta ◽  
Bonthala CS Manideep ◽  
Syed Muzamil Basha ◽  
Ronnie D. Caytiles ◽  
N. Ch. S. N. Iyengar

2022 ◽  
pp. 1-17
Author(s):  
Saleh Albahli ◽  
Ghulam Nabi Ahmad Hassan Yar

Diabetic retinopathy is an eye disorder that affects the retina in patients with diabetes mellitus, whose high blood sugar levels may eventually lead to macular edema. The objective of this study is to design and compare several deep learning models that detect the severity of diabetic retinopathy, determine the risk of progression to macular edema, and segment different disease patterns in retina images. The Indian Diabetic Retinopathy Image Dataset (IDRiD) was used for disease grading and segmentation. Since the images of the dataset differ in brightness and contrast, we employed three techniques for generating processed images from the originals: brightness, color, and contrast (BCC) enhancement; color jitter (CJ); and contrast limited adaptive histogram equalization (CLAHE). After image preprocessing, we applied pre-trained ResNet50, VGG16, and VGG19 models to these differently preprocessed images to determine both the severity of the retinopathy and the risk of macular edema. UNet was also applied to segment the different disease types. To train and test these models, the image dataset was divided into training, testing, and validation sets at 70%, 20%, and 10% ratios, respectively. During model training, data augmentation was also applied to increase the number of training images. The results show that for detecting the severity of retinopathy and macular edema, ResNet50 achieved the best validation accuracies, 60.2% and 82.5%, using BCC and original images, respectively. For segmentation, UNet yielded the highest testing accuracies: 65.22% and 91.09% for microaneurysms and hard exudates using BCC images, 84.83% for the optic disc using CJ images, and 59.35% and 89.69% for hemorrhages and soft exudates using CLAHE images, respectively. Thus, image preprocessing can play an important role in improving the efficacy and performance of deep learning models.
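The brightness/contrast enhancement and the 70/20/10 split described above can be sketched as follows. This is a minimal illustration, not the authors' code: `adjust_brightness_contrast` and `split_dataset` are hypothetical helpers, and only a simple linear brightness/contrast step is shown (CLAHE and color jitter are provided by OpenCV's `cv2.createCLAHE` and torchvision's `ColorJitter`, respectively).

```python
import numpy as np

def adjust_brightness_contrast(image, brightness=0.0, contrast=1.0):
    """Linear brightness/contrast transform: out = contrast * in + brightness.

    `image` is a float array in [0, 1]; the result is clipped back to [0, 1].
    """
    return np.clip(contrast * image + brightness, 0.0, 1.0)

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle and split into train/test/validation at the paper's 70/20/10 ratio."""
    rng = np.random.default_rng(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(ratios[0] * n)
    n_test = int(ratios[1] * n)
    return items[:n_train], items[n_train:n_train + n_test], items[n_train + n_test:]
```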


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 948
Author(s):  
Nouar AlDahoul ◽  
Hezerul Abdul Karim ◽  
Myles Joshua Toledo Tan ◽  
Mhd Adel Momo ◽  
Jamie Ledesma Fermin

Diabetes is one of the top ten causes of death among adults worldwide. People with diabetes are prone to eye diseases such as diabetic retinopathy (DR). DR damages the blood vessels in the retina and can result in vision loss. DR grading is an essential step for early diagnosis and effective treatment, and for slowing progression to vision impairment. Existing automatic solutions are mostly based on traditional image processing and machine learning techniques, leaving a substantial gap in more generic detection and grading of DR. Various deep learning models such as convolutional neural networks (CNNs) have previously been utilized for this purpose. To enhance DR grading, this paper proposes a novel solution based on an ensemble of state-of-the-art deep learning models called vision transformers. A challenging public DR dataset from a 2015 Kaggle challenge was used for training and evaluation of the proposed method. This dataset is highly imbalanced, with five levels of severity: No DR, Mild, Moderate, Severe, and Proliferative DR. The experiments conducted showed that the proposed solution outperforms existing methods in terms of precision (47%), recall (45%), F1 score (42%), and Quadratic Weighted Kappa (QWK) (60.2%), while running with a low inference time (1.12 seconds). For this reason, the proposed solution can help examiners grade DR more accurately than manual means.
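A soft-voting ensemble and the QWK metric reported above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming each vision transformer outputs per-class probabilities; the function names are hypothetical, not from the paper, and the paper's exact fusion rule may differ.

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model class probabilities (soft voting) and take the argmax.

    `prob_list` holds one (n_samples, n_classes) array per model, with classes
    ordered: No DR, Mild, Moderate, Severe, Proliferative DR.
    """
    mean_probs = np.mean(prob_list, axis=0)
    return mean_probs.argmax(axis=1)

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic Weighted Kappa: agreement between graders, penalizing
    disagreements by the squared distance between severity levels."""
    O = np.zeros((n_classes, n_classes))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)]) / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # expected by chance
    return 1.0 - (w * O).sum() / (w * E).sum()
```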


2021 ◽  
Vol 3 (Special Issue ICEST 1S) ◽  
pp. 67-72
Author(s):  
Nitin Shivsharanr ◽  
Sanjay Ganorkar

2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: a transformer and a sequence-to-sequence recurrent neural network with attention. Levenshtein augmentation demonstrated increased performance over both non-augmented and conventionally SMILES-randomization-augmented data when used for training the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as "attentional gain": an enhancement in the underlying network's capability to recognize molecular motifs.
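The edit distance at the core of the proposed augmentation can be computed with a standard dynamic program over SMILES character sequences. The sketch below shows only this distance, not the authors' full reactant-product pairing scheme; one could, under that assumption, rank candidate augmented pairs by this score.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic row-by-row DP for edit distance (insertions, deletions,
    substitutions) between two SMILES strings treated as character sequences."""
    prev = list(range(len(b) + 1))        # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                        # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (ca != cb)))  # substitution/match
        prev = curr
    return prev[-1]
```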


2019 ◽  
Author(s):  
Mohammad Rezaei ◽  
Yanjun Li ◽  
Xiaolin Li ◽  
Chenglong Li

Introduction: The ability to discriminate among ligands binding to the same protein target in terms of their relative binding affinity lies at the heart of structure-based drug design. Any improvement in the accuracy and reliability of binding affinity prediction methods decreases the discrepancy between experimental and computational results.
Objectives: The primary objectives were to find the most relevant features affecting binding affinity prediction, to minimize manual feature engineering, and to improve the reliability of binding affinity prediction using efficient deep learning models with tuned hyperparameters.
Methods: The binding site of each target protein was represented as a grid box around its bound ligand. Both binary and distance-dependent occupancies were examined for how an atom affects its neighboring voxels in this grid. A combination of different features, including ANOLEA, ligand elements, and Arpeggio atom types, was used to represent the input. An efficient convolutional neural network (CNN) architecture, DeepAtom, was developed, trained, and tested on the PDBbind v2016 dataset. Additionally, an extended benchmark dataset was compiled to train and evaluate the models.
Results: The best DeepAtom model showed improved accuracy in binding affinity prediction on the PDBbind core subset (Pearson's R = 0.83) and is better than the recent state-of-the-art models in this field. In addition, when the DeepAtom model was trained on our proposed benchmark dataset, it yielded a higher correlation than the baseline, which confirms the value of our model.
Conclusions: The promising results for the predicted binding affinities are expected to pave the way for embedding deep learning models in virtual screening and rational drug design.


2020 ◽  
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning models, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward sophisticated hybrid deep learning models.


Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In machine learning, multi-dimensional deep learning algorithms are widely used in image classification and recognition and have achieved great success. Objective: A method based on multi-dimensional deep learning is employed for the auxiliary diagnosis of thyroid diseases from SPECT images, and the performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three types were collected: hyperthyroidism, normal, and hypothyroidism. In pre-processing, the thyroid region of interest was segmented and the data were augmented to expand the sample size. Four models, a standard CNN, Inception, VGG16, and an RNN, were used to evaluate the deep learning methods. Results: The deep-learning-based methods showed good classification performance, with accuracy of 92.9%-96.2% and AUC of 97.8%-99.6%. VGG16 performed best, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate worked best. Conclusion: The standard CNN, Inception, VGG16, and RNN models are efficient for the classification of thyroid diseases from SPECT images. The accuracy of this deep-learning-based assisted diagnostic method is higher than that of other methods reported in the literature.
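The "changing learning rate" reported as working best is not specified in the abstract; a common choice is step decay, sketched below under that assumption (the schedule and its parameters are illustrative, not the authors').

```python
def step_decay_lr(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Step-decay schedule: multiply the learning rate by `drop`
    every `epochs_per_drop` epochs."""
    return initial_lr * (drop ** (epoch // epochs_per_drop))
```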

