Self-supervised retinal thickness prediction enables deep learning from unlabeled data to boost classification of diabetic retinopathy

2019 ◽  
Author(s):  
Olle G. Holmberg ◽  
Niklas D. Köhler ◽  
Thiago Martins ◽  
Jakob Siedlecki ◽  
Tina Herold ◽  
...  

Abstract: Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. While current leading-edge transfer learning from pre-trained models can help in cases lacking data, it limits design choices and generally results in the use of unnecessarily large models. We propose a novel, self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabeled, cross-modal medical imaging data, which allows accurate and efficient models to be created. We demonstrate this by accurately predicting optical coherence tomography (OCT)-based retinal thickness measurements from simple infrared (IR) fundus images. The learned representations subsequently outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-stage scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on OCT and reached the same classification accuracy using only 25% of the fundus images, without any drawbacks, since OCT is not required for predictions. We expect this concept to apply to other multimodal clinical data (imaging, health records, and genomics) and to corresponding sample-starved learning problems.
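The three-stage scheme described above can be sketched roughly as follows. This is a minimal illustration under assumptions of my own: all layer shapes, module names, and the toy tensors are illustrative, not the authors' architecture.

```python
# Hedged sketch of the cross-modal, three-stage training scheme:
# (1) pretrain an encoder-decoder to predict OCT-derived thickness maps
# from IR fundus images, (2) keep the pretrained encoder, (3) attach a
# small classifier head for the scarce-label DR task. Sizes are toy values.
import torch
import torch.nn as nn

# Stage 1: self-supervised pretraining target is the OCT thickness map.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),  # predicted thickness map
)

ir_image = torch.randn(4, 1, 64, 64)          # batch of IR fundus images
thickness_target = torch.randn(4, 1, 64, 64)  # OCT-derived thickness maps
pred = decoder(encoder(ir_image))
pretrain_loss = nn.functional.mse_loss(pred, thickness_target)

# Stages 2-3: reuse the pretrained encoder; a small head is fine-tuned on
# the limited labeled DR data. At inference only the fundus image is needed.
classifier = nn.Sequential(
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
)
logits = classifier(encoder(ir_image))
print(logits.shape)  # torch.Size([4, 2])
```

Because the classifier consumes only encoder features of the fundus image, OCT is needed during pretraining but not at prediction time, matching the abstract's claim.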

2020 ◽  
Vol 2 (11) ◽  
pp. 719-726 ◽  

Author(s):  
Nikos Tsiknakis ◽  
Dimitris Theodoropoulos ◽  
Georgios Manikis ◽  
Emmanouil Ktistakis ◽  
Ourania Boutsora ◽  
...  

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

Abstract: The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
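The saliency constraint mentioned above can be illustrated with a small sketch: both the input image and its transformed output are thresholded into binary content masks, and mask mismatch is penalized so the transformation cannot relocate or invent content. The threshold value and the exact loss form here are my own assumptions, not UTOM's published formulation.

```python
# Hedged numpy sketch of a saliency (content-preservation) constraint:
# penalize disagreement between the foreground masks of the source image
# and the transformed image. Threshold and loss form are illustrative.
import numpy as np

def saliency_mask(img: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Binary foreground mask: pixels above the saliency threshold."""
    return (img > thresh).astype(np.float32)

def saliency_loss(src: np.ndarray, transformed: np.ndarray) -> float:
    """Mean mismatch between the two content masks (0 = identical layout)."""
    return float(np.mean(np.abs(saliency_mask(src) - saliency_mask(transformed))))

rng = np.random.default_rng(0)
src = rng.random((64, 64))
identical = src.copy()                                   # content preserved
shuffled = rng.permutation(src.ravel()).reshape(64, 64)  # content relocated

print(saliency_loss(src, identical))     # 0.0
print(saliency_loss(src, shuffled) > 0)  # True
```

Added to a CycleGAN-style unpaired objective, a term of this kind discourages the generator from moving structures around even though no paired ground truth exists.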


Author(s):  
Mohammad Shorfuzzaman ◽  
M. Shamim Hossain ◽  
Abdulmotaleb El Saddik

Diabetic retinopathy (DR) is one of the most common causes of vision loss in people who have had diabetes for a prolonged period. Convolutional neural networks (CNNs) have become increasingly popular for computer-aided DR diagnosis using retinal fundus images. While these CNNs are highly reliable, their lack of sufficient explainability prevents them from being widely used in medical practice. In this article, we propose a novel explainable deep learning ensemble model where weights from different models are fused into a single model to extract salient features from various retinal lesions found on fundus images. The extracted features are then fed to a custom classifier for the final diagnosis of DR severity level. The model is trained on an APTOS dataset containing retinal fundus images of various DR grades using a cyclical learning rate strategy with an automatic learning rate finder for decaying the learning rate to improve model accuracy. We develop an explainability approach by leveraging gradient-weighted class activation mapping and Shapley additive explanations to highlight the areas of fundus images that are most indicative of different DR stages. This allows ophthalmologists to view our model's decision in a way that they can understand. Evaluation results using three different datasets (APTOS, MESSIDOR, IDRiD) show the effectiveness of our model, achieving superior classification rates with a high degree of precision (0.970), sensitivity (0.980), and AUC (0.978). We believe that the proposed model, which jointly offers state-of-the-art diagnosis performance and explainability, will address the black-box nature of deep CNN models in robust detection of DR grading.
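The cyclical learning rate strategy referenced above can be sketched with the standard triangular schedule; in practice the bounds would come from an automatic learning-rate range test rather than the fixed values assumed here.

```python
# Hedged sketch of a triangular cyclical learning-rate schedule: the LR
# oscillates linearly between base_lr and max_lr over a fixed cycle.
# The bounds and half-cycle length below are illustrative assumptions.
import math

def triangular_clr(step: int, base_lr: float, max_lr: float, half_cycle: int) -> float:
    """Learning rate rises to max_lr over half_cycle steps, then falls back."""
    cycle = math.floor(1 + step / (2 * half_cycle))
    x = abs(step / half_cycle - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

lrs = [triangular_clr(s, 1e-4, 1e-2, half_cycle=5) for s in range(11)]
# Start of cycle, peak, end of cycle:
print(round(lrs[0], 6), round(lrs[5], 6), round(lrs[10], 6))  # 0.0001 0.01 0.0001
```

Cycling the rate between a low and high bound lets training periodically take larger steps, which is the behavior the automatic LR finder is used to calibrate.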


2021 ◽  
Author(s):  
Pratiksha Shetgaonkar ◽  
Shailendra Aswale ◽  
Saurabh Naik ◽  
Amey Gaonkar ◽  
Swapnil Gawade ◽  
...  

When the pancreas fails to secrete sufficient insulin, the glucose level in the blood becomes either too high or too low. This fluctuation in glucose level affects different body organs, such as the kidneys, brain, and eyes. When complications appear in the eyes due to Diabetes Mellitus (DM), the condition is called Diabetic Retinopathy (DR). DR produces several types of lesions of differing severity, including Microaneurysms (ME), Haemorrhages (HE), and Hard and Soft Exudates (EX and SE). DR is a slow-onset disease that starts with very mild symptoms, becomes moderate over time, and results in complete vision loss if not detected early. Early-stage detection can therefore greatly help prevent vision loss; however, the symptoms of DR cannot be detected with the naked eye. Ophthalmologists have turned to several approaches and algorithms that use different Machine Learning (ML) methods and classifiers to address this disease. The growing prominence of Convolutional Neural Networks (CNNs) and their strength in extracting features from fundus images have attracted many researchers to this problem. Transfer Learning (TL) techniques make it possible to use pre-trained CNNs on datasets with limited training data, a situation especially common in developing countries. In this work, we propose several CNN architectures along with distinct classifiers that segregate the different lesions (ME and EX) in DR images with very promising accuracies.
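The transfer-learning setup described above typically freezes a pretrained backbone and trains only a small new head on the limited dataset. A minimal sketch of that mechanic follows; the backbone here is a stand-in module with toy sizes, not an actual pretrained network or the authors' architecture.

```python
# Hedged transfer-learning sketch: freeze a (stand-in) pretrained CNN
# backbone and train only a small new classifier head on scarce data.
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stands in for a pretrained CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in backbone.parameters():      # freeze the pretrained weights
    p.requires_grad = False

head = nn.Linear(8, 4)               # new head: e.g. 4 lesion/grade classes

trainable = sum(p.numel() for p in head.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in backbone.parameters() if not p.requires_grad)
print(trainable, frozen)  # 36 224
```

Only the head's few parameters are updated, which is what makes pre-trained CNNs usable on the small datasets the abstract mentions.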


2020 ◽  
Vol 5 (1) ◽  
pp. e000569
Author(s):  
Joshua Bridge ◽  
Simon Harding ◽  
Yalin Zheng

Objective: To develop a prognostic tool to predict the progression of age-related eye disease using longitudinal colour fundus imaging.
Methods and analysis: Previous prognostic models using deep learning with imaging data require annotation during training or use only a single time point. We propose a novel deep learning method to predict the progression of diseases using longitudinal imaging data with uneven time intervals, which requires no prior feature extraction. Given previous images from a patient, our method aims to predict whether the patient will progress to the next stage of the disease. The proposed method uses InceptionV3 to produce feature vectors for each image. To account for uneven intervals, a novel interval scaling is proposed. Finally, a recurrent neural network is used to prognosticate the disease. We demonstrate our method on a longitudinal dataset of colour fundus images from 4903 eyes with age-related macular degeneration (AMD), taken from the Age-Related Eye Disease Study, to predict progression to late AMD.
Results: Our method attains a testing sensitivity of 0.878, a specificity of 0.887 and an area under the receiver operating characteristic curve of 0.950. We compare our method to previous methods; our model displays superior performance. Class activation maps display how the network reaches the final decision.
Conclusion: The proposed method can be used to predict progression to advanced AMD at a future visit. Using multiple images at different time points improves predictive performance.
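One way to handle the uneven time intervals mentioned above is to weight each visit's feature vector by its (normalized) gap before the recurrent network sees it. The sketch below is an assumption of my own about what such an interval scaling could look like; the paper's exact formulation may differ.

```python
# Hedged sketch of interval scaling for longitudinal inputs with uneven
# gaps between visits: each visit's feature vector is scaled by its time
# gap relative to the longest gap in the sequence. Illustrative only.
from typing import List

def interval_scale(features: List[List[float]], gaps_days: List[float]) -> List[List[float]]:
    """Weight each visit's features by its gap relative to the longest gap."""
    longest = max(gaps_days)
    return [[x * (gap / longest) for x in visit]
            for visit, gap in zip(features, gaps_days)]

visits = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]   # toy per-visit feature vectors
gaps = [180.0, 360.0, 720.0]                    # days since the previous visit
scaled = interval_scale(visits, gaps)
print(scaled)  # [[0.25, 0.5], [0.5, 1.0], [1.0, 2.0]]
```

The recurrent network then consumes the scaled sequence, so visits separated by longer gaps carry proportionally different weight than closely spaced ones.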


Author(s):  
Alfiya Md. Shaikh

Abstract: Diabetic retinopathy (DR) is a medical condition that damages the retinal tissue of the eye, leading to anything from mild vision impairment to complete blindness, and it has been a leading cause of blindness globally. DR is identified and categorized by segmenting parts of the fundus image or examining the fundus image for the incidence of exudates, lesions, microaneurysms, and so on. Given this burden, it is vital to develop automatic diagnosis systems to support physicians in their work. This research aims to study and summarize various recently proposed techniques for automating the classification of diabetic retinopathy. In the current study, the researchers focus on classifying DR fundus images by severity level, with emphasis on papers that propose models developed using transfer learning.

