Prediction of Different Eye Diseases Based on Fundus Photography via Deep Transfer Learning

2021, Vol 10 (23), pp. 5481
Author(s):  
Chen Guo ◽  
Minzhong Yu ◽  
Jing Li

With recent advances in machine learning, especially deep learning, the prediction of eye diseases from fundus photography using deep convolutional neural networks (DCNNs) has attracted great attention. However, studies that identify the correct disease among several candidates, which better approximates clinical diagnosis than distinguishing one particular eye disease from normal controls, are limited, and the performance of existing algorithms for multi-class classification of fundus images is at best mediocre. Moreover, in many studies covering different eye diseases, labeled images are quite limited, mainly due to patient privacy concerns. In this case, it is infeasible to train huge DCNNs, which usually have millions of parameters. To address these challenges, we propose to use a lightweight deep learning architecture, MobileNetV2, together with transfer learning to distinguish four common eye diseases, namely Glaucoma, Maculopathy, Pathological Myopia, and Retinitis Pigmentosa, from normal controls using a small training dataset. We also apply a visualization approach that highlights the loci most related to the disease labels, making the model more explainable; the areas the algorithm highlights may offer hints for further fundus image studies. Our experimental results show that our system achieves an average accuracy of 96.2%, sensitivity of 90.4%, and specificity of 97.6% on the test data over five independent runs, and outperforms two other deep learning-based algorithms in both accuracy and efficiency.
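
As an illustration of the approach this abstract describes, here is a minimal transfer-learning sketch in PyTorch: an ImageNet-pretrained MobileNetV2 with its convolutional features frozen and a new five-way head (four diseases plus normal). The input size, learning rate, and frozen-backbone choice are assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 5  # Glaucoma, Maculopathy, Pathological Myopia, Retinitis Pigmentosa, Normal

    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False  # freeze ImageNet features; the labeled set is small
    model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)  # new 5-way head

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    # Training loop (sketch): loss = criterion(model(images), labels); loss.backward(); ...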

2021, Vol 7 (3), pp. 59
Author(s):  
Yohanna Rodriguez-Ortega ◽  
Dora M. Ballesteros ◽  
Diego Renza

With the exponential growth of high-quality fake images on social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing is duplicating areas of an image, known as the copy-move technique. Traditional image processing approaches manually search for patterns related to the duplicated content, which limits their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of network depth is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer-learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time.
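
A sketch of the transfer-learning variant in PyTorch: reuse VGG-16's ImageNet convolutional features and retrain only the classifier head for the binary original-versus-forged decision. The frozen-feature setup and the scikit-learn metrics call are illustrative assumptions, not the authors' exact pipeline.

    import torch.nn as nn
    from torchvision import models
    from sklearn.metrics import precision_recall_fscore_support

    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in vgg.features.parameters():
        p.requires_grad = False              # keep ImageNet convolutional features
    vgg.classifier[6] = nn.Linear(4096, 2)   # 2 classes: original / copy-move forged

    # After training, evaluate with the metrics reported in the paper:
    # p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")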


2020, Vol 10 (4), pp. 213
Author(s):  
Ki-Sun Lee ◽  
Jae Young Kim ◽  
Eun-tae Jeon ◽  
Won Suk Choi ◽  
Nan Hee Kim ◽  
...  

According to recent studies, patients with COVID-19 show different feature characteristics on chest X-ray (CXR) than those with other lung diseases. This study evaluated the effect of layer depth and degree of fine-tuning in transfer learning for deep convolutional neural network (CNN)-based COVID-19 screening on CXR, in order to identify efficient transfer learning strategies. The CXR images used in this study were collected from publicly available repositories and classified into three classes: COVID-19, pneumonia, and normal. To evaluate the effect of layer depth within the same CNN architecture, VGG-16 and VGG-19 were used as backbone networks. Each backbone network was then trained with different degrees of fine-tuning and evaluated comparatively. The experimental results showed the highest AUC value for COVID-19 classification, 0.950, in the group in which only two of the five blocks of the VGG-16 backbone network were fine-tuned. In conclusion, when classifying medical images with a limited amount of data, a deeper network does not necessarily guarantee better results; moreover, even with the same pre-trained CNN architecture, an appropriate degree of fine-tuning can help to build an efficient deep learning model.
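
The best-performing strategy, fine-tuning only the last two of VGG-16's five convolutional blocks, can be expressed in a few lines of PyTorch. In torchvision's VGG-16, index 17 of the feature stack marks the start of block 4, so blocks 1 to 3 are frozen below; the three-way head matches the COVID-19 / pneumonia / normal classes. This is a sketch of the strategy, not the authors' exact code.

    import torch.nn as nn
    from torchvision import models

    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for layer in model.features[:17]:         # blocks 1-3 stay frozen
        for p in layer.parameters():
            p.requires_grad = False
    model.classifier[6] = nn.Linear(4096, 3)  # COVID-19, pneumonia, normal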


Electronics, 2019, Vol 8 (3), pp. 256
Author(s):  
Francesco Ponzio ◽  
Gianvito Urgese ◽  
Elisa Ficarra ◽  
Santa Di Cataldo

Thanks to their capability to learn generalizable descriptors directly from images, deep Convolutional Neural Networks (CNNs) seem the ideal solution to most pattern recognition problems. On the other hand, to learn the image representation, CNNs need huge sets of annotated samples, which are infeasible to obtain in many everyday scenarios. This is the case, for example, of Computer-Aided Diagnosis (CAD) systems for digital pathology, where additional challenges are posed by the high variability of cancerous tissue characteristics. In our experiments, state-of-the-art CNNs trained from scratch on histological images were less accurate and less robust to variability than a traditional machine learning framework, highlighting all the issues of fully training deep networks with limited data from real patients. To solve this problem, we designed and compared three transfer learning frameworks that leverage CNNs pre-trained on non-medical images. This approach obtained very high accuracy while requiring far fewer computational resources for training. Our findings demonstrate that transfer learning is a solution to the automated classification of histological samples and to the problem of designing accurate and computationally efficient CAD systems with limited training data.
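
One plausible instance of such a framework, shown as a sketch: use an ImageNet-pretrained CNN as a fixed feature extractor and train a lightweight traditional classifier on the pooled features. The ResNet-50 backbone and the SVM are assumptions for illustration; the paper compares three framework variants not detailed in the abstract.

    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.svm import SVC

    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Identity()              # expose the 2048-d pooled features
    backbone.eval()

    @torch.no_grad()
    def extract(batch):                      # batch: (N, 3, 224, 224) float tensor
        return backbone(batch).cpu().numpy()

    # clf = SVC(kernel="rbf").fit(extract(train_images), train_labels)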


2019, Vol 2019 (1)
Author(s):  
Aleksei Grigorev ◽  
Zhihong Tian ◽  
Seungmin Rho ◽  
Jianxin Xiong ◽  
Shaohui Liu ◽  
...  

Abstract: Person re-identification is one of the most significant problems in computer vision and surveillance systems. The recent success of deep convolutional neural networks in image classification has inspired researchers to apply deep learning to person re-identification. However, most research on this problem considers classical settings in which pedestrians are captured by static surveillance cameras, although there is a growing demand for analyzing images and videos taken by drones. In this paper, we aim to fill this gap and provide insights on person re-identification from drones; to our knowledge, this is the first attempt to tackle the problem under such constraints. We present a person re-identification dataset, named DRone HIT (DRHIT01), collected using a drone. It contains 101 unique pedestrians annotated with their identities, each with about 500 images. We propose a combination of triplet and large-margin Gaussian mixture (L-GM) loss to tackle the drone-based person re-identification problem. The proposed network, equipped with a multi-branch design, channel group learning, and the combined loss functions, is evaluated on the DRHIT01 dataset. In addition, transfer learning from the most popular person re-identification datasets is evaluated. Experimental results demonstrate the importance of transfer learning and show that the proposed model outperforms the classic deep learning approach.
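
A sketch of the combined objective in PyTorch: a standard triplet loss plus a simplified large-margin Gaussian mixture classification loss. This L-GM variant assumes identity covariances for brevity, and the margin alpha and weight lambda_reg are illustrative hyperparameters, not the paper's values.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimplifiedLGMLoss(nn.Module):
        def __init__(self, num_classes, feat_dim, alpha=0.1, lambda_reg=0.01):
            super().__init__()
            self.means = nn.Parameter(torch.randn(num_classes, feat_dim))
            self.alpha, self.lambda_reg = alpha, lambda_reg

        def forward(self, feats, labels):
            # Squared distance from each feature to every class mean: (N, C)
            d = torch.cdist(feats, self.means).pow(2) / 2
            # Inflate the true-class distance by (1 + alpha) to enforce a margin
            margin = torch.ones_like(d)
            margin.scatter_(1, labels.unsqueeze(1), 1 + self.alpha)
            cls_loss = F.cross_entropy(-d * margin, labels)
            lkd_loss = d.gather(1, labels.unsqueeze(1)).mean()  # likelihood term
            return cls_loss + self.lambda_reg * lkd_loss

    triplet = nn.TripletMarginLoss(margin=0.3)
    # total = triplet(anchor, positive, negative) + lgm(features, identity_labels)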


2020, Vol 12 (21), pp. 3628
Author(s):  
Wei Liang ◽  
Tengfei Zhang ◽  
Wenhui Diao ◽  
Xian Sun ◽  
Liangjin Zhao ◽  
...  

Synthetic Aperture Radar (SAR) target classification is an important branch of SAR image interpretation, and deep learning based SAR target classification algorithms have achieved remarkable results. However, the acquisition and annotation of SAR target images are time-consuming and laborious, and sufficient training data are difficult to obtain in many cases. Insufficient training data can cause deep learning based models to over-fit, which severely limits their wide application in SAR target classification. Motivated by this problem, this paper employs transfer learning to transfer prior knowledge learned from a simulated SAR dataset to a real SAR dataset. To overcome the sample restriction problem caused by the poor feature discriminability of real SAR data, a simple and effective sample spectral regularization method is proposed that regularizes the singular values of each SAR image feature to improve feature discriminability. Based on the proposed regularization method, we design a transfer-learning pipeline that leverages the simulated SAR data while acquiring better feature discriminability. The experimental results indicate that the proposed method is feasible for the sample restriction problem in SAR target classification. Furthermore, it improves classification accuracy even when relatively sufficient training data are available, and it can be plugged into any convolutional neural network (CNN) based SAR classification model.
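
Because the abstract does not give the exact penalty, here is one plausible form of a sample spectral regularizer, as a sketch: take the singular values of each sample's feature map and penalize a highly concentrated spectrum (low entropy) so that features remain discriminative. The entropy-based penalty is an assumption, not the paper's formulation.

    import torch

    def spectral_regularizer(feat_maps, eps=1e-8):
        # feat_maps: (N, C, H, W) -> one (C, H*W) matrix per sample
        n, c, h, w = feat_maps.shape
        mats = feat_maps.reshape(n, c, h * w)
        s = torch.linalg.svdvals(mats)          # (N, min(C, H*W)) singular values
        p = s / (s.sum(dim=1, keepdim=True) + eps)
        neg_entropy = (p * (p + eps).log()).sum(dim=1)
        return neg_entropy.mean()               # concentrated spectrum -> higher penalty

    # total_loss = classification_loss + beta * spectral_regularizer(backbone_features)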


Author(s):  
Fouzia Altaf ◽  
Syed M. S. Islam ◽  
Naeem Khalid Janjua

Abstract: Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks; however, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer, and the scarcity of annotated data for medical imaging tasks causes further problems for effective transfer learning. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models in which the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, each model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models; the dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography data set. Our experimental results show more than a 50% reduction in the error rate with our method compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks, achieving 99.49% accuracy for binary classification and 99.24% for multi-class classification.
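
A sketch of the kind of architecture modification described: an input adapter layer maps single-channel radiographs to the three-channel input an ImageNet backbone expects, and the predictions of several such augmented models are averaged. The backbones, adapter design, and soft-voting rule are illustrative assumptions; the dictionary ensemble is not shown.

    import torch
    import torch.nn as nn
    from torchvision import models

    class AdaptedBackbone(nn.Module):
        def __init__(self, backbone, num_classes):
            super().__init__()
            self.adapter = nn.Conv2d(1, 3, kernel_size=1)  # grayscale -> 3 channels
            backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
            self.backbone = backbone

        def forward(self, x):
            return self.backbone(self.adapter(x))

    ensemble = [
        AdaptedBackbone(models.resnet18(weights="IMAGENET1K_V1"), num_classes=2),
        AdaptedBackbone(models.resnet50(weights="IMAGENET1K_V1"), num_classes=2),
    ]

    def ensemble_predict(x):
        with torch.no_grad():
            probs = [m(x).softmax(dim=1) for m in ensemble]
        return torch.stack(probs).mean(dim=0)   # average the members' probabilities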


2021, Vol 4
Author(s):  
Ruqian Hao ◽  
Khashayar Namdar ◽  
Lin Liu ◽  
Farzad Khalvati

Brain tumors are among the leading causes of cancer-related death globally, in both children and adults. Precise classification of brain tumor grade (low-grade versus high-grade glioma) at an early stage plays a key role in successful prognosis and treatment planning. With recent advances in deep learning, artificial intelligence-enabled brain tumor grading systems can assist radiologists in the interpretation of medical images within seconds. The performance of deep learning techniques is, however, highly dependent on the size of the annotated dataset, and it is extremely challenging to label a large quantity of medical images given the complexity and volume of medical data. In this work, we propose a novel transfer learning-based active learning framework to reduce the annotation cost while maintaining the stability and robustness of model performance for brain tumor classification. In this retrospective study, we employed a 2D slice-based approach to train and fine-tune our model on a magnetic resonance imaging (MRI) training dataset of 203 patients and a validation dataset of 66 patients, which served as the baseline. With our proposed method, the model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 82.89% on a separate test dataset of 66 patients, 2.92% higher than the baseline AUC, while saving at least 40% of the labeling cost. To further examine the robustness of our method, we created a balanced dataset and ran the same procedure; the model achieved an AUC of 82% compared with 78.48% for the baseline, confirming the robustness and stability of our transfer learning framework augmented with active learning while significantly reducing the size of the training data.
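
A sketch of the active learning side of such a framework: after each fine-tuning round, score the unlabeled pool by prediction uncertainty and request labels only for the least confident slices. The least-confidence criterion and query size are assumptions, not necessarily the paper's acquisition function.

    import torch

    def least_confident_indices(model, unlabeled_loader, k=100):
        model.eval()
        confidences = []
        with torch.no_grad():
            for images, _ in unlabeled_loader:      # loader must preserve pool order
                probs = model(images).softmax(dim=1)
                confidences.append(probs.max(dim=1).values)  # top-class confidence
        conf = torch.cat(confidences)
        return torch.topk(-conf, k).indices         # the k most uncertain samples

    # for _ in range(num_rounds):
    #     fine_tune(model, labeled_set)
    #     query = least_confident_indices(model, unlabeled_pool_loader)
    #     then move the queried slices from the unlabeled pool to the labeled set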


Symmetry, 2021, Vol 13 (8), pp. 1497
Author(s):  
Harold Achicanoy ◽  
Deisy Chaves ◽  
Maria Trujillo

Deep learning applications in computer vision require large volumes of representative data to obtain state-of-the-art results, owing to the massive number of parameters to optimize in deep models. In industrial applications, however, data are limited and asymmetrically distributed due to rare cases, legal restrictions, and high image-acquisition costs. Data augmentation based on deep generative adversarial networks, such as StyleGAN, has arisen as a way to create training data with more symmetric distributions that may improve the generalization capability of the resulting models. StyleGAN generates highly realistic images in a variety of domains, but building such image generators requires a large amount of data, so transfer learning is used in conjunction with generative models to build them from small datasets. However, there are no reports on the impact of the pre-trained generative model chosen when using transfer learning. In this paper, we evaluate a StyleGAN generative model with transfer learning on different source domains (paintings, portraits, Pokémon, bedrooms, and cats) to generate target images with different levels of content variability: bean seeds (low variability), faces of subjects between 5 and 19 years old (medium variability), and charcoal (high variability). We used the first version of StyleGAN because of the large number of publicly available pre-trained models. The Fréchet Inception Distance was used to evaluate the quality of the synthetic images. We found that StyleGAN with transfer learning produced good-quality images, making it an alternative for generating realistic synthetic images in the evaluated domains.
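
The Fréchet Inception Distance used for evaluation has a closed form over the feature statistics of the real and generated sets, FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2(C1 C2)^(1/2)); a minimal sketch, assuming Inception-v3 pool features have already been extracted for both image sets:

    import numpy as np
    from scipy import linalg

    def fid(feats_real, feats_fake):
        # feats_*: (N, D) arrays of Inception-v3 pool features per image set
        mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
        c1 = np.cov(feats_real, rowvar=False)
        c2 = np.cov(feats_fake, rowvar=False)
        covmean = linalg.sqrtm(c1 @ c2)
        if np.iscomplexobj(covmean):   # sqrtm can pick up tiny imaginary parts
            covmean = covmean.real
        return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))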


2021
Author(s):  
Geoffrey F. Schau ◽  
Hassan Ghani ◽  
Erik A. Burlingame ◽  
Guillaume Thibault ◽  
Joe W. Gray ◽  
...  

Abstract: Accurate diagnosis of metastatic cancer is essential for prescribing optimal control strategies to halt further spread of metastasizing disease. While pathological inspection aided by immunohistochemistry staining provides a valuable gold standard for clinical diagnostics, deep learning methods have emerged as powerful tools for identifying features of whole slide histology that are clinically relevant to a tumor's metastatic origin. Although deep learning models require significant training data to learn effectively, transfer learning paradigms circumvent limited training data by first training a model on related data before fine-tuning on smaller data sets of interest. In this work, we propose a transfer learning approach that trains a convolutional neural network to infer the metastatic origin of tumor tissue from whole slide images of hematoxylin and eosin (H&E) stained tissue sections, and we illustrate the advantages of pre-training the network on whole slide images of primary tumor morphology. We further characterize the statistical dissimilarity between primary and metastatic tumors of various indications on patch-level images to highlight limitations of our indication-specific transfer learning approach. Using a primary-to-metastatic transfer learning approach, we achieved a mean class-specific area under the receiver operating characteristic curve (AUROC) of 0.779, which outperformed comparable models trained only on images of primary tumors (mean AUROC of 0.691) or only on images of metastatic tumors (mean AUROC of 0.675), supporting the use of large-scale primary tumor imaging data in developing computer vision models to characterize the metastatic origin of tumor lesions.
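
A sketch of the primary-to-metastatic transfer described: pre-train a CNN on patches of primary tumors, then fine-tune the same network on the smaller metastatic set at a lower learning rate. The backbone, the number of origin classes, and the freezing choice are illustrative assumptions.

    import torch.nn as nn
    from torchvision import models

    num_origins = 5  # assumed number of metastatic-origin classes
    model = models.resnet34(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, num_origins)

    # Stage 1: train all layers on primary-tumor patches (the large corpus).
    # train(model, primary_patch_loader, lr=1e-4)

    # Stage 2: fine-tune on metastatic patches, optionally freezing early
    # layers to preserve the morphology features learned in stage 1.
    for p in model.layer1.parameters():
        p.requires_grad = False
    # train(model, metastatic_patch_loader, lr=1e-5)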


2021, Vol 11
Author(s):  
Nam Nhut Phan ◽  
Chi-Cheng Huang ◽  
Ling-Ming Tseng ◽  
Eric Y. Chuang

We propose a highly versatile two-step transfer learning pipeline for predicting the gene signature defining the intrinsic breast cancer subtypes from unannotated pathological images. Deciphering breast cancer molecular subtypes with deep learning approaches could provide a convenient and efficient method for diagnosing breast cancer patients, reducing the costs associated with transcriptional profiling and the subtyping discrepancies between IHC assays and mRNA expression. Four pretrained models, namely VGG16, ResNet50, ResNet101, and Xception, were trained on our in-house pathological images from breast cancer patients with recurrence status in the first transfer learning step and on the TCGA-BRCA dataset in the second. For comparison, we also trained a ResNet101 model with weights from ImageNet. The two-step deep learning models showed promising classification of the four breast cancer intrinsic subtypes, with accuracy ranging from 0.68 (ResNet50) to 0.78 (ResNet101) in both the validation and testing sets. Slide-wise prediction achieved an even higher average accuracy of 0.913 with the ResNet101 model. The micro- and macro-average area under the curve (AUC) for these models ranged from 0.88 (ResNet50) to 0.94 (ResNet101), whereas ResNet101_imgnet, weighted with ImageNet, achieved an AUC of 0.92. We also show that the deep learning models' prediction performance is significantly improved relative to the common Genefu tool for breast cancer classification. Our study demonstrates the capability of deep learning models to classify breast cancer intrinsic subtypes without region-of-interest annotation, which will facilitate the clinical applicability of the proposed models.
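
The slide-wise accuracy suggests aggregating tile-level predictions into one slide-level call; a minimal sketch, assuming soft voting over per-tile subtype probabilities (the aggregation rule is an assumption, not stated in the abstract):

    import numpy as np

    def slide_prediction(tile_probs):
        # tile_probs: (n_tiles, 4) per-tile probabilities over the four
        # intrinsic subtypes (e.g., Luminal A, Luminal B, HER2-enriched, Basal-like)
        return int(np.argmax(tile_probs.mean(axis=0)))  # soft vote across tiles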

