A review of transfer learning for medical image classification

Author(s):  
Hee E. Kim ◽  
Alejandro Cosa-Linan ◽  
Mate E. Maros ◽  
Nandhini Santhanam ◽  
Mahboubeh Jannesari ◽  
...  

Abstract This review paper provides an overview of peer-reviewed articles using transfer learning for medical image analysis, and offers guidelines for selecting a convolutional neural network model and its configuration for the image classification task. The data characteristics and the trends in models and transfer learning types in the medical domain are additionally analyzed. Peer-reviewed articles published in English up to December 31, 2020 were retrieved from the PubMed and Web of Science databases. We followed the PRISMA guidelines for paper selection, and 121 studies were deemed eligible for the scope of this review. With respect to the model, the majority of studies (n = 57) empirically evaluated multiple models, followed by studies using deep (n = 33) and shallow (n = 24) models. With respect to the transfer learning approach, the majority of studies (n = 46) empirically searched for the optimal transfer learning configuration, followed by the feature extractor (n = 38), fine-tuning from scratch (n = 27), feature extractor hybrid (n = 7), and fine-tuning (n = 3) approaches. The investigated studies showed that transfer learning achieves performance better than, or at least comparable to, that of medical experts despite limited data sets. We hence encourage data scientists and practitioners to use models such as ResNet or Inception with a feature extractor approach, which saves computational cost and time without degrading predictive power.
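As a minimal sketch of the feature extractor approach recommended above (assuming PyTorch and torchvision 0.13+; the three-class head and learning rate are placeholders, not values from the review), the pre-trained backbone is frozen and only a newly attached classification head is trained:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained ResNet-50 backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Feature extractor approach: freeze all pre-trained weights.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a head for a
# hypothetical medical task, e.g. three diagnostic classes.
num_classes = 3
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head is optimized, which is what saves the
# computational cost and time mentioned in the review.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```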

2021 ◽  
Author(s):  
Akinori Minagi ◽  
Hokuto Hirano ◽  
Kazuhiro Takemoto

Abstract Transfer learning from natural images is commonly used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks are expected to be limited because the training data, which are often required to mount such attacks, are generally unavailable for security and privacy reasons. Nevertheless, we hypothesized that adversarial attacks are also possible using natural images, because pre-trained models do not change significantly after fine-tuning. We focused on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigated whether medical DNN models with transfer learning are vulnerable to universal adversarial perturbations (UAPs) generated from natural images. UAPs from natural images were effective for both non-targeted and targeted attacks. Their performance was significantly higher than that of random controls, although slightly lower than that of UAPs generated from the training images. Vulnerability to UAPs from natural images was observed across different natural image datasets and across different model architectures. The use of transfer learning thus introduces a security hole, which decreases the reliability and safety of computer-based disease diagnosis. Model training from random initialization (without transfer learning) reduced the performance of UAPs from natural images; however, it did not completely eliminate the vulnerability to UAPs. This vulnerability represents a notable security threat.
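A minimal sketch of how such an attack is evaluated at test time (hypothetical PyTorch code, not the authors' implementation; `model`, `test_loader`, and the pre-computed perturbation `uap` are assumed to exist, with the UAP generated beforehand, e.g. from natural images as in the paper):

```python
import torch

def uap_fooling_rate(model, loader, uap, device="cpu"):
    """Fraction of inputs whose prediction flips when a single fixed
    perturbation `uap` (same shape as one image) is added to every
    image: the non-targeted attack success rate."""
    model.eval()
    fooled, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            clean = model(x).argmax(dim=1)
            # Clamp keeps perturbed pixels in the valid [0, 1] range.
            adv = model((x + uap).clamp(0, 1)).argmax(dim=1)
            fooled += (clean != adv).sum().item()
            total += x.size(0)
    return fooled / total

# Usage with assumed objects; the UAP would typically be constrained
# to a small L-infinity budget, e.g. 4/255.
# rate = uap_fooling_rate(model, test_loader, uap)
```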


2020 ◽  
Vol 10 (10) ◽  
pp. 3359 ◽  
Author(s):  
Ibrahem Kandel ◽  
Mauro Castelli

Accurate classification of medical images is of great importance for correct disease diagnosis. Automating medical image classification is of great value because it can provide a second opinion, or even a better classification, when experienced medical staff are in short supply. Convolutional neural networks (CNNs) improved the image classification domain by eliminating the need to manually select the features used to classify images. Training a CNN from scratch requires very large annotated datasets, which are scarce in the medical field. Transfer learning of CNN weights from a large non-medical dataset can help overcome this scarcity. Transfer learning consists of fine-tuning CNN layers to suit the new dataset. The main questions when using transfer learning are how deeply to fine-tune the network and what difference this makes to generalization. In this paper, all experiments were conducted on two histopathology datasets using three state-of-the-art architectures to systematically study the effect of block-wise fine-tuning of CNNs. Results show that fine-tuning the entire network is not always the best option, especially for shallow networks; fine-tuning only the top blocks can save both time and computational power while producing more robust classifiers.
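A minimal sketch of block-wise fine-tuning (hypothetical PyTorch code; the choice of ResNet-50, its four residual stages as "blocks", and the binary head are assumptions, not the paper's exact setup):

```python
import torch.nn as nn
from torchvision import models

def freeze_all_but_top(model, trainable_blocks):
    """Freeze a ResNet except its last `trainable_blocks` residual
    stages (layer1..layer4) and the classifier head."""
    stages = [model.layer1, model.layer2, model.layer3, model.layer4]
    # Freeze everything first, including the stem and BatchNorm params.
    for param in model.parameters():
        param.requires_grad = False
    # Unfreeze only the top blocks and the head.
    for stage in stages[len(stages) - trainable_blocks:]:
        for param in stage.parameters():
            param.requires_grad = True
    for param in model.fc.parameters():
        param.requires_grad = True
    return model

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. a binary histopathology task
model = freeze_all_but_top(model, trainable_blocks=1)  # fine-tune layer4 + head
```

Varying `trainable_blocks` from 1 to 4 approximates the block-wise sweep the paper describes, from near head-only training toward full fine-tuning.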


Author(s):  
Sanket Singh ◽  
Sarthak Jain ◽  
Akshit Khanna ◽  
Anupam Kumar ◽  
Ashish Sharma

Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images. However, due to the locality of the convolution operation, they cannot model long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompted us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets, parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising, and it has tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
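The abstract's CNN-plus-transformer pattern can be sketched as follows (hypothetical PyTorch code, not the TransMed implementation; the shared ResNet-18 backbone, the one-token-per-modality layout, and all dimensions are assumptions): each modality is encoded by the CNN, the pooled feature vectors become tokens, and a transformer encoder models the dependencies between modalities before classification.

```python
import torch
import torch.nn as nn
from torchvision import models

class CnnTransformerClassifier(nn.Module):
    """Hypothetical hybrid: a shared CNN extracts per-modality features,
    a transformer encoder fuses them, and a linear head classifies."""

    def __init__(self, num_classes=3, dim=512):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Drop the classifier; keep the pooled 512-d feature extractor.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        # x: (batch, modalities, 3, H, W)
        b, m = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)  # (b * m, 512)
        tokens = feats.view(b, m, -1)                 # one token per modality
        fused = self.transformer(tokens).mean(dim=1)  # fuse across modalities
        return self.head(fused)

# Smoke test on random data: two modalities, three classes.
logits = CnnTransformerClassifier()(torch.randn(2, 2, 3, 224, 224))
```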


2021 ◽  
Author(s):  
Quoc-Huy Trinh ◽  
Minh-Van Nguyen

We propose a method that applies fine-tuning to a combination of DenseNet and ResNet backbones to classify eight classes covering anatomical landmarks, pathological findings, and endoscopic procedures in the GI tract. Our technique relies on transfer learning and combines two backbones, DenseNet 121 and ResNet 101, to improve the feature extraction used to classify the target classes. After experimenting and evaluating our work, we obtain an F1 score of approximately 0.93 when training on 80,000 images and testing on 4,000.
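A minimal sketch of the dual-backbone idea (hypothetical PyTorch code; fusing the two feature vectors by simple concatenation and the pooling details are assumptions about the authors' exact setup):

```python
import torch
import torch.nn as nn
from torchvision import models

class DualBackboneClassifier(nn.Module):
    """Concatenate DenseNet-121 and ResNet-101 features, then classify."""

    def __init__(self, num_classes=8):
        super().__init__()
        dense = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        res = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
        self.dense_features = nn.Sequential(
            dense.features, nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1))
        self.res_features = nn.Sequential(*list(res.children())[:-1])
        # DenseNet-121 pools to 1024 features, ResNet-101 to 2048.
        self.head = nn.Linear(1024 + 2048, num_classes)

    def forward(self, x):
        d = self.dense_features(x).flatten(1)  # (batch, 1024)
        r = self.res_features(x).flatten(1)    # (batch, 2048)
        return self.head(torch.cat([d, r], dim=1))

# Smoke test: one RGB image, eight GI-tract classes.
logits = DualBackboneClassifier()(torch.randn(1, 3, 224, 224))
```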

