Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data

Cancers ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1590
Author(s):  
Laith Alzubaidi ◽  
Muthana Al-Amidie ◽  
Ahmed Al-Asadi ◽  
Amjad J. Humaidi ◽  
Omran Al-Shamma ◽  
...  

Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by initializing deep learning models with knowledge from a previous task and then fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has proven ineffective: the features learned from natural images do not match those needed for medical images, and it forces the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring that knowledge to train the model on the small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios, skin and breast cancer classification. The reported results show empirically that the proposed approach significantly improves performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved an accuracy of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled image data is limited. Moreover, it can be used to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot-skin images and classify them into two classes, either normal or abnormal (diabetic foot ulcer (DFU)). This model achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double transfer learning.
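A minimal sketch of the two-stage ("double") transfer idea described in this abstract, in PyTorch. The backbone, the rotation-prediction pretext task, and all hyperparameters are illustrative assumptions; the abstract does not specify how the unlabeled pretraining is performed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: pretrain a backbone on large *unlabeled* medical image sets,
# here via an assumed rotation-prediction pretext task (0/90/180/270 deg).
backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # 4 rotation classes
# ... train `backbone` on unlabeled medical images with rotation labels ...

# Stage 2: transfer the learned weights and fine-tune on the small
# labeled target set (e.g., skin cancer classes).
backbone.fc = nn.Linear(2048, 2)  # replace the pretext head
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ... fine-tune on labeled images; for "double" transfer, repeat stage 2
# using the skin cancer model as the starting point for the DFU task ...
```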

2021 ◽  
Vol 27 ◽  
Author(s):  
Qi Zhou ◽  
Wenjie Zhu ◽  
Fuchen Li ◽  
Mingqing Yuan ◽  
Linfeng Zheng ◽  
...  

Objective: To verify the ability of deep learning models to identify five subtypes of intracranial hemorrhage, as well as normal images, in noncontrast enhanced CT. Method: A total of 351 patients (39 in the normal group, 312 in the intracranial hemorrhage group) who underwent noncontrast enhanced CT were selected, yielding 2768 images in total (514 normal, 398 epidural hemorrhage, 501 subdural hemorrhage, 497 intraventricular hemorrhage, 415 cerebral parenchymal hemorrhage, and 443 subarachnoid hemorrhage). Ground truth was based on the diagnostic reports of two radiologists with more than 10 years of experience. The ResNet-18 and DenseNet-121 deep learning models were selected, and transfer learning was used. 80% of the data was used for training, 10% for validating model performance against overfitting, and the remaining 10% for the final evaluation of the model. Assessment indicators included accuracy, sensitivity, specificity, and AUC values. Results: The overall accuracies of the ResNet-18 and DenseNet-121 models were 89.64% and 82.5%, respectively. The sensitivity and specificity for identifying the five subtypes and normal images were generally above 0.80; the exception was the DenseNet-121 model, whose sensitivity for intraventricular hemorrhage and cerebral parenchymal hemorrhage fell below 0.80, at 0.73 and 0.76, respectively. The AUC values of both deep learning models were above 0.9. Conclusion: Deep learning models can accurately identify the five subtypes of intracranial hemorrhage and normal images, and could serve as a new tool for clinical diagnosis in the future.
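A hedged sketch of the reported setup: an ImageNet-pretrained ResNet-18 with its head replaced for the six classes (five hemorrhage subtypes plus normal) and the stated 80/10/10 split. Everything beyond what the abstract states is assumed.

```python
import torch.nn as nn
from torchvision import models

# Six outputs: normal + five intracranial hemorrhage subtypes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 6)

# 80/10/10 train/validation/test split of the 2768 CT images.
n = 2768
n_train, n_val = int(0.8 * n), int(0.1 * n)   # 2214 train, 276 validation
n_test = n - n_train - n_val                  # 278 test
# train_set, val_set, test_set = torch.utils.data.random_split(
#     full_dataset, [n_train, n_val, n_test])
```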


2018 ◽  
Vol 7 (3.33) ◽  
pp. 115 ◽  
Author(s):  
Myung Jae Lim ◽  
Da Eun Kim ◽  
Dong Kun Chung ◽  
Hoon Lim ◽  
Young Man Kwon

Breast cancer is a widespread disease that has killed many people all over the world, and full recovery is possible with early detection. To enable early detection, it is very important to classify accurately whether an image shows breast cancer or not. Recently, deep learning approaches applied to medical images, such as histopathologic images of breast cancer, have shown higher accuracy and efficiency than conventional methods. In this paper, breast cancer histopathological images that are difficult to distinguish were analyzed visually, and among deep learning algorithms, the CNN (Convolutional Neural Network), which is specialized for images, was used to perform a comparative analysis of breast cancer classification. Among CNN architectures, VGG16 and InceptionV3 were used, and transfer learning was applied for their effective application. The data used in this paper is the BreakHis breast cancer histopathological image dataset, labeled as benign or malignant. In the 2-class classification task, InceptionV3 achieved 98% accuracy. It is expected that this deep learning approach will support the development of disease diagnosis through medical images.
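As a sketch of the transfer learning described here (VGG16 shown; InceptionV3 is analogous), in PyTorch. The layer-freezing policy is an assumption, since the exact fine-tuning scheme is not given in the abstract.

```python
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False              # freeze ImageNet conv features
vgg.classifier[6] = nn.Linear(4096, 2)   # new head: benign vs. malignant
# ... fine-tune the classifier on BreakHis histopathology images ...
```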


2020 ◽  
Vol 12 (10) ◽  
pp. 1581 ◽  
Author(s):  
Daniel Perez ◽  
Kazi Islam ◽  
Victoria Hill ◽  
Richard Zimmerman ◽  
Blake Schaeffer ◽  
...  

Coastal ecosystems are critically affected by seagrass, both economically and ecologically. However, reliable seagrass distribution information is lacking in nearly all parts of the world because of the excessive costs associated with its assessment. In this paper, we develop two deep learning models for automatic seagrass distribution quantification based on 8-band satellite imagery. Specifically, we implemented a deep capsule network (DCN) and a deep convolutional neural network (CNN) to assess seagrass distribution through regression. The DCN model first determines whether seagrass is present in the image through classification and, if so, quantifies the seagrass through regression. During training, the regression and classification modules are jointly optimized to achieve end-to-end learning. The CNN model is trained strictly for regression on seagrass and non-seagrass patches. In addition, we propose a transfer learning approach that transfers knowledge from deep models trained at one location to perform seagrass quantification at a different location. We evaluate the proposed methods on three WorldView-2 satellite images taken of coastal areas in Florida. Experimental results show that the proposed DCN and CNN models performed similarly and achieved much better results than a linear regression model and a support vector machine. We also demonstrate that using transfer learning for seagrass quantification significantly improved the results compared to directly applying the deep models to new locations.
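The joint optimization the authors describe can be illustrated with a shared backbone feeding a classification head and a regression head trained under one loss. This PyTorch sketch uses a plain convolutional backbone rather than a capsule network, and all shapes, names, and loss weights are assumptions.

```python
import torch
import torch.nn as nn

class SeagrassNet(nn.Module):
    def __init__(self, in_bands=8):           # 8-band WorldView-2 patches
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(32, 2)       # seagrass vs. non-seagrass
        self.reg_head = nn.Linear(32, 1)       # seagrass quantity

    def forward(self, x):
        z = self.backbone(x)
        return self.cls_head(z), self.reg_head(z)

def joint_loss(logits, pred, label, target, alpha=0.5):
    # Classification and regression terms optimized together, end to end.
    return (nn.functional.cross_entropy(logits, label)
            + alpha * nn.functional.mse_loss(pred.squeeze(1), target))
```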


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Veturia Chiroiu ◽  
Ligia Munteanu ◽  
Rodica Ioan ◽  
Ciprian Dragne ◽  
Luciana Majercsik

Abstract: The inverse sonification problem is investigated in this article in order to detect details in a medical image that are difficult to capture. The direct problem consists of converting the image data into sound signals through a transformation involving three steps: data, acoustic parameters, and sound representations. The inverse problem converts the sound signals back into image data. When the known sonification operator is used, the inverse approach brings no gain in sonified medical imaging: replicating an image that is already known does not help diagnosis or surgical operation. To bring gains to medical imaging, a new sonification operator is advanced in this paper, based on the Burgers equation of sound propagation. Sonified medical imaging is useful in interpreting medical images that, however powerful they may be, are never good enough on their own to aid tumour surgery. The inverse approach is exercised on several medical images used in surgical operations.
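For reference, the (viscous) Burgers equation of sound propagation on which the new operator is built has the standard form below; the abstract does not give the authors' exact parameterization, so this is the textbook statement, with $u(x,t)$ the acoustic field and $\nu$ the dissipation coefficient:

$$\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}$$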


Author(s):  
Fouzia Altaf ◽  
Syed M. S. Islam ◽  
Naeem Khalid Janjua

Abstract: Deep learning has provided numerous breakthroughs in natural imaging tasks. However, its successful application to medical images is severely handicapped by the limited amount of annotated training data. Transfer learning is commonly adopted for medical imaging tasks; however, a large covariate shift between the source domain of natural images and the target domain of medical images results in poor transfer learning, and the scarcity of annotated data for medical imaging tasks causes further problems. To address these problems, we develop an augmented ensemble transfer learning technique that leads to significant performance gains over conventional transfer learning. Our technique uses an ensemble of deep learning models, where the architecture of each network is modified with extra layers to account for the dimensionality change between the images of the source and target data domains. Moreover, each model is hierarchically tuned to the target domain with augmented training data. Along with the network ensemble, we also utilize an ensemble of dictionaries based on features extracted from the augmented models; the dictionary ensemble provides an additional performance boost to our method. We first establish the effectiveness of our technique on the challenging ChestX-ray14 radiography data set. Our experimental results show more than a 50% reduction in error rate with our method as compared to the baseline transfer learning technique. We then apply our technique to a recent COVID-19 data set for binary and multi-class classification tasks, achieving 99.49% accuracy for binary classification and 99.24% for multi-class classification.
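The "extra layers to account for dimensionality change" can be pictured as a small input adapter in front of a pretrained backbone. This PyTorch sketch maps single-channel radiographs to the 3-channel input an ImageNet model expects; the adapter design and backbone choice are assumptions, not the authors' code.

```python
import torch.nn as nn
from torchvision import models

adapter = nn.Conv2d(1, 3, kernel_size=1)   # grayscale X-ray -> 3 channels
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Linear(backbone.classifier.in_features, 14)  # 14 ChestX-ray14 labels
model = nn.Sequential(adapter, backbone)
# An ensemble would hold several such models (plus feature dictionaries)
# and combine their predictions.
```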


2021 ◽  
Vol 6 (5) ◽  
pp. 156-167
Author(s):  
Chetanpal Singh

Deep learning has played a pivotal role in quality healthcare through fast, automated, and accurate medical image analysis. In clinical applications, medical imaging is one of the most important tools, as it allows experts to detect, monitor, and diagnose problems in a patient's body. Understanding medical image analysis requires understanding two things: the implementation of Artificial Neural Networks and Convolutional Neural Networks, and deep learning more broadly. The deep learning approach is gaining attention in the medical imaging field for evaluating the presence or absence of disease in a patient. Mammography images, digital histopathology images, computerized tomography, etc. are some of the areas on which DL implementations focus. This paper surveys the recent developments in this field and offers a critical review, demonstrating in detail the modern deep learning models implemented in medical image analysis. There is little doubt about the promising future of deep learning models, and according to experts, deep learning techniques have outperformed medical experts in numerous tasks. However, deep learning also has drawbacks and challenges that need to be addressed, such as limited datasets. To mitigate such challenges, researchers are working to enhance healthcare by deploying AI.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Guangzhou An ◽  
Masahiro Akiba ◽  
Kazuko Omodaka ◽  
Toru Nakazawa ◽  
Hideo Yokota

Abstract: Deep learning is being employed in disease detection and classification based on medical images for clinical decision making. It typically requires large amounts of labelled data; however, the sample size of such medical image datasets is generally small. This study proposes a novel training framework for building deep learning models of disease detection and classification with small datasets. Our approach is based on a hierarchical classification method in which the healthy/disease information from the first model is effectively utilized to build subsequent models for classifying the disease into its sub-types, via transfer learning. To improve accuracy, multiple input datasets were used, and a stacked ensemble method was employed for the final classification. To demonstrate the method's performance, we used a labelled dataset extracted from volumetric ophthalmic optical coherence tomography data for 156 healthy and 798 glaucoma eyes, in which the glaucoma eyes were further labelled into four sub-types. The average weighted accuracy and Cohen's kappa for three randomized test datasets were 0.839 and 0.809, respectively. Our approach outperformed the flat classification method by 9.7% when using smaller training datasets. The results suggest that the framework can perform accurate classification with a small number of medical images.
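A hedged sketch of the hierarchical transfer step: the healthy/glaucoma model initializes a second model that discriminates the four glaucoma sub-types. The backbone choice and all details beyond the abstract are assumptions.

```python
import torch.nn as nn
from torchvision import models

m1 = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
m1.fc = nn.Linear(m1.fc.in_features, 2)    # healthy vs. glaucoma
# ... train m1 on the full dataset ...

m2 = models.resnet18(weights=None)
m2.fc = nn.Linear(m2.fc.in_features, 2)
m2.load_state_dict(m1.state_dict())        # transfer healthy/disease features
m2.fc = nn.Linear(m2.fc.in_features, 4)    # four glaucoma sub-types
# ... fine-tune m2 on glaucoma eyes; stack several such models
# (one per input dataset) for the final ensembled prediction ...
```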


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5097 ◽  
Author(s):  
Satya P. Singh ◽  
Lipo Wang ◽  
Sukrit Gupta ◽  
Haveesh Goli ◽  
Parasuraman Padmanabhan ◽  
...  

The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This was amplified by rapid advancements in convolutional neural network (CNN) based architectures, which the medical imaging community adopted to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians, and in recent years three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN developed from its machine learning roots, give a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
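As a minimal concrete example of the 3D convolution the survey describes, the PyTorch block below slides a 3D kernel over the depth, height, and width of a volumetric scan; the shapes are illustrative only.

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),  # one input modality
    nn.BatchNorm3d(16),
    nn.ReLU(),
    nn.MaxPool3d(2))                             # halve D, H, W

volume = torch.randn(1, 1, 64, 128, 128)  # (batch, channel, D, H, W), e.g. a CT
out = block(volume)                       # -> torch.Size([1, 16, 32, 64, 64])
```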


2022 ◽  
Vol 30 (1) ◽  
pp. 641-654
Author(s):  
Ali Abd Almisreb ◽  
Nooritawati Md Tahir ◽  
Sherzod Turaev ◽  
Mohammed A. Saleh ◽  
Syed Abdul Mutalib Al Junid

Arabic handwriting differs slightly from the handwriting of other languages; hence it is possible to distinguish handwriting written by a native writer from that of a non-native writer. However, classifying Arabic handwriting is challenging using traditional text recognition algorithms. Thus, this study evaluated and validated the use of deep transfer learning models to overcome this issue. Seven deep transfer learning models, namely AlexNet, GoogleNet, ResNet18, ResNet50, ResNet101, VGG16, and VGG19, were used to determine the most suitable model for classifying handwritten images as written by a native or non-native writer. Two datasets of Arabic handwriting images were used to evaluate and validate the developed models. Training and validation were conducted using both the original and augmented datasets. Results showed that the GoogleNet deep learning model achieved the highest accuracy for both the normal and augmented datasets, attaining 93.2% using the normal data and 95.5% using the augmented data in classifying native handwriting.
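A sketch of the best-performing setup named above: an ImageNet-pretrained GoogLeNet with a binary head, trained with simple augmentation. The specific augmentations are not stated in the abstract and are assumed here.

```python
import torch.nn as nn
from torchvision import models, transforms

net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 2)   # native vs. non-native writer

augment = transforms.Compose([              # assumed augmentation policy
    transforms.RandomRotation(5),
    transforms.RandomAffine(0, translate=(0.05, 0.05)),
    transforms.ToTensor()])
```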

