Skin Cancer Classification using Transfer Learning

Author(s): Hari Kishan Kondaveeti, Prabhat Edupuganti

Author(s): Priscilla Benedetti, Damiano Perri, Marco Simonetti, Osvaldo Gervasi, Gianluca Reali, ...

2020
Author(s): Abhinav Sagar, J Dheeba

Abstract
In this work, we address the problem of skin cancer classification using convolutional neural networks. Many cancer cases are initially misdiagnosed as something else, leading to severe consequences, including the death of the patient. Conversely, patients with unrelated conditions are sometimes suspected of having skin cancer, leading to unnecessary time and money spent on further diagnosis. In this work, we address both of these problems using deep neural networks and a transfer learning architecture. We used publicly available ISIC databases for both training and testing our model. Our work achieves an accuracy of 0.935, a precision of 0.94, a recall of 0.77, an F1 score of 0.85, and a ROC-AUC of 0.861, which is better than previous state-of-the-art approaches.
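The abstract does not give architecture or training details, so the sketch below only illustrates the general transfer-learning setup it describes: an ImageNet-pretrained convolutional base with a small classification head, fine-tuned on ISIC lesion images. The VGG16 backbone, layer sizes, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal transfer-learning sketch for binary skin-lesion classification.
# Backbone choice (VGG16) and hyperparameters are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_model(img_size=(224, 224)):
    # ImageNet-pretrained convolutional base, frozen so only the new head trains.
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(*img_size, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy",
                 tf.keras.metrics.Precision(name="precision"),
                 tf.keras.metrics.Recall(name="recall"),
                 tf.keras.metrics.AUC(name="roc_auc")],
    )
    return model

# Example usage, assuming ISIC images organised one subfolder per class:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "isic/train", image_size=(224, 224), batch_size=32)
# model = build_model()
# model.fit(train_ds, epochs=10)
```

The reported precision, recall, F1, and ROC-AUC correspond to the metrics tracked above; freezing the pretrained base is one common way to train a classifier when labelled data is limited.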


Author(s):  
Zinah Mohsin Arkah ◽  
Dalya S. Al-Dulaimi ◽  
Ahlam R. Khekan

Skin cancer is one of the most dangerous diseases. Early diagnosis of skin cancer can save many people's lives. Manual classification methods are time-consuming and costly. Deep learning has been proposed for the automated classification of skin cancer. Although deep learning has shown impressive performance in several medical imaging tasks, it requires a large number of images to achieve good performance. The skin cancer classification task struggles to provide deep learning with sufficient data because of the expensive annotation process and the experts it requires. One of the most common solutions is transfer learning from models pre-trained on the ImageNet dataset. However, the features learned by these pre-trained models differ from skin cancer image features. To this end, we introduce a novel transfer learning approach: we first train the ImageNet pre-trained models (VGG, GoogleNet, and ResNet50) on a large number of unlabelled skin cancer images, and then train them on a small number of labelled skin images. Our experimental results show that the proposed method is effective, achieving an accuracy of 84% with ResNet50 when trained directly on the small labelled set and 93.7% when trained with the proposed approach.
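The two-stage idea, adapting the ImageNet-pretrained backbone on unlabelled skin images before fine-tuning on the small labelled set, can be sketched as below. The abstract does not specify how the unlabelled stage is trained, so the rotation-prediction pretext task in this sketch is a hypothetical stand-in; the ResNet50 backbone matches a model named in the abstract, but all other details are assumptions.

```python
# Hedged sketch of the two-stage transfer-learning idea:
#   Stage 1: adapt an ImageNet-pretrained ResNet50 on unlabelled skin images
#            via a pretext task (rotation prediction here, as an assumed example).
#   Stage 2: reuse the adapted backbone and fine-tune on the small labelled set.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_backbone(img_size=(224, 224)):
    # Global-average-pooled feature extractor initialised from ImageNet weights.
    return ResNet50(weights="imagenet", include_top=False,
                    pooling="avg", input_shape=(*img_size, 3))

def build_pretext_model(backbone):
    # Predicts which of the four 90-degree rotations was applied (pretext label).
    return models.Sequential([backbone,
                              layers.Dense(4, activation="softmax")])

def build_classifier(backbone, num_classes=2):
    # Final skin-cancer classifier reusing the adapted backbone.
    return models.Sequential([backbone,
                              layers.Dropout(0.5),
                              layers.Dense(num_classes, activation="softmax")])

def rotated_batch(images):
    # images: float tensor (batch, H, W, 3); rotate each image by a random
    # multiple of 90 degrees and return the rotation index as the pretext label.
    k = tf.random.uniform([tf.shape(images)[0]], 0, 4, dtype=tf.int32)
    rotated = tf.map_fn(
        lambda pair: tf.image.rot90(pair[0], k=pair[1]),
        (images, k), fn_output_signature=tf.float32)
    return rotated, k

# Stage 1 usage (unlabelled_ds yields image batches):
#   backbone = build_backbone()
#   pretext = build_pretext_model(backbone)
#   pretext.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
#   for images in unlabelled_ds:
#       x, y = rotated_batch(images)
#       pretext.train_on_batch(x, y)
# Stage 2 usage (labelled_ds yields (image, label) pairs):
#   clf = build_classifier(backbone)
#   clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
#   clf.fit(labelled_ds, epochs=10)
```

Because both stages share the same backbone object, the weights adapted on unlabelled skin images in stage 1 are what the stage-2 classifier starts from, which is the mechanism the abstract credits for the improvement over direct fine-tuning.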

