A Deep Learning-Based Transfer Learning Framework for the Early Detection and Classification of Dermoscopic Images of Melanoma

2021 ◽  
Vol 14 (3) ◽  
pp. 1231-1247
Author(s):  
Lokesh Singh ◽  
Rekh Ram Janghel ◽  
Satya Prakash Sahu

Purpose: Low contrast between lesion and surrounding skin, blurriness, darkened lesion images, and the presence of bubbles and hairs are artifacts that make timely and accurate diagnosis of melanoma challenging. In addition, the strong similarity between nevus lesions and melanoma complicates the investigation of melanoma even for expert dermatologists. Method: In this work, a computer-aided diagnosis system for melanoma detection (CAD-MD) is designed and evaluated for the early and accurate detection of melanoma, using the potential of machine learning and deep learning-based transfer learning for the classification of pigmented skin lesions. The designed CAD-MD comprises preprocessing, segmentation, feature extraction, and classification. Experiments are conducted on dermoscopic images from the publicly available PH2 and ISIC 2016 datasets using machine learning and deep learning-based transfer learning models in two settings: first with the original images, and second with augmented images. Results: Optimal results are obtained on augmented lesion images using machine learning and deep learning models on the PH2 and ISIC-16 datasets. The performance of the CAD-MD system is evaluated using accuracy, sensitivity, specificity, Dice coefficient, and Jaccard index. Conclusion: Empirical results show that the deep learning-based transfer learning model VGG-16 significantly outperformed all other employed models, with an accuracy of 99.1% on the PH2 dataset.
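As an illustration of the transfer-learning step described above, the following is a minimal sketch of a VGG-16 classifier with a frozen ImageNet backbone and on-the-fly augmentation. Directory layout, input resolution, and hyperparameters are assumptions for the example, not the authors' exact configuration.

```python
# Hedged sketch: VGG-16 transfer learning for binary lesion classification
# (melanoma vs. nevus). Paths and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

IMG_SIZE = (224, 224)  # assumed input resolution

# Frozen ImageNet backbone; only the new classification head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # melanoma vs. nevus
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Augmentation mirrors the "augmented images" experiments (flips/rotations assumed).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/ph2/train",  # hypothetical directory
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.2),
])
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))

model.fit(train_ds, epochs=10)
```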

Author(s):  
Jin Li ◽  
Peng Wang ◽  
Yang Zhou ◽  
Hong Liang ◽  
Kuan Luan

The classification of colorectal cancer (CRC) lymph node metastasis (LNM) is a vital clinical issue related to recurrence and the design of treatment plans. However, it remains unclear which method is effective in automatically classifying CRC LNM. Hence, this study compared the performance of existing classification methods, i.e., machine learning, deep learning, and deep transfer learning, to identify the most effective one. A total of 3,364 samples (1,646 positive and 1,718 negative) from Harbin Medical University Cancer Hospital were collected. All patches were manually segmented by experienced radiologists, and the patch size was determined by the lesion region being extracted. Two classes of global features and one class of local features were extracted from the patches. These features were used in eight machine learning algorithms, while the other models used raw image data. Experimental results showed that deep transfer learning was the most effective method, with an accuracy of 0.7583 and an area under the curve of 0.7941. Furthermore, to improve the interpretability of the deep learning and deep transfer learning models, classification heat maps were generated, displaying the regions used for feature extraction by superposition on the raw data. The research findings are expected to promote the use of effective methods in CRC LNM detection and hence facilitate the design of proper treatment plans.
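The comparison of classifiers on hand-crafted features might look like the sketch below. The feature matrix and labels are placeholders, and the classifier choices and settings are illustrative assumptions rather than the study's exact eight algorithms.

```python
# Hedged sketch: comparing conventional classifiers on precomputed
# global/local features for LNM classification. X and y are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3364, 64))    # placeholder feature matrix
y = rng.integers(0, 2, size=3364)  # placeholder LNM labels (0 = negative, 1 = positive)

candidates = {
    "svm": SVC(probability=True),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
}

for name, clf in candidates.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```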


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Joanna Jaworek-Korjakowska ◽  
Paweł Kłeczek

Background. Given its propensity to metastasize, and the lack of effective therapies for most patients with advanced disease, early detection of melanoma is a clinical imperative. Different computer-aided diagnosis (CAD) systems have been proposed to increase the specificity and sensitivity of melanoma detection. Although such computer programs have been developed for different diagnostic algorithms, to the best of our knowledge, a system to classify different melanocytic lesions has not been proposed yet. Method. In this research we present a new approach to the classification of melanocytic lesions. This work is focused not only on categorizing skin lesions as benign or malignant but also on specifying the exact type of a skin lesion, including melanoma, Clark nevus, Spitz/Reed nevus, and blue nevus. The proposed automatic algorithm contains the following steps: image enhancement, lesion segmentation, feature extraction and selection, and classification. Results. The algorithm has been tested on 300 dermoscopic images and achieved an accuracy of 92%, indicating that the proposed approach classified most of the melanocytic lesions correctly. Conclusions. The proposed system can not only help to precisely diagnose the type of a skin mole but also decrease the number of biopsies and reduce the morbidity related to skin lesion excision.
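A staged pipeline of this kind (enhancement, segmentation, feature extraction and selection, classification) could be sketched as follows. The specific features and selector shown here are simplified placeholders, not the authors' descriptors.

```python
# Hedged sketch of an enhancement -> segmentation -> features -> classifier
# pipeline for melanocytic lesions. Feature choices are illustrative only.
import numpy as np
from skimage import color, exposure, filters, measure
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def extract_features(rgb_image):
    """Enhance, segment, and compute a few simple shape/colour features."""
    gray = color.rgb2gray(rgb_image)
    gray = exposure.equalize_adapthist(gray)          # image enhancement
    mask = gray < filters.threshold_otsu(gray)        # lesion segmentation
    props = measure.regionprops(mask.astype(int))[0]  # single lesion region assumed
    asymmetry = props.eccentricity
    relative_area = props.area / mask.size
    mean_rgb = rgb_image[mask].mean(axis=0)
    return np.concatenate([[asymmetry, relative_area], mean_rgb])

# Feature selection and multi-class classification close the pipeline:
clf = make_pipeline(SelectKBest(f_classif, k=4), SVC())
# clf.fit(feature_matrix, lesion_type_labels)  # melanoma, Clark, Spitz/Reed, blue nevus
```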


Author(s):  
Kasikrit Damkliang ◽  
Thakerng Wongsirichot ◽  
Paramee Thongsuksai

Since the introduction of image pattern recognition and computer vision processing, the classification of cancer tissues has been a challenge at pixel level, slide level, and patient level. Conventional machine learning techniques have given way to Deep Learning (DL), a contemporary, state-of-the-art approach to texture classification and localization of cancer tissues. Colorectal Cancer (CRC) is the third-ranked cause of death from cancer worldwide. This paper proposes image-level texture classification of a CRC dataset by deep convolutional neural networks (CNN). Simple DL techniques consisting of transfer learning and fine-tuning were exploited. VGG-16, a Keras pre-trained model with initial weights from ImageNet, was applied. The transfer learning architecture and methods corresponding to VGG-16 are proposed. The training, validation, and testing sets included 5000 images of 150 × 150 pixels. The application set for detection and localization contained 10 large original images of 5000 × 5000 pixels. The model achieved an F1-score and accuracy of 0.96 and 0.99, respectively, and produced a false positive rate of 0.01. AUC-based evaluation was also performed. The model classified ten large, previously unseen images from the application set, with results represented as false-color maps. The reported results show the satisfactory performance of the model. The simplicity of the architecture, configuration, and implementation also contributes to the outcome of this work.
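The detection and localization step on the large application images could proceed by tiling each 5000 × 5000 image into 150 × 150 patches and assembling a per-tile class map, as sketched below. The trained model object and the non-overlapping stride are assumptions; the paper's exact inference procedure may differ.

```python
# Hedged sketch: applying a trained 150x150 patch classifier to a large image
# and building a class map for false-colour visualisation.
import numpy as np

PATCH = 150

def classify_large_image(model, image):
    """image: (H, W, 3) uint8 array; returns a per-tile class map."""
    h, w = image.shape[0] // PATCH, image.shape[1] // PATCH
    class_map = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            tile = image[i * PATCH:(i + 1) * PATCH, j * PATCH:(j + 1) * PATCH]
            probs = model.predict(tile[np.newaxis] / 255.0, verbose=0)
            class_map[i, j] = int(np.argmax(probs))
    return class_map  # visualise e.g. with matplotlib imshow as a false-colour map
```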


Author(s):  
Aditya Khamparia ◽  
Prakash Kumar Singh ◽  
Poonam Rani ◽  
Debabrata Samanta ◽  
Ashish Khanna ◽  
...  

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Chenxi Yang ◽  
Banish D. Ojha ◽  
Nicole D. Aranoff ◽  
Philip Green ◽  
Negar Tavassolian

Abstract This paper introduces a study on the classification of aortic stenosis (AS) based on cardio-mechanical signals collected using non-invasive wearable inertial sensors. Measurements were taken from 21 AS patients and 13 non-AS subjects. A feature analysis framework utilizing Elastic Net was implemented to reduce the features generated by the continuous wavelet transform (CWT). Performance comparisons were conducted among several machine learning (ML) algorithms, including decision tree, random forest, multi-layer perceptron neural network, and extreme gradient boosting. In addition, a two-dimensional convolutional neural network (2D-CNN) was developed using the CWT coefficients as images. The 2D-CNN was implemented both with a custom-built architecture and with a CNN based on MobileNet via transfer learning. After a 95.47% reduction in features, the results show accuracies of 0.87 for the decision tree, 0.96 for the random forest, 0.91 for the multi-layer perceptron neural network, and 0.95 for XGBoost. Within the 2D-CNN framework, the MobileNet transfer-learning model achieves an accuracy of 0.91, while the custom-constructed classifier achieves an accuracy of 0.89. Our results validate the effectiveness of the feature selection and classification framework. They also show promising potential for the application of deep learning tools to the classification of AS.
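One way to sketch the feature path described above is a CWT of each cardio-mechanical signal, flattened into features and then reduced with an Elastic Net-based selector before a conventional classifier. The wavelet, scales, regularization settings, and downstream classifier are assumptions for the example, not the study's configuration.

```python
# Hedged sketch: CWT scalogram features + Elastic Net-based feature reduction
# ahead of a conventional classifier for AS vs. non-AS.
import numpy as np
import pywt
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import ElasticNet
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def cwt_features(signal, scales=np.arange(1, 65), wavelet="morl"):
    coeffs, _ = pywt.cwt(signal, scales, wavelet)  # (n_scales, n_samples)
    return np.abs(coeffs).ravel()

# X: one flattened scalogram per recording; y: AS vs. non-AS labels.
# X = np.stack([cwt_features(s) for s in recordings]); y = labels

selector = SelectFromModel(ElasticNet(alpha=0.01, l1_ratio=0.5))
clf = make_pipeline(selector, RandomForestClassifier(n_estimators=300))
# clf.fit(X, y)
```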


2021 ◽  
Vol 3 (1) ◽  
pp. 206-213
Author(s):  
Qixiang Luo ◽  
Elizabeth A. Holm ◽  
Chen Wang

A machine learning framework was developed to classify complex carbon nanostructures from TEM images.


2021 ◽  
Author(s):  
Kemal Üreten ◽  
Yüksel Maraş ◽  
Semra Duran ◽  
Kevser Gök

Abstract Objectives The aim of this study is to develop a computer-aided diagnosis method to assist physicians in evaluating sacroiliac radiographs. Methods Convolutional neural networks, a deep learning method, were used in this retrospective study. Transfer learning was implemented with pre-trained VGG-16, ResNet-101 and Inception-v3 networks. Normal pelvic radiographs (n = 290) and pelvic radiographs with sacroiliitis (n = 295) were used for the training of the networks. Results The training results were evaluated with the criteria of accuracy, sensitivity, specificity and precision calculated from the confusion matrix, and the AUC (area under the ROC curve) calculated from the ROC (receiver operating characteristic) curve. The pre-trained VGG-16 model yielded accuracy, sensitivity, specificity, precision and AUC values of 89.9%, 90.9%, 88.9%, 88.9% and 0.96 on test images, respectively. The corresponding results were 84.3%, 91.9%, 78.8%, 75.6% and 0.92 with pre-trained ResNet-101, and 82.0%, 79.6%, 85.0%, 86.7% and 0.90 with pre-trained Inception-v3. Conclusions Successful results were obtained with all three models in this study, where transfer learning was applied with pre-trained VGG-16, ResNet-101 and Inception-v3 networks. This method can assist clinicians in the diagnosis of sacroiliitis, provide them with a second objective interpretation, and also reduce the need for advanced imaging methods such as magnetic resonance imaging (MRI).
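The evaluation criteria named above (confusion-matrix metrics plus AUC) can be computed as in the sketch below. The predicted scores and true labels are assumed to come from one of the fine-tuned networks on the held-out test images; the 0.5 decision threshold is an assumption.

```python
# Hedged sketch: accuracy, sensitivity, specificity, and precision from a
# binary confusion matrix, plus AUC from the ROC curve.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall for the sacroiliitis class
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "auc":         roc_auc_score(y_true, y_score),
    }
```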

