Skin Lesion Classification Via Combining Deep Learning Features and Clinical Criteria Representations

2018 ◽  
Author(s):  
Xiaoxiao Li ◽  
Junyan Wu ◽  
Hongda Jiang ◽  
Eric Z. Chen ◽  
Xu Dong ◽  
...  

Abstract: Skin lesions are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of skin lesions through manual visual inspection is extremely challenging. Hence, reliable automatic classification of skin lesions is meaningful for improving pathologists' accuracy and efficiency. In this paper, we propose a two-stage method that combines deep learning features and clinical criteria representations to address the automated skin lesion diagnosis task. Stage 1 - feature encoding: modified deep convolutional neural networks (CNNs; in this paper, we used Dense201 and Res50) were fine-tuned to extract rich global image features. To avoid noise from hair, we developed a lesion segmentation U-Net to mask out the decisive regions and used the masked images as CNN inputs. In addition, color, texture, and morphological features were extracted based on clinical criteria. Stage 2 - feature fusion: LightGBM was used to select the salient features and model parameters, predicting a diagnosis confidence for each category. The proposed deep learning frameworks were evaluated on the ISIC 2018 dataset. Experimental results show that our frameworks achieve promising accuracy.
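The data flow of the two stages above can be sketched in a few lines. This is an illustrative sketch only: the function names, toy image, and list-based representation are assumptions of ours, not the authors' code, and the fused vector would in practice feed a gradient-boosting model such as LightGBM.

```python
def apply_lesion_mask(image, mask):
    """Stage 1 helper: zero out pixels outside the U-Net lesion mask,
    so hair and background do not contaminate the CNN input."""
    return [
        [pixel if keep else 0 for pixel, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

def fuse_features(deep_features, clinical_features):
    """Stage 2 input: concatenate CNN features with clinical-criteria
    features (color, texture, morphology) into a single vector."""
    return list(deep_features) + list(clinical_features)

image = [[10, 20], [30, 40]]   # toy grayscale image
mask = [[1, 0], [0, 1]]        # 1 = lesion pixel from the U-Net
masked = apply_lesion_mask(image, mask)
fused = fuse_features([0.1, 0.2], [0.7, 0.3, 0.5])
```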

Author(s):  
Omar Sedqi Kareem ◽  
Adnan Mohsin Abdulazee ◽  
Diyar Qader Zeebaree

Skin cancer is a significant health problem, with more than 123,000 new cases recorded per year. Melanoma is the deadliest type of skin cancer, leading to more than 9000 deaths annually in the USA. Skin disease diagnosis is becoming difficult due to visual similarities between lesion types. While melanoma is the most dangerous form of skin cancer, other pathology types are also fatal. Automatic melanoma screening systems would be useful for identifying these skin cancers more reliably. Advances in technology and growth in computational capabilities have allowed machine learning and deep learning algorithms to analyze skin lesion images. Deep Convolutional Neural Networks (DCNNs) have achieved encouraging results, yet faster systems for diagnosing fatal diseases are still needed. This paper presents a survey of techniques for skin cancer detection from images, reviewing existing state-of-the-art and effective models for automatically detecting melanoma from skin images. Classification and segmentation results on skin lesion images can be further improved using ensemble deep learning algorithms.


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 1048 ◽  
Author(s):  
Muhammad Ather Iqbal Hussain ◽  
Babar Khan ◽  
Zhijie Wang ◽  
Shenyi Ding

The weave pattern (texture) of woven fabric is considered an important factor in the design and production of high-quality fabric. Traditionally, the recognition of woven fabric has relied on manual visual inspection, which poses many challenges. Moreover, approaches based on early machine learning algorithms depend directly on handcrafted features, which are time-consuming and error-prone to engineer. Hence, an automated system is needed for the classification of woven fabric to improve productivity. In this paper, we propose a deep learning model based on data augmentation and a transfer learning approach for the classification and recognition of woven fabrics. The model uses a residual network (ResNet), in which the fabric texture features are extracted and classified automatically in an end-to-end fashion. We evaluated our model using metrics such as accuracy, balanced accuracy, and F1-score. The experimental results show that the proposed model is robust and achieves state-of-the-art accuracy even when the physical properties of the fabric are changed. Comparison with other baseline approaches and a pretrained VGGNet deep learning model showed that the proposed method achieved higher accuracy when rotational orientations of the fabric and proper lighting effects were considered.
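The three reported metrics are standard and can be computed directly from confusion-matrix counts. A minimal sketch, assuming a binary task and counts `tp`, `fp`, `tn`, `fn` (names of our choosing); the multi-class versions average these per class.

```python
def binary_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)                    # sensitivity
    specificity = tn / (tn + fp)
    balanced_accuracy = (recall + specificity) / 2
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, balanced_accuracy, f1

acc, bal, f1 = binary_metrics(tp=40, fp=10, tn=45, fn=5)
```

Balanced accuracy is the relevant choice when, as here, the classes are not equally represented: plain accuracy can look high while one class is mostly misclassified.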


Diagnostics ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 811
Author(s):  
Muhammad Attique Khan ◽  
Muhammad Sharif ◽  
Tallha Akram ◽  
Robertas Damaševičius ◽  
Rytis Maskeliūnas

Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods capable of classifying multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel Deep Saliency Segmentation method, which uses a custom convolutional neural network (CNN) of ten layers. The generated heat map is converted into a binary image using a thresholding function. Next, the segmented color lesion images are used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified using the Kernel Extreme Learning Machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance is evaluated on the HAM10000 dataset, achieving an accuracy of 90.67%. To prove the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
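The heat-map binarization step mentioned above is a simple elementwise operation. A hedged sketch, with the threshold name `tau` and the list-of-lists representation being our assumptions (the paper does not specify its threshold value):

```python
def threshold_heatmap(heatmap, tau=0.5):
    """Binarize a saliency heat map: values at or above tau become
    lesion pixels (1), everything else background (0)."""
    return [[1 if value >= tau else 0 for value in row] for row in heatmap]

binary = threshold_heatmap([[0.2, 0.8], [0.5, 0.4]], tau=0.5)
```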


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 484 ◽  
Author(s):  
Jose-Agustin Almaraz-Damian ◽  
Volodymyr Ponomaryov ◽  
Sergiy Sadovnychiy ◽  
Heydy Castillejos-Fernandez

In this paper, a new Computer-Aided Detection (CAD) system for the detection and classification of dangerous skin lesions (melanoma type) is presented, through a fusion of handcrafted features related to the medical ABCD rule (Asymmetry, Borders, Colors, Dermatoscopic structures) and deep learning features, employing Mutual Information (MI) measurements. The steps of a CAD system can be summarized as preprocessing, feature extraction, feature fusion, and classification. During the preprocessing step, a lesion image is enhanced, filtered, and segmented with the aim of obtaining the Region of Interest (ROI); in the next step, feature extraction is performed. Handcrafted features such as shape, color, and texture are used as the representation of the ABCD rule, and deep learning features are extracted using a Convolutional Neural Network (CNN) architecture pre-trained on ImageNet (the ILSVRC task). MI measurement is used as a fusion rule, gathering the most important information from both types of features. Finally, in the classification step, several methods are employed, such as Linear Regression (LR), Support Vector Machines (SVMs), and Relevance Vector Machines (RVMs). The designed framework was tested using the ISIC 2018 public dataset. The proposed framework appears to demonstrate improved performance in comparison with other state-of-the-art methods in terms of the accuracy, specificity, and sensitivity obtained in the training and test stages. Additionally, we propose and justify a novel procedure for adjusting the evaluation metrics for the imbalanced datasets that are common for different kinds of skin lesions.
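The MI fusion rule scores how informative each (discretized) feature is about the class label. A minimal sketch of the underlying quantity, assuming discrete-valued inputs; the function name and the bit-based logarithm are our choices, and real features would first be quantized into bins.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """MI (in bits) between two discrete sequences, e.g. a quantized
    feature and the class label; higher MI = more informative feature."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )
```

A feature identical to the label yields 1 bit here, while an independent feature yields 0, which is what makes MI usable as a ranking criterion for fusion.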


2015 ◽  
Vol 1 (1) ◽  
Author(s):  
Eliezer Emanuel Bernart ◽  
Maciel Zortea ◽  
Jacob Scharcanski ◽  
Sergio Bampi

This work presents a novel unsupervised method to segment skin lesions in macroscopic images, grouping the pixels into three disjoint categories, namely 'skin lesion', 'suspicious region', and 'healthy skin'. These skin region categories are obtained by analyzing the agreement of adaptive thresholds applied to the different skin image color channels. Subsequently, we use stochastic texture features to refine the suspicious regions. Our preliminary results are promising and suggest that skin lesions can be segmented successfully with the proposed approach. Also, 'suspicious regions' are identified correctly where it is uncertain whether they belong to skin lesions or to the surrounding healthy skin.
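The threshold-agreement idea can be sketched per pixel: each color channel casts a vote, and only unanimous agreement yields a confident label. This is our illustrative reading of the scheme, not the authors' code; the comparison direction and the per-channel threshold values are assumptions.

```python
def label_pixel(channel_values, thresholds):
    """Vote across color channels: a pixel exceeding its adaptive
    threshold in every channel is 'skin lesion', in no channel
    'healthy skin', and anything in between is 'suspicious region'."""
    votes = sum(v > t for v, t in zip(channel_values, thresholds))
    if votes == len(thresholds):
        return "skin lesion"
    if votes == 0:
        return "healthy skin"
    return "suspicious region"
```

The 'suspicious region' class is exactly the set of pixels where the channels disagree, which is then refined with texture features.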


Author(s):  
E.Yu. Shchetinin ◽  
A.V. Demidova ◽  
D.S. Kulyabov ◽  
L.A. Sevastyanov

In this paper, we propose an approach to the problem of recognizing skin lesions, namely melanoma, based on the analysis of dermoscopic images using deep learning methods. For this purpose, the architecture of a deep convolutional neural network was developed and applied to the processing of dermoscopic images of various skin lesions contained in the HAM10000 dataset. The data under study were preprocessed to eliminate noise and contamination and to adjust the size and format of the images. In addition, since the disease classes are unbalanced, a number of transformations were performed to balance them. The data obtained in this way were divided into two classes: Melanoma and Benign. Computer experiments using the constructed deep neural network on these data have shown that the proposed approach provides 94% accuracy on the test sample, exceeding similar results obtained by other algorithms.
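One simple way to balance unbalanced classes, as described above, is random oversampling of the minority class. A hedged sketch under our own assumptions (the paper also mentions image transformations, i.e. augmentation, which this sketch does not perform):

```python
import random

def oversample(samples, labels, seed=0):
    """Balance classes by randomly duplicating minority-class samples
    until every class matches the majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(items) for items in by_class.values())
    balanced = []
    for y, items in by_class.items():
        extras = [rng.choice(items) for _ in range(target - len(items))]
        balanced.extend((s, y) for s in items + extras)
    return balanced

balanced = oversample(["a", "b", "c", "d"],
                      ["Benign", "Benign", "Benign", "Melanoma"])
```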


2020 ◽  
Vol 39 (3) ◽  
pp. 169-185
Author(s):  
Omran Salih ◽  
Serestina Viriri

Deep learning techniques such as deep convolutional networks have achieved great success in skin lesion segmentation towards melanoma detection. The performance is, however, restrained by distinctive and challenging features of skin lesions, such as irregular and fuzzy borders, the presence of noise and artefacts, and low contrast between lesions. The methods are also restricted by the scarcity of annotated lesion images for training and by limited computing resources. Recent research in convolutional neural networks (CNNs) has provided a variety of new architectures for deep learning. One interesting new architecture is the local binary convolutional neural network (LBCNN), which can reduce the workload of CNNs and improve classification accuracy. The proposed framework employs local binary convolution in the U-Net architecture instead of the standard convolution, in order to obtain a reduced-size deep convolutional encoder-decoder network that adopts a loss function for robust segmentation. The proposed framework replaces the encoder part of U-Net with LBCNN layers. The approach automatically learns and segments complex features of skin lesion images. The encoder stage learns contextual information by extracting discriminative features, while the decoder stage captures the lesion boundaries of the skin images. This addresses the issue of encoder-decoder networks producing coarse segmented output for challenging skin lesion appearances, such as low contrast between healthy and unhealthy tissues and fine-grained variability. It also addresses issues with multi-size, multi-scale, and multi-resolution skin lesion images. The deep convolutional network also adopts a reduced-size design with just five levels of encoding-decoding. This greatly reduces the consumption of computational processing resources. The system was evaluated on the publicly available ISIC and PH2 datasets. The proposed system outperforms most of the existing state-of-the-art methods.
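The workload reduction in LBCNN comes from keeping the convolution filters fixed and binary, so only a 1x1 combination layer is learned. A toy 1D sketch of that idea, entirely our own simplification (real LBCNN layers are 2D, sparse, and trained with backpropagation through the 1x1 weights only):

```python
import random

def local_binary_conv1d(signal, num_filters=4, ksize=3, seed=0):
    """Local binary convolution: the filters are fixed weights drawn
    from {-1, 0, 1} and are never trained."""
    rng = random.Random(seed)
    filters = [[rng.choice([-1, 0, 1]) for _ in range(ksize)]
               for _ in range(num_filters)]
    maps = []
    for f in filters:
        responses = [sum(w * signal[i + j] for j, w in enumerate(f))
                     for i in range(len(signal) - ksize + 1)]
        maps.append([max(0, r) for r in responses])   # ReLU nonlinearity
    return maps

def combine_1x1(maps, weights):
    """The only learned parameters: a 1x1 convolution that linearly
    combines the binary filters' response maps."""
    return [sum(w * m[i] for w, m in zip(weights, maps))
            for i in range(len(maps[0]))]

maps = local_binary_conv1d([1, 2, 3, 4, 5])
out = combine_1x1(maps, [0.25, 0.25, 0.25, 0.25])
```

Because the binary filters carry no trainable parameters, the parameter count of such a layer is just `num_filters` weights in the 1x1 combination, versus `num_filters * ksize` for a standard convolution.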


2020 ◽  
Vol 10 (17) ◽  
pp. 5954
Author(s):  
Edgar Omar Molina-Molina ◽  
Selene Solorza-Calderón ◽  
Josué Álvarez-Borrego

The detection of skin diseases is becoming one of the priority tasks worldwide due to the increasing incidence of skin cancer. Computer-aided diagnosis is a helpful tool for assisting dermatologists in the detection of these kinds of illnesses. This work proposes a computer-aided diagnosis based on 1D fractal signatures of texture-based features combined with deep-learning features obtained via transfer learning based on DenseNet-201. The proposal works with three 1D fractal signatures built per color image. The energy, variance, and entropy of the fractal signatures are combined with 100 features extracted from DenseNet-201 to construct the feature vector. Because the classes in skin lesion image datasets are commonly imbalanced, we use an ensemble of classifiers: K-nearest neighbors and two types of support vector machines. The computer-aided diagnosis output was determined by a linear plurality vote. In this work, we obtained an average accuracy of 97.35%, an average precision of 91.61%, an average sensitivity of 66.45%, and an average specificity of 97.85% on the eight-class classification task of the International Skin Imaging Collaboration (ISIC) archive-2019.
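The plurality-vote combination rule above is straightforward: each classifier names a class, and the class named most often wins. A minimal sketch; the alphabetical tie-break is our assumption, since the paper does not state how ties among the three classifiers are resolved.

```python
from collections import Counter

def plurality_vote(predictions):
    """Combine ensemble outputs (e.g. KNN + two SVMs): the class named
    by the most classifiers wins; ties break alphabetically here."""
    counts = Counter(predictions)
    top = max(counts.values())
    return min(label for label, c in counts.items() if c == top)

decision = plurality_vote(["MEL", "NV", "MEL"])
```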


2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular images. Firstly, a Fully Convolutional Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Secondly, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six-scale anchors (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. First, we present three problems in cardiovascular vulnerable plaque diagnosis, then we demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall rate, precision rate, IoU (Intersection-over-Union) rate, and total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than the 1st team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
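Anchor generation for a multi-scale detector like the one above can be sketched briefly. This is a generic illustration, not the authors' code: the six areas 12²…256² come from the abstract, while the aspect ratios (0.5, 1.0, 2.0) are the Faster R-CNN defaults and an assumption here.

```python
def make_anchors(cx, cy, areas, ratios=(0.5, 1.0, 2.0)):
    """Generate (x1, y1, x2, y2) anchor boxes centred on (cx, cy),
    one per (area, aspect-ratio) pair; w * h always equals the area."""
    anchors = []
    for area in areas:
        for r in ratios:
            w = (area / r) ** 0.5
            h = w * r
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

# six scales from the paper: 12^2 ... 256^2 pixels
areas = [s * s for s in (12, 16, 32, 64, 128, 256)]
anchors = make_anchors(100.0, 100.0, areas)
```

With six scales and three ratios, each feature-map location proposes 18 anchors, which is what lets the detector cover both small and large plaques in one pass.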

