Skin Cancer Classification Through Quantized Color Features and Generative Adversarial Network

2021 ◽  
Vol 12 (3) ◽  
pp. 75-97
Author(s):  
Ananjan Maiti ◽  
Biswajoy Chatterjee ◽  
K. C. Santosh

Early detection of skin cancer through computer-aided diagnosis (CAD) tools reduces the complexity of treatment, as early-stage cases can attain a 95% recovery rate. To build such systems, researchers have adopted various artificial intelligence (AI) techniques designed to identify the best classifiers over diverse feature sets. This investigation covers traditional color-based texture, shape, and statistical features of melanoma skin lesions and contrasts them with the suggested methods. The quantized color feature set of 4992 traits was pre-processed before training the model. The experimental dataset combined images of naevus (1500), melanoma (1000), and basal cell carcinoma (500). The proposed method handled issues like class imbalance with generative adversarial networks (GANs). The recommended color quantization method with synthetic data generation increased the accuracy of popular machine learning models, reaching 97.08% with random forest. The proposed model also preserves decent accuracy with KNN, AdaBoost, and gradient boosting.
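The abstract does not detail how the 4992 quantized color traits are computed. As an illustrative sketch only, the snippet below builds a generic quantized color histogram feature vector, assuming 8 quantization levels per RGB channel (512 bins); the function name, `levels` parameter, and bin count are assumptions, not the paper's actual scheme.

```python
import numpy as np

def quantized_color_histogram(image, levels=8):
    """Quantize each RGB channel into `levels` bins and return the
    normalized joint color histogram as a 1-D feature vector."""
    # Map 0-255 intensities to bin indices 0..levels-1.
    bins = (image.astype(np.int64) * levels) // 256
    # Combine the per-channel bins into one joint index per pixel.
    joint = bins[..., 0] * levels * levels + bins[..., 1] * levels + bins[..., 2]
    hist = np.bincount(joint.ravel(), minlength=levels ** 3)
    return hist / hist.sum()  # normalize so the vector sums to 1

# Example: a synthetic 32x32 RGB image stands in for a lesion crop.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
features = quantized_color_histogram(img)  # 512-dimensional feature vector
```

A vector like this would then be fed to the classifiers the abstract lists (random forest, KNN, AdaBoost, gradient boosting).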

2021 ◽  
Vol 11 (2) ◽  
pp. 760
Author(s):  
Yun-ji Kim ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho

Gastric cancer has a high mortality rate worldwide, but it can be prevented with early detection through regular gastroscopy. Herein, we propose a deep learning-based computer-aided diagnosis (CADx) system applying data augmentation to help doctors classify gastroscopy images as normal or abnormal. To improve the performance of deep learning, a large amount of training data is required. However, the collection of medical data, owing to its nature, is highly expensive and time consuming. Therefore, data were generated through deep convolutional generative adversarial networks (DCGAN), and 25 augmentation policies optimized for the CIFAR-10 dataset were implemented through AutoAugment to augment the data. Accordingly, each gastroscopy image was augmented, only high-quality images were selected through an image quality-measurement method, and gastroscopy images were classified as normal or abnormal through the Xception network. We compared the performance of the unaugmented original training dataset, the dataset generated through the DCGAN, the dataset augmented through the CIFAR-10 augmentation policies, and the dataset combining the two methods. The dataset combining the two methods delivered the best performance in terms of accuracy (0.851), an improvement of 0.06 over the original training dataset. We confirmed that augmenting data through the DCGAN and CIFAR-10 augmentation policies is most suitable for the classification model for normal and abnormal gastric endoscopy images. The proposed method not only alleviates the medical-data problem but also improves the accuracy of gastric disease diagnosis.
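The abstract says only high-quality generated images were kept via an image quality-measurement method, without naming the metric. A minimal sketch of such a filter, assuming a variance-of-Laplacian sharpness score (a common no-reference quality metric, not necessarily the one the authors used):

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of a 4-neighbour Laplacian response.
    Blurry or flat images score near zero; sharp images score high."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_high_quality(images, threshold):
    """Keep only generated images whose sharpness exceeds `threshold`."""
    return [im for im in images if laplacian_variance(im) > threshold]
```

In a pipeline like the one described, this filter would sit between the DCGAN's output and the Xception classifier's training set.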


2020 ◽  
Author(s):  
Yang Liu ◽  
Lu Meng ◽  
Jianping Zhong

Abstract
Background: For deep learning, the size of the dataset greatly affects the final training result. However, in the field of computer-aided diagnosis, medical image datasets are often limited and even scarce.
Methods: We aim to synthesize medical images and enlarge the size of the medical image dataset. In the present study, we synthesized liver CT images with a tumor based on the mask attention generative adversarial network (MAGAN). We masked the pixels of the liver tumor in the image as the attention map, and both the original image and the attention map were fed into the generator network to obtain the synthesized images. Then the original images, the attention map, and the synthesized images were all fed into the discriminator network to determine whether the synthesized images were real or fake. Finally, the generator network can be used to synthesize liver CT images with a tumor.
Results: The experiments showed that our method outperformed other state-of-the-art methods and achieved a mean peak signal-to-noise ratio (PSNR) of 64.72 dB.
Conclusions: These results indicate that our method can synthesize liver CT images with tumors and build large medical image datasets, which may facilitate progress in medical image analysis and computer-aided diagnosis.
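The reported 64.72 dB figure is a mean peak signal-to-noise ratio. The standard PSNR computation can be sketched in a few lines; the function name and `max_value` default below are illustrative, not taken from the paper.

```python
import numpy as np

def psnr(reference, synthesized, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images.
    Higher values mean the synthesized image is closer to the reference."""
    mse = np.mean((reference.astype(np.float64)
                   - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

A per-image PSNR like this, averaged over a test set of real/synthesized CT pairs, yields the mean PSNR the abstract reports.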


2013 ◽  
Vol 10 (8) ◽  
pp. 1922-1929 ◽  
Author(s):  
Mai S. Mabrouk ◽  
Mariam Sheha ◽  
Amr A. Sharawy

Melanoma is considered one of the most malignant, metastatic, and dangerous forms of skin cancer and may cause death. The curability and survival of this type of skin cancer depend directly on the diagnosis and removal of melanoma in its early stages. The accuracy of clinical diagnosis of melanoma with the unaided eye is only about 60%, depending entirely on the knowledge and experience each doctor has accumulated. The need for a computer-aided diagnosis (CAD) system as a non-invasive supporting tool for physicians has therefore increased: it offers a second opinion that raises detection accuracy and contributes information about the optical characteristics essential for identifying lesions. The ultimate aim of this research is to design an automated, low-cost computer-aided diagnosis system for melanoma skin cancer that increases system flexibility and availability, and to investigate to what extent melanoma diagnosis is affected by using clinical photographic images instead of dermoscopic ones, given that both are applied to the same automatic diagnosis system. Texture features were extracted from 140 pigmented skin lesions (PSL) based on the grey level co-occurrence matrix (GLCM), effective features were selected by Fisher score ranking, and the lesions were then classified using an artificial neural network (ANN); the whole system runs through an interactive graphical user interface (GUI) for simplicity. Results revealed the high performance of the proposed CAD system in discriminating melanoma from melanocytic skin tumors using texture analysis applied to clinical photographic images, with prediction accuracy of 100% for the training phase and 91% for the testing phase.
Results also indicated that this type of image provides prediction accuracy for melanoma diagnosis comparable to dermoscopic images, considering that clinical photographic images are acquired with less expensive consumer cameras, which exhibit a certain degree of accuracy toward the edges of the field of view.
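The abstract names GLCM texture features but gives no implementation. Below is a minimal sketch of a grey-level co-occurrence matrix for a single horizontal offset, plus the classic contrast statistic; the 8-level requantization and single offset are simplifying assumptions (texture pipelines typically use several offsets and more Haralick statistics).

```python
import numpy as np

def glcm(gray, levels=8):
    """Grey-level co-occurrence matrix for the horizontal offset (1, 0)."""
    q = (gray.astype(np.int64) * levels) // 256  # requantize to `levels` grey levels
    pairs = levels * q[:, :-1] + q[:, 1:]        # joint index for each neighbour pair
    counts = np.bincount(pairs.ravel(), minlength=levels ** 2)
    P = counts.reshape(levels, levels).astype(np.float64)
    return P / P.sum()                           # normalized co-occurrence probabilities

def haralick_contrast(P):
    """GLCM contrast: sum over (i, j) of P(i, j) * (i - j)**2."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))
```

Statistics like contrast, computed per lesion image, would form the feature vector that Fisher score ranking then filters before the ANN classifier.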


2020 ◽  
Author(s):  
Belén Vega-Márquez ◽  
Cristina Rubio-Escudero ◽  
Isabel Nepomuceno-Chamorro

Abstract
The generation of synthetic data is becoming a fundamental task in the daily life of any organization due to the new data protection laws that are emerging. With the rise in the use of artificial intelligence, one of the most recent proposals to address this problem is the use of Generative Adversarial Networks (GANs). These networks have demonstrated a great capacity to create synthetic data with very good performance. The goal of synthetic data generation is to create data that will perform similarly to the original dataset in many analysis tasks, such as classification. The problem with GANs in a classification setting is that they do not take the class label into account when generating new data; it is treated like any other attribute. This work has therefore focused on creating new synthetic data from datasets with different characteristics using a Conditional Generative Adversarial Network (CGAN). CGANs are an extension of GANs in which the class label is taken into account when new data are generated. The performance of our results has been measured in two ways: first, by comparing the results obtained with classification algorithms on both the original datasets and the generated data; second, by checking that the correlation between the original data and the generated data is minimal.
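The defining difference between a GAN and a CGAN described above, conditioning generation on the class label, commonly amounts to concatenating a one-hot label onto the generator's noise input. An illustrative sketch of that conditioning step (dimensions and names are assumptions, not the paper's architecture):

```python
import numpy as np

def conditioned_input(noise, labels, n_classes):
    """Concatenate a one-hot class label onto each noise vector: the
    conditioning step that distinguishes a CGAN from a plain GAN."""
    one_hot = np.eye(n_classes)[labels]
    return np.concatenate([noise, one_hot], axis=1)

# Example: request 4 synthetic samples with specific class labels.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))                # 4 noise vectors of dimension 16
y = np.array([0, 2, 1, 2])                  # class label requested per sample
g_in = conditioned_input(z, y, n_classes=3) # generator input, shape (4, 19)
```

Because the label is part of the input, the trained generator can be asked for samples of a chosen class, which a plain GAN cannot do.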


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e22018-e22018
Author(s):  
Abir Belaala ◽  
Yazid Bourezane ◽  
Labib Sadek Terrissa ◽  
Zeina Al Masry ◽  
Noureddine Zerhouni

e22018 Background: The prevalence of skin cancer is increasing worldwide. According to the World Health Organization (WHO), one in every three cancers diagnosed in the US is a skin cancer. Traditional approaches to skin cancer diagnosis have shown many limitations: inadequate accuracy and high demands on time and effort. To assist dermatologists in earlier and more accurate diagnosis, we propose a computer-aided diagnosis system for automatic classification of skin lesions. Deep learning architectures are used in this area; ours is based on a new convolutional neural network that can classify skin lesions with improved accuracy. Methods: A public dataset of skin lesions, HAM10000 ("Human Against Machine with 10000 training images"), is used for training and testing. For validation, a private dataset was collected from a dermatology office in Besançon (France). This dataset contains 45 dermatoscopic images of skin lesions (basal cell carcinoma, squamous cell carcinoma, and actinic keratosis) with their histology results. In this research, a three-phase approach was proposed and implemented. Phase one is preprocessing the data, imputing missing values with the mean filling method; the dermoscopy images in the dataset were downscaled to 224×224 pixels. Then, data augmentation was applied to address the imbalanced-data problem. Finally, ten-fold cross-validation was applied to compare the performance of three CNN architectures from the literature (DenseNet 201, ResNet 152, and VGGNet) with our proposed architecture. Results: Our model achieves the highest classification accuracy, 0.95, with a sensitivity of 0.96 and a specificity of 0.94, and outperforms the other algorithms in classifying these skin lesions. Conclusions: Our research improves the performance of computer-aided diagnosis systems for skin lesions by giving an accurate classification.
The use of this system helps dermatologists make accurate classifications with less time, cost, and effort. Future work will focus on generalizing the domain by developing a model that can classify various lesions using various types of data (dermoscopic images, histological images, clinical data, sensor data, etc.) using advanced transfer-learning and adapter-model techniques from the literature.
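The mean filling imputation mentioned in the preprocessing phase can be sketched in a few lines; the function name and the NaN encoding of missing values are assumptions for illustration.

```python
import numpy as np

def mean_fill(X):
    """Replace missing entries (NaN) with their column mean:
    the mean filling method for imputation."""
    X = X.astype(np.float64).copy()
    col_means = np.nanmean(X, axis=0)        # per-column mean, ignoring NaNs
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = col_means[nan_cols]
    return X
```

Each missing value is filled from the observed values in the same feature column, which keeps per-feature means unchanged.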

