Skin cancer and deep learning for dermoscopic images classification: A pilot study.

2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e22018-e22018
Author(s):  
Abir Belaala ◽  
Yazid Bourezane ◽  
Labib Sadek Terrissa ◽  
Zeina Al Masry ◽  
Noureddine Zerhouni

Background: The prevalence of skin cancer is increasing worldwide. According to the World Health Organization (WHO), one in every three cancers diagnosed in the US is a skin cancer. Traditional approaches to skin cancer diagnosis have shown many limitations: inadequate accuracy and high demands on time and effort. To assist dermatologists in making earlier and more accurate diagnoses, we propose a computer-aided diagnosis system for the automatic classification of skin lesions. Deep learning architectures are used in this area; our system is based on a new convolutional neural network that can classify skin lesions with improved accuracy. Methods: A public dataset of skin lesions, HAM10000 ("Human Against Machine with 10000 training images"), is used for training and testing. For the validation of our work, a private dataset was collected from a dermatology office in Besançon (France). This dataset contains 45 different dermoscopic images of skin lesions (basal cell carcinoma, squamous cell carcinoma, and actinic keratosis) with their histology results. In this research, a three-phase approach was proposed and implemented. Phase one is data preprocessing: missing values were imputed using the mean filling method, and the dermoscopy images in the dataset were downscaled to 224×224 pixels. Then, data augmentation was applied to address the class imbalance problem. Finally, ten-fold cross-validation was applied to compare the performance of three CNN architectures used in the literature (DenseNet-201, ResNet-152, and VGGNet) with our proposed architecture. Results: Our model achieves the highest classification accuracy (0.95), a sensitivity of 0.96, and a specificity of 0.94, and outperforms the other algorithms in classifying these skin lesions. Conclusions: Our research improves the performance of computer-aided diagnosis systems for skin lesions by providing accurate classification. This system helps dermatologists make accurate classifications with less time, cost, and effort. Our future work will focus on generalizing the domain by developing a model that can classify various lesions using various types of data (dermoscopic images, histological images, clinical data, sensor data, etc.), using advanced transfer learning and adapter-model techniques from the literature.
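A minimal sketch of the evaluation protocol described above: resize dermoscopy images to 224×224, apply simple augmentation, and compare ImageNet-pretrained CNN backbones with stratified ten-fold cross-validation. The dataset arrays, the augmentation choices, and the training hyperparameters are illustrative placeholders, not the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

IMG_SIZE = (224, 224)
NUM_CLASSES = 7  # HAM10000 has seven lesion categories

def build_model(backbone_fn):
    """Wrap an ImageNet-pretrained backbone with a small classification head."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=IMG_SIZE + (3,), pooling="avg")
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Simple augmentation to counter class imbalance (illustrative choices only).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
])

def cross_validate(images, labels, backbone_fn, folds=10):
    """Stratified k-fold cross-validation; returns the mean test accuracy."""
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(images, labels):
        model = build_model(backbone_fn)
        x_train = augment(images[train_idx], training=True)
        model.fit(x_train, labels[train_idx], epochs=5, batch_size=32, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        scores.append(acc)
    return np.mean(scores)

# Usage (assuming images is an (N, 224, 224, 3) array and labels an (N,) array):
# acc = cross_validate(images, labels, tf.keras.applications.DenseNet201)
```

The same loop can be rerun with ResNet or VGG backbones from tf.keras.applications to reproduce the kind of architecture comparison the abstract describes.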

2020 ◽  
Author(s):  
Mugahed A. Al-antari ◽  
Cam-Hao Hua ◽  
Sungyoung Lee ◽  
Jaehun Bang

Abstract Coronavirus disease 2019 (COVID-19) is a novel, harmful respiratory disease that has rapidly spread worldwide. At the end of 2019, COVID-19 emerged as a previously unknown respiratory disease in Wuhan, Hubei Province, China. The World Health Organization (WHO) declared the coronavirus outbreak a pandemic in the second week of March 2020. Simultaneous deep learning detection and classification of COVID-19 on full-resolution digital X-ray images is key to efficiently assisting patients by enabling physicians to reach a fast and accurate diagnostic decision. In this paper, a simultaneous deep learning computer-aided diagnosis (CAD) system based on the YOLO predictor is proposed that can detect and diagnose COVID-19, differentiating it from eight other respiratory diseases: atelectasis, infiltration, pneumothorax, masses, effusion, pneumonia, cardiomegaly, and nodules. The proposed CAD system was assessed via five-fold tests for the multi-class prediction problem using two different databases of chest X-ray images: COVID-19 and ChestX-ray8. The proposed CAD system was trained with an annotated training set of 50,490 chest X-ray images. Regions of the X-ray images with lesions suspected of being due to COVID-19 were simultaneously detected and classified end-to-end by the proposed CAD predictor, achieving overall detection and classification accuracies of 96.31% and 97.40%, respectively. Most test images from patients with confirmed COVID-19 and other respiratory diseases were correctly predicted, with an average intersection over union (IoU) greater than 90%. Applying the deep learning regularizers of data balancing and augmentation improved the COVID-19 diagnostic performance by 6.64% and 12.17% in terms of overall accuracy and F1-score, respectively. The proposed CAD system can produce a diagnosis from an individual chest X-ray image within 0.0093 s, i.e., at a rate of about 108 frames per second (FPS), which is close to real time. The proposed deep learning CAD system can reliably differentiate COVID-19 from other respiratory diseases and appears to be a practical tool for assisting health care systems, patients, and physicians.
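A small, self-contained helper illustrating the intersection-over-union (IoU) metric used above to score how well a predicted lesion box overlaps its annotation, plus the frame-rate arithmetic implied by the reported inference time. The example box coordinates are made up for illustration.

```python
def iou(box_a, box_b):
    """Return IoU of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted lesion box versus a hypothetical ground-truth annotation.
print(round(iou((50, 60, 200, 220), (55, 58, 210, 230)), 2))  # ~0.84

# The reported per-image inference time also yields the quoted frame rate:
print(round(1 / 0.0093))  # ~108 frames per second
```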


2019 ◽  
Vol 5 (1) ◽  
pp. 223-226
Author(s):  
Max-Heinrich Laves ◽  
Sontje Ihler ◽  
Tobias Ortmaier ◽  
Lüder A. Kahrs

Abstract In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that prediction uncertainty correlates highly with goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation: multiple forward passes are performed to obtain a distribution over the class labels, and the variance and entropy of this distribution are used as uncertainty metrics. Our results show a strong correlation (ρ = 0.99) between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
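A minimal sketch of the Monte Carlo dropout procedure described above, assuming a Keras-style classifier with dropout layers (a stand-in for the authors' modified ResNet-18, not their exact network): keep dropout active at test time, run several stochastic forward passes, and use the entropy and variance of the resulting predictive distribution as uncertainty scores.

```python
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, passes=50):
    """Run `passes` stochastic forward passes and summarize the predictions."""
    # training=True keeps dropout sampling, approximating a Bayesian classifier.
    probs = np.stack([model(x, training=True).numpy() for _ in range(passes)])
    mean_probs = probs.mean(axis=0)                       # predictive distribution
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
    variance = probs.var(axis=0).mean(axis=1)             # spread across passes
    prediction = mean_probs.argmax(axis=1)
    return prediction, entropy, variance

# Usage (x is a batch of OCT scans shaped to the model's input):
# pred, ent, var = mc_dropout_predict(model, x)
# High entropy or variance flags cases that deserve review by a physician.
```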


Diagnostics ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 694
Author(s):  
Xuejiao Pang ◽  
Zijian Zhao ◽  
Ying Weng

At present, the application of artificial intelligence (AI) based on deep learning in the medical field has become more extensive and more suitable for clinical practice than traditional machine learning. Applying traditional machine learning approaches in clinical practice is very challenging because medical data are usually complex and lack well-defined characteristic features. Deep learning methods, with their self-learning ability, can exploit modern computing power to learn intricate and abstract features. They are therefore promising for the classification and detection of lesions in gastrointestinal endoscopy using a computer-aided diagnosis (CAD) system based on deep learning. This study reviews the research and development of deep learning-based CAD systems that assist doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarizes the limitations of current methods and presents prospects for future research.


Diagnostics ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 393
Author(s):  
Mahsa Mansourian ◽  
Sadaf Khademi ◽  
Hamid Reza Marateb

The World Health Organization (WHO) suggests that mental disorders, neurological disorders, and suicide are growing causes of morbidity. Depressive disorders, schizophrenia, bipolar disorder, and Alzheimer's disease and other dementias account for 1.84%, 0.60%, 0.33%, and 1.00% of total Disability-Adjusted Life Years (DALYs), respectively. Furthermore, suicide, the 15th leading cause of death worldwide, could be linked to mental disorders. More than 68 computer-aided diagnosis (CAD) methods published in peer-reviewed journals from 2016 to 2021 were analyzed, 75% of which were published in 2018 or later. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol was adopted to select the relevant studies. In addition to the gold standard, the sample size, neuroimaging techniques or biomarkers, validation frameworks, classifiers, and performance indices were analyzed. We further discussed why various performance indices are essential from the biostatistical and data mining perspectives. Moreover, critical information related to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) guidelines was analyzed. We discussed how balancing the dataset and not using external validation could hinder the generalization of the CAD methods, and we provide a list of critical issues to consider in such studies.
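A brief example of the kind of performance indices the review compares across CAD studies, computed here from a binary confusion matrix with scikit-learn. The labels below are illustrative only, not data from any of the reviewed studies.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = disorder present, 0 = absent (toy labels)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # recall on the positive class
specificity = tn / (tn + fp)
balanced_accuracy = (sensitivity + specificity) / 2
print(sensitivity, specificity, balanced_accuracy)   # 0.75 0.75 0.75
```

Balanced accuracy is one way to report performance honestly on an imbalanced dataset, which relates to the review's point about dataset balancing and external validation.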


2021 ◽  
Vol 11 (2) ◽  
pp. 760
Author(s):  
Yun-ji Kim ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho

Gastric cancer has a high mortality rate worldwide, but it can be prevented through early detection with regular gastroscopy. Herein, we propose a deep learning-based computer-aided diagnosis (CADx) system that applies data augmentation to help doctors classify gastroscopy images as normal or abnormal. To improve the performance of deep learning, a large amount of training data is required. However, the collection of medical data, owing to its nature, is highly expensive and time consuming. Therefore, data were generated through deep convolutional generative adversarial networks (DCGAN), and 25 augmentation policies optimized for the CIFAR-10 dataset were implemented through AutoAugment to augment the data. Accordingly, the gastroscopy images were augmented, only high-quality images were selected through an image quality-measurement method, and the gastroscopy images were classified as normal or abnormal using the Xception network. We compared the performance of the original (non-augmented) training dataset, the dataset generated through the DCGAN, the dataset augmented with the CIFAR-10 augmentation policies, and the dataset combining the two methods. The combined dataset delivered the best accuracy (0.851), an improvement of 0.06 over the original training dataset. We confirmed that augmenting data through the DCGAN and the CIFAR-10 augmentation policies is the most suitable approach for classifying gastric endoscopy images as normal or abnormal. The proposed method not only alleviates the scarcity of medical data but also improves the accuracy of gastric disease diagnosis.
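A minimal sketch of the classification stage described above: an ImageNet-pretrained Xception backbone fine-tuned to label gastroscopy frames as normal or abnormal. The DCGAN and AutoAugment steps that enlarge the training set are assumed to have already produced the datasets; `train_ds` and `val_ds` are hypothetical placeholders, not the authors' pipeline.

```python
import tensorflow as tf

def build_xception_classifier(input_shape=(299, 299, 3)):
    """Binary normal/abnormal classifier on top of a frozen Xception backbone."""
    base = tf.keras.applications.Xception(include_top=False, weights="imagenet",
                                          input_shape=input_shape, pooling="avg")
    base.trainable = False                      # warm-up with the backbone frozen
    x = tf.keras.layers.Dropout(0.3)(base.output)
    output = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, output)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (train_ds / val_ds are tf.data.Dataset objects yielding image/label pairs
# drawn from the original frames plus DCGAN- and AutoAugment-generated samples):
# model = build_xception_classifier()
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```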


Author(s):  
Kamyab Keshtkar

Because a relatively high percentage of adenomatous polyps are missed, a computer-aided diagnosis (CAD) tool based on deep learning can aid the endoscopist in diagnosing colorectal polyps or colorectal cancer, decreasing the polyp miss rate and preventing colorectal cancer mortality. The convolutional neural network (CNN) is a deep learning method that, over the last decade, has achieved better results in detecting and segmenting specific objects in images than conventional models such as regression, support vector machines, or artificial neural networks. In recent years, CNN models have achieved promising results in medical imaging studies on detecting masses and lesions in various body organs, including colorectal polyps. In this review, the structure and architecture of CNN models, and how colonoscopy images are processed as input and converted to output, are explained in detail. In most primary studies in the field of colorectal polyp detection and classification, the CNN model has been regarded as a black box, since the calculations performed at the different layers during training have not been clarified precisely. Furthermore, I discuss the differences between CNN and conventional models, examine how to train a CNN model for diagnosing colorectal polyps or cancer, and evaluate model performance after the training process.
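A small, didactic CNN showing in code how a colonoscopy frame flows from raw pixels through convolution, pooling, and dense layers to a polyp / no-polyp probability, in the spirit of the input-to-output explanation the review gives. It is not a model from the reviewed studies; the layer sizes and input resolution are arbitrary.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),          # RGB colonoscopy frame
    tf.keras.layers.Conv2D(32, 3, activation="relu"),    # learn local edge/texture filters
    tf.keras.layers.MaxPooling2D(),                       # downsample feature maps
    tf.keras.layers.Conv2D(64, 3, activation="relu"),     # learn higher-level patterns
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),              # collapse to one vector per image
    tf.keras.layers.Dense(1, activation="sigmoid"),        # probability the frame shows a polyp
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # prints the layer-by-layer structure discussed in the review
```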

