Outperforming Dermatologist-Level Skin Cancer Classification via Enhanced Training of Deep Neural Networks (Preprint)

2018 ◽  
Author(s):  
Titus Josef Brinker ◽  
Achim Hekler ◽  
Christof von Kalle

BACKGROUND In recent months, multiple publications have demonstrated the use of convolutional neural networks (CNNs) to classify images of skin cancer as precisely as dermatologists. However, these CNNs failed to outperform the winner of the International Symposium on Biomedical Imaging (ISBI) 2016 challenge in terms of average precision, so the technical progress represented by these studies is limited. In addition, the available reports are difficult to reproduce, due to incomplete descriptions of training procedures and the use of proprietary image databases. These factors prevent a comparison of the various CNN classifiers on equal terms. OBJECTIVE To demonstrate the training of an image-classifier CNN that outperforms the winner of the ISBI 2016 challenge using open-source images exclusively. METHODS A detailed description of the training procedure is reported, and the images and test sets used are fully disclosed, to ensure the reproducibility of our work. RESULTS Our CNN classifier outperforms all recent attempts to classify the original ISBI 2016 challenge test data (full set of 379 test images), with an average precision of 0.709 (vs. 0.637 for the ISBI winner) and an area under the receiver operating characteristic curve of 0.85. CONCLUSIONS This work illustrates the potential for improving skin cancer classification with enhanced training procedures for CNNs, while avoiding the use of costly equipment or proprietary image data.
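
As a minimal sketch of how the two reported metrics can be computed for a binary melanoma classifier, the snippet below uses scikit-learn's average_precision_score and roc_auc_score; the label and score arrays are illustrative placeholders, not the study's data.

```python
# Hedged sketch: computing the two metrics reported above (average precision and
# ROC-AUC) for a binary melanoma classifier. The arrays are placeholders only.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                    # 1 = melanoma, 0 = benign
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7])  # CNN output probabilities

print("Average precision:", average_precision_score(y_true, y_score))
print("ROC-AUC:", roc_auc_score(y_true, y_score))
```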

2020 ◽  
Author(s):  
Abhinav Sagar ◽  
J Dheeba

Abstract In this work, we address the problem of skin cancer classification using convolutional neural networks. Many cancer cases are initially misdiagnosed as something else, leading to severe consequences, including the death of the patient. Conversely, there are cases in which patients have other conditions that doctors mistake for skin cancer, which leads to unnecessary time and money spent on further diagnosis. In this work, we address both of the above problems using deep neural networks and a transfer learning architecture. We used the publicly available ISIC databases for both training and testing our model. Our work achieves an accuracy of 0.935, precision of 0.94, recall of 0.77, F1 score of 0.85 and ROC-AUC of 0.861, which is better than previous state-of-the-art approaches.
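
A minimal sketch of the kind of transfer-learning setup described above is shown below; the abstract does not name the backbone or framework, so a pretrained ResNet-50 in PyTorch with a frozen feature extractor and a new binary classification head is assumed purely for illustration.

```python
# Hedged sketch of a transfer-learning setup for binary skin-lesion classification.
# ResNet-50 and the hyperparameters are assumptions, not the authors' exact choices.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():                  # freeze the pretrained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 1)     # new head: benign vs. malignant logit

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # train only the new head
```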


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Fuyong Xing ◽  
Yuanpu Xie ◽  
Xiaoshuang Shi ◽  
Pingjun Chen ◽  
Zizhao Zhang ◽  
...  

Abstract Background Nucleus or cell detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored for specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed. Results We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require model fine-tuning to be on a par with those trained on target data. We also observe that training with a mixture of target and non-target data does not always yield higher nucleus-detection accuracy, and proper data manipulation during model training may be required to achieve good performance. Conclusions We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which have not been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
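
The sketch below illustrates, under assumptions, what a pixel-to-pixel fully convolutional regression network for nucleus detection can look like: an encoder-decoder that maps an RGB image to a per-pixel proximity map whose peaks mark nucleus centres. The layer sizes are illustrative and not the authors' exact architecture.

```python
# Hedged sketch of a fully convolutional regression network for nucleus detection.
# The depth and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class FCNRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # downsample by 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),  # upsample back
            nn.Conv2d(32, 1, 1),                      # one regression output per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

pred = FCNRegressor()(torch.randn(1, 3, 256, 256))    # -> (1, 1, 256, 256) proximity map
```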


2020 ◽  
Vol 20 (11) ◽  
pp. 6603-6608 ◽  
Author(s):  
Sung-Tae Lee ◽  
Suhwan Lim ◽  
Jong-Ho Bae ◽  
Dongseok Kwon ◽  
Hyeong-Su Kim ◽  
...  

Deep learning achieves state-of-the-art results in various machine learning tasks, but for applications that require real-time inference, the high computational cost of deep neural networks becomes an efficiency bottleneck. To overcome this cost, spiking neural networks (SNNs) have been proposed. Herein, we propose a hardware implementation of an SNN with gated Schottky diodes as synaptic devices. In addition, we apply L1 regularization for connection pruning of deep spiking neural networks using gated Schottky diodes as synaptic devices. Applying L1 regularization eliminates the need for a re-training procedure because it prunes the weights based on the cost function. The compressed hardware-based SNN is energy efficient while achieving a classification accuracy of 97.85%, comparable to the 98.13% of the software deep neural network (DNN).
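
A minimal software-side sketch of the pruning idea: an L1 penalty added to the loss drives unimportant weights toward zero, after which near-zero connections can be cut by a magnitude threshold without re-training. The model, lambda, and threshold below are illustrative assumptions, not the hardware implementation described above.

```python
# Hedged sketch of L1-regularized connection pruning (software analogue only).
import torch
import torch.nn as nn

model = nn.Linear(784, 10)       # placeholder layer standing in for the network
l1_lambda = 1e-4                 # illustrative regularization strength

def loss_with_l1(output, target):
    """Cross-entropy plus an L1 penalty that pushes unused weights toward zero."""
    base = nn.functional.cross_entropy(output, target)
    l1 = sum(p.abs().sum() for p in model.parameters())
    return base + l1_lambda * l1

# After training with loss_with_l1, prune connections whose magnitude stayed near zero.
with torch.no_grad():
    mask = model.weight.abs() > 1e-3      # illustrative threshold
    model.weight *= mask
print("Remaining connections:", int(mask.sum()))
```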


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Shazia Akbar ◽  
Mohammad Peikari ◽  
Sherine Salama ◽  
Azadeh Yazdan Panah ◽  
Sharon Nofech-Mozes ◽  
...  

Abstract The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: a qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by eyeballing routine histopathology slides to estimate the proportion of tumour cells within the TB. With advances in the production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional image-analysis approach in which we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically by deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole-slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.
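
A minimal sketch of comparing automated and manual TC scores is given below; the abstract does not state which agreement statistic was used, so Cohen's kappa on binned cellularity values is shown purely as an illustration with made-up numbers.

```python
# Hedged sketch: agreement between automated and manual tumour-cellularity (TC) scores.
# The statistic, bins, and values are illustrative assumptions, not the study's method.
import numpy as np
from sklearn.metrics import cohen_kappa_score

manual_tc    = np.array([0.10, 0.35, 0.60, 0.80, 0.25, 0.55])   # pathologist estimates
automated_tc = np.array([0.15, 0.30, 0.65, 0.75, 0.20, 0.50])   # network predictions

bins = np.array([0.0, 0.25, 0.5, 0.75, 1.0])                    # quartile bins of cellularity
kappa = cohen_kappa_score(np.digitize(manual_tc, bins),
                          np.digitize(automated_tc, bins))
print("Agreement (kappa):", kappa)
```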


2018 ◽  
Vol 15 (9) ◽  
pp. 1451-1455 ◽  
Author(s):  
Grant J. Scott ◽  
Kyle C. Hagan ◽  
Richard A. Marcum ◽  
James Alex Hurt ◽  
Derek T. Anderson ◽  
...  

2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Yu Fujinami-Yokokawa ◽  
Nikolas Pontikos ◽  
Lizhu Yang ◽  
Kazushige Tsunoda ◽  
Kazutoshi Yoshitake ◽  
...  

Purpose. To illustrate a data-driven deep learning approach to predicting the gene responsible for an inherited retinal disorder (IRD), comparing macular dystrophy caused by ABCA4 and RP1L1 gene aberrations with retinitis pigmentosa caused by EYS gene aberration and with normal subjects. Methods. Seventy-five subjects with IRD or no ocular disease were ascertained from the database of the Japan Eye Genetics Consortium: 10 with ABCA4 retinopathy, 20 with RP1L1 retinopathy, 28 with EYS retinopathy, and 17 normal subjects. Horizontal/vertical cross-sectional scans of spectral-domain optical coherence tomography (SD-OCT) at the central fovea were cropped/adjusted to a resolution of 400 pixels/inch and a size of 750 × 500 pixels for learning. Subjects were randomly split 3 : 1 into training and test sets. The commercially available learning tool Medic Mind was applied to this four-class classification task. Classification accuracy, sensitivity, and specificity were calculated during the learning process. This process was repeated four times with random assignment to training and test sets to control for selection bias, and for each training/testing run the classification accuracy was calculated per gene category. Results. A total of 178 images from 75 subjects were included in this study. The mean training accuracy was 98.5% (range 90.6–100.0%). The mean overall test accuracy was 90.9% (82.0–97.6%). The mean test accuracy per gene category was 100% for ABCA4, 78.0% for RP1L1, 89.8% for EYS, and 93.4% for Normal. The test accuracy for RP1L1 and EYS was low relative to the training accuracy, which suggests overfitting. Conclusion. This study highlights a novel application of deep neural networks to predicting the causative gene in IRD retinopathies from SD-OCT, with high prediction accuracy. It is anticipated that deep neural networks will be integrated into general screening to support clinical/genetic diagnosis, as well as to enrich clinical education.
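
The sketch below mirrors the evaluation protocol described above (four repeated random 3 : 1 splits with per-gene accuracy); the classifier and feature matrix are placeholders, since the study used the Medic Mind tool rather than scikit-learn.

```python
# Hedged sketch of the repeated 3:1 split protocol with per-class (per-gene) accuracy.
# The features and classifier are synthetic stand-ins for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix

labels = ["ABCA4", "RP1L1", "EYS", "Normal"]
X = np.random.rand(178, 16)                              # stand-in image features
y = np.random.choice(labels, size=178)                   # stand-in gene categories

for repeat in range(4):                                  # four random assignments
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=repeat, stratify=y)
    clf = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
    cm = confusion_matrix(y_te, clf.predict(X_te), labels=labels)
    per_class_acc = cm.diagonal() / cm.sum(axis=1)       # accuracy per gene category
    print(f"Repeat {repeat}: per-class accuracy {per_class_acc}")
```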


Nature ◽  
2017 ◽  
Vol 546 (7660) ◽  
pp. 686-686 ◽  
Author(s):  
Andre Esteva ◽  
Brett Kuprel ◽  
Roberto A. Novoa ◽  
Justin Ko ◽  
Susan M. Swetter ◽  
...  

2020 ◽  
Author(s):  
Hojin Jang ◽  
Devin McCormack ◽  
Frank Tong

ABSTRACT Deep neural networks (DNNs) can accurately recognize objects in clear viewing conditions, leading to claims that they have attained or surpassed human-level performance. However, standard DNNs are severely impaired at recognizing objects in visual noise, whereas human vision remains robust. We developed a noise-training procedure, generating noisy images of objects with low signal-to-noise ratio, to investigate whether DNNs can acquire robustness that better matches human vision. After noise training, DNNs outperformed human observers while exhibiting more similar patterns of performance, and provided a better model for predicting human recognition thresholds on an image-by-image basis. Noise training also improved DNN recognition of vehicles in noisy weather. Layer-specific analyses revealed that the contaminating effects of noise were dampened, rather than amplified, across successive stages of the noise-trained network, with greater benefit at higher levels of the network. Our findings indicate that DNNs can learn noise-robust representations that better approximate human visual processing.
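
A minimal sketch of the noise-training data generation described above: additive Gaussian pixel noise scaled to a target signal-to-noise ratio. The exact noise model and SNR levels used in the study are not specified here; the values are assumptions for illustration.

```python
# Hedged sketch: generating a low-SNR training image by adding Gaussian pixel noise.
import numpy as np

def add_noise_at_snr(image, snr_db):
    """Return a noisy copy of `image` (values in [0, 1]) at the requested SNR in dB."""
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

clean = np.random.rand(224, 224, 3)               # stand-in for a training image
noisy = add_noise_at_snr(clean, snr_db=-3.0)      # low SNR: noise dominates the signal
```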


2021 ◽  
Author(s):  
Jason Munger ◽  
Carlos W. Morato

This project explores how raw image data obtained from AV cameras can provide a model with more spatial information than can be learned from simple RGB images alone. This paper leverages advances in deep neural networks to demonstrate steering-angle prediction for autonomous vehicles through an end-to-end multi-channel CNN model using only the image data provided by an onboard camera. The image data are processed through existing neural networks to produce pixel segmentation and depth estimates, which are fed to a new neural network along with the raw input image to provide enhanced feature signals from the environment. Various input combinations of multi-channel CNNs are evaluated, and their effectiveness is compared to that of single-CNN networks using the individual data inputs. The model with the most accurate steering predictions is identified and its performance compared to previous neural networks.
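
A minimal sketch, under assumptions, of the multi-channel input idea described above: RGB, a segmentation map, and a depth estimate are stacked along the channel axis and passed to a small CNN that regresses a single steering angle; the layer sizes are illustrative only.

```python
# Hedged sketch: stacking RGB, segmentation, and depth channels for steering regression.
import torch
import torch.nn as nn

rgb   = torch.randn(1, 3, 120, 160)   # camera image
seg   = torch.randn(1, 1, 120, 160)   # pixel-segmentation channel from an existing network
depth = torch.randn(1, 1, 120, 160)   # depth-estimate channel from an existing network
x = torch.cat([rgb, seg, depth], dim=1)           # 5-channel multi-modal input

model = nn.Sequential(
    nn.Conv2d(5, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(64), nn.ReLU(),
    nn.Linear(64, 1),                              # predicted steering angle
)
steering = model(x)
```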

