Multimodal Multi-tasking for Skin Lesion Classification Using Deep Neural Networks

2021, pp. 27-38
Author(s): Rafaela Carvalho, João Pedrosa, Tudor Nedelcu

Abstract: Skin cancer is one of the most common types of cancer and, with its increasing incidence, accurate early diagnosis is crucial to improve the prognosis of patients. In the process of visual inspection, dermatologists follow specific dermoscopic algorithms and identify important features to provide a diagnosis. This process can be automated, as such characteristics can be extracted by computer vision techniques. Although deep neural networks can extract useful features from digital images for skin lesion classification, performance can be improved by providing additional information. The extracted pseudo-features can be used as input (multimodal) or as output (multi-tasking) to train a robust deep learning model. This work investigates multimodal and multi-tasking techniques, which allow more efficient training since several related tasks are optimized jointly in the latter, and generate better diagnosis predictions. Additionally, the role of lesion segmentation is studied. Results show that multi-tasking improves the learning of beneficial features, leading to better predictions, and that pseudo-features inspired by the ABCD rule provide readily available, helpful information about the skin lesion.
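As a rough illustration of the multi-tasking idea described above, the sketch below (in PyTorch, with purely hypothetical layer sizes, class counts, and loss weighting, not the authors' actual architecture) shares a CNN backbone between a diagnosis head and an auxiliary head that regresses ABCD-inspired pseudo-features.

```python
# Hypothetical multi-task skin-lesion classifier: one shared backbone, two heads.
# Layer sizes, class counts, and the 0.5 auxiliary loss weight are assumptions.
import torch
import torch.nn as nn

class MultiTaskLesionNet(nn.Module):
    def __init__(self, num_classes: int = 8, num_pseudo_features: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a pretrained encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.diagnosis_head = nn.Linear(64, num_classes)       # main task
        self.pseudo_head = nn.Linear(64, num_pseudo_features)  # auxiliary task

    def forward(self, x):
        feats = self.backbone(x)
        return self.diagnosis_head(feats), self.pseudo_head(feats)

model = MultiTaskLesionNet()
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 8, (4,))
pseudo_targets = torch.rand(4, 4)               # precomputed ABCD-style scores

logits, pseudo_pred = model(images)
loss = nn.functional.cross_entropy(logits, labels) \
     + 0.5 * nn.functional.mse_loss(pseudo_pred, pseudo_targets)  # assumed weight
loss.backward()
```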

2021, Vol 7 (1)
Author(s): Rama K. Vasudevan, Maxim Ziatdinov, Lukas Vlcek, Sergei V. Kalinin

Abstract: Deep neural networks (‘deep learning’) have emerged as a technology of choice to tackle problems in speech recognition, computer vision, finance, etc. However, the adoption of deep learning in physical domains brings substantial challenges stemming from the correlative nature of deep learning methods, as opposed to the causal, hypothesis-driven nature of modern science. We argue that the broad adoption of Bayesian methods incorporating prior knowledge, the development of solutions with built-in physical constraints, parsimonious structural descriptors, and generative models, and ultimately the adoption of causal models offer a path forward for fundamental and applied research.


2020, Vol 10 (2), pp. 57-65
Author(s): Kaan Karakose, Metin Bilgin

In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. Humans and animals learn much better when information is presented gradually, in a meaningful order that introduces increasingly complex concepts and samples, rather than randomly. The use of such training strategies in the context of artificial neural networks is called curriculum learning. In this study, a curriculum learning strategy was developed. Using the CIFAR-10 and CIFAR-100 training sets, the last few layers of an Xception model pre-trained on ImageNet were trained to capture the training set knowledge in the model’s weights, and per-sample difficulty levels were derived from this model. Finally, a much smaller model was trained with the presented sample sorting methods using these difficulty levels. The findings show that, at each epoch, the accuracy achieved when training with the proposed method exceeded the accuracy achieved with randomly shuffled data by more than 1%.
Keywords: Curriculum learning, model distillation, deep learning, academia, neural networks.
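A minimal sketch of one way such a curriculum could be implemented, assuming per-sample difficulty is taken to be the loss of a frozen scoring model and that samples are ordered easy to hard; the stand-in linear models, data shapes, and ordering rule are illustrative assumptions rather than the study's exact procedure.

```python
# Curriculum-learning sketch: score sample difficulty with a frozen model,
# then train a smaller model on samples ordered from easy to hard.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, Subset

def difficulty_scores(scorer: nn.Module, dataset, batch_size: int = 128):
    """Per-sample cross-entropy under the frozen scoring model."""
    scorer.eval()
    scores = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=batch_size):
            scores.append(nn.functional.cross_entropy(scorer(x), y, reduction="none"))
    return torch.cat(scores)

# Toy data and models standing in for CIFAR and the pre-trained Xception.
data = TensorDataset(torch.randn(512, 3072), torch.randint(0, 10, (512,)))
scorer = nn.Linear(3072, 10)          # stand-in for the pre-trained scoring model
student = nn.Linear(3072, 10)         # the much smaller model to be trained

order = torch.argsort(difficulty_scores(scorer, data))      # easy -> hard
curriculum = Subset(data, order.tolist())

optim = torch.optim.SGD(student.parameters(), lr=0.1)
for x, y in DataLoader(curriculum, batch_size=64, shuffle=False):
    optim.zero_grad()
    nn.functional.cross_entropy(student(x), y).backward()
    optim.step()
```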


2020, Vol 10 (7), pp. 2488
Author(s): Muhammad Naseer Bajwa, Kaoru Muta, Muhammad Imran Malik, Shoaib Ahmed Siddiqui, Stephan Alexander Braun, ...

The propensity of skin diseases to manifest in a variety of forms, the lack and maldistribution of qualified dermatologists, and the exigency of timely and accurate diagnosis call for automated Computer-Aided Diagnosis (CAD). This study extends previous work on CAD for dermatology by exploring the potential of Deep Learning to classify hundreds of skin diseases, improving classification performance, and utilizing disease taxonomy. We trained state-of-the-art Deep Neural Networks on two of the largest publicly available skin image datasets, DermNet and ISIC Archive, and leveraged disease taxonomy, where available, to improve the classification performance of these models. On DermNet we establish a new state of the art with 80% accuracy and 98% Area Under the Curve (AUC) for classification of 23 diseases. We also set a precedent by classifying all 622 unique sub-classes in this dataset, achieving 67% accuracy and 98% AUC. On ISIC Archive we classified all 7 diseases with 93% average accuracy and 99% AUC. This study shows that Deep Learning has great potential to classify a vast array of skin diseases with near-human accuracy and far better reproducibility. It can play a promising role in practical real-time skin disease diagnosis by assisting physicians in large-scale screening using clinical or dermoscopic images.
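One plausible way to "leverage disease taxonomy" is to add a loss on the coarse parent class derived from each fine-grained label; the sketch below illustrates this under assumed class counts, a placeholder taxonomy mapping, and an arbitrary loss weight, and is not the authors' exact formulation.

```python
# Taxonomy-aware classification sketch: fine-grained loss plus a coarse-class
# loss obtained by mapping each fine label to its parent. All numbers and the
# fine_to_coarse map are placeholders.
import torch
import torch.nn as nn

num_fine, num_coarse = 622, 23
fine_to_coarse = torch.randint(0, num_coarse, (num_fine,))   # placeholder taxonomy map

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
fine_head = nn.Linear(256, num_fine)
coarse_head = nn.Linear(256, num_coarse)

x = torch.randn(8, 3, 64, 64)
fine_labels = torch.randint(0, num_fine, (8,))
coarse_labels = fine_to_coarse[fine_labels]                  # derived parent labels

feats = backbone(x)
loss = nn.functional.cross_entropy(fine_head(feats), fine_labels) \
     + 0.3 * nn.functional.cross_entropy(coarse_head(feats), coarse_labels)
loss.backward()
```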


Electronics, 2021, Vol 10 (11), pp. 1238
Author(s): Yunhee Woo, Dongyoung Kim, Jaemin Jeong, Young-Woong Ko, Jeong-Gun Lee

Recent deep learning models achieve high accuracy and fast inference, but they require high-performance computing resources because they have a large number of parameters. However, not all systems have high-performance hardware; sometimes a deep learning model needs to run on edge devices such as IoT devices or smartphones. On such devices, computing resources are limited and the amount of computation must be reduced before deep learning models can be deployed. Pruning is one of the well-known approaches for deriving light-weight models by eliminating weights, channels, or filters. In this work, we propose “zero-keep filter pruning” for energy-efficient deep neural networks. The proposed method maximizes the number of zero elements in filters by replacing small values with zero and pruning the filters with the fewest zeros, whereas the conventional approach generally prunes the filters with the most zeros. As a result, zero-keep filter pruning retains filters with many zeros in the model. We compared the proposed method with random filter pruning and showed that our method performs better, with far fewer non-zero elements and only a marginal drop in accuracy. Finally, we discuss a possible multiplier architecture, a zero-skip multiplier circuit, which skips multiplications by zero to accelerate inference and reduce energy consumption.
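The following sketch illustrates the zero-keep selection rule as described in the abstract: small weights are zeroed, zeros are counted per filter, and the filters with the fewest zeros are removed. The threshold, pruning ratio, and the way the smaller layer is rebuilt are assumptions for illustration.

```python
# Zero-keep filter pruning sketch: zero out small weights, then keep the
# filters with the MOST zeros (the opposite of conventional pruning).
import torch
import torch.nn as nn

def zero_keep_prune(conv: nn.Conv2d, threshold: float = 1e-2, keep_ratio: float = 0.75):
    w = conv.weight.data                                  # (out_ch, in_ch, kH, kW)
    w[w.abs() < threshold] = 0.0                          # replace small values with zero
    zeros_per_filter = (w == 0).flatten(1).sum(dim=1)     # zero count per output filter
    n_keep = int(keep_ratio * w.size(0))
    keep_idx = torch.argsort(zeros_per_filter, descending=True)[:n_keep]
    keep_idx, _ = torch.sort(keep_idx)                    # preserve original filter order

    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = w[keep_idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep_idx].clone()
    return pruned

layer = nn.Conv2d(16, 32, 3, padding=1)
smaller = zero_keep_prune(layer)        # 24 zero-rich filters remain
print(smaller.weight.shape)             # torch.Size([24, 16, 3, 3])
```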


2021, Vol 11 (2), pp. 643
Author(s): Sukho Lee, Hyein Kim, Byeongseon Jeong, Jungho Yoon

Over the past decade, deep learning-based computer vision methods have surpassed previous state-of-the-art techniques and have made great progress on various problems, including object detection, object segmentation, and face recognition. Nowadays, major IT companies are adding new deep-learning-based computer technologies to edge devices such as smartphones. However, since the computational cost of deep learning-based models is still too high for edge devices, research is actively being carried out to compress these models without sacrificing performance. Recently, many lightweight architectures based on low-rank approximation have been proposed. In this paper, we propose an alternating tensor compose-decompose (ATCD) method for training low-rank convolutional neural networks. The proposed method trains a compressed low-rank deep learning model better than the conventional fixed-structure training method, so that a higher-performing compressed model is obtained at the end of training. As a representative model to which the proposed training method can be applied, we propose a rank-1 convolutional neural network (CNN) whose structure alternates between 3-D rank-1 filters and 1-D filters in the training stage and reduces to a purely 1-D structure in the testing stage. After training, the 3-D rank-1 filters can be permanently decomposed into 1-D filters to achieve fast inference at test time. The 1-D filters are not trained directly in 1-D form because training the 3-D rank-1 filters is easier owing to better gradient flow, which makes training possible even when a fixed-structure network with fixed consecutive 1-D filters cannot be trained at all. We also show that the same training method can be applied to the well-known MobileNet architecture, yielding better parameters than the conventional fixed-structure training method. Furthermore, we show that the 1-D filters in a ResNet-like structure can also be trained with the proposed method, demonstrating that the proposed method can be applied to various network structures.
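A minimal sketch of the rank-1 filter idea: during training the 3-D kernel is composed on the fly as an outer product of a channel vector and two spatial 1-D vectors, so gradients flow through a full 3-D filter that can later be factorized back into 1-D operations. The module name, shapes, and composition details are illustrative assumptions, not the paper's exact ATCD procedure.

```python
# Rank-1 convolution sketch: each output filter is the outer product of a
# channel vector and two spatial 1-D vectors, composed on the fly for training.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Rank1Conv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # One (channel, vertical, horizontal) vector triple per output filter.
        self.c = nn.Parameter(torch.randn(out_ch, in_ch))
        self.v = nn.Parameter(torch.randn(out_ch, k))
        self.h = nn.Parameter(torch.randn(out_ch, k))
        self.k = k

    def composed_kernel(self):
        # Outer product -> (out_ch, in_ch, k, k) rank-1 3-D filters.
        return torch.einsum("oc,ov,oh->ocvh", self.c, self.v, self.h)

    def forward(self, x):
        return F.conv2d(x, self.composed_kernel(), padding=self.k // 2)

layer = Rank1Conv(8, 16)
x = torch.randn(2, 8, 32, 32)
print(layer(x).shape)               # torch.Size([2, 16, 32, 32])
```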


Electronics, 2021, Vol 10 (11), pp. 1350
Author(s): Andreas Krug, Maral Ebrahimzadeh, Jost Alemann, Jens Johannsmeier, Sebastian Stober

Deep Learning-based Automatic Speech Recognition (ASR) models are very successful, but hard to interpret. To gain a better understanding of how Artificial Neural Networks (ANNs) accomplish their tasks, several introspection methods have been proposed. However, established introspection techniques are mostly designed for computer vision tasks and rely on the data being visually interpretable, which limits their usefulness for understanding speech recognition models. To overcome this limitation, we developed a novel neuroscience-inspired technique for visualizing and understanding ANNs, called Saliency-Adjusted Neuron Activation Profiles (SNAPs). SNAPs are a flexible framework to analyze and visualize Deep Neural Networks that does not depend on visually interpretable data. In this work, we demonstrate how to utilize SNAPs for understanding fully-convolutional ASR models. This includes visualizing acoustic concepts learned by the model and the comparative analysis of their representations in the model layers.
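The exact SNAP computation is not spelled out in the abstract, so the sketch below is only a generic gradient-times-activation profile in the same spirit: layer activations of a 1-D convolutional (ASR-like) model are weighted by their gradients and averaged per assumed group label. All names, shapes, and the grouping are hypothetical.

```python
# Generic saliency-weighted activation profile (not the exact SNAP definition):
# activation * gradient for one layer, averaged per group of inputs.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(40, 64, 5, padding=2), nn.ReLU(),    # stand-in acoustic model
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 30),
)
features = torch.randn(16, 40, 200, requires_grad=True)   # log-mel-style inputs
groups = torch.arange(16) % 4                              # assumed group labels

acts = {}
def hook(_, __, output):
    output.retain_grad()                                   # keep grad of this layer
    acts["layer"] = output
model[1].register_forward_hook(hook)                       # after the first ReLU

logits = model(features)
logits.max(dim=1).values.sum().backward()                  # backprop top-class scores

profile = (acts["layer"] * acts["layer"].grad).mean(dim=2) # (batch, 64) neuron scores
per_group = torch.stack([profile[groups == g].mean(0) for g in range(4)])
print(per_group.shape)                                     # torch.Size([4, 64])
```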


2021, Vol 11 (6), pp. 7757-7762
Author(s): K. Aldriwish

Internet of Things (IoT)-based systems need to stay up to date on cybersecurity threats. The security of IoT networks is challenged by software piracy and malware attacks, through which much important information can be stolen and used for cybercrimes. This paper attempts to improve IoT cybersecurity by proposing a combined deep learning-based model to detect malware and software piracy across the IoT network. The malware detection model is based on Deep Convolutional Neural Networks (DCNNs). In addition, TensorFlow Deep Neural Networks (TFDNNs) are introduced to detect software piracy threats based on source code plagiarism. The investigation is conducted on the Google Code Jam (GCJ) dataset. The experiments show that classification achieves a high accuracy of about 98%.
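An illustrative sketch of the two-branch setup, with a small convolutional network standing in for the DCNN malware classifier and a dense network on token-count vectors standing in for the TFDNN plagiarism detector; layer sizes, input encodings, and class counts are assumptions, not the paper's models.

```python
# Two illustrative branches: a CNN over byte-image representations of binaries
# and a dense network over bag-of-token vectors of source code.
import torch
import torch.nn as nn

malware_cnn = nn.Sequential(                    # DCNN branch (malware images)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),             # malware vs. benign
)

piracy_dnn = nn.Sequential(                     # dense branch (source-code features)
    nn.Linear(5000, 256), nn.ReLU(),
    nn.Linear(256, 2),                          # plagiarized vs. original
)

byte_images = torch.randn(8, 1, 64, 64)         # e.g. binaries rendered as images
code_vectors = torch.rand(8, 5000)              # e.g. TF-IDF over code tokens
print(malware_cnn(byte_images).shape, piracy_dnn(code_vectors).shape)
```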

