Optimization of the Convolutional Neural Networks for Automatic Detection of Skin Cancer

Open Medicine ◽  
2020 ◽  
Vol 15 (1) ◽  
pp. 27-37 ◽  
Author(s):  
Long Zhang ◽  
Hong Jie Gao ◽  
Jianhua Zhang ◽  
Benjamin Badami

Abstract: Convolutional neural networks (CNNs) are a branch of deep learning that has become one of the most popular methods in many applications, especially medical imaging. One significant application in this category is helping specialists detect skin cancer early in dermoscopy images, which can reduce the mortality rate. However, many factors affect a system's diagnostic accuracy. In recent years, the use of computer-aided technology for this purpose has become an attractive area for scientists. In this research, a meta-heuristic-optimized CNN classifier is applied to pre-trained network models on visual datasets with the purpose of classifying skin cancer images. Although there are various methods for optimizing the learning step of neural networks, there are few studies on deep-learning-based neural networks and their applications. In the present work, a new approach based on the whale optimization algorithm is utilized to optimize the weights and biases in the CNN models. The new method is then compared with 10 popular classifiers on two skin cancer datasets, the DermIS Digital Database and the Dermquest Database. Experimental results show that the optimized method achieves better accuracy than the other classification methods.
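The whale-optimization step described in the abstract can be sketched in a few lines: the moves below are the standard WOA updates (encircling the best solution, random search, and the logarithmic spiral), applied here to a generic parameter vector as a stand-in for a CNN's flattened weights and biases. Function names, bounds, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def whale_optimize(loss, dim, n_whales=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal Whale Optimization Algorithm (WOA) sketch.

    Each whale is a candidate parameter vector (hypothetically, a CNN's
    flattened weights/biases); the pod evolves toward lower loss.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = min(X, key=loss).copy()
    best_f = loss(best)
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters              # shrinks 2 -> 0 over iterations
        for i in range(n_whales):
            A = 2 * a * rng.random() - a       # exploration/exploitation factor
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                 # encircle the current best
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                          # search toward a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                              # spiral "bubble-net" move
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            f = loss(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
    return best, best_f
```

In the paper's setting, `loss` would be the CNN's training error as a function of its weights and biases; here any smooth test function (e.g. a sum of squares) exercises the same update rules.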

2019 ◽  
Vol 11 (23) ◽  
pp. 2858 ◽  
Author(s):  
Tianyu Ci ◽  
Zhen Liu ◽  
Ying Wang

We propose a new convolutional neural network method combined with ordinal regression for assessing the degree of building damage caused by earthquakes from aerial imagery. The ordinal regression model and a deep learning algorithm are combined to make full use of the available information and improve the accuracy of the assessment, and a new loss function is introduced in this paper to join convolutional neural networks with ordinal regression. Assessing the level of damage to buildings can be treated as predicting ordered labels for the buildings to be assessed. In existing research, the problem has usually been simplified to pure classification, which ignores the ordinal relationship between different levels of damage and thus wastes information. Data accumulated throughout history are used to build network models for assessing the level of damage, and the deep-learning-based models are described in detail, including model construction, implementation methods, and the selection of hyperparameters, with verification conducted by experiments. When categorizing building damage into four types, we apply the proposed method to aerial images acquired from the 2014 Ludian earthquake and achieve an overall accuracy of 77.39%; when categorizing damage into two types, the overall accuracy of the model is 93.95%, exceeding the values reported for similar methods.
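One common way to combine a CNN with ordinal regression, consistent with the description above but not necessarily the authors' exact loss, is to emit K−1 cumulative binary logits ("damage > level k?") and sum their binary cross-entropies. The NumPy sketch below shows that encoding, loss, and decoding; the CNN that would produce the logits is omitted, and all names are illustrative.

```python
import numpy as np

def ordinal_targets(labels, n_levels):
    # Label y becomes the binary vector [y>0, y>1, ..., y>K-2],
    # so adjacent damage levels share most of their targets.
    return (labels[:, None] > np.arange(n_levels - 1)[None, :]).astype(float)

def ordinal_bce_loss(logits, labels, n_levels):
    """Sum of binary cross-entropies over the K-1 cumulative tasks."""
    t = ordinal_targets(labels, n_levels)
    p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid per cumulative task
    eps = 1e-12                                # numerical safety
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))

def decode(logits):
    """Predicted level = number of cumulative tasks answered 'yes'."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return (p > 0.5).sum(axis=1)
```

Unlike plain softmax classification, a mistake between adjacent damage levels is penalized less than a mistake between distant levels, which is exactly the ordinal information the abstract says pure classification wastes.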


Author(s):  
Ravi Manne ◽  
Snigdha Kantheti ◽  
Sneha Kantheti

Background: Skin cancer classification using convolutional neural networks (CNNs) has produced better results in classifying skin lesions than dermatologists, which is lifesaving in terms of diagnosis. This will help people diagnose their cancer on their own simply by installing an app on a mobile device. It is estimated that 6.3 billion people will use such subscriptions by the end of 2021 [28] for diagnosing their skin cancer. Objective: This study is a review of many research articles on classifying skin lesions using CNNs. With recent enhancements in machine learning algorithms, the misclassification rate of skin lesions has been reduced compared to classification by a dermatologist. In this article we discuss how CNNs have evolved in successfully classifying skin cancer types, the methods implemented, and the success rates. Even though deep learning using CNNs has advantages over a dermatologist, it also has some vulnerabilities: it can misclassify images under certain criteria and situations. We also discuss those vulnerabilities in this review. Methods: We searched the ScienceDirect, PubMed, Elsevier, and Web of Science databases and Google Scholar for published original research articles. We selected papers with sufficient data and information about their research, and we created a review of the approaches and methods they used. Among the articles we found, no review paper has so far discussed both the opportunities and the vulnerabilities of skin cancer classification using deep learning. Conclusions: Improvements in machine learning and deep learning techniques can avoid the human mistakes possible in misclassifying and diagnosing the disease. We discuss how deep learning using CNNs has helped, and its vulnerabilities.


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 990
Author(s):  
Guido Bologna ◽  
Silvio Fossati

Explanations of the decisions provided by a model are crucial in a domain such as medical diagnosis, and with the advent of deep learning it is very important to explain why a classification is reached by a model. This work tackles the transparency problem of convolutional neural networks (CNNs). We propose to generate propositional rules from CNNs, because they are intuitive to the way humans reason. Our method considers a CNN as the union of two subnetworks: a multi-layer perceptron (MLP) in the fully connected layers, and a subnetwork comprising several 2D convolutional layers and max-pooling layers. Rule extraction proceeds in two main steps, each generating rules from one subnetwork of the CNN. In practice, we approximate the two subnetworks by two particular MLP models that make it possible to generate propositional rules. We performed experiments with two image datasets: MNIST digit recognition and skin-cancer diagnosis. With high fidelity, the extracted rules designated the location of discriminant pixels, as well as the conditions that had to be met to reach the classification. We illustrate several examples of rules by their centroids and their discriminant pixels.
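As a minimal illustration of the propositional-rule idea (not the authors' extraction algorithm): a rule is a conjunction of pixel-threshold antecedents, and its fidelity is the fraction of inputs firing the rule on which the rule's class agrees with the network's prediction. The representation and names below are hypothetical.

```python
import numpy as np

def rule_fires(x, rule):
    # A rule is a conjunction of antecedents (feature_index, threshold, op),
    # e.g. "pixel 0 > 0.5 AND pixel 7 <= 0.2".
    return all(x[i] > t if op == '>' else x[i] <= t for i, t, op in rule)

def fidelity(X, model_pred, rule, rule_class):
    """Fraction of samples firing the rule whose model prediction
    matches the rule's consequent class."""
    fires = np.array([rule_fires(x, rule) for x in X])
    if not fires.any():
        return 0.0
    return float((model_pred[fires] == rule_class).mean())
```

High-fidelity rules of this shape name concrete pixels and thresholds, which is what makes the extracted explanation human-readable.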


2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular ultrasonic images. First, a Fully Convolutional Network (FCN) named U-Net is used to segment the original Intravascular Optical Coherence Tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes anchors at six scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis, then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall rate, Precision rate, IoU (Intersection-over-Union) rate, and Total score are 0.94, 0.885, 0.913, and 0.913 respectively, higher than those of the 1st-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
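The six-scale anchor scheme can be sketched as follows: each scale s contributes candidate boxes of area s². The abstract does not state the aspect-ratio set, so the three ratios below are an assumption borrowed from the standard Faster R-CNN configuration.

```python
import numpy as np

def make_anchors(scales=(12, 16, 32, 64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Anchor box sizes (w, h): for each scale s and aspect ratio r,
    the box has area s**2 and height/width ratio r."""
    anchors = []
    for s in scales:
        area = float(s * s)
        for r in ratios:
            w = np.sqrt(area / r)   # solve w*h = area with h = r*w
            h = w * r
            anchors.append((w, h))
    return np.array(anchors)
```

Adding the small scales (12², 16²) to the usual three large ones lets the region proposal network cover tiny plaques that a conventional three-scale anchor set would miss.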


2021 ◽  
Vol 11 (5) ◽  
pp. 2284
Author(s):  
Asma Maqsood ◽  
Muhammad Shahid Farid ◽  
Muhammad Hassan Khan ◽  
Marcin Grzegorzek

Malaria is a disease caused by a type of microscopic parasite transmitted to humans through the bites of infected female mosquitoes. It is a fatal disease that is endemic in many regions of the world, and quick diagnosis is very valuable for patients, as traditional detection methods require tedious work. Recently, some automated methods have been proposed that exploit hand-crafted feature extraction techniques; however, their accuracies are not reliable. Deep learning approaches have modernized the field with their superior performance. Convolutional Neural Networks (CNNs) are vastly scalable for image classification tasks and extract features through the hidden layers of the model without any handcrafting. Detecting malaria-infected red blood cells in segmented microscopic blood images with convolutional neural networks can assist quick diagnosis, which is useful for regions with fewer healthcare experts. The contributions of this paper are two-fold. First, we evaluate the performance of different existing deep learning models for efficient malaria detection. Second, we propose a customized CNN model that outperforms all observed deep learning models. It exploits bilateral filtering and image augmentation techniques to highlight features of red blood cells before training the model. Owing to the image augmentation, the customized CNN model generalizes well and avoids over-fitting. All experimental evaluations are performed on the benchmark NIH Malaria Dataset, and the results reveal that the proposed algorithm is 96.82% accurate in detecting malaria from microscopic blood smears.
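The preprocessing described above (bilateral filtering plus augmentation) can be sketched in NumPy. This is a naive, illustrative implementation, not the authors' pipeline; a real pipeline would typically use an optimized library filter, and the parameter values here are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter on a 2D grayscale image: smooths noise while
    preserving edges by weighting neighbours by both spatial distance
    and intensity difference."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # fixed spatial kernel
    padded = np.pad(img, radius, mode='edge')
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # range kernel: down-weight pixels with very different intensity
            w = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def augment(img, rng):
    """Simple label-preserving augmentation: random flips and 90-degree
    rotations, a common way to regularize a small medical-image CNN."""
    if rng.random() < 0.5:
        img = np.flipud(img)
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, rng.integers(4))
```

Because blood-cell orientation carries no diagnostic meaning, flip/rotation augmentation enlarges the training set without changing labels, which is the over-fitting protection the abstract refers to.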


2021 ◽  
Vol 12 (3) ◽  
pp. 46-47
Author(s):  
Nikita Saxena

Space-borne satellite radiometers measure Sea Surface Temperature (SST), which is pivotal to studies of air-sea interactions and ocean features. Under clear-sky conditions, high-resolution measurements are obtainable, but under cloudy conditions, data analysis is constrained to the available low-resolution measurements. We assess the efficiency of Deep Learning (DL) architectures, particularly Convolutional Neural Networks (CNNs), in downscaling oceanographic data from low spatial resolution (SR) to high SR. Focusing on SST fields of the Bay of Bengal, this study shows that a Very Deep Super Resolution CNN can successfully reconstruct SST observations from 15 km SR to 5 km SR, and from 5 km SR to 1 km SR. This outcome calls attention to the significance of DL models explicitly trained to reconstruct high-SR SST fields from low-SR data. Inference with DL models can substitute for the existing computationally expensive downscaling technique, dynamical downscaling. The complete code is available on this GitHub repository.
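Very Deep Super Resolution (VDSR) networks typically learn a residual on top of an interpolated input, i.e. sr = upsample(lr) + cnn(upsample(lr)). The network itself is omitted here, but the bilinear upsampling baseline it refines can be sketched as below; the function name and usage are illustrative, not taken from the study.

```python
import numpy as np

def upsample_bilinear(field, factor):
    """Bilinear upscaling of a 2D field (e.g. a low-resolution SST grid).
    A VDSR-style CNN would then predict the fine-scale residual to add
    to this smooth baseline."""
    H, W = field.shape
    ys = np.linspace(0, H - 1, H * factor)     # target-row positions in source coords
    xs = np.linspace(0, W - 1, W * factor)
    y0 = np.clip(ys.astype(int), 0, H - 2)
    x0 = np.clip(xs.astype(int), 0, W - 2)
    wy = ys - y0                               # fractional offsets
    wx = xs - x0
    top = field[y0][:, x0] * (1 - wx) + field[y0][:, x0 + 1] * wx
    bot = field[y0 + 1][:, x0] * (1 - wx) + field[y0 + 1][:, x0 + 1] * wx
    return top * (1 - wy)[:, None] + bot * wy[:, None]
```

For the 15 km → 5 km and 5 km → 1 km steps reported above, `factor` would be 3 and 5 respectively; interpolation alone cannot recover fine ocean features, which is exactly the gap the trained CNN residual fills.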

