Automatic Lesion Segmentation Using Atrous Convolutional Deep Neural Networks in Dermoscopic Skin Cancer Images

Author(s):  
Ranpreet Kaur ◽  
Hamid GholamHosseini ◽  
Roopak Sinha

Abstract Background: Among skin cancers, melanoma is the most dangerous and aggressive form, exhibiting a high mortality rate worldwide. Biopsy and histopathological analysis are common procedures for skin cancer detection and prevention in clinical settings. A significant step in the diagnosis process is a deep understanding of the patterns, size, color, and structure of lesions, based on images of the infected area obtained through dermatoscopes. However, manual segmentation of the lesion region is time-consuming because the lesion evolves and changes its shape over time, which makes its prediction challenging. Moreover, at the initial stage it is difficult to identify melanoma, as it closely resembles other skin cancer types that are not as malignant as melanoma; automatic segmentation techniques are therefore required to design a computer-aided system for accurate and timely detection. Methods: As deep learning approaches have gained much attention in recent years due to their remarkable performance, in this work we propose a novel, end-to-end atrous spatial pyramid pooling based convolutional neural network (CNN) framework for automatic lesion segmentation. The architecture is built on the concept of atrous (dilated) convolutions, which are effective for semantic segmentation. A dense deep neural network is designed using several building blocks consisting of convolutional, batch normalization, and leaky ReLU layers, with fine-tuning of hyperparameters contributing towards higher performance. Conclusion: The network was tested on three benchmark datasets from the International Skin Imaging Collaboration: ISIC 2016, ISIC 2017, and ISIC 2018. The experimental results showed that the proposed network achieved an average Jaccard index of 86.5% on ISIC 2016, 81.2% on ISIC 2017, and 81.2% on ISIC 2018, which is higher than the top three winners of the respective ISIC challenges.
Also, the model successfully extracts lesions from the whole image in one pass, requiring no pre-processing. We conclude that the network performs accurate lesion segmentation on skin cancer images.
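The atrous (dilated) convolution at the core of this architecture can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, single-channel setting, and "valid" boundary handling are assumptions for illustration. The key property is that a rate-r kernel samples the input on a dilated grid, enlarging the receptive field without adding parameters:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate=1):
    """Valid 2-D convolution with a dilation rate (atrous convolution).

    A k x k kernel at rate r covers an effective receptive field of
    k + (k - 1) * (r - 1) pixels per side, with no extra parameters.
    """
    k = kernel.shape[0]
    eff = k + (k - 1) * (rate - 1)          # effective kernel extent
    h, w = x.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input on a dilated grid (every `rate`-th pixel)
            patch = x[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out
```

In an atrous spatial pyramid pooling head, several such convolutions with different rates run in parallel over the same feature map and their outputs are combined, capturing lesion context at multiple scales.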

2021 ◽  
Vol 309 ◽  
pp. 01117
Author(s):  
A. Sai Hanuman ◽  
G. Prasanna Kumar

Studies on lane detection: lane identification methods, integration, and evaluation strategies are all examined. The system integration approaches for building more robust detection systems are then evaluated and analyzed, taking into account the inherent limits of camera-based lane detection systems. Present deep learning approaches to lane detection are inherently CNN-based semantic segmentation networks; the results of the segmentation of the roadways and the segmentation of the lane markers are fused using a fusion method. By exploiting a large number of frames from a continuous driving environment, we examine lane detection and propose a hybrid deep architecture that combines a convolutional neural network (CNN) with a recurrent neural network (RNN). Because of the rich information content and the low cost of camera equipment, a substantial number of existing results concentrate on vision-based lane recognition systems. Extensive tests on two large-scale datasets show that the proposed technique outperforms competing lane detection strategies, particularly in challenging settings. In particular, a CNN block extracts information from each frame, and the CNN outputs of several consecutive frames, carrying time-series qualities, are then sent to the RNN block for feature learning and lane prediction.
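The CNN-then-RNN data flow described above can be sketched in a few lines of NumPy. This is a toy stand-in, not the paper's network: the frame "CNN" is replaced by a single learned projection and the RNN is a plain tanh recurrence, so only the structure (per-frame features fed sequentially into a recurrent state) matches the description:

```python
import numpy as np

def frame_features(frame, w_feat):
    """Stand-in for the CNN block: map one flattened frame to features."""
    return np.tanh(frame @ w_feat)

def lane_predict(frames, w_feat, w_h, w_x, w_out):
    """Run the CNN stand-in on each consecutive frame, then feed the
    per-frame features through a simple RNN; the final hidden state
    drives the lane prediction."""
    h = np.zeros(w_h.shape[0])
    for frame in frames:
        x = frame_features(frame, w_feat)
        h = np.tanh(w_h @ h + w_x @ x)   # recurrent update over time
    return w_out @ h
```

The design point is that the recurrence lets evidence from earlier frames (e.g. a lane briefly occluded by a vehicle) influence the prediction for the current frame.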


2021 ◽  
Author(s):  
Noor Ahmad ◽  
Muhammad Aminu ◽  
Mohd Halim Mohd Noor

Deep learning approaches have attracted a lot of attention in the automatic detection of Covid-19, and transfer learning is the most common approach. However, the majority of pre-trained models are trained on color images, which can cause inefficiencies when fine-tuning them on Covid-19 images, which are often grayscale. To address this issue, we propose a deep learning architecture called CovidNet, which requires a relatively small number of parameters. CovidNet accepts grayscale images as inputs and is suitable for training with a limited training dataset. Experimental results show that CovidNet outperforms other state-of-the-art deep learning models for Covid-19 detection.


10.2196/18438 ◽  
2020 ◽  
Vol 3 (1) ◽  
pp. e18438
Author(s):  
Arnab Ray ◽  
Aman Gupta ◽  
Amutha Al

Background: Skin cancer is the most common cancer and is often ignored by people at an early stage. There are 5.4 million new cases of skin cancer worldwide every year. Deaths due to skin cancer could be prevented by early detection of the mole. Objective: We propose a skin lesion classification system that can detect such moles at an early stage and easily differentiate between a cancerous and a noncancerous mole. Using this system, we would be able to save time and resources for both patients and practitioners. Methods: We created a deep convolutional neural network using Inception-v3 and DenseNet-201 pretrained models. Results: We found that using fine-tuning and an ensemble learning model yielded superior results. Furthermore, fine-tuning the whole model helped the models converge faster than fine-tuning only the top layers, giving better accuracy overall. Conclusions: Based on our research, we conclude that deep learning algorithms are highly suitable for classifying skin cancer images.
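One common way to ensemble two fine-tuned classifiers is to average their class probabilities; the abstract does not specify the combination rule, so the averaging below is an assumption for illustration, with the two logit vectors standing in for the Inception-v3 and DenseNet-201 outputs:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_a, logits_b):
    """Average the class probabilities of two fine-tuned models and
    pick the arg-max class (soft-voting ensemble)."""
    probs = (softmax(logits_a) + softmax(logits_b)) / 2.0
    return probs.argmax(axis=-1), probs
```

Soft voting lets a confident model outvote an uncertain one, which is often why an ensemble of two complementary backbones beats either alone.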


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Shahab U. Ansari ◽  
Kamran Javed ◽  
Saeed Mian Qaisar ◽  
Rashad Jillani ◽  
Usman Haider

Multiple sclerosis (MS) is a chronic and autoimmune disease that forms lesions in the central nervous system. Quantitative analysis of these lesions has proved to be very useful in clinical trials for therapies and assessing disease prognosis. However, the efficacy of these quantitative analyses greatly depends on how accurately the MS lesions have been identified and segmented in brain MRI. This is usually carried out by radiologists who label 3D MR images slice by slice using commonly available segmentation tools. However, such manual practices are time consuming and error prone. To circumvent this problem, several automatic segmentation techniques have been investigated in recent years. In this paper, we propose a new framework for automatic brain lesion segmentation that employs a novel convolutional neural network (CNN) architecture. In order to segment lesions of different sizes, we have to pick a filter of a specific size, 3 × 3 or 5 × 5. Sometimes it is hard to decide which filter will work better to get the best results. GoogLeNet solved this problem by introducing the inception module. An inception module uses 3 × 3, 5 × 5, and 1 × 1 convolution filters and max pooling in parallel. Results show that incorporating inception modules in a CNN improves the performance of the network in the segmentation of MS lesions. We compared the results of the proposed CNN architecture for two loss functions, binary cross entropy (BCE) and the structural similarity index measure (SSIM), using the publicly available ISBI-2015 challenge dataset. With the BCE loss function, a score of 93.81 is achieved, which is higher than that of the human rater.
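The parallel-branch idea of the inception module can be sketched in NumPy for a single-channel map. This is a simplification of the real module (which also uses 1 × 1 bottlenecks and learned multi-channel kernels); the function names and single-channel setting are assumptions for illustration:

```python
import numpy as np

def conv_same(x, k):
    """'Same'-padded 2-D convolution of a single-channel map x with an
    odd-sized square kernel k."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[0]] * k)
    return out

def maxpool_same(x, size=3):
    """'Same'-padded max pooling with stride 1."""
    p = size // 2
    xp = np.pad(x, p, constant_values=-np.inf)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + size, j:j + size].max()
    return out

def inception_module(x, k1, k3, k5):
    """Run 1x1, 3x3, 5x5 convolutions and 3x3 max pooling in parallel,
    then stack the branch outputs along a channel axis."""
    branches = [conv_same(x, k1), conv_same(x, k3), conv_same(x, k5),
                maxpool_same(x)]
    return np.stack(branches)   # shape: (4, H, W)
```

Because each branch sees the input at a different effective scale, the concatenated output lets later layers respond to lesions of different sizes without committing to one filter size.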


2021 ◽  
Vol 45 (1) ◽  
pp. 122-129
Author(s):  
Dang N.H. Thanh ◽  
Nguyen Hoang Hai ◽  
Le Minh Hieu ◽  
Prayag Tiwari ◽  
V.B. Surya Prasath

Melanoma skin cancer is one of the most dangerous forms of skin cancer because it grows fast and causes most of the skin cancer deaths. Hence, early detection is a very important task in treating melanoma. In this article, we propose a skin lesion segmentation method for dermoscopic images based on the U-Net architecture with a VGG-16 encoder and semantic segmentation. Based on the segmented skin lesion, diagnostic imaging systems can evaluate skin lesion features to classify them. The proposed method requires fewer resources for training and is suitable for computing systems without powerful GPUs, while the training accuracy is still high (above 95%). In the experiments, we train the model on the ISIC dataset, a common dermoscopic image dataset. To assess the performance of the proposed skin lesion segmentation method, we evaluate the Sorensen-Dice and Jaccard scores and compare them to those of other deep learning-based skin lesion segmentation methods. Experimental results showed that the skin lesion segmentation quality of the proposed method is better than that of the compared methods.
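The two evaluation metrics named above have standard definitions on binary masks, which a short NumPy sketch makes concrete (the function names are illustrative, not from the article):

```python
import numpy as np

def dice_score(pred, target):
    """Sorensen-Dice coefficient: 2|A∩B| / (|A| + |B|) on binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def jaccard_score(pred, target):
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union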


Author(s):  
E. Bousias Alexakis ◽  
C. Armenakis

Abstract. Over the past few years, many research works have utilized Convolutional Neural Networks (CNN) in the development of fully automated change detection pipelines from high resolution satellite imagery. Even though CNN architectures can achieve state-of-the-art results in a wide variety of vision tasks, including change detection applications, they require extensive amounts of labelled training examples in order to be able to generalize to new data through supervised learning. In this work we experiment with the implementation of a semi-supervised training approach in an attempt to improve the image semantic segmentation performance of models trained using a small number of labelled image pairs, by leveraging information from additional unlabelled image samples. The approach is based on the Mean Teacher method, a semi-supervised approach successfully applied to image classification and to semantic segmentation of medical images. Mean Teacher uses an exponential moving average of the model weights from previous epochs to check the consistency of the model's predictions under various perturbations. Our goal is to examine whether its application in a change detection setting can result in analogous performance improvements. The preliminary results of the proposed method appear to be comparable to the results of traditional fully supervised training. Research is continuing towards fine-tuning the method and reaching solid conclusions with respect to the potential benefits of semi-supervised learning approaches in image change detection applications.
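The two mechanisms of Mean Teacher named above, the exponential moving average of weights and the consistency check between predictions, reduce to a few lines. This is a schematic sketch over plain weight dictionaries, not the authors' pipeline; the function names and the mean-squared consistency loss are common choices assumed here for illustration:

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean Teacher weight update: the teacher's weights are an
    exponential moving average of the student's weights over steps."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k]
            for k in teacher}

def consistency_loss(student_out, teacher_out):
    """Penalty on disagreement between student and teacher predictions
    for the same (perturbed) unlabelled input."""
    return float(np.mean((student_out - teacher_out) ** 2))
```

During training, labelled pairs contribute a supervised loss while every image, labelled or not, contributes the consistency term, which is how the unlabelled samples influence the student.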


Author(s):  
Y. Chen ◽  
W. Gao ◽  
E. Widyaningrum ◽  
M. Zheng ◽  
K. Zhou

Abstract. Semantic segmentation, especially of buildings, from very high resolution (VHR) airborne images is an important task in urban mapping applications. Nowadays, deep learning has significantly improved and has been applied in computer vision applications. Fully Convolutional Networks (FCN) are one of the most popular methods due to their good performance and high computational efficiency. However, the state-of-the-art results of deep nets depend on training on large-scale benchmark datasets. Unfortunately, the benchmarks of VHR images are limited and generalize poorly to other areas of interest. As existing high precision base maps are easily available and objects do not change dramatically in an urban area, the map information can be used to label images for training samples. Apart from object changes between maps and images due to time differences, the maps often cannot perfectly match the images. In this study, the main mislabeling sources are considered and addressed by utilizing stereo images: relief displacement, different representations between the base map and the image, and occlusion areas in the image. These free training samples are then fed to a pre-trained FCN. To find the better result, we applied fine-tuning with different learning rates and with different layers frozen. We further improved the results by introducing atrous convolution. By using free training samples, we achieve a promising building classification with 85.6% overall accuracy and 83.77% F1 score, while the result from the ISPRS benchmark using manual labels has 92.02% overall accuracy and 84.06% F1 score, due to the building complexities in our study area.
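Fine-tuning with per-layer learning rates subsumes layer freezing: a frozen layer is simply one whose learning rate is zero. A minimal sketch of one such update step (the function and layer names are illustrative, not from the paper):

```python
def sgd_step(params, grads, layer_lrs):
    """One SGD step with per-layer learning rates; a rate of 0 freezes
    a layer, while a small rate gently fine-tunes a pre-trained one."""
    return {name: p - layer_lrs[name] * grads[name]
            for name, p in params.items()}
```

A common recipe is to freeze or slow down early pre-trained layers (generic features) while training the task-specific head at a larger rate.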


2019 ◽  
Vol 16 (9) ◽  
pp. 4044-4052 ◽  
Author(s):  
Rohini Goel ◽  
Avinash Sharma ◽  
Rajiv Kapoor

Deep learning approaches have drawn much of researchers' focus in the area of object recognition because of their implicit strength in overcoming the shortcomings of classical approaches dependent on hand-crafted features. In the last few years, deep learning techniques have brought many developments in object recognition. This paper presents some recent and efficient deep learning frameworks for object recognition. An up-to-date study of recently developed deep neural network based object recognition methods is presented. The various benchmark datasets that are used for performance evaluation are also discussed. The applications of object recognition approaches to specific types of objects (like faces, buildings, plants, etc.) are also highlighted. We conclude with the merits and demerits of existing methods and the future scope in this area.

