Refined Residual Deep Convolutional Network for Skin Lesion Classification

Author(s):  
Khalid M. Hosny ◽  
Mohamed A. Kassem
Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7080

Author(s):  
Jing Wu ◽  
Wei Hu ◽  
Yuan Wen ◽  
Wenli Tu ◽  
Xiaoming Liu

Skin lesion classification is an effective computer-vision-aided approach for the diagnosis of skin cancer. Although deep learning models offer advantages over traditional methods and have brought tremendous breakthroughs, precise diagnosis remains challenging because of the intra-class variation and inter-class similarity caused by the diversity of imaging methods and clinicopathology. In this paper, we propose a densely connected convolutional network with attention and residual learning (ARDT-DenseNet) for skin lesion classification. Each ARDT block consists of dense blocks, transition blocks, and attention and residual modules. Compared to a residual network with the same number of convolutional layers, the proposed densely connected network halves the parameter count while preserving skin lesion classification accuracy. Our improved densely connected network adds an attention mechanism and residual learning after each dense block and transition block without introducing additional parameters. We evaluate the ARDT-DenseNet model on the ISIC 2016 and ISIC 2017 datasets. Our method achieves an ACC of 85.7% and an AUC of 83.7% on ISIC 2016, and an average AUC of 91.8% on ISIC 2017. The experimental results show that the proposed method achieves a significant improvement in skin lesion classification and is superior to the state-of-the-art methods.
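The abstract states that attention and residual learning are applied after each dense block and transition block without introducing additional convolutional layers. The exact module design is not given in the abstract; the following is a minimal numpy sketch of one plausible parameter-light realization, a squeeze-and-excitation-style channel attention followed by an identity residual connection (the function name and the two small MLP weight matrices `w1`/`w2` are assumptions for illustration, not the paper's definitions):

```python
import numpy as np

def channel_attention_residual(x, w1, w2):
    """Hypothetical attention + residual module applied to a dense-block
    output x of shape (C, H, W). Channel attention reweights feature
    channels; the residual add preserves the original signal."""
    s = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    z = np.maximum(0.0, w1 @ s)              # excitation MLP, ReLU, -> (C//r,)
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))      # sigmoid channel weights in (0, 1)
    return x + x * a[:, None, None]          # reweight channels, then residual add
```

Because the attention weights lie in (0, 1) and the input is added back, every channel's response is scaled between 1x and 2x of the original, so the block can only emphasize channels, never suppress the identity path.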


Author(s):  
Syed Rahat Hassan ◽  
Shyla Afroge ◽  
Mehera Binte Mizan ◽  
...  

2021 ◽  
Vol 2 (4) ◽  

Author(s):  
Daniel M. Lima ◽  
Jose F. Rodrigues-Jr ◽  
Bruno Brandoli ◽  
Lorraine Goeuriot ◽  
Sihem Amer-Yahia

2021 ◽  
Vol 11 (4) ◽  
pp. 1428

Author(s):  
Haopeng Wu ◽  
Zhiying Lu ◽  
Jianfeng Zhang ◽  
Xin Li ◽  
Mingyue Zhao ◽  
...  

This paper addresses Facial Expression Recognition (FER), focusing on unobvious facial movements. Traditional methods often overfit or lose information because of insufficient data and manual feature selection. Our proposed network, the Multi-features Cooperative Deep Convolutional Network (MC-DCN), instead attends both to the overall features of the face and to the trends of its key parts. Video data are processed first: an ensemble of regression trees (ERT) extracts the overall contour of the face, and an attention model then selects the parts of the face that are most susceptible to expression changes. The combined effect of these two methods yields an image that can be called a local feature map. The video data are then fed to MC-DCN, which contains parallel sub-networks: while the overall spatiotemporal characteristics of facial expressions are obtained from the image sequence, the selection of key parts better captures the changes brought about by subtle facial movements. By combining local and global features, the proposed method acquires more information, leading to better performance. Experimental results show that MC-DCN achieves recognition rates of 95%, 78.6% and 78.3% on the SAVEE, MMI, and edited GEMEP datasets, respectively.
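The abstract describes parallel sub-networks whose global (whole-face sequence) and local (key-part) features are combined before classification. The fusion mechanism is not specified in the abstract; the following numpy sketch assumes a simple projection-then-concatenation fusion, and all names (`fuse_global_local`, `wg`, `wl`) are illustrative placeholders rather than the paper's design:

```python
import numpy as np

def fuse_global_local(global_feat, local_feat, wg, wl):
    """Hypothetical fusion of the two MC-DCN branches: project each
    branch's feature vector, apply ReLU, and concatenate the results
    into one feature vector for the downstream classifier."""
    g = np.maximum(0.0, wg @ global_feat)    # global (whole-face) branch
    l = np.maximum(0.0, wl @ local_feat)     # local (key-part) branch
    return np.concatenate([g, l])            # fused representation
```

Concatenation keeps the two information sources separable for the classifier, which matches the abstract's claim that combining local and global features supplies more information than either alone.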


2020 ◽  
Vol 14 (4) ◽  
pp. 720-726 ◽  
Author(s):  
Sertan Serte ◽  
Hasan Demirel
