Deep metric attention learning for skin lesion classification in dermoscopy images

Author(s):  
Xiaoyu He ◽  
Yong Wang ◽  
Shuang Zhao ◽  
Chunli Yao

Currently, convolutional neural networks (CNNs) have made remarkable achievements in skin lesion classification because of their end-to-end feature representation abilities. However, precise skin lesion classification remains challenging because of three issues: (1) insufficient training samples, (2) inter-class similarities and intra-class variations, and (3) a lack of ability to focus on discriminative skin lesion parts. To address these issues, we propose a deep metric attention learning CNN (DeMAL-CNN) for skin lesion classification. In DeMAL-CNN, a triplet-based network (TPN) is first designed based on deep metric learning; it consists of three weight-shared embedding extraction networks. TPN takes a triplet of samples as input and uses the triplet loss to optimize the embeddings, which not only increases the number of training samples but also yields embeddings robust to inter-class similarities and intra-class variations. In addition, a mixed attention mechanism considering both spatial-wise and channel-wise attention information is designed and integrated into each embedding extraction network, which further strengthens the skin lesion localization ability of DeMAL-CNN. After extracting the embeddings, three weight-shared classification layers generate the final predictions. In the training procedure, we combine the triplet loss with the classification loss as a hybrid loss to train DeMAL-CNN. We compare DeMAL-CNN with the baseline method, attention methods, advanced challenge methods, and state-of-the-art skin lesion classification methods on the ISIC 2016 and ISIC 2017 datasets, and test its generalization ability on the PH2 dataset. The results demonstrate its effectiveness.
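
The hybrid objective described above can be sketched in a few lines, assuming a PyTorch-style pipeline; the margin and the weight `lambda_cls` below are illustrative placeholders, not values reported by the authors.

```python
import torch.nn.functional as F

def hybrid_loss(anchor_emb, pos_emb, neg_emb, logits, labels,
                margin=0.2, lambda_cls=1.0):
    """Triplet loss on the embeddings plus cross-entropy on the predictions.

    anchor_emb/pos_emb/neg_emb: (B, D) outputs of the weight-shared networks.
    logits: (B, C) outputs of the classification layer for the anchors.
    labels: (B,) ground-truth classes of the anchors.
    """
    # Pull the anchor toward the positive, push it away from the negative.
    triplet = F.triplet_margin_loss(anchor_emb, pos_emb, neg_emb, margin=margin)
    # Standard classification penalty on the same mini-batch.
    ce = F.cross_entropy(logits, labels)
    return triplet + lambda_cls * ce
```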

2020 ◽  
Vol 10 (2) ◽  
pp. 615 ◽  
Author(s):  
Tomas Iesmantas ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

(1) Background: The segmentation of cell nuclei is an essential task in a wide range of biomedical studies and clinical practices. Full automation of this process remains a challenge due to intra- and inter-nuclear variations across a wide range of tissue morphologies, as well as differences in staining protocols and imaging procedures. (2) Methods: A deep learning model with metric embeddings, such as contrastive loss and triplet loss with semi-hard negative mining, is proposed in order to accurately segment cell nuclei in a diverse set of microscopy images. The effectiveness of the proposed model was tested on a large-scale multi-tissue collection of microscopy image sets. (3) Results: The use of deep metric learning increased the average Dice similarity coefficient of the overall segmentation by 3.12% compared to training without metric learning. In particular, the largest gain was observed for segmenting cell nuclei in H&E-stained images when the deep network was trained with triplet loss and semi-hard negative mining. (4) Conclusion: We conclude that deep metric learning gives an additional boost to the overall learning process and consequently improves segmentation performance. Notably, the improvement in Dice coefficient ranges approximately between 0.13% and 22.31% for different types of images when compared to deep learning without metric learning.
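
The semi-hard negative mining mentioned in the Methods can be illustrated as follows. This is one common formulation (select a negative that lies farther from the anchor than the positive, yet still inside the margin); the paper's exact mining rule may differ.

```python
import torch

def semi_hard_negative(dist, labels, anchor_idx, pos_idx, margin=0.2):
    """Pick a semi-hard negative for one (anchor, positive) pair.

    dist: (N, N) pairwise distance matrix over the batch embeddings.
    A negative is semi-hard if d_ap < d_an < d_ap + margin.
    """
    d_ap = dist[anchor_idx, pos_idx]
    neg_mask = labels != labels[anchor_idx]  # different class only
    semi_hard = neg_mask & (dist[anchor_idx] > d_ap) & (dist[anchor_idx] < d_ap + margin)
    if semi_hard.any():
        # Hardest among the semi-hard: the closest such negative to the anchor.
        cand = torch.where(semi_hard)[0]
        return cand[dist[anchor_idx, cand].argmin()]
    return None  # caller falls back to another strategy, e.g., a random negative
```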


Author(s):  
Xinshao Wang ◽  
Yang Hua ◽  
Elyor Kodirov ◽  
Guosheng Hu ◽  
Neil M. Robertson

Deep metric learning aims to learn a deep embedding that can capture the semantic similarity of data points. Given the availability of massive training samples, deep metric learning is known to suffer from slow convergence due to the large fraction of trivial samples. Therefore, most existing methods resort to sample mining strategies that select nontrivial samples to accelerate convergence and improve performance. In this work, we identify two critical limitations of sample mining methods and provide solutions for both. First, previous mining methods assign one binary score to each sample, i.e., dropping or keeping it, so they select only a subset of relevant samples in a mini-batch. We therefore propose a novel sample mining method, called Online Soft Mining (OSM), which assigns a continuous score to each sample so as to make use of all samples in the mini-batch. OSM learns extended manifolds that preserve useful intraclass variances by focusing on more similar positives. Second, existing methods are easily influenced by outliers, as these are generally included in the mined subset. To address this, we introduce Class-Aware Attention (CAA), which assigns little attention to abnormal data samples. Furthermore, by combining OSM and CAA, we propose a novel weighted contrastive loss to learn discriminative embeddings. Extensive experiments on two fine-grained visual categorisation datasets and two video-based person re-identification benchmarks show that our method significantly outperforms the state-of-the-art.
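
The key idea of OSM, replacing a binary keep/drop decision with a continuous score per sample, can be illustrated generically as below; the specific scoring functions (and the CAA attention term) in the paper differ, so treat this only as a sketch of distance-based soft weighting.

```python
import torch

def soft_mining_weights(dist, is_positive, margin=0.5, sigma=1.0):
    """Continuous mining scores instead of binary keep/drop decisions.

    dist: (P,) anchor-to-sample distances within a mini-batch.
    is_positive: (P,) bool, True where the sample shares the anchor's class.
    """
    # Closer positives receive larger weights, emphasising similar positives.
    pos_w = torch.exp(-dist.pow(2) / sigma)
    # Negatives are weighted by how much they violate the margin.
    neg_w = torch.clamp(margin - dist, min=0.0)
    return torch.where(is_positive, pos_w, neg_w)
```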


2019 ◽  
Vol 11 (1) ◽  
pp. 76 ◽  
Author(s):  
Zhiqiang Gong ◽  
Ping Zhong ◽  
Weidong Hu ◽  
Yuming Hua

Deep learning methods, especially convolutional neural networks (CNNs), have shown remarkable ability for remote sensing scene classification. However, the traditional training process of standard CNNs only takes the point-wise penalization of the training samples into consideration, which usually makes the learned CNNs sub-optimal, especially for remote sensing scenes with large intra-class variance and low inter-class variance. To address this problem, deep metric learning, which incorporates metric learning into the deep model, is used to maximize the inter-class variance and minimize the intra-class variance for better representation. This work introduces structured metric learning for remote sensing scene representation, a special form of deep metric learning that can take full advantage of the training batch. However, deep metrics only consider the pairwise correlation between training samples and ignore the class-wise correlation from the class view. To take class-wise penalization into consideration, this work defines center points of the learned features of each class during training to represent the classes. By increasing the variance between different center points and decreasing the variance between the learned features of each class and the corresponding center point, the representational ability can be further improved. Therefore, this work develops a novel center-based structured metric learning that takes advantage of both the deep metrics and the center points. Finally, joint supervision of the cross-entropy loss and the center-based structured metric learning is developed for land-use classification in remote sensing; it jointly learns the center points and the deep metrics to exploit the point-wise, pairwise, and class-wise correlations. Experiments are conducted over three real-world remote sensing scene datasets, namely the UC Merced Land-Use dataset, the Brazilian Coffee Scene dataset, and the Google dataset. The proposed method achieves classification accuracies of 97.30%, 91.24%, and 92.04% over the three datasets, which are better than other state-of-the-art methods under the same experimental setups. The results demonstrate that the proposed method can improve the representational ability for remote sensing scenes.
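
A simplified sketch of the center-based idea follows: pull each feature toward a learnable center of its class while pushing different centers apart. The paper's structured metric term over the batch is more elaborate than this; `beta` is a hypothetical balancing weight.

```python
import torch

def center_based_loss(features, labels, centers, beta=0.1):
    """Intra-class compactness plus inter-center separation.

    features: (B, D) learned deep features.
    labels:   (B,)   class indices.
    centers:  (C, D) learnable per-class center points (C >= 2).
    """
    # Intra-class term: squared distance of each feature to its own center.
    intra = (features - centers[labels]).pow(2).sum(dim=1).mean()
    # Inter-class term: mean squared distance between distinct centers,
    # negated so that minimising the loss increases center separation.
    inter = torch.pdist(centers).pow(2).mean()
    return intra - beta * inter
```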


Author(s):  
Xiawu Zheng ◽  
Rongrong Ji ◽  
Xiaoshuai Sun ◽  
Baochang Zhang ◽  
Yongjian Wu ◽  
...  

Recent advances in fine-grained image retrieval favor learning a convolutional neural network (CNN) with a loss function designed on a specific fully-connected layer for discriminative feature representation. Essentially, such a loss should establish a robust metric to efficiently distinguish high-dimensional features within and outside fine-grained categories. To this end, existing loss functions are deficient in two aspects: (a) the feature relationship is encoded only inside the training batch, and such a local scope leads to low accuracy; (b) the error is established by the mean square, which requires pairwise distance computation over the training set and results in low efficiency. In this paper, we propose a novel metric learning scheme, termed Normalize-Scale Layer and Decorrelated Global Centralized Ranking Loss, which achieves extremely efficient and discriminative learning, i.e., a 5× speedup over the triplet loss and a 12% recall boost on CARS196. Our method originates from the classic softmax loss, which has a global structure but does not directly optimize the distance metric or the inter/intra-class distance. We tackle this issue through a hypersphere layer and a global centralized ranking loss with pairwise decorrelated learning. In particular, we first propose a Normalize-Scale Layer to eliminate the gap between the metric distance (for measuring distance in retrieval) and the dot product (for dimension reduction in classification). Second, the relationship between features is encoded under a global centralized ranking loss, which optimizes the metric distance globally and accelerates the learning procedure. Finally, the centers are further decorrelated by the Gram–Schmidt process, leading to extreme efficiency (20 epochs of training) and discriminability in feature learning. We have conducted quantitative evaluations on two fine-grained retrieval benchmarks. The superior performance demonstrates the merits of the proposed approach over the state-of-the-art.
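
The Normalize-Scale Layer admits a particularly compact sketch: project features onto a hypersphere by L2 normalisation, then multiply by a fixed scale so the softmax still receives logits of usable magnitude. The scale value below is illustrative, not the paper's setting.

```python
import torch
import torch.nn.functional as F

class NormalizeScale(torch.nn.Module):
    """L2-normalise features, then scale them by a constant.

    On the unit hypersphere the dot product used by the softmax classifier
    becomes monotonically equivalent to the metric distance used at
    retrieval time, closing the gap mentioned in the abstract.
    """
    def __init__(self, scale=16.0):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        return self.scale * F.normalize(x, p=2, dim=1)
```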


IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 68089-68095 ◽  
Author(s):  
Min Chen ◽  
Yongxin Ge ◽  
Xin Feng ◽  
Chuanyun Xu ◽  
Dan Yang

Author(s):  
Weifeng Ge ◽  
Weilin Huang ◽  
Dengke Dong ◽  
Matthew R. Scott
