Compressed Self-Attention for Deep Metric Learning

2020, Vol 34 (04), pp. 3561-3568
Author(s): Ziye Chen, Mingming Gong, Yanwu Xu, Chaohui Wang, Kun Zhang, ...

In this paper, we aim to enhance the self-attention (SA) mechanism for deep metric learning in visual perception by capturing richer contextual dependencies in visual data. To this end, we propose a novel module, named compressed self-attention (CSA), which significantly reduces computation and memory cost with a negligible decrease in accuracy relative to the original SA mechanism, thanks to two characteristics: i) it only needs to compute a small number of base attention maps for a small number of base feature vectors; and ii) the output at each spatial location can be obtained simply as an adaptive weighted average of the outputs calculated from the base attention maps. The high computational efficiency of CSA enables its application to high-resolution shallow layers in convolutional neural networks at little additional cost. In addition, CSA makes it practical to further partition the feature maps into groups along the channel dimension and compute attention maps for each group separately, thus increasing the diversity of long-range dependencies and accordingly boosting accuracy. We evaluate CSA via extensive experiments on two metric learning tasks: person re-identification and local descriptor learning. Qualitative and quantitative comparisons with the latest methods demonstrate the effectiveness of CSA on these tasks.
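A minimal PyTorch sketch of the idea described in the abstract, assuming a simple soft assignment of spatial locations to a handful of base vectors; the module name, hyper-parameters, and normalization choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompressedSelfAttentionSketch(nn.Module):
    """Sketch: attention is computed only for a few base vectors, then mixed per location."""
    def __init__(self, channels, num_bases=8):
        super().__init__()
        self.num_bases = num_bases
        # 1x1 conv predicts, for every spatial location, logits over the base vectors.
        self.assign = nn.Conv2d(channels, num_bases, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        logits = self.assign(x).view(b, self.num_bases, n)
        pool = F.softmax(logits, dim=-1)  # per base: weights over locations
        mix = F.softmax(logits, dim=1)    # per location: weights over bases
        feats = x.view(b, c, n)
        # A small set of base feature vectors summarizes the whole feature map.
        bases = torch.bmm(pool, feats.transpose(1, 2))                # (b, num_bases, c)
        keys = self.key(x).view(b, c, n)
        # Only num_bases attention maps are computed, not one per spatial location.
        attn = F.softmax(torch.bmm(bases, keys) / c ** 0.5, dim=-1)   # (b, num_bases, n)
        values = self.value(x).view(b, c, n)
        base_out = torch.bmm(values, attn.transpose(1, 2))            # (b, c, num_bases)
        # Each location's output is an adaptive weighted average of the base outputs.
        out = torch.bmm(base_out, mix)                                # (b, c, n)
        return out.view(b, c, h, w) + x

# Example usage on a dummy feature map.
y = CompressedSelfAttentionSketch(64)(torch.randn(2, 64, 32, 32))
```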

Author(s): Chethana Hadya Thammaiah, Trisiladevi Chandrakant Nagavi

The human face can be used as an identification and authentication tool in biometric systems. Face recognition in forensics is challenging due to partial occlusions such as hats, sunglasses, scarves, and beards; identifying criminals whose faces are partially occluded is the most difficult case. In this paper, a combination of the histogram of oriented gradients (HOG) with Euclidean distance is proposed. Deep metric learning is the process of measuring the similarity between samples using optimal distance metrics for learning tasks. In the proposed system, HOG is used to generate a 128-dimensional real-valued feature vector for each face. The Euclidean distance between feature vectors is then computed, and a tolerance threshold decides whether a pair is a match or a mismatch. Experiments are carried out on the Disguised Faces in the Wild (DFW) dataset collected from IIIT Delhi, which consists of 1000 subjects, of which 600 were used for testing and the remaining 400 for training. The proposed system achieves a recognition accuracy of 89.8% and outperforms other existing methods.
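A minimal sketch of the match/mismatch decision described above, using scikit-image's HOG descriptor and a Euclidean-distance tolerance; the HOG parameters and the tolerance value are assumptions, and the resulting descriptor is not the 128-d vector used in the paper.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def face_descriptor(image_rgb):
    """Return a HOG feature vector for an aligned RGB face crop."""
    gray = rgb2gray(image_rgb)
    return hog(gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def is_match(desc_a, desc_b, tolerance=0.6):
    """Declare a match when the Euclidean distance falls below the tolerance."""
    return np.linalg.norm(desc_a - desc_b) < tolerance
```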


Symmetry, 2019, Vol 11 (9), pp. 1066
Author(s): Kaya, Bilge

Metric learning aims to measure the similarity among samples using an optimal distance metric for a given learning task. Metric learning methods, which generally use a linear projection, are limited when solving real-world problems that exhibit non-linear characteristics. Kernel approaches are utilized in metric learning to address this problem. In recent years, deep metric learning, which provides a better solution for non-linear data through activation functions, has attracted researchers' attention in many different areas. This article aims to reveal the importance of deep metric learning and the problems dealt with in this field in light of recent studies. Most existing studies are inspired by Siamese and triplet networks, which use shared weights to relate samples in deep metric learning. The success of these networks rests on their capacity to capture the similarity relationships among samples. Moreover, the sampling strategy, the choice of distance metric, and the structure of the network are the challenging factors researchers must address to improve the performance of a network model. This article is important as the first comprehensive study in which these factors are systematically analyzed and evaluated as a whole, supported by comparisons of the quantitative results of the methods.
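A short PyTorch sketch of the shared-weight triplet setup that the survey highlights; the embedding network and margin are illustrative choices rather than any specific method from the reviewed literature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small CNN mapping an image to an L2-normalized embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)

net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# The same network (shared weights) embeds anchor, positive, and negative samples.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()
```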


2020
Author(s): Yuki Takashima, Ryoichi Takashima, Tetsuya Takiguchi, Yasuo Ariki

Author(s): Xinshao Wang, Yang Hua, Elyor Kodirov, Neil M Robertson

Genes, 2021, Vol 12 (4), pp. 572
Author(s): Alan M. Luu, Jacob R. Leistico, Tim Miller, Somang Kim, Jun S. Song

Understanding the recognition of specific epitopes by cytotoxic T cells is a central problem in immunology. Although predicting binding between peptides and the class I Major Histocompatibility Complex (MHC) has seen success, predicting interactions between T cell receptors (TCRs) and MHC class I-peptide complexes (pMHC) remains elusive. This paper utilizes a convolutional neural network model employing deep metric learning and multimodal learning to perform two critical tasks in TCR-epitope binding prediction: identifying the TCRs that bind a given epitope from a TCR repertoire, and identifying the binding epitope of a given TCR from a list of candidate epitopes. Our model can perform both tasks simultaneously and reveals that inconsistent preprocessing of TCR sequences can confound binding prediction. Applying a neural network interpretation method identifies key amino acid sequence patterns and positions within the TCR that are important for binding specificity. Contrary to the common assumption, known crystal structures of TCR-pMHC complexes show that the predicted salient amino acid positions are not necessarily the closest to the epitopes, implying that physical proximity may not be a good proxy for importance in determining TCR-epitope specificity. Our work thus provides insight into the learned predictive features of TCR-epitope binding specificity and advances the associated classification tasks.
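An illustrative dual-encoder sketch of the general metric-learning idea: embed TCR and epitope sequences into a shared space and score pairs by similarity. The one-hot encoding, architecture, and dimensions are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

def one_hot(seq, max_len=30):
    """One-hot encode an amino-acid sequence, zero-padded to max_len."""
    x = torch.zeros(len(AMINO_ACIDS), max_len)
    for i, a in enumerate(seq[:max_len]):
        if a in AA_INDEX:
            x[AA_INDEX[a], i] = 1.0
    return x

class SeqEncoder(nn.Module):
    """1D-CNN that maps a sequence to an L2-normalized embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(len(AMINO_ACIDS), 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.conv(x), dim=1)

tcr_enc, epi_enc = SeqEncoder(), SeqEncoder()

def binding_score(tcr_seq, epitope_seq):
    """Higher cosine similarity in the shared space = predicted binding."""
    t = tcr_enc(one_hot(tcr_seq).unsqueeze(0))
    e = epi_enc(one_hot(epitope_seq).unsqueeze(0))
    return F.cosine_similarity(t, e).item()
```

Ranking candidate epitopes for a TCR (or TCRs for an epitope) then reduces to sorting by this score.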


2021, pp. 1-13
Author(s): Kai Zhuang, Sen Wu, Xiaonan Gao

To deal with the systematic risk of financial institutions and the rapidly increasing number of loan applications, it is becoming extremely important to automatically predict the default probability of a loan. However, this task is non-trivial due to insufficient default samples, hard decision boundaries, and numerous heterogeneous features. To the best of our knowledge, existing studies fail to handle these three difficulties simultaneously. In this paper, we propose a weakly supervised loan default prediction model, WEAKLOAN, that systematically addresses all of these challenges based on deep metric learning. WEAKLOAN is composed of three key modules, which encode loan features, learn evaluation metrics, and calculate default risk scores. By doing so, WEAKLOAN can not only extract the features of a loan itself but also model the hidden relationships within loan pairs. Extensive experiments on real-life datasets show that WEAKLOAN significantly outperforms all compared baselines even when the default loans available for training are limited.
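A rough sketch in the spirit of the three modules described above (a loan feature encoder, a metric learned on loan pairs, and a default risk score); the architecture, pairwise loss, and scoring rule are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoanEncoder(nn.Module):
    """Maps already-numericalized heterogeneous loan features to an embedding."""
    def __init__(self, in_dim, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def pair_loss(z_a, z_b, same_label, margin=1.0):
    """Contrastive loss on loan pairs: pull same-label pairs together, push others apart."""
    d = (z_a - z_b).norm(dim=1)
    return torch.where(same_label, d.pow(2), F.relu(margin - d).pow(2)).mean()

def default_risk_score(z_query, z_defaults):
    """Score loans by their distance to known default loans in the learned space."""
    return (-torch.cdist(z_query, z_defaults).min(dim=1).values).exp()
```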


2021, Vol 185, pp. 106133
Author(s): William Andrew, Jing Gao, Siobhan Mullan, Neill Campbell, Andrew W. Dowsey, ...
