An Overview and Empirical Comparison of Distance Metric Learning Methods

2017 ◽  
Vol 47 (3) ◽  
pp. 612-625 ◽  
Author(s):  
Panagiotis Moutafis ◽  
Mengjun Leng ◽  
Ioannis A. Kakadiaris


Author(s):  
Han-Jia Ye ◽  
De-Chuan Zhan ◽  
Xue-Min Si ◽  
Yuan Jiang

The Mahalanobis distance metric takes feature weights and correlations into account in the distance computation, which can improve the performance of many similarity/dissimilarity-based methods, such as kNN. Most existing distance metric learning methods learn a metric from the raw features and side information but neglect their reliability. Noise or disturbances on instances change the relationships between them and thus affect the learned metric. In this paper, we claim that modeling the disturbance of instances can help a distance metric learning approach obtain a robust metric, and we propose the Distance metRIc learning Facilitated by disTurbances (DRIFT) approach. In DRIFT, the noise or disturbance of each instance is learned. As a result, the distance between each pair of (noisy) instances can be better estimated, which facilitates side-information utilization and metric learning. Experiments on prediction and visualization tasks clearly demonstrate the effectiveness of the proposed approach.
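For reference, DRIFT builds on the (squared) Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y), where M is a learned positive semidefinite matrix. A minimal NumPy sketch of this distance (illustrative only, not the authors' implementation; M is assumed given here rather than learned):

    import numpy as np

    def mahalanobis_sq(x, y, M):
        """Squared Mahalanobis distance (x - y)^T M (x - y).

        M should be symmetric positive semidefinite so the distance is
        nonnegative; methods such as DRIFT learn M from data and side
        information rather than fixing it in advance.
        """
        diff = x - y
        return float(diff @ M @ diff)

    x = np.array([1.0, 2.0])
    y = np.array([2.0, 0.0])
    # With M = I this reduces to the squared Euclidean distance.
    print(mahalanobis_sq(x, y, np.eye(2)))  # 5.0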


2020 ◽  
Vol 34 (04) ◽  
pp. 3834-3841
Author(s):  
Ujjal Kr Dutta ◽  
Mehrtash Harandi ◽  
C. Chandra Sekhar

Distance Metric Learning (DML) involves learning an embedding that brings similar examples closer while pushing dissimilar ones apart. Existing DML approaches use class labels to generate constraints for metric learning. In this paper, we address the less-studied problem of learning a metric in an unsupervised manner. We do not use class labels; instead, we use unlabeled data to generate adversarial, synthetic constraints for learning a metric-inducing embedding. Because entropy is a measure of uncertainty, we minimize the entropy of a conditional probability distribution to learn the metric. Our stochastic formulation scales well to large datasets and performs competitively with existing metric learning methods.
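To make the entropy objective concrete, one common instantiation in metric learning is the entropy of a softmax neighbor distribution p(j | i) over embedded points. The sketch below is a hypothetical illustration under that assumption (the linear embedding L and the neighbor distribution are our choices, not necessarily the authors' exact formulation):

    import numpy as np

    def conditional_entropy(X, L):
        """Mean entropy of the neighbor distribution p(j | i) induced by
        the embedding x -> L x, with p(j | i) proportional to
        exp(-||L x_i - L x_j||^2). Lower entropy means more certain
        neighbor assignments."""
        Z = X @ L.T                                          # embed points
        d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        np.fill_diagonal(d2, np.inf)                         # exclude p(i | i)
        logits = -d2
        logits -= logits.max(axis=1, keepdims=True)          # stable softmax
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 8))
    L = rng.normal(size=(4, 8))
    print(conditional_entropy(X, L))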


2021 ◽  
Author(s):  
Tomoki Yoshida ◽  
Ichiro Takeuchi ◽  
Masayuki Karasuyama

2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Wei Yang ◽  
Luhui Xu ◽  
Xiaopan Chen ◽  
Fengbin Zheng ◽  
Yang Liu

Learning a proper distance metric for histogram data plays a crucial role in many computer vision tasks. The chi-squared distance is a nonlinear metric that is widely used to compare histograms. In this paper, we show how to learn a general form of the chi-squared distance based on the nearest-neighbor model. In our method, the margin of a sample is first defined with respect to its nearest hits (nearest neighbors from the same class) and nearest misses (nearest neighbors from different classes); the simplex-preserving linear transformation is then trained by maximizing the margin while minimizing the distance between each sample and its nearest hits. With the iterative projected gradient method used for optimization, we naturally introduce the ℓ2,1-norm regularization into the proposed method for sparse metric learning. Comparative studies with state-of-the-art approaches on five real-world datasets verify the effectiveness of the proposed method.
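As a concrete illustration, a generalized chi-squared distance of the form chi2(Lx, Ly) stays well defined on histograms when L is simplex-preserving, i.e. entrywise nonnegative with each column summing to one, so that Lx remains a histogram whenever x is. A minimal sketch (illustrative; the projection helper is a crude hypothetical stand-in for the paper's projected-gradient step):

    import numpy as np

    def chi2_distance(p, q, eps=1e-12):
        """Chi-squared distance between histograms:
        0.5 * sum_i (p_i - q_i)^2 / (p_i + q_i)."""
        return float(0.5 * np.sum((p - q) ** 2 / (p + q + eps)))

    def project_simplex_preserving(L):
        """Force nonnegative entries and columns summing to one, so that
        L maps the probability simplex into itself. A crude stand-in for
        the projection used inside a projected-gradient loop."""
        L = np.clip(L, 0.0, None)
        return L / (L.sum(axis=0, keepdims=True) + 1e-12)

    # Learned chi-squared distance: compare histograms after mapping by L.
    rng = np.random.default_rng(0)
    L = project_simplex_preserving(rng.random((5, 8)))
    p = rng.random(8); p /= p.sum()
    q = rng.random(8); q /= q.sum()
    print(chi2_distance(L @ p, L @ q))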

