Comparison of distance metric learning methods against label noise for fine-grained recognition

Author(s): Alper Kayabasi, Kaan Karaman, Ibrahim Batuhan Akkaya

Author(s): Han-Jia Ye, De-Chuan Zhan, Xue-Min Si, Yuan Jiang

The Mahalanobis distance metric takes feature weights and correlations into account when computing distances, which can improve the performance of many similarity/dissimilarity based methods, such as kNN. Most existing distance metric learning methods derive a metric from the raw features and side information but neglect their reliability. Noise or disturbances on instances alter their relationships and thereby affect the learned metric. In this paper, we argue that accounting for instance disturbances can help a distance metric learning approach obtain a robust metric, and we propose the Distance metRIc learning Facilitated by disTurbances (DRIFT) approach. In DRIFT, the noise or disturbance of each instance is learned, so the distance between each pair of (possibly noisy) instances can be better estimated, which in turn facilitates the use of side information and metric learning. Experiments on prediction and visualization tasks clearly demonstrate the effectiveness of the proposed approach.
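
As a minimal illustration (not the DRIFT method itself), the sketch below computes a Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)), where a learned positive semidefinite matrix M encodes feature weights and correlations; the matrix and vectors here are purely hypothetical values, not learned from data.

import numpy as np

def mahalanobis_distance(x, y, M):
    """Mahalanobis distance sqrt((x - y)^T M (x - y)) under a PSD matrix M."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# Illustrative values only: a diagonal M reduces to per-feature weighting.
x = np.array([1.0, 2.0])
y = np.array([2.0, 0.0])
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])   # hypothetical "learned" metric
print(mahalanobis_distance(x, y, M))   # prints 2.0

With M set to the identity matrix this reduces to the ordinary Euclidean distance; metric learning methods instead fit M (or an equivalent linear embedding) from side information.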


2017, Vol. 47 (3), pp. 612-625
Author(s): Panagiotis Moutafis, Mengjun Leng, Ioannis A. Kakadiaris

2019, Vol. 330, pp. 138-150
Author(s): Fanxia Zeng, Wensheng Zhang, Siheng Zhang, Nan Zheng

2020, Vol. 34 (04), pp. 3834-3841
Author(s): Ujjal Kr Dutta, Mehrtash Harandi, C. Chandra Sekhar

Distance Metric Learning (DML) involves learning an embedding that pulls similar examples closer together while pushing dissimilar ones apart. Existing DML approaches use class labels to generate constraints for metric learning. In this paper, we address the less-studied problem of learning a metric in an unsupervised manner. We do not use class labels; instead, we use unlabeled data to generate adversarial, synthetic constraints for learning a metric-inducing embedding. Since entropy is a measure of uncertainty, we learn the metric by minimizing the entropy of a conditional probability distribution. Our stochastic formulation scales well to large datasets and performs competitively with existing metric learning methods.
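
For illustration only, the sketch below follows the general idea of entropy-based unsupervised metric learning rather than the paper's exact formulation: a linear embedding is trained by minimizing the entropy of a conditional neighbor distribution over unlabeled points. The synthetic data, embedding dimensions, and optimizer settings are assumptions.

import torch

torch.manual_seed(0)
X = torch.randn(100, 10)                       # unlabeled data (synthetic, for illustration)
L = torch.randn(10, 5, requires_grad=True)     # linear embedding to be learned
opt = torch.optim.Adam([L], lr=1e-2)
mask = torch.eye(X.size(0), dtype=torch.bool)  # used to exclude self-pairs

for step in range(200):
    Z = X @ L                                              # embed the data
    d2 = ((Z.unsqueeze(1) - Z.unsqueeze(0)) ** 2).sum(-1)  # pairwise squared distances
    p = torch.softmax(-d2.masked_fill(mask, float('inf')), dim=1)  # conditional p(j | i)
    entropy = -(p * torch.log(p + 1e-12)).sum(dim=1).mean()        # mean entropy over points
    opt.zero_grad()
    entropy.backward()
    opt.step()

Minimizing this entropy sharpens each point's neighbor distribution in the embedded space, which is one way to obtain a metric without class labels; the actual method additionally relies on adversarially generated synthetic constraints.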


2021
Author(s): Tomoki Yoshida, Ichiro Takeuchi, Masayuki Karasuyama
