Nasopharyngeal carcinoma lesion extraction using clustering via semi-supervised metric learning with side-information

Author(s):  
Wei Huang ◽  
Kap Luk Chan ◽  
Yan Gao ◽  
V. Chong
Author(s):  
Xiaohong Deng ◽  
Yunbin Chen ◽  
Jinsheng Hong ◽  
Zhongshi Du ◽  
...  

Author(s):  
Han-Jia Ye ◽  
De-Chuan Zhan ◽  
Xue-Min Si ◽  
Yuan Jiang

The Mahalanobis distance metric takes feature weights and correlations into account in the distance computation, which can improve the performance of many similarity/dissimilarity-based methods, such as kNN. Most existing distance metric learning methods derive the metric from the raw features and side information but neglect their reliability. Noise or disturbances on instances alter the relationships between them and thus affect the learned metric. In this paper, we argue that modeling the disturbance of instances can help a distance metric learning approach obtain a robust metric, and we propose the Distance metRIc learning Facilitated by disTurbances (DRIFT) approach. In DRIFT, the noise or disturbance of each instance is learned, so the distance between each pair of (noisy) instances can be better estimated, which facilitates both the use of side information and the metric learning itself. Experiments on prediction and visualization tasks clearly indicate the effectiveness of the proposed approach.
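The abstract gives no formulas, so the following is only a minimal Python sketch of the general idea rather than the DRIFT formulation itself: distances are measured under a learned Mahalanobis matrix after each instance is shifted by a learned per-instance disturbance. The names x, delta, and M are illustrative assumptions, not symbols taken from the paper.

import numpy as np

# Sketch only: Mahalanobis distance under a PSD matrix M, computed on
# disturbance-corrected instances (x_i + delta_i). How M and delta would
# actually be learned in DRIFT is not specified here.
def mahalanobis(u, v, M):
    d = u - v
    return float(np.sqrt(d @ M @ d))

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3))             # two (noisy) instances
delta = 0.1 * rng.normal(size=(2, 3))   # hypothetical learned disturbances
L = rng.normal(size=(3, 3))
M = L @ L.T                             # any PSD matrix defines a valid metric

print(mahalanobis(x[0], x[1], M))                        # distance on raw instances
print(mahalanobis(x[0] + delta[0], x[1] + delta[1], M))  # distance after correction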


2011 ◽  
Vol 2 (2) ◽  
pp. 1-28 ◽  
Author(s):  
Lei Wu ◽  
Steven C.H. Hoi ◽  
Rong Jin ◽  
Jianke Zhu ◽  
Nenghai Yu

Author(s):  
Kai Liu ◽  
Lodewijk Brand ◽  
Hua Wang ◽  
Feiping Nie

Metric learning, which aims at learning a distance metric for a given data set, plays an important role in measuring the distance or similarity between data objects. Owing to its broad usefulness, it has attracted considerable interest in machine learning and related areas over the past few decades. This paper proposes to learn the distance metric from side information given in the form of must-links and cannot-links. Given the pairwise constraints, our goal is to learn a Mahalanobis distance that minimizes the ratio of the distances between data pairs in the must-links to those in the cannot-links. Unlike many existing papers that use the traditional squared L2-norm distance, we develop a robust model that is less sensitive to data noise and outliers by using the not-squared L2-norm distance. In our objective, an orthonormal constraint is enforced to avoid degenerate solutions. To solve the objective, we derive an efficient iterative algorithm. Extensive experiments demonstrate the superiority of our method over the state-of-the-art.
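The abstract states the objective only in words; one plausible way to write it, assuming the Mahalanobis matrix is factorized as M = L L^T with orthonormal L (the notation below is ours, with \mathcal{M} the must-link set and \mathcal{C} the cannot-link set, not necessarily the paper's), is

% Sketch of the ratio objective described above; notation is ours.
\min_{L \,:\, L^\top L = I}\;
\frac{\sum_{(x_i, x_j) \in \mathcal{M}} \lVert L^\top (x_i - x_j) \rVert_2}
     {\sum_{(x_i, x_j) \in \mathcal{C}} \lVert L^\top (x_i - x_j) \rVert_2}

Using the not-squared L2-norm in both sums, rather than its square, is what limits the influence of noisy pairs and outliers on the learned metric.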


2013 ◽  
Vol 22 (03) ◽  
pp. 1350013 ◽  
Author(s):  
OUIEM BCHIR ◽  
HICHEM FRIGUI ◽  
MOHAMED MAHER BEN ISMAIL

Many machine learning applications rely on learning distance functions with side information. Most of these distance metric learning approaches learn a Mahalanobis distance. While such approaches may work well when the data are low-dimensional, they become computationally expensive or even infeasible for high-dimensional data. In this paper, we propose a novel method for learning nonlinear distance functions with side information while clustering the data. The new semi-supervised clustering approach is called Semi-Supervised Fuzzy clustering with Learnable Cluster-dependent Kernels (SS-FLeCK). The proposed algorithm learns the underlying cluster-dependent dissimilarity measure while finding compact clusters in the given data set. The learned dissimilarity is based on a Gaussian kernel function with cluster-dependent parameters. The objective function integrates penalty and reward cost terms, weighted by fuzzy membership degrees, and uses side information in the form of a small set of constraints on which instances should or should not reside in the same cluster. The algorithm uses only the pairwise relations between feature vectors, which makes it applicable when similar objects cannot be represented by a single prototype. Using synthetic and real data sets, we show that SS-FLeCK outperforms several other algorithms.
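The abstract does not give the objective function, so the following is only a small Python illustration of the kind of cluster-dependent Gaussian-kernel dissimilarity it describes, not the SS-FLeCK algorithm itself; the variable names and scale values are assumptions.

import numpy as np

# Illustration only: a Gaussian-kernel dissimilarity whose scale depends on
# the cluster, so the same pair of points can look far apart with respect to
# a tight cluster and close with respect to a loose one.
def kernel_dissimilarity(x_i, x_j, sigma_k):
    sq = np.sum((x_i - x_j) ** 2)
    return 1.0 - np.exp(-sq / (2.0 * sigma_k ** 2))

x_i = np.array([0.0, 0.0])
x_j = np.array([1.0, 1.0])
sigma = {0: 0.5, 1: 2.0}   # hypothetical cluster-dependent kernel scales
print(kernel_dissimilarity(x_i, x_j, sigma[0]))  # ~0.98: far w.r.t. tight cluster 0
print(kernel_dissimilarity(x_i, x_j, sigma[1]))  # ~0.22: close w.r.t. loose cluster 1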


2014 ◽  
Vol 24 (11) ◽  
pp. 2642-2655
Author(s):  
Peng-Cheng ZOU ◽  
Jian-Dong WANG ◽  
Guo-Qing YANG ◽  
Xia ZHANG ◽  
Li-Na WANG
