On The Power of Deep But Naive Partial Label Learning

Author(s):  
Junghoon Seo ◽  
Joon Suk Huh

Author(s):  
Haobo Wang ◽  
Yuzhou Qiang ◽  
Chen Chen ◽  
Weiwei Liu ◽  
Tianlei Hu ◽  
...  

Author(s):  
Gengyu Lyu ◽  
Songhe Feng ◽  
Tao Wang ◽  
Congyan Lang ◽  
Yidong Li

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Chaofan Hu ◽  
Zhichao Zhou ◽  
Biao Wang ◽  
WeiGuang Zheng ◽  
Shuilong He

In this paper, a new tensor transfer approach is proposed for intelligent fault diagnosis of rotating machinery with semi-supervised partial label learning. Firstly, the vibration signals are arranged as a three-way tensor along the trial, condition, and channel modes. Secondly, to adapt the source- and target-domain tensor representations directly, without vectorization, a domain adaptation (DA) approach named tensor-aligned invariant subspace learning (TAISL) is introduced for tensor representations when the training and testing data are drawn from different distributions. Then, semi-supervised partial label learning (SSPLL) is introduced to tackle the problem that labeling a large number of instances is difficult and much data remains unlabeled. Finally, the proposed method is used to identify faults. Its effectiveness and feasibility have been thoroughly validated by transfer fault experiments, and the results show that the presented technique achieves better performance than the compared methods.
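The three-way arrangement described above can be sketched in plain Python. This is an illustrative simplification, not the paper's pipeline: the dimensions, the `records` layout, and the choice of a single RMS feature per cell are all assumptions for the example.

```python
import math

def rms(signal):
    """Root-mean-square amplitude of a 1-D vibration signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def build_tensor(records, n_trial, n_cond, n_chan):
    """Arrange records into a trial x condition x channel tensor.

    records[t][c][ch] is assumed to be the raw 1-D vibration signal for
    trial t, operating condition c, and sensor channel ch; each tensor
    cell here holds only a single summary feature (RMS) of that signal.
    """
    return [[[rms(records[t][c][ch]) for ch in range(n_chan)]
             for c in range(n_cond)]
            for t in range(n_trial)]

# Two trials, one operating condition, two sensor channels of toy signals.
recs = [[[[3.0, 4.0], [1.0, 1.0]]],
        [[[0.0, 0.0], [2.0, 2.0]]]]
T = build_tensor(recs, 2, 1, 2)  # shape: 2 x 1 x 2
```

The point of keeping the tensor structure (rather than flattening each record into a long vector) is that TAISL can then align the source and target domains mode by mode.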


Author(s):  
Lei Feng ◽  
Bo An

Partial label learning is a weakly supervised learning framework in which each instance is provided with multiple candidate labels, only one of which is correct. Most existing approaches focus on leveraging instance relationships to disambiguate the given noisy label space, but it remains unclear whether potentially useful information in the label space can be exploited to alleviate label ambiguities. This paper gives a positive answer to this question for the first time. Specifically, if two instances do not share any common candidate labels, they cannot have the same ground-truth label. By exploiting such dissimilarity relationships in the label space, we propose a novel approach that maximizes the latent semantic difference between two instances whose ground-truth labels are definitely different, while simultaneously training the desired model, thereby continually enlarging the gap in label confidences between instances of different classes. Extensive experiments on artificial and real-world partial label datasets show that our approach significantly outperforms state-of-the-art counterparts.
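The dissimilarity relationship described above is purely set-theoretic and can be sketched directly; the function name and the 0/1 matrix encoding below are illustrative choices, not the paper's notation.

```python
def dissimilarity_matrix(candidate_sets):
    """Return an n x n 0/1 matrix: entry (i, j) is 1 when instances i and j
    share no candidate label, so their ground-truth labels must differ."""
    n = len(candidate_sets)
    D = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if not candidate_sets[i] & candidate_sets[j]:
                D[i][j] = D[j][i] = 1
    return D

# The first two instances share candidate label 1; the third is disjoint
# from both, so it is "definitely different" from each of them.
sets = [{0, 1}, {1, 2}, {3, 4}]
D = dissimilarity_matrix(sets)
```

A matrix like `D` can then drive a loss term that pushes apart the predictions (or latent representations) of every pair marked 1, which is the mechanism the abstract describes.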


Author(s):  
Houjie Li ◽  
Lei Wu ◽  
Jianjun He ◽  
Ruirui Zheng ◽  
Yu Zhou ◽  
...  

The ambiguity of training samples in the partial label learning framework makes it difficult to develop learning algorithms, and most existing algorithms are based on traditional shallow machine learning models, such as decision trees, support vector machines, and Gaussian process models. Deep neural networks have demonstrated excellent performance in many application fields, but they are currently rarely used in the partial label learning framework. This study proposes a new partial label learning algorithm based on a fully connected deep neural network, in which the relationship between the candidate labels and the ground-truth label of each training sample is established by defining three new loss functions, and a regularization term is added to prevent overfitting. The experimental results on controlled UCI datasets and real-world partial label datasets reveal that the proposed algorithm can achieve higher classification accuracy than state-of-the-art partial label learning algorithms.
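The abstract does not spell out its three loss functions, so as a hedged illustration of what a candidate-label loss for a neural network can look like, here is one common choice from the partial label literature: cross-entropy of the softmax output against a uniform target over the candidate set. The function name and logits are made up for the example.

```python
import math

def candidate_cross_entropy(logits, candidates):
    """Cross-entropy against a uniform distribution over the candidate
    labels -- one common partial-label loss, not necessarily one of the
    paper's three. Uses a max-shift for numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(math.log(probs[c]) for c in candidates) / len(candidates)

# A 3-class output whose candidate set is {0, 1}.
loss = candidate_cross_entropy([2.0, 0.5, -1.0], {0, 1})
```

This loss pushes probability mass onto the candidate set as a whole; disambiguation between the candidates then comes from other terms (in the paper's case, its remaining losses and the regularizer).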


Author(s):  
Lei Feng ◽  
Bo An

Partial label learning deals with the problem where each training instance is assigned a set of candidate labels, only one of which is correct. This paper provides the first attempt to leverage the idea of self-training for dealing with partially labeled examples. Specifically, we propose a unified formulation with proper constraints to train the desired model and perform pseudo-labeling jointly. For pseudo-labeling, unlike traditional self-training that manually singles out the ground-truth label with sufficiently high confidence, we introduce a maximum infinity norm regularization on the model outputs to achieve this desideratum automatically, which results in a convex-concave optimization problem. We show that optimizing this convex-concave problem is equivalent to solving a set of quadratic programming (QP) problems. By proposing an upper-bound surrogate objective function, we turn to solving only one QP problem to improve optimization efficiency. Extensive experiments on synthesized and real-world datasets demonstrate that the proposed approach significantly outperforms state-of-the-art partial label learning approaches.
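As a much-simplified illustration of the pseudo-labeling idea (ignoring the paper's joint convex-concave formulation and QP solver entirely), one can restrict the model outputs to the candidate set and take the label where the restricted output peaks; the infinity norm over the candidates measures how confident that peak is. All names and numbers below are hypothetical.

```python
def pseudo_label(outputs, candidates):
    """Pick the candidate label with the largest model output, and report
    the infinity norm of the outputs restricted to the candidate set."""
    best = max(candidates, key=lambda c: outputs[c])
    inf_norm = max(abs(outputs[c]) for c in candidates)
    return best, inf_norm

# Candidate set {0, 1}; the model leans toward label 1.
label, inf_norm = pseudo_label([0.1, 0.7, 0.2], {0, 1})
```

Encouraging a large infinity norm over the candidates, as the paper's regularizer does, drives the output toward a one-hot vector on a single candidate, which is what makes the pseudo-labeling automatic rather than threshold-based.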


Author(s):  
Ning Xu ◽  
Jiaqi Lv ◽  
Xin Geng

Partial label learning aims to learn from training examples each associated with a set of candidate labels, among which only one label is valid for the training example. The common strategy to induce a predictive model is to disambiguate the candidate label set, for example by identifying the ground-truth label iteratively or by treating each candidate label equally. Nonetheless, these strategies ignore the generalized label distribution corresponding to each instance, since the generalized label distribution is not explicitly available in the training set. In this paper, a new partial label learning strategy named PL-LE is proposed to learn from partial label examples via label enhancement. Specifically, the generalized label distributions are recovered by leveraging the topological information of the feature space. After that, a multi-class predictive model is learned by fitting a regularized multi-output regressor to the generalized label distributions. Extensive experiments show that PL-LE performs favorably against state-of-the-art partial label learning approaches.
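One simple way to exploit feature-space topology, sketched below, is to let each instance's nearest neighbors vote on its candidate labels and renormalize the votes into a distribution. This is a hypothetical simplification of label enhancement, not PL-LE's actual recovery procedure; the neighbor lists are assumed to come from a k-NN query in feature space.

```python
def label_enhance(candidates, neighbors, num_classes):
    """Recover one label distribution per instance.

    candidates: list of candidate-label sets, one per instance.
    neighbors:  list of neighbor-index lists (from the feature space).
    """
    dists = []
    for i, cand in enumerate(candidates):
        votes = [0.0] * num_classes
        for j in neighbors[i]:
            for c in candidates[j]:
                if c in cand:          # count only the instance's own candidates
                    votes[c] += 1.0
        total = sum(votes)
        if total == 0:                 # no support: fall back to uniform
            for c in cand:
                votes[c] = 1.0
            total = float(len(cand))
        dists.append([v / total for v in votes])
    return dists

cands = [{0, 1}, {0}, {1, 2}]
nbrs = [[1, 2], [0], [0]]              # assumed k-NN results
D = label_enhance(cands, nbrs, 3)
```

The recovered distributions (`D` here) then serve as soft regression targets for the multi-output regressor, replacing the hard, ambiguous candidate sets.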

