Partial Label Learning
Recently Published Documents


TOTAL DOCUMENTS: 44 (five years: 35)

H-INDEX: 7 (five years: 4)

2022, Vol. 16 (4), pp. 1-18
Author(s): Min-Ling Zhang, Jing-Han Wu, Wei-Xuan Bao

As an emerging weakly supervised learning framework, partial label learning considers inaccurate supervision where each training example is associated with multiple candidate labels, among which only one is valid. This article presents a first attempt at employing dimensionality reduction to improve the generalization performance of partial label learning systems. Specifically, the popular linear discriminant analysis (LDA) technique is endowed with the ability to deal with partial label training examples. To tackle the challenge of unknown ground-truth labeling information, a novel learning approach named Delin is proposed, which alternates between LDA dimensionality reduction and candidate label disambiguation based on estimated labeling confidences over candidate labels. On one hand, the (kernelized) projection matrix of LDA is optimized by utilizing disambiguation-guided labeling confidences. On the other hand, the labeling confidences are disambiguated by resorting to kNN aggregation in the LDA-induced feature space. Extensive experiments over a broad range of partial label datasets clearly validate the effectiveness of Delin in improving the generalization performance of well-established partial label learning algorithms.
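The alternation described above can be sketched in plain NumPy. This is an illustrative reading of the two steps (confidence-weighted LDA scatter matrices, then kNN aggregation restricted to each candidate set), not the authors' implementation; all names are hypothetical.

```python
import numpy as np

def delin_step(X, cand, conf, dim=2, k=3):
    """One alternation of a Delin-style loop (sketch):
    (1) confidence-weighted LDA projection,
    (2) kNN disambiguation of labeling confidences, masked by each
        example's candidate label set."""
    n, d = X.shape
    q = cand.shape[1]                       # number of class labels
    mu = X.mean(axis=0)
    Sw = np.zeros((d, d))                   # within-class scatter
    Sb = np.zeros((d, d))                   # between-class scatter
    for c in range(q):
        w = conf[:, c]
        if w.sum() == 0:
            continue
        mc = (w[:, None] * X).sum(0) / w.sum()   # weighted class mean
        Xc = X - mc
        Sw += (w[:, None] * Xc).T @ Xc
        diff = (mc - mu)[:, None]
        Sb += w.sum() * (diff @ diff.T)
    # leading eigenvectors of Sw^{-1} Sb give the projection matrix
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)[:dim]
    P = evecs[:, order].real
    Z = X @ P                                # LDA-induced feature space
    # kNN aggregation: pool neighbors' confidences, keep only the
    # candidate labels, renormalize
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]
    new = conf[nbrs].sum(axis=1) * cand
    new /= new.sum(axis=1, keepdims=True)
    return P, new
```

Iterating `delin_step` until the confidences stabilize mirrors the paper's alternating scheme.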


Author(s): Houjie Li, Lei Wu, Jianjun He, Ruirui Zheng, Yu Zhou, ...

The ambiguity of training samples in the partial label learning framework makes it difficult to develop learning algorithms, and most existing algorithms are based on traditional shallow machine learning models, such as the decision tree, support vector machine, and Gaussian process model. Deep neural networks have demonstrated excellent performance in many application fields, but they are currently rarely used in the partial label learning framework. This study proposes a new partial label learning algorithm based on a fully connected deep neural network, in which the relationship between the candidate labels and the ground-truth label of each training sample is established by defining three new loss functions, and a regularization term is added to prevent overfitting. Experimental results on controlled UCI datasets and real-world partial label datasets reveal that the proposed algorithm achieves higher classification accuracy than the state-of-the-art partial label learning algorithms.
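The paper's three loss functions are not reproduced above. As a hedged illustration of how a network's output can be tied to a candidate label set at all, a common surrogate (sometimes called the minimal loss) charges each example the cross-entropy of its most plausible candidate label:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def min_candidate_loss(logits, cand):
    """Minimal-loss surrogate for candidate label sets (illustrative,
    not one of the paper's three losses): for each example, take the
    highest predicted probability among its candidate labels and charge
    its negative log-likelihood."""
    p = softmax(logits)
    masked = np.where(cand.astype(bool), p, 0.0)   # zero out non-candidates
    return -np.log(masked.max(axis=1) + 1e-12).mean()
```

A network trained against such a loss is pushed to concentrate probability mass on some member of each candidate set rather than on labels known to be wrong.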


Author(s): Houjie Li, Min Yang, Yu Zhou, Ruirui Zheng, Wenpeng Liu, ...

Partial label learning is a new weakly supervised learning framework. In this framework, the real category label of a training sample is usually concealed in a set of candidate labels, which leads to lower accuracy of learning algorithms compared with the traditional strongly supervised case. Recently, it has been found that metric learning technology can be used to improve the accuracy of partial label learning algorithms. However, because it is difficult to ascertain similar pairs from training samples, at present there are few metric learning algorithms for the partial label learning framework. In view of this, this paper proposes a similar-pair-free partial label metric learning algorithm. The main idea of the algorithm is to define two probability distributions on the training samples, i.e., the probability distribution determined by the distance between sample pairs and the probability distribution determined by the similarity of the candidate label sets of sample pairs, and then to obtain the metric matrix by minimizing the KL divergence between the two distributions. Experimental results on several real-world partial label datasets show that the proposed algorithm improves the accuracy of the k-nearest neighbor partial label learning algorithm (PL-KNN) more than the existing partial label metric learning algorithms do, by up to 8 percentage points.
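A minimal sketch of the objective, assuming a Mahalanobis distance for the first distribution and Jaccard similarity of candidate sets for the second (illustrative choices; the paper's exact definitions may differ):

```python
import numpy as np

def metric_kl_objective(X, cand, M):
    """KL objective for a similar-pair-free metric (sketch):
    P is induced by Mahalanobis distances d^2 = (x_i - x_j)^T M (x_i - x_j),
    Q by the Jaccard similarity of the pairs' candidate label sets.
    Learning would minimize this value over the metric matrix M."""
    n = len(X)
    iu = np.triu_indices(n, 1)                      # all unordered pairs
    diff = X[:, None, :] - X[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)[iu]
    P = np.exp(-d2)
    P /= P.sum()                                    # distance distribution
    inter = (cand @ cand.T)[iu].astype(float)
    union = (cand.sum(1)[:, None] + cand.sum(1)[None, :])[iu] - inter
    Q = inter / union                               # Jaccard similarity
    Q /= Q.sum()                                    # label-set distribution
    eps = 1e-12
    return float((Q * np.log((Q + eps) / (P + eps))).sum())
```

A metric that places pairs with similar candidate sets close together yields a smaller KL value than one that does not, which is exactly what the minimization exploits.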


2021, Vol. 2021, pp. 1-11
Author(s): Chaofan Hu, Zhichao Zhou, Biao Wang, WeiGuang Zheng, Shuilong He

A new tensor transfer approach is proposed in this paper for intelligent fault diagnosis of rotating machinery with semisupervised partial label learning. Firstly, the vibration signals are arranged as a three-way tensor via trial, condition, and channel. Secondly, to adapt the source- and target-domain tensor representations directly, without vectorization, the domain adaptation (DA) approach named tensor-aligned invariant subspace learning (TAISL) is first proposed for tensor representation when testing and training data are drawn from different distributions. Then, semisupervised partial label learning (SSPLL) is first introduced to tackle the problem that it is hard to label a large number of instances, leaving much data unlabeled. Finally, the proposed method is used to identify faults. Its effectiveness and feasibility have been thoroughly validated by transfer fault experiments, and the experimental results show that the presented technique achieves better performance.
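The first step, arranging vibration data into a trial × condition × channel tensor, might look as follows. The per-entry RMS feature is an illustrative assumption, not necessarily the authors' choice, and the `segments` layout is hypothetical.

```python
import numpy as np

def build_rms_tensor(segments):
    """Arrange vibration data into a (trial, condition, channel) tensor.
    `segments[t][c]` is assumed to be an (n_channels, n_samples) array
    recorded in trial t under operating condition c; each tensor entry
    is the RMS energy of one channel's segment."""
    T = len(segments)
    C = len(segments[0])
    K = np.asarray(segments[0][0]).shape[0]
    X = np.zeros((T, C, K))
    for t in range(T):
        for c in range(C):
            sig = np.asarray(segments[t][c])        # (channels, samples)
            X[t, c] = np.sqrt((sig ** 2).mean(axis=1))
    return X
```

Keeping the three modes separate (rather than flattening to a vector) is what allows a tensor-based DA method such as TAISL to align the domains mode by mode.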


Author(s): Guang-Yi Lin, Zi-Yang Xiao, Jia-Tong Liu, Bei-Zhan Wang, Kun-Hong Liu, ...

Author(s): Yunfeng Zhao, Guoxian Yu, Lei Liu, Zhongmin Yan, Lizhen Cui, ...

Partial-label learning (PLL) generally focuses on inducing a noise-tolerant multi-class classifier by training on overly-annotated samples, each of which is annotated with a set of labels, only one of which is valid. A basic premise of existing PLL solutions is that there are sufficient partial-label (PL) samples for training. However, it is more common than not to have just a few PL samples at hand when dealing with a new task. Furthermore, existing few-shot learning algorithms assume precise labels of the support set; as such, irrelevant labels may seriously mislead the meta-learner and thus lead to compromised performance. How to enable PLL under a few-shot learning setting is an important but not yet well-studied problem. In this paper, we introduce an approach called FsPLL (Few-shot PLL). FsPLL first performs adaptive distance metric learning by an embedding network, rectifying prototypes on the tasks previously encountered. Next, it calculates the prototype of each class of a new task in the embedding network. An unseen example can then be classified via its distance to each prototype. Experimental results on widely used few-shot datasets demonstrate that FsPLL achieves superior performance to the state-of-the-art methods and needs fewer samples to quickly adapt to new tasks.
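The prototype-based classification step can be sketched as follows. The embedding network and prototype rectification are omitted; the confidence-weighted prototype is an assumption for illustration, and all names are hypothetical.

```python
import numpy as np

def prototype_classify(emb_support, conf, emb_query):
    """Nearest-prototype step of an FsPLL-style pipeline (sketch):
    each class prototype is the confidence-weighted mean of the embedded
    support samples; a query is assigned to its nearest prototype."""
    w = conf / conf.sum(axis=0, keepdims=True)    # normalize per class
    protos = w.T @ emb_support                    # (n_classes, emb_dim)
    d2 = ((emb_query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1), protos
```

Because the prototypes are plain weighted means in the embedding space, adapting to a new task only requires embedding its few support samples, which matches the fast-adaptation claim of the abstract.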


Author(s): Yan Yan, Yuhong Guo

Partial label (PL) learning tackles the problem where each training instance is associated with a set of candidate labels that includes both the true label and some irrelevant noise labels. In this paper, we propose a novel multi-level generative model for partial label learning (MGPLL), which tackles the PL problem by learning both a label-level adversarial generator and a feature-level adversarial generator under a bi-directional mapping framework between the label vectors and the data samples. MGPLL uses a conditional noise label generation network to model the non-random noise labels and perform label denoising, and uses a multi-class predictor to map the training instances to the denoised label vectors, while a conditional data feature generator forms an inverse mapping from the denoised label vectors to data samples. Both the noise label generator and the data feature generator are learned in an adversarial manner to match the observed candidate labels and data features, respectively. We conduct extensive experiments on both synthesized and real-world partial label datasets. The proposed approach demonstrates state-of-the-art performance for partial label learning.

