A framework for evaluation in learning from label proportions

2019, Vol 8 (3), pp. 359-373. Author(s): Jerónimo Hernández-González

2020, Vol 128, pp. 73-81. Author(s): Yong Shi, Jiabin Liu, Bo Wang, Zhiquan Qi, Yingjie Tian

PLoS ONE, 2017, Vol 12 (4), e0175856. Author(s): David Hübner, Thibault Verhoeven, Konstantin Schmid, Klaus-Robert Müller, Michael Tangermann, ...

Author(s): Jiabin Liu, Bo Wang, Xin Shen, Zhiquan Qi, Yingjie Tian

Learning from label proportions (LLP) aims to learn an instance-level classifier from grouped training data in which only the label proportions of each group are provided. Existing deep-learning-based LLP methods use end-to-end pipelines whose proportional loss is the Kullback-Leibler divergence between the bag-level prior and posterior class distributions. However, unconstrained optimization of this objective rarely reaches a solution consistent with the given proportions. Moreover, for a probabilistic classifier, this strategy inevitably yields high-entropy conditional class distributions at the instance level. Both issues degrade instance-level classification performance. In this paper, we regard these problems as noisy pseudo-labeling and instead impose strict proportion consistency on the classifier via constrained optimization, run as a continued training stage for existing LLP classifiers. In addition, we introduce the mixup strategy and symmetric cross-entropy to further reduce the label noise. Our framework is model-agnostic and, in extensive experiments, delivers compelling performance improvements when incorporated into other deep LLP models as a post-hoc phase.
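
The abstract names three ingredients without showing code: the bag-level KL proportional loss used by existing deep LLP methods, the symmetric cross-entropy that tempers noisy pseudo-labels, and mixup. The following is a minimal PyTorch-style sketch of those pieces, under stated assumptions; the function names and hyper-parameters (llp_proportion_loss, symmetric_cross_entropy, alpha, beta, lam) are illustrative and are not the authors' implementation.

```python
# Minimal sketch, assuming a standard PyTorch setup. Names and defaults
# are illustrative assumptions, not reproduced from the paper's code.
import torch
import torch.nn.functional as F

def llp_proportion_loss(logits, bag_prior, eps=1e-8):
    """KL divergence between a bag's given label proportions (prior)
    and the posterior proportions obtained by averaging the softmax
    predictions over that bag's instances.

    logits:    (n_instances, n_classes) scores for one bag
    bag_prior: (n_classes,) given label proportions, summing to 1
    """
    posterior = F.softmax(logits, dim=1).mean(dim=0)  # bag-level posterior
    # KL(prior || posterior); eps guards against log(0)
    return torch.sum(bag_prior * (torch.log(bag_prior + eps)
                                  - torch.log(posterior + eps)))

def symmetric_cross_entropy(logits, pseudo_labels, alpha=1.0, beta=1.0):
    """Symmetric cross-entropy: standard CE plus a reverse-CE term,
    which is more robust when the (pseudo) labels are noisy."""
    ce = F.cross_entropy(logits, pseudo_labels)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7)
    onehot = F.one_hot(pseudo_labels, logits.size(1)).float().clamp(min=1e-4)
    rce = -(pred * torch.log(onehot)).sum(dim=1).mean()
    return alpha * ce + beta * rce

def mixup(x, y_soft, lam=0.7):
    """Mixup within a batch: convex combinations of inputs and soft labels."""
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx], lam * y_soft + (1 - lam) * y_soft[idx]
```

In the post-hoc stage the abstract describes, one would draw pseudo-labels from the current classifier, enforce that each bag's label assignments match its given proportions (the constrained-optimization step, whose exact projection is not reproduced here), and then train on mixup-augmented batches with the symmetric cross-entropy.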


2017, Vol 10 (1), pp. 187-205. Author(s): Yong Shi, Limeng Cui, Zhensong Chen, Zhiquan Qi

2018, Vol 103, pp. 9-18. Author(s): Yong Shi, Jiabin Liu, Zhiquan Qi, Bo Wang
