Topic-based label distribution learning to exploit label ambiguity for scene classification

Author(s):  
Jianqiao Luo ◽  
Biao He ◽  
Yang Ou ◽  
Bailin Li ◽  
Kai Wang
2021 ◽  
Vol 13 (4) ◽  
pp. 755
Author(s):  
Jianqiao Luo ◽  
Yihan Wang ◽  
Yang Ou ◽  
Biao He ◽  
Bailin Li

Many aerial images with similar appearances have different but correlated scene labels, which causes label ambiguity. Label distribution learning (LDL) can express label ambiguity by giving each sample a label distribution. Thus, a sample contributes to the learning of its ground-truth label as well as correlated labels, which improves data utilization. LDL has gained success in many fields, such as age estimation, where label ambiguity can easily be modeled from prior knowledge about local sample similarity and global label correlations. However, LDL has never been applied to scene classification, because no such knowledge about local similarity and label correlations is available, making label ambiguity hard to model. In this paper, we uncover the sample neighbors that cause label ambiguity by jointly capturing local similarity and label correlations, and we propose neighbor-based LDL (N-LDL) for aerial scene classification. We define a subspace learning problem that formulates the neighboring relations as a coefficient matrix regularized by a sparse constraint and by label correlations. The sparse constraint yields a few nearest neighbors, which capture local similarity. The label correlations are predefined according to the confusion matrices on validation sets. During subspace learning, the neighboring relations are encouraged to agree with the label correlations, which ensures that the uncovered neighbors have correlated labels. Finally, label propagation among the neighbors forms the label distributions, which leads to label smoothing in terms of label ambiguity. The label distributions are then used to train convolutional neural networks (CNNs). Experiments on the aerial image dataset (AID) and NWPU-RESISC45 (NR) datasets demonstrate that using the label distributions clearly improves classification performance by assisting feature learning and mitigating over-fitting, and our method achieves state-of-the-art performance.
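
To make the propagation step concrete, the following is a minimal sketch (not the authors' implementation) of how a sparse neighbor coefficient matrix could be turned into label distributions. The matrix `W`, the mixing weight `alpha`, and the single propagation step are illustrative assumptions.

```python
import numpy as np

def propagate_label_distributions(W, Y, alpha=0.4):
    """Blend each sample's logical label with its neighbors' labels.

    W     : (n, n) coefficient matrix from subspace learning; non-zero
            entries mark the sparse, label-correlated neighbors.
    Y     : (n, c) one-hot logical labels.
    alpha : weight placed on the propagated neighbor labels (assumed here).
    """
    # Row-normalise W so each sample's neighbor weights sum to 1.
    row_sums = W.sum(axis=1, keepdims=True)
    W_norm = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

    # One propagation step: ground-truth label plus neighbors' labels.
    D = (1.0 - alpha) * Y + alpha * (W_norm @ Y)

    # Renormalise so every row is a valid label distribution.
    return D / D.sum(axis=1, keepdims=True)

# Toy example: 3 samples, 2 classes; samples 0 and 1 are mutual neighbors,
# while sample 2 has no uncovered neighbors and keeps its one-hot label.
W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
Y = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])
print(propagate_label_distributions(W, Y))
```

The resulting distributions would then replace the one-hot targets when training the CNN, e.g. with a soft-target cross-entropy or KL-divergence loss.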


2021 ◽  
Vol 436 ◽  
pp. 12-21
Author(s):  
Xinyue Dong ◽  
Shilin Gu ◽  
Wenzhang Zhuge ◽  
Tingjin Luo ◽  
Chenping Hou

Author(s):  
Xiuyi Jia ◽  
Xiaoxia Shen ◽  
Weiwei Li ◽  
Yunan Lu ◽  
Jihua Zhu

Author(s):  
Yongbiao Gao ◽  
Yu Zhang ◽  
Xin Geng

Label distribution learning (LDL) is a novel machine learning paradigm that assigns each label a description degree for an instance. However, most training datasets contain only simple logical labels rather than label distributions, because label distributions are difficult to obtain directly. We propose to use prior knowledge to recover the label distributions; the process of recovering label distributions from logical labels is called label enhancement. In this paper, we formulate label enhancement as a dynamic decision process: the label distribution is adjusted by a series of actions taken by a reinforcement learning agent according to sequential state representations, and the target state is defined by the prior knowledge. Experimental results show that the proposed approach outperforms state-of-the-art methods in both age estimation and image emotion recognition.
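
As a concrete illustration of the dynamic decision view, here is a minimal sketch assuming a simple action space in which the agent moves a fixed amount of description degree between two labels and is rewarded by the reduction in KL divergence to a prior-knowledge target. `LabelEnhancementEnv`, `step_size`, and the greedy policy in the toy run are hypothetical simplifications, not the paper's actual components.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) with smoothing for zero entries."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

class LabelEnhancementEnv:
    """Toy environment: the state is the current label distribution."""

    def __init__(self, logical_label, target, step_size=0.05):
        self.d = logical_label.astype(float) / logical_label.sum()
        self.target = target          # target distribution from prior knowledge
        self.step_size = step_size    # amount of mass moved per action (assumed)

    def reward_of(self, src, dst):
        """Reward of moving mass from `src` to `dst`, without changing state."""
        probe = self.d.copy()
        moved = min(self.step_size, probe[src])
        probe[src] -= moved
        probe[dst] += moved
        return kl(self.target, self.d) - kl(self.target, probe)

    def step(self, src, dst):
        """Apply the action and return (next state, reward)."""
        r = self.reward_of(src, dst)
        moved = min(self.step_size, self.d[src])
        self.d[src] -= moved
        self.d[dst] += moved
        return self.d.copy(), r

# Toy run with a greedy stand-in for the learned policy: start from a logical
# label and keep taking the highest-reward action until no action helps.
target = np.array([0.6, 0.3, 0.1])
env = LabelEnhancementEnv(np.array([1.0, 0.0, 0.0]), target)
actions = [(s, t) for s in range(3) for t in range(3) if s != t]
for _ in range(20):
    src, dst = max(actions, key=lambda a: env.reward_of(*a))
    if env.reward_of(src, dst) <= 0:
        break
    env.step(src, dst)
print(np.round(env.d, 2))   # approaches the prior-knowledge target
```

In the proposed approach, a trained reinforcement learning agent would take the place of the greedy action selection used in this toy run.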


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 63961-63970
Author(s):  
Heng-Ru Zhang ◽  
Yu-Ting Huang ◽  
Yuan-Yuan Xu ◽  
Fan Min

2019 ◽  
Vol 11 (1) ◽  
pp. 111-121
Author(s):  
Xue-Qiang Zeng ◽  
Su-Fen Chen ◽  
Run Xiang ◽  
Guo-Zheng Li ◽  
Xue-Feng Fu
