Theoretical Analysis of Label Distribution Learning

Author(s):  
Jing Wang ◽  
Xin Geng

As a novel learning paradigm, label distribution learning (LDL) explicitly models label ambiguity through the notion of label description degree. Although much work has been done on real-world applications, theoretical analysis of LDL remains largely unexplored. In this paper, we rethink LDL from a theoretical perspective, toward analyzing its learnability. First, risk bounds for three representative LDL algorithms (AA-kNN, AA-BP and SA-ME) are provided. For AA-kNN, Lipschitzness of the label distribution function is assumed to bound the risk; for AA-BP and SA-ME, Rademacher complexity is utilized to give data-dependent risk bounds. Second, a generalized plug-in decision theorem is proposed to clarify the relation between LDL and classification, showing that approximating the conditional probability distribution function under absolute loss guarantees convergence to the optimal classifier; data-dependent error probability bounds are also presented for the corresponding LDL algorithms when used for classification. To the best of our knowledge, this is the first theoretical study of LDL.
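Of the three algorithms named above, AA-kNN is the simplest to sketch: it predicts the label distribution of a query as the average of the label distributions of its k nearest training instances. A minimal illustration (names and the toy data are ours, not from the paper):

```python
import numpy as np

def aa_knn_predict(X_train, D_train, x_query, k=5):
    """AA-kNN for label distribution learning: average the label
    distributions of the k nearest training instances (Euclidean)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    d_hat = D_train[nearest].mean(axis=0)
    return d_hat / d_hat.sum()  # renormalize to a valid distribution

# Toy example: 4 instances, 3 labels.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 0.9]])
D = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1],
              [0.1, 0.2, 0.7], [0.2, 0.2, 0.6]])
d = aa_knn_predict(X, D, np.array([0.05, 0.0]), k=2)  # → [0.65, 0.25, 0.10]
```

The Lipschitz assumption in the paper's bound is natural here: if nearby instances have nearby label distributions, averaging neighbors' distributions controls the risk.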

2020 ◽  
Vol 34 (04) ◽  
pp. 5932-5939
Author(s):  
Haoyu Tang ◽  
Jihua Zhu ◽  
Qinghai Zheng ◽  
Jun Wang ◽  
Shanmin Pang ◽  
...  

Compared with single-label and multi-label annotation, a label distribution describes an instance by multiple labels with different intensities and accommodates more general conditions. Nevertheless, label distribution learning is unavailable in many real-world applications because most existing datasets merely provide logical labels. To handle this problem, a novel label enhancement method, Label Enhancement with Sample Correlations via low-rank representation, is proposed in this paper. Unlike most existing methods, a low-rank representation is employed to capture the global relationships among samples and to predict implicit label correlations for label enhancement. Extensive experiments on 14 datasets demonstrate that the algorithm achieves state-of-the-art results compared to previous label enhancement baselines.
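The core idea, sharing logical-label information across globally correlated samples via a low-rank representation, can be sketched as follows. This uses the known closed-form solution of the noiseless low-rank representation problem (the shape-interaction matrix), not the paper's exact optimization; all names and the toy data are illustrative:

```python
import numpy as np

def enhance_labels(X, L, rank=2):
    """Schematic label enhancement via low-rank representation.
    The noiseless LRR problem min ||Z||_* s.t. X^T = X^T Z has the
    closed-form solution Z = U_r U_r^T (shape-interaction matrix),
    with U_r the top left-singular vectors of X (rows = samples).
    Logical labels L are propagated across correlated samples and
    each row is normalized into a label distribution."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :rank]
    Z = Ur @ Ur.T                      # sample-correlation (affinity) matrix
    D = np.abs(Z) @ L                  # propagate logical labels
    return D / D.sum(axis=1, keepdims=True)

# Two clusters of samples with one-hot logical labels.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
L = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
D = enhance_labels(X, L)
```

Samples in the same cluster end up with similar label distributions, dominated by their shared logical label but with some mass on the other label, which is the enhancement effect.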


2021 ◽  
Vol 436 ◽  
pp. 12-21
Author(s):  
Xinyue Dong ◽  
Shilin Gu ◽  
Wenzhang Zhuge ◽  
Tingjin Luo ◽  
Chenping Hou

2003 ◽  
Vol 2003 (55) ◽  
pp. 3479-3501 ◽  
Author(s):  
C. Atindogbe ◽  
J.-P. Ezin ◽  
Joël Tossa

Let (M, g) be a smooth manifold M endowed with a metric g. A large class of differential operators in differential geometry is intrinsically defined by means of the dual metric g* on the dual bundle TM* of 1-forms on M. If the metric g is (semi-)Riemannian, the metric g* is just the inverse of g. This paper studies the definition of the above-mentioned geometric differential operators in the case of manifolds endowed with degenerate metrics, for which g* is not defined. We apply the theoretical results to a Laplacian-type operator on a lightlike hypersurface to deduce a Takahashi-like theorem (Takahashi (1966)) for lightlike hypersurfaces in the Lorentzian space ℝ₁ⁿ⁺².
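To make concrete which operators are at stake, recall the standard non-degenerate definitions; both rely on g* (the inverse metric) and det g, which are exactly the ingredients that fail when g is degenerate. These are textbook formulas, not claims from the paper:

```latex
% Dual metric: the inverse of g in local coordinates
g^{*\,ij}\, g_{jk} = \delta^i_k
% Gradient and Laplace--Beltrami operator built from g^* and \det g:
(\operatorname{grad} f)^i = g^{ij}\,\partial_j f, \qquad
\Delta f = \frac{1}{\sqrt{\lvert \det g \rvert}}\,
  \partial_i\!\left(\sqrt{\lvert \det g \rvert}\; g^{ij}\,\partial_j f\right)
```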


Author(s):  
Xiuyi Jia ◽  
Xiaoxia Shen ◽  
Weiwei Li ◽  
Yunan Lu ◽  
Jihua Zhu

2011 ◽  
Vol 11 (4-5) ◽  
pp. 611-627
Author(s):  
ANTÓNIO PORTO

Prolog's very useful expressive power is not captured by traditional logic programming semantics, due mainly to the cut and to goal and clause order. Several alternative semantics have been put forward, exposing operational details of the computation state. We propose instead to redesign Prolog around structured alternatives to the cut and clauses, keeping the expressive power and computation model but with a compositional denotational semantics over much simpler states: just variable bindings. This considerably eases reasoning about programs, by programmers and by tools such as a partial evaluator, with safe unfolding of calls through predicate definitions. An if-then-else across clauses replaces most uses of the cut, but the cut's full power is achieved by an until construct. Disjunction, conjunction and until, along with unification, are the primitive goal types, with a compositional semantics yielding sequences of variable-binding solutions. This extends to programs via the usual technique of a least fixpoint construction. A simple interpreter for Prolog in the alternative language, and a definition of until in Prolog, establish the identical expressive power of the two languages. Many useful control constructs are derivable from the primitives, and the semantic framework illuminates the discussion of alternative ones. The formalisation rests on a term language with variable abstraction as in the λ-calculus. A clause is an abstraction on the call arguments, a continuation, and the local variables. It can be inclusive or exclusive, expressing a local case bound to a continuation by either a disjunction or an if-then-else. Clauses are open definitions, composed (and closed) with simple functional application (β-reduction). This paves the way for a simple account of flexible module composition mechanisms. Cube, a concrete language embodying the exposed principles, has been implemented on top of a Prolog engine and successfully used to build large real-world applications.
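The compositional solution-sequence semantics described above can be illustrated in a few lines of Python, modelling a goal as a function from a binding environment to an iterator of solution environments. The encoding and all names are ours, and the until construct is omitted since its precise semantics is defined in the paper:

```python
from itertools import chain

def bind(var, val):
    """Primitive unification goal: bind var to val, or check equality."""
    def goal(env):
        if var in env:
            if env[var] == val:
                yield env
        else:
            yield {**env, var: val}
    return goal

def disj(g1, g2):
    """Disjunction: concatenate the two solution sequences."""
    return lambda env: chain(g1(env), g2(env))

def conj(g1, g2):
    """Conjunction: feed each solution of g1 into g2."""
    return lambda env: (e2 for e1 in g1(env) for e2 in g2(e1))

def if_then_else(c, t, e):
    """Commit to the first solution of c and run t under it,
    otherwise run e; this replaces most uses of the cut."""
    def goal(env):
        first = next(iter(c(env)), None)
        return t(first) if first is not None else e(env)
    return goal

xs = [s['x'] for s in disj(bind('x', 1), bind('x', 2))({})]  # → [1, 2]
ys = list(if_then_else(disj(bind('x', 1), bind('x', 2)),
                       bind('y', 'a'), bind('y', 'b'))({}))
```

Note how `if_then_else` discards the second solution of its condition, committing like a soft cut, while plain `disj` keeps both: this is the compositionality the abstract argues traditional cut-based Prolog lacks.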


2020 ◽  
Vol 49 (1) ◽  
pp. 1-23
Author(s):  
Shunpu Zhang ◽  
Zhong Li ◽  
Zhiying Zhang

Estimation of distribution functions has many real-world applications. We study kernel estimation of a distribution function when the density has compact support. We show that, for densities taking the value zero at the endpoints of the support, the kernel distribution estimator needs no boundary correction; otherwise, boundary correction is necessary. In this paper, we propose a boundary distribution kernel estimator that is free of the boundary problem and provides non-negative, non-decreasing distribution estimates between zero and one. Extensive simulation results show that the boundary distribution kernel estimator provides better distribution estimates than existing boundary correction methods. For practical application of the proposed methods, a data-dependent method for choosing the bandwidth is also proposed.
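For context, the standard kernel distribution estimator that the boundary discussion starts from averages integrated kernels over the sample, F̂(x) = (1/n) Σᵢ Φ((x − Xᵢ)/h). A minimal sketch with a Gaussian integrated kernel (the proposed boundary-corrected estimator itself is not reproduced here, and the sample data is illustrative):

```python
import math

def kernel_cdf(data, x, h):
    """Standard kernel distribution estimator with a Gaussian
    integrated kernel Phi((x - X_i)/h); by construction the result
    is in [0, 1] and non-decreasing in x."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(Phi((x - xi) / h) for xi in data) / len(data)

data = [0.1, 0.4, 0.5, 0.7, 0.9]
est = [kernel_cdf(data, x, h=0.2) for x in (0.0, 0.5, 1.0)]
```

The boundary problem is visible here: for data supported on [0, 1], the Gaussian kernel leaks mass below 0 and above 1, so F̂(0) > 0 and F̂(1) < 1 even for large samples, which is what motivates boundary correction.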


Author(s):  
Yongbiao Gao ◽  
Yu Zhang ◽  
Xin Geng

Label distribution learning (LDL) is a novel machine learning paradigm that assigns a description degree of each label to an instance. However, most training datasets contain only simple logical labels rather than label distributions, because label distributions are difficult to obtain directly. We propose to use prior knowledge to recover the label distributions. The process of recovering label distributions from logical labels is called label enhancement. In this paper, we formulate label enhancement as a dynamic decision process: the label distribution is adjusted by a series of actions conducted by a reinforcement learning agent according to sequential state representations, with the target state defined by the prior knowledge. Experimental results show that the proposed approach outperforms state-of-the-art methods in both age estimation and image emotion recognition.
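A toy rendering of this dynamic-decision view, with a greedy policy of mass-moving actions standing in for the learned RL agent (everything here is illustrative: the state is the current distribution, actions move probability mass between labels, and the reward is increased closeness to a prior-knowledge target):

```python
import numpy as np

def enhance(logical, target_prior, steps=200, eps=0.01):
    """Toy dynamic-decision label enhancement: start from the
    normalized logical labels and greedily apply the mass-moving
    action with the highest reward (reduction in L1 distance to
    the prior-derived target state), stopping when no action helps."""
    d = logical / logical.sum()
    n = len(d)
    for _ in range(steps):
        best, best_r = None, 0.0
        for i in range(n):            # move eps of mass from label i ...
            for j in range(n):        # ... to label j
                if i == j or d[i] < eps:
                    continue
                cand = d.copy()
                cand[i] -= eps
                cand[j] += eps
                r = (np.abs(d - target_prior).sum()
                     - np.abs(cand - target_prior).sum())
                if r > best_r:
                    best, best_r = cand, r
        if best is None:              # no action improves the state
            break
        d = best
    return d

logical = np.array([1.0, 1.0, 0.0])   # logical labels of one instance
prior = np.array([0.6, 0.3, 0.1])     # target state from prior knowledge
d = enhance(logical, prior)
```

The greedy loop converges to the target state on this toy input; the point of the paper's RL formulation is to learn such a policy from state representations rather than hand-coding it.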

