Holistic Preference Learning with the Choquet Integral

Author(s): Benedicte Goujon, Christophe Labreuche

2012 · Vol 20 (6) · pp. 1102-1113
Author(s): A. F. Tehrani, Weiwei Cheng, E. Hüllermeier

Author(s): Kendall Taylor, Huong Ha, Minyi Li, Jeffrey Chan, Xiaodong Li

2019 · Vol 69 (4) · pp. 801-814
Author(s): Sorin G. Gal

Abstract: In this paper we introduce a new concept of Choquet-Stieltjes integral of f with respect to g on intervals, defined as a limit of Choquet integrals with respect to a capacity μ. For g(t) = t it reduces to the usual Choquet integral, and, unlike the previously known concept of Choquet-Stieltjes integral, for μ the Lebesgue measure it reduces to the usual Riemann-Stieltjes integral. In the case of distorted Lebesgue measures, several properties of this new integral are obtained. As an application, the concept of Choquet line integral of the second kind is introduced and some of its properties are obtained.
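For orientation, the following recalls the standard definitions the abstract builds on; the notation is illustrative only, and the paper's own limit construction of the Choquet-Stieltjes integral is not reproduced here.

```latex
% Standard Choquet integral of a nonnegative measurable f on A with
% respect to a capacity (monotone set function) \mu:
\[
  (C)\!\int_A f \, d\mu \;=\; \int_0^{\infty} \mu\bigl(\{x \in A : f(x) \ge t\}\bigr)\, dt .
\]
% A distorted Lebesgue measure has the form below, for a nondecreasing
% distortion h with h(0) = 0 and \lambda the Lebesgue measure; for
% h(t) = t the Choquet integral reduces to the Lebesgue integral.
\[
  \mu_h(A) \;=\; h\bigl(\lambda(A)\bigr), \qquad h(0) = 0, \quad h \ \text{nondecreasing}.
\]
```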


2020
Author(s): Alberto Bemporad, Dario Piga

Abstract: This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function, but can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse-distance-weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
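To make the loop the abstract describes concrete, here is a minimal sketch under stated assumptions: it is not the authors' GLIS/GLISp code linked above, all names and constants are illustrative, the surrogate fit uses a smooth squared-hinge penalty in place of the paper's linear or quadratic program, and the exploration term is a simplified inverse-distance-weighting proxy rather than the paper's exact function.

```python
# Sketch of preference-based surrogate optimization in the spirit of the
# algorithm described above. Illustrative only; not the GLIS/GLISp code.
import numpy as np
from scipy.optimize import minimize

def rbf_matrix(X, centers, eps=1.0):
    """Inverse-quadratic RBF features: phi_j(x) = 1 / (1 + (eps*||x - c_j||)^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return 1.0 / (1.0 + eps ** 2 * d2)

def fit_surrogate(X, prefs, margin=1e-2, reg=1e-3):
    """Fit RBF weights beta so that preferred samples get lower surrogate values.

    prefs: list of (i, j) meaning sample i was preferred over sample j.
    Violations of f(x_i) <= f(x_j) - margin incur a squared-hinge penalty
    (the paper instead solves a linear or quadratic program).
    """
    Phi = rbf_matrix(X, X)
    def loss(beta):
        f = Phi @ beta
        viol = np.array([max(0.0, f[i] - f[j] + margin) for i, j in prefs])
        return (viol ** 2).sum() + reg * (beta ** 2).sum()
    return minimize(loss, np.zeros(len(X)), method="BFGS").x

def idw_exploration(x, X):
    """Inverse-distance-weighting proxy: near 0 at sampled points, near 1 far away."""
    d2 = ((X - x) ** 2).sum(-1)
    if np.any(d2 == 0):
        return 0.0
    return (2.0 / np.pi) * np.arctan(1.0 / (1.0 / d2).sum())

def propose(X, beta, bounds, delta=1.0, n_cand=2000, seed=None):
    """Pick the candidate minimizing surrogate minus a weighted exploration term."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, len(bounds)))
    f = rbf_matrix(cand, X) @ beta
    z = np.array([idw_exploration(c, X) for c in cand])
    return cand[np.argmin(f - delta * z)]
```

In use, one would call `propose`, ask the decision maker to compare the returned point with the current best candidate, append the outcome to `prefs`, refit with `fit_surrogate`, and repeat; `delta` trades off exploitation of the surrogate against exploration of the decision space.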


Author(s): Siva K. Kakula, Anthony J. Pinar, Timothy C. Havens, Derek T. Anderson
