Comments and Correction on “U-Processes and Preference Learning” (Neural Computation, Vol. 26, pp. 2896–2924, 2014)

2015 · Vol. 27(7) · pp. 1549–1553
Author(s): Wojciech Rejchel · Hong Li · Chuanbao Ren · Luoqing Li

This note corrects an error in the proof of Corollary 1 of Li et al. (2014). The original claim of the contraction principle in Appendix D of Li et al. no longer holds.

2009 · Vol. 41(5) · pp. 2399–2400
Author(s): J. Martínez-Moreno · A. Roldán · C. Roldán

2020
Author(s): Alberto Bemporad · Dario Piga

Abstract
This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function but can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm aims at reaching the global optimizer by iteratively proposing a new comparison to the decision maker, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse-distance-weighting function, to balance exploitation of the surrogate against exploration of the decision space; or maximize a function related to the probability that the new candidate will be preferred. Compared with active preference learning based on Bayesian optimization, our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
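As a rough illustration of the exploitation–exploration trade-off described in this abstract, the sketch below fits an RBF surrogate to sampled values of a toy 1-D latent function (standing in for the preference-derived fit, which the paper obtains via linear or quadratic programming) and picks the next sample by minimizing the surrogate minus an inverse-distance-weighting exploration term. The kernel, the exploration formula, and the weight `delta` are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

# Toy latent objective the decision maker implicitly compares against;
# in the actual method only pairwise preferences are observed.
f = lambda x: (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

X = np.array([0.0, 0.5, 1.0])   # decision vectors sampled so far
y = f(X)                        # stand-in for preference-consistent fit values

# RBF surrogate with an inverse-multiquadric kernel (illustrative choice)
eps = 2.0
phi = lambda r: 1.0 / np.sqrt(1.0 + (eps * r) ** 2)
Phi = phi(np.abs(X[:, None] - X[None, :]))
beta = np.linalg.solve(Phi, y)          # interpolation coefficients
surrogate = lambda x: phi(np.abs(x - X)) @ beta

def idw_z(x):
    """Inverse-distance-weighting exploration term: zero at samples,
    growing away from them (a common IDW form, assumed here)."""
    d2 = (x - X) ** 2
    if np.any(d2 == 0.0):
        return 0.0
    return (2.0 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))

delta = 1.0                              # exploration weight (assumed)
grid = np.linspace(0.0, 1.0, 201)
acq = np.array([surrogate(g) - delta * idw_z(g) for g in grid])
x_next = grid[np.argmin(acq)]            # next candidate to compare
```

In the paper's method the surrogate coefficients are instead constrained by the pairwise preferences themselves, and the exploration weight is managed across iterations; the sketch only shows how the two acquisition terms trade off.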


2021 · Vol. 2021(1)
Author(s): Adisorn Kittisopaporn · Pattrawut Chansangiam · Wicharn Lewkeeratiyutkul

Abstract
We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB + CXD = E$, where $A, B, C, D, E$ are conforming rectangular matrices. Our algorithm is based on gradient methods and the hierarchical identification principle. We convert the matrix iteration into a first-order linear difference vector equation with a matrix coefficient. The Banach contraction principle reveals that the sequence of approximated solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor, namely the one minimizing the spectral radius of the iteration matrix. In particular, we obtain iterative algorithms for the matrix equation $AXB = C$, the Sylvester equation, and the Kalman–Yakubovich equation. Numerical experiments illustrate the applicability, effectiveness, and efficiency of the proposed algorithm.
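To make the flavor of such gradient-based iterations concrete, here is a minimal steepest-descent sketch for $AXB + CXD = E$. It is a simplification under stated assumptions: a plain gradient step with a conservative fixed factor, not the paper's hierarchical-identification construction or its optimized (fastest) convergence factor; the test matrices are made well conditioned on purpose.

```python
import numpy as np

# Steepest descent on f(X) = ||A X B + C X D - E||_F^2:
#   grad f = 2 (A^T R B^T + C^T R D^T),  R = A X B + C X D - E.
# A conservative factor mu <= 1/(||A||_2||B||_2 + ||C||_2||D||_2)^2
# keeps the iteration contractive when the solution is unique.

rng = np.random.default_rng(0)
n = 3
# well-conditioned test problem with a known solution (assumed setup)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
C, D = 0.5 * np.eye(n), np.eye(n)
X_true = rng.standard_normal((n, n))
E = A @ X_true @ B + C @ X_true @ D

mu = 1.0 / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
            + np.linalg.norm(C, 2) * np.linalg.norm(D, 2)) ** 2

X = np.zeros((n, n))                     # arbitrary initial matrix
for _ in range(2000):
    R = A @ X @ B + C @ X @ D - E        # current residual
    X -= mu * (A.T @ R @ B.T + C.T @ R @ D.T)

residual = np.linalg.norm(A @ X @ B + C @ X @ D - E)
```

The paper's contraction-principle analysis ties the convergence rate to the spectral radius of the associated iteration matrix; the fixed `mu` above is merely a safe choice inside the convergence interval, not the fastest factor derived there.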


2021 · Vol. 40(5) · pp. 9977–9985
Author(s): Naeem Saleem · Hüseyin Işık · Salman Furqan · Choonkil Park

In this paper, we introduce the concept of a fuzzy double controlled metric space, which can be regarded as a generalization of fuzzy b-metric spaces, extended fuzzy b-metric spaces, and controlled fuzzy metric spaces. We use two non-comparable functions α and β in the triangular inequality as $M_q(x, z, \frac{t}{\alpha(x,y)} + \frac{s}{\beta(y,z)}) \geq M_q(x, y, t) \ast M_q(y, z, s)$. We prove the Banach contraction principle in fuzzy double controlled metric spaces, generalizing the Banach contraction principle in the aforementioned spaces. We give some examples to support our main results. An application to the existence and uniqueness of the solution of an integral equation is also presented in this work.
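For orientation, the generalization claim can be made concrete by specializing the two control functions; this is a sketch under the interpretive assumption (as in controlled fuzzy metric spaces) that the controls divide the time parameters, and the precise definitions are in the paper.

```latex
% Triangular inequality in a fuzzy double controlled metric space:
\[
  M_q\!\left(x, z, \frac{t}{\alpha(x,y)} + \frac{s}{\beta(y,z)}\right)
  \;\ge\; M_q(x, y, t) \ast M_q(y, z, s)
\]
% Specializations (sketch):
%  - \alpha = \beta \equiv b for a constant b \ge 1: fuzzy b-metric space
%  - \alpha = \beta a single control function: controlled fuzzy metric space
```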

