Some relations between variational-like inequality problems and vectorial optimization problems in Banach spaces

2008 ◽  
Vol 55 (8) ◽  
pp. 1808-1814 ◽  
Author(s):  
Lucelina Batista dos Santos ◽  
Gabriel Ruiz-Garzón ◽  
Marko A. Rojas-Medar ◽  
Antonio Rufián-Lizana
2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Yuanheng Wang ◽  
Xiuping Wu ◽  
Chanjuan Pan

Abstract In this paper, we propose an iterative algorithm for finding a split common fixed point of an asymptotically nonexpansive mapping in the framework of two real Banach spaces. Under suitable conditions imposed on the parameter sequences, we prove some strong convergence theorems, which also solve certain variational inequalities closely related to optimization problems. The results generalize and improve the main results of other authors.
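As a rough illustration of the split common fixed point setting, the sketch below runs a Halpern-type anchored iteration in a finite-dimensional (Hilbert) setting, with two nonexpansive projections standing in for the asymptotically nonexpansive mappings of the paper. The mappings T and S, the linear operator A, the step size gamma, and the anchor u are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Illustrative sketch only: a Halpern-type iteration for a split common fixed
# point problem, find x with x = T(x) and A x = S(A x).  The paper treats
# asymptotically nonexpansive mappings in Banach spaces; here T and S are plain
# nonexpansive maps (ball projections) and all parameters are demo assumptions.

def split_common_fixed_point(T, S, A, x0, u, gamma=0.1, iters=500):
    """Approximately find x with x = T(x) and A @ x = S(A @ x)."""
    x = x0.copy()
    for n in range(1, iters + 1):
        alpha = 1.0 / (n + 1)                       # Halpern anchoring parameter
        y = x - gamma * A.T @ (A @ x - S(A @ x))    # correct the image-space residual
        x = alpha * u + (1 - alpha) * T(y)          # anchored fixed-point step
    return x

# Toy data: T, S projections onto balls (nonexpansive), A a random linear map.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4)) / 3.0
T = lambda z: z / max(1.0, np.linalg.norm(z))         # projection onto the unit ball
S = lambda w: w / max(1.0, 2.0 * np.linalg.norm(w))   # projection onto the ball of radius 0.5
x = split_common_fixed_point(T, S, A, x0=rng.standard_normal(4), u=np.zeros(4))
print(np.linalg.norm(x - T(x)), np.linalg.norm(A @ x - S(A @ x)))   # residuals near zero
```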


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Qinghai He ◽  
Weili Kong

In general Banach spaces, we consider a vector optimization problem (SVOP) in which the objective is a set-valued mapping whose graph is the union of finitely many polyhedra or the union of finitely many generalized polyhedra. Dropping the compactness assumption, we establish some results on the structure of the weak Pareto solution set, Pareto solution set, weak Pareto optimal value set, and Pareto optimal value set of (SVOP), and on the connectedness of the Pareto solution set and Pareto optimal value set of (SVOP). In particular, we improve and generalize the classical results of Arrow, Barankin, and Blackwell in Euclidean spaces and the results of Zheng and Yang in general Banach spaces.
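To make the notions of Pareto optimal and weak Pareto optimal values concrete, the following finite-dimensional sketch filters a sample of objective vectors by dominance (componentwise minimization). It is only a toy illustration and does not touch the set-valued, polyhedral-graph setting of the paper.

```python
import numpy as np

# Illustrative sketch only: extract the Pareto optimal (or weak Pareto optimal)
# values from a finite sample of objective vectors under componentwise minimization.

def pareto_values(V, weak=False):
    """Return the rows of V that are not dominated by any other row."""
    V = np.asarray(V, dtype=float)
    keep = []
    for i, v in enumerate(V):
        others = np.delete(V, i, axis=0)
        if weak:
            # weakly dominated: some other point is strictly better in every component
            dominated = np.any(np.all(others < v, axis=1))
        else:
            # dominated: some other point is no worse everywhere and strictly better somewhere
            dominated = np.any(np.all(others <= v, axis=1) & np.any(others < v, axis=1))
        if not dominated:
            keep.append(v)
    return np.array(keep)

sample = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(pareto_values(sample))              # Pareto optimal values: the first three rows
print(pareto_values(sample, weak=True))   # weak Pareto optimal values
```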


Author(s):  
Michael Unser

Abstract Regularization addresses the ill-posedness of the training problem in machine learning or the reconstruction of a signal from a limited number of measurements. The method is applicable whenever the problem is formulated as an optimization task. The standard strategy consists in augmenting the original cost functional by an energy that penalizes solutions with undesirable behavior. The effect of regularization is very well understood when the penalty involves a Hilbertian norm. Another popular configuration is the use of an $\ell_1$-norm (or some variant thereof) that favors sparse solutions. In this paper, we propose a higher-level formulation of regularization within the context of Banach spaces. We present a general representer theorem that characterizes the solutions of a remarkably broad class of optimization problems. We then use our theorem to retrieve a number of known results in the literature, such as the celebrated representer theorem of machine learning for RKHS, Tikhonov regularization, representer theorems for sparsity-promoting functionals, and the recovery of spikes, as well as a few new ones.
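The two classical regimes mentioned in the abstract, a Hilbertian (Tikhonov) penalty and a sparsity-promoting $\ell_1$ penalty, can be contrasted with a small numerical sketch. The code below is an assumption-laden finite-dimensional illustration (ridge in closed form, $\ell_1$ via proximal gradient), not the Banach-space framework of the paper; problem sizes, step size, and regularization weight are arbitrary demo choices.

```python
import numpy as np

# Illustrative sketch only: Tikhonov (ridge) regularization versus l1
# regularization for a least-squares data term.  The l1 problem is solved
# with ISTA (proximal gradient), whose prox is soft-thresholding.

def ridge(A, y, lam):
    """argmin_x 0.5*||Ax - y||^2 + 0.5*lam*||x||^2  (closed form)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def lasso_ista(A, y, lam, iters=2000):
    """argmin_x 0.5*||Ax - y||^2 + lam*||x||_1 via proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                       # gradient of the smooth part
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding prox
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                        # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(30)
print(np.round(ridge(A, y, 1.0), 2))                # dense estimate
print(np.round(lasso_ista(A, y, 1.0), 2))           # sparse estimate
```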


2021 ◽  
Vol 5 ◽  
pp. 82-92
Author(s):  
Sergei Denisov ◽  
Vladimir Semenov

Many problems of operations research and mathematical physics can be formulated as variational inequalities. The development and analysis of algorithms for solving variational inequalities is an actively developing area of applied nonlinear analysis. Note that nonsmooth optimization problems can often be solved effectively by reformulating them as saddle point problems and applying algorithms for variational inequalities. Recently, there has been progress in the study of algorithms for problems in Banach spaces, driven by the extensive use of results and constructions from the geometry of Banach spaces. A new algorithm for solving variational inequalities in a Banach space is proposed and studied. The Alber generalized projection is used instead of the metric projection onto the feasible set. An attractive feature of the algorithm is that it requires only one computation of the projection onto the feasible set per iteration. For variational inequalities with monotone Lipschitz operators acting in a 2-uniformly convex and uniformly smooth Banach space, a theorem on the weak convergence of the method is proved.
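A minimal sketch of the "one projection per iteration" idea, under strong simplifying assumptions: Euclidean space, a box feasible set with its metric projection in place of the Alber generalized projection, and Tseng's forward-backward-forward scheme as a stand-in for the authors' algorithm.

```python
import numpy as np

# Illustrative sketch only: a single-projection method for the monotone
# variational inequality  find x* in C with <F(x*), x - x*> >= 0 for all x in C.
# Only the "one projection per iteration" feature of the abstract is reproduced;
# the Banach-space construction of the paper is not.

def tseng_vi(F, proj_C, x0, L, iters=1000):
    lam = 0.5 / L                        # step size below 1/L for a Lipschitz operator
    x = x0.copy()
    for _ in range(iters):
        Fx = F(x)
        y = proj_C(x - lam * Fx)         # the only projection in the iteration
        x = y - lam * (F(y) - Fx)        # correction step, no extra projection
    return x

# Toy monotone operator F(x) = Mx + q whose symmetric part is positive definite.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
proj_box = lambda x: np.clip(x, 0.0, 1.0)     # feasible set C = [0, 1]^2
L = np.linalg.norm(M, 2)                      # Lipschitz constant of F
print(tseng_vi(F, proj_box, np.zeros(2), L))  # converges to approx [0.2, 0.6]
```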


1997 ◽  
Vol 2 (1-2) ◽  
pp. 97-120 ◽  
Author(s):  
Y. I. Alber ◽  
R. S. Burachik ◽  
A. N. Iusem

In this paper we show the weak convergence and stability of the proximal point method when applied to constrained convex optimization problems in uniformly convex and uniformly smooth Banach spaces. In addition, we establish a nonasymptotic estimate of the convergence rate of the sequence of functional values for the unconstrained case. This estimate depends on a geometric characteristic of the dual Banach space, namely its modulus of convexity. We apply a new technique which includes Banach space geometry, estimates of duality mappings, nonstandard Lyapunov functionals, and generalized projection operators in Banach spaces.
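For orientation, here is a minimal Euclidean sketch of the proximal point iteration itself, with the inner proximal subproblem solved approximately by projected gradient steps. The Banach-space ingredients emphasized in the abstract (duality mappings, nonstandard Lyapunov functionals, generalized projections) are deliberately absent, and f, C, and the parameters are demo assumptions.

```python
import numpy as np

# Illustrative sketch only: the classical proximal point iteration
#     x_{k+1} = argmin_x  f(x) + (1/(2*lam)) * ||x - x_k||^2   over C,
# with the inner argmin approximated by projected gradient steps.

def prox_point(grad_f, proj_C, x0, lam=1.0, outer=50, inner=200, step=0.05):
    x = x0.copy()
    for _ in range(outer):
        z = x.copy()
        for _ in range(inner):                   # approximately evaluate the prox
            g = grad_f(z) + (z - x) / lam        # gradient of the regularized subproblem
            z = proj_C(z - step * g)
        x = z                                    # proximal step
    return x

# f(x) = 0.5*||x - b||^2 minimized over the box C = [0, 1]^2.
b = np.array([1.5, -0.3])
grad_f = lambda x: x - b
proj_C = lambda x: np.clip(x, 0.0, 1.0)
print(prox_point(grad_f, proj_C, x0=np.array([0.5, 0.5])))   # -> approx [1.0, 0.0]
```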

