Bregman Distance
Recently Published Documents


TOTAL DOCUMENTS

32
(FIVE YEARS 9)

H-INDEX

6
(FIVE YEARS 1)

2021 ◽  
Vol 14 (2) ◽  
pp. 814-843
Author(s):  
Martin Benning ◽  
Marta M. Betcke ◽  
Matthias J. Ehrhardt ◽  
Carola-Bibiane Schönlieb

2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Jin-Zan Liu ◽  
Xin-Wei Liu

We consider a convex composite minimization problem whose objective is the sum of a relatively strongly convex function and a closed proper convex function. A dual Bregman proximal gradient method is proposed for solving this problem, and the convergence rate of the primal sequence is shown to be $O(1/k)$. Moreover, based on an acceleration scheme, we prove that the convergence rate of the primal sequence is $O(1/k^{\gamma})$, where $\gamma \in [1,2]$ is determined by the triangle scaling property of the Bregman distance.
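To make the rate statement concrete, here is a minimal sketch of a generic Bregman proximal gradient step (the primal analogue of the setting discussed, not the authors' dual algorithm), under illustrative assumptions: the Shannon entropy kernel $h(x) = \sum_i x_i \log x_i$ and the indicator of the probability simplex as the nonsmooth part, for which the Bregman proximal step has a closed-form multiplicative update. The function names and problem data are hypothetical.

```python
import numpy as np

def bregman_prox_grad(grad_f, x0, step, iters=200):
    """Generic Bregman proximal gradient (mirror-descent style) sketch.

    Assumes the entropy kernel h(x) = sum_i x_i*log(x_i) and the indicator
    of the probability simplex as the nonsmooth term, for which the step
        x_{k+1} = argmin_x <grad f(x_k), x> + g(x) + (1/step) * D_h(x, x_k)
    has the closed form x_{k+1} proportional to x_k * exp(-step * grad f(x_k)).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x * np.exp(-step * grad_f(x))  # multiplicative (entropic) update
        x = y / y.sum()                    # renormalize onto the simplex
    return x

# Illustrative use: minimize f(x) = 0.5*||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
x_star = bregman_prox_grad(grad_f, np.full(5, 0.2), step=0.01)
```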


2021 ◽  
Vol 31 (1) ◽  
pp. 404-424
Author(s):  
Regina S. Burachik ◽  
Minh N. Dao ◽  
Scott B. Lindstrom
Keyword(s):  

Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2007
Author(s):  
Lateef Olakunle Jolaoso ◽  
Maggie Aphane ◽  
Safeer Hussain Khan

Studying Bregman distance iterative methods for solving optimization problems has become an important and very interesting topic because of the numerous applications of Bregman distance techniques. These applications depend on the type of convex function associated with the Bregman distance. In this paper, two different extragradient methods are proposed for studying pseudomonotone variational inequality problems using the Bregman distance in real Hilbert spaces. The first algorithm uses a fixed stepsize that depends on a prior estimate of the Lipschitz constant of the cost operator. The second algorithm uses a self-adaptive stepsize that does not require a prior estimate of the Lipschitz constant. Convergence results are proved for approximating solutions of the pseudomonotone variational inequality problem under standard assumptions. Moreover, numerical experiments are given to illustrate the performance of the proposed algorithms with different convex functions, such as the Shannon entropy and the Burg entropy. In addition, an application of the results to a signal processing problem is presented.
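Below is a minimal sketch of the second (self-adaptive) variant under simplifying assumptions: the Euclidean kernel is used, so the Bregman projection reduces to an ordinary projection onto a feasible set C, and the stepsize is shrunk from the locally observed ratio μ‖x_k − y_k‖ / ‖F(x_k) − F(y_k)‖. The operator F, the box constraint, and the parameter μ are illustrative choices, not the paper's exact setting.

```python
import numpy as np

def extragradient_adaptive(F, proj_C, x0, lam0=1.0, mu=0.5, iters=200):
    """Extragradient sketch with a self-adaptive stepsize (Euclidean kernel).

    y_k     = P_C(x_k - lam_k * F(x_k))   # extrapolation step
    x_{k+1} = P_C(x_k - lam_k * F(y_k))   # correction step
    The stepsize lam is shrunk from the observed local Lipschitz ratio, so
    no prior Lipschitz constant of F is needed.
    """
    x, lam = np.asarray(x0, dtype=float), lam0
    for _ in range(iters):
        Fx = F(x)
        y = proj_C(x - lam * Fx)
        Fy = F(y)
        x_new = proj_C(x - lam * Fy)
        denom = np.linalg.norm(Fx - Fy)
        if denom > 0:
            lam = min(lam, mu * np.linalg.norm(x - y) / denom)
        x = x_new
    return x

# Illustrative use: affine monotone operator F(x) = M x + q over the box [0, 1]^n.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
M, q = B @ B.T, rng.standard_normal(5)        # positive semidefinite M
F = lambda x: M @ x + q
proj_box = lambda x: np.clip(x, 0.0, 1.0)
sol = extragradient_adaptive(F, proj_box, np.zeros(5))
```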


2020 ◽  
Vol 185 (2) ◽  
pp. 327-342
Author(s):  
Mohamed Soueycatt ◽  
Yara Mohammad ◽  
Yamar Hamwi

Author(s):  
Hui Zhang ◽  
Yu-Hong Dai ◽  
Lei Guo ◽  
Wei Peng

We introduce a unified algorithmic framework, called the proximal-like incremental aggregated gradient (PLIAG) method, for minimizing the sum of a convex function consisting of additive relatively smooth convex components and a proper lower semicontinuous convex regularization function over an abstract feasible set whose geometry can be captured by the domain of a Legendre function. The PLIAG method includes many existing algorithms in the literature as special cases, such as the proximal gradient method, the Bregman proximal gradient method (also called the NoLips algorithm), the incremental aggregated gradient method, the incremental aggregated proximal method, and the proximal incremental aggregated gradient method; it also yields several novel iteration schemes. First, we show that the PLIAG method is globally sublinearly convergent without requiring a growth condition, which extends the sublinear convergence result for the proximal gradient algorithm to incremental aggregated-type first-order methods. Then, by embedding a so-called Bregman distance growth condition into a descent-type lemma to construct a special Lyapunov function, we show that the PLIAG method is globally linearly convergent in terms of both function values and Bregman distances to the optimal solution set, provided that the step size does not exceed some positive constant. The convergence results in this paper are all established beyond the standard assumptions in the literature (i.e., without requiring strong convexity or Lipschitz gradient continuity of the smooth part of the objective). When specialized to many existing algorithms, our results recover or supplement their convergence results under strictly weaker conditions.
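For orientation, the following is a minimal Euclidean-kernel sketch of the incremental aggregated proximal pattern that PLIAG generalizes: a table of per-component gradients is kept, one entry is refreshed per inner step, and a proximal step is taken on the aggregate. The Bregman kernel, gradient delays, and relative-smoothness assumptions of the paper are not modelled here, and the helper names and the least-squares split are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1, used here as the regularizer g."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def iag_prox(grads, x0, step, reg, epochs=50):
    """Incremental aggregated proximal gradient sketch (Euclidean kernel).

    grads: list of per-component gradient oracles grad f_i.
    Keeps the most recently evaluated gradient of each component, refreshes
    one entry per inner step, and applies a proximal step on the aggregate.
    """
    x = np.asarray(x0, dtype=float)
    table = [g(x) for g in grads]          # initial gradient memory
    agg = sum(table)                       # aggregated gradient
    for _ in range(epochs):
        for i in range(len(grads)):        # cyclic component selection
            new_gi = grads[i](x)
            agg += new_gi - table[i]       # refresh only component i
            table[i] = new_gi
            x = soft_threshold(x - step * agg, step * reg)
    return x

# Illustrative use: least squares split row-wise, with an l1 regularizer.
rng = np.random.default_rng(2)
A, b = rng.standard_normal((30, 8)), rng.standard_normal(30)
grads = [lambda x, a=A[i], bi=b[i]: a * (a @ x - bi) for i in range(30)]
x_hat = iag_prox(grads, np.zeros(8), step=0.01, reg=0.1)
```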

