Preliminaries: Convex Analysis and Convex Programming

Author(s):  
Stefan M. Stefanov

2005
Vol 2005 (19)
pp. 3175–3183
Author(s):  
Ahmed Addou ◽  
Abdenasser Benahmed

In this paper we give a convergence result for a parallel synchronous algorithm for nonlinear fixed point problems with respect to the Euclidean norm in ℝⁿ. We then apply this result to problems from convex analysis, such as the minimization of functionals, the computation of saddle points, and convex programming.
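
As a minimal illustration of the setting (not the authors' specific algorithm), the sketch below runs a synchronous fixed point iteration x_{k+1} = T(x_k) for a contraction T on ℝⁿ, with convergence tested in the Euclidean norm; the map T, its contraction factor, and the tolerances are assumptions chosen for the demo.

```python
# Minimal sketch (illustrative, not the paper's algorithm): synchronous
# fixed point iteration x_{k+1} = T(x_k) for a contraction T on R^n.
# All coordinates are updated together, as they would be in a parallel
# synchronous scheme where each processor owns one block of coordinates.
import numpy as np

def T(x):
    # Illustrative contraction: T(x) = 0.5*x + c is 0.5-Lipschitz in the
    # Euclidean norm, so it has a unique fixed point x* = 2c.
    c = np.array([1.0, -2.0, 0.5])
    return 0.5 * x + c

def synchronous_iteration(x0, tol=1e-10, max_iter=1000):
    x = x0
    for k in range(max_iter):
        x_new = T(x)                          # all coordinates "in parallel"
        if np.linalg.norm(x_new - x) < tol:   # Euclidean-norm stopping test
            return x_new, k
        x = x_new
    return x, max_iter

x_star, iters = synchronous_iteration(np.zeros(3))
print(x_star, iters)  # converges to the fixed point [2, -4, 1]
```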


Filomat ◽  
2012
Vol 26 (1)
pp. 55–65
Author(s):  
Delavar Khalafi ◽  
Bijan Davvaz

In this paper, we generalize some concepts of convex analysis, such as convex functions and linear functions, to hyperstructures. Based on the new definitions, we obtain some important results in convex programming. A few suitable examples are given to aid understanding.
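
For reference, the classical convexity inequality f(tx + (1-t)y) ≤ t f(x) + (1-t) f(y) is the definition the paper generalizes to hyperstructures; the sketch below merely checks this inequality numerically on random samples, with the test functions chosen as arbitrary assumptions.

```python
# Numerical sanity check of the classical convexity inequality
# f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y); the hyperstructure
# generalization in the paper replaces this vector-space setting.
import numpy as np

def is_convex_on_samples(f, dim=2, trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        t = rng.uniform()
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + 1e-12:
            return False   # found a violating triple (x, y, t)
    return True            # no violation found (evidence, not a proof)

f = lambda x: np.dot(x, x)    # ||x||^2 is convex
g = lambda x: -np.dot(x, x)   # -||x||^2 is not
print(is_convex_on_samples(f), is_convex_on_samples(g))  # True False
```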


Author(s):  
Mario A. Rotea ◽  
Pramod P. Khargonekar

Author(s):  
Radu Boţ ◽  
Guozhi Dong ◽  
Peter Elbau ◽  
Otmar Scherzer

Recently, there has been great interest in analysing dynamical flows where the stationary limit is the minimiser of a convex energy. Flows of particular interest have been the continuous limits of Nesterov's algorithm and of the fast iterative shrinkage-thresholding algorithm. In this paper, we approach the solutions of linear ill-posed problems by dynamical flows. Because the squared norm of the residual of a linear operator equation is a convex functional, the theoretical results from convex analysis for energy-minimising flows are applicable; in the restricted situation of this paper, however, they can often be significantly improved. Moreover, we show that the proposed flows for minimising the norm of the residual of a linear operator equation are optimal regularisation methods and that they provide optimal convergence rates for the regularised solutions, so the given rates can be considered benchmarks for further studies in convex analysis.
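
As a minimal sketch of the underlying idea (the simplest first-order case, not the paper's second-order Nesterov/FISTA-type flows), the code below follows the gradient flow x'(t) = -Aᵀ(Ax(t) - y) of the convex energy E(x) = ½‖Ax - y‖²; an explicit Euler discretization of this flow is the classical Landweber iteration, and early stopping acts as regularisation. The operator A, the data y, the step size, and the iteration count are assumptions for the demo.

```python
# Minimal sketch: explicit Euler discretization of the gradient flow
# x'(t) = -A^T (A x(t) - y) for E(x) = 0.5*||A x - y||^2 (Landweber
# iteration). A is made ill-conditioned to mimic an ill-posed problem.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5)) @ np.diag([1.0, 0.5, 0.1, 0.01, 0.001])
x_true = rng.normal(size=5)
y = A @ x_true + 1e-3 * rng.normal(size=20)   # noisy data

step = 0.9 / np.linalg.norm(A, 2) ** 2        # Euler step below 1/L, L = ||A||_2^2
x = np.zeros(5)
for k in range(2000):                          # stopping index = regularisation
    x = x - step * A.T @ (A @ x - y)           # one Euler step along the flow
print(np.linalg.norm(A @ x - y))               # residual norm after 2000 steps
```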


2021
Vol 0 (0)
Author(s):  
Darina Dvinskikh ◽  
Alexander Gasnikov

We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both the primal and the dual oracle, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objectives, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. Using a mini-batching technique, we show that the proposed methods with a stochastic oracle can additionally be parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
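
To make the setting concrete, here is an illustrative sketch of plain decentralized stochastic gradient descent with mini-batching (a baseline scheme, not the paper's communication-optimal methods): each node holds a local objective, averages its iterate with neighbours through a doubly stochastic mixing matrix, then takes a mini-batched noisy gradient step. The ring graph, quadratic objectives, noise model, and step size are all assumptions for the demo.

```python
# Illustrative baseline (not the paper's optimal method): decentralized
# SGD. Node i holds f_i(x) = 0.5*||x - b_i||^2; the network optimum of
# the average objective is b.mean(axis=0). Each round = one gossip
# (communication) step plus one mini-batched stochastic gradient step.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, batch = 4, 3, 16
b = rng.normal(size=(n_nodes, dim))            # local targets

# Ring-graph mixing matrix (symmetric, doubly stochastic).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

X = np.zeros((n_nodes, dim))                   # one iterate per node
for t in range(500):
    X = W @ X                                  # communication (gossip) step
    noise = rng.normal(size=(n_nodes, batch, dim)).mean(axis=1)  # mini-batch
    grad = (X - b) + 0.1 * noise               # stochastic local gradients
    X = X - 0.05 * grad                        # local oracle step
print(np.abs(X - b.mean(axis=0)).max())        # all nodes near consensus optimum
```

Mini-batching shrinks the gradient-noise standard deviation by a factor of 1/sqrt(batch), which is what lets the per-node oracle work be parallelized across the batch.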

