Modified Lagrangian Methods for Separable Optimization Problems

2012 ◽  
Vol 2012 ◽  
pp. 1-20 ◽  
Author(s):  
Abdelouahed Hamdi ◽  
Aiman A. Mukheimer

We propose a convergence analysis of a new decomposition method to solve structured optimization problems. The proposed scheme is based on a class of modified Lagrangians combined with the allocation of resources decomposition algorithm. Under mild assumptions, we show that the method generates convergent primal-dual sequences.
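The paper's specific modified-Lagrangian scheme is not reproduced in the abstract; as a point of reference, a minimal sketch of classical dual (price) decomposition on a toy separable problem (all data assumed for illustration) shows the structure such methods build on: each iteration solves the separable subproblems independently and then updates the multiplier on the coupling constraint.

```python
# Toy separable problem (data assumed for illustration):
#   minimize (x1 - 1)^2 + (x2 - 3)^2  subject to  x1 + x2 = 2
# Classical dual decomposition: the Lagrangian splits into one
# subproblem per variable, coordinated by the multiplier lam.

def solve_subproblem(c, lam):
    # argmin_x (x - c)^2 + lam * x has the closed form x = c - lam / 2
    return c - lam / 2.0

lam, step = 0.0, 0.5
for _ in range(200):
    x1 = solve_subproblem(1.0, lam)      # subproblems solved independently
    x2 = solve_subproblem(3.0, lam)
    lam += step * (x1 + x2 - 2.0)        # dual ascent on the coupling constraint

# converges to x1 = 0, x2 = 2, lam = 2
```

Modified (e.g. augmented) Lagrangians add terms that regularize this coordination step, which is what yields the convergent primal-dual sequences discussed above.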

2016 ◽  
Vol 3 (3) ◽  
pp. 296-309 ◽  
Author(s):  
Sindri Magnusson ◽  
Pradeep Chathuranga Weeraddana ◽  
Michael G. Rabbat ◽  
Carlo Fischione

2020 ◽  
Vol 8 (3) ◽  
pp. 656-667
Author(s):  
Zhenguo Mu ◽  
Junfeng Yang

Stochastic programming is an approach for solving optimization problems with uncertain data whose probability distribution is assumed to be known, and the progressive hedging algorithm (PHA) is a well-known decomposition method for solving the underlying model. However, the per-iteration computation of PHA can be very costly, since it solves a large number of subproblems, one for each scenario. In this paper, a stochastic variant of PHA is studied. At each iteration, only a small fraction of the scenarios is selected uniformly at random and the corresponding variable components are updated, while the components corresponding to the unselected scenarios are left untouched. The per-iteration cost can therefore be controlled freely to achieve very fast iterations. We show that, although the per-iteration cost is reduced significantly, the proposed stochastic PHA converges in an ergodic sense at the same sublinear rate as the original PHA.
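The per-iteration structure being randomized can be seen in a minimal sketch of the classical (full) PHA on a toy one-variable problem with quadratic scenario costs; all data below are assumed for illustration, and the paper's stochastic variant would replace the full scenario loop by a randomly sampled subset.

```python
# Minimal sketch of the classical progressive hedging algorithm (PHA):
#   minimize E_s[(x - a[s])^2] over a nonanticipative x,
# with equally likely scenarios (toy data, assumed for illustration).
a = [0.0, 2.0, 7.0, 11.0]      # scenario targets
n, r = len(a), 1.0             # r > 0 is the PHA penalty parameter
x = [0.0] * n                  # per-scenario copies of the decision
w = [0.0] * n                  # per-scenario multipliers (their sum stays 0)
xbar = 0.0

for _ in range(100):
    # The stochastic variant studied here would sample a random subset
    # of scenarios at this point instead of looping over all of them.
    for s in range(n):
        # closed-form argmin of (x - a[s])^2 + w[s]*x + (r/2)*(x - xbar)^2
        x[s] = (2 * a[s] - w[s] + r * xbar) / (2 + r)
    xbar = sum(x) / n          # nonanticipative average
    for s in range(n):
        w[s] += r * (x[s] - xbar)

# xbar converges to the mean of a, here 5.0
```

With all scenarios updated, each iteration costs n subproblem solves; sampling a fraction of the scenarios reduces that cost proportionally, at the price of the ergodic (averaged) convergence guarantee described in the abstract.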


Author(s):  
Erik Papa Quiroz ◽  
Orlando Sarmiento ◽  
Paulo Roberto Oliveira

This paper presents an inexact proximal method for solving monotone variational inequality problems with a given separable structure. The proposed algorithm is a natural extension of the Proximal Multiplier Algorithm with Proximal Distances (PMAPD) proposed by Sarmiento et al. [Optimization, 65(2), (2016), pp. 501-537], which unified the works of Chen and Teboulle (the PCPM method) and Kyono and Fukushima (the NPCPMM), developed for solving convex programs with a particular separable structure. The resulting method combines the proximal distances theory introduced by Auslender and Teboulle [SIAM J. Optim., 16 (2006), pp. 697-725] with the decomposition method given by Chen and Teboulle for convex problems, and extends the results of the Entropic Proximal Decomposition Method proposed by Auslender and Teboulle, which used Logarithmic-Quadratic proximal distances. Under mild assumptions on the problem we prove global convergence of the primal-dual sequences produced by the algorithm.
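To make the notion of a "proximal distance" concrete, the following hypothetical sketch shows a single entropic (Bregman) proximal step, the kind of non-Euclidean distance used in place of the squared norm; the linear cost and step size are assumed for illustration.

```python
import math

# One entropic proximal step on the positive half-line:
#   x+ = argmin_{x > 0} f(x) + (1/lam) * KL(x, xk),
# where KL(x, y) = x*log(x/y) - x + y is the Kullback-Leibler distance.
# For a linear cost f(x) = c*x, setting the derivative
#   c + (1/lam)*log(x/xk) = 0
# gives the closed form x+ = xk * exp(-lam * c).

def entropic_prox_step(xk, c, lam):
    return xk * math.exp(-lam * c)

x = 1.0
for _ in range(10):
    x = entropic_prox_step(x, c=0.5, lam=0.3)
# iterates stay strictly positive: the distance acts as a barrier
```

Unlike the Euclidean proximal step, the entropic step keeps iterates interior to the positive orthant automatically, which is why such distances are attractive for constrained separable problems.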


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Stefan M. Stefanov

We focus on some convex separable optimization problems, considered by the author in previous papers, for which necessary and sufficient conditions (or sufficient conditions) have been proved and convergent algorithms of polynomial computational complexity have been proposed. The concepts of well-posedness of optimization problems in the sense of Tychonov, of Hadamard, and in a generalized sense, as well as calmness in the sense of Clarke, are discussed. It is shown that the convex separable optimization problems under consideration are calm in the sense of Clarke. The concept of stability of the set of saddle points of the Lagrangian in the sense of Gol'shtein is also discussed, and it is shown that this set is not stable for the "classical" Lagrangian. It turns out, however, that despite this instability, due to the specificity of the author's approach to solving the problems under consideration, it is not necessary to use modified Lagrangians; the "classical" Lagrangian suffices. A primal-dual analysis of the problems under consideration, in view of methods for solving them, is also presented.
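A hypothetical sketch (all data assumed) illustrates the kind of convex separable problem and classical-Lagrangian approach at issue: with box constraints and one coupling equality, the subproblem minimizers are monotone in the multiplier, so bisection on the multiplier solves the problem in polynomial time.

```python
# Toy convex separable problem (data assumed for illustration):
#   minimize sum_i (x_i - a_i)^2
#   subject to sum_i x_i = b,  l_i <= x_i <= u_i.
# For a fixed multiplier lam, the classical Lagrangian separates and
#   x_i(lam) = clip(a_i - lam/2, l_i, u_i);
# sum_i x_i(lam) is nonincreasing in lam, so bisection finds lam*.

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def solve(a, l, u, b, tol=1e-10):
    lo, hi = -100.0, 100.0          # bracket for the multiplier (assumed wide enough)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        total = sum(clip(ai - lam / 2, li, ui) for ai, li, ui in zip(a, l, u))
        if total > b:
            lo = lam                # too much mass: increase the multiplier
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [clip(ai - lam / 2, li, ui) for ai, li, ui in zip(a, l, u)]

x = solve(a=[4.0, 1.0, 2.0], l=[0.0, 0.0, 0.0], u=[3.0, 3.0, 3.0], b=5.0)
# feasible: entries lie in [0, 3] and sum to b = 5
```

Each bisection step costs one pass over the variables, so the whole procedure runs in O(n log(1/tol)) time, matching the polynomial-complexity algorithms mentioned above.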
