On Primal and Dual Approaches for Distributed Stochastic Convex Optimization over Networks

Author(s):  
Darina Dvinskikh ◽  
Eduard Gorbunov ◽  
Alexander Gasnikov ◽  
Pavel Dvurechensky ◽  
Cesar A. Uribe
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Darina Dvinskikh ◽  
Alexander Gasnikov

Abstract We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both the primal and the dual oracle, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objectives, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. Using a mini-batching technique, we show that the proposed methods with a stochastic oracle can be additionally parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
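To make the decentralized setting concrete, here is a minimal sketch of (non-accelerated) decentralized stochastic gradient descent with mini-batching over a ring network. The mixing matrix, step size, and quadratic local losses are illustrative assumptions, not the authors' optimal methods; each node alternates a gossip (communication) step with a mini-batch stochastic gradient step, and the mini-batch can be evaluated in parallel at the node.

```python
import numpy as np

# Illustrative sketch: decentralized SGD with mini-batching on a ring.
# All problem data and parameters below are assumptions for demonstration.
rng = np.random.default_rng(0)
n_nodes, dim, batch = 5, 10, 8

# Node i holds (A_i, b_i) and the network minimizes the average of
# f_i(x) = E ||A_i x - b_i + noise||^2 / 2.
A = [rng.standard_normal((20, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(20) for _ in range(n_nodes)]

# Doubly stochastic mixing matrix for a ring network (gossip weights).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i + 1) % n_nodes] = 0.25
    W[i, (i - 1) % n_nodes] = 0.25

X = np.zeros((n_nodes, dim))  # one local iterate per node
step = 1e-2

for _ in range(500):
    # Communication step: average with neighbors via W.
    X = W @ X
    # Oracle step: a mini-batch of stochastic gradients per node
    # (the batch loop is embarrassingly parallel at each node).
    for i in range(n_nodes):
        g = np.zeros(dim)
        for _ in range(batch):
            noise = 0.1 * rng.standard_normal(20)
            g += A[i].T @ (A[i] @ X[i] - b[i] + noise)
        X[i] -= step * g / batch

# Consensus check: local iterates should nearly agree.
print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```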


2004 ◽  
Vol 21 (01) ◽  
pp. 9-33
Author(s):  
JAVIER SALMERÓN ◽  
ÁNGEL MARÍN

In this paper, we present an algorithm to solve a particular convex model explicitly. The model can arise in massive numbers when, for example, Benders decomposition or Lagrangian relaxation-decomposition is applied to solve large design problems in facility location and capacity expansion. To obtain the optimal solution of the model, we analyze its Karush–Kuhn–Tucker optimality conditions and develop a constructive algorithm that provides the optimal primal and dual solutions. This approach yields better performance than other convex optimization techniques.
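The paper's facility-location model is not reproduced in this listing, so as a hedged stand-in, the sketch below shows the same KKT-driven pattern on a simple separable convex model with one linking constraint: the optimality conditions are analyzed once, and a constructive (bisection) algorithm returns both the optimal primal and dual solutions. All names and parameter values are hypothetical.

```python
import numpy as np

# Illustrative separable model (not the paper's actual model):
#   min  sum_i (c_i / 2) x_i^2 - d_i x_i
#   s.t. sum_i x_i = B,  x_i >= 0
# KKT: c_i x_i - d_i + mu - lam_i = 0, lam_i >= 0, lam_i x_i = 0,
# hence x_i(mu) = max(0, (d_i - mu) / c_i); the residual
# sum_i x_i(mu) - B is monotone in mu, so bisection finds the root.

def solve_kkt(c, d, B, tol=1e-12):
    x_of = lambda mu: np.maximum(0.0, (d - mu) / c)
    lo, hi = d.min() - B * c.max(), d.max()   # brackets the root
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if x_of(mu).sum() > B:
            lo = mu    # too much mass: raise the multiplier
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    x = x_of(mu)
    lam = c * x - d + mu   # dual multipliers of the bounds x_i >= 0
    return x, mu, lam

c = np.array([1.0, 2.0, 4.0])
d = np.array([3.0, 2.0, 1.0])
x, mu, lam = solve_kkt(c, d, B=2.0)
print("primal x:", x, " dual mu:", mu, " dual lam:", lam)
```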


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Yanfei You ◽  
Suhong Jiang

This paper presents an improved Lagrangian-PPA-based prediction-correction method for solving linearly constrained convex optimization problems. At each iteration, the predictor is obtained by minimizing the proximal Lagrangian function with respect to the primal and dual variables. The subproblems involved either admit analytical solutions or can be solved by a fast algorithm. The new iterate is generated from the current iterate and the predictor, using an appropriately chosen stepsize. Compared with the existing PPA-based method, the conditions on the parameters are relaxed. We also establish the convergence and the convergence rate of the proposed method. Finally, numerical experiments are conducted to show the efficiency of our Lagrangian-PPA-based prediction-correction method.
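A hedged sketch of the prediction-correction pattern described above, applied to an equality-constrained least-distance problem where both prediction subproblems admit analytical solutions. The proximal parameters and correction stepsize below are illustrative choices, not the paper's relaxed conditions.

```python
import numpy as np

# Illustrative prediction-correction scheme for
#     min (1/2)||x - q||^2   s.t.   A x = b.
# All parameter choices are assumptions for demonstration.
rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
q = rng.standard_normal(n)

r = s = 2.0 * np.linalg.norm(A, 2)   # proximal parameters (assumed)
alpha = 1.3                          # correction stepsize (assumed)

x, y = np.zeros(n), np.zeros(m)
for _ in range(1000):
    # Prediction: minimize the proximal Lagrangian in x (closed form
    # for this quadratic objective), then a proximal dual step in y.
    x_pred = (q - A.T @ y + r * x) / (1.0 + r)
    y_pred = y + (A @ x_pred - b) / s
    # Correction: move from the current iterate toward the predictor.
    x += alpha * (x_pred - x)
    y += alpha * (y_pred - y)

# Closed-form KKT solution for comparison: x* = q - A^T y*, A x* = b.
y_star = np.linalg.solve(A @ A.T, A @ q - b)
x_star = q - A.T @ y_star
print("feasibility:", np.linalg.norm(A @ x - b),
      " error:", np.linalg.norm(x - x_star))
```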


Author(s):  
Goran Banjac ◽  
John Lygeros

Abstract Banjac et al. (J Optim Theory Appl 183(2):490–519, 2019) recently showed that the Douglas–Rachford algorithm provides certificates of infeasibility for a class of convex optimization problems. In particular, they showed that the difference between consecutive iterates generated by the algorithm converges to certificates of primal and dual strong infeasibility. Their result was shown in a finite-dimensional Euclidean setting and for a particular structure of the constraint set. In this paper, we extend the result to real Hilbert spaces and a general nonempty closed convex set. Moreover, we show that the proximal-point algorithm applied to the set of optimality conditions of the problem generates similar infeasibility certificates.
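The phenomenon is easy to reproduce numerically. Below is a small sketch (an illustrative example, not the paper's general Hilbert-space setting) that runs Douglas–Rachford on an infeasible two-ball feasibility problem and checks that the difference between consecutive iterates approaches the minimal-displacement vector between the sets, i.e., an infeasibility certificate.

```python
import numpy as np

# Two disjoint unit balls in R^2: centers (0,0) and (3,0), so the gap
# between the sets is 1 along the x-axis. Douglas-Rachford cannot find
# a common point, but s_{k+1} - s_k converges to the gap vector (1, 0).

def proj_ball(x, center, radius):
    """Euclidean projection onto a closed ball."""
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= radius else center + radius * d / nrm

cA = np.array([0.0, 0.0])   # ball A: center 0, radius 1
cB = np.array([3.0, 0.0])   # ball B: center (3,0), radius 1

s = np.array([0.5, 2.0])    # arbitrary starting point
for _ in range(2000):
    s_prev = s.copy()
    pa = proj_ball(s, cA, 1.0)
    pb = proj_ball(2.0 * pa - s, cB, 1.0)
    s = s + pb - pa          # Douglas-Rachford update

# Should print a vector close to (1, 0): the infeasibility certificate.
print("certificate estimate:", s - s_prev)
```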


2012 ◽  
Vol 8 (12) ◽  
Author(s):  
Abdol Samad Nawi ◽  
Irwan Bin Ismail ◽  
Zainuddin Zakaria ◽  
Jannah Munirah Md Noor ◽  
Bashir Ahmad Bin Shabir Ahmad ◽  
...  
