Modified proximal symmetric ADMMs for multi-block separable convex optimization with linear constraints

2021 ◽  
pp. 1-28
Author(s):  
Yuan Shen ◽  
Yannian Zuo ◽  
Liming Sun ◽  
Xiayang Zhang

We consider the linearly constrained separable convex optimization problem whose objective function is separable with respect to m blocks of variables. A number of methods have been proposed and extensively studied over the past decade. In particular, a modified strictly contractive Peaceman–Rachford splitting method (SC-PRCM) [S. H. Jiang and M. Li, A modified strictly contractive Peaceman–Rachford splitting method for multi-block separable convex programming, J. Ind. Manag. Optim. 14(1) (2018) 397-412] has been well studied in the literature for the special case of m = 3. Based on the modified SC-PRCM, we present modified proximal symmetric ADMMs (MPSADMMs) to solve the multi-block problem. In MPSADMMs, every subproblem except the first is equipped with a simple proximal term, and the multipliers are updated twice per iteration. At the end of each iteration, the output is corrected via a simple correction step. Without stringent assumptions, we establish the global convergence result and the O(1/t) convergence rate in the ergodic sense for the new algorithms. Preliminary numerical results show that our proposed algorithms are effective for solving the linearly constrained quadratic programming and the robust principal component analysis problems.
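
For concreteness, the following is a minimal Python sketch of the kind of scheme this abstract describes, on a toy three-block quadratic problem: the first subproblem carries no proximal term, the remaining blocks carry simple proximal terms and are updated from the same half-updated multiplier, the multiplier is updated twice, and the iterate is corrected by a simple averaging step. The parameter values (beta, s, tau, alpha) and the averaging correction are illustrative assumptions, not the paper's exact MPSADMM.

```python
# Sketch of a symmetric proximal ADMM with a correction step on
#     min  0.5*||x1-c1||^2 + 0.5*||x2-c2||^2 + 0.5*||x3-c3||^2
#     s.t. A1 x1 + A2 x2 + A3 x3 = b.
# All parameters below are illustrative, not the paper's required values.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
A = [rng.standard_normal((m, n)) for _ in range(3)]
c = [rng.standard_normal(n) for _ in range(3)]
b = sum(Ai @ rng.standard_normal(n) for Ai in A)
x = [np.zeros(n) for _ in range(3)]
lam = np.zeros(m)
beta, s, tau, alpha = 1.0, 0.5, 1.0, 0.8

def solve_block(Ai, ci, partial_sum, lam_cur, xi_old, prox):
    # argmin 0.5*||xi-ci||^2 + (beta/2)*||Ai xi + partial_sum - b - lam/beta||^2
    #        + (prox*beta/2)*||xi - xi_old||^2   (simple proximal term)
    H = (1.0 + prox * beta) * np.eye(n) + beta * Ai.T @ Ai
    g = ci + prox * beta * xi_old + Ai.T @ (lam_cur - beta * (partial_sum - b))
    return np.linalg.solve(H, g)

for k in range(300):
    x_old = [xi.copy() for xi in x]
    # first block: no proximal term
    x[0] = solve_block(A[0], c[0], A[1] @ x[1] + A[2] @ x[2], lam, x[0], 0.0)
    # first multiplier update (half step)
    r = A[0] @ x[0] + A[1] @ x[1] + A[2] @ x[2] - b
    lam_half = lam - s * beta * r
    # remaining blocks, each with a proximal term; they use only old partners
    x[1] = solve_block(A[1], c[1], A[0] @ x[0] + A[2] @ x_old[2], lam_half, x_old[1], tau)
    x[2] = solve_block(A[2], c[2], A[0] @ x[0] + A[1] @ x_old[1], lam_half, x_old[2], tau)
    # second multiplier update
    r = A[0] @ x[0] + A[1] @ x[1] + A[2] @ x[2] - b
    lam_pred = lam_half - s * beta * r
    # simple correction step: averaging between old iterate and predictor
    x = [xo + alpha * (xn - xo) for xo, xn in zip(x_old, x)]
    lam = lam + alpha * (lam_pred - lam)

print("constraint violation:",
      np.linalg.norm(sum(Ai @ xi for Ai, xi in zip(A, x)) - b))
```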

2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Hongchun Sun ◽  
Jing Liu ◽  
Min Sun

As a special three-block separable convex programming problem, the stable principal component pursuit (SPCP) arises in many different disciplines, such as statistical learning, signal processing, and web data ranking. In this paper, we propose a proximal fully parallel splitting method (PFPSM) for solving SPCP, in which the resulting subproblems all admit closed-form solutions and can be solved in a distributed manner. Compared with other similar algorithms in the literature, PFPSM attaches a Glowinski relaxation factor η ∈ (√3/2, 2/√3) to the updating formula for its Lagrange multiplier, which can be used to accelerate the convergence of the generated sequence. Under mild conditions, the global convergence of PFPSM is proved. Preliminary computational results show that the proposed algorithm works very well in practice.
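
The following Python sketch illustrates the algorithmic pattern the abstract describes on a penalized SPCP-style model, min ‖L‖_* + ρ‖S‖_1 + (τ/2)‖Z‖_F² s.t. L + S + Z = M: all three proximally regularized subproblems have closed-form solutions (singular-value thresholding, soft thresholding, and a scaling), decouple so they could run in parallel, and the multiplier update carries a relaxation factor η. The model weights, the proximal weight ν, and the exact update formulas are simplifying assumptions of this sketch, not the paper's PFPSM.

```python
# Proximal fully parallel (Jacobian-type) splitting sketch for a
# penalized SPCP model; all subproblems use only the previous iterate.
import numpy as np

def svt(X, t):
    # prox of t*||.||_* : singular-value soft thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def soft(X, t):
    # prox of t*||.||_1 : entrywise soft thresholding
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

rng = np.random.default_rng(1)
m, n, rk = 40, 30, 3
M = rng.standard_normal((m, rk)) @ rng.standard_normal((rk, n)) \
    + 0.01 * rng.standard_normal((m, n))   # low rank plus small noise
rho, tau = 0.1, 10.0                       # model weights (assumed)
beta, nu = 1.0, 2.0                        # penalty and proximal weights (assumed)
eta = 1.0                                  # relaxation factor, in (sqrt(3)/2, 2/sqrt(3))
L = np.zeros((m, n)); S = np.zeros((m, n)); Z = np.zeros((m, n))
Lam = np.zeros((m, n))

for k in range(300):
    R = L + S + Z - M - Lam / beta         # shared residual from old iterates
    # three proximally regularized subproblems; independent, hence parallelizable
    L_new = svt(L - R / (1.0 + nu), 1.0 / (beta * (1.0 + nu)))
    S_new = soft(S - R / (1.0 + nu), rho / (beta * (1.0 + nu)))
    Z_new = beta * ((1.0 + nu) * Z - R) / (tau + (1.0 + nu) * beta)
    # relaxed multiplier update with factor eta
    Lam = Lam - eta * beta * (L_new + S_new + Z_new - M)
    L, S, Z = L_new, S_new, Z_new

print("rank(L):", np.linalg.matrix_rank(L, tol=1e-6),
      " ||L+S+Z-M||:", np.linalg.norm(L + S + Z - M))
```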


2021
Author(s):  
Yanfei You ◽  
Suhong Jiang

This paper presents an improved Lagrangian-PPA based prediction correction method to solve the linearly constrained convex optimization problem. At each iteration, the predictor is obtained by minimizing the proximal Lagrangian function with respect to the primal and dual variables. The resulting subproblems either admit analytical solutions or can be solved by a fast algorithm. The new iterate is generated from the current iterate and the predictor with an appropriately chosen stepsize. Compared with the existing PPA-based method, the restrictions on the parameters are relaxed. We also establish the convergence and convergence rate of the proposed method. Finally, numerical experiments are conducted to show the efficiency of our Lagrangian-PPA based prediction correction method.
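
As an illustration of the prediction-correction pattern described above, here is a minimal Python sketch on a toy equality-constrained quadratic problem: the predictor minimizes a proximal Lagrangian in the primal variable and then takes a dual step, and the corrector moves from the current iterate toward the predictor with a stepsize γ ∈ (0, 2). The concrete updates follow the classical customized-PPA template, and the parameter condition rs > ‖AᵀA‖ is an assumption of this sketch, not necessarily the relaxed condition established in the paper.

```python
# Lagrangian-PPA prediction-correction sketch for
#     min 0.5*||x - c||^2   s.t.   A x = b.
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 12
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
b = A @ rng.standard_normal(n)
x, lam = np.zeros(n), np.zeros(m)
r = s = 1.1 * np.linalg.norm(A, 2)   # proximal parameters with r*s > ||A^T A||
gamma = 1.5                          # correction stepsize in (0, 2)

for k in range(500):
    # prediction: minimize the proximal Lagrangian in x (closed form here),
    # then take a dual step using the extrapolated point 2*x_pred - x
    x_pred = (c + A.T @ lam + r * x) / (1.0 + r)
    lam_pred = lam - (A @ (2 * x_pred - x) - b) / s
    # correction: relaxed move from the current iterate toward the predictor
    x = x - gamma * (x - x_pred)
    lam = lam - gamma * (lam - lam_pred)

print("constraint violation:", np.linalg.norm(A @ x - b))
```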

