Managing randomization in the multi-block alternating direction method of multipliers for quadratic optimization

Author(s):  
Krešimir Mihić ◽  
Mingxi Zhu ◽  
Yinyu Ye

Abstract The Alternating Direction Method of Multipliers (ADMM) has attracted considerable attention for solving large-scale constrained optimization problems with separable objectives. However, the two-block variable structure of ADMM still limits its practical computational efficiency, because at least one large matrix factorization is required even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure on the decision variables of the original optimization problem. Unfortunately, the multi-block ADMM, with more than two blocks, is not guaranteed to converge. On the other hand, two positive developments have been made: first, if the updating order of the multiple blocks is randomly permuted in each cyclic loop, then the method converges in expectation when solving any system of linear equations with any number of blocks; second, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness to the ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM), in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembling helps and when it hurts, and develop a criterion that guarantees almost-sure convergence. Second, guided by the theory of RAC-ADMM, we conduct numerical tests on both randomly generated and large-scale benchmark quadratic optimization problems, including continuous and binary graph-partition and quadratic-assignment problems as well as selected machine learning problems. Our numerical tests show that RAC-ADMM, combined with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.
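
A minimal sketch of the randomly assembled cyclic update described above, for the equality-constrained QP min ½xᵀHx + cᵀx s.t. Ax = b. All names and parameters below (rac_admm, num_blocks, beta, the cycle count) are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of a randomly assembled cyclic ADMM (RAC-ADMM) sweep for an
# equality-constrained QP:  min 1/2 x'Hx + c'x  s.t.  Ax = b.
import numpy as np

def rac_admm(H, c, A, b, num_blocks=4, beta=1.0, cycles=200, seed=0):
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    x = np.zeros(n)
    y = np.zeros(A.shape[0])                     # multiplier for Ax = b
    for _ in range(cycles):
        # Randomly re-assemble the variables into blocks at every cycle.
        perm = rng.permutation(n)
        blocks = np.array_split(perm, num_blocks)
        for S in blocks:
            R = np.setdiff1d(np.arange(n), S)    # indices held fixed
            As, Ar = A[:, S], A[:, R]
            # Minimize the augmented Lagrangian over x[S] with x[R] fixed.
            M = H[np.ix_(S, S)] + beta * As.T @ As
            rhs = -(c[S] + H[np.ix_(S, R)] @ x[R]
                    - As.T @ y + beta * As.T @ (Ar @ x[R] - b))
            x[S] = np.linalg.solve(M, rhs)
        y -= beta * (A @ x - b)                  # dual update after each sweep
    return x

# Tiny usage example on a random convex QP.
rng = np.random.default_rng(1)
n, m = 20, 5
Q = rng.standard_normal((n, n)); H = Q @ Q.T + np.eye(n)
A, b, c = rng.standard_normal((m, n)), rng.standard_normal(m), rng.standard_normal(n)
x = rac_admm(H, c, A, b)
print("primal residual:", np.linalg.norm(A @ x - b))
```

Re-drawing the block assignment at every cycle is what distinguishes the random assembling of RAC-ADMM from a fixed cyclic multi-block sweep.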

2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Hansi K. Abeynanda ◽  
G. H. J. Lanel

Distributed optimization is an important concept with applications in control theory and many related fields, as it is highly fault-tolerant and far more scalable than centralized optimization. Centralized solution methods are unsuitable for many application domains that consist of a large number of networked systems. In general, these large-scale networked systems cooperate to find an optimal solution to a common global objective, which motivates the analysis of the distributed optimization techniques demanded in most distributed settings. This paper presents an overview of decomposition methods as well as existing distributed methods and techniques employed in large-scale networked systems. A detailed analysis of gradient-like methods, subgradient methods, and methods of multipliers, including the alternating direction method of multipliers, is presented, and these methods are studied empirically through numerical examples. Moreover, an example highlighting the fact that the gradient method fails to solve certain distributed problems is discussed in the numerical results. A numerical implementation demonstrates that the alternating direction method of multipliers can solve this particular problem, revealing its robustness relative to the gradient method. Finally, we conclude the paper with possible future research directions.
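
As a hedged illustration of the kind of distributed scheme the survey analyzes, the following sketch applies global-consensus ADMM to a least-squares problem split across agents; the function name and parameters are assumptions, not code from the paper:

```python
# Global-consensus ADMM: each agent i holds local data (A_i, b_i) and a local
# copy x_i of the decision variable, coupled through the constraint x_i = z.
import numpy as np

def consensus_admm(local_A, local_b, rho=1.0, iters=100):
    n = local_A[0].shape[1]
    N = len(local_A)
    X = np.zeros((N, n))      # local copies x_i
    U = np.zeros((N, n))      # scaled dual variables u_i
    z = np.zeros(n)           # global consensus variable
    for _ in range(iters):
        for i, (Ai, bi) in enumerate(zip(local_A, local_b)):
            # Local ridge-regularized least-squares solve (done in parallel by agents).
            X[i] = np.linalg.solve(Ai.T @ Ai + rho * np.eye(n),
                                   Ai.T @ bi + rho * (z - U[i]))
        z = (X + U).mean(axis=0)          # gather/average step
        U += X - z                        # local dual updates
    return z

# Usage: three "agents", each holding a slice of one regression problem.
rng = np.random.default_rng(0)
A_full, x_true = rng.standard_normal((30, 5)), rng.standard_normal(5)
b_full = A_full @ x_true
parts = np.array_split(np.arange(30), 3)
z = consensus_admm([A_full[p] for p in parts], [b_full[p] for p in parts])
print("error vs true solution:", np.linalg.norm(z - x_true))
```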


2021 ◽  
Author(s):  
Miantao Chao ◽  
Liqun Liu

Abstract In this paper, we propose a dynamic alternating direction method of multipliers for two-block separable optimization problems. The well-known classical ADMM can be obtained by time discretization of the proposed dynamical system. Under suitable conditions, we prove that the trajectory asymptotically converges to a saddle point of the Lagrangian function of the problem. When the coefficient matrices in the constraint are identity matrices, we prove a worst-case O(1/t) convergence rate in the ergodic sense.
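
For reference, the classical two-block ADMM iteration that the abstract says is recovered by time discretization can be written as follows (standard notation, assuming the usual augmented-Lagrangian form with penalty β):

```latex
% Two-block separable problem and its augmented Lagrangian
\min_{x_1, x_2}\; f_1(x_1) + f_2(x_2) \quad \text{s.t.}\quad A_1 x_1 + A_2 x_2 = b,
\qquad
L_\beta(x_1, x_2, \lambda) = f_1(x_1) + f_2(x_2)
  - \lambda^{\top}(A_1 x_1 + A_2 x_2 - b)
  + \tfrac{\beta}{2}\,\|A_1 x_1 + A_2 x_2 - b\|^2 .

% Classical two-block ADMM sweep
\begin{aligned}
x_1^{k+1} &= \arg\min_{x_1}\; L_\beta(x_1, x_2^{k}, \lambda^{k}), \\
x_2^{k+1} &= \arg\min_{x_2}\; L_\beta(x_1^{k+1}, x_2, \lambda^{k}), \\
\lambda^{k+1} &= \lambda^{k} - \beta\,(A_1 x_1^{k+1} + A_2 x_2^{k+1} - b).
\end{aligned}
```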


2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
Yu Li ◽  
Qiming Zou ◽  
Xiaoru Ji ◽  
Chanyuan Zhang ◽  
Ke Lu

Model Predictive Control (MPC) can effectively handle control problems with disturbances, multiple control variables, and complex constraints, and is widely used in various control systems. In MPC, the control input at each time step is obtained by solving an online optimization problem, which can introduce delays on embedded computers with limited computational resources. In this paper, we use the adaptive Alternating Direction Method of Multipliers (a-ADMM) to accelerate the solution of MPC. This method adaptively adjusts the penalty parameter to balance the primal and dual residuals. The performance of the approach is profiled on the control of a quadcopter with 12 states, 4 controls, and a prediction horizon ranging from 10 to 40. The simulation results demonstrate that MPC based on a-ADMM achieves a significant improvement in real-time and convergence performance and is thus better suited to solving large-scale optimal control problems.
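
A minimal sketch of the residual-balancing penalty update that adaptive ADMM variants typically use: the penalty grows when the primal residual dominates and shrinks when the dual residual dominates. The thresholds mu, tau_inc, tau_dec and the helper name are assumptions based on the standard rule, not the paper's implementation:

```python
import numpy as np

def update_penalty(rho, r_primal, s_dual, u, mu=10.0, tau_inc=2.0, tau_dec=2.0):
    """Return the adjusted penalty rho and the rescaled *scaled* dual variable u."""
    r, s = np.linalg.norm(r_primal), np.linalg.norm(s_dual)
    if r > mu * s:                    # primal residual too large -> penalize harder
        return rho * tau_inc, u / tau_inc
    if s > mu * r:                    # dual residual too large -> relax the penalty
        return rho / tau_dec, u * tau_dec
    return rho, u                     # residuals already balanced

# Inside an ADMM-based MPC solver this would be called once per iteration, e.g.
# (for the standard form  min f(x) + g(z)  s.t.  Ax + Bz = c, with scaled dual u):
#   rho, u = update_penalty(rho, A @ x + B @ z - c, rho * A.T @ B @ (z - z_old), u)
```

Rescaling u whenever rho changes keeps the unscaled multiplier y = rho * u unchanged across the adjustment.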

