Barrier Methods for Large-Scale Quadratic Programming

Author(s):  
Dulce B. Ponceleon
Author(s):  
Krešimir Mihić ◽  
Mingxi Zhu ◽  
Yinyu Ye

Abstract The Alternating Direction Method of Multipliers (ADMM) has gained considerable attention for solving large-scale, objective-separable constrained optimization problems. However, the two-block variable structure of ADMM still limits its practical computational efficiency, because at least one large matrix factorization is needed even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure on the decision variables in the original optimization problem. Unfortunately, multi-block ADMM, with more than two blocks, is not guaranteed to converge. On the other hand, two positive developments have been made: first, if the updating order of the multiple blocks is randomly permuted in each cyclic loop, the method converges in expectation when solving any system of linear equations with any number of blocks; second, such randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness to ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM), in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembly helps and when it hurts, and develop a criterion that guarantees almost-sure convergence. Second, guided by these theoretical results, we conduct numerical tests on both randomly generated and large-scale benchmark quadratic optimization problems, including continuous and binary graph-partition and quadratic-assignment problems and selected machine learning problems. Our numerical tests show that RAC-ADMM, with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.
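The random-assembly idea can be illustrated with a toy sketch. This is not the paper's full RAC-ADMM (which also carries augmented-Lagrangian dual updates for the constraints); it only shows the "randomly assembled cyclic" part: each sweep, the variables of an unconstrained convex quadratic are randomly re-assembled into blocks, and each block is minimized exactly with the rest held fixed. All problem data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy convex QP: minimize 0.5 x^T H x - c^T x, with H symmetric positive definite.
n, n_blocks = 12, 3
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)
c = rng.standard_normal(n)
x = np.zeros(n)

for sweep in range(200):
    # Randomly assemble the variables into blocks on every cycle (the "RAC" idea):
    # permute all indices, then split the permutation into equally sized blocks.
    perm = rng.permutation(n)
    for block in np.array_split(perm, n_blocks):
        rest = np.setdiff1d(np.arange(n), block)
        # Exact minimization over this block, others fixed:
        #   H[block, block] x_block = c[block] - H[block, rest] x_rest
        rhs = c[block] - H[np.ix_(block, rest)] @ x[rest]
        x[block] = np.linalg.solve(H[np.ix_(block, block)], rhs)

x_star = np.linalg.solve(H, c)
print(np.max(np.abs(x - x_star)))  # small: the sweeps converge to the direct solve
```

Because H is positive definite, exact block-coordinate minimization converges regardless of the (random) block assembly; the interesting questions the paper studies arise once constraints and dual updates enter.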


2014 ◽  
Vol 30 (1) ◽  
pp. 191-214 ◽  
Author(s):  
Attila Kozma ◽  
Christian Conte ◽  
Moritz Diehl

2006 ◽  
Vol 27 (3) ◽  
pp. 383-391
Author(s):  
Yun-kang Sui ◽  
Jia-zheng Du ◽  
Ying-qiao Guo

1996 ◽  
Vol 62 (1) ◽  
pp. 419-437 ◽  
Author(s):  
Paul T. Boggs ◽  
Paul D. Domich ◽  
Janet E. Rogers

Acta Numerica ◽  
1995 ◽  
Vol 4 ◽  
pp. 1-51 ◽  
Author(s):  
Paul T. Boggs ◽  
Jon W. Tolle

Since its popularization in the late 1970s, Sequential Quadratic Programming (SQP) has arguably become the most successful method for solving nonlinearly constrained optimization problems. As with most optimization methods, SQP is not a single algorithm, but rather a conceptual method from which numerous specific algorithms have evolved. Backed by a solid theoretical and computational foundation, both commercial and public-domain SQP algorithms have been developed and used to solve a remarkably large set of important practical problems. Recently large-scale versions have been devised and tested with promising results.
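The conceptual core of SQP can be sketched on a tiny equality-constrained problem of my own choosing (the toy problem, starting point, and variable names below are all illustrative assumptions, not from the survey): each iteration solves the KKT system of a local quadratic subproblem, which is equivalent to a Newton step on the Lagrangian optimality conditions.

```python
import numpy as np

# Hypothetical toy problem: minimize x1 + x2  subject to  x1^2 + x2^2 = 2.
# Solution: x* = (-1, -1) with multiplier lambda* = 0.5.
f_grad = lambda x: np.array([1.0, 1.0])          # gradient of the linear objective
h = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0        # equality constraint
h_grad = lambda x: 2.0 * x
h_hess = 2.0 * np.eye(2)

x = np.array([-1.5, -0.5])
lam = 0.5  # initial multiplier guess, chosen near the solution for simplicity

for it in range(20):
    W = lam * h_hess                 # Hessian of the Lagrangian (objective Hessian is zero)
    A = h_grad(x)
    # One SQP step = solve the KKT system of the local QP subproblem:
    #   minimize  g^T d + 0.5 d^T W d   subject to   A^T d + h(x) = 0
    KKT = np.block([[W, A.reshape(2, 1)],
                    [A.reshape(1, 2), np.zeros((1, 1))]])
    rhs = np.concatenate([-f_grad(x) - lam * A, [-h(x)]])
    step = np.linalg.solve(KKT, rhs)
    x += step[:2]
    lam += step[2]

print(x, lam)  # converges to (-1, -1) with multiplier 0.5
```

Practical SQP codes add the machinery the survey discusses: globalization (line search or trust region), quasi-Newton approximations of W, and inequality handling via active sets or QP solvers.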


2013 ◽  
Vol 312 ◽  
pp. 771-776
Author(s):  
Min Juan Zheng ◽  
Guo Jian Cheng ◽  
Fei Zhao

The quadratic programming problem in the standard support vector machine (SVM) algorithm has high time and space complexity when solving large-scale problems, which becomes a bottleneck in SVM applications. The Ball Vector Machine (BVM) converts the quadratic programming problem of the traditional SVM into a minimum enclosing ball (MEB) problem. It obtains the solution of the quadratic program indirectly by solving the MEB problem, which significantly reduces both the time and the space complexity. Experiments on five large-scale, high-dimensional data sets show that the BVM and the standard SVM achieve comparable accuracy, but the BVM is faster and requires less space than the standard SVM.
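The MEB problem that BVM reduces to can itself be approximated very cheaply. Below is a minimal sketch of a Bădoiu–Clarkson-style iteration (an assumption about one standard MEB approximation scheme, not BVM's specific solver): repeatedly move the centre a shrinking step toward the currently farthest point. The square-corner data set is invented so the exact answer is known.

```python
import numpy as np

# Four corners of a square: the minimum enclosing ball is centred at the
# origin with radius sqrt(2).
P = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])

# Badoiu-Clarkson-style iteration: step the centre toward the farthest
# point with step size 1/(t+1), which yields a (1+eps)-approximate ball.
c = P.mean(axis=0)
for t in range(1, 2000):
    far = P[np.argmax(np.linalg.norm(P - c, axis=1))]
    c += (far - c) / (t + 1)

radius = np.linalg.norm(P - c, axis=1).max()
print(round(radius, 3))  # close to sqrt(2) ~ 1.414
```

Each iteration costs one pass over the points and no matrix storage, which is the source of the time- and space-complexity savings the abstract reports for BVM relative to solving the SVM's dense QP directly.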

