An Augmented Lagrangian Method for $\ell_{1}$-Regularized Optimization Problems with Orthogonality Constraints

2016 · Vol. 38(4) · pp. B570-B592
Author(s): Weiqiang Chen, Hui Ji, Yanfei You

2020 · Vol. 0(0)
Author(s): Yu Gao, Jingzhi Li, Yongcun Song, Chao Wang, Kai Zhang

Abstract: We consider optimal control problems constrained by the Stokes equations. It has been shown in the literature that the problem can be discretized by the finite element method to generate a discrete system, and the corresponding error estimate has also been established. In this paper, we focus on solving the discrete system by the alternating splitting augmented Lagrangian method, which is a direct extension of the alternating direction method of multipliers (ADMM) and possesses a global $\mathcal{O}(1/k)$ convergence rate. In addition, we propose an acceleration scheme based on the alternating splitting augmented Lagrangian method to improve the efficiency of the algorithm. Error estimates and convergence analysis of our algorithms are presented for several different types of optimization problems. Finally, numerical experiments are performed to verify the efficiency of the algorithms.
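To make the iteration concrete, the following is a minimal sketch of a two-block ADMM-type (alternating splitting augmented Lagrangian) update, shown on a toy consensus problem with closed-form subproblems. It is not the paper's discretized Stokes-constrained solver; the vectors a, b and the penalty parameter beta are illustrative assumptions.

```python
# A minimal sketch of a two-block ADMM / alternating splitting augmented
# Lagrangian iteration on a toy consensus problem (NOT the paper's
# discretized Stokes-constrained system):
#     min_{x,z}  0.5*||x - a||^2 + 0.5*||z - b||^2   s.t.  x - z = 0
# The vectors a, b and the penalty parameter beta are illustrative assumptions.
import numpy as np

def admm_toy(a, b, beta=1.0, iters=200):
    x = np.zeros_like(a)
    z = np.zeros_like(b)
    lam = np.zeros_like(a)   # multiplier for the coupling constraint x - z = 0
    for _ in range(iters):
        # x-subproblem: argmin_x 0.5||x-a||^2 + <lam, x-z> + 0.5*beta*||x-z||^2
        x = (a - lam + beta * z) / (1.0 + beta)
        # z-subproblem: argmin_z 0.5||z-b||^2 - <lam, z> + 0.5*beta*||x-z||^2
        z = (b + lam + beta * x) / (1.0 + beta)
        # multiplier (dual ascent) step
        lam = lam + beta * (x - z)
    return x, z

a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 2.0, 1.0])
x, z = admm_toy(a, b)
print(x, z)   # both tend to (a + b) / 2 = [2., 2., 2.]
```

Both subproblems are solved exactly in closed form here; in a discretized optimal control setting each block would typically instead require solving a linear system arising from the finite element discretization.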


Author(s): Joachim Giesen, Soeren Laue

Many machine learning methods entail minimizing a loss function that is the sum of the losses for each data point. The form of the loss function is exploited algorithmically, for instance in stochastic gradient descent (SGD) and in the alternating direction method of multipliers (ADMM). However, there are also machine learning methods where the entailed optimization problem features the data points not in the objective function but in the form of constraints, typically one constraint per data point. Here, we address the problem of solving convex optimization problems with many convex constraints. Our approach is an extension of ADMM. The straightforward implementation of ADMM for solving constrained optimization problems in a distributed fashion solves constrained subproblems on different compute nodes, whose solutions are aggregated until a consensus solution is reached. Hence, the straightforward approach has three nested loops: one for reaching consensus, one for the constraints, and one for the unconstrained problems. Here, we show that solving the costly constrained subproblems can be avoided. In our approach, we combine the ability of ADMM to solve convex optimization problems in a distributed setting with the ability of the augmented Lagrangian method to solve constrained optimization problems. Consequently, our algorithm only needs two nested loops. We prove that it inherits the convergence guarantees of both ADMM and the augmented Lagrangian method. Experimental results corroborate our theoretical findings.
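As a rough illustration of the idea (an assumption-laden sketch, not the authors' published algorithm), the snippet below handles each per-data-point constraint with an augmented Lagrangian term inside its local consensus-ADMM subproblem, so the local solves are unconstrained and only two nested loops remain: an outer loop for consensus and multiplier updates, and an inner loop for the unconstrained local problems.

```python
# Illustrative sketch only: each constraint a_i^T x <= b_i gets an augmented
# Lagrangian term inside its local consensus-ADMM subproblem, so every
# subproblem is unconstrained and only two nested loops remain.
# Toy problem:  min_x 0.5*||x - c||^2   s.t.  a_i^T x <= b_i,  i = 1..m.
# All names (c, A, b, rho, lr, loop counts) are assumptions made for the demo.
import numpy as np

def consensus_admm_al(c, A, b, rho=1.0, outer=300, inner=50, lr=0.2):
    m, n = A.shape
    X = np.zeros((m, n))   # local copies, one per constraint ("compute node")
    z = np.zeros(n)        # global consensus variable
    U = np.zeros((m, n))   # scaled multipliers for the consensus constraints x_i = z
    mu = np.zeros(m)       # augmented Lagrangian multipliers for a_i^T x <= b_i
    for _ in range(outer):                    # outer loop: consensus + multipliers
        for i in range(m):
            x = X[i].copy()
            for _ in range(inner):            # inner loop: unconstrained local solve
                slack = max(0.0, mu[i] / rho + A[i] @ x - b[i])
                grad = (x - c) / m + rho * (x - z + U[i]) + rho * slack * A[i]
                x -= lr * grad
            X[i] = x
            mu[i] = max(0.0, mu[i] + rho * (A[i] @ X[i] - b[i]))
        z = (X + U).mean(axis=0)              # consensus (aggregation) update
        U += X - z                            # consensus multiplier update
    return z

c = np.array([2.0, 2.0])
A = np.array([[1.0, 0.0], [0.0, 1.0]])   # constraints x1 <= 1 and x2 <= 1.5
b = np.array([1.0, 1.5])
print(consensus_admm_al(c, A, b))        # approaches the feasible point ~[1.0, 1.5]
```

In this toy setup the consensus average plays the role of the aggregation step, and the constraint multipliers are updated in the same outer loop that drives consensus, which is what removes the third loop of the naive scheme.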


2018 · Vol. 175(1-2) · pp. 503-536
Author(s): Natashia Boland, Jeffrey Christiansen, Brian Dandurand, Andrew Eberhard, Fabricio Oliveira
