Smoothing augmented Lagrangian method for nonsmooth constrained optimization problems

2014
Vol. 62 (4)
pp. 675-694
Author(s):  
Mengwei Xu ◽  
Jane J. Ye ◽  
Liwei Zhang
2018
Vol. 175 (1-2)
pp. 503-536
Author(s):  
Natashia Boland ◽  
Jeffrey Christiansen ◽  
Brian Dandurand ◽  
Andrew Eberhard ◽  
Fabricio Oliveira

Author(s):  
Christian Kanzow ◽  
Andreas B. Raharja ◽  
Alexandra Schwartz

Abstract A reformulation of cardinality-constrained optimization problems into continuous nonlinear optimization problems with an orthogonality-type constraint has gained some popularity during the last few years. Due to the special structure of the constraints, the reformulation violates many standard assumptions and therefore is often solved using specialized algorithms. In contrast to this, we investigate the viability of using a standard safeguarded multiplier penalty method without any problem-tailored modifications to solve the reformulated problem. We prove global convergence towards an (essentially strongly) stationary point under a suitable problem-tailored quasinormality constraint qualification. Numerical experiments illustrating the performance of the method in comparison to regularization-based approaches are provided.
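As a rough illustration of the approach described in this abstract (not the authors' implementation), the sketch below runs a safeguarded multiplier penalty (augmented Lagrangian) loop on the relaxed reformulation of a toy cardinality-constrained least-squares problem. The problem data, the names kappa, b, rho, lam, and the safeguarding bound are assumptions made purely for the example.

```python
# Minimal sketch, assuming a toy problem
#   min_x ||x - b||^2   s.t.  ||x||_0 <= kappa,
# rewritten with auxiliary y in [0,1]^n, sum(y) >= n - kappa, and the
# orthogonality-type constraint x_i * y_i = 0, which is handled by a
# safeguarded augmented Lagrangian (multiplier penalty) loop.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, kappa = 6, 2
b = rng.normal(size=n)

def split(z):
    return z[:n], z[n:]

def objective(z):
    x, _ = split(z)
    return np.sum((x - b) ** 2)

def eq_constr(z):                      # orthogonality constraints x_i * y_i = 0
    x, y = split(z)
    return x * y

def aug_lagrangian(z, lam, rho):       # penalize only the orthogonality part
    c = eq_constr(z)
    return objective(z) + lam @ c + 0.5 * rho * np.sum(c ** 2)

lam = np.zeros(n)                      # multiplier estimate
lam_max = 1e6                          # safeguard bound on the multipliers
rho = 1.0
z = np.concatenate([b, np.ones(n)])

bounds = [(None, None)] * n + [(0.0, 1.0)] * n
lin = {"type": "ineq", "fun": lambda z: np.sum(z[n:]) - (n - kappa)}

for k in range(30):
    # inner subproblem: simple bound/linear constraints stay explicit,
    # only the difficult orthogonality constraint is penalized
    res = minimize(aug_lagrangian, z, args=(lam, rho),
                   bounds=bounds, constraints=[lin])
    z = res.x
    c = eq_constr(z)
    lam = np.clip(lam + rho * c, -lam_max, lam_max)   # safeguarded update
    if np.linalg.norm(c) < 1e-8:
        break
    rho *= 2.0

x, y = split(z)
print("approx. sparse solution:", np.round(x, 3))
```

The safeguarding is the clipping of the multiplier estimates to a bounded box, which is the standard ingredient of safeguarded multiplier penalty methods; everything else is a plain augmented Lagrangian iteration.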


Author(s):  
Joachim Giesen ◽  
Soeren Laue

Many machine learning methods entail minimizing a loss function that is the sum of the losses for each data point. The form of the loss function is exploited algorithmically, for instance in stochastic gradient descent (SGD) and in the alternating direction method of multipliers (ADMM). However, there are also machine learning methods where the entailed optimization problem features the data points not in the objective function but in the form of constraints, typically one constraint per data point. Here, we address the problem of solving convex optimization problems with many convex constraints. Our approach is an extension of ADMM. The straightforward implementation of ADMM for solving constrained optimization problems in a distributed fashion solves constrained subproblems on different compute nodes, whose solutions are aggregated until a consensus is reached. Hence, the straightforward approach has three nested loops: one for reaching consensus, one for the constraints, and one for the unconstrained problems. Here, we show that solving the costly constrained subproblems can be avoided. In our approach, we combine the ability of ADMM to solve convex optimization problems in a distributed setting with the ability of the augmented Lagrangian method to solve constrained optimization problems. Consequently, our algorithm only needs two nested loops. We prove that it inherits the convergence guarantees of both ADMM and the augmented Lagrangian method. Experimental results corroborate our theoretical findings.
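A minimal sketch of this idea, under the assumption of a toy quadratic objective with one linear constraint per "data point": each node handles its constraint through an augmented Lagrangian term inside an otherwise unconstrained consensus-ADMM subproblem, so only two nested loops remain. The penalty rho, the multipliers mu, and the specific update rules are illustrative choices, not the paper's exact algorithm.

```python
# Minimal sketch: consensus ADMM where node i owns one linear constraint
# a_i^T x <= b_i and treats it with an augmented Lagrangian term, so the
# inner subproblems are unconstrained (two nested loops instead of three).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
d, m = 3, 8                        # dimension, number of constraints/"nodes"
c = rng.normal(size=d)             # objective: min ||x - c||^2
A = rng.normal(size=(m, d))
b = A @ rng.normal(size=d) + 0.1   # feasible right-hand sides

rho = 1.0                          # ADMM penalty parameter
mu = np.zeros(m)                   # constraint multipliers, one per node
X = np.zeros((m, d))               # local copies of the variable
U = np.zeros((m, d))               # scaled ADMM dual variables
z = np.zeros(d)                    # consensus variable

def local_obj(x, i, z, u, mu_i):
    viol = max(0.0, A[i] @ x - b[i])                # constraint violation
    return (np.sum((x - c) ** 2) / m
            + 0.5 * rho * np.sum((x - z + u) ** 2)  # consensus coupling
            + mu_i * viol + 0.5 * rho * viol ** 2)  # augmented Lagrangian term

for k in range(100):                                # consensus loop
    for i in range(m):                              # unconstrained subproblems
        X[i] = minimize(local_obj, X[i], args=(i, z, U[i], mu[i]),
                        method="Powell").x
    z = np.mean(X + U, axis=0)                      # consensus update
    U += X - z                                      # ADMM dual update
    mu = np.maximum(0.0, mu + rho * (A @ z - b))    # constraint multipliers

print("consensus solution:", np.round(z, 3))
print("max constraint violation:", np.max(A @ z - b))
```

The point of the sketch is structural: no node ever calls a constrained solver, yet the constraint multipliers are driven to feasibility by the same outer loop that drives consensus.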


2020
Vol. 0 (0)
Author(s):  
Yu Gao ◽  
Jingzhi Li ◽  
Yongcun Song ◽  
Chao Wang ◽  
Kai Zhang

Abstract We consider optimal control problems constrained by Stokes equations. It has been shown in the literature that the problem can be discretized by the finite element method to generate a discrete system, and the corresponding error estimate has been established. In this paper, we focus on solving the discrete system by the alternating splitting augmented Lagrangian method, which is a direct extension of the alternating direction method of multipliers and possesses a global $\mathcal{O}(1/k)$ convergence rate. In addition, we propose an acceleration scheme based on the alternating splitting augmented Lagrangian method to improve the efficiency of the algorithm. The error estimates and convergence analysis of our algorithms are presented for several different types of optimization problems. Finally, numerical experiments are performed to verify the efficiency of the algorithms.
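To make the alternating splitting idea concrete, here is a minimal sketch on a generic discrete optimal control surrogate, min 0.5||y - y_d||^2 + 0.5*alpha*||u||^2 subject to A y = B u, which stands in for the finite element discretization of the Stokes-constrained problem. The matrices A, B and the parameters alpha, beta are assumptions for illustration only, and the paper's acceleration scheme is not included.

```python
# Minimal sketch (not the authors' Stokes/FEM code): alternating splitting
# augmented Lagrangian / ADMM iteration on a toy discrete optimal control
# problem  min 0.5||y - y_d||^2 + 0.5*alpha*||u||^2  s.t.  A y = B u.
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 5
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # discrete "state operator"
B = rng.normal(size=(n, m))                     # control-to-state coupling
y_d = rng.normal(size=n)                        # desired state
alpha, beta = 1e-2, 1.0                         # regularization, penalty

y, u, p = np.zeros(n), np.zeros(m), np.zeros(n)
for k in range(500):
    # y-block: minimize the augmented Lagrangian in y with u, p fixed
    y = np.linalg.solve(np.eye(n) + beta * A.T @ A,
                        y_d - A.T @ p + beta * A.T @ (B @ u))
    # u-block: minimize the augmented Lagrangian in u with y, p fixed
    u = np.linalg.solve(alpha * np.eye(m) + beta * B.T @ B,
                        B.T @ p + beta * B.T @ (A @ y))
    # multiplier update for the equality constraint A y = B u
    p = p + beta * (A @ y - B @ u)

print("constraint residual:", np.linalg.norm(A @ y - B @ u))
```

Each pass alternates two linear-quadratic block minimizations followed by a multiplier update, which is exactly the two-block splitting structure that yields the O(1/k) rate cited in the abstract.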

