Augmented Lagrangian Function: Recently Published Documents

TOTAL DOCUMENTS: 30 (five years: 2)
H-INDEX: 7 (five years: 0)

2021 · Vol 30 (01) · pp. 2140007
Author(s): Chengchen Dai, Hangjun Che, Man-Fai Leung

This paper presents a neurodynamic optimization approach to l1 minimization based on an augmented Lagrangian function. By using the threshold function of the locally competitive algorithm (LCA), the subgradient at a nondifferentiable point is equivalently replaced with the difference between the neuronal state and its mapping. The efficacy of the proposed approach is substantiated by reconstructing three compressed images.
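
To illustrate the thresholding idea, the sketch below integrates LCA-style dynamics for an l1-regularized least-squares (sparse recovery) problem with forward Euler. It is a minimal sketch under assumed dynamics and parameter choices, not the authors' network: the term (u - a) plays the role of the subgradient replacement described above.

```python
import numpy as np

def soft_threshold(u, lam):
    # T_lam(u): the LCA threshold function (soft thresholding).
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_l1(A, b, lam=0.05, dt=0.01, steps=3000):
    """Euler-integrate LCA-style dynamics
        du/dt = A^T (b - A a) - (u - a),   a = T_lam(u),
    where (u - a) stands in for lam * subgradient of ||a||_1 at a."""
    u = np.zeros(A.shape[1])
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += dt * (A.T @ (b - A @ a) - (u - a))
    return soft_threshold(u, lam)

# Example: recover a sparse vector from compressed measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
x_hat = lca_l1(A, A @ x_true)
```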


2021 · Vol 3 (1) · pp. 89-117
Author(s): Yangyang Xu

First-order methods (FOMs) have been popular for solving large-scale problems. However, many existing works consider only unconstrained problems or those with simple constraints. In this paper, we develop two FOMs for constrained convex programs where the constraint set is described by affine equations and smooth nonlinear inequalities. Both methods are based on the classical augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but use different primal updates. The first method performs, at each iteration, a single proximal gradient step on the primal variable; the second is a block-update version of the first. For the first method, we establish global iterate convergence as well as global sublinear and local linear convergence; for the second, we show a global sublinear convergence result in expectation. Numerical experiments on basis pursuit denoising, convex quadratically constrained quadratic programs, and the Neyman-Pearson classification problem demonstrate the empirical performance of the proposed methods, whose numerical behavior closely matches the established theoretical results.
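
For intuition, here is a hedged sketch of the first method's update pattern, specialized to a smooth objective with affine equality constraints only (no nonlinear inequalities and no proximal term): one gradient step on the augmented Lagrangian in the primal variable, followed by the ALM multiplier update. The function names and step sizes are illustrative, not the paper's.

```python
import numpy as np

def linearized_alm(grad_f, A, b, x0, beta=1.0, eta=1e-2, iters=5000):
    """Sketch: min f(x) s.t. Ax = b via one primal gradient step on
    L_beta(x, y) = f(x) + y^T(Ax - b) + (beta/2)||Ax - b||^2 per iteration,
    then the usual ALM dual ascent step."""
    x, y = x0.copy(), np.zeros(A.shape[0])
    for _ in range(iters):
        r = A @ x - b                       # constraint residual
        g = grad_f(x) + A.T @ (y + beta * r)  # grad of L_beta in x
        x = x - eta * g                     # single primal gradient step
        y = y + beta * (A @ x - b)          # ALM multiplier update
    return x, y

# Example: min 0.5*||x||^2 subject to Ax = b.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 20))
b = rng.standard_normal(5)
x, y = linearized_alm(lambda x: x, A, b, np.zeros(20))
```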


2017 · Vol 34 (06) · pp. 1750030
Author(s): Zhongming Wu, Min Li, David Z. W. Wang, Deren Han

In this paper, we propose a symmetric alternating direction method of multipliers for minimizing the sum of two nonconvex functions with linear constraints, whose algorithmic framework contains the classic alternating direction method of multipliers as a special case. Based on the powerful Kurdyka–Łojasiewicz property, and under some assumptions on the penalty parameter and the objective function, we prove that each bounded sequence generated by the proposed method converges globally to a critical point of the augmented Lagrangian function associated with the given problem. Moreover, we report preliminary numerical results on [Formula: see text] regularized sparsity optimization and nonconvex feasibility problems to indicate the feasibility and effectiveness of the proposed method.
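
The symmetric update order, with the dual step split around the second block, can be sketched as below. The two block minimizers are left abstract since they depend on the problem; the interface and the dual step fractions (s, t) are assumptions for illustration, with (s, t) = (0, 1) recovering the classic ADMM.

```python
import numpy as np

def symmetric_admm(argmin_x, argmin_z, A, B, c, z0, lam0,
                   beta=1.0, s=0.5, t=0.5, iters=100):
    """Schematic for min f(x) + g(z) s.t. Ax + Bz = c.
    argmin_x(z, lam, beta) / argmin_z(x, lam, beta) are caller-supplied
    minimizers of the augmented Lagrangian over each block."""
    z, lam = z0.copy(), lam0.copy()
    for _ in range(iters):
        x = argmin_x(z, lam, beta)                  # x-block update
        lam = lam + s * beta * (A @ x + B @ z - c)  # first dual step
        z = argmin_z(x, lam, beta)                  # z-block update
        lam = lam + t * beta * (A @ x + B @ z - c)  # second dual step
    return x, z, lam
```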


Author(s): Hong-Shuang Li, Qiao-Yue Dong, Jiao-Yang Yuan

Stochastic optimization methods have been widely employed to solve structural design optimization problems over the past two decades, especially for truss structures. The primary aim of this study is to introduce a design optimization method that combines an augmented Lagrangian function with teaching–learning-based optimization for truss and non-truss structural design optimization. The augmented Lagrangian function serves as the constraint-handling tool in the proposed method, converting a constrained optimization problem into an unconstrained one; teaching–learning-based optimization is then employed to solve the transformed, unconstrained problems. Since proper values of the Lagrangian multipliers and penalty factors are unknown in advance, the proposed method is implemented iteratively to avoid having to select them: the multipliers and penalty factors are updated automatically according to the violation level of all constraints, as sketched below. To examine its performance, the method is applied to a group of benchmark truss optimization problems and a group of non-truss optimization problems for aircraft wing structures. The computational results are compared with those produced by other versions of teaching–learning-based optimization and by other stochastic optimization methods.
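
A hedged sketch of that outer loop follows, using the standard augmented Lagrangian for inequality constraints g(x) <= 0. Here `minimize(obj, x)` stands for any black-box solver of the unconstrained subproblem (teaching–learning-based optimization in the paper; a placeholder here), and the update rules are one common choice, not necessarily the authors' exact schedule.

```python
import numpy as np

def augmented_lagrangian(f, g, x, lam, r):
    # Augmented Lagrangian for inequality constraints g(x) <= 0:
    # f(x) + (1/(2r)) * sum_i [ max(0, lam_i + r*g_i(x))^2 - lam_i^2 ]
    gi = np.asarray(g(x))
    return f(x) + np.sum(np.maximum(0.0, lam + r * gi)**2 - lam**2) / (2.0 * r)

def al_outer_loop(f, g, x0, minimize, n_con, outer=20, r=10.0, tol=1e-4):
    lam, x = np.zeros(n_con), x0
    for _ in range(outer):
        x = minimize(lambda z: augmented_lagrangian(f, g, z, lam, r), x)
        gi = np.asarray(g(x))
        lam = np.maximum(0.0, lam + r * gi)    # automatic multiplier update
        if np.max(np.maximum(gi, 0.0)) > tol:  # constraints still violated:
            r *= 2.0                           # increase the penalty factor
    return x
```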


2017 · Vol 2017 · pp. 1-9
Author(s): Hao Zhang, Qin Ni

We propose a new method for equality constrained optimization based on the augmented Lagrangian method. We construct an unconstrained subproblem by adding an adaptive quadratic term to the quadratic model of the augmented Lagrangian function and, at each iteration, solve this subproblem to obtain the trial step. The main feature of this work is that the subproblem can be solved more easily. Numerical results show that the method is effective.
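
The appeal of this construction is that the subproblem becomes a shifted linear system: minimizing a quadratic model plus (sigma/2)||d||^2 is solved by one regularized Newton-type step. A minimal sketch with assumed names (g for the model gradient, B for its Hessian approximation; not the paper's notation):

```python
import numpy as np

def trial_step(g, B, sigma):
    """min_d  g^T d + 0.5 d^T B d + 0.5*sigma*||d||^2
    => (B + sigma*I) d = -g, well posed once sigma is large enough."""
    return np.linalg.solve(B + sigma * np.eye(g.size), -g)
```

The adaptive parameter sigma acts much like a trust-region radius: increasing it both regularizes an indefinite B and shortens the trial step.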


2011 · Vol 467-469 · pp. 877-881
Author(s): Ai Ping Jiang, Feng Wen Huang

In this paper, two modifications are proposed for solving nonlinear optimization problems (NLP) based on Fletcher and Leyffer's filter method, which differs from traditional merit functions with penalty terms. First, we modify one component of the filter pairs, using an NCP function in place of the constraint violation function in order to avoid the difficulty of selecting penalty parameters; we also prove that the modified algorithm is globally and superlinearly convergent under certain conditions. Second, we convert the objective function into an augmented Lagrangian function to handle the incompatibility that the subproblems can cause.
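
For context, an NCP function is any phi with phi(a, b) = 0 iff a >= 0, b >= 0, and ab = 0; the Fischer-Burmeister function is one standard choice. Below is a hedged illustration of such a function together with a generic filter acceptance test; the authors' precise filter envelope may differ.

```python
import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = 0  iff  a >= 0, b >= 0, a*b = 0 (complementarity).
    return np.sqrt(a**2 + b**2) - a - b

def acceptable(pair, filt, gamma=1e-5):
    """A point with pair (theta, f) is acceptable to the filter if no
    stored entry (th, fl) dominates it, up to a small margin gamma."""
    theta, f = pair
    return all(theta <= (1 - gamma) * th or f <= fl - gamma * th
               for th, fl in filt)
```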

