The superlinear convergence of a new quasi-Newton-SQP method for constrained optimization

2008 ◽ Vol 196 (2) ◽ pp. 791-801
Author(s): Zengxin Wei, Liying Liu, Shengwei Yao

2007 ◽ Vol 7 (10) ◽ pp. 1422-1427
Author(s): Ajuan Ren, Fujian Duan, Zhibin Zhu, Zhijun Luo

2019 ◽ Vol 74 (1) ◽ pp. 177-194
Author(s): Jae Hwa Lee, Yoon Mo Jung, Ya-xiang Yuan, Sangwoon Yun

Author(s): Andreas Griewank

Iterative methods for solving a square system of nonlinear equations g(x) = 0 often require that the sum-of-squares residual γ(x) ≡ ½∥g(x)∥² be reduced at each step. Since the gradient of γ depends on the Jacobian ∇g, this stabilization strategy is not easily implemented if only approximations Bₖ to ∇g are available. Therefore most quasi-Newton algorithms either include special updating steps or reset Bₖ to a divided-difference estimate of ∇g whenever no satisfactory progress is made. Here the need for such back-up devices is avoided by a derivative-free line search in the range of g. Assuming that the Bₖ are generated from an arbitrary B₀ by fixed-scale updates, we establish superlinear convergence from within any compact level set of γ on which g has a differentiable inverse function g⁻¹.
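
As a rough illustration of the kind of iteration the abstract describes, the following Python sketch combines Broyden's rank-one update of a Jacobian approximation Bₖ with a backtracking line search that monitors only the residual norm ∥g∥, so neither the Jacobian ∇g nor the gradient of γ is ever formed. The function name, tolerances, backtracking rule, and test system are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def broyden_residual_linesearch(g, x0, B0=None, tol=1e-10, max_iter=100):
    """Minimal sketch: Broyden-type quasi-Newton iteration for g(x) = 0
    with a derivative-free backtracking search on the residual norm."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    B = np.eye(n) if B0 is None else np.array(B0, dtype=float)  # arbitrary B0
    gx = g(x)
    for _ in range(max_iter):
        if np.linalg.norm(gx) < tol:
            break
        # Quasi-Newton direction: solve B d = -g(x).
        d = np.linalg.solve(B, -gx)
        # Derivative-free backtracking: take the largest step in
        # {1, 1/2, 1/4, ...} that reduces ||g||; no gradient of γ is used.
        alpha, gnew = 1.0, g(x + d)
        while np.linalg.norm(gnew) >= np.linalg.norm(gx) and alpha > 1e-8:
            alpha *= 0.5
            gnew = g(x + alpha * d)
        s = alpha * d
        y = gnew - gx
        # Broyden's rank-one ("good") update of the Jacobian approximation.
        B += np.outer(y - B @ s, s) / (s @ s)
        x, gx = x + s, gnew
    return x

# Example: solve x0^2 + x1^2 = 2, x0 - x1 = 0 (a root at (1, 1)).
g = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
print(broyden_residual_linesearch(g, np.array([2.0, 0.5])))
```

A genuine derivative-free line search "in the range of g" in the paper's sense is more delicate than this; plain backtracking on ∥g∥ is used here only to keep the sketch self-contained.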

