Hybrid Levenberg–Marquardt and weak-constraint ensemble Kalman smoother method

2016 · Vol 23 (2) · pp. 59–73
Author(s): J. Mandel, E. Bergou, S. Gürol, S. Gratton, I. Kasanický

Abstract. The ensemble Kalman smoother (EnKS) is used as a linear least-squares solver in the Gauss–Newton method for the large nonlinear least-squares system in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Furthermore, adding a regularization term results in replacing the Gauss–Newton method, which may diverge, by the Levenberg–Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 model and a two-level quasi-geostrophic model.
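The regularization-as-observation device admits a one-line statement (the notation below is ours, a sketch rather than the paper's exact formulation): the damped linear subproblem for the increment \(\delta x\) is the same least-squares problem as the undamped one augmented with a fictitious observation,

\[
\min_{\delta x}\; \|H\,\delta x - d\|_{R^{-1}}^{2} + \lambda\,\|\delta x\|^{2}
\;=\;
\min_{\delta x}\; \left\| \begin{pmatrix} H \\ I \end{pmatrix}\delta x - \begin{pmatrix} d \\ 0 \end{pmatrix} \right\|_{\widetilde{R}^{-1}}^{2},
\qquad
\widetilde{R} = \begin{pmatrix} R & 0 \\ 0 & \lambda^{-1} I \end{pmatrix},
\]

so the EnKS absorbs the Levenberg–Marquardt term by assimilating one extra observation of the increment with observation operator \(I\), value \(0\), and covariance \(\lambda^{-1} I\).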

2015 · Vol 2 (3) · pp. 865–902
Author(s): J. Mandel, E. Bergou, S. Gürol, S. Gratton

Abstract. We propose to use the ensemble Kalman smoother (EnKS) as the linear least-squares solver in the Gauss–Newton method for the large nonlinear least-squares problem in incremental 4DVAR. The ensemble approach is naturally parallel over the ensemble members and no tangent or adjoint operators are needed. Further, adding a regularization term results in replacing the Gauss–Newton method, which may diverge, by the Levenberg–Marquardt method, which is known to be convergent. The regularization is implemented efficiently as an additional observation in the EnKS. The method is illustrated on the Lorenz 63 and the two-level quasi-geostrophic model problems.
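For concreteness, a minimal sketch of the outer iteration described above, with the EnKS replaced by a dense stacked least-squares solve (the function names and the damping-update policy are ours, not the paper's):

import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1.0, max_iter=50, tol=1e-8):
    # Generic LM outer loop. The linear subproblem is solved by dense
    # least squares here; in the paper's method it is solved by an EnKS
    # with the regularization appended as an extra observation.
    x = x0.copy()
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # LM subproblem min ||J dx + r||^2 + lam ||dx||^2, written as a
        # stacked (augmented) linear least-squares system.
        A = np.vstack([J, np.sqrt(lam) * np.eye(x.size)])
        b = np.concatenate([-r, np.zeros(x.size)])
        dx = np.linalg.lstsq(A, b, rcond=None)[0]
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                  # reject step, increase damping
        if np.linalg.norm(dx) < tol:
            break
    return x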


Author(s): S. J. Wright, J. N. Holt

Abstract. A method for solving problems of the form \(\min_x \tfrac{1}{2}\sum_{i=1}^{m} f_i(x)^2\) is presented. The approach of Levenberg and Marquardt is used, except that the linear least-squares subproblem arising at each iteration is not solved exactly, but only to within a certain tolerance. The method is most suited to problems in which the Jacobian matrix is sparse. Use is made of the iterative algorithm LSQR of Paige and Saunders for sparse linear least squares. A global convergence result can be proven, and under certain conditions it can be shown that the method converges quadratically when the sum of squares at the optimal point is zero. Numerical test results for problems of varying residual size are given.
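A minimal sketch of the inexact subproblem solve, assuming SciPy's implementation of LSQR; routing the Levenberg–Marquardt damping through LSQR's damp argument and loosening atol/btol stands in for "solved only to within a certain tolerance", and is not the authors' exact stopping rule:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

def inexact_lm_step(J, r, lam, tol):
    # One inexact LM step: approximately solve
    #   min_p ||J p + r||^2 + lam * ||p||^2
    # with LSQR. damp=sqrt(lam) adds exactly the lam*||p||^2 term, and
    # atol/btol control how inexactly the subproblem is solved.
    result = lsqr(csr_matrix(J), -r, damp=np.sqrt(lam), atol=tol, btol=tol)
    return result[0]  # the approximate step p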


2021 · Vol 7 (1) · pp. 1241–1256
Author(s): Lin Zheng, Liang Chen, Yanfang Ma, ...

Abstract. The Levenberg–Marquardt method is one of the most important methods for solving systems of nonlinear equations and nonlinear least-squares problems. It enjoys a quadratic convergence rate under the local error bound condition. Recently, to solve the nonzero-residue nonlinear least-squares problem, Behling et al. proposed a modified Levenberg–Marquardt method with at least superlinear convergence under a new error bound condition [3]. To extend their results to systems of nonlinear equations, we choose the LM parameters adaptively and propose an efficient variant of the Levenberg–Marquardt method, proving its quadratic convergence under the new error bound condition. We also investigate its global convergence by using the Wolfe line search. The effectiveness of the new method is validated by some numerical experiments.
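A hedged sketch of one way to choose the LM parameter adaptively for F(x) = 0; the rule lam_k = mu * ||F(x_k)||^2 is a standard choice in this literature and not necessarily the paper's, and the Wolfe line-search safeguard is only indicated by a comment:

import numpy as np

def adaptive_lm(F, JF, x0, mu=1.0, max_iter=100, tol=1e-10):
    # LM for the nonlinear system F(x) = 0 with an adaptively chosen
    # parameter lam_k tied to the current residual norm.
    x = x0.copy()
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = JF(x)
        lam = mu * np.dot(Fx, Fx)  # adaptive LM parameter
        # LM step: solve (J^T J + lam I) d = -J^T F(x).
        d = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ Fx)
        x = x + d  # a Wolfe line search would safeguard this update
    return x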


Author(s): Karl Kunisch, Philip Trautmann

Abstract. In this work we discuss the reconstruction of cardiac activation instants based on a viscous Eikonal equation from boundary observations. The problem is formulated as a least-squares problem and solved by a projected version of the Levenberg–Marquardt method. Moreover, we analyze the well-posedness of the state equation and derive the gradient of the least-squares functional with respect to the activation instants. In the numerical examples we also conduct an experiment in which the location of the activation sites and the activation instants are reconstructed jointly, based on an adapted version of the shape gradient method from (J. Math. Biol. 79, 2033–2068, 2019). We are able to reconstruct the activation instants as well as the locations of the activations with high accuracy relative to the noise level.
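As a rough illustration of the projected iteration (the box constraints and clipping-based projection below are our simplification; the paper's admissible set for the activation instants may differ):

import numpy as np

def projected_lm(residual, jacobian, x0, lower, upper, lam=1e-2, max_iter=50):
    # Projected LM sketch: take an ordinary LM step, then project the
    # iterate back onto the box [lower, upper].
    x = np.clip(x0, lower, upper)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        d = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        x = np.clip(x + d, lower, upper)  # projection step
    return x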


2020 · Vol 48 (4) · pp. 987–1003
Author(s): Hans Georg Bock, Jürgen Gutekunst, Andreas Potschka, María Elena Suaréz Garcés

Abstract. Just as the damped Newton method for the numerical solution of nonlinear algebraic problems can be interpreted as forward Euler timestepping on the Newton flow equations, the damped Gauß–Newton method for nonlinear least-squares problems is equivalent to forward Euler timestepping on the corresponding Gauß–Newton flow equations. We highlight the advantages of the Gauß–Newton flow and the Gauß–Newton method from a statistical and a numerical perspective in comparison with the Newton method, steepest descent, and the Levenberg–Marquardt method, which are respectively equivalent to Newton flow forward Euler, gradient flow forward Euler, and gradient flow backward Euler. We finally show an unconditional descent property for a generalized Gauß–Newton flow, which is linked to Krylov–Gauß–Newton methods for large-scale nonlinear least-squares problems. We provide numerical results for large-scale problems: an academic generalized Rosenbrock function and a real-world bundle adjustment problem from 3D reconstruction based on 2D images.
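The flow interpretation is easy to demonstrate numerically: forward Euler on the Gauß–Newton flow x'(t) = -(J^T J)^{-1} J^T r(x) with step size h is the damped Gauß–Newton method, and h = 1 recovers the classical iteration. A small self-contained sketch on the two-dimensional Rosenbrock function (our example, not the paper's generalized variant):

import numpy as np

def gauss_newton_flow_euler(residual, jacobian, x0, h=0.5, steps=200):
    # Forward Euler timestepping on the Gauss-Newton flow; each step is
    # a damped Gauss-Newton update with damping factor h.
    x = x0.copy()
    for _ in range(steps):
        r = residual(x)
        J = jacobian(x)
        x = x - h * np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Rosenbrock in residual form: r(x) = (10*(x2 - x1^2), 1 - x1).
residual = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
jacobian = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
print(gauss_newton_flow_euler(residual, jacobian, np.array([-1.2, 1.0])))  # -> approx [1. 1.]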

