ON ITERATIVE METHODS FOR SOLVING NONLINEAR LEAST SQUARES PROBLEMS WITH OPERATOR DECOMPOSITION

2018 ◽  
Vol 26 ◽  
Author(s):  
S. Shakhno ◽  
H. Yarmola
Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 158
Author(s):  
Ioannis K. Argyros ◽  
Stepan Shakhno ◽  
Roman Iakymchuk ◽  
Halyna Yarmola ◽  
Michael I. Argyros

We develop a local convergence analysis of an iterative method for solving nonlinear least squares problems with operator decomposition under classical and generalized Lipschitz conditions. We consider both the zero- and nonzero-residual cases and determine the corresponding convergence orders. Two types of Lipschitz conditions (center and restricted-region conditions) are used to study the convergence of the method. Moreover, we obtain a larger radius of convergence and tighter error estimates than in previous works, thereby extending the applicability of the method under the same computational effort.
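The iterations studied in these papers are Gauss–Newton-type methods for min ‖F(x)‖². As a generic illustration only — not the authors' decomposition-based scheme, which additionally handles a nondifferentiable part of the operator — a plain Gauss–Newton step solves the linearized least squares problem at each iterate:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Gauss-Newton iteration for min ||residual(x)||^2.

    Each step solves the linearized least squares problem
        min_d || J(x) d + r(x) ||^2
    and updates x <- x + d.  Illustrative sketch only.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        # Least squares solution of J d = -r (handles m >= n residuals).
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Small zero-residual example (illustrative): r(x) = 0 at x = (1, 1).
r = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
x_star = gauss_newton(r, J, [2.0, 0.5])
```

The test system and all parameter values here are illustrative; the cited methods differ in how they split the operator into differentiable and nondifferentiable parts and in the Lipschitz conditions used for the convergence analysis.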


Author(s):  
Ioannis K. Argyros ◽  
Janak Raj Sharma ◽  
Deepak Kumar

Abstract The aim of this paper is to expand the applicability of four iterative methods for solving nonlinear least squares problems. The advantages obtained under the same computational cost as in earlier studies include a larger radius of convergence, tighter error bounds on the distances involved, and better information on the location of the solution.


Author(s):  
Yasunori Aoki ◽  
Ken Hayami ◽  
Kota Toshimoto ◽  
Yuichi Sugiyama

Abstract Parameter estimation problems for mathematical models can often be formulated as nonlinear least squares problems. Typically these problems are solved numerically using iterative methods. The local minimiser obtained using these iterative methods usually depends on the choice of the initial iterate, so the estimated parameter and any subsequent analyses using it depend on that choice as well. One way to reduce the analysis bias due to the choice of the initial iterate is to repeat the algorithm from multiple initial iterates (i.e. use a multi-start method). However, this procedure can be computationally intensive and is not always used in practice. To overcome this problem, we propose the Cluster Gauss–Newton (CGN) method, an efficient algorithm for finding multiple approximate minimisers of nonlinear least squares problems. CGN simultaneously solves the nonlinear least squares problem from multiple initial iterates. It then iteratively improves the approximations from these initial iterates similarly to the Gauss–Newton method, but uses a global linear approximation instead of the Jacobian. The global linear approximations are computed collectively among all the iterates to minimise the computational cost associated with evaluating the mathematical model. We use physiologically based pharmacokinetic (PBPK) models from pharmaceutical drug development to demonstrate its use, and show that CGN is computationally more efficient and more robust against local minima than the standard Levenberg–Marquardt method, as well as state-of-the-art multi-start and derivative-free methods.
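The collective update described in the abstract can be caricatured in a few lines: one shared linear surrogate r(x) ≈ Ax + b is fitted by least squares over the whole cluster of iterates, replacing per-point Jacobians, and is then reused for a Gauss–Newton step at every iterate. This is a deliberately simplified sketch under assumed details (fitting via `numpy.linalg.lstsq`, an affinely independent cluster), not the authors' CGN implementation:

```python
import numpy as np

def cluster_gauss_newton_sketch(residual, X0, iters=10):
    """Toy sketch of a cluster-style Gauss-Newton update.

    A single linear surrogate r(x) ~= A x + b is fitted by least
    squares over all iterates; each iterate then takes a
    Gauss-Newton step using the shared A.  Illustrative only.
    """
    X = np.array(X0, dtype=float)                 # cluster, (n_points, dim)
    for _ in range(iters):
        R = np.array([residual(x) for x in X])    # residuals, (n_points, m)
        # Fit the shared surrogate over the cluster: [X 1] C ~= R.
        D = np.hstack([X, np.ones((len(X), 1))])
        C, *_ = np.linalg.lstsq(D, R, rcond=None)
        A, b = C[:-1].T, C[-1]                    # surrogate slope and intercept
        # Gauss-Newton step for every iterate, reusing the shared A.
        for i in range(len(X)):
            step, *_ = np.linalg.lstsq(A, -R[i], rcond=None)
            X[i] += step
    return X

# Linear sanity check (illustrative): with r(x) = M x - y the surrogate
# fit is exact, so one collective step sends every iterate to the
# least squares solution of min ||M x - y||^2.
M = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
X = cluster_gauss_newton_sketch(lambda x: M @ x - y,
                                [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
                                iters=1)
```

A real implementation has to handle a degenerating cluster (nearly collinear iterates make the surrogate fit ill-conditioned) and the randomised resampling and weighting described in the paper; none of that is modelled here.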


Heliyon ◽  
2021 ◽  
pp. e07499
Author(s):  
Mahmoud Muhammad Yahaya ◽  
Poom Kumam ◽  
Aliyu Muhammed Awwal ◽  
Sani Aji
