On Regularization of Least Square Problems via Quadratic Constraints

Author(s):  
Majid Fozunbal
2011 ◽  
Vol 141 ◽  
pp. 92-97

Author(s):  
Miao Hu ◽  
Tai Yong Wang ◽  
Bo Geng ◽  
Qi Chen Wang ◽  
Dian Peng Li

Nonlinear least squares is a class of unconstrained optimization problems. To solve the least-squares trust-region subproblem, a globally convergent genetic algorithm (GA) was applied. Premature convergence of the GA was overcome by using the trust region method (TRM) to restrict the GA's search range, while the randomness of the genetic search increased the convergence rate. Finally, the banana (Rosenbrock) function was used as a test example, and the results demonstrate the practicality and precision of the algorithm.
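The scheme described above, an outer trust-region loop that confines and re-centers an inner genetic search, can be sketched as follows. This is a minimal illustration under assumed details (population size, blend crossover, elitist selection, a halve-on-reject radius rule), not the authors' implementation; the objective is the banana (Rosenbrock) function mentioned in the abstract.

```python
import random

def rosenbrock(x, y):
    """Banana-shaped test function; global minimum 0 at (1, 1)."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def ga_step(center, radius, pop_size=60, generations=40, seed=0):
    """Run a simple GA restricted to a box (the trust region) around `center`."""
    rng = random.Random(seed)
    # Initial population sampled uniformly inside the trust region.
    pop = [(center[0] + rng.uniform(-radius, radius),
            center[1] + rng.uniform(-radius, radius))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: rosenbrock(*p))
        elite = pop[: pop_size // 4]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()                  # blend crossover + Gaussian mutation
            x = w * a[0] + (1 - w) * b[0] + rng.gauss(0, 0.05 * radius)
            y = w * a[1] + (1 - w) * b[1] + rng.gauss(0, 0.05 * radius)
            # Clip mutated children back into the trust region.
            x = min(max(x, center[0] - radius), center[0] + radius)
            y = min(max(y, center[1] - radius), center[1] + radius)
            children.append((x, y))
        pop = elite + children
    return min(pop, key=lambda p: rosenbrock(*p))

def trust_region_ga(x0=(-1.2, 1.0), radius=1.0, outer_iters=15):
    """Outer trust-region loop: re-center the GA search box on accepted points."""
    center = x0
    for k in range(outer_iters):
        best = ga_step(center, radius, seed=k)
        if rosenbrock(*best) < rosenbrock(*center):
            center = best          # accept the step and re-center the region
        else:
            radius *= 0.5          # reject: shrink the trust region
    return center

best = trust_region_ga()
print(best, rosenbrock(*best))
```

Restricting the GA to the trust-region box is what curbs premature convergence here: the population cannot wander far from the current incumbent, and a rejected step tightens the search.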


Author(s):  
Ioannis K. Argyros ◽  
Santhosh George

Abstract We present a local convergence analysis of an inexact Gauss-Newton-like method (IGNLM) for solving nonlinear least-squares problems in a Euclidean space setting. The analysis is based on our new idea of restricted convergence domains. Using this idea, we obtain more precise information on the location of the iterates than in earlier studies, leading to smaller majorizing functions. Consequently, and at the same computational cost as in earlier studies, our approach offers the following advantages: a larger radius of convergence and more precise estimates on the distances involved in achieving a desired error tolerance. That is, we have a larger choice of initial points, and fewer iterations are needed to reach the error tolerance. Special cases and numerical examples are presented to illustrate these advantages.
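For context, a plain (exact) Gauss-Newton iteration for a small nonlinear least-squares problem looks like the sketch below; inexact Gauss-Newton-like methods such as the one analyzed here solve the linearized subproblem only approximately. The exponential-fit model, data, and starting point are illustrative assumptions, not taken from the paper.

```python
import math

# Noise-free data generated from y = 2 * exp(0.5 * t), so the fit is exact.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * t) for t in ts]

def residuals(a, b):
    """Residuals r_i(a, b) = a * exp(b * t_i) - y_i."""
    return [a * math.exp(b * t) - y for t, y in zip(ts, ys)]

def jacobian(a, b):
    """Rows (dr/da, dr/db) = (exp(b t), a t exp(b t))."""
    return [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]

def gauss_newton(a, b, iters=20):
    for _ in range(iters):
        r = residuals(a, b)
        J = jacobian(a, b)
        # Form and solve the 2x2 normal equations (J^T J) d = -J^T r exactly.
        g11 = sum(j1 * j1 for j1, _ in J)
        g12 = sum(j1 * j2 for j1, j2 in J)
        g22 = sum(j2 * j2 for _, j2 in J)
        b1 = -sum(j1 * ri for (j1, _), ri in zip(J, r))
        b2 = -sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * b1 - g12 * b2) / det
        db = (g11 * b2 - g12 * b1) / det
        a, b = a + da, b + db
    return a, b

a, b = gauss_newton(1.5, 0.4)
print(a, b)  # converges to (2.0, 0.5) on this zero-residual problem
```

Because the data are noise-free (a zero-residual problem), the iteration converges quadratically from a nearby starting point; the radius of convergence, i.e. how far that starting point may be from the solution, is exactly what the paper's analysis enlarges.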


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Guangbin Wang ◽  
Yanli Du ◽  
Fuping Tan

We present preconditioned generalized accelerated overrelaxation (GAOR) methods for solving weighted linear least-squares problems. We compare the spectral radii of the iteration matrices of the preconditioned and original methods. The comparison shows that the preconditioned GAOR methods converge faster than the GAOR method whenever the GAOR method converges. Finally, we give a numerical example confirming our theoretical results.
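The comparison above rests on a standard fact: a stationary iteration x_{k+1} = T x_k + c converges if and only if the spectral radius of T is below 1, and a smaller radius means faster convergence. The sketch below computes such radii for the ordinary AOR splitting (which GAOR generalizes) on a small stand-in system; it does not reproduce the paper's block structure or its specific preconditioner.

```python
import numpy as np

def aor_iteration_matrix(A, omega, gamma):
    """Iteration matrix of the AOR method for A x = b, using A = D - L - U:
       T = (D - gamma*L)^{-1} ((1-omega)*D + (omega-gamma)*L + omega*U)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)    # strictly lower part, sign convention A = D - L - U
    U = -np.triu(A, 1)     # strictly upper part
    M = D - gamma * L
    N = (1 - omega) * D + (omega - gamma) * L + omega * U
    return np.linalg.solve(M, N)

def spectral_radius(T):
    return max(abs(np.linalg.eigvals(T)))

# A small symmetric positive definite system standing in for the coefficient
# matrix of a weighted least-squares problem (illustrative choice).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

# omega = gamma = 1 recovers Gauss-Seidel; gamma = 0 recovers Jacobi.
rho_gs = spectral_radius(aor_iteration_matrix(A, omega=1.0, gamma=1.0))
rho_j  = spectral_radius(aor_iteration_matrix(A, omega=1.0, gamma=0.0))
print(rho_gs, rho_j)  # both < 1, and the Gauss-Seidel radius is smaller
```

Comparison theorems of the kind stated in the abstract prove exactly this ordering of spectral radii analytically, for the preconditioned versus unpreconditioned iteration matrices, rather than by computing eigenvalues numerically.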


10.3386/w0165 ◽  
1977 ◽  
Author(s):  
Gene Golub ◽  
Virginia Klema ◽  
G. Stewart
