An Efficient Three-Term Iterative Method for Estimating Linear Approximation Models in Regression Analysis

Mathematics ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 977
Author(s):  
Siti Farhana Husin ◽  
Mustafa Mamat ◽  
Mohd Asrul Hery Ibrahim ◽  
Mohd Rivaie

This study employs exact-line-search iterative algorithms for solving large-scale unconstrained optimization problems, in which the search direction is a three-term modification of an iterative method with two different scaled parameters. The objective of this research is to assess the effectiveness of the new directions both theoretically and numerically. The sufficient descent property and global convergence of the suggested methods are established. For the numerical experiments, the methods are compared with a previous well-known three-term iterative method, and each method is evaluated over the same set of test problems with different initial points. Numerical results show that the proposed three-term methods are more efficient than the existing method. These methods can also produce an approximate linear regression equation for solving the regression model. The findings of this study contribute to a better understanding of the applicability of numerical algorithms for estimating regression models.
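The abstract does not give the paper's two scaled parameters, so the following is only a minimal sketch of a generic three-term direction d_k = -g_k + beta_k*d_{k-1} - theta_k*y_{k-1} with illustrative HS-type coefficients, run on a convex quadratic where the exact line-search step has a closed form; all names and coefficient choices here are placeholders, not the authors' method.

```python
# A minimal sketch of a three-term CG iteration with exact line search,
# applied to a convex quadratic f(x) = 0.5 x^T A x - b^T x (test problem).
# The paper's specific scaled parameters are not in the abstract; the
# beta/theta choices below are illustrative placeholders.
import numpy as np

def three_term_cg(A, b, x0, tol=1e-8, max_iter=1000):
    x = x0.copy()
    g = A @ x - b                          # gradient of the quadratic
    d = -g                                 # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ (A @ d))   # exact line search on a quadratic
        x_new = x + alpha * d
        g_new = A @ x_new - b
        y = g_new - g                      # gradient difference
        # Illustrative scaled parameters (placeholders, not the paper's):
        beta = (g_new @ y) / (d @ y)       # HS-type coefficient
        theta = (g_new @ d) / (d @ y)      # scaling on the y term
        d = -g_new + beta * d - theta * y  # three-term direction
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(three_term_cg(A, b, np.zeros(2)))    # ≈ np.linalg.solve(A, b)
```

With this particular theta, the new direction satisfies d_k^T g_k = -||g_k||^2 exactly, which is one standard way such three-term updates secure sufficient descent.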

Lately, nonlinear conjugate gradient (CG) methods have been relied upon for many large-scale unconstrained optimization problems. Many areas, such as engineering and computer science, have benefited from their simplicity, speed, and low memory requirements. Many modified coefficients have appeared recently, all of which aim to improve these methods. This paper considers an extension of the Polak–Ribière–Polyak conjugate gradient method using exact line search and shows that it retains properties such as sufficient descent and global convergence. A set of 113 test problems is used to evaluate the performance of the proposed method, which is compared with other existing methods using the same line search.
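For reference, here is a minimal sketch of the classical Polak–Ribière–Polyak coefficient that the proposed extension builds on; the extension itself is not given in the abstract, so only the base formula is shown.

```python
# The classical PRP coefficient and the CG direction update it feeds into.
# Only the standard formula is shown; the paper's extension is not reproduced.
import numpy as np

def beta_prp(g_new, g_old):
    """PRP coefficient: beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2."""
    return (g_new @ (g_new - g_old)) / (g_old @ g_old)

def prp_direction(g_new, g_old, d_old):
    """Direction update used with a line search: d_k = -g_k + beta_k d_{k-1}."""
    return -g_new + beta_prp(g_new, g_old) * d_old
```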


2014 ◽  
Vol 530-531 ◽  
pp. 367-371
Author(s):  
Ting Feng Li ◽  
Yu Ting Zhang ◽  
Sheng Hui Yan

In this paper, a modified limited-memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses a global convergence property even without a convexity assumption on the objective function. Implementations of the algorithm on the CUTE test problems are reported; the results suggest that a slight improvement has been achieved.
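The abstract does not reproduce the modification itself, so the sketch below shows only the standard L-BFGS two-loop recursion that any such variant starts from, using the textbook curvature pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i.

```python
# Standard L-BFGS two-loop recursion: computes d = -H_k g from the m most
# recent curvature pairs without forming the inverse Hessian approximation.
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Return the search direction d = -H_k g (pairs stored oldest first)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop (newest first)
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((rho, a))
    if s_list:                                             # initial scaling H_0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (rho, a), (s, y) in zip(reversed(alphas),          # second loop (oldest first)
                                zip(s_list, y_list)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q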


2018 ◽  
Vol 7 (3.28) ◽  
pp. 92
Author(s):  
Talat Alkouli ◽  
Mustafa Mamat ◽  
Mohd Rivaie ◽  
Puspa Liza Ghazali

In this paper, an efficient modification of the nonlinear conjugate gradient method, and an associated implementation based on an exact line search, are proposed and analyzed for solving large-scale unconstrained optimization problems. The method satisfies the sufficient descent property, and a global convergence result is proved. Computational results for a set of unconstrained optimization test problems, some of them from the CUTE library, show that the new conjugate gradient algorithm converges more stably and outperforms other similar methods in many situations.
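Since the abstract leaves the modified coefficient unspecified, the sketch below illustrates only the exact-line-search component: the step size minimizes the one-dimensional function phi(alpha) = f(x + alpha*d). Using scipy.optimize.minimize_scalar is an implementation choice for illustration, not the paper's procedure.

```python
# An "exact" line search realized numerically: minimize phi(alpha) = f(x + alpha d)
# over a bounded interval with a scalar minimizer.
import numpy as np
from scipy.optimize import minimize_scalar

def exact_step(f, x, d, alpha_max=10.0):
    phi = lambda alpha: f(x + alpha * d)
    res = minimize_scalar(phi, bounds=(0.0, alpha_max), method="bounded")
    return res.x

# Example: for f(x) = ||x||^2, descending from x = (1, 1) along d = -x
# gives the exact minimizer alpha = 1.
f = lambda x: float(x @ x)
print(exact_step(f, np.ones(2), -np.ones(2)))   # ≈ 1.0
```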


2018 ◽  
Vol 13 (03) ◽  
pp. 2050059
Author(s):  
Amina Boumediene ◽  
Rachid Benzine ◽  
Mohammed Belloufi

Nonlinear conjugate gradient (CG) methods are widely used for solving large-scale unconstrained optimization problems. Many studies have been devoted to developing and improving these methods. In this paper, we aim to study the global convergence of the BBB conjugate gradient method with exact line search.


2018 ◽  
Vol 7 (3.28) ◽  
pp. 84
Author(s):  
Nurul Aini ◽  
Nurul Hajar ◽  
Mohd Rivaie ◽  
Mustafa Mamat

The conjugate gradient (CG) method is a well-known solver for large-scale unconstrained optimization problems. In this paper, a modified CG method based on the AMR* and CD methods is presented. The resulting algorithm is proved to be globally convergent under exact line search and some mild conditions. Comparisons of numerical performance are made between the new method and four other CG methods. The results show that the proposed method is more efficient.
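Of the two ingredients, only the CD (Conjugate Descent) coefficient of Fletcher has a standard published form; the AMR* formula and the paper's hybrid combination rule are not given in the abstract, so the sketch below shows the CD part alone.

```python
# Fletcher's Conjugate Descent (CD) coefficient; the AMR* coefficient and
# the hybrid rule of the paper are not reproduced here.
import numpy as np

def beta_cd(g_new, g_old, d_old):
    """CD coefficient: beta_k = ||g_k||^2 / (-d_{k-1}^T g_{k-1})."""
    return (g_new @ g_new) / (-(d_old @ g_old))

def cd_direction(g_new, g_old, d_old):
    """Direction update used with a line search: d_k = -g_k + beta_k d_{k-1}."""
    return -g_new + beta_cd(g_new, g_old, d_old) * d_old
```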


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Min Sun ◽  
Jing Liu

Recently, Zhang et al. proposed a sufficient descent Polak–Ribière–Polyak (SDPRP) conjugate gradient method for large-scale unconstrained optimization problems and proved its global convergence in the sense that $\liminf_{k\to\infty}\|\nabla f(x_k)\|=0$ when an Armijo-type line search is used. In this paper, motivated by the line searches proposed by Shi et al. and Zhang et al., we propose two new Armijo-type line searches and show that the SDPRP method has strong convergence in the sense that $\lim_{k\to\infty}\|\nabla f(x_k)\|=0$ under the two new line searches. Numerical results are reported to show the efficiency of the SDPRP with the new Armijo-type line searches in practical computation.
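The two new Armijo-type rules are not spelled out in the abstract; the sketch below shows the classical Armijo backtracking test that such rules modify, with conventional parameter values as placeholders.

```python
# Classical Armijo backtracking line search: shrink alpha until the
# sufficient-decrease test f(x + alpha d) <= f(x) + sigma * alpha * g^T d holds.
import numpy as np

def armijo(f, x, d, g, sigma=1e-4, rho=0.5, alpha=1.0, max_backtracks=50):
    fx, slope = f(x), g @ d          # slope must be negative for a descent direction
    for _ in range(max_backtracks):
        if f(x + alpha * d) <= fx + sigma * alpha * slope:
            return alpha
        alpha *= rho                 # backtrack: geometric shrinking of the step
    return alpha
```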


2014 ◽  
Vol 19 (4) ◽  
pp. 469-490
Author(s):  
Hamid Esmaeili ◽  
Morteza Kimiaei

In this study, we propose a trust-region-based procedure for solving unconstrained optimization problems that takes advantage of a nonmonotone technique to introduce an efficient adaptive radius strategy. In our approach, the adaptive technique decreases the total number of iterations, while the structure of the nonmonotone formula helps us to handle large-scale problems. The new algorithm preserves global convergence and has quadratic convergence under suitable conditions. Preliminary numerical experiments on standard test problems indicate the efficiency and robustness of the proposed approach for solving unconstrained optimization problems.
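The paper's adaptive radius formula is not reproduced in the abstract; the sketch below shows only the generic nonmonotone acceptance ratio used in such methods (actual reduction measured against the maximum of the last M function values rather than f(x_k) alone) and a conventional shrink/expand radius update, with standard threshold values as placeholders.

```python
# Generic nonmonotone trust-region bookkeeping: acceptance ratio against a
# nonmonotone reference value, plus a conventional radius update rule.
def nonmonotone_ratio(f_hist, f_trial, pred_reduction, M=10):
    """rho = (max of the last M function values - f_trial) / predicted reduction."""
    f_ref = max(f_hist[-M:])            # nonmonotone reference value
    return (f_ref - f_trial) / pred_reduction

def update_radius(radius, rho, eta1=0.25, eta2=0.75):
    if rho < eta1:                      # poor model agreement: shrink the region
        return 0.5 * radius
    if rho > eta2:                      # good agreement: allow expansion
        return 2.0 * radius
    return radius                       # otherwise keep the radius unchanged
```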

