A family of two-point methods with memory for solving nonlinear equations

2011 ◽  
Vol 5 (2) ◽  
pp. 298-317 ◽  
Author(s):  
Miodrag Petkovic ◽  
Jovana Dzunic ◽  
Ljiljana Petkovic

An efficient family of two-point derivative-free methods with memory for solving nonlinear equations is presented. It is proved that the convergence order of the proposed family is increased from 4 to at least 2 + √6 ≈ 4.45, 5, (5 + √33)/2 ≈ 5.37 and 6, depending on the accelerating technique. The increase of convergence order is attained by a suitable accelerating technique that varies a free parameter in each iteration. The improvement of the convergence rate is achieved without any additional function evaluations, meaning that the proposed methods with memory are very efficient. Moreover, the presented methods are more efficient than all existing two-point methods and the three-point methods of optimal order eight known in the literature. Numerical examples and comparisons with existing two-point methods are included to confirm the theoretical results and the high computational efficiency. 2010 Mathematics Subject Classification: 65H05.
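As an illustrative sketch (not the authors' exact family), the classical Traub–Steffensen iteration with a self-accelerating parameter shows the mechanism the abstract describes: a free parameter γ is updated in each iteration from quantities already computed, which raises the R-order from 2 to 1 + √2 without any extra function evaluations.

```python
# Illustrative sketch, NOT the paper's fourth-order family: the classical
# Traub-Steffensen iteration with memory. The free parameter gamma is
# re-estimated each step as -1/f[x_k, x_{k+1}] ~ -1/f'(alpha), reusing
# function values that were computed anyway.

def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        w = x + gamma * fx                    # auxiliary point
        fw = f(w)
        if fw == fx:                          # divided difference would vanish
            break
        x_new = x - fx * (w - x) / (fw - fx)  # derivative-free Newton-like step
        fx_new = f(x_new)
        if fx_new != fx:
            # self-accelerating update of the free parameter
            gamma = -(x_new - x) / (fx_new - fx)
        x, fx = x_new, fx_new
        if abs(fx) < tol:
            break
    return x

root = steffensen_with_memory(lambda t: t**3 - 2*t - 5, 2.5)
```

The with-memory update costs nothing beyond the two function evaluations per step that the base scheme already uses, which is the efficiency argument the abstract makes.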

2018 ◽  
Vol 15 (03) ◽  
pp. 1850010 ◽  
Author(s):  
Janak Raj Sharma ◽  
Ioannis K. Argyros ◽  
Deepak Kumar

We develop a general class of derivative-free iterative methods with optimal order of convergence in the sense of the Kung–Traub hypothesis for solving nonlinear equations. The methods possess a very simple design, which makes them easy to remember and hence easy to implement. The methodology is based on the quadratically convergent Traub–Steffensen scheme and is further developed by using Padé approximation. Local convergence analysis is provided to show that the iterations are locally well defined and convergent. Numerical examples are provided to confirm the theoretical results and to show the good performance of the new methods.
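For reference, a minimal sketch of the quadratically convergent Traub–Steffensen base scheme the abstract names: Newton's step with f′(x) replaced by the divided difference f[x, w], where w = x + βf(x) for a fixed nonzero parameter β (the paper's Padé-based higher-order steps are not reproduced here).

```python
# Quadratically convergent Traub-Steffensen base step (a sketch of the
# starting scheme, not the full optimal-order class built on top of it).

def traub_steffensen(f, x0, beta=-0.5, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx                 # auxiliary point, beta != 0 fixed
        fw = f(w)
        if fw == fx:                      # avoid division by zero
            break
        x = x - fx * (w - x) / (fw - fx)  # f'(x) replaced by f[x, w]
    return x
```

Because no derivative appears, the scheme applies to functions whose derivative is unavailable or expensive, at the cost of two function evaluations per iteration.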


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
T. Lotfi ◽  
F. Soleymani ◽  
Z. Noori ◽  
A. Kılıçman ◽  
F. Khaksar Haghani

Two families of derivative-free methods without memory for approximating a simple zero of a nonlinear equation are presented. The proposed schemes have an accelerator parameter with the property that it can increase the convergence rate without any new functional evaluations. In this way, we construct a method with memory that considerably increases the efficiency index, from 8^(1/4) ≈ 1.682 to 12^(1/4) ≈ 1.861. Numerical examples and comparison with the existing methods are included to confirm theoretical results and high computational efficiency.
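The quoted figures follow from Ostrowski's efficiency index E = p^(1/n), where p is the convergence order and n the number of function evaluations per iteration; a one-line check:

```python
# Ostrowski efficiency index: order p achieved at a cost of n function
# evaluations per iteration.

def efficiency_index(order, evals):
    return order ** (1.0 / evals)

print(efficiency_index(8, 4))   # optimal eighth order, 4 evals -> ~1.682
print(efficiency_index(12, 4))  # with-memory variant, still 4 evals -> ~1.861
```

The with-memory method wins precisely because the order rises from 8 to 12 while n stays at 4.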


2015 ◽  
Vol 12 (01) ◽  
pp. 1350093 ◽  
Author(s):  
J. R. Sharma ◽  
Puneet Gupta

We present derivative-free multipoint methods of optimal eighth and sixteenth order convergence for solving nonlinear equations. The schemes are based on the derivative-free two-point methods proposed by Petković et al. [Petković, M. S., Džunić, J. and Petković, L. D. [2011] "A family of two-point methods with memory for solving nonlinear equations," Appl. Anal. Discrete Math. 5, 298–317], which are further developed by using rational approximations. Extending the work further, we explore four-point methods with memory whose order of convergence increases over the basic four-point scheme without memory. The order is increased from 16 of the basic method to 20, 22, 23, 23.662, and 24 by suitable variation of a free parameter in each iterative step. This increase in the convergence order is achieved without any additional function evaluations; therefore, the methods with memory possess better computational efficiency than the methods without memory. Numerical examples are presented and the performance is compared with the existing optimal three- and four-point methods. Computational results and comparison with the existing methods confirm the efficient and robust character of the present methods.


2014 ◽  
Vol 11 (05) ◽  
pp. 1350078 ◽  
Author(s):  
XIAOFENG WANG ◽  
TIE ZHANG

In this paper, we present some three-point Newton-type iterative methods without memory for solving nonlinear equations by using the undetermined coefficients method. The order of convergence of the new methods without memory is eight, requiring the evaluations of three functions and one first-order derivative per full iteration. Hence, the new methods are optimal according to Kung and Traub's conjecture. Based on the presented methods without memory, we present two families of Newton-type iterative methods with memory. Further accelerations of convergence speed are obtained by using a self-accelerating parameter. This self-accelerating parameter is calculated by the Hermite interpolating polynomial and is applied to improve the order of convergence of the Newton-type method. The corresponding R-order of convergence is increased from 8 to 9, [Formula: see text] and 10. The increase of convergence order is attained without any additional calculations, so that the two families of methods with memory possess a very high computational efficiency. Numerical examples are demonstrated to confirm the theoretical results.


2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Rajinder Thukral

A new family of eighth-order derivative-free methods for solving nonlinear equations is presented. It is proved that these methods have the convergence order of eight. These new methods are derivative-free and only use four evaluations of the function per iteration. In fact, we have obtained the optimal order of convergence, which supports the Kung and Traub conjecture. Kung and Traub conjectured that multipoint iteration methods without memory based on n evaluations could achieve optimal convergence order 2^(n−1). Thus, we present new derivative-free methods which agree with the Kung and Traub conjecture for n = 4. Numerical comparisons are made to demonstrate the performance of the methods presented.


Mathematics ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 108 ◽  
Author(s):  
Xiaofeng Wang ◽  
Yuxi Tao

A new Newton method with memory is proposed by using a variable self-accelerating parameter. Firstly, a modified Newton method without memory with an invariant parameter is constructed for solving nonlinear equations. Substituting the invariant parameter of the Newton method without memory by a variable self-accelerating parameter, we obtain a novel Newton method with memory. The convergence order of the new Newton method with memory is 1 + √2. The acceleration of the convergence rate is attained without any additional function evaluations. The main innovation is that the self-accelerating parameter is constructed in a simple way. Numerical experiments show that the presented method has faster convergence speed than existing methods.
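A claimed order such as 1 + √2 ≈ 2.414 can be checked empirically with the usual computational order of convergence (COC); the helper below (an illustrative utility, not taken from the paper) estimates it from three consecutive errors e_k = |x_k − α| of any iterative run.

```python
# Computational order of convergence (COC): if e_{k+1} ~ C * e_k**p, then
# log(e_{k+1}/e_k) / log(e_k/e_{k-1}) tends to p as the iteration converges.

import math

def coc(errors):
    # use the last three nonzero errors of the run
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)
```

For an exactly quadratic error sequence such as [1e-2, 1e-4, 1e-8], the estimate recovers p ≈ 2; applied to the last iterates of a with-memory run, it should approach 2.414.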


Mathematics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 540 ◽  
Author(s):  
Xiaofeng Wang ◽  
Qiannan Fan

In this paper, a self-accelerating type method is proposed for solving nonlinear equations, which is a modification of Ren's method. A simple way is applied to construct a variable self-accelerating parameter of the new method, which does not increase any computational costs. The highest convergence order of the new method is 2 + √6 ≈ 4.4495. Numerical experiments are made to show the performance of the new method, which supports the theoretical results.


2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
R. Thukral

A new family of eighth-order derivative-free methods for solving nonlinear equations is presented. It is proved that these methods have the convergence order of eight. These new methods are derivative-free and only use four evaluations of the function per iteration. In fact, we have obtained the optimal order of convergence, which supports the Kung and Traub conjecture. Kung and Traub conjectured that multipoint iteration methods without memory based on n evaluations could achieve optimal convergence order 2^(n−1). Thus, we present new derivative-free methods which agree with the Kung and Traub conjecture for n = 4. Numerical comparisons are made to demonstrate the performance of the methods presented.


2014 ◽  
Vol 2014 ◽  
pp. 1-9
Author(s):  
T. Lotfi ◽  
K. Mahdiani ◽  
Z. Noori ◽  
F. Khaksar Haghani ◽  
S. Shateyi

A class of derivative-free methods without memory for approximating a simple zero of a nonlinear equation is presented. The proposed class uses four function evaluations per iteration with convergence order eight. Therefore, it is an optimal three-step scheme without memory based on the Kung–Traub conjecture. Moreover, the proposed class has an accelerator parameter with the property that it can increase the convergence rate from eight to twelve without any new functional evaluations. Thus, we construct a method with memory that considerably increases the efficiency index, from 8^(1/4) ≈ 1.682 to 12^(1/4) ≈ 1.861. Illustrations are also included to support the underlying theory.


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
M. Sharifi ◽  
S. Karimi Vanani ◽  
F. Khaksar Haghani ◽  
M. Arab ◽  
S. Shateyi

The purpose of this paper is to derive and discuss a three-step iterative expression for solving nonlinear equations. In fact, we derive a derivative-free form for one of the existing optimal eighth-order methods and preserve its convergence order. Theoretical results will be upheld by numerical experiments.

