Some substantial modifications and improvements for derivative-free iterative methods and derivative-free transformation for multiple zeros

2006 ◽  
Vol 181 (2) ◽  
pp. 1585-1599 ◽  
Author(s):  
Xinyuan Wu ◽  
Jianlin Xia


Axioms ◽
2019 ◽  
Vol 8 (2) ◽  
pp. 65 ◽  
Author(s):  
Deepak Kumar ◽  
Janak Raj Sharma ◽  
Clemente Cesarano

Numerous higher-order methods with derivative evaluations are available in the literature for computing multiple zeros. However, higher-order derivative-free methods for multiple zeros are very rare. Encouraged by this fact, we present a family of third-order derivative-free iterative methods for multiple zeros that require only three function evaluations per iteration. Convergence of the proposed class is demonstrated using a graphical tool, namely basins of attraction. Applicability of the methods is demonstrated through numerical experiments on different functions, which illustrate their efficient behavior. A comparison of numerical results shows that the presented iterative methods are good competitors to the existing techniques.
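The abstract does not reproduce the family's iteration functions. For orientation only, here is a minimal Python sketch of the classical derivative-free building block such schemes are typically built on, a Traub–Steffensen-type step for a zero of known multiplicity m; this baseline is second order and is an illustrative assumption, not the third-order family proposed in the paper.

```python
import math

def traub_steffensen_multiple(f, x0, m, beta=1.0, tol=1e-12, max_iter=50):
    """Derivative-free Traub-Steffensen-type step for a zero of known
    multiplicity m: f'(x) is replaced by the divided difference
    f[w, x] = (f(w) - f(x)) / (w - x) at the auxiliary point
    w = x + beta * f(x).  Second order; higher-order families add
    further substeps on top of this."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        w = x + beta * fx                  # Steffensen-type auxiliary point
        dd = (f(w) - fx) / (w - x)         # divided difference ~ f'(x)
        x_new = x - m * fx / dd            # modified Newton step, multiplicity m
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = (x - 1)^3 * exp(x) has a zero of multiplicity 3 at x = 1.
root = traub_steffensen_multiple(lambda x: (x - 1)**3 * math.exp(x), x0=1.2, m=3)
```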


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1242
Author(s):  
Ramandeep Behl ◽  
Sonia Bhalla ◽  
Eulalia Martínez ◽  
Majed Aali Alsulami

There is no doubt that the fourth-order King's family is one of the important ones among its counterparts. However, it has two major drawbacks: first, it requires the calculation of a first-order derivative; second, its order of convergence drops to linear in the case of multiple roots. To remedy these drawbacks, we suggest a new King-type family of iterative methods. The main features of our scheme are its optimal convergence order, freedom from derivatives, and applicability to multiple roots (m ≥ 2). In addition, we propose a main theorem that establishes the fourth order of convergence. The scheme also satisfies the optimal Kung–Traub conjecture for iterative methods without memory. We compared our scheme with the latest iterative methods of the same order of convergence on several real-life problems. The computational results show that our method exhibits superior behavior compared to the existing methods.
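The new scheme itself is not reproduced in the abstract. As background, a minimal sketch of the classical derivative-based fourth-order King's family for simple roots, i.e., the scheme whose two drawbacks the paper sets out to remove (the proposed derivative-free multiple-root variant differs from this):

```python
def king(f, df, x0, beta=0.5, tol=1e-12, max_iter=50):
    """Classical fourth-order King's family for a simple root.
    beta is the free family parameter (beta = 0 recovers Ostrowski's
    method).  Costs two f- and one f'-evaluation per iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        y = x - fx / df(x)                 # Newton substep
        fy = f(y)
        x_new = y - (fy / df(x)) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the simple root of x^3 - 2.
root = king(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```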


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Alicia Cordero ◽  
Moin-ud-Din Junjua ◽  
Juan R. Torregrosa ◽  
Nusrat Yasmin ◽  
Fiza Zafar

We construct a family of derivative-free optimal iterative methods without memory to approximate a simple zero of a nonlinear function. Error analysis demonstrates that the without-memory class has eighth-order convergence and can be extended to a with-memory class. The extension of the new family to a with-memory one is also presented; it attains convergence order 15.5156 and a very high efficiency index of 15.5156^(1/4) ≈ 1.9847. Some particular schemes of the with-memory family are also described. Numerical examples and some dynamical aspects of the new schemes are given to support the theoretical results.
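The reported figure uses the standard Ostrowski efficiency index EI = p^(1/n), where p is the convergence order and n the number of function evaluations per iteration (here n = 4, as the abstract's exponent indicates). A one-line check of the stated numbers:

```python
# Ostrowski efficiency index EI = p**(1/n): order p per iteration at the
# cost of n function evaluations (n = 4, per the abstract's exponent).
p, n = 15.5156, 4
print(p ** (1 / n))   # ~1.9847
```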


2019 ◽  
Vol 28 (1) ◽  
pp. 19-26
Author(s):  
Ioannis K. Argyros ◽  
Santhosh George
We present the local as well as the semi-local convergence of some derivative-free iterative methods for Banach space valued operators. These methods contain the secant and the Kurchatov method as special cases. The convergence analysis is based on weak hypotheses that specialize to Lipschitz continuous or Hölder continuous hypotheses. The results are of theoretical and practical interest. In particular, the methods compare favorably to other methods on concrete numerical examples solving systems of equations that contain a nondifferentiable term.
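As a concrete reference for the two special cases named here, minimal scalar sketches of the secant and Kurchatov iterations; the paper treats general Banach space valued operators, so these are only the scalar specializations:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: x_{k+1} = x_k - f(x_k) / f[x_{k-1}, x_k]."""
    for _ in range(max_iter):
        dd = (f(x1) - f(x0)) / (x1 - x0)   # divided difference ~ f'
        x2 = x1 - f(x1) / dd
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

def kurchatov(f, x0, x1, tol=1e-12, max_iter=100):
    """Kurchatov method: the divided difference is taken over the
    symmetric pair (2x_k - x_{k-1}, x_{k-1}), giving second order
    while remaining derivative-free (with memory)."""
    for _ in range(max_iter):
        u, v = 2 * x1 - x0, x0
        dd = (f(u) - f(v)) / (u - v)
        x2 = x1 - f(x1) / dd
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Both recover sqrt(2) from f(x) = x^2 - 2:
print(secant(lambda x: x**2 - 2, 1.0, 2.0),
      kurchatov(lambda x: x**2 - 2, 1.0, 2.0))
```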


Algorithms ◽  
2016 ◽  
Vol 9 (1) ◽  
pp. 14 ◽  
Author(s):  
Xiaofeng Wang ◽  
Xiaodong Fan

Author(s):  
Andreas Griewank

Iterative methods for solving a square system of nonlinear equations g(x) = 0 often require that the sum-of-squares residual γ(x) ≡ ½‖g(x)‖² be reduced at each step. Since the gradient of γ depends on the Jacobian ∇g, this stabilization strategy is not easily implemented if only approximations Bk to ∇g are available. Therefore, most quasi-Newton algorithms either include special updating steps or reset Bk to a divided difference estimate of ∇g whenever no satisfactory progress is made. Here the need for such back-up devices is avoided by a derivative-free line search in the range of g. Assuming that the Bk are generated from an arbitrary B0 by fixed-scale updates, we establish superlinear convergence from within any compact level set of γ on which g has a differentiable inverse function g⁻¹.
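A minimal sketch of the setting being discussed: a Broyden quasi-Newton iteration for g(x) = 0 in which a crude backtracking safeguard on ‖g‖ stands in for the stabilization strategy. This safeguard is an assumption for illustration; Griewank's derivative-free line search in the range of g is precisely a more refined replacement for such devices.

```python
import numpy as np

def broyden(g, x0, tol=1e-10, max_iter=100):
    """Broyden quasi-Newton iteration for g(x) = 0.  The backtracking
    loop enforcing a decrease in ||g|| is a generic stand-in, not
    Griewank's range-space line search."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                     # initial Jacobian approximation B0
    gx = g(x)
    for _ in range(max_iter):
        if np.linalg.norm(gx) < tol:
            return x
        s = np.linalg.solve(B, -gx)        # quasi-Newton step
        t = 1.0
        while np.linalg.norm(g(x + t * s)) >= np.linalg.norm(gx) and t > 1e-8:
            t *= 0.5                       # halve until the residual decreases
        s_t = t * s
        x_new = x + s_t
        g_new = g(x_new)
        y = g_new - gx
        B += np.outer(y - B @ s_t, s_t) / (s_t @ s_t)   # Broyden "good" update
        x, gx = x_new, g_new
    return x

# Example: intersect the unit circle x^2 + y^2 = 1 with the line y = x.
sol = broyden(lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]]),
              [1.0, 0.5])
```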

