A NOTE ON THE SEMILOCAL CONVERGENCE OF CHEBYSHEV’S METHOD

2012 · Vol 88 (1) · pp. 98–105
Author(s): Manuel A. Diloné, Martín García-Olivo, José M. Gutiérrez

In this paper we develop a Kantorovich-like theory for Chebyshev’s method, a well-known iterative method for solving nonlinear equations in Banach spaces. We improve the results obtained previously by considering Chebyshev’s method as an element of a family of iterative processes.
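For reference, in the scalar case Chebyshev’s method takes the form x_{n+1} = x_n − (1 + ½L_f(x_n)) f(x_n)/f′(x_n) with L_f(x) = f(x)f″(x)/f′(x)². The Python sketch below is purely illustrative: the paper works in the general Banach-space setting, where f′ and f″ are Fréchet derivatives and the division becomes application of an inverse operator.

```python
def chebyshev(f, df, d2f, x, tol=1e-12, max_iter=50):
    """Chebyshev's method for a scalar equation f(x) = 0 (cubic convergence).

    x_{n+1} = x_n - (1 + 0.5*L_f(x_n)) * f(x_n)/f'(x_n),
    where L_f(x) = f(x)*f''(x)/f'(x)**2 is the degree of logarithmic convexity.
    """
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            break
        L = fx * d2f(x) / dfx**2          # degree of logarithmic convexity
        x = x - (1 + 0.5 * L) * fx / dfx  # Chebyshev step
    return x

# Illustrative run: solve x**2 - 2 = 0 starting from x = 1.5
root = chebyshev(lambda x: x*x - 2, lambda x: 2*x, lambda x: 2.0, x=1.5)
```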

2013 · Vol 10 (04) · pp. 1350021
Author(s): M. Prashanth, D. K. Gupta

A continuation method is a parameter-based iterative method that establishes a continuous connection between two given functions/operators and is used for solving nonlinear equations in Banach spaces. The semilocal convergence of a continuation method combining Chebyshev's method and the convex acceleration of Newton's method for solving nonlinear equations in Banach spaces is established in [J. A. Ezquerro, J. M. Gutiérrez and M. A. Hernández (1997), J. Appl. Math. Comput. 85: 181–199] using majorizing sequences, under the assumption that the second Fréchet derivative satisfies a Lipschitz continuity condition. The aim of this paper is to use recurrence relations instead of majorizing sequences to establish the convergence analysis of such a method. This leads to a simpler approach with improved results. An existence–uniqueness theorem is given. Also, a closed form of the error bounds is derived in terms of a real parameter α ∈ [0, 1]. Four numerical examples are worked out to demonstrate the efficacy of our convergence analysis. On comparing the existence and uniqueness regions and error bounds for the solution obtained by our analysis with those obtained by using majorizing sequences, it is found that our analysis gives better results in three examples and the same results in the fourth. Further, we observe that for particular values of α, our analysis reduces to that for Chebyshev's method (α = 0) and the convex acceleration of Newton's method (α = 1), respectively, with improved results.
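A common way to write a one-parameter family bridging the two methods (an illustrative scalar sketch, not necessarily the exact operator formulation used in the paper) is x_{n+1} = x_n − (1 + ½L(x_n)/(1 − αL(x_n))) f(x_n)/f′(x_n), which reduces to Chebyshev's method at α = 0 and to the convex acceleration of Newton's method (super-Halley) at α = 1:

```python
def alpha_family_step(f, df, d2f, x, alpha):
    """One step of the one-parameter family
    x_{n+1} = x_n - (1 + 0.5*L/(1 - alpha*L)) * f(x)/f'(x),  L = f*f''/f'**2.
    alpha = 0 gives Chebyshev's method; alpha = 1 gives the convex
    acceleration of Newton's method (super-Halley)."""
    fx, dfx = f(x), df(x)
    L = fx * d2f(x) / dfx**2
    return x - (1 + 0.5 * L / (1 - alpha * L)) * fx / dfx

# Illustrative run: solve x**3 - 10 = 0 for the intermediate value alpha = 0.5
x = 2.0
for _ in range(20):
    x = alpha_family_step(lambda t: t**3 - 10, lambda t: 3*t**2,
                          lambda t: 6*t, x, alpha=0.5)
```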


Mathematics · 2021 · Vol 9 (1) · pp. 83
Author(s): José M. Gutiérrez, Miguel Á. Hernández-Verón

In this work, we present an application of Newton’s method for solving nonlinear equations in Banach spaces to a particular problem: the approximation of the inverse operators that appear in the solution of Fredholm integral equations. In this way, we construct an iterative method with quadratic convergence that uses neither derivatives nor inverse operators. Consequently, this new procedure is especially useful for solving non-homogeneous Fredholm integral equations of the first kind. We combine this method with a technique for finding the solution of Fredholm integral equations with separable kernels to obtain a procedure that allows us to approximate the solution when the kernel is non-separable.
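In the finite-dimensional (matrix) case, this description matches the classical Newton–Schulz iteration obtained by applying Newton’s method to F(X) = X⁻¹ − A, namely X_{n+1} = X_n(2I − AX_n), which converges quadratically using only matrix products. A NumPy sketch under that reading (the paper itself treats general operators arising from Fredholm equations):

```python
import numpy as np

def newton_schulz(A, num_iter=30):
    """Approximate A^{-1} using only matrix products (no inverses, no derivatives).

    Applying Newton's method to F(X) = X^{-1} - A gives the quadratically
    convergent Newton-Schulz iteration X_{n+1} = X_n (2I - A X_n).
    """
    n = A.shape[0]
    # Standard convergent starting guess: X0 = A^T / (||A||_1 * ||A||_inf)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(num_iter):
        X = X @ (2 * I - A @ X)  # one Newton-Schulz step
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_schulz(A)  # X approximates A^{-1}
```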


Complexity · 2018 · Vol 2018 · pp. 1–11
Author(s): Abhimanyu Kumar, Dharmendra K. Gupta, Eulalia Martínez, Sukhjit Singh

The semilocal and local convergence analyses of a two-step iterative method for nonlinear nondifferentiable operators are described in Banach spaces. The recurrence relations are derived under weaker conditions on the operator. For semilocal convergence, the domain of the parameters is obtained to ensure guaranteed convergence under suitable initial approximations. The applicability of local convergence is extended as the differentiability condition on the involved operator is avoided. The region of accessibility and a way to enlarge the convergence domain are provided. Theorems are given for the existence-uniqueness balls enclosing the unique solution. Finally, some numerical examples including nonlinear Hammerstein type integral equations are worked out to validate the theoretical results.
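As a rough scalar illustration of the kind of scheme involved (a hypothetical sketch, not the authors’ exact two-step method), a derivative-free two-step iteration can reuse one first-order divided difference per iteration, which avoids any differentiability requirement on the operator:

```python
def divided_difference(f, u, v):
    """First-order divided difference [u, v; f] = (f(u) - f(v)) / (u - v)."""
    return (f(u) - f(v)) / (u - v)

def two_step_dd(f, x0, x1, tol=1e-12, max_iter=50):
    """A representative derivative-free two-step scheme (illustrative only):
    each iteration reuses one divided-difference slope B_n for two
    corrections, suitable when f' is unavailable or does not exist."""
    x_prev, x = x0, x1
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        B = divided_difference(f, x_prev, x)  # secant slope, no derivative
        y = x - f(x) / B                      # first (secant-like) step
        x_prev, x = x, y - f(y) / B           # second step reuses B
    return x

# The |x| term makes f nondifferentiable at 0; solve x**3 + |x| - 1 = 0
root = two_step_dd(lambda x: x**3 + abs(x) - 1, 0.5, 1.0)
```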


2012 · Vol 220–223 · pp. 2585–2588
Author(s): Zhong Yong Hu, Fang Liang, Lian Zhong Li, Rui Chen

In this paper, we present a modified sixth-order convergent Newton-type method for solving nonlinear equations. It is free from second derivatives, and requires three evaluations of the function and two evaluations of the derivative per iteration. Hence the efficiency index of the presented method is 1.43097, which is better than that of the classical Newton’s method, 1.41421. Several results are given to illustrate the advantage and efficiency of the algorithm.
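The efficiency index is p^(1/d), where p is the convergence order and d the number of function/derivative evaluations per iteration; the quoted figures follow directly:

```python
# Efficiency index p**(1/d): p = convergence order,
# d = evaluations (function + derivative) per iteration
sixth_order = 6 ** (1 / 5)   # presented method: order 6, 3 + 2 evaluations
newton      = 2 ** (1 / 2)   # Newton's method: order 2, 1 + 1 evaluations
print(round(sixth_order, 5), round(newton, 5))  # 1.43097 1.41421
```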

