Explicit convergence regions of Newton's method and Chebyshev's method for the matrix pth root

2019 ◽  
Vol 583 ◽  
pp. 63-76
Author(s):  
Chun-Hua Guo
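For context, one standard form of the Newton iteration for the principal matrix pth root, whose convergence regions this work makes explicit, can be sketched as follows. This is a minimal NumPy version under our own assumptions (p ≥ 2, eigenvalues of A in a suitable region, X₀ = I); the function name is ours, not the paper's:

```python
import numpy as np

def matrix_pth_root_newton(A, p, iters=50, tol=1e-12):
    """Simplified Newton iteration X_{k+1} = ((p-1) X_k + X_k^{1-p} A) / p,
    started from X_0 = I. A sketch only: convergence holds when the
    eigenvalues of A lie in a suitable region, which is what explicit
    convergence results for this iteration characterize."""
    n = A.shape[0]
    X = np.eye(n)
    for _ in range(iters):
        # X^{1-p} A computed as (X^{-1})^{p-1} A
        X_new = ((p - 1) * X + np.linalg.matrix_power(np.linalg.inv(X), p - 1) @ A) / p
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X
```

On a diagonal matrix this reduces to the familiar scalar Newton iteration for the pth root of each eigenvalue.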
2009 ◽  
Vol 21 (5) ◽  
pp. 1415-1433 ◽  
Author(s):  
P.-A. Absil ◽  
M. Ishteva ◽  
L. De Lathauwer ◽  
S. Van Huffel

Newton's method for solving the matrix equation [Formula: see text] runs up against the fact that its zeros are not isolated. This is due to a symmetry of F by the action of the orthogonal group. We show how differential-geometric techniques can be exploited to remove this symmetry and obtain a “geometric” Newton algorithm that finds the zeros of F. The geometric Newton method does not suffer from the degeneracy issue that stands in the way of the original Newton method.
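The degeneracy described above is easy to observe numerically. Since the equation itself is elided in the abstract, the snippet below uses a stand-in of our own with the same orthogonal-group symmetry, F(X) = XXᵀ − A: any zero X yields a continuum of zeros XQ for orthogonal Q, so the zeros of F are not isolated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's F (the actual equation is elided
# above): F(X) = X X^T - A, whose zeros are the square-root factors of A.
def F(X, A):
    return X @ X.T - A

n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)   # symmetric positive definite
X = np.linalg.cholesky(A)     # one zero of F

# Any orthogonal Q gives another zero: F(XQ) = X Q Q^T X^T - A = F(X),
# so plain Newton faces a singular Jacobian along this orbit of zeros.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
print(np.linalg.norm(F(X, A)), np.linalg.norm(F(X @ Q, A)))
```

Both residual norms vanish to rounding error, which is exactly the symmetry the geometric Newton method quotients out.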


2013 ◽  
Vol 11 (03) ◽  
pp. 1350009 ◽  
Author(s):  
J. A. EZQUERRO ◽  
A. GRAU ◽  
M. GRAU-SÁNCHEZ ◽  
M. A. HERNÁNDEZ

Starting from some modifications of Chebyshev's method, we consider a uniparametric family of iterative methods that are more efficient than Newton's method, and we then construct two iterative methods in a way analogous to how the Secant method is obtained from Newton's method. These iterative methods do not use derivatives in their algorithms, and one of them is more efficient than the Secant method, the classical method with this feature.
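The paper's own derivative-free constructions are not reproduced in the abstract; as a baseline, the classical Secant method they are compared against replaces the derivative in Newton's step with a divided difference. A minimal sketch:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Classical Secant method: Newton's step with f'(x_k) replaced by the
    divided difference (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # flat divided difference; cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**3 - 2, 1.0, 2.0)
```

Its convergence order is the golden ratio (1 + √5)/2 ≈ 1.618, the benchmark a more efficient derivative-free method must beat.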


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Chun-Mei Li ◽  
Shu-Qian Shen

Two new algorithms are proposed to compute the nonsingular square root of a matrix A. Convergence theorems and a stability analysis for these new algorithms are given. Numerical results show that the new algorithms are feasible and effective.
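The paper's two new algorithms are not spelled out in the abstract. For reference, a standard stable scheme for the principal matrix square root (not the paper's method) is the Denman–Beavers iteration, sketched here under the usual assumption that A has no eigenvalues on the closed negative real axis:

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=60, tol=1e-12):
    """Denman-Beavers iteration (a standard scheme, not this paper's):
        Y_{k+1} = (Y_k + Z_k^{-1}) / 2,   Z_{k+1} = (Z_k + Y_k^{-1}) / 2,
    with Y_0 = A, Z_0 = I; then Y_k -> A^{1/2} and Z_k -> A^{-1/2}."""
    Y = np.array(A, dtype=float)
    Z = np.eye(A.shape[0])
    for _ in range(iters):
        Y_next = (Y + np.linalg.inv(Z)) / 2
        Z_next = (Z + np.linalg.inv(Y)) / 2
        if np.linalg.norm(Y_next - Y) <= tol * np.linalg.norm(Y_next):
            return Y_next
        Y, Z = Y_next, Z_next
    return Y
```

The coupled form avoids the numerical instability of the naive Newton iteration X_{k+1} = (X_k + X_k⁻¹A)/2, which is why stability analysis matters for square-root algorithms.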


2013 ◽  
Vol 10 (04) ◽  
pp. 1350021 ◽  
Author(s):  
M. PRASHANTH ◽  
D. K. GUPTA

A continuation method is a parameter-based iterative method that establishes a continuous connection between two given functions/operators and is used for solving nonlinear equations in Banach spaces. The semilocal convergence of a continuation method combining Chebyshev's method and the convex acceleration of Newton's method for solving nonlinear equations in Banach spaces was established in [J. A. Ezquerro, J. M. Gutiérrez and M. A. Hernández (1997), J. Appl. Math. Comput. 85: 181–199] using majorizing sequences, under the assumption that the second Fréchet derivative satisfies a Lipschitz continuity condition. The aim of this paper is to use recurrence relations instead of majorizing sequences to establish the convergence analysis of such a method. This leads to a simpler approach with improved results. An existence–uniqueness theorem is given, and a closed form of the error bounds is derived in terms of a real parameter α ∈ [0, 1]. Four numerical examples are worked out to demonstrate the efficacy of our convergence analysis. Comparing the existence and uniqueness regions and the error bounds for the solution obtained by our analysis with those obtained using majorizing sequences, we find that our analysis gives better results in three examples and the same results in the remaining one. Further, we observe that for particular values of α our analysis reduces to that for Chebyshev's method (α = 0) and the convex acceleration of Newton's method (α = 1), respectively, with improved results.
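In the scalar case, the family interpolating Chebyshev's method (α = 0) and the convex acceleration of Newton's method (α = 1, also known as the super-Halley method) can be written in terms of the degree of logarithmic convexity L(x) = f(x)f″(x)/f′(x)². A sketch of this scalar model (the paper itself works in Banach spaces):

```python
def family_step(f, df, d2f, x, alpha):
    """One step of the uniparametric family (scalar model):
        x+ = x - (1 + L / (2 (1 - alpha * L))) * f(x) / f'(x),
    with L = f f'' / f'^2. alpha = 0 gives Chebyshev's method,
    alpha = 1 the convex acceleration of Newton's method."""
    fx, dfx = f(x), df(x)
    L = fx * d2f(x) / dfx**2
    return x - (1 + 0.5 * L / (1 - alpha * L)) * fx / dfx

def solve(f, df, d2f, x0, alpha, iters=30, tol=1e-14):
    x = x0
    for _ in range(iters):
        x_new = family_step(f, df, d2f, x, alpha)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For every fixed α ∈ [0, 1] the iteration is cubically convergent near a simple root, which is why closed-form error bounds in α are of interest.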


2019 ◽  
Vol 16 ◽  
pp. 8330-8333
Author(s):  
Hamideh Eskandari

In this paper, we consider one of the most important problems of numerical analysis: finding the roots of a nonlinear equation. In numerical analysis and numerical computing there are many methods by which the roots of such an equation can be approximated. We present several of them here, such as Halley's method, Chebyshev's method, and Newton's method, together with other new methods proposed in the literature, and compare them. In the end, the comparison yields a good and informative result.
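The three named classical methods differ only in how they correct the basic Newton step with second-derivative information. A minimal side-by-side sketch on an illustrative test equation of our choosing, cos x − x = 0:

```python
import math

def newton_step(f, df, d2f, x):
    return x - f(x) / df(x)

def chebyshev_step(f, df, d2f, x):
    L = f(x) * d2f(x) / df(x)**2          # degree of logarithmic convexity
    return x - (1 + 0.5 * L) * f(x) / df(x)

def halley_step(f, df, d2f, x):
    L = f(x) * d2f(x) / df(x)**2
    return x - f(x) / df(x) / (1 - 0.5 * L)

def iterate(step, x0, n):
    f   = lambda x: math.cos(x) - x       # illustrative test equation
    df  = lambda x: -math.sin(x) - 1
    d2f = lambda x: -math.cos(x)
    x = x0
    for _ in range(n):
        x = step(f, df, d2f, x)
    return x

for name, step in [("Newton", newton_step),
                   ("Chebyshev", chebyshev_step),
                   ("Halley", halley_step)]:
    print(name, iterate(step, 1.0, 6))
```

Newton's method converges quadratically, while Chebyshev's and Halley's methods are cubically convergent at the cost of a second-derivative evaluation per step.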


2021 ◽  
Vol 25 (2(36)) ◽  
pp. 75-82
Author(s):  
V. V. Verbitskyi ◽  
A. G. Huk

Newton's method for calculating an eigenvalue and the corresponding eigenvector of a symmetric real matrix is considered. The nonlinear system of equations solved by Newton's method consists of the equation that determines the eigenvalue and eigenvector of the matrix together with the normalization condition for the eigenvector. The method allows one to compute the eigenvalue and the corresponding eigenvector simultaneously. Initial approximations for the eigenvalue and the corresponding eigenvector can be found by the power method or by inverse iteration with a shift. A simple proof of the convergence of Newton's method in a neighborhood of a simple eigenvalue is proposed, and it is shown that the method has a quadratic convergence rate. In terms of computational cost per iteration, Newton's method is comparable to inverse iteration with the Rayleigh quotient. Unlike inverse iteration, Newton's method allows one to compute the eigenpair with better accuracy.
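The system described above can be sketched directly: F(x, λ) = (Ax − λx, (xᵀx − 1)/2) = 0, with Newton's method applied to the bordered Jacobian. A minimal NumPy version (function name and stopping rule are ours):

```python
import numpy as np

def newton_eigenpair(A, x0, lam0, iters=20, tol=1e-12):
    """Newton's method on F(x, lam) = (A x - lam x, (x^T x - 1)/2) = 0.
    Each step solves the bordered system
        [[A - lam I, -x], [x^T, 0]] d = -F(x, lam),
    which is nonsingular near a simple eigenvalue, refining the
    eigenvalue and eigenvector simultaneously (quadratic convergence)."""
    n = A.shape[0]
    x, lam = x0 / np.linalg.norm(x0), lam0
    for _ in range(iters):
        r = np.concatenate([A @ x - lam * x, [(x @ x - 1) / 2]])
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * np.eye(n)
        J[:n, n] = -x            # derivative of A x - lam x w.r.t. lam
        J[n, :n] = x             # derivative of (x^T x - 1)/2 w.r.t. x
        d = np.linalg.solve(J, -r)
        x, lam = x + d[:n], lam + d[n]
        if np.linalg.norm(r) < tol:
            break
    return x, lam
```

As the abstract notes, the initial pair (x0, lam0) should come from a cheap method such as the power method; Newton's method then converges only locally, but quadratically.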

