Roots of Equations
Recently Published Documents


TOTAL DOCUMENTS: 89 (five years: 7)
H-INDEX: 7 (five years: 1)

Author(s): Guiying Ning, Yongquan Zhou

The problem of finding roots of equations has always been an important research problem in scientific and engineering calculation. Because the standard differential evolution algorithm cannot balance convergence speed against solution accuracy, an improved differential evolution algorithm is proposed. First, the one-half rule is introduced into the mutation process: half of the individuals perform differential-evolution mutation while the other half perform evolution-strategy recombination, which increases population diversity and avoids premature convergence. Second, adaptive mutation and crossover operators are set up to prevent the algorithm from falling into local optima and to improve solution accuracy. Finally, classical high-order algebraic equations and nonlinear equations are selected for testing, and the algorithm is compared with other algorithms. The results show that the improved algorithm achieves higher solution accuracy and robustness and converges faster. It is highly effective at finding roots of equations and provides an effective method for engineering and scientific calculation.
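The one-half rule and adaptive operators described above can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: the test equation, bounds, population size, and the linear adaptation schedules for F and CR are all our own assumptions.

```python
import random

def f(x):
    # Illustrative test equation: the classical cubic x^3 - 2x - 5 = 0
    # (real root near x = 2.0946).
    return x**3 - 2*x - 5

def improved_de(obj, lo, hi, pop_size=30, gens=200, seed=0):
    """Sketch of the one-half rule: half the population uses DE/rand/1
    mutation, the other half an evolution-strategy-style Gaussian
    perturbation around the current best.  F and CR are adapted
    linearly over generations (one simple choice, assumed here)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = min(pop, key=lambda p: abs(obj(p)))
    for g in range(gens):
        F = 0.9 - 0.5 * g / gens        # adaptive mutation factor (assumption)
        CR = 0.5 + 0.4 * g / gens       # adaptive crossover rate (assumption)
        new_pop = []
        for i, x in enumerate(pop):
            if i < pop_size // 2:
                a, b, c = rng.sample(pop, 3)
                trial = a + F * (b - c)             # DE mutation half
            else:
                sigma = 0.1 * (hi - lo) * (1 - g / gens)
                trial = best + rng.gauss(0, sigma)  # ES recombination half
            if rng.random() > CR:
                trial = x                           # crossover keeps parent
            trial = min(max(trial, lo), hi)         # clamp to search bounds
            # Greedy selection on the residual |f(x)|.
            new_pop.append(trial if abs(obj(trial)) < abs(obj(x)) else x)
        pop = new_pop
        best = min(pop, key=lambda p: abs(obj(p)))
    return best

root = improved_de(f, 0.0, 5.0)
```

Minimizing the residual |f(x)| turns root finding into the global optimization problem that differential evolution solves; the greedy selection step guarantees the population never gets worse.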


Algorithms, 2020, Vol. 13 (4), p. 78
Author(s): Ankush Aggarwal, Sanjay Pant

Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration and the improvement in convergence of specific problems.


Mathematics, 2019, Vol. 7 (8), p. 765
Author(s): Abed, Taresh

Iterative methods are employed to obtain solutions of linear and non-linear systems of equations, solutions of differential equations, and roots of equations. In this paper, it is proved that the S-iteration with error and the Picard–Mann iteration with error converge strongly to the unique fixed point of a Lipschitzian strongly pseudo-contractive mapping. This convergence is almost F-stable and F-stable. These results are applied to the operator equations Fx = f and x + Fx = f, where F is a strongly accretive (respectively, accretive) mapping of X into itself.
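The Picard–Mann hybrid scheme referred to above (here without the error terms the paper analyses) can be sketched as follows. The mapping T(x) = cos(x) and the constant step size alpha are illustrative assumptions; the fixed point of cos solves the equation x - cos(x) = 0:

```python
import math

def picard_mann(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Picard-Mann hybrid iteration:
        y_n     = (1 - alpha) * x_n + alpha * T(x_n)   (Mann step)
        x_{n+1} = T(y_n)                               (Picard step)
    Stops when successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        y = (1 - alpha) * x + alpha * T(x)
        x_next = T(y)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# The fixed point of T(x) = cos(x) is the root of x - cos(x) = 0
# (the Dottie number, approximately 0.739085).
root = picard_mann(math.cos, 1.0)
```

Because cos is a contraction on the relevant interval, both the Mann averaging step and the Picard step pull the iterate toward the unique fixed point; the hybrid scheme typically converges faster than Mann iteration alone.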


Algebra, 2018, pp. 221-244
Author(s): John Scherk
