Geometrically Constructed Families of Newton's Method for Unconstrained Optimization and Nonlinear Equations

Author(s):  
Sanjeev Kumar ◽  
Vinay Kanwar ◽  
Sushil Kumar Tomar ◽  
Sukhjit Singh

This paper presents one-parameter families of Newton's iterative method for solving nonlinear equations, together with their extension to unconstrained optimization problems. The methods are derived by approximating the function near the root with a straight line and with a parabolic curve. The presented variants are found to outperform Newton's method and, in addition, to overcome its limitations.
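
A minimal sketch of a representative one-parameter Newton-type family of the form x_{k+1} = x_k - f(x_k)/(f'(x_k) - p·f(x_k)), which reduces to classical Newton's method for p = 0; the parameter name and the test function are illustrative assumptions, not taken from the paper.

```python
# Representative one-parameter Newton-type family:
#   x_{k+1} = x_k - f(x_k) / (f'(x_k) - p * f(x_k)).
# For p = 0 this reduces to classical Newton's method. The parameter
# value and the test function below are illustrative assumptions.

def param_newton(f, df, x0, p=1.0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / (df(x) - p * fx)   # p = 0 recovers Newton's step
    return x

# Example: root of f(x) = x**3 - 2 near x0 = 1
root = param_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0, p=0.5)
print(root)  # ~ 2**(1/3) = 1.259921...
```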

2012 ◽  
Vol 220-223 ◽  
pp. 2585-2588
Author(s):  
Zhong Yong Hu ◽  
Fang Liang ◽  
Lian Zhong Li ◽  
Rui Chen

In this paper, we present a modified sixth-order convergent Newton-type method for solving nonlinear equations. It is free from second derivatives and requires three function evaluations and two derivative evaluations per iteration. Hence the efficiency index of the presented method is 1.43097, which is better than that of classical Newton's method, 1.41421. Several numerical results are given to illustrate the advantage and efficiency of the algorithm.
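
The efficiency index quoted here is Ostrowski's p^(1/m) for a method of order p using m function/derivative evaluations per iteration; a quick check of both figures:

```python
# Ostrowski efficiency index: a method of order p that needs m
# function/derivative evaluations per iteration has EI = p**(1/m).

def efficiency_index(order, evals):
    return order ** (1.0 / evals)

print(efficiency_index(6, 5))  # presented method: 6^(1/5) ~ 1.43097
print(efficiency_index(2, 2))  # Newton's method:  2^(1/2) ~ 1.41421
```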


2019 ◽  
Vol 53 (2) ◽  
pp. 657-666
Author(s):  
Mohammad Afzalinejad

A difficulty with rapidly convergent methods for unconstrained optimization, such as Newton's method, is the computational cost arising especially from the second derivative. In this paper, a class of methods for solving unconstrained optimization problems is proposed which implicitly applies approximations to derivatives. This class of methods is based on a modified Steffensen method for finding roots of a function and attempts to build a quadratic model of the function without using the second derivative. Two computationally inexpensive methods of this kind are proposed which use only the first derivative of the function. Derivative-free versions of these methods are also suggested for cases where the gradient formulas are unavailable or difficult to evaluate. Theory as well as numerical experiments confirm the rapid convergence of this class of methods.
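
For context, a sketch of the classical Steffensen iteration that the paper modifies, applied to g = f' so that its root is a stationary point of f; this is the textbook base method, not the authors' modified scheme.

```python
# Classical Steffensen iteration for a root of g (the base method the
# paper builds on, not the authors' modified variant). For unconstrained
# optimization it is applied to g = f' to locate a stationary point.

def steffensen(g, x0, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        denom = g(x + gx) - gx   # divided-difference approximation of g'(x)
        x -= gx * gx / denom     # quadratic convergence, no derivatives of g
    return x

# Example: minimize f(x) = (x - 2)**2 + 1 via its gradient g(x) = 2*(x - 2)
x_min = steffensen(lambda x: 2.0 * (x - 2.0), x0=0.0)
print(x_min)  # ~ 2.0
```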


2014 ◽  
Vol 2014 ◽  
pp. 1-18 ◽  
Author(s):  
Fiza Zafar ◽  
Nawab Hussain ◽  
Zirwah Fatimah ◽  
Athar Kharal

We give a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed using quasi-Hermite interpolation and has order of convergence sixteen. As the method requires four function evaluations and one derivative evaluation per step, it is optimal in the sense of the Kung-Traub conjecture. Comparisons are given with some other recently developed sixteenth-order methods. Interval Newton's method is also used for finding sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeros of nonlinear equations in an interval, and basins of attraction show the effectiveness of the method.
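
The interval Newton step used for bracketing initial approximations is classical: N(X) = m - f(m)/F'(X) intersected with X. A minimal sketch with naive tuple-based interval arithmetic, assuming the derivative enclosure F'(X) does not contain zero; the test function is an illustrative assumption.

```python
# One interval Newton step, N(X) = m - f(m)/F'(X) intersected with X,
# using naive tuple-based interval arithmetic. Assumes 0 is not in the
# derivative enclosure F'(X).

def interval_newton_step(f, dF, X):
    a, b = X
    m = 0.5 * (a + b)            # midpoint of the enclosure
    dlo, dhi = dF(X)             # enclosure of f' over X
    fm = f(m)
    q = sorted((fm / dlo, fm / dhi))
    N = (m - q[1], m - q[0])
    # intersect N with X; an empty intersection proves there is no root in X
    lo, hi = max(a, N[0]), min(b, N[1])
    if lo > hi:
        return None
    return (lo, hi)

# Example: f(x) = x**2 - 2 on X = [1, 2]; f'(x) = 2x, so F'([a,b]) = (2a, 2b)
X = (1.0, 2.0)
for _ in range(6):
    X = interval_newton_step(lambda x: x * x - 2.0, lambda I: (2 * I[0], 2 * I[1]), X)
    print(X)  # shrinks toward sqrt(2) ~ 1.41421356
```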


2019 ◽  
Vol 17 (01) ◽  
pp. 1843005 ◽  
Author(s):  
Rahmatjan Imin ◽  
Ahmatjan Iminjan

In this paper, based on the basic principle of the SPH method's kernel approximation, a new kernel approximation is constructed to compute the first-order derivative through a Taylor series expansion. The derivative in Newton's method is replaced by this approximation to propose a new SPH iterative method for solving nonlinear equations. The advantage of this method is that it does not require the evaluation of any derivatives, overcoming that shortcoming of Newton's method. Quadratic convergence of the new method is proved, and a variety of numerical examples are given to illustrate that the method has the same computational efficiency as Newton's method.
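
An illustrative reconstruction of the idea, assuming the standard Taylor-corrected SPH estimate of f'(x) from kernel-weighted neighbor sums; the Gaussian kernel, smoothing length h, and two-point stencil are sketch assumptions, not the authors' exact construction.

```python
import math

# Approximate f'(x) by a Taylor-corrected SPH kernel sum over nearby
# sample points, then use that estimate in place of the derivative in
# Newton's step. Kernel choice, smoothing length, and stencil are
# assumptions for this sketch.

def dW(r, h):
    """Derivative of an (unnormalized) Gaussian kernel."""
    return -2.0 * r / (h * h) * math.exp(-(r / h) ** 2)

def sph_derivative(f, x, h=1e-3):
    fx = f(x)
    xs = (x - h, x + h)   # neighbor "particles" around x
    num = sum((f(xj) - fx) * dW(x - xj, h) for xj in xs)
    den = sum((xj - x) * dW(x - xj, h) for xj in xs)
    return num / den      # Taylor-corrected kernel estimate of f'(x)

def sph_newton(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / sph_derivative(f, x)   # derivative-free Newton step
    return x

print(sph_newton(lambda x: x ** 3 - 2.0, x0=1.0))  # ~ 1.259921
```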


Mathematics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 83
Author(s):  
José M. Gutiérrez ◽  
Miguel Á. Hernández-Verón

In this work, we present an application of Newton's method for solving nonlinear equations in Banach spaces to a particular problem: the approximation of the inverse operators that appear in the solution of Fredholm integral equations. To this end, we construct an iterative method with quadratic convergence that uses neither derivatives nor inverse operators. Consequently, this new procedure is especially useful for solving non-homogeneous Fredholm integral equations of the first kind. We combine this method with a technique for finding the solution of Fredholm integral equations with separable kernels to obtain a procedure that allows us to approximate the solution when the kernel is non-separable.
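
The finite-dimensional analogue of this construction is well known: Newton's method applied to F(X) = X⁻¹ - A yields the Schulz iteration X_{k+1} = X_k(2I - AX_k), quadratically convergent and free of explicit inversions. Whether this matches the paper's operator scheme exactly is an assumption; the sketch below illustrates the same principle on matrices.

```python
import numpy as np

# Newton's method for F(X) = X^{-1} - A gives the Schulz iteration
#   X_{k+1} = X_k (2 I - A X_k),
# which converges quadratically to A^{-1} without computing inverses.

def newton_schulz_inverse(A, iters=30):
    n = A.shape[0]
    # Standard safe starting guess guaranteeing convergence
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)   # one Newton step, no inversions
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_schulz_inverse(A)
print(np.allclose(X @ A, np.eye(2)))   # True
```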


2018 ◽  
Vol 6 (3) ◽  
pp. 354-367 ◽  
Author(s):  
Abdelmonem M. Ibrahim ◽  
Mohamed A. Tawhid

In this study, we propose a new hybrid algorithm consisting of two meta-heuristic algorithms: Differential Evolution (DE) and Monarch Butterfly Optimization (MBO). This hybrid is called DEMBO. Both meta-heuristic algorithms are typically used to solve nonlinear systems and unconstrained optimization problems. DE is a common metaheuristic that searches large areas of the candidate space; unfortunately, it often requires a significant number of function evaluations to reach the optimal solution. As for MBO, its fitness evaluations are time-consuming and it tends to get trapped in local minima. To overcome these disadvantages, we combine DE with MBO and propose DEMBO, which obtains the optimal solutions for the majority of nonlinear systems as well as unconstrained optimization problems. We apply our proposed algorithm, DEMBO, to nine different unconstrained optimization problems and eight well-known nonlinear systems. Our results, when compared with other existing algorithms in the literature, demonstrate that DEMBO gives the best results for the majority of the nonlinear systems and unconstrained optimization problems. As such, the experimental results demonstrate the efficiency of our hybrid algorithm in comparison to the known algorithms. Highlights: This paper proposes a new hybridization of differential evolution and monarch butterfly optimization to solve systems of nonlinear equations and unconstrained optimization problems; the efficiency and effectiveness of the algorithm are demonstrated, and experimental results show its superiority over the state of the art.
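
The DE half of the hybrid is standard; a minimal DE/rand/1/bin sketch follows, with the MBO coupling omitted. F, CR, the population size, and the sphere test function are conventional illustrative choices, not the paper's settings.

```python
import random

# Minimal DE/rand/1/bin loop: mutate with a scaled difference of two
# random members, binomially cross over with the current member, and
# keep the trial only if it is no worse (greedy selection).

def de(objective, bounds, pop_size=20, F=0.8, CR=0.9, gens=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # ensure at least one mutated gene
            trial = [
                min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]), bounds[j][0]), bounds[j][1])
                if (random.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = objective(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Example: minimize the sphere function on [-5, 5]^3
x_best, f_best = de(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(x_best, f_best)
```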


2012 ◽  
Vol 542-543 ◽  
pp. 1019-1022
Author(s):  
Han Li

In this paper, we present and analyze a new iterative method for solving nonlinear equations. The method is proved to be sixth-order convergent. The algorithm is free from second derivatives and requires three function evaluations and two derivative evaluations per iteration. The efficiency index of the presented method is therefore 1.431, which is better than the 1.414 of classical Newton's method. Numerical experiments illustrate that the proposed method is more efficient and performs better than classical Newton's method and some other methods.
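
As a baseline for such comparisons, a classical Newton loop with an iteration counter; the sixth-order scheme itself is not reproduced here, and the test function is an illustrative assumption (a higher-order method would reach the same tolerance in fewer iterations).

```python
# Classical Newton's method with an iteration counter, as a baseline
# for efficiency comparisons against higher-order schemes.

def newton_count(f, df, x0, tol=1e-14, max_iter=100):
    x, k = x0, 0
    while abs(f(x)) > tol and k < max_iter:
        x -= f(x) / df(x)
        k += 1
    return x, k

x, k = newton_count(lambda x: x ** 3 + 4 * x ** 2 - 10,
                    lambda x: 3 * x ** 2 + 8 * x, x0=1.0)
print(x, k)  # root ~ 1.3652300134 in a handful of iterations
```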


2017 ◽  
Vol 10 (1) ◽  
pp. 144-150 ◽  
Author(s):  
V.B Vatti ◽  
Ramadevi Sri ◽  
M.S Mylapalli

In this paper, we suggest and discuss an iterative method for solving nonlinear equations of the type f(x)=0 with eighteenth-order convergence. This new technique is based on Newton's method and the extrapolated Newton's method. The method is compared with existing ones through some numerical examples to exhibit its superiority. AMS Subject Classification: 41A25, 65K05, 65H05.
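
The general principle behind such high-order constructions is composition: composing iteration maps of orders p and q yields order p·q (Traub). A double Newton step (order 4) illustrates the mechanism; the authors' eighteenth-order composition with extrapolated Newton steps is not reproduced here.

```python
# Composing iteration maps multiplies their orders (Traub): two chained
# Newton sub-steps give a fourth-order method. Illustrative only.

def double_newton(f, df, x0, tol=1e-15, max_iter=25):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        y = x - f(x) / df(x)   # first Newton sub-step (order 2)
        x = y - f(y) / df(y)   # composed sub-step -> order 4 overall
    return x

print(double_newton(lambda x: x ** 2 - 3.0, lambda x: 2.0 * x, x0=1.0))  # ~ 1.7320508
```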

