Two Optimal Eighth-Order Derivative-Free Classes of Iterative Methods

2012 ◽  
Vol 2012 ◽  
pp. 1-14 ◽  
Author(s):  
F. Soleymani ◽  
S. Shateyi

Optimization problems defined by (objective) functions for which derivatives are unavailable or available only at considerable cost are emerging in computational science. Accordingly, the main aim of this paper is to attain the highest possible local convergence order using a fixed number of (functional) evaluations, so as to find efficient solvers for one-variable nonlinear equations, while keeping the procedure entirely derivative-free. To this end, we consider the fourth-order uniparametric family of Kung and Traub to suggest and demonstrate two classes of three-step derivative-free methods that use only four pieces of information per full iteration to reach the optimal order eight and the optimal efficiency index 1.682. Moreover, a large number of numerical tests confirm the applicability and efficiency of the methods produced from the new classes.
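For context, a minimal sketch of the derivative-free building block behind such schemes: Steffensen's classical second-order method, which trades Newton's derivative for one extra function evaluation. This is only an illustration of the derivative-free idea, not the paper's optimal eighth-order class; the test equation and tolerance are assumptions.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Derivative-free root solver: Newton's step with f'(x)
    replaced by the divided difference f[x, x + f(x)]."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # forward-difference slope, costing one extra f-evaluation
        slope = (f(x + fx) - fx) / fx
        x = x - fx / slope
    return x

# illustrative usage on a classic test equation, root ≈ 1.36523
root = steffensen(lambda x: x**3 + 4*x**2 - 10, 1.3)
```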

2012 ◽  
Vol 2012 ◽  
pp. 1-15 ◽  
Author(s):  
F. Soleymani ◽  
S. Karimi Vanani ◽  
M. Jamali Paghaleh

A class of three-step eighth-order root solvers is constructed in this study. Our aim is fulfilled by using an interpolatory rational function in the third step of a three-step cycle. Each method of the class reaches the optimal efficiency index according to the Kung-Traub conjecture concerning multipoint iterative methods without memory. Moreover, the class is free from derivative calculation per full iteration, which is important in engineering problems. One method of the class is established analytically. To test the derived methods from the class, we apply them to a wide range of nonlinear scalar equations. Numerical examples suggest that the novel class of derivative-free methods outperforms existing methods of the same type in the literature.
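The optimal efficiency index invoked here and throughout these abstracts follows from the Kung-Traub conjecture: a without-memory method using n evaluations per iteration has optimal order 2^(n-1) and efficiency index (2^(n-1))^(1/n). A quick check of the quoted values (the snippet is purely illustrative):

```python
# Kung-Traub conjecture: a without-memory multipoint method using n
# evaluations per iteration has optimal order 2**(n - 1); its
# efficiency index is order**(1 / n).
for n in range(2, 6):
    order = 2 ** (n - 1)
    print(f"n = {n}: optimal order {order}, "
          f"efficiency index {order ** (1 / n):.3f}")
# n = 4 gives order 8 and efficiency index 8**(1/4) ≈ 1.682, the
# value quoted throughout these abstracts.
```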


2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Alicia Cordero ◽  
Moin-ud-Din Junjua ◽  
Juan R. Torregrosa ◽  
Nusrat Yasmin ◽  
Fiza Zafar

We construct a family of derivative-free optimal iterative methods without memory to approximate a simple zero of a nonlinear function. Error analysis demonstrates that the without-memory class has eighth-order convergence and is extendable to a with-memory class. The extension of the new family to a with-memory one is also presented; it attains convergence order 15.5156 and a very high efficiency index 15.5156^(1/4) ≈ 1.9847. Some particular schemes of the with-memory family are also described. Numerical examples and some dynamical aspects of the new schemes are given to support the theoretical results.
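The quoted index follows from the same efficiency formula as above, with the improved R-order but the same four evaluations per full iteration; a quick illustrative check:

```python
# efficiency index = (convergence order)**(1 / evaluations per iteration);
# both variants use four evaluations per full iteration
print(8 ** 0.25)        # without memory: ≈ 1.6818
print(15.5156 ** 0.25)  # with memory:    ≈ 1.9847
```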


SPE Journal ◽  
2021 ◽  
pp. 1-17
Author(s):  
Yixuan Wang ◽  
Faruk Alpak ◽  
Guohua Gao ◽  
Chaohui Chen ◽  
Jeroen Vink ◽  
...  

Summary

Although it is possible to apply traditional optimization algorithms to determine the Pareto front of a multiobjective optimization problem, the computational cost is extremely high when the objective-function evaluation requires solving a complex reservoir simulation problem and the optimization cannot benefit from adjoint-based gradients. This paper proposes a novel workflow to solve bi-objective optimization problems using the distributed quasi-Newton (DQN) method, which is a well-parallelized and derivative-free optimization (DFO) method. Numerical tests confirm that the DQN method performs efficiently and robustly.

The efficiency of the DQN optimizer stems from a distributed computing mechanism that effectively shares the information discovered in prior iterations. Rather than performing multiple quasi-Newton optimization tasks in isolation, simulation results are shared among distinct DQN optimization tasks or threads. In this paper, the DQN method is applied to the optimization of a weighted average of two objectives, using different weighting factors for different optimization threads. In each iteration, the DQN optimizer generates an ensemble of search points (or simulation cases) in parallel, and a set of nondominated points is updated accordingly. Different DQN optimization threads, which use the same set of simulation results but different weighting factors in their objective functions, converge to different optima of the weighted-average objective function. The nondominated points found in the last iteration form a set of Pareto-optimal solutions.

The robustness as well as the efficiency of the DQN optimizer originates from its reliance on a large, shared set of intermediate search points. On the one hand, this set of search points is (much) smaller than the combined sets needed if all optimizations with different weighting factors were executed separately; on the other hand, the size of this set provides high fault tolerance: even if some simulations fail at a given iteration, the DQN method's distributed-parallel information-sharing protocol is designed and implemented such that the optimization process can still proceed to the next iteration.

The proposed DQN optimization method is first validated on synthetic examples with analytical objective functions. It is then tested on well-location optimization (WLO) problems by maximizing the oil production and minimizing the water production. Furthermore, the proposed method is benchmarked against a bi-objective implementation of the mesh adaptive direct search (MADS) method, and the numerical results reinforce the auspicious computational attributes of DQN observed for the test problems. To the best of our knowledge, this is the first time that a well-parallelized and derivative-free DQN optimization method has been developed and tested on bi-objective optimization problems. The proposed methodology can help improve efficiency and robustness in solving complicated bi-objective optimization problems by taking advantage of model-based search algorithms with an effective information-sharing mechanism.

NOTE: This paper is published as part of the 2021 SPE Reservoir Simulation Conference Special Issue.
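A schematic sketch of the scalarization-and-sharing pattern the summary describes: every thread minimizes a differently weighted average of the two objectives, all threads reuse one shared pool of evaluated points, and the nondominated subset of that pool approximates the Pareto front. This is a reconstruction under stated assumptions, not the authors' DQN implementation; the quadratic objectives stand in for reservoir simulations.

```python
import numpy as np

# Stand-in analytical objectives; in the paper each evaluation is a
# reservoir simulation, so these quadratics are illustrative only.
f1 = lambda x: float(np.sum((x - 1.0) ** 2))
f2 = lambda x: float(np.sum((x + 1.0) ** 2))

def nondominated(points):
    """Keep the (f1, f2) pairs not dominated by any other pair
    (minimization in both objectives)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# A shared pool of search points, evaluated once and reused by
# all optimization threads (no extra simulations per thread).
rng = np.random.default_rng(0)
pool = [rng.uniform(-2.0, 2.0, size=2) for _ in range(200)]
shared = [(f1(x), f2(x)) for x in pool]

# Each "thread" minimizes a differently weighted average of the
# two objectives over the same shared evaluations.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = min(shared, key=lambda p: w * p[0] + (1.0 - w) * p[1])
    print(f"w = {w:.2f}: best weighted point {best}")

# The nondominated subset of the shared pool approximates the front.
pareto_front = nondominated(shared)
print(f"{len(pareto_front)} nondominated points")
```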


2012 ◽  
Vol 2012 ◽  
pp. 1-18 ◽  
Author(s):  
Rajni Sharma ◽  
Janak Raj Sharma

We derive a family of eighth-order multipoint methods for the solution of nonlinear equations. In terms of computational cost, the family requires evaluations of only three functions and one first derivative per iteration. This implies that the efficiency index of the present methods is 1.682. Kung and Traub (1974) conjectured that multipoint iteration methods without memory based on n evaluations have optimal order 2^(n-1). Thus, the family agrees with the Kung-Traub conjecture for the case n = 4. Computational results demonstrate that the developed methods are efficient and robust compared with many well-known methods.
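Claims of eighth-order convergence like this one are typically verified numerically via the computational order of convergence, estimated from three consecutive errors; a generic sketch (a standard technique, not code from the paper):

```python
import math

def coc(errors):
    """Computational order of convergence from consecutive absolute
    errors e_n = |x_n - alpha|:
    rho ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e0, e1, e2 = errors[-3], errors[-2], errors[-1]
    return math.log(e2 / e1) / math.log(e1 / e0)

# illustrative errors shrinking at eighth order, e_{n+1} ≈ C * e_n**8
errs = [1e-2, 1e-16, 1e-128]
print(coc(errs))  # ≈ 8.0
```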


2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Rajinder Thukral

A new family of eighth-order derivative-free methods for solving nonlinear equations is presented. It is proved that these methods have convergence order eight. These new methods are derivative-free and use only four evaluations of the function per iteration. In fact, we have obtained the optimal order of convergence, which supports the Kung-Traub conjecture: Kung and Traub conjectured that multipoint iteration methods without memory based on n evaluations could achieve optimal convergence order 2^(n-1). Thus, we present new derivative-free methods which agree with the Kung-Traub conjecture for n = 4. Numerical comparisons are made to demonstrate the performance of the presented methods.


2011 ◽  
Vol 5 (1) ◽  
pp. 93-109 ◽  
Author(s):  
M. Heydari ◽  
S.M. Hosseini ◽  
G.B. Loghmani

In this paper, two new families of eighth-order iterative methods for solving nonlinear equations are presented. These methods are developed by combining a class of optimal two-point methods with a modified Newton's method in the third step. Per iteration, the presented methods require three evaluations of the function and one evaluation of its first derivative and therefore have efficiency index equal to 1.682. Kung and Traub conjectured that a multipoint iteration without memory based on n evaluations could achieve optimal convergence order 2^(n-1). Thus the new families of eighth-order methods agree with the Kung-Traub conjecture for the case n = 4. Numerical comparisons are made with several other existing methods to show the performance of the presented methods.
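For illustration, a sketch of a classical optimal two-point method of the kind such three-step constructions build on: Ostrowski's fourth-order scheme, which uses two function evaluations and one derivative evaluation per iteration. It is a standard textbook method, not the specific family derived in this paper; the test equation is an assumption.

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Ostrowski's optimal two-point method: fourth order from the
    three evaluations f(x), f'(x), f(y) per iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)                  # single derivative evaluation
        y = x - fx / dfx             # Newton predictor
        fy = f(y)
        x = y - (fy / dfx) * fx / (fx - 2.0 * fy)  # Ostrowski corrector
    return x

# illustrative usage on the same test equation as above
root = ostrowski(lambda x: x**3 + 4*x**2 - 10,
                 lambda x: 3*x**2 + 8*x, 1.3)
```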


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 448
Author(s):  
Bryan Coutts ◽  
Mark Girard ◽  
John Watrous

We identify necessary and sufficient conditions for a quantum channel to be optimal for any convex optimization problem in which the optimization is taken over the set of all quantum channels of a fixed size. Optimality conditions for convex optimization problems over the set of all quantum measurements of a given system having a fixed number of measurement outcomes are obtained as a special case. In the case of linear objective functions for measurement optimization problems, our conditions reduce to the well-known Holevo-Yuen-Kennedy-Lax measurement optimality conditions. We illustrate how our conditions can be applied to various state transformation problems having non-linear objective functions based on the fidelity, trace distance, and quantum relative entropy.
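For reference, the linear special case mentioned above is the classical minimum-error state-discrimination setting; a standard statement of the Holevo-Yuen-Kennedy-Lax conditions (recalled from the general literature, not quoted from this paper) is:

```latex
% For states \rho_k with prior probabilities p_k, a measurement
% \{M_k\} maximizes the success probability
% \sum_k p_k \operatorname{Tr}(\rho_k M_k) if and only if
\Gamma = \sum_k p_k \,\rho_k M_k = \Gamma^{\dagger},
\qquad
\Gamma \succeq p_j \rho_j \quad \text{for all outcomes } j.
```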


2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
R. Thukral

A new family of eighth-order derivative-free methods for solving nonlinear equations is presented. It is proved that these methods have convergence order eight. These new methods are derivative-free and use only four evaluations of the function per iteration. In fact, we have obtained the optimal order of convergence, which supports the Kung-Traub conjecture: Kung and Traub conjectured that multipoint iteration methods without memory based on n evaluations could achieve optimal convergence order 2^(n-1). Thus, we present new derivative-free methods which agree with the Kung-Traub conjecture for n = 4. Numerical comparisons are made to demonstrate the performance of the methods presented.


2014 ◽  
Vol 2014 ◽  
pp. 1-6 ◽  
Author(s):  
J. P. Jaiswal

Two derivative-free Steffensen-type methods with memory for solving nonlinear equations are presented. By making use of a suitable self-accelerating parameter in existing optimal fourth- and eighth-order without-memory methods, the order of convergence is increased without any extra function evaluation. Therefore, the efficiency index is also increased, which is the main contribution of this paper. The self-accelerating parameters are estimated using Newton's interpolation. To show the applicability of the proposed methods, some numerical illustrations are presented.
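A minimal sketch of the with-memory idea, using the classical Traub-Steffensen acceleration rather than the paper's fourth- and eighth-order schemes: the free parameter gamma is re-estimated each step from values already computed, which is known to raise Steffensen's order from 2 to about 2.414 with no extra function evaluation. The test equation and starting values are assumptions.

```python
def steffensen_with_memory(f, x0, gamma0=0.01, tol=1e-12, max_iter=50):
    """Steffensen-type iteration whose free parameter gamma is
    re-estimated each step from already-computed values
    (gamma ≈ -1/f'(root)), raising the order from 2 to about
    2.414 without extra function evaluations."""
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(max_iter):
        if abs(fx) < tol:
            return x
        w = x + gamma * fx                       # auxiliary point
        fw = f(w)
        x_new = x - gamma * fx * fx / (fw - fx)  # Steffensen step
        fx_new = f(x_new)
        # self-accelerating update: secant estimate of -1/f'(x),
        # built only from values already computed this step
        gamma = -(x_new - x) / (fx_new - fx)
        x, fx = x_new, fx_new
    return x

root = steffensen_with_memory(lambda x: x**3 + 4*x**2 - 10, 1.3)
```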

