Estimation of Beta-Pareto Distribution Based on Several Optimization Methods

Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1055
Author(s):  
Badreddine Boumaraf ◽  
Nacira Seddik-Ameur ◽  
Vlad Stefan Barbu

This paper is concerned with the maximum likelihood estimators of the Beta-Pareto distribution introduced in Akinsete et al. (2008), which arises from the mixing of two probability distributions, the Beta and the Pareto. Since these estimators cannot be obtained explicitly, we use nonlinear optimization methods that compute them numerically. The methods we investigate are the Newton-Raphson method, the gradient method and the conjugate gradient method; for the conjugate gradient method we use the Fletcher-Reeves variant. The corresponding algorithms are developed and the performance of the methods is confirmed by an extensive simulation study. In order to compare several competing models, namely the generalized Beta-Pareto, Beta, Pareto, Gamma and Beta-Pareto distributions, model selection criteria are used. We first consider completely observed data and then assume the observations to be right censored, deriving the same type of results.
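
The abstract does not reproduce the estimating equations, but the numerical MLE setup it describes can be sketched as follows. This minimal sketch assumes the Beta-Pareto density of Akinsete et al. (2008), f(x) = k/(θB(α,β)) (1-(x/θ)^{-k})^{α-1} (x/θ)^{-kβ-1} for x ≥ θ, fixes θ just below the sample minimum for simplicity, and minimizes the negative log-likelihood with SciPy's nonlinear conjugate gradient (a Polak-Ribière-type method; the paper implements Newton-Raphson, gradient and Fletcher-Reeves steps directly). The sample parameters and the function name `neg_log_lik` are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def neg_log_lik(log_params, x, theta):
    # log-parameterization keeps alpha, beta, k strictly positive
    a, b, k = np.exp(log_params)
    z = (x / theta) ** (-k)                      # lies in (0, 1) for x > theta
    ll = (np.log(k) - np.log(theta) - betaln(a, b)
          + (a - 1.0) * np.log1p(-z)
          - (k * b + 1.0) * np.log(x / theta))
    return -np.sum(ll)

# Illustrative sample: if U ~ Beta(alpha, beta), then theta * (1 - U)^(-1/k)
# is Beta-Pareto distributed (here alpha = 2, beta = 3, k = 1.5, theta = 1)
rng = np.random.default_rng(0)
x = (1.0 - rng.beta(2.0, 3.0, size=500)) ** (-1.0 / 1.5)
theta = 0.999 * x.min()                          # keep every point above theta

# nonlinear conjugate gradient on the negative log-likelihood
res = minimize(neg_log_lik, x0=np.zeros(3), args=(x, theta), method="CG")
print("alpha, beta, k =", np.exp(res.x))
```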


2012 ◽  
Vol 11 (02) ◽  
pp. 165-172
Author(s):  
Jingxin Na ◽  
Wei Chen ◽  
Haipeng Liu

For the one-step inverse method, an iteration scheme based on a quasi-conjugate-gradient method is proposed to replace the Newton–Raphson method. It starts from the physical meaning of the elemental unbalanced force and does not need to solve the system of finite element equations. It not only inherits the advantages of the conjugate gradient method but also avoids non-convergence of the solution process. Finally, the validity of the proposed algorithm is verified by comparing the simulation results obtained by the method in this paper with those obtained through the one-step inverse module of Dynaform and with practical drawn parts.
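
The abstract does not give the update rule, so the following is only a minimal matrix-free sketch of the general idea: drive the nodal unbalanced force r(u) = f_ext − f_int(u) to zero along conjugated directions, estimating the directional stiffness by a secant difference so that no tangent matrix is assembled and no finite element system is solved. The residual used here is a hypothetical stand-in (a chain of nonlinear springs), not the one-step inverse formulation itself.

```python
import numpy as np

def residual(u, f_ext):
    # hypothetical stand-in: chain of springs with cubic hardening, node 0 fixed
    e = np.diff(np.concatenate(([0.0], u)))       # element stretches
    s = e + 0.1 * e ** 3                          # element (spring) forces
    f_int = s - np.append(s[1:], 0.0)             # assembled internal nodal force
    return f_ext - f_int                          # nodal unbalanced force

def quasi_cg_solve(f_ext, tol=1e-10, max_it=500, eps=1e-6):
    u = np.zeros_like(f_ext)
    r = residual(u, f_ext)
    d = r.copy()
    for it in range(max_it):
        # secant estimate of the directional stiffness d^T K d:
        # no tangent matrix is assembled, no linear FE system is solved
        dKd = d @ (r - residual(u + eps * d, f_ext)) / eps
        u = u + (r @ d) / dKd * d                 # step along the search direction
        r_new = residual(u, f_ext)
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + (r_new @ r_new) / (r @ r) * d # Fletcher-Reeves-type update
        r = r_new
    return u, it

f_ext = np.zeros(8)
f_ext[-1] = 1.0                                   # pull the free end of the chain
u, iters = quasi_cg_solve(f_ext)
print(u, iters)
```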



Author(s):  
Chenna Nasreddine ◽  
Sellami Badreddine ◽  
Belloufi Mohammed

In this paper, we present a new hybrid conjugate gradient method for solving nonlinear unconstrained optimization problems, in which the direction parameter is a convex combination of the Liu–Storey (LS) and Hager–Zhang (HZ) conjugate gradient parameters. This method possesses the sufficient descent property and global convergence under the strong Wolfe line search. At the end of the paper, we illustrate the method with some numerical examples.
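
A minimal sketch of such a hybrid direction follows, using the standard LS and HZ formulas, β_k = (1−λ)β_k^{LS} + λβ_k^{HZ}, and SciPy's strong-Wolfe line search. The fixed λ = 0.5, the restart safeguards, and the name `hybrid_cg` are assumptions of this sketch; the paper's rule for choosing the convex-combination parameter is not given in the abstract.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def hybrid_cg(f, grad, x0, lam=0.5, tol=1e-6, max_it=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for it in range(max_it):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]   # strong Wolfe conditions
        if alpha is None:                              # line search failed:
            d, alpha = -g, 1e-4                        # steepest-descent restart
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        dy, dg = d @ y, d @ g
        if abs(dy) < 1e-12 or abs(dg) < 1e-12:
            d = -g_new                                 # restart on breakdown
        else:
            beta_ls = (g_new @ y) / (-dg)              # Liu-Storey parameter
            beta_hz = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy  # Hager-Zhang
            beta = (1.0 - lam) * beta_ls + lam * beta_hz
            d = -g_new + beta * d
        x, g = x_new, g_new
    return x, it

x_star, iters = hybrid_cg(rosen, rosen_der, [-1.2, 1.0])
print(x_star, iters)
```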



2018 ◽  
Vol 39 (1) ◽  
pp. 426-450 ◽  
Author(s):  
Siegfried Cools ◽  
Emrullah Fatih Yetkin ◽  
Emmanuel Agullo ◽  
Luc Giraud ◽  
Wim Vanroose



