A Cluster Truncated Pareto Distribution and Its Applications

2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Mei Ling Huang ◽  
Vincenzo Coia ◽  
Percy Brill

The Pareto distribution is a heavy-tailed distribution with many real-world applications. The tail of the distribution is important, but the threshold of the distribution is difficult to determine in some situations. In this paper we consider two real-world examples with heavy-tailed observations, which lead us to propose a mixture truncated Pareto distribution (MTPD) and study its properties. We then construct a cluster truncated Pareto distribution (CTPD) by using a two-point slope technique to estimate the MTPD from a random sample. We apply the MTPD and CTPD to the two examples and compare the proposed method with existing estimation methods. The results of log-log plots and goodness-of-fit tests show that the MTPD and the cluster estimation method fit the real-world data very well.
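The mixture (MTPD) and cluster (CTPD) constructions build on the single truncated Pareto distribution. As a rough illustration of that underlying building block only, the sketch below fits the tail index of a truncated Pareto by numerically maximizing its log-likelihood, taking the sample minimum and maximum as the truncation points; the synthetic data, variable names, and truncation choice are all assumptions for illustration and do not reproduce the paper's method.

```python
# Minimal sketch: maximum-likelihood fit of a (single) truncated Pareto
# tail index.  The paper's MTPD/CTPD variants are not reproduced here.
import numpy as np
from scipy.optimize import minimize_scalar

def truncated_pareto_negloglik(alpha, x, gamma, nu):
    """Negative log-likelihood of a Pareto law truncated to [gamma, nu]."""
    n = len(x)
    return -(n * np.log(alpha) + n * alpha * np.log(gamma)
             - (alpha + 1) * np.sum(np.log(x))
             - n * np.log(1.0 - (gamma / nu) ** alpha))

rng = np.random.default_rng(0)
x = rng.pareto(1.5, size=500) + 1.0       # synthetic heavy-tailed sample (assumption)
gamma, nu = x.min(), x.max()              # truncation points set to sample extremes (assumption)

res = minimize_scalar(truncated_pareto_negloglik, bounds=(0.01, 10.0),
                      args=(x, gamma, nu), method="bounded")
print(f"estimated tail index alpha = {res.x:.3f}")
```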

2013 ◽  
Vol 2013 ◽  
pp. 1-12
Author(s):  
Mei Ling Huang ◽  
Ke Zhao

We propose a weighted estimation method for risk models. Two examples of natural disasters are studied: hurricane losses in the USA and forest fire losses in Canada. Risk data are often fitted by a heavy-tailed distribution, for example a Pareto distribution, which has many applications in economics, actuarial science, survival analysis, networks, and other stochastic models. Inference for the Pareto distribution is difficult because it has infinite moments in the heavy-tailed case. This paper first applies the truncated Pareto distribution to overcome this difficulty. Second, we propose a weighted semiparametric method to estimate the truncated Pareto distribution. The idea of the new method is to place less weight on the extreme data values. The paper gives an exact efficiency function, L1-optimal weights, and L2-optimal weights for the new estimator. Monte Carlo simulation results confirm the theoretical conclusions. The two examples mentioned above are analyzed using the proposed method. The results show that the new estimator has smaller mean squared error than several existing methods and fits the risk data well.
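The core idea, placing less weight on extreme observations when estimating the truncated Pareto tail index, can be sketched as a weighted log-likelihood. The linearly decreasing rank-based weights below are purely illustrative and are not the L1-optimal or L2-optimal weights derived in the paper; the synthetic data and variable names are assumptions as well.

```python
# Minimal sketch: weighted likelihood fit of the truncated Pareto tail index,
# down-weighting the most extreme observations (illustrative weights only).
import numpy as np
from scipy.optimize import minimize_scalar

def weighted_negloglik(alpha, x, w, gamma, nu):
    # Per-observation truncated Pareto log-likelihood, weighted by w.
    loglik_i = (np.log(alpha) + alpha * np.log(gamma)
                - (alpha + 1) * np.log(x)
                - np.log(1.0 - (gamma / nu) ** alpha))
    return -np.sum(w * loglik_i)

rng = np.random.default_rng(1)
x = np.sort(rng.pareto(1.2, size=1000) + 1.0)   # synthetic heavy-tailed sample (assumption)
gamma, nu = x.min(), x.max()

ranks = np.arange(1, len(x) + 1)
w = (len(x) + 1 - ranks).astype(float)          # larger observations get less weight (assumption)
w /= w.sum()

res = minimize_scalar(weighted_negloglik, bounds=(0.01, 10.0),
                      args=(x, w, gamma, nu), method="bounded")
print(f"weighted estimate of alpha = {res.x:.3f}")
```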


2014 ◽  
Vol 9 (1) ◽  
pp. 51-61 ◽  
Author(s):  
Don Cyr ◽  
Joseph Kushner ◽  
Tomson Ogwang

In this paper, we use three different goodness-of-fit tests for log-normality in conjunction with kernel nonparametric density estimation methods to examine the size distribution of California North Coast wineries both over time and by age. Our kernel density estimates indicate that the size distribution of wineries has changed from positively skewed to bimodal. These results are inconsistent with those in other industries, but are consistent with recent empirical research in the wine industry, which finds that smaller firms account for a growing share of the market. In terms of the distribution of firm size by age, our results indicate that as wineries age, the size distribution becomes less skewed and more bimodal, which is also inconsistent with research on other industries finding that the size distribution becomes more normal as firms age. Our results indicate that, unlike in other industries where entry is very difficult, small firms can enter the wine industry and survive. (JEL Classifications: L11, L22, L25)
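A rough sketch of the two tools mentioned, a goodness-of-fit test for log-normality and a kernel density estimate of firm size, is given below. The winery data are not available here, so a synthetic bimodal sample stands in, and the Shapiro-Wilk test on log sizes is only one possible choice, not necessarily one of the three tests used in the paper.

```python
# Minimal sketch: test log-normality of firm sizes and estimate the size
# distribution nonparametrically with a Gaussian kernel density estimator.
import numpy as np
from scipy import stats
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
sizes = np.concatenate([rng.lognormal(2.0, 0.4, 300),
                        rng.lognormal(4.0, 0.3, 200)])   # synthetic bimodal sizes (assumption)

# Log-normality check: Shapiro-Wilk applied to log-transformed sizes.
stat, p = stats.shapiro(np.log(sizes))
print(f"Shapiro-Wilk on log(size): W = {stat:.3f}, p = {p:.4f}")

# Kernel density estimate of the size distribution and its local modes.
kde = stats.gaussian_kde(sizes)
grid = np.linspace(sizes.min(), sizes.max(), 200)
density = kde(grid)
peaks, _ = find_peaks(density)
print("estimated modes near sizes:", grid[peaks])
```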


Statistics ◽  
2014 ◽  
Vol 49 (5) ◽  
pp. 1026-1041 ◽  
Author(s):  
Marko Obradović ◽  
Milan Jovanović ◽  
Bojana Milošević

2014 ◽  
Vol 11 (1) ◽  
Author(s):  
Felix Nwobi ◽  
Chukwudi Ugomma

In this paper we study different methods for estimating the parameters of the Weibull distribution. These methods are compared in terms of their fits using the mean squared error (MSE) and the Kolmogorov-Smirnov (KS) criteria to select the best method. Goodness-of-fit tests show that the Weibull distribution is a good fit to the squared returns series of weekly stock prices of Cornerstone Insurance PLC. Results show that the mean rank (MR) method is the best among the graphical and analytical procedures. Numerical simulation studies show that the maximum likelihood estimation (MLE) method significantly outperforms the other methods.
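As a hedged illustration of the MLE-plus-KS comparison described above (the graphical mean rank procedure is not reproduced), the sketch below fits a Weibull distribution by maximum likelihood and checks the fit with a Kolmogorov-Smirnov test. A synthetic sample stands in for the squared weekly returns of Cornerstone Insurance PLC, which are not available here.

```python
# Minimal sketch: maximum-likelihood Weibull fit plus a KS goodness-of-fit check.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = stats.weibull_min.rvs(c=0.8, scale=0.05, size=250, random_state=rng)  # synthetic stand-in

# Maximum likelihood estimation of shape and scale (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(data, floc=0)
print(f"MLE: shape = {shape:.3f}, scale = {scale:.4f}")

# Kolmogorov-Smirnov statistic for the fitted distribution.
ks_stat, p_value = stats.kstest(data, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")
```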


2019 ◽  
Vol 1 (12) ◽  
Author(s):  
Mahmood Ul Hassan ◽  
Omar Hayat ◽  
Zahra Noreen

At-site flood frequency analysis is a direct method for estimating flood frequency at a particular site. The appropriate selection of a probability distribution and a parameter estimation method is important for at-site flood frequency analysis. Generalized extreme value, three-parameter log-normal, generalized logistic, Pearson type-III and Gumbel distributions have been considered to describe the annual maximum stream flow at five gauging sites of the Torne River in Sweden. To estimate the parameters of the distributions, the maximum likelihood estimation and L-moments (LM) methods are used. The performance of these distributions is assessed based on goodness-of-fit tests and accuracy measures. At most sites, the best-fitting distributions are those estimated with the LM method. Finally, the most suitable distribution at each site is used to predict the maximum flood magnitude for different return periods.
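One candidate model from the study, the generalized extreme value (GEV) distribution fitted by maximum likelihood and then inverted for return-period quantiles, can be sketched as follows. The Torne River records are not included, so a synthetic series of annual maxima stands in; the L-moments fits and the other four candidate distributions are omitted.

```python
# Minimal sketch: GEV fit to annual maximum flows and return-level prediction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
annual_max_flow = stats.genextreme.rvs(c=-0.1, loc=800, scale=150,
                                       size=60, random_state=rng)   # synthetic annual maxima

# Maximum likelihood fit of the GEV parameters.
c, loc, scale = stats.genextreme.fit(annual_max_flow)
print(f"GEV fit: shape = {c:.3f}, location = {loc:.1f}, scale = {scale:.1f}")

# Flood magnitude for a T-year return period is the (1 - 1/T) quantile.
for T in (10, 50, 100):
    level = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level ~ {level:.0f} m^3/s")
```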


2004 ◽  
Vol 31 (5) ◽  
pp. 892-897 ◽  
Author(s):  
Mario Lefebvre

This paper examines models for the errors in forecasts of river and (or) watershed flows produced by the PREVIS forecasting system, which is used by Alcan, among other companies. We analyzed the following statistical models: generalized Pareto, Laplace, and Gaussian distributions, depending on the flow value forecasted by PREVIS. These models enable us to quantify the precision of the forecasts produced by PREVIS, as well as the risk of seeing the flow exceed a certain critical threshold, given the forecasted flow.

Key words: modeling, Laplace distribution, Pareto distribution, goodness-of-fit tests, critical threshold.
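The kind of risk calculation described, fitting candidate error distributions and converting a forecast into an exceedance probability, can be sketched as below. Laplace and Gaussian fits are shown (the generalized Pareto case is analogous via scipy.stats.genpareto); the PREVIS error data, forecast value, and critical threshold are all illustrative assumptions.

```python
# Minimal sketch: fit candidate forecast-error models and estimate the risk
# that the true flow exceeds a critical threshold given the forecasted flow.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
errors = stats.laplace.rvs(loc=0.0, scale=12.0, size=400, random_state=rng)  # synthetic errors

# Fit both candidate models to the observed forecast errors.
lap_loc, lap_scale = stats.laplace.fit(errors)
norm_loc, norm_scale = stats.norm.fit(errors)

# P(flow > threshold | forecast) = P(error > threshold - forecast).
forecast, threshold = 450.0, 480.0          # illustrative values (m^3/s)
margin = threshold - forecast
p_lap = stats.laplace.sf(margin, lap_loc, lap_scale)
p_norm = stats.norm.sf(margin, norm_loc, norm_scale)
print(f"P(exceed | Laplace) = {p_lap:.3f}, P(exceed | Gaussian) = {p_norm:.3f}")
```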

