ON THE PROPERTIES AND MLEs OF GENERALIZED ODD GENERALIZED EXPONENTIAL-EXPONENTIAL DISTRIBUTION

2021 ◽  
Vol 4 (4) ◽  
pp. 155-165
Author(s):  
Aminu Suleiman Mohammed ◽  
Badamasi Abba ◽  
Abubakar G. Musa

To adequately capture the behavior of some lifetime data sets, a generalization, extension, or modification of classical distributions is required. In this paper, we introduce a new generalization of the exponential distribution, called the generalized odd generalized exponential-exponential distribution. The proposed distribution can model lifetime data with different failure rates, including increasing, decreasing, unimodal, bathtub, and decreasing-increasing-decreasing failure rates. Various properties of the model, such as the quantile function, moments, mean deviations, Rényi entropy, and order statistics, are derived. We provide approximations for the values of the mean, variance, skewness, kurtosis, and mean deviations using Monte Carlo simulation experiments. Estimation of the distribution parameters is performed using the maximum likelihood method, and Monte Carlo simulation experiments are used to assess this estimation method. The method of maximum likelihood is shown to provide promising parameter estimates, and hence can be adopted in practice for estimating the parameters of the distribution. Applications to real and simulated datasets indicate that the new model provides a better fit than the other compared distributions.
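The Monte Carlo assessment described above (repeatedly simulate, estimate by ML, then summarize bias and MSE) can be sketched as follows. Since the GOGE-exponential density is not reproduced here, a plain exponential model, whose rate MLE has the closed form 1/sample mean, stands in; the workflow is the same for any distribution.

```python
import numpy as np

def mc_assess_mle(rate=2.0, n=200, reps=1000, seed=1):
    """Monte Carlo assessment of the ML estimator of an exponential rate."""
    rng = np.random.default_rng(seed)
    estimates = np.empty(reps)
    for r in range(reps):
        x = rng.exponential(scale=1.0 / rate, size=n)  # simulate a sample
        estimates[r] = 1.0 / x.mean()                  # MLE of the rate
    bias = estimates.mean() - rate
    mse = np.mean((estimates - rate) ** 2)
    return bias, mse

bias, mse = mc_assess_mle()
```

For a distribution without a closed-form MLE (such as the GOGE-exponential), the line computing the estimate would be replaced by a numerical maximization of the log-likelihood.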

1989 ◽  
Vol 26 (2) ◽  
pp. 214-221 ◽  
Author(s):  
Subhash Sharma ◽  
Srinivas Durvasula ◽  
William R. Dillon

The authors report some results on the behavior of alternative covariance structure estimation procedures in the presence of non-normal data. They conducted Monte Carlo simulation experiments with a factorial design involving three levels of skewness, three levels of kurtosis, and three different sample sizes. For normal data, among all the elliptical estimation techniques, elliptical reweighted least squares (ERLS) was equivalent in performance to ML. However, as expected, for non-normal data parameter estimates were unbiased for ML and the elliptical estimation techniques, whereas the bias in standard errors was substantial for GLS and ML. Among the elliptical estimation techniques, ERLS was superior in performance. On the basis of the simulation results, the authors recommend that researchers use ERLS for both normal and non-normal data.


Algorithms ◽  
2019 ◽  
Vol 12 (12) ◽  
pp. 246
Author(s):  
G. Srinivasa Rao ◽  
Fiaz Ahmad Bhatti ◽  
Muhammad Aslam ◽  
Mohammed Albassam

A multicomponent system of k components with independent and identically distributed random strengths X1, X2, …, Xk, with each component subjected to a random stress, is in working condition if and only if at least s out of the k strengths exceed the stress. Reliability is measured when strength and stress follow an exponentiated moment-based exponential distribution with different shape parameters. Reliability is estimated from the samples using maximum likelihood (ML) on the fitted distributions of strength and stress. Asymptotic estimates of reliability are compared using Monte Carlo simulation. Applications to forest data and to the breaking strengths of jute fiber show the usefulness of the model.
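The s-out-of-k working condition above translates directly into a Monte Carlo estimate of the multicomponent reliability. This is an illustrative sketch only: plain exponential variates stand in for the exponentiated moment-based exponential draws used in the paper, and all k components share one common stress draw, as the definition requires.

```python
import numpy as np

def reliability_s_out_of_k(s, k, strength_rate, stress_rate,
                           reps=100_000, seed=0):
    """Monte Carlo estimate of P(at least s of k strengths exceed the stress)."""
    rng = np.random.default_rng(seed)
    strengths = rng.exponential(1.0 / strength_rate, size=(reps, k))
    stress = rng.exponential(1.0 / stress_rate, size=(reps, 1))  # shared per system
    working = (strengths > stress).sum(axis=1) >= s
    return working.mean()

r = reliability_s_out_of_k(s=2, k=4, strength_rate=1.0, stress_rate=2.0)
```

Lowering s can only enlarge the set of working systems, so the estimate is monotone decreasing in s for a fixed sample of draws.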


2015 ◽  
Vol 19 (2) ◽  
pp. 145-155 ◽  
Author(s):  
Heri Retnawati

THE COMPARISON OF ESTIMATION OF LATENT TRAITS USING MAXIMUM LIKELIHOOD AND BAYES METHODS

This study aimed to compare the accuracy of latent trait estimation under the logistic model using the joint maximum likelihood (ML) and Bayes methods. The study used the Monte Carlo simulation method, with students' responses to the national junior secondary school mathematics examination as the data model; the simulation variables were test length and number of examinees. The data were generated using SAS/IML with 40 replications, and each data set was estimated with ML and Bayes. The estimation results were then compared with the true abilities by computing the mean squared error (MSE) and the correlation between the true latent abilities and the estimates; the method with the smaller MSE is considered the better estimation method. The results show that for tests of 15, 20, 25, and 30 items with 500 and 1,000 examinees, the MSE was not yet stable, but when the number of examinees reached 1,500, the ML and Bayes methods achieved nearly equal estimation accuracy. For tests of 15 and 20 items with 500, 1,000, and 1,500 examinees, the MSE remained unstable, whereas for tests of 25 and 30 items, with 500, 1,000, or 1,500 examinees, the ML method produced the more accurate estimates. Keywords: ability estimation, maximum likelihood method, Bayes method
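The scoring logic of the study (estimate a latent value per examinee, then rank methods by MSE and by correlation with the truth) can be illustrated on a toy problem. This is a hedged stand-in: instead of the logistic IRT model, a normal-means setup is used, where the ML estimate is the raw observation and the Bayes estimate is the posterior mean under a standard normal prior.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1500
theta = rng.normal(0.0, 1.0, n)            # true latent abilities
obs = theta + rng.normal(0.0, 0.7, n)      # one noisy observation per examinee

ml_est = obs                               # ML estimate in this toy model
shrink = 1.0 / (1.0 + 0.7 ** 2)            # posterior-mean shrinkage factor
bayes_est = shrink * obs                   # Bayes estimate, N(0, 1) prior

def mse(est, truth):
    return np.mean((est - truth) ** 2)

mse_ml, mse_bayes = mse(ml_est, theta), mse(bayes_est, theta)
corr_ml = np.corrcoef(ml_est, theta)[0, 1]
```

In this toy setting the Bayes posterior mean dominates raw ML in MSE; in the study, which method wins depends on test length and sample size.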


2020 ◽  
pp. 845-853 ◽  
Author(s):  
Bsma Abdul Hameed ◽  
Abbas N. Salman ◽  
Bayda Atiya Kalaf

This paper deals with the estimation of stress-strength reliability for a component whose strength is independent of the lower and upper bound stresses it is exposed to, when the stresses and strength follow the Inverse Kumaraswamy distribution. Three estimation approaches were applied, namely the maximum likelihood, moment, and shrinkage methods. Monte Carlo simulation experiments were performed to compare the estimation methods based on the mean squared error criterion.
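The quantity being estimated here is a reliability of the form R = P(Y1 < X < Y2), where X is the strength and Y1, Y2 are the lower and upper bound stresses. A minimal Monte Carlo sketch of that probability follows; uniform variates stand in for the Inverse Kumaraswamy draws used in the paper, chosen so the exact value is easy to verify by hand.

```python
import numpy as np

def reliability_two_stresses(reps=200_000, seed=11):
    """Monte Carlo estimate of R = P(Y1 < X < Y2) with independent draws."""
    rng = np.random.default_rng(seed)
    y1 = rng.uniform(0.0, 0.5, reps)   # lower bound stress
    y2 = rng.uniform(1.5, 2.0, reps)   # upper bound stress
    x = rng.uniform(0.0, 2.0, reps)    # strength, independent of both stresses
    return np.mean((y1 < x) & (x < y2))

r = reliability_two_stresses()
```

For these uniforms the exact value is (E[Y2] - E[Y1]) / 2 = (1.75 - 0.25) / 2 = 0.75, which the simulation recovers to within Monte Carlo error.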


2021 ◽  
Author(s):  
Mehmet Niyazi Cankaya ◽  
Roberto Vila

Abstract: The maximum log_q likelihood estimation method is a generalization of the familiar maximum log-likelihood method designed to handle non-identical observations (inliers and outliers). The parameter q is a tuning constant that controls the modeling capability. The Weibull distribution is flexible and popular for problems in engineering. In this study, the method is used to estimate the parameters of the Weibull distribution when non-identical observations exist. Since the main idea rests on the modeling capability of the objective function p(x; θ) = log_q[f(x; θ)], we observe that the finiteness of the score functions cannot play a role in robust estimation for inliers. The properties of the Weibull distribution are examined. In the numerical experiments, the parameters of the Weibull distribution are estimated by the log_q and, as its special case, the log likelihood methods under different designs of contamination of the underlying Weibull distribution. The optimization is performed via a genetic algorithm. The modeling competence of p(x; θ) and its insensitivity to non-identical observations are assessed by Monte Carlo simulation. The value of q can be chosen using the mean squared error in simulation and the p-value of the Kolmogorov-Smirnov test statistic used to evaluate fitting competence. Thus, the problem of determining the value of q for real data sets can be overcome.
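The q-logarithm underlying the objective is log_q(x) = (x^(1-q) - 1)/(1 - q), which reduces to log(x) as q → 1. A minimal sketch of maximum log_q likelihood for the Weibull follows; the paper optimizes via a genetic algorithm, so the Nelder-Mead routine below is a stand-in, and the sample sizes and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def log_q(x, q):
    """q-logarithm; reduces to the ordinary log as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def neg_logq_lik(params, data, q):
    """Negative sum of log_q of the Weibull density (to be minimized)."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    pdf = weibull_min.pdf(data, c=shape, scale=scale)
    return -np.sum(log_q(pdf, q))

rng = np.random.default_rng(0)
data = weibull_min.rvs(c=2.0, scale=1.5, size=500, random_state=rng)
res = minimize(neg_logq_lik, x0=[1.0, 1.0], args=(data, 0.9),
               method="Nelder-Mead")
shape_hat, scale_hat = res.x
```

With q close to 1 and clean data, the estimates track the ordinary MLE; the robustness benefits of q ≠ 1 show up once contamination is injected into the sample.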


2021 ◽  
Vol 9 (3) ◽  
pp. 555-586
Author(s):  
Hanaa Elgohari ◽  
Mohamed Ibrahim ◽  
Haitham Yousof

In this paper, a new generalization of the Pareto type II model is introduced and studied. The new density can be “right skewed” with a heavy tail, and its corresponding failure rate can be “J-shaped”, “decreasing”, or “upside down (increasing-constant-decreasing)”. The new model may be used as an “under-dispersed” or “over-dispersed” model. Bayesian and non-Bayesian estimation methods are considered, and the performance of all methods is assessed via a simulation study. The Bayesian and non-Bayesian estimation methods are compared in modeling real data via two applications, where the maximum likelihood method proves the best estimation method; it is therefore used in comparing the competitive models. Before applying the maximum likelihood method, we performed simulation experiments to assess its finite-sample behavior using biases and mean squared errors.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Hisham M. Almongy ◽  
Ehab M. Almetwally ◽  
Randa Alharbi ◽  
Dalia Alnagar ◽  
E. H. Hafez ◽  
...  

This paper is concerned with the estimation of the Weibull generalized exponential distribution (WGED) parameters based on adaptive Type-II progressive (ATIIP) censored samples. Maximum likelihood estimation (MLE), maximum product spacing (MPS), and Bayesian estimation based on Markov chain Monte Carlo (MCMC) methods are applied to find the best estimation method. Monte Carlo simulation is used to compare the three estimation methods based on the ATIIP-censored sample, and bootstrap confidence intervals are also constructed. Real data on single carbon fibers and electrical data are analyzed to show how the schemes work in practice.
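Of the three methods named above, maximum product spacing (MPS) is the least familiar: it maximizes the sum of log spacings of the fitted CDF evaluated at the ordered sample. The sketch below applies MPS to a complete (uncensored) exponential sample as a hedged stand-in; the paper's setting additionally requires a censoring-aware objective for the WGED.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon

def neg_log_spacing(rate, data):
    """Negative log product of CDF spacings for an exponential(rate) model."""
    if rate <= 0:
        return np.inf
    cdf = expon.cdf(np.sort(data), scale=1.0 / rate)
    spacings = np.diff(np.concatenate(([0.0], cdf, [1.0])))
    spacings = np.clip(spacings, 1e-300, None)  # guard against zero spacings
    return -np.sum(np.log(spacings))

rng = np.random.default_rng(7)
data = rng.exponential(scale=0.5, size=300)    # true rate = 2
res = minimize(lambda p: neg_log_spacing(p[0], data), x0=[1.0],
               method="Nelder-Mead")
rate_hat = res.x[0]
```

MPS is consistent under the same conditions as ML and is often preferred when the density is unbounded or the sample is heavily censored.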


Author(s):  
Hassan Tawakol A. Fadol

The purpose of this paper was to identify the values of the shape parameters of the binomial, Poisson, and normal distributions using the maximum likelihood estimation method. A differentiation criterion was used to estimate the shape parameter across the probability distributions and to arrive at the best estimate of the shape parameter for small, medium, and large sample sizes. The problem was to find the estimates of the population characteristics that come closest, in terms of mean squared error, to the true values, and to examine the effect of the estimation method on the shape parameter estimates across different sample sizes and different shape parameter values. A descriptive and inductive approach was adopted in the analysis of the data, generating 1,000 random numbers of different sizes by simulation using MATLAB. Among the results reached: with a sample size of (10), to estimate the small shape parameter (0.3) for the binomial, Poisson, and normal distributions, the Poisson distribution can be used because it performed best among the distributions, while to estimate the shape parameters (0.5), (0.7), and (0.9), the binomial distribution is preferable; with a sample size of (70), to estimate the small shape parameter (0.3) of the binomial, Poisson, and normal distributions, the normal distribution can be used because it performed best among the distributions.


2016 ◽  
Author(s):  
Rui J. Costa ◽  
Hilde Wilkinson-Herbots

Abstract: The isolation-with-migration (IM) model is commonly used to make inferences about gene flow during speciation, using polymorphism data. However, Becquet and Przeworski (2009) report that the parameter estimates obtained by fitting the IM model are very sensitive to the model's assumptions (including the assumption of constant gene flow until the present). This paper is concerned with the isolation-with-initial-migration (IIM) model of Wilkinson-Herbots (2012), which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a very fast method of fitting an extended version of the IIM model, which also allows for asymmetric gene flow and unequal population sizes. This is a maximum-likelihood method, applicable to data on the number of segregating sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used to distinguish between alternative models representing different evolutionary scenarios, by means of likelihood ratio tests. We illustrate the procedure on pairs of Drosophila sequences from approximately 30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The R code to fit the IIM model can be found in the supplementary files of this paper.
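The model-comparison device mentioned above, the likelihood ratio test between nested models, is generic and easy to sketch. The toy example below uses a deliberately simple nested pair, exponential with a fixed rate of 1 (null) versus exponential with a free rate (alternative), rather than the IIM model itself.

```python
import numpy as np
from scipy.stats import chi2, expon

rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=400)     # data generated under the null

# Log-likelihood under the null (rate fixed at 1)
ll_null = np.sum(expon.logpdf(x, scale=1.0))

# Log-likelihood under the alternative, maximized over the rate
rate_hat = 1.0 / x.mean()                    # MLE of the free rate
ll_alt = np.sum(expon.logpdf(x, scale=1.0 / rate_hat))

# LR statistic ~ chi-square with df = difference in free parameters (here 1)
lr_stat = 2.0 * (ll_alt - ll_null)
p_value = chi2.sf(lr_stat, df=1)
```

Because the alternative's likelihood is maximized over a superset of the null's parameter space, the statistic is always non-negative; a small p-value would favor the richer model.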


2006 ◽  
Vol 3 (4) ◽  
pp. 1603-1627 ◽  
Author(s):  
W. Wang ◽  
P. H. A. J. M. van Gelder ◽  
J. K. Vrijling ◽  
X. Chen

Abstract. Lo's R/S test (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983), and the maximum likelihood estimation method implemented in S-Plus (S-MLE) are evaluated through intensive Monte Carlo simulations for detecting the existence of long memory. It is shown that it is difficult to find an appropriate lag q for Lo's test for different AR and ARFIMA processes, which makes the use of Lo's test very tricky. In general, the GPH test outperforms Lo's test, but in cases of strong autocorrelation (e.g., AR(1) processes with φ=0.97 or even 0.99), the GPH test is totally useless, even for time series of large size. Although the S-MLE method does not provide a statistical test for the existence of long memory, the estimates of d given by S-MLE seem to give a good indication of whether long memory is present. Data size has a significant impact on the power of all three methods. Generally, the power of Lo's test and the GPH test increases with data size, and the estimates of d from the GPH test and S-MLE converge as data size increases. According to the results of Lo's R/S test, the GPH test, and the S-MLE method, all daily flow series exhibit long memory. The intensity of long memory in daily streamflow processes has only a very weak positive relationship with the scale of the watershed.
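The GPH estimator discussed above regresses the log periodogram on the log of the spectral regressor at the lowest Fourier frequencies; the slope estimates the long-memory parameter d. A minimal sketch follows, using the common (but not universal) choice of m = sqrt(n) ordinates, tested on white noise where d = 0.

```python
import numpy as np

def gph_estimate(x):
    """GPH (log-periodogram) estimate of the long-memory parameter d."""
    n = len(x)
    m = int(np.sqrt(n))                      # number of low-frequency ordinates
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    fft = np.fft.fft(x - np.mean(x))
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2.0 * np.pi * n)
    # Regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2)); slope estimates d.
    regressor = -np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return slope

rng = np.random.default_rng(3)
white = rng.normal(size=4096)
d_hat = gph_estimate(white)
```

For an ARFIMA(0, d, 0) series the same routine would recover d up to sampling error of order 1/sqrt(m), which is why the abstract notes that the estimates converge as data size grows.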

