Classical and Bayesian Estimation of the Inverse Weibull Distribution: Using Progressive Type-I Censoring Scheme

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ali Algarni ◽  
Mohammed Elgarhy ◽  
Abdullah M Almarashi ◽  
Aisha Fayomi ◽  
Ahmed R El-Saeed

This study addresses estimation of the parameters of the inverse Weibull (IW) distribution under progressive Type-I censoring (PCTI), using both Bayesian and non-Bayesian procedures. To address the issue of censoring time selection, quantiles of the IW lifetime distribution are used as censoring time points for PCTI. For the considered censoring schemes, maximum likelihood estimators (MLEs) and asymptotic confidence intervals (ACIs) for the unknown parameters are constructed. Under the squared error (SEr) loss function, Bayes estimates (BEs) and the corresponding highest posterior density credible intervals are also produced. The BEs are assessed using two methods: Lindley's approximation (LiA) technique and the Metropolis-Hastings (MH) algorithm within Markov Chain Monte Carlo (MCMC). The behaviour of the MLEs and BEs for the specified PCTI schemes is examined via a simulation study that compares the performance of the different suggested estimators. Finally, two real data sets are analyzed for illustration.
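A random-walk Metropolis-Hastings sampler of the kind this abstract describes can be sketched for complete (uncensored) IW data. The parameterization f(x; a, b) = a·b·x^(-(b+1))·exp(-a·x^(-b)), the gamma priors, and the step size below are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

# Illustrative IW parameterization (one of several in the literature):
# f(x; a, b) = a*b * x**(-(b+1)) * exp(-a * x**(-b)),  a, b > 0
def log_post(theta, x, prior_shape=1.0, prior_rate=1e-3):
    a, b = np.exp(theta)                      # sample on the log scale
    ll = (len(x) * (np.log(a) + np.log(b))
          - (b + 1) * np.log(x).sum()
          - a * np.power(x, -b).sum())
    # independent, weakly informative Gamma priors on a and b
    lp = ((prior_shape - 1) * np.log([a, b]) - prior_rate * np.array([a, b])).sum()
    return ll + lp + theta.sum()              # + log|Jacobian| of the exp transform

def metropolis_hastings(x, n_iter=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)                       # start at a = b = 1
    chain = np.empty((n_iter, 2))
    lp_cur = log_post(theta, x)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(2)   # symmetric random walk
        lp_prop = log_post(prop, x)
        if np.log(rng.uniform()) < lp_prop - lp_cur:   # accept/reject
            theta, lp_cur = prop, lp_prop
        chain[i] = np.exp(theta)
    return chain

rng = np.random.default_rng(1)
# simulate IW data via the inverse CDF: F(x) = exp(-a * x**(-b))
a_true, b_true = 2.0, 1.5
u = rng.uniform(size=200)
data = (-np.log(u) / a_true) ** (-1.0 / b_true)
chain = metropolis_hastings(data)
post_mean = chain[2500:].mean(axis=0)         # discard burn-in
```

Under SEr loss the Bayes estimate is the posterior mean, here approximated by the average of the retained draws.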

2018 ◽  
Vol 41 (2) ◽  
pp. 251-267 ◽  
Author(s):  
Abbas Pak ◽  
Arjun Kumar Gupta ◽  
Nayereh Bagheri Khoolenjani

In this paper we study the reliability of a multicomponent stress-strength model assuming that the components follow the power Lindley model. The maximum likelihood estimate of the reliability parameter and its asymptotic confidence interval are obtained. Applying the parametric bootstrap technique, an interval estimate of the reliability is presented. The Bayes estimate and the highest posterior density credible interval of the reliability parameter are also derived using suitable priors on the parameters. Because there is no closed form for the Bayes estimate, we use the Markov Chain Monte Carlo method to obtain an approximate Bayes estimate of the reliability. To evaluate the performance of the different procedures, simulation studies are conducted and a real data example is provided.
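The parametric bootstrap step can be illustrated with a simpler stress-strength model than the power Lindley one: below, stress X and strength Y are assumed exponential (a hypothetical stand-in), for which the reliability MLE has the closed form a/(a+b) with a, b the estimated rates:

```python
import numpy as np

def reliability_mle(x, y):
    """MLE of R = P(X < Y) for exponential stress X and strength Y.

    With rate estimates a = 1/mean(x), b = 1/mean(y), R-hat = a / (a + b)."""
    a, b = 1.0 / x.mean(), 1.0 / y.mean()
    return a / (a + b)

def parametric_bootstrap_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    r_hat = reliability_mle(x, y)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        # resample from the *fitted* model (parametric bootstrap)
        xb = rng.exponential(x.mean(), size=len(x))
        yb = rng.exponential(y.mean(), size=len(y))
        boots[i] = reliability_mle(xb, yb)
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])  # percentile CI
    return r_hat, (lo, hi)

rng = np.random.default_rng(42)
stress = rng.exponential(1.0, size=80)    # X with mean 1
strength = rng.exponential(2.0, size=80)  # Y with mean 2; true R = 2/3
r_hat, (lo, hi) = parametric_bootstrap_ci(stress, strength)
```

The same resample-refit-requantile loop applies under the power Lindley model; only the fitting and simulation steps change.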


2022 ◽  
Vol 19 (3) ◽  
pp. 2330-2354
Author(s):  
M. Nagy ◽  
Adel Fahad Alrasheedi

In this study, we estimate the unknown parameters, reliability, and hazard functions using a generalized Type-I progressive hybrid censoring sample from a Weibull distribution. Maximum likelihood (ML) and Bayesian estimates are calculated using a choice of prior distributions and loss functions, including squared error, general entropy, and LINEX. Bayesian point and interval predictions of unobserved failure times, as well as of a future progressively censored sample, are also developed. Finally, we run simulation tests for the Bayesian approach and a numerical example on real data using the MCMC algorithm.
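The general entropy (GE) loss mentioned here, L(est, θ) ∝ (est/θ)^q − q·ln(est/θ) − 1, yields the Bayes estimate (E[θ^(−q)])^(−1/q), which is available in closed form whenever the posterior is a gamma distribution. The Gamma(6, rate 2) posterior below is a hypothetical example, not the paper's Weibull posterior:

```python
import math

def ge_bayes_estimate(alpha, beta, q):
    """Bayes estimate under general entropy loss for a Gamma(alpha, rate beta)
    posterior: (E[theta**-q])**(-1/q), valid for alpha > q and q != 0,
    using E[theta**-q] = beta**q * Gamma(alpha - q) / Gamma(alpha)."""
    if alpha <= q:
        raise ValueError("requires alpha > q")
    e_neg_q = beta**q * math.gamma(alpha - q) / math.gamma(alpha)
    return e_neg_q ** (-1.0 / q)

# Hypothetical posterior Gamma(alpha=6, rate beta=2); posterior mean = 3.0
alpha, beta = 6.0, 2.0
est_q1 = ge_bayes_estimate(alpha, beta, 1.0)    # q > 0: over-estimation penalized
est_qm1 = ge_bayes_estimate(alpha, beta, -1.0)  # q = -1 recovers the posterior mean
```

For q = 1 the formula reduces to (alpha − 1)/beta = 2.5, sitting below the posterior mean of 3.0, which reflects the asymmetry of the loss.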


2020 ◽  
Vol 70 (4) ◽  
pp. 953-978
Author(s):  
Mustafa Ç. Korkmaz ◽  
G. G. Hamedani

This paper proposes a new extended Lindley distribution, based on a mixture distribution structure, which has more flexible density and hazard rate shapes than the Lindley and power Lindley distributions and can model real data phenomena with new distributional characteristics. Some of its distributional properties, such as the shapes, moments, quantile function, Bonferroni and Lorenz curves, mean deviations, and order statistics, are obtained. Characterizations based on two truncated moments and on conditional expectation, as well as in terms of the hazard function, are presented. Different estimation procedures are employed to estimate the unknown parameters, and their performances are compared via Monte Carlo simulations. The flexibility and importance of the proposed model are illustrated by two real data sets.


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 934
Author(s):  
Yuxuan Zhang ◽  
Kaiwei Liu ◽  
Wenhao Gui

For the purpose of improving the statistical efficiency of estimators in life-testing experiments, generalized Type-I hybrid censoring has lately been implemented by guaranteeing that experiments terminate only after a certain number of failures appear. Bathtub-shaped distributions are widely applied in engineering, and no previous work has combined this censoring model with a bathtub-shaped distribution; we therefore consider parameter inference under generalized Type-I hybrid censoring. First, estimates of the unknown scale parameter and the reliability function are obtained under the Bayesian method based on LINEX and squared error loss functions with a conjugate gamma prior. A comparison of estimates under the E-Bayesian method for different prior distributions and loss functions is analyzed. Additionally, Bayesian and E-Bayesian estimation with two unknown parameters is introduced. Furthermore, to verify the robustness of the above estimates, a Monte Carlo simulation study is carried out. Finally, the application of the discussed inference in practice is illustrated by analyzing a real data set.
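With a conjugate gamma posterior of the kind this abstract assumes, the LINEX Bayes estimate has a closed form: for loss L(Δ) = e^(aΔ) − aΔ − 1 with Δ = est − θ, the estimate is −(1/a)·ln E[e^(−aθ)] = (α/a)·ln(1 + a/β). The hyperparameters below are illustrative, not taken from the paper:

```python
import math

def linex_bayes_estimate(alpha, beta, a):
    """Bayes estimate under LINEX loss for a Gamma(alpha, rate beta) posterior.

    theta_LINEX = -(1/a) * ln E[exp(-a*theta)] = (alpha/a) * ln(1 + a/beta),
    valid for beta + a > 0 and a != 0."""
    return (alpha / a) * math.log1p(a / beta)

# Hypothetical posterior Gamma(4, rate 2); squared-error estimate = mean = 2.0
alpha, beta = 4.0, 2.0
sel = alpha / beta
est_pos = linex_bayes_estimate(alpha, beta, a=1.0)    # a > 0: over-estimation costlier
est_neg = linex_bayes_estimate(alpha, beta, a=-0.5)   # a < 0: under-estimation costlier
est_tiny = linex_bayes_estimate(alpha, beta, a=1e-8)  # a -> 0 recovers the SEL estimate
```

The sign of a shifts the estimate below or above the posterior mean, which is what makes LINEX useful when over- and under-estimation carry different costs.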


2021 ◽  
Vol 20 ◽  
pp. 288-299
Author(s):  
Refah Mohammed Alotaibi ◽  
Yogesh Mani Tripathi ◽  
Sanku Dey ◽  
Hoda Ragab Rezk

In this paper, inference on stress-strength reliability is considered for unit-Weibull distributions with a common parameter under the assumption that data are observed using progressive Type-II censoring. We obtain different estimators of system reliability using classical and Bayesian procedures. An asymptotic interval is constructed based on the Fisher information matrix. In addition, boot-p and boot-t intervals are obtained. We evaluate Bayes estimates using Lindley's technique and the Metropolis-Hastings (MH) algorithm. The Bayes credible interval is evaluated using the MH method. An unbiased estimator of this parametric function is also obtained for the known common parameter case. Numerical simulations are performed to compare the estimation methods. Finally, a data set is studied for illustration purposes.
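The boot-p (percentile) and boot-t (studentized) constructions can be sketched generically; the sample mean below is a stand-in for the paper's unit-Weibull reliability estimator, and the data are simulated rather than real:

```python
import numpy as np

def boot_p_and_t(x, n_boot=2000, alpha=0.05, seed=0):
    """Percentile (boot-p) and studentized (boot-t) bootstrap intervals
    for the mean of a sample -- a generic sketch, not the paper's estimator."""
    rng = np.random.default_rng(seed)
    n = len(x)
    theta = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    thetas = np.empty(n_boot)
    tstats = np.empty(n_boot)
    for i in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)   # nonparametric resample
        thetas[i] = xb.mean()
        seb = xb.std(ddof=1) / np.sqrt(n)
        tstats[i] = (thetas[i] - theta) / seb      # studentized statistic
    # boot-p: percentiles of the bootstrap estimates themselves
    p_lo, p_hi = np.quantile(thetas, [alpha / 2, 1 - alpha / 2])
    # boot-t: invert the studentized bootstrap distribution
    t_lo_q, t_hi_q = np.quantile(tstats, [alpha / 2, 1 - alpha / 2])
    return (p_lo, p_hi), (theta - t_hi_q * se, theta - t_lo_q * se)

rng = np.random.default_rng(7)
sample = rng.weibull(1.4, size=100)
(p_lo, p_hi), (t_lo, t_hi) = boot_p_and_t(sample)
```

Boot-t re-centres and re-scales each resample, which typically improves coverage for skewed estimators at the cost of estimating a standard error inside the loop.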


2018 ◽  
Vol 20 (6) ◽  
pp. 2055-2065 ◽  
Author(s):  
Johannes Brägelmann ◽  
Justo Lorenzo Bermejo

Technological advances and reduced costs of high-density methylation arrays have led to an increasing number of association studies on the possible relationship between human disease and epigenetic variability. DNA samples from peripheral blood or other tissue types are analyzed in epigenome-wide association studies (EWAS) to detect methylation differences related to a particular phenotype. Since information on the cell-type composition of the sample is generally not available and methylation profiles are cell-type specific, statistical methods have been developed for adjustment of cell-type heterogeneity in EWAS. In this study we systematically compared five popular adjustment methods: the factored spectrally transformed linear mixed model (FaST-LMM-EWASher), the sparse principal component analysis algorithm ReFACTor, surrogate variable analysis (SVA), independent SVA (ISVA) and an optimized version of SVA (SmartSVA). We used real data and applied a multilayered simulation framework to assess the type I error rate, the statistical power and the quality of estimated methylation differences according to major study characteristics. While all five adjustment methods improved false-positive rates compared with unadjusted analyses, FaST-LMM-EWASher resulted in the lowest type I error rate at the expense of low statistical power. SVA efficiently corrected for cell-type heterogeneity in EWAS up to 200 cases and 200 controls, but did not control type I error rates in larger studies. Results based on real data sets confirmed simulation findings with the strongest control of type I error rates by FaST-LMM-EWASher and SmartSVA. Overall, ReFACTor, ISVA and SmartSVA showed the best comparable statistical power, quality of estimated methylation differences and runtime.


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Tahani A. Abushal ◽  
A. A. Soliman ◽  
G. A. Abd-Elmougod

The problem of statistical inference under jointly censored samples has received considerable attention in the past few years. In this paper, we consider this problem when units under test fail from different causes, a setting known as the competing risks model. The model is formulated under the assumptions that there are only two independent causes of failure and that the units are collected from two lines of production, with lifetimes following the Burr XII distribution. Under Type-I joint competing risks samples, we obtain the maximum likelihood (ML) and Bayes estimators. Interval estimation is discussed through the asymptotic confidence interval, bootstrap confidence intervals, and the Bayes credible interval. The quality of the theoretical results is assessed through a real data analysis and a Monte Carlo simulation study. Finally, the numerical results are summarized in brief concluding remarks.


2021 ◽  
Vol 9 (4) ◽  
pp. 789-808
Author(s):  
Amal Helu ◽  
Hani Samawi

In this article, we consider statistical inference about the unknown parameters of the Lomax distribution based on the adaptive Type-II progressive hybrid censoring scheme. This scheme saves both the total test time and the cost induced by the failure of units, and increases the efficiency of the statistical analysis. The parameters are estimated using maximum likelihood (MLE) and Bayesian procedures. The Bayesian estimators are obtained under symmetric and asymmetric loss functions. Because there are no explicit forms for the Bayesian estimators, we propose Lindley's approximation method to compute them. A comparison between these estimators is provided using extensive simulation. A real-life data example is provided to illustrate the proposed estimators.
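Lindley's approximation replaces an intractable posterior expectation with a Taylor expansion around the MLE. It can be checked against an exact answer in a conjugate toy case; the exponential-likelihood/gamma-prior setup below is hypothetical (the paper treats the Lomax distribution), chosen because the exact posterior mean is available for comparison:

```python
# Lindley's one-parameter approximation to a posterior mean E[u(theta)|x]:
#   u(th) + (u''/2 + u' * rho') * s2 + u' * l''' * s2**2 / 2,
#   with s2 = -1 / l''(th), all evaluated at the MLE th = theta_hat,
#   where rho is the log-prior and l the log-likelihood.
# Conjugate check: exponential data with rate theta and a Gamma(a, rate b)
# prior, so the exact posterior mean is (a + n) / (b + T).

n, T = 50, 25.0        # sample size and total time on test (fixed for the demo)
a, b = 3.0, 2.0        # Gamma prior hyperparameters

theta_hat = n / T                    # MLE of the exponential rate
s2 = theta_hat**2 / n                # -1 / l''(theta_hat)
rho_prime = (a - 1) / theta_hat - b  # derivative of the log-prior
l_triple = 2 * n / theta_hat**3      # third derivative of the log-likelihood

# u(theta) = theta, so u' = 1 and u'' = 0
lindley = theta_hat + rho_prime * s2 + 0.5 * l_triple * s2**2
exact = (a + n) / (b + T)            # conjugate posterior mean
```

Here the approximation (1.96) lands within about 0.003 of the exact posterior mean (53/27), illustrating why the method is attractive when, as for the Lomax posterior, no closed form exists.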


2021 ◽  
Vol 10 (1) ◽  
pp. 4-22
Author(s):  
Gyan Prakash

This article focuses on combining two different approaches: the Step-Stress Partially Accelerated Life Test (SS-PALT) and the Type-I Progressive Hybrid censoring criterion. The fruitfulness of this combination is investigated through bound lengths for the unknown parameters of the Burr Type-XII distribution. Approximate confidence intervals, bootstrap confidence intervals, and one-sample Bayes prediction bound lengths are obtained under this scenario. Particular cases of Type-I Progressive Hybrid censoring (Type-I and Progressive Type-II censoring) are also evaluated under SS-PALT. The optimal stress change time is determined by minimizing the asymptotic variance of the ML estimate. A simulation study based on the Metropolis-Hastings algorithm is carried out, along with a real data example.


2016 ◽  
Vol 28 (8) ◽  
pp. 1694-1722 ◽  
Author(s):  
Yu Wang ◽  
Jihong Li

In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetric confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed from the beta posterior distribution inferred by all K data sets corresponding to the K confusion matrices of a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval is constructed from the average of the K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence, and superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the interval length, in all 27 cases of simulated and real data experiments. However, the confidence intervals based on the K-fold and corrected K-fold cross-validated t distributions lie at the two extremes. Thus, when the reliability of the inference for precision and recall is the focus, the proposed methods are preferable, especially the first credible interval.
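An interval in the spirit of the first proposal can be sketched by pooling fold-level confusion-matrix counts into a single beta posterior. The uniform Beta(1, 1) prior, the Monte Carlo quantile step, and the fold counts below are illustrative assumptions:

```python
import numpy as np

def precision_credible_interval(tp_list, fp_list, level=0.95, seed=0):
    """Monte Carlo equal-tailed credible interval for precision from K
    confusion matrices (one per cross-validation fold), pooling the counts
    into a single Beta posterior under a uniform Beta(1, 1) prior."""
    rng = np.random.default_rng(seed)
    tp, fp = sum(tp_list), sum(fp_list)
    # precision ~ Beta(tp + 1, fp + 1) a posteriori; sample and take quantiles
    draws = rng.beta(tp + 1, fp + 1, size=100_000)
    tail = (1 - level) / 2
    return np.quantile(draws, [tail, 1 - tail])

# Hypothetical TP/FP counts from a 5-fold cross-validation
tp_folds = [40, 38, 42, 39, 41]
fp_folds = [10, 12, 9, 11, 10]
lo, hi = precision_credible_interval(tp_folds, fp_folds)
```

The same construction applies to recall by replacing FP counts with FN counts in the second Beta parameter.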

