On Reliability in a Multicomponent Stress-Strength Model with Power Lindley Distribution

2018 ◽  
Vol 41 (2) ◽  
pp. 251-267 ◽  
Author(s):  
Abbas Pak ◽  
Arjun Kumar Gupta ◽  
Nayereh Bagheri Khoolenjani

In this paper we study the reliability of a multicomponent stress-strength model, assuming that the components follow the power Lindley distribution. The maximum likelihood estimate of the reliability parameter and its asymptotic confidence interval are obtained. Applying the parametric bootstrap technique, an interval estimate of the reliability is also presented. In addition, the Bayes estimate and the highest posterior density credible interval of the reliability parameter are derived using suitable priors on the parameters. Because the Bayes estimate has no closed form, we use the Markov Chain Monte Carlo method to obtain an approximate Bayes estimate of the reliability. To evaluate the performance of the different procedures, simulation studies are conducted and a real data example is provided.
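A minimal Monte Carlo sketch of the multicomponent setup described above: R_{s,k} is the probability that at least s of k strength components exceed a common stress, and the power Lindley variates are drawn through the distribution's Lindley mixture representation. The parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def rpower_lindley(n, alpha, beta, rng):
    """Sample the power Lindley distribution: X = Y**(1/alpha), where Y is
    Lindley(beta), a mixture of Exp(beta) and Gamma(2, rate beta)."""
    mix = rng.random(n) < beta / (beta + 1.0)
    y = np.where(mix,
                 rng.exponential(scale=1.0 / beta, size=n),
                 rng.gamma(shape=2.0, scale=1.0 / beta, size=n))
    return y ** (1.0 / alpha)

def reliability_sk(s, k, alpha1, beta1, alpha2, beta2, n=200_000, rng=rng):
    """Monte Carlo estimate of R_{s,k} = P(at least s of k strengths
    exceed the common stress)."""
    stress = rpower_lindley(n, alpha2, beta2, rng)
    strengths = rpower_lindley(n * k, alpha1, beta1, rng).reshape(n, k)
    exceed = (strengths > stress[:, None]).sum(axis=1)
    return np.mean(exceed >= s)

print(reliability_sk(s=2, k=4, alpha1=1.5, beta1=0.8, alpha2=1.5, beta2=1.2))
```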

Author(s):  
M. M. E. Abd El-Monsef ◽  
Ghareeb A. Marei ◽  
N. M. Kilany

This paper aims to estimate the stress-strength reliability parameter R = P(Y < X) when the strength X and the stress Y follow the weighted Lomax (WL) distribution. The behavior of the stress-strength parameters and the reliability is studied using maximum likelihood and Bayesian estimators; a Monte Carlo simulation study shows the satisfactory performance of the obtained estimators. Finally, two real data sets, representing the waiting times before service of the customers of two banks A and B, are fitted by the WL distribution and used to estimate the stress-strength parameters and the reliability function.
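For intuition about the quantity being estimated, the following sketch computes a nonparametric (Mann-Whitney type) plug-in estimate of R = P(Y < X) from two samples; the paper itself uses maximum likelihood and Bayesian estimators under the weighted Lomax model, and the data below are hypothetical stand-ins for the two banks' waiting times.

```python
import numpy as np

def reliability_nonparametric(x, y):
    """Estimate R = P(Y < X) by the proportion of pairs with y_j < x_i."""
    x = np.asarray(x)[:, None]
    y = np.asarray(y)[None, :]
    return np.mean(y < x)

rng = np.random.default_rng(1)
strength = rng.gamma(2.0, 3.0, size=60)   # hypothetical sample of X
stress = rng.gamma(2.0, 2.0, size=60)     # hypothetical sample of Y
print(reliability_nonparametric(strength, stress))
```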


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 937 ◽  
Author(s):  
Ying Xie ◽  
Wenhao Gui

Accurately evaluating product lifetime performance has always been a hot topic in the manufacturing industry. This paper focuses on evaluating the lifetime performance index when a lower specification limit is given. The progressive first-failure-censored data we discuss follow a common log-logistic distribution. Both Bayesian and non-Bayesian methods are studied. Bayes estimators of the parameters of the log-logistic distribution and of the lifetime performance index are obtained using both the Lindley approximation and Markov Chain Monte Carlo methods under symmetric and asymmetric loss functions. As for interval estimation, we apply the maximum likelihood estimator to construct asymptotic confidence intervals and the Metropolis-Hastings algorithm to establish highest posterior density credible intervals. Moreover, we analyze a real data set for demonstrative purposes. In addition, different criteria for deciding the optimal censoring scheme are studied.
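As a rough illustration of the quantity at stake, the sketch below evaluates the common form of the lifetime performance index, C_L = (mu - L) / sigma, with the moments of a log-logistic distribution (scipy's fisk); the shape, scale, and specification limit are assumed values, and the paper's estimators instead work from progressive first-failure-censored data.

```python
import numpy as np
from scipy import stats

L = 2.0                                      # assumed lower specification limit
alpha, beta_scale = 4.0, 5.0                 # hypothetical log-logistic shape/scale
dist = stats.fisk(c=alpha, scale=beta_scale) # scipy's log-logistic is "fisk"
mu, sigma = dist.mean(), dist.std()
C_L = (mu - L) / sigma
print(f"C_L = {C_L:.3f}")                    # larger C_L means better performance
```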


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ali Algarni ◽  
Mohammed Elgarhy ◽  
Abdullah M Almarashi ◽  
Aisha Fayomi ◽  
Ahmed R El-Saeed

The challenge of estimating the parameters of the inverse Weibull (IW) distribution under progressive Type-I censoring (PCTI) is addressed in this study using Bayesian and non-Bayesian procedures. To address the issue of censoring time selection, quantiles of the IW lifetime distribution are used as the censoring time points for PCTI. Focusing on these censoring schemes, maximum likelihood estimators (MLEs) and asymptotic confidence intervals (ACIs) for the unknown parameters are constructed. Under the squared error (SEr) loss function, Bayes estimates (BEs) and the corresponding highest posterior density credible intervals are also produced. The BEs are assessed using two methods: Lindley's approximation (LiA) technique and the Metropolis-Hastings (MH) algorithm via Markov Chain Monte Carlo (MCMC). The performance of the different suggested estimators for specified PCTI schemes is compared via a simulation study. Finally, two real data sets are analyzed as applications.
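The sketch below illustrates the Metropolis-Hastings step in its simplest form: a random-walk sampler for the inverse Weibull posterior under a complete sample, with the Bayes estimate under squared error loss taken as the posterior mean. Independent Gamma(1, 1) priors and the complete-data likelihood are assumptions of the sketch, not the paper's progressive Type-I censored setup.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.weibull(2.0, size=50) ** -1.0      # synthetic inverse Weibull data

def log_post(alpha, lam):
    """Log-posterior for IW density alpha*lam*x^-(alpha+1)*exp(-lam*x^-alpha)."""
    if alpha <= 0 or lam <= 0:
        return -np.inf
    loglik = np.sum(np.log(alpha) + np.log(lam)
                    - (alpha + 1) * np.log(x) - lam * x ** -alpha)
    logprior = -alpha - lam                # Gamma(1, 1) priors, up to a constant
    return loglik + logprior

theta = np.array([1.0, 1.0])
lp = log_post(*theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(scale=0.15, size=2)   # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:         # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

chain = np.array(chain)[5_000:]            # discard burn-in
print("Bayes estimates (posterior means):", chain.mean(axis=0))
```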


Econometrics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Šárka Hudecová ◽  
Marie Hušková ◽  
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived both under the null hypothesis and under alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and a discussion.
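A univariate toy version of the statistic may help fix ideas: the sketch below computes an L2-type distance between the empirical probability generating function of count data and the PGF implied by a fitted Poisson null; the paper's statistic is the bivariate, autoregressive analogue, and calibration is via the parametric bootstrap.

```python
import numpy as np

rng = np.random.default_rng(3)
counts = rng.poisson(2.5, size=200)        # illustrative count data
lam_hat = counts.mean()                    # ML estimate under the Poisson null

u = np.linspace(0.0, 1.0, 201)             # grid on [0, 1] for the integral
g_emp = np.mean(u[None, :] ** counts[:, None], axis=0)   # empirical PGF
g_null = np.exp(lam_hat * (u - 1.0))                     # Poisson PGF
# Riemann approximation of n * integral of (g_emp - g_null)^2 over [0, 1].
stat = len(counts) * np.sum((g_emp - g_null) ** 2) * (u[1] - u[0])
print(f"test statistic = {stat:.4f}")      # calibrate via parametric bootstrap
```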


Genetics ◽  
2000 ◽  
Vol 154 (1) ◽  
pp. 381-395
Author(s):  
Pavel Morozov ◽  
Tatyana Sitnikova ◽  
Gary Churchill ◽  
Francisco José Ayala ◽  
Andrey Rzhetsky

Abstract We propose models for describing replacement rate variation in genes and proteins, in which the profile of relative replacement rates along the length of a given sequence is defined as a function of the site number. We consider two types of functions, one derived from the cosine Fourier series and the other from discrete wavelet transforms. The number of parameters used to characterize the substitution rates along the sequences can be flexibly changed, and in their most parameter-rich versions both the Fourier and wavelet models become equivalent to the unrestricted-rates model, in which each site of a sequence alignment evolves at a unique rate. When applied to a few real data sets, the new models appeared to fit the data better than the discrete gamma model as judged by the Akaike information criterion and the likelihood-ratio test, although the parametric bootstrap version of the Cox test performed for one of the data sets indicated that the difference in likelihoods between the two models is not significant. The new models are applicable to testing biological hypotheses such as the statistical identity of rate variation profiles among homologous protein families. They are also useful for determining regions in genes and proteins that evolve significantly faster or slower than the sequence average. We illustrate the application of the new method by analyzing human immunoglobulin and Drosophilid alcohol dehydrogenase sequences.
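As a sketch of the cosine Fourier construction, the snippet below builds a relative-rate profile along a sequence from a truncated cosine series in the site number; exponentiating to keep rates positive and normalizing to mean one are choices of this illustration, and the coefficients are not fitted values.

```python
import numpy as np

def fourier_rates(n_sites, coeffs):
    """Relative rates r(i) from a truncated cosine Fourier series in the
    site number, exponentiated for positivity and normalized to mean 1."""
    i = np.arange(n_sites)
    k = np.arange(1, len(coeffs) + 1)
    log_r = coeffs @ np.cos(np.pi * np.outer(k, (i + 0.5) / n_sites))
    r = np.exp(log_r)
    return r / r.mean()

rates = fourier_rates(100, coeffs=np.array([0.8, -0.3, 0.1]))
print(rates[:5])   # relative rates for the first five sites
```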


Author(s):  
Hiba Zeyada Muhammed ◽  
Essam Abd Elsalam Muhammed

In this paper, Bayesian and non-Bayesian estimation of the shape parameter of the inverted Topp-Leone distribution are studied for complete and randomly censored samples. The maximum likelihood estimator (MLE) and the Bayes estimator of the unknown parameter are proposed. The Bayes estimates (BEs) are computed under the squared error loss (SEL) function using Markov Chain Monte Carlo (MCMC) techniques, with the Metropolis-Hastings algorithm used to obtain them. The asymptotic, bootstrap-p, bootstrap-t, and highest posterior density intervals are computed. A Monte Carlo simulation is performed to compare the performance of the proposed methods, and one real data set is analyzed for illustrative purposes.
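The sketch below shows the bootstrap-p (percentile) idea in its simplest form, resampling a complete sample and taking percentiles of the re-estimated parameter; the model and estimator here are stand-ins, not the inverted Topp-Leone setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.exponential(scale=2.0, size=40)    # illustrative complete sample

def mle(sample):
    return sample.mean()                      # exponential-scale MLE stand-in

# Re-estimate on 2000 resamples and take the 2.5% and 97.5% percentiles.
boot = np.array([mle(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2_000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap-p interval: ({lo:.3f}, {hi:.3f})")
```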


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Tahani A. Abushal ◽  
A. A. Soliman ◽  
G. A. Abd-Elmougod

The problem of statistical inference under jointly censored samples has received considerable attention in the past few years. In this paper, we adopt this framework when the units under test fail from different causes of failure, which is known as the competing risks model. The model is formulated under the assumption that there are only two independent causes of failure, that the units come from two lines of production, and that lifetimes follow the Burr XII distribution. Under Type-I joint competing risks samples, we obtain the maximum likelihood (ML) and Bayes estimators. Interval estimation is discussed through asymptotic confidence intervals, bootstrap confidence intervals, and Bayes credible intervals. The quality of the theoretical results is assessed numerically through a real data analysis and a Monte Carlo simulation study. Finally, the numerical results are discussed and summarized in brief concluding remarks.
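A minimal competing-risks sketch with two independent causes of failure: each unit fails at the minimum of two latent lifetimes and the cause is recorded. Exponential lifetimes are assumed here purely so that the cause-specific hazard MLE has the familiar closed form d_j / (total time on test); the paper works with Burr XII lifetimes and Type-I joint censoring instead.

```python
import numpy as np

rng = np.random.default_rng(5)
lam1, lam2 = 0.5, 0.3                        # illustrative cause-specific rates
t1 = rng.exponential(1 / lam1, size=100)     # latent lifetime, cause 1
t2 = rng.exponential(1 / lam2, size=100)     # latent lifetime, cause 2
time = np.minimum(t1, t2)                    # observed failure time
cause = np.where(t1 < t2, 1, 2)              # observed cause of failure

total_time = time.sum()
for j, lam in ((1, lam1), (2, lam2)):
    lam_hat = np.sum(cause == j) / total_time   # d_j / total time on test
    print(f"cause {j}: MLE {lam_hat:.3f} (true {lam})")
```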


2021 ◽  
Vol 6 (10) ◽  
pp. 10789-10801
Author(s):  
Tahani A. Abushal

In this paper, the problem of estimating the parameter of the Akash distribution is considered when product lifetimes are subject to Type-II censoring. Maximum likelihood estimators (MLEs) are studied for the unknown parameter and the reliability characteristics. An approximate confidence interval for the parameter is derived from the normal approximation to the asymptotic distribution of the MLE. Bayesian inference procedures are developed under the squared error loss function through Lindley's technique and the Metropolis-Hastings algorithm, and the highest posterior density interval is constructed using the Metropolis-Hastings algorithm. Finally, the performance of the different methods is compared through a Monte Carlo simulation study, and a real data set is analyzed using the proposed methods.
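The sketch below illustrates the Type-II censored likelihood used for such problems: with the r smallest of n lifetimes observed, the log-likelihood adds (n - r) log S(x_(r)) for the surviving units, here with the Akash density f(x) = theta^3 / (theta^2 + 2) (1 + x^2) e^(-theta x); the data are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def akash_logpdf(x, theta):
    return 3 * np.log(theta) - np.log(theta**2 + 2) + np.log1p(x**2) - theta * x

def akash_logsf(x, theta):
    """log S(x) for the Akash distribution."""
    return np.log1p(theta * x * (theta * x + 2) / (theta**2 + 2)) - theta * x

def neg_loglik(theta, obs, n):
    """Negative Type-II censored log-likelihood: r observed, n - r surviving."""
    r = obs.size
    return -(akash_logpdf(obs, theta).sum() + (n - r) * akash_logsf(obs[-1], theta))

n, r = 50, 35
obs = np.sort(np.random.default_rng(6).gamma(2.0, 0.8, size=n))[:r]  # stand-in data
res = minimize_scalar(neg_loglik, bounds=(1e-3, 20.0), args=(obs, n), method="bounded")
print(f"theta_hat = {res.x:.3f}")
```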


2016 ◽  
Vol 28 (8) ◽  
pp. 1694-1722 ◽  
Author(s):  
Yu Wang ◽  
Jihong Li

In typical machine learning applications such as information retrieval, precision and recall are two commonly used measures for assessing an algorithm's performance. Symmetrical confidence intervals based on K-fold cross-validated t distributions are widely used for the inference of precision and recall measures. As we confirmed through simulated experiments, however, these confidence intervals often exhibit lower degrees of confidence, which may easily lead to liberal inference results. Thus, it is crucial to construct faithful confidence (credible) intervals for precision and recall with a high degree of confidence and a short interval length. In this study, we propose two posterior credible intervals for precision and recall based on K-fold cross-validated beta distributions. The first credible interval for precision (or recall) is constructed from the beta posterior distribution inferred from all K data sets corresponding to the K confusion matrices of a K-fold cross-validation. Second, considering that each data set corresponding to a confusion matrix from a K-fold cross-validation can be used to infer a beta posterior distribution of precision (or recall), the second proposed credible interval is constructed from the average of the K beta posterior distributions. Experimental results on simulated and real data sets demonstrate that the first credible interval proposed in this study almost always resulted in degrees of confidence greater than 95%. With an acceptable degree of confidence, both of our proposed credible intervals have shorter interval lengths than those based on a corrected K-fold cross-validated t distribution. Meanwhile, the average ranks of these two credible intervals are superior to that of the confidence interval based on a K-fold cross-validated t distribution for the degree of confidence, and superior to that of the confidence interval based on a corrected K-fold cross-validated t distribution for the interval length, in all 27 cases of simulated and real data experiments. However, the confidence intervals based on the K-fold and corrected K-fold cross-validated t distributions lie at the two extremes. Thus, when the reliability of the inference for precision and recall is the focus, the proposed methods are preferable, especially the first credible interval.
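A minimal sketch of the two constructions, assuming a Beta(1, 1) prior and illustrative per-fold confusion counts: the first interval pools all K folds into one beta posterior for precision, while the second averages the K per-fold posteriors (a mixture) and inverts its CDF numerically.

```python
import numpy as np
from scipy import stats

tp = np.array([40, 38, 42, 41, 39])        # true positives per fold (K = 5)
fp = np.array([10, 12, 9, 11, 10])         # false positives per fold

# Interval 1: pool all K folds into a single beta posterior for precision.
post = stats.beta(1 + tp.sum(), 1 + fp.sum())
print("pooled:", post.ppf([0.025, 0.975]))

# Interval 2: average the K per-fold beta posterior densities (a mixture),
# then invert the mixture CDF numerically for the 2.5% and 97.5% quantiles.
grid = np.linspace(1e-6, 1 - 1e-6, 100_000)
mix_cdf = np.mean([stats.beta(1 + t, 1 + f).cdf(grid)
                   for t, f in zip(tp, fp)], axis=0)
lo = grid[np.searchsorted(mix_cdf, 0.025)]
hi = grid[np.searchsorted(mix_cdf, 0.975)]
print("averaged:", (lo, hi))
```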


2019 ◽  
Vol 69 (2) ◽  
pp. 209-220 ◽  
Author(s):  
Mathieu Fourment ◽  
Andrew F Magee ◽  
Chris Whidden ◽  
Arman Bilge ◽  
Frederick A Matsen ◽  
...  

Abstract The marginal likelihood of a model is a key quantity for assessing the evidence provided by the data in support of a model. The marginal likelihood is the normalizing constant for the posterior density, obtained by integrating the product of the likelihood and the prior with respect to model parameters. Thus, the computational burden of computing the marginal likelihood scales with the dimension of the parameter space. In phylogenetics, where we work with tree topologies that are high-dimensional models, standard approaches to computing marginal likelihoods are very slow. Here, we study methods to quickly compute the marginal likelihood of a single fixed tree topology. We benchmark the speed and accuracy of 19 different methods to compute the marginal likelihood of phylogenetic topologies on a suite of real data sets under the JC69 model. These methods include several new ones that we develop explicitly to solve this problem, as well as existing algorithms that we apply to phylogenetic models for the first time. Altogether, our results show that the accuracy of these methods varies widely, and that accuracy does not necessarily correlate with computational burden. Our newly developed methods are orders of magnitude faster than standard approaches, and in some cases, their accuracy rivals the best established estimators.
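For a sense of how such estimators behave, the sketch below contrasts two quick marginal likelihood estimators on a toy conjugate normal model: naive Monte Carlo averaging of the likelihood over prior draws, and the notoriously unstable harmonic mean over posterior draws; the phylogenetic setting replaces this toy likelihood with one over a fixed tree topology.

```python
import numpy as np
from scipy import stats
from scipy.special import logsumexp

rng = np.random.default_rng(7)
x = rng.normal(1.0, 1.0, size=30)              # data with known unit variance
prior_mu, prior_sd = 0.0, 1.0                  # N(0, 1) prior on the mean

def loglik(mu):
    mu = np.atleast_1d(mu)
    return stats.norm(mu, 1.0).logpdf(x[:, None]).sum(axis=0)

# Naive Monte Carlo: average the likelihood over draws from the prior.
mu_prior = rng.normal(prior_mu, prior_sd, size=50_000)
log_ml_mc = logsumexp(loglik(mu_prior)) - np.log(mu_prior.size)

# Harmonic mean: average the inverse likelihood over posterior draws
# (the conjugate posterior is available in closed form here).
n = x.size
post_var = 1.0 / (n + 1.0 / prior_sd**2)
post_mean = post_var * x.sum()
mu_post = rng.normal(post_mean, np.sqrt(post_var), size=50_000)
log_ml_hm = -(logsumexp(-loglik(mu_post)) - np.log(mu_post.size))

print(f"naive MC: {log_ml_mc:.3f}, harmonic mean: {log_ml_hm:.3f}")
```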

