posterior density
Recently Published Documents


TOTAL DOCUMENTS

168
(FIVE YEARS 71)

H-INDEX

14
(FIVE YEARS 3)

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ali Algarni ◽  
Mohammed Elgarhy ◽  
Abdullah M Almarashi ◽  
Aisha Fayomi ◽  
Ahmed R El-Saeed

The challenge of estimating the parameters of the inverse Weibull (IW) distribution under progressive Type-I censoring (PCTI) is addressed in this study using Bayesian and non-Bayesian procedures. To address the issue of censoring-time selection, quantiles of the IW lifetime distribution are implemented as censoring time points for PCTI. Focusing on the censoring schemes, maximum likelihood estimators (MLEs) and asymptotic confidence intervals (ACIs) for the unknown parameters are constructed. Under the squared error (SEr) loss function, Bayes estimates (BEs) and the accompanying maximum posterior density credible intervals are also produced. The BEs are assessed using two methods: Lindley's approximation (LiA) technique and the Metropolis-Hastings (MH) algorithm within Markov Chain Monte Carlo (MCMC). The theoretical results for MLEs and BEs under specified PCTI schemes are illustrated via a simulation study comparing the performance of the different suggested estimators. Finally, two real data sets are analyzed for illustration.
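Under the squared error loss function, the Bayes estimate is simply the posterior mean, so the MH output can be averaged directly after burn-in. A minimal sketch, assuming complete (uncensored) inverse Weibull data with the parameterization F(x) = exp(-a x^(-b)) and Exp(1) priors; these choices are illustrative assumptions, not the paper's actual censoring setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical complete sample from an inverse Weibull distribution with
# F(x) = exp(-alpha * x**(-beta)), drawn by inverse-CDF sampling.
alpha_true, beta_true = 1.5, 2.0
u = rng.uniform(size=300)
x = (-np.log(u) / alpha_true) ** (-1.0 / beta_true)

def log_post(alpha, beta, x):
    """Log-posterior with assumed independent Exp(1) priors on both parameters."""
    if alpha <= 0 or beta <= 0:
        return -np.inf
    loglik = (len(x) * (np.log(alpha) + np.log(beta))
              - (beta + 1.0) * np.log(x).sum()
              - alpha * np.sum(x ** (-beta)))
    logprior = -alpha - beta  # Exp(1) priors, up to an additive constant
    return loglik + logprior

# Random-walk Metropolis-Hastings
a, b = 1.0, 1.0
lp = log_post(a, b, x)
draws = []
for _ in range(6000):
    a_new, b_new = a + 0.2 * rng.normal(), b + 0.2 * rng.normal()
    lp_new = log_post(a_new, b_new, x)
    if np.log(rng.uniform()) < lp_new - lp:  # accept/reject step
        a, b, lp = a_new, b_new, lp_new
    draws.append((a, b))

post = np.array(draws[1000:])            # discard burn-in
bayes_alpha, bayes_beta = post.mean(axis=0)  # Bayes estimates under SEr loss
```

The same chain also yields credible intervals by taking appropriate quantiles or the shortest interval over the retained draws.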


Author(s):  
Hiba Zeyada Muhammed ◽  
Essam Abd Elsalam Muhammed

In this paper, Bayesian and non-Bayesian estimation of the shape parameter of the inverted Topp-Leone distribution are studied for complete and randomly censored samples. The maximum likelihood estimator (MLE) and the Bayes estimator of the unknown parameter are proposed. The Bayes estimates (BEs) are computed under the squared error loss (SEL) function using Markov Chain Monte Carlo (MCMC) techniques, with the Metropolis-Hastings algorithm used to generate the posterior samples. The asymptotic, bootstrap (p, t), and highest posterior density intervals are computed. A Monte Carlo simulation is performed to compare the performance of the proposed methods, and one real data set is analyzed for illustrative purposes.
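The highest posterior density interval mentioned above is the shortest interval carrying the required posterior mass, and for a unimodal posterior it can be read off MCMC output directly. A minimal sketch; the Gamma draws below stand in for posterior samples of a shape parameter and are purely illustrative, not taken from the paper's model:

```python
import numpy as np

def hpd_interval(samples, cred=0.95):
    """Shortest interval containing `cred` posterior mass (assumes unimodality)."""
    s = np.sort(np.asarray(samples))
    n = len(s)
    k = int(np.ceil(cred * n))              # number of points the interval must cover
    widths = s[k - 1:] - s[:n - k + 1]      # width of every candidate interval
    i = np.argmin(widths)                   # shortest one wins
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
draws = rng.gamma(shape=20.0, scale=0.1, size=20000)  # stand-in posterior draws
lo, hi = hpd_interval(draws)
```

For a skewed posterior the HPD interval is shorter than the equal-tailed interval built from the 2.5% and 97.5% quantiles, which is why the two are usually reported side by side.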


2021 ◽  
Vol 3 (1) ◽  
pp. 10
Author(s):  
Riko Kelter

The Full Bayesian Significance Test (FBST) has been proposed as a convenient method to replace frequentist p-values for testing a precise hypothesis. Although the FBST enjoys various appealing properties, the purpose of this paper is to investigate two aspects of the FBST which are sometimes perceived as measure-theoretic inconsistencies of the procedure and have not been discussed rigorously in the literature. First, the FBST uses the posterior density as a reference for judging the Bayesian statistical evidence against a precise hypothesis. However, under absolutely continuous prior distributions, the posterior density is defined only up to Lebesgue null sets, which renders the reference criterion arbitrary. Second, the FBST statistical evidence seems to have no valid prior probability. It is shown that the former aspect can be circumvented by fixing a version of the posterior density before using the FBST, and that the latter aspect follows from the procedure's measure-theoretic premises rather than constituting a flaw. An illustrative example demonstrates the two aspects and their solution. Together, the results in this paper show that neither of the two aspects sometimes perceived as measure-theoretic inconsistencies of the FBST is tenable as such. The FBST thus provides a measure-theoretically coherent Bayesian alternative for testing a precise hypothesis.
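For a concrete sense of the reference-density criterion, the FBST evidence can be computed once a version of the posterior density is fixed: the tangential set collects the parameter values whose posterior density exceeds the density at the null value, and its posterior mass is the evidence against the hypothesis. A sketch under an assumed normal posterior (an illustrative choice, not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed N(0.3, 0.1^2) posterior for theta, represented by Monte Carlo draws
# (any posterior sampler would do); precise hypothesis H0: theta = 0.
post_mean, post_sd = 0.3, 0.1
draws = rng.normal(post_mean, post_sd, size=100000)

def norm_pdf(t, m, s):
    return np.exp(-0.5 * ((t - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

theta0 = 0.0
dens_at_h0 = norm_pdf(theta0, post_mean, post_sd)

# Tangential set: points where the (fixed version of the) posterior density
# exceeds its value at the null; its mass is the evidence against H0.
ev_against = np.mean(norm_pdf(draws, post_mean, post_sd) > dens_at_h0)
e_value = 1.0 - ev_against  # FBST evidence in favour of H0
```

Here the posterior mean sits three posterior standard deviations from the null, so the e-value is small and H0 is discredited, matching the intuition the test is built on.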


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1558
Author(s):  
Ziyu Xiong ◽  
Wenhao Gui

The point and interval estimations for the unknown parameters of an exponentiated half-logistic distribution based on adaptive Type-II progressive censoring are obtained in this article. First, the maximum likelihood estimators are derived. Afterward, the observed and expected Fisher information matrices are obtained to construct asymptotic confidence intervals. Meanwhile, the percentile bootstrap method and the bootstrap-t method are put forward for the construction of confidence intervals. With respect to Bayesian estimation, the Lindley method is used under three different loss functions. The importance sampling method is also applied to calculate Bayes estimates and to construct the corresponding highest posterior density (HPD) credible intervals. Finally, simulation studies based on Markov Chain Monte Carlo (MCMC) samples are conducted to contrast the performance of the estimators, and a real data set is analyzed for illustrative purposes.
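The percentile bootstrap mentioned above resamples the data with replacement, recomputes the estimator on each resample, and takes quantiles of the resulting distribution. A few-line sketch; the exponential lifetimes and the mean as target parameter are illustrative stand-ins, not the paper's exponentiated half-logistic setup:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=2.0, size=100)  # hypothetical lifetime sample

# Percentile bootstrap: resample, re-estimate, then take empirical quantiles
boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
lo, hi = np.quantile(boot, [0.025, 0.975])   # 95% percentile bootstrap CI
```

The bootstrap-t variant studentizes each resampled estimate by its own standard error before taking quantiles, which typically improves coverage for skewed sampling distributions at the cost of extra computation.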


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2130
Author(s):  
Wisunee Puggard ◽  
Sa-Aat Niwitpong ◽  
Suparat Niwitpong

The Birnbaum–Saunders (BS) distribution, which is asymmetric with non-negative support, can be transformed to a normal distribution, which is symmetric. Therefore, the BS distribution is useful for describing data comprising only positive values. The coefficient of variation (CV), an important descriptive statistic for explaining the variation within a dataset, has not previously been used for statistical inference on a BS distribution. The aim of this study is to present four methods for constructing confidence intervals for the CV of a BS distribution and for the difference between the CVs of two BS distributions. The proposed methods are based on the generalized confidence interval (GCI), a bootstrapped confidence interval (BCI), a Bayesian credible interval (BayCI), and the highest posterior density (HPD) interval. A Monte Carlo simulation study was conducted to evaluate their performance in terms of coverage probability and average length. The results indicate that the HPD interval was the best-performing method overall. PM2.5 concentration data for Chiang Mai, Thailand, collected in March and April 2019, were used to illustrate the efficacies of the proposed methods; the results were in good agreement with the simulation study findings.
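Coverage probability and average length, the two performance criteria used above, are estimated by repeatedly simulating samples with a known true CV and recording how often each interval captures it. A sketch using a simple delta-method interval for normal data; the interval and the data model are illustrative assumptions, not one of the paper's four methods or the BS distribution:

```python
import numpy as np

rng = np.random.default_rng(3)

def cv_ci_delta(x, z=1.96):
    """Rough large-sample CI for the coefficient of variation
    (delta-method approximation for normal data; an assumed method)."""
    n = len(x)
    cv = x.std(ddof=1) / x.mean()
    se = cv * np.sqrt(1.0 / (2.0 * (n - 1)) + cv ** 2 / n)
    return cv - z * se, cv + z * se

# Monte Carlo study in the style of the paper: simulate, build CI, tally hits
true_cv = 0.5                      # N(2, 1) has CV = sd/mean = 0.5
hits, lengths = 0, []
for _ in range(2000):
    x = rng.normal(loc=2.0, scale=1.0, size=50)
    lo, hi = cv_ci_delta(x)
    hits += lo <= true_cv <= hi
    lengths.append(hi - lo)
coverage = hits / 2000
avg_length = float(np.mean(lengths))
```

Methods are then ranked by how close the estimated coverage is to the nominal 95% level, with shorter average length breaking ties.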


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2703
Author(s):  
Ke Wu ◽  
Liang Wang ◽  
Li Yan ◽  
Yuhlong Lio

In this paper, statistical inference and prediction for left-truncated and right-censored dependent competing risks data are studied. When the latent lifetimes follow the Marshall–Olkin bivariate Rayleigh distribution, the maximum likelihood estimates of the unknown parameters are established, and corresponding approximate confidence intervals are constructed using the Fisher information matrix and asymptotic theory. Furthermore, Bayes estimates and associated highest posterior density (HPD) credible intervals of the unknown parameters are provided based on general flexible priors. In addition, when there is an order restriction between the unknown parameters, point and interval estimates under both classical and Bayesian frameworks are discussed. The prediction of censored samples is also addressed based on both likelihood and Bayesian methods. Finally, extensive simulation studies are conducted to investigate the performance of the proposed methods, and two real-life examples are presented for illustration.


2021 ◽  
Vol 118 (44) ◽  
pp. e2113943118
Author(s):  
Tomohiko Sasaki ◽  
Sileshi Semaw ◽  
Michael J. Rogers ◽  
Scott W. Simpson ◽  
Yonas Beyene ◽  
...  

Accurate characterization of sexual dimorphism is crucial in evolutionary biology because of its significance in understanding present and past adaptations involving the reproductive and resource-use strategies of species. However, inferring dimorphism in fossil assemblages is difficult, particularly when dimorphism is relatively low. Commonly used methods of estimating dimorphism levels in fossils include the mean method, the binomial dimorphism index, and the coefficient of variation method. These methods have been reported to overestimate low levels of dimorphism, which is problematic when investigating issues such as canine size dimorphism in primates and its relation to reproductive strategies. Here, we introduce the posterior density peak (pdPeak) method, which uses Bayesian inference to provide posterior probability densities of dimorphism levels and within-sex variance. The highest posterior density point is termed the pdPeak. We investigated the performance of the pdPeak method and compared it with the above-mentioned conventional methods via 1) computer-generated samples simulating a range of conditions and 2) application to canine crown-diameter datasets of extant known-sex anthropoids. Results showed that the pdPeak method is capable of unbiased estimates over a broader range of dimorphism levels than the other methods and uniquely provides reliable interval estimates. Although attention must be paid to its tendency to underestimate when some of the distributional assumptions are violated, we demonstrate that the pdPeak method enables more accurate dimorphism estimates at lower dimorphism levels than previously possible, which is important for illuminating human evolution.


Sankhya A ◽  
2021 ◽  
Author(s):  
Gunnar Taraldsen

Inference for correlation is central in statistics. From a Bayesian viewpoint, the final and most complete outcome of inference for the correlation is the posterior distribution. An explicit formula for the posterior density of the correlation of the binormal distribution is derived. This posterior is an optimal confidence distribution and corresponds to a standard objective prior. It coincides with the fiducial distribution introduced by R.A. Fisher in 1930 in his first paper on fiducial inference. C.R. Rao derived an explicit, elegant formula for this fiducial density, but the new formula using hypergeometric functions is better suited for numerical calculations. Several examples on real data are presented for illustration. A brief review of the connections between confidence distributions and Bayesian and fiducial inference is given in an appendix.


2021 ◽  
Author(s):  
◽  
Andrés Ricardo Valdez

Like many other engineering applications, oil recovery and enhanced oil recovery are sensitive to the correct administration of economic resources. Pilot tests and core-flood experiments are crucial elements in designing an enhanced oil recovery (EOR) project. In this context, numerical simulators are accessible alternatives for evaluating different engineering configurations at many diverse scales (pore, laboratory, and field). Despite the advantages that numerical simulators possess over laboratory experiments, they are not fully protected against uncertainties. In this thesis, we present advances in analyzing uncertainties in two-phase reservoir simulations, focusing on foam-based EOR. The methods employed in this thesis analyze how experimental uncertainties affect a reservoir simulator's responses. Our framework for model calibration and uncertainty quantification uses the Markov Chain Monte Carlo method. The parametric uncertainty is tested against identifiability studies, revealing situations where posterior density distributions with high variability are related to high uncertainties and practical non-identifiability issues. The model's reliability was evaluated by adopting surrogate models based on polynomial chaos expansion when the computational cost was an issue for the analysis. Once we quantified the model's output variability, we performed a global sensitivity analysis to map the model's uncertainty to the input parameter distributions. Main and total Sobol indices were used to investigate the model's uncertainty and to highlight how key parameters and their interactions influence the simulation output. As a consequence of the results presented in this thesis, we show a technique for parameter and uncertainty estimation that can be explored to reduce the uncertainty in foam-assisted oil recovery models, which in turn can provide reliable computational simulations. Such conclusions are of utmost interest and relevance for the design of adequate techniques for enhanced oil recovery.
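The main and total Sobol indices referred to above can be estimated with the standard pick-freeze Monte Carlo scheme: two independent sample matrices are drawn, and hybrid matrices that swap one input column at a time isolate each parameter's contribution to the output variance. A sketch on the Ishigami benchmark function, a common sensitivity-analysis test case standing in for the thesis's foam model:

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    """Ishigami function: a standard benchmark with known Sobol indices."""
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # first independent sample matrix
B = rng.uniform(-np.pi, np.pi, (n, d))   # second independent sample matrix
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

main, total = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all inputs except column i
    fABi = model(ABi)
    main.append(np.mean(fB * (fABi - fA)) / var)          # first-order index
    total.append(0.5 * np.mean((fA - fABi) ** 2) / var)   # total-effect index
```

For the Ishigami function the first-order indices are roughly 0.31, 0.44, and 0, while the third input still has a sizable total index through its interaction with the first, which is exactly the kind of interaction effect the thesis uses total indices to expose.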


2021 ◽  
Vol 68 (4) ◽  
pp. 943-980 ◽  
Author(s):  
Nima Noii ◽  
Amirreza Khodadadian ◽  
Jacinto Ulloa ◽  
Fadi Aldakheel ◽  
Thomas Wick ◽  
...  

The prediction of crack initiation and propagation in ductile failure processes is a challenging task for the design and fabrication of metallic materials and structures on a large scale. Numerical aspects of ductile failure dictate a sub-optimal calibration of plasticity- and fracture-related parameters for a large number of material properties. These parameters enter the system of partial differential equations as a forward model. Thus, an accurate estimation of the material parameters enables the precise determination of the material response in different stages, particularly in the post-yielding regime, where crack initiation and propagation take place. In this work, we develop a Bayesian inversion framework for ductile fracture to provide accurate knowledge regarding the effective mechanical parameters. To this end, synthetic and experimental observations are used to estimate the posterior density of the unknowns. To model the ductile failure behavior of solid materials, we rely on the phase-field approach to fracture, for which we present a unified formulation that allows recovering different models on a variational basis. In the variational framework, incremental minimization principles for a class of gradient-type dissipative materials are used to derive the governing equations. The overall formulation is revisited and extended to the case of anisotropic ductile fracture. Three different models are subsequently recovered by certain choices of parameters and constitutive functions, which are later assessed through Bayesian inversion techniques. A step-wise Bayesian inversion method is proposed to determine the posterior density of the material unknowns for a ductile phase-field fracture process.
To estimate the posterior density function of the ductile material parameters, three common Markov chain Monte Carlo (MCMC) techniques are employed: (i) the Metropolis–Hastings algorithm, (ii) delayed-rejection adaptive Metropolis, and (iii) an ensemble Kalman filter combined with MCMC. To examine the computational efficiency of the MCMC methods, we employ the $\hat{R}$-convergence diagnostic. The resulting framework is algorithmically described in detail and substantiated with numerical examples.
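The $\hat{R}$ diagnostic referred to above compares within-chain and between-chain variability; values close to one indicate that parallel chains have mixed and are sampling the same distribution. A from-scratch sketch on toy chains that are well mixed by construction:

```python
import numpy as np

def r_hat(chains):
    """Gelman-Rubin R-hat for an (m chains) x (n draws) array."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # mean within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    var_plus = (n - 1) / n * W + B / n           # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(11)
chains = rng.normal(0.0, 1.0, size=(4, 1000))   # 4 independent, well-mixed chains
rhat = r_hat(chains)
```

Chains stuck in different modes inflate the between-chain term and push $\hat{R}$ well above one; a common rule of thumb is to require $\hat{R}$ below about 1.01 before trusting the posterior summaries.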

