Posterior Distribution: Recently Published Documents


2021 · Vol 10 (3) · pp. 413-422
Author(s): Nur Azizah, Sugito Sugito, Hasbi Yasin

Hospital service facilities cannot be separated from queuing events. Queues are an unavoidable part of life, but they can be minimized with a good system. The purpose of this study was to examine the queuing system at Dr. Kariadi Hospital. The Bayesian method is used to combine previous research with this research in order to obtain new information. The sample distribution and the prior distribution obtained from previous studies are combined with the sample likelihood function to obtain a posterior distribution. After calculating the posterior distribution, it was found that the queuing model in the outpatient installation at Dr. Kariadi Hospital Semarang is (G/G/c):(GD/∞/∞), where each polyclinic has met steady-state conditions and the utilization level is greater than the idle rate, so the queuing system at Dr. Kariadi Hospital is categorized as good, except in the internal medicine polyclinic.
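The steady-state condition mentioned above can be sketched with the standard traffic-intensity check for a multi-server queue: the system is stable only when utilization ρ = λ/(cμ) < 1, and "busier than idle" means ρ > 1 − ρ. The rates and server count below are illustrative, not values from the study.

```python
def utilization(arrival_rate, service_rate, servers):
    """Traffic intensity rho = lambda / (c * mu) for a c-server queue."""
    return arrival_rate / (servers * service_rate)

def is_steady_state(arrival_rate, service_rate, servers):
    """The queue reaches steady state only if rho < 1."""
    return utilization(arrival_rate, service_rate, servers) < 1.0

# Hypothetical polyclinic: 18 patients/hour arriving, 4 doctors each
# serving 5 patients/hour.
rho = utilization(arrival_rate=18.0, service_rate=5.0, servers=4)  # 0.9
busy_fraction, idle_fraction = rho, 1.0 - rho
queue_is_good = is_steady_state(18.0, 5.0, 4) and busy_fraction > idle_fraction
```

With these illustrative rates the system is stable (ρ = 0.9 < 1) and the busy fraction exceeds the idle fraction, matching the "good" classification used in the abstract.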


2021 · Vol 40 (3) · pp. 171-180
Author(s): Bruno Figliuzzi, Antoine Montaux-Lambert, François Willot, Grégoire Naudin, Pierre Dupuis, ...

Morphological models are commonly used to describe microstructures observed in heterogeneous materials. Usually, these models depend upon a set of parameters that must be chosen carefully to match experimental observations of the microstructure. A common approach to parameter determination is to minimize an objective function, usually taken to be the discrepancy between measurements computed on the simulations and on the experimental observations, respectively. In this article, we present a Bayesian approach for determining the parameters of morphological models, based upon the definition of a posterior distribution for the parameters. A Markov chain Monte Carlo (MCMC) algorithm is then used to generate samples from the posterior distribution and to identify a set of optimal parameters. We show on several examples that the Bayesian approach allows us to properly identify the optimal parameters of distinct morphological models and to detect potential correlations between the parameters of the models.
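The MCMC step described above can be sketched with a minimal random-walk Metropolis sampler. The forward model and observed descriptor below are illustrative stand-ins (a real application would compute morphological descriptors on simulated microstructures, which is far more expensive).

```python
import math
import random

def log_posterior(theta, observed, sigma=0.1):
    """Gaussian discrepancy between a (hypothetical) forward-model
    prediction and an observed measurement, under a flat prior."""
    predicted = 2.0 * theta          # stand-in for the morphological model
    return -0.5 * ((predicted - observed) / sigma) ** 2

def metropolis(observed, n_steps=5000, step=0.2, seed=0):
    """Random-walk Metropolis sampler over a single parameter."""
    rng = random.Random(seed)
    theta = 1.0
    lp = log_posterior(theta, observed)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, observed)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

samples = metropolis(observed=4.0)
mean_theta = sum(samples[1000:]) / len(samples[1000:])  # concentrates near 2
```

Posterior correlations between several parameters, as mentioned in the abstract, would be read off the joint samples in the multi-parameter version of the same scheme.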


2021 · Vol 132 (1)
Author(s): Mina Woo, Jeong Kim

Abstract: The development of a reliable numerical simulation is essential for understanding high-speed forming processes such as electrohydraulic forming (EHF). This numerical model should be based on accurate material properties. However, dynamic material properties at strain rates exceeding 1000 s⁻¹ cannot be easily obtained experimentally. Thus, this study predicted two material parameters of the Cowper–Symonds constitutive equation by inverse parameter estimation, such that the simulation results obtained with the predicted parameters corresponded well with the experimental results. The target material was a 1-mm-thick Al 5052-H32 sheet. The comparison target was the final deformation shape of the sheet in the EHF free-bulging test at three input voltages of 6, 7, and 8 kV. For the inverse parameter estimation, the posterior distribution for the two parameters comprised a likelihood and a prior distribution. For the likelihood construction, a reduced-order surrogate model based on ordinary kriging and principal component analysis was developed in advance to substitute for the numerical simulation, and the error distribution of the bulge height between the experiment and the reduced-order surrogate model was obtained. The prior distribution at 7 kV was defined as a uniform distribution, and the resulting posterior distribution at 7 kV was then employed as the prior distribution at 6, 7, and 8 kV. Markov chain Monte Carlo sampling with the Metropolis–Hastings algorithm was used to obtain samples from the posterior distribution. After computing the autocorrelation to ensure sample independence, the lag at which the autocorrelation fell within ±0.02 was selected and every lag-th sample was retained. In total, 10⁵ samples were acquired, and the mean values were calculated from the obtained samples. Consequently, the numerical simulation with the mean parameter values displayed good agreement with the experimental results.
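The two parameters being estimated are the C and p of the Cowper–Symonds model, which scales the static flow stress by the strain rate: σ_dyn/σ_static = 1 + (ε̇/C)^(1/p). A minimal sketch of that relation (the parameter values below are illustrative, not the ones estimated in the paper):

```python
def cowper_symonds_scale(strain_rate, C, p):
    """Dynamic-to-static flow-stress ratio in the Cowper-Symonds model:
    sigma_dyn / sigma_static = 1 + (strain_rate / C) ** (1 / p)."""
    return 1.0 + (strain_rate / C) ** (1.0 / p)

# Illustrative parameters; the paper infers C and p from EHF bulge tests.
scale_static = cowper_symonds_scale(strain_rate=0.0, C=6500.0, p=4.0)    # 1.0
scale_dynamic = cowper_symonds_scale(strain_rate=1000.0, C=6500.0, p=4.0)
```

At zero strain rate the scale factor is exactly 1 (no dynamic hardening), and it grows with strain rate, which is why high-rate tests are needed to constrain C and p.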


Author(s): David Issa Mattos, Érika Martins Silva Ramos

Abstract: This article introduces the R package (Bayesian Paired Comparison in Stan) and the statistical models implemented in the package. This package aims to facilitate the use of Bayesian models for paired-comparison data in behavioral research. Bayesian analysis of paired-comparison data allows parameter estimation even in conditions where the maximum likelihood estimate does not exist, allows easy extension of paired-comparison models, provides straightforward interpretation of the results with credible intervals, has better control of type I error, provides more robust evidence towards the null hypothesis, allows propagation of uncertainties, includes prior information, and performs well when handling models with many parameters and latent variables. The package provides a consistent interface for R users and several functions to evaluate the posterior distribution of all parameters, to estimate the posterior distribution of any contest between items, and to obtain the posterior distribution of the ranks. Three reanalyses of recent studies that used the frequentist Bradley–Terry model are presented. These reanalyses are conducted with the Bayesian models of the package, and all the code used to fit the models and generate the figures and tables is available in the online appendix.
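The Bayesian Bradley–Terry idea can be sketched without Stan for the simplest case of two items: put a prior on the ability difference a, with P(win) = 1/(1 + e^(−a)), and compute the posterior on a grid. The win/loss counts and prior scale below are illustrative.

```python
import math

def log_prior(a, sd=2.0):
    """Normal prior on the ability difference (keeps the posterior proper
    even when one item wins every contest and the MLE does not exist)."""
    return -0.5 * (a / sd) ** 2

def log_lik(a, wins, losses):
    """Bradley-Terry likelihood: P(item beats reference) = logistic(a)."""
    p = 1.0 / (1.0 + math.exp(-a))
    return wins * math.log(p) + losses * math.log(1.0 - p)

def grid_posterior(wins, losses, lo=-5.0, hi=5.0, n=2001):
    """Normalized posterior over a uniform grid of ability differences."""
    grid = [lo + (hi - lo) * k / (n - 1) for k in range(n)]
    logs = [log_prior(a) + log_lik(a, wins, losses) for a in grid]
    m = max(logs)
    w = [math.exp(v - m) for v in logs]
    z = sum(w)
    return grid, [x / z for x in w]

grid, post = grid_posterior(wins=8, losses=2)
post_mean = sum(a * p for a, p in zip(grid, post))
```

The full package works with many items and latent structure, but the same prior-times-likelihood construction underlies it; note how the prior shrinks the estimate relative to the MLE log(8/2) ≈ 1.39.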


2021 · Vol 32 (1)
Author(s): L. Mihaela Paun, Dirk Husmeier

Abstract: We propose to accelerate Hamiltonian and Lagrangian Monte Carlo algorithms by coupling them with Gaussian processes for emulation of the log unnormalised posterior distribution. We provide proofs of detailed balance with respect to the exact posterior distribution for these algorithms, and validate the correctness of the samplers' implementation by Geweke consistency tests. We implement these algorithms in a delayed acceptance (DA) framework, and investigate whether the DA scheme can offer computational gains over the standard algorithms. A comparative evaluation study is carried out to assess the performance of the methods on a series of models described by differential equations, including a real-world application of a 1D fluid-dynamics model of the pulmonary blood circulation. The aim is to identify the algorithm which gives the best trade-off between accuracy and computational efficiency, to be used in nonlinear DE models, which are computationally onerous due to repeated numerical integrations in a Bayesian analysis. Results showed no advantage of the DA scheme over the standard algorithms with respect to several efficiency measures based on the effective sample size for most methods and DE models considered. These gradient-driven algorithms register a high acceptance rate, so the number of expensive forward model evaluations is not significantly reduced by the first emulator-based stage of DA. Additionally, the Lagrangian Dynamical Monte Carlo and Riemann Manifold Hamiltonian Monte Carlo tended to register the highest efficiency (in terms of effective sample size normalised by the number of forward model evaluations), followed by the Hamiltonian Monte Carlo, while the No-U-Turn sampler tended to be the least efficient.
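The delayed-acceptance mechanism can be sketched with a plain random-walk sampler: a cheap emulator screens each proposal, and only survivors trigger the expensive evaluation, with a second-stage correction that preserves detailed balance. Both targets below are illustrative Gaussians standing in for a DE-based posterior and its GP emulator.

```python
import math
import random

def expensive_logpost(x):
    """Stand-in for a costly log-posterior (e.g. requiring a DE solve)."""
    return -0.5 * x * x

def emulator_logpost(x):
    """Cheap, slightly-off approximation of the expensive log-posterior."""
    return -0.5 * (1.05 * x) ** 2

def da_metropolis(n_steps=20000, step=1.0, seed=1):
    rng = random.Random(seed)
    x, n_expensive = 0.0, 0
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        # Stage 1: screen the proposal with the emulator only.
        if math.log(rng.random()) < emulator_logpost(prop) - emulator_logpost(x):
            # Stage 2: correct with the expensive target; the ratio below
            # preserves detailed balance w.r.t. the exact posterior.
            n_expensive += 1
            log_ratio = (expensive_logpost(prop) - expensive_logpost(x)) \
                      - (emulator_logpost(prop) - emulator_logpost(x))
            if math.log(rng.random()) < log_ratio:
                x = prop
        samples.append(x)
    return samples, n_expensive

samples, n_expensive = da_metropolis()
```

The abstract's negative finding corresponds to the case where stage 1 rejects few proposals (high acceptance rate), so n_expensive stays close to n_steps and little cost is saved.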


2021
Author(s): Debora Y. C. Brandt, Xinzhu Wei, Yun Deng, Andrew H. Vaughn, Rasmus Nielsen

The ancestral recombination graph (ARG) is a structure that describes the joint genealogies of sampled DNA sequences along the genome. Recent computational methods have made impressive progress towards scalably estimating whole-genome genealogies. In addition to inferring the ARG, some of these methods can also provide ARGs sampled from a defined posterior distribution. Obtaining good samples of ARGs is crucial for quantifying statistical uncertainty and for estimating population genetic parameters such as effective population size, mutation rate, and allele age. Here, we use simulations to benchmark three popular ARG inference programs: ARGweaver, Relate, and tsdate. We use neutral coalescent simulations to (1) compare the true coalescence times to the inferred times at each locus; (2) compare the distribution of coalescence times across all loci to the expected exponential distribution; and (3) evaluate whether the sampled coalescence times have the properties expected of a valid posterior distribution. We find that inferred coalescence times at each locus are more accurate in ARGweaver and Relate than in tsdate. However, all three methods tend to overestimate small coalescence times and underestimate large ones. Lastly, the posterior distribution of ARGweaver is closer to the expected posterior distribution than Relate's, but this higher accuracy comes at a substantial cost in scalability. The best choice of method will depend on the number and length of input sequences and on the goals of downstream analyses, and we provide guidelines for best practice.
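Check (2) above rests on a standard coalescent fact: for a pair of sequences, the coalescence time in coalescent units is Exponential(1), so its mean and variance should both be 1. A minimal sketch of that moment check, with simulated times standing in for per-locus inferred times:

```python
import random

def simulate_pairwise_times(n_loci, seed=0):
    """Illustrative stand-in for per-locus coalescence times; under the
    neutral coalescent these are Exponential(1) in coalescent units."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0) for _ in range(n_loci)]

def moment_check(times):
    """For Exponential(1), mean = 1 and variance = 1; systematic
    deviations flag the over/underestimation biases discussed above."""
    n = len(times)
    mean = sum(times) / n
    var = sum((t - mean) ** 2 for t in times) / n
    return mean, var

mean, var = moment_check(simulate_pairwise_times(50000))
```

An inference method that overestimates small times and underestimates large ones would show up here as a variance noticeably below 1 even when the mean looks right.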


Computation · 2021 · Vol 9 (11) · pp. 119
Author(s): Kathrin Hellmuth, Christian Klingenberg, Qin Li, Min Tang

Chemotaxis describes the movement of an organism, such as a single- or multi-cellular organism or a bacterium, in response to a chemical stimulus. Two widely used models to describe the phenomenon are the celebrated Keller–Segel equation and a chemotaxis kinetic equation. These two equations describe the organism's movement at the macroscopic and mesoscopic levels, respectively, and are asymptotically equivalent in the parabolic regime. The way in which the organism responds to a chemical stimulus is encoded in the diffusion/advection coefficients of the Keller–Segel equation or the turning kernel of the chemotaxis kinetic equation. Experiments are conducted to measure the time dynamics of the organisms' population-level movement in reaction to a given stimulus. Inferring the chemotaxis response from such measurements constitutes an inverse problem. In this paper, we discuss the relation between the macroscopic and mesoscopic inverse problems, which are associated with two different forward models. The discussion is presented in the Bayesian framework, where the posterior distribution of the turning kernel of the organism population is sought. We prove the asymptotic equivalence of the two posterior distributions.
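The macroscopic forward model can be sketched with an explicit finite-difference step for the 1D Keller–Segel density equation ρ_t = D ρ_xx − χ (ρ c_x)_x with a fixed chemoattractant profile c(x). Grid, coefficients, and the attractant profile below are all illustrative choices, not the paper's setup.

```python
def ks_step(rho, c, dx, dt, D, chi):
    """One explicit time step of rho_t = D*rho_xx - chi*(rho*c_x)_x.
    Boundary values are held fixed for simplicity."""
    n = len(rho)
    new = rho[:]
    for i in range(1, n - 1):
        diff = D * (rho[i + 1] - 2.0 * rho[i] + rho[i - 1]) / dx ** 2
        # Chemotactic flux J = chi * rho * c_x, evaluated at cell faces:
        flux_r = chi * 0.5 * (rho[i] + rho[i + 1]) * (c[i + 1] - c[i]) / dx
        flux_l = chi * 0.5 * (rho[i - 1] + rho[i]) * (c[i] - c[i - 1]) / dx
        new[i] = rho[i] + dt * (diff - (flux_r - flux_l) / dx)
    return new

n, dx, dt = 51, 0.02, 1e-5
rho = [1.0] * n                                  # uniform initial density
c = [((i * dx) - 0.5) ** 2 for i in range(n)]    # hypothetical attractant field
D, chi = 0.1, 1.0
for _ in range(100):
    rho = ks_step(rho, c, dx, dt, D, chi)
```

In the inverse problem, simulated population dynamics like these are compared against measured time dynamics to recover the coefficients (macroscopic level) or the turning kernel (kinetic level).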


2021 · Vol 3
Author(s): Dimitrios Kiagias, Giulia Russo, Giuseppe Sgroi, Francesco Pappalardo, Miguel A. Juárez

We propose a Bayesian hierarchical method for combining in silico and in vivo data in an augmented clinical trial with binary end points. The joint posterior distribution from the in silico experiment is treated as a prior, weighted by a measure of compatibility of the shared characteristics with the in vivo data. We also formalise the contribution and impact of the in silico information in the augmented trial. We illustrate our approach to inference with in silico data from the UISS-TB simulator, a bespoke simulator of virtual patients with tuberculosis infection, and synthetic physical patients from a clinical trial.
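For a binary endpoint, the "weighted prior" construction can be sketched with a power-prior style Beta-binomial update: the in silico posterior (here a Beta distribution) enters the in vivo analysis raised to a weight w in [0, 1] reflecting compatibility. This is a simplified stand-in for the paper's hierarchical model, and all counts and the weight are illustrative.

```python
def power_prior_posterior(a_insilico, b_insilico, w, successes, failures):
    """Beta posterior combining a down-weighted in silico Beta(a, b)
    with in vivo binomial data on top of a flat Beta(1, 1) baseline;
    w = 0 ignores the in silico trial, w = 1 pools it fully."""
    a_post = 1.0 + w * (a_insilico - 1.0) + successes
    b_post = 1.0 + w * (b_insilico - 1.0) + failures
    return a_post, b_post

# Hypothetical numbers: in silico posterior Beta(30, 10), compatibility
# weight 0.5, and 12 successes / 8 failures among physical patients.
a, b = power_prior_posterior(a_insilico=30.0, b_insilico=10.0, w=0.5,
                             successes=12, failures=8)
post_mean = a / (a + b)
```

The weight w is the lever that formalises "contribution and impact" of the in silico arm: the effective number of borrowed virtual patients scales linearly with it.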


2021
Author(s): A. T. Barker, C. S. Lee, F. Forouzanfar, A. Guion, X.-H. Wu

Abstract: We explore the problem of drawing posterior samples from a lognormal permeability field conditioned on noisy measurements at discrete locations. The underlying unconditioned samples are based on a scalable PDE-sampling technique that scales better for large problems than traditional Karhunen–Loève sampling, while still allowing consistent samples to be drawn on a hierarchy of spatial scales. Lognormal random fields produced in this scalable and hierarchical way are then conditioned to the measured data by a randomized maximum likelihood approach to draw from a Bayesian posterior distribution. The algorithm for drawing from the posterior distribution can be shown to be equivalent to a PDE-constrained optimization problem, which admits efficient computational solution techniques. Numerical results demonstrate the efficiency of the proposed methods. In particular, we are able to match statistics for a simple flow problem on the fine grid with high accuracy and at much lower cost on coarser grids.
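Randomized maximum likelihood (RML) can be sketched in one dimension: each posterior sample solves a least-squares problem in which both the data and the prior mean are independently perturbed, and for linear-Gaussian models the resulting samples follow the exact posterior. The observation operator and all values below are illustrative (the paper's version is a PDE-constrained optimization over a permeability field).

```python
import random

def rml_sample(g, d, sd_obs, m0, sd_prior, rng):
    """One RML draw for the linear model d = g*m + noise with prior
    m ~ N(m0, sd_prior^2): perturb data and prior mean, then minimise
    (g*m - d_pert)^2/sd_obs^2 + (m - m_pert)^2/sd_prior^2 in closed form."""
    d_pert = d + rng.gauss(0.0, sd_obs)       # perturbed observation
    m_pert = m0 + rng.gauss(0.0, sd_prior)    # perturbed prior mean
    w_obs, w_pr = 1.0 / sd_obs ** 2, 1.0 / sd_prior ** 2
    return (g * w_obs * d_pert + w_pr * m_pert) / (g * g * w_obs + w_pr)

rng = random.Random(0)
samples = [rml_sample(g=2.0, d=3.0, sd_obs=0.5, m0=0.0, sd_prior=1.0, rng=rng)
           for _ in range(20000)]
rml_mean = sum(samples) / len(samples)
```

With these numbers the exact posterior is N(24/17, 1/17), so the sample mean should sit near 1.41; in the nonlinear PDE setting each draw instead requires solving a perturbed optimization problem, which is where the PDE-constrained formulation pays off.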

