Joint Posterior Distribution
Recently Published Documents

TOTAL DOCUMENTS: 17 (FIVE YEARS: 11)
H-INDEX: 3 (FIVE YEARS: 0)

2021, Vol. 3
Author(s): Dimitrios Kiagias, Giulia Russo, Giuseppe Sgroi, Francesco Pappalardo, Miguel A. Juárez

We propose a Bayesian hierarchical method for combining in silico and in vivo data into an augmented clinical trial with binary endpoints. The joint posterior distribution from the in silico experiment is treated as a prior, weighted by a measure of the compatibility of the shared characteristics with the in vivo data. We also formalise the contribution and impact of the in silico information in the augmented trial. We illustrate our approach to inference with in silico data from UISS-TB, a bespoke simulator of virtual patients with tuberculosis infection, and synthetic physical patients from a clinical trial.
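
The core idea of down-weighting simulated evidence can be illustrated with a minimal power-prior-style sketch for a binary endpoint. The beta-binomial form, the fixed weight w and the toy counts below are illustrative assumptions, not the authors' hierarchical model or compatibility measure.

```python
import numpy as np
from scipy import stats

# Toy counts (assumed for illustration): successes/failures in the
# in silico (simulated) arm and the in vivo (physical) arm.
sim_success, sim_fail = 42, 18      # virtual patients from a simulator
phys_success, phys_fail = 11, 9     # physical patients from the trial

# Compatibility weight w in [0, 1]: 1 = trust the in silico data fully,
# 0 = ignore it.  In the paper this weight reflects how well the shared
# characteristics match; here it is just a fixed illustrative value.
w = 0.5

# Beta(1, 1) baseline prior, updated power-prior style: the simulated
# counts enter the posterior discounted by w.
a_post = 1 + w * sim_success + phys_success
b_post = 1 + w * sim_fail + phys_fail
posterior = stats.beta(a_post, b_post)

print("posterior mean response rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```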


Author(s): Federico Castelletti, Alessandro Mascaro

Abstract Bayesian networks in the form of Directed Acyclic Graphs (DAGs) represent an effective tool for modeling and inferring dependence relations among variables, a process known as structural learning. In addition, when equipped with the notion of intervention, a causal DAG model can be adopted to quantify the causal effect on a response due to a hypothetical intervention on some variable. Observational data cannot distinguish between DAGs encoding the same set of conditional independencies (Markov equivalent DAGs), which can nevertheless differ from a causal perspective. Moreover, because causal effects depend on the underlying network structure, uncertainty about the DAG generating model crucially affects the causal estimation results. We propose a Bayesian methodology which combines structural learning of Gaussian DAG models with inference on the causal effects arising from simultaneous interventions on any given set of variables in the system. Our approach fully accounts for the uncertainty about both the network structure and the causal relationships through a joint posterior distribution over DAGs, DAG parameters and, consequently, causal effects.
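
In a linear Gaussian DAG, the total causal effect of one variable on another is the corresponding entry of (I - B)^{-1}, where B collects the edge coefficients, and averaging that quantity over posterior draws of (DAG, parameters) gives a model-averaged effect. The sketch below illustrates the identity on two hand-picked Markov equivalent structures with assumed coefficients and assumed 50/50 posterior weights; it is not the authors' posterior sampler.

```python
import numpy as np

def total_causal_effect(B, x, y):
    """Total effect of a unit intervention on X_x on X_y in a linear
    Gaussian SEM X = B^T X + eps, where B[i, j] is the coefficient of
    edge i -> j.  It equals the (x, y) entry of (I - B)^{-1}, i.e. the
    sum over directed paths of products of edge coefficients."""
    p = B.shape[0]
    return np.linalg.inv(np.eye(p) - B)[x, y]

# Two Markov equivalent 3-node DAGs with assumed edge weights
# (posterior draws would normally supply both structure and weights).
# DAG 1: X0 -> X1 -> X2        DAG 2: X1 -> X0, X1 -> X2
B1 = np.array([[0.0, 0.8, 0.0],
               [0.0, 0.0, 0.5],
               [0.0, 0.0, 0.0]])
B2 = np.array([[0.0, 0.0, 0.0],
               [0.8, 0.0, 0.5],
               [0.0, 0.0, 0.0]])

# The effect of intervening on X0 on the response X2 differs across the
# two structures even though they are observationally indistinguishable;
# weighting them by (assumed) posterior probabilities gives a
# model-averaged effect that reflects structural uncertainty.
draws = [(B1, 0.5), (B2, 0.5)]
effects = [total_causal_effect(B, 0, 2) for B, _ in draws]
bma_effect = sum(wt * e for (_, wt), e in zip(draws, effects))
print("per-DAG effects:", effects)      # [0.4, 0.0]
print("model-averaged effect:", bma_effect)
```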


2021
Author(s): Ulrich Callies, Christopher G. Albert, Udo von Toussaint

Abstract We address the analysis and proper representation of posterior dependence among parameters obtained from model calibration. A simple water quality model for the Elbe River (Germany) serves as an example. The joint posterior distribution of six model parameters is estimated by Markov Chain Monte Carlo sampling based on a quadratic likelihood function. The estimated distribution shows the extent to which the model parameters are constrained by the observations, highlighting issues that cannot be settled unless more information becomes available. In our example, some ambiguity remains because the effects of growth limitation by lack of silica cannot be separated from those of a temperature-dependent algal loss rate. Being aware of such indeterminacy in the model structure is crucial when the model is used in support of management options. Bayesian network technology can be employed to convey this information in a transparent way.
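
A random-walk Metropolis sampler with a Gaussian (quadratic log-) likelihood is enough to reproduce the kind of posterior-dependence diagnostic described here. The two-parameter toy forward model and synthetic observations below are illustrative assumptions, not the Elbe water quality model; the two parameters are deliberately near-confounded so the posterior correlation matrix reveals what the data cannot resolve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: the two basis functions are nearly collinear on
# [0, 1], so the data constrain a + b well but leave a - b vague.
x = np.linspace(0.0, 1.0, 20)
def model(theta):
    a, b = theta
    return a * x + b * x**1.1

theta_true = np.array([1.0, 0.5])
sigma_obs = 0.05
y_obs = model(theta_true) + rng.normal(0.0, sigma_obs, size=x.size)

def log_post(theta):
    # Quadratic log-likelihood (Gaussian errors) plus a flat prior.
    resid = y_obs - model(theta)
    return -0.5 * np.sum(resid**2) / sigma_obs**2

# Random-walk Metropolis sampling of the joint posterior.
n_iter, step = 20000, 0.05
chain = np.empty((n_iter, 2))
theta = np.array([0.5, 0.5])
lp = log_post(theta)
for i in range(n_iter):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

samples = chain[n_iter // 2:]          # discard burn-in
print("posterior means:", samples.mean(axis=0))
print("posterior correlation of (a, b):")
print(np.corrcoef(samples.T))          # strong negative correlation expected
```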


2020
Author(s): Oliver Lüdtke, Esther Ulitzsch, Alexander Robitzsch

With small to modest sample sizes and complex models, maximum likelihood (ML) estimation of confirmatory factor analysis (CFA) models can show serious estimation problems such as nonconvergence or parameter estimates outside the admissible parameter space. In the present article, we discuss two Bayesian estimation methods for stabilizing the parameter estimates of a CFA: penalized maximum likelihood (PML) estimation and Markov chain Monte Carlo (MCMC) methods. We clarify that the two approaches rely on different Bayesian point estimates, namely the mode of the joint posterior distribution (PML) versus the mean (EAP) or mode (MAP) of the marginal posterior distribution (MCMC), and discuss the conditions under which they produce different results. In a simulation study, we show that the MCMC method clearly outperforms PML and that these performance gains can be explained by the fact that MCMC uses the EAP as a point estimate. We also argue that it is often advantageous to choose a parameterization in which the main parameters of interest are bounded, and we suggest the four-parameter beta distribution as a prior distribution for loadings and correlations. Using simulated data, we show that selecting weakly informative four-parameter beta priors can further stabilize parameter estimates, even when the priors are mildly misspecified. Finally, we derive recommendations and propose directions for further research.
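
The four-parameter beta prior mentioned here is simply a beta density with two shape parameters rescaled to a bounded interval such as [-1, 1]. The sketch below writes down its log density and contrasts the posterior mean (EAP) and mode (MAP) on a toy skewed grid posterior for a correlation; the shape values and the stand-in likelihood are assumptions, not the authors' simulation setup.

```python
import numpy as np
from scipy import stats

def fourparam_beta_logpdf(x, alpha, beta, lower, upper):
    """Log density of a beta(alpha, beta) distribution rescaled from
    (0, 1) to (lower, upper), e.g. (-1, 1) for correlations."""
    z = (x - lower) / (upper - lower)
    return stats.beta.logpdf(z, alpha, beta) - np.log(upper - lower)

# Weakly informative prior on a correlation: symmetric, bounded in (-1, 1).
alpha = beta = 2.0
grid = np.linspace(-0.999, 0.999, 4001)
dx = grid[1] - grid[0]
log_prior = fourparam_beta_logpdf(grid, alpha, beta, -1.0, 1.0)

# Toy skewed log-likelihood peaking near the boundary (an assumed
# stand-in for a small-sample CFA likelihood, not a real one).
log_lik = 30.0 * np.log1p(0.9 * grid) - 15.0 * np.log1p(grid**2)

log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= post.sum() * dx                  # normalize on the grid

eap = np.sum(grid * post) * dx           # posterior mean
map_ = grid[np.argmax(post)]             # posterior mode
print(f"EAP = {eap:.3f}, MAP = {map_:.3f}")   # they differ for skewed posteriors
```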


Author(s): Xu Shi, Andrew F Neuwald, Xiao Wang, Tian-Li Wang, Leena Hilakivi-Clarke, ...

Abstract
Motivation: High-throughput RNA sequencing has revolutionized the scope and depth of transcriptome analysis. Accurate reconstruction of a phenotype-specific transcriptome is challenging due to the noise and variability of RNA-seq data. It requires the computational identification of transcripts from multiple samples of the same phenotype, given the underlying consensus transcript structure.
Results: We present a Bayesian method, integrated assembly of phenotype-specific transcripts (IntAPT), that identifies phenotype-specific isoforms from multiple RNA-seq profiles. IntAPT features a novel two-layer Bayesian model to capture the presence of isoforms at the group layer and to quantify the abundance of isoforms at the sample layer. A spike-and-slab prior is used to model isoform expression and to enforce the sparsity of expressed isoforms. Dependencies between the existence of isoforms and their expression are modeled explicitly to facilitate parameter estimation. Model parameters are estimated iteratively using Gibbs sampling to infer the joint posterior distribution, from which the presence and abundance of isoforms can be reliably determined. Studies using both simulations and real datasets show that IntAPT consistently outperforms existing methods for phenotype-specific transcript assembly. Experimental results demonstrate that, despite sequencing errors, IntAPT performs robustly across multiple samples, with notably improved identification of expressed isoforms of low abundance.
Availability and implementation: The IntAPT package is available at http://github.com/henryxushi/IntAPT.
Supplementary information: Supplementary data are available at Bioinformatics online.
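
A generic per-coefficient spike-and-slab Gibbs update in a normal linear model conveys the mechanics of coupling an existence indicator with an abundance parameter, which is the flavour of model described above. It is a simplified stand-in, not IntAPT's two-layer isoform model: the design matrix, noise level, slab variance and prior inclusion probability below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only 2 of 6 candidate "isoforms" (columns) are truly present.
n, p = 100, 6
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0])
sigma2 = 1.0                       # assumed known noise variance
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2), size=n)

tau2 = 4.0                         # slab variance for active coefficients
pi_incl = 0.5                      # prior inclusion probability

beta = np.zeros(p)
gamma = np.zeros(p, dtype=bool)    # existence indicators
incl_counts = np.zeros(p)
n_iter, burn = 3000, 1000

for it in range(n_iter):
    for j in range(p):
        # Residual with coefficient j's contribution removed.
        r = y - X @ beta + X[:, j] * beta[j]
        xj = X[:, j]
        xtx, xtr = xj @ xj, xj @ r
        # Log Bayes factor of "included" vs "excluded", with beta_j
        # integrated out under the N(0, tau2) slab.
        log_bf = (-0.5 * np.log1p(tau2 * xtx / sigma2)
                  + 0.5 * tau2 * xtr**2 / (sigma2 * (sigma2 + tau2 * xtx)))
        log_odds = np.log(pi_incl / (1.0 - pi_incl)) + log_bf
        p_incl = 1.0 / (1.0 + np.exp(-log_odds))
        gamma[j] = rng.uniform() < p_incl
        if gamma[j]:
            # Conditional posterior of beta_j given inclusion.
            v = 1.0 / (xtx / sigma2 + 1.0 / tau2)
            m = v * xtr / sigma2
            beta[j] = rng.normal(m, np.sqrt(v))
        else:
            beta[j] = 0.0
    if it >= burn:
        incl_counts += gamma

print("posterior inclusion probabilities:", incl_counts / (n_iter - burn))
print("final beta draw:", np.round(beta, 2))
```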


2020, Vol. 49 (4), pp. 46-56
Author(s): Aliaksandr Hubin, Geir O. Storvik, Paul E. Grini, Melinka A. Butenko

Epigenetic observations are represented by the total number of reads from a given pool of cells and the number of methylated reads, making it reasonable to model these data with a binomial distribution. Numerous factors can influence the probability of methylation in a particular region, and these probabilities exhibit strong spatial dependence along the genome. We incorporate dependence on covariates and spatial dependence of the methylation probability for observations from a pool of cells by means of a binomial regression model with a latent Gaussian field and a logit link function. We apply a Bayesian approach, including prior specifications on model configurations, and run a mode jumping Markov chain Monte Carlo algorithm (MJMCMC) across different choices of covariates in order to obtain the joint posterior distribution of parameters and models. This also allows us to identify the best set of covariates for modeling the methylation probability within the genomic region of interest, together with the individual marginal inclusion probabilities of the covariates.
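
The likelihood described above can be written down directly: methylated counts are binomial with a logit-linear success probability that combines covariate effects with a spatially correlated latent field along the genome. The sketch below evaluates that joint log-posterior for an assumed AR(1) latent field and a single synthetic covariate; no MJMCMC model search is attempted, and the priors are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic methylation data over G genomic positions.
G = 200
n_reads = rng.integers(10, 60, size=G)            # total reads per position
X = np.column_stack([np.ones(G),                  # intercept
                     rng.normal(size=G)])         # one assumed covariate
beta_true = np.array([-0.5, 1.0])

# AR(1) latent Gaussian field captures spatial dependence along the genome.
rho, tau = 0.9, 0.5
z = np.zeros(G)
z[0] = tau / np.sqrt(1 - rho**2) * rng.normal()
for g in range(1, G):
    z[g] = rho * z[g - 1] + tau * rng.normal()

logit_p = X @ beta_true + z
p = 1.0 / (1.0 + np.exp(-logit_p))
m_reads = rng.binomial(n_reads, p)                # methylated reads

def log_posterior(beta, z, rho, tau):
    """Binomial-logit log-likelihood + AR(1) log-prior on the latent field
    + N(0, 10^2) log-prior on the regression coefficients (assumed)."""
    eta = X @ beta + z
    # log Binomial(m | n, sigmoid(eta)) up to the binomial coefficient.
    loglik = np.sum(m_reads * eta - n_reads * np.logaddexp(0.0, eta))
    innov = z[1:] - rho * z[:-1]
    logprior_z = (-0.5 * z[0]**2 / (tau**2 / (1 - rho**2))
                  - 0.5 * np.sum(innov**2) / tau**2)
    logprior_beta = -0.5 * np.sum(beta**2) / 100.0
    return loglik + logprior_z + logprior_beta

print("log posterior at the generating values:",
      log_posterior(beta_true, z, rho, tau))
```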


2020, Vol. 34 (04), pp. 6973-6980
Author(s): Yizhou Zhou, Xiaoyan Sun, Chong Luo, Zheng-Jun Zha, Wenjun Zeng

The emergence of neural architecture search (NAS) has greatly advanced the research on network design. Recent proposals such as gradient-based methods or one-shot approaches significantly boost the efficiency of NAS. In this paper, we formulate the NAS problem from a Bayesian perspective and propose explicitly estimating the joint posterior distribution over pairs of network architectures and weights. Accordingly, a hybrid network representation is presented which enables us to leverage Variational Dropout, so that the approximation of the posterior distribution becomes fully gradient-based and highly efficient. A posterior-guided sampling method is then presented to sample architecture candidates and evaluate them directly. As a Bayesian approach, our posterior-guided NAS (PGNAS) avoids tuning a number of hyper-parameters and enables very effective architecture sampling in posterior probability space. Interestingly, it also leads to a deeper insight into the weight sharing used in one-shot NAS and naturally alleviates the mismatch between the sampled architecture and the weights caused by weight sharing. We validate our PGNAS method on the fundamental image classification task. Results on CIFAR-10, CIFAR-100 and ImageNet show that PGNAS achieves a good trade-off between precision and speed of search among NAS methods. For example, it takes 11 GPU days to find a very competitive architecture with 1.98% and 14.28% test errors on CIFAR-10 and CIFAR-100, respectively.
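
Stripped of the variational-dropout machinery, posterior-guided sampling amounts to drawing architecture candidates from a learned distribution over per-layer operations and evaluating only those draws. In the sketch below, the per-layer probabilities and the toy scoring function are placeholders, not the PGNAS posterior or a real shared-weight evaluation.

```python
import numpy as np

rng = np.random.default_rng(3)

ops = ["conv3x3", "conv5x5", "sep_conv3x3", "skip", "max_pool"]
n_layers = 8

# Assumed approximate posterior over operations for each layer
# (in PGNAS this would come from the learned variational distribution).
logits = rng.normal(size=(n_layers, len(ops)))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def toy_score(arch):
    """Placeholder for evaluating a sampled architecture with shared
    weights; here just an arbitrary preference for separable convs."""
    return sum(1.0 if ops[i] == "sep_conv3x3" else 0.1 for i in arch)

# Posterior-guided sampling: draw candidates where the posterior puts
# mass, evaluate them directly, and keep the best one.
candidates = [
    [rng.choice(len(ops), p=probs[layer]) for layer in range(n_layers)]
    for _ in range(20)
]
best = max(candidates, key=toy_score)
print("best sampled architecture:", [ops[i] for i in best])
```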


Author(s): Olawale B. Akanbi, Olusanya E. Olubusoye, Samuel A. Babatunde

The Bayes factor is a major Bayesian tool for model comparison, especially when the model priors are the same. In this paper, the Savage-Dickey density ratio (SDDR) is used to derive the Bayes factor for selecting between two competing models in a normal linear regression with an independent normal-gamma prior. Gibbs sampling of the joint posterior distribution, with equal prior precision for both the unrestricted and restricted models, is used to obtain the model estimates. The results show that the Bayes factor gives more support to the unrestricted model than to the restricted one, and that this conclusion is consistent across changes in sample size.
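
The Savage-Dickey density ratio compares the posterior density of the restricted coefficient at zero with its prior density at zero, and with an independent normal prior on the coefficients and a gamma prior on the precision the posterior ordinate can be estimated by averaging conditional densities over Gibbs draws. The regression setup, prior values and synthetic data below are illustrative assumptions, not the paper's empirical example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic normal linear regression; we test the restriction beta_2 = 0.
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.8, 0.0])      # the restriction happens to be true
y = X @ beta_true + rng.normal(0.0, 1.0, size=n)
k, j = X.shape[1], 2                       # j indexes the tested coefficient

# Independent normal-gamma prior: beta ~ N(0, V0), precision h ~ Gamma(a0, b0).
V0 = np.eye(k) * 10.0
V0_inv = np.linalg.inv(V0)
a0, b0 = 2.0, 2.0

# Gibbs sampler for (beta, h).
n_iter, burn = 5000, 1000
h = 1.0
cond_dens_at_zero = []
for it in range(n_iter):
    # beta | h, y ~ N(m, V)
    V = np.linalg.inv(V0_inv + h * X.T @ X)
    m = V @ (h * X.T @ y)
    beta = rng.multivariate_normal(m, V)
    # h | beta, y ~ Gamma(a0 + n/2, rate = b0 + 0.5 * SSR)
    ssr = np.sum((y - X @ beta) ** 2)
    h = rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * ssr))
    if it >= burn:
        # Rao-Blackwellised ordinate: conditional posterior density of
        # beta_j at zero given the current precision draw.
        cond_dens_at_zero.append(stats.norm.pdf(0.0, m[j], np.sqrt(V[j, j])))

post_dens = np.mean(cond_dens_at_zero)
prior_dens = stats.norm.pdf(0.0, 0.0, np.sqrt(V0[j, j]))
bf_restricted_vs_unrestricted = post_dens / prior_dens
print("SDDR Bayes factor (restricted / unrestricted):",
      bf_restricted_vs_unrestricted)
```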


2019, Vol. 45 (1), pp. 58-85
Author(s): Wim J. van der Linden, Hao Ren

The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally select the items. Thanks to the extremely rapid convergence of the Markov chain and simple posterior calculations, the algorithm is ready for use in real-world adaptive testing, with running times fully comparable to those of algorithms that fix all parameters at point estimates during testing.
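
The mechanics can be sketched with a two-parameter logistic (2PL) model: a short Metropolis run scores the examinee's ability from the responses so far, and the next item is chosen to maximise Fisher information averaged over the ability draws. Unlike the paper's algorithm, the item parameters below are fixed, assumed values rather than sampled from their own posterior, and the item bank is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed item bank: discrimination a and difficulty b for each item.
n_items = 50
a = rng.uniform(0.8, 2.0, size=n_items)
b = rng.normal(0.0, 1.0, size=n_items)

def p_correct(theta, i):
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def log_post(theta, items, resp):
    # Standard normal prior on ability + Bernoulli (2PL) likelihood.
    lp = -0.5 * theta**2
    for i, u in zip(items, resp):
        p = p_correct(theta, i)
        lp += u * np.log(p) + (1 - u) * np.log(1.0 - p)
    return lp

def ability_draws(items, resp, n_draw=2000, step=0.5):
    # Random-walk Metropolis over theta given the responses so far.
    theta, lp = 0.0, log_post(0.0, items, resp)
    draws = []
    for _ in range(n_draw):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop, items, resp)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws[n_draw // 2:])        # keep the second half

# Simulated adaptive test for one examinee with true ability 0.7.
theta_true, items, resp = 0.7, [], []
for _ in range(10):
    draws = ability_draws(items, resp)
    # Select the unused item with the largest posterior-averaged information.
    info = [np.mean(a[i]**2 * p_correct(draws, i) * (1 - p_correct(draws, i)))
            if i not in items else -np.inf for i in range(n_items)]
    nxt = int(np.argmax(info))
    items.append(nxt)
    resp.append(int(rng.uniform() < p_correct(theta_true, nxt)))

print("administered items:", items)
print("final ability estimate:", ability_draws(items, resp).mean())
```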


Author(s): Therese M. Donovan, Ruth M. Mickey

This chapter introduces Markov Chain Monte Carlo (MCMC) with Gibbs sampling, revisiting the "Maple Syrup Problem" of Chapter 12, where the goal was to estimate the two parameters of a normal distribution, μ and σ. Chapter 12 used the normal-normal conjugate to derive the posterior distribution for the unknown parameter μ; the parameter σ was assumed to be known. This chapter uses MCMC with Gibbs sampling to estimate the joint posterior distribution of both μ and σ. Gibbs sampling is a special case of the Metropolis–Hastings algorithm. The chapter describes MCMC with Gibbs sampling step by step, which requires (1) computing the posterior distribution of a given parameter conditional on the value of the other parameter, and (2) drawing a sample from that conditional posterior distribution. In this chapter, Gibbs sampling makes use of the conjugate solutions to decompose the joint posterior distribution into full conditional distributions for each parameter.
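
A minimal Gibbs sampler for the two parameters of a normal distribution alternates between exactly the two full conditionals described: a normal draw for μ given σ², and (here) an inverse-gamma draw for σ² given μ. The prior values and the toy data standing in for the maple syrup measurements are assumptions for illustration, and the chapter's exact conjugate parameterisation may differ.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data standing in for the maple syrup measurements.
y = rng.normal(loc=30.0, scale=4.0, size=25)
n, ybar = y.size, y.mean()

# Conjugate-style priors: mu ~ N(mu0, tau0^2), sigma^2 ~ Inv-Gamma(a0, b0).
mu0, tau0_sq = 25.0, 100.0
a0, b0 = 2.0, 10.0

n_iter, burn = 5000, 1000
mu, sigma_sq = ybar, y.var()
mu_draws, sig_draws = [], []

for it in range(n_iter):
    # Full conditional for mu given sigma^2: normal (normal-normal update).
    prec = 1.0 / tau0_sq + n / sigma_sq
    mean = (mu0 / tau0_sq + n * ybar / sigma_sq) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))
    # Full conditional for sigma^2 given mu: inverse-gamma,
    # drawn as 1 / Gamma(shape, rate).
    shape = a0 + n / 2.0
    rate = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma_sq = 1.0 / rng.gamma(shape, 1.0 / rate)
    if it >= burn:
        mu_draws.append(mu)
        sig_draws.append(np.sqrt(sigma_sq))

print("posterior mean of mu:", np.mean(mu_draws))
print("posterior mean of sigma:", np.mean(sig_draws))
```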

