likelihood approximation
Recently Published Documents

TOTAL DOCUMENTS: 37 (FIVE YEARS: 9)
H-INDEX: 12 (FIVE YEARS: 2)

PLoS ONE, 2021, Vol 16 (7), pp. e0254103
Author(s): Daniele de Brito Trindade, Patrícia Leone Espinheira, Klaus Leite Pinto Vasconcellos, Jalmar Manuel Farfán Carrasco, Maria do Carmo Soares de Lima

We propose in this paper a general class of nonlinear beta regression models with measurement errors. The motivation for this model arose from a real problem we discuss here. The application concerns a typical oil refinery process in which the main covariate is the concentration of a reagent that is usually measured with error, and the response is the percentage of crystallinity of a catalyst involved in the process. Such data have previously been modeled by nonlinear beta and simplex regression models. Here we propose a nonlinear beta model that allows the chemical reagent concentration to be measured with error. The model parameters are estimated by several methods, and we perform Monte Carlo simulations to evaluate the performance of point and interval estimators of the model parameters. Both the simulation results and the application favor estimation by maximum pseudo-likelihood approximation.
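The estimation method favored above admits a compact illustration. Below is a minimal sketch, ours rather than the authors' code, of a pseudo-likelihood for a beta regression with a single error-prone covariate: a working normal model for the latent true covariate is integrated out by Gauss-Hermite quadrature. The nonlinear predictor, all function names, and all numerical values are illustrative assumptions.

```python
# Minimal sketch: pseudo-likelihood for beta regression when the single
# covariate x is observed only through w = x + u, u ~ N(0, sigma_u^2).
# The predictor expit(b0 + b1*exp(-x)) is an illustrative choice, not the
# authors' specification.
import numpy as np
from scipy.special import betaln, expit
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def beta_logpdf(y, mu, phi):
    """Log density of Beta parametrized by mean mu and precision phi."""
    a, b = mu * phi, (1.0 - mu) * phi
    return (a - 1) * np.log(y) + (b - 1) * np.log1p(-y) - betaln(a, b)

def pseudo_loglik(theta, y, w, sigma_u, n_quad=30):
    """Integrate the latent covariate out of the beta density with
    Gauss-Hermite quadrature, using x | w ~ N(w, sigma_u^2) as a
    working (pseudo) model."""
    b0, b1, log_phi = theta
    phi = np.exp(log_phi)
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)
    ll = 0.0
    for yi, wi in zip(y, w):
        x = wi + sigma_u * nodes               # quadrature points for latent x
        mu = expit(b0 + b1 * np.exp(-x))       # illustrative nonlinear predictor
        dens = np.exp(beta_logpdf(yi, mu, phi))
        ll += np.log(dens @ weights / weights.sum())
    return ll

# Synthetic data: true covariate x, error-prone surrogate w, beta response y.
n, sigma_u = 200, 0.15
x = rng.normal(1.0, 0.5, n)
w = x + rng.normal(0.0, sigma_u, n)
mu_true = expit(-0.5 + 2.0 * np.exp(-x))
phi_true = 50.0
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)

res = minimize(lambda th: -pseudo_loglik(th, y, w, sigma_u),
               x0=np.array([0.0, 1.0, np.log(10.0)]), method="Nelder-Mead")
print("estimates (b0, b1, log phi):", res.x)
```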


eLife, 2021, Vol 10
Author(s): Alexander Fengler, Lakshmi N Govindarajan, Tony Chen, Michael J Frank

In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
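The amortized learned-likelihood idea can be sketched in a few dozen lines. Everything below, the toy simulator, the network size, and all names, is our own illustrative assumption; the authors' released code implements the approach properly for full neurocognitive process models. A small network is trained on simulated (parameter, datum) pairs against empirical log-densities, then reused as a cheap surrogate likelihood inside MCMC, so the simulation cost is paid once.

```python
# Toy sketch of an amortized learned likelihood: fit an MLP mapping
# (parameters, datum) -> approximate log-likelihood using KDE estimates
# from model simulations, then reuse it for cheap posterior sampling.
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
lo, hi = np.array([0.1, -1.0]), np.array([0.5, 0.5])   # uniform prior box

def simulate(theta, n):
    """Toy simulator: response times = shift + LogNormal(mu, 0.3)."""
    shift, mu = theta
    return shift + rng.lognormal(mu, 0.3, n)

# One-off simulation phase: empirical log p(x | theta) targets via KDE.
thetas, xs, targets = [], [], []
for _ in range(400):
    theta = rng.uniform(lo, hi)
    sims = simulate(theta, 2000)
    kde = gaussian_kde(sims)
    x = rng.choice(sims, 16)
    thetas.append(np.repeat(theta[None], 16, axis=0))
    xs.append(x)
    targets.append(np.log(kde(x) + 1e-12))
inp = torch.tensor(np.column_stack([np.concatenate(thetas),
                                    np.concatenate(xs)]), dtype=torch.float32)
tgt = torch.tensor(np.concatenate(targets), dtype=torch.float32)[:, None]

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                       # regression on log-density targets
    opt.zero_grad()
    loss = ((net(inp) - tgt) ** 2).mean()
    loss.backward()
    opt.step()

# Amortized inference: the net is now a surrogate log-likelihood inside
# a random-walk Metropolis sampler (flat prior on the box above).
data = torch.tensor(simulate(np.array([0.3, 0.0]), 200), dtype=torch.float32)

def surrogate_loglik(theta):
    t = torch.tensor(theta, dtype=torch.float32).repeat(len(data), 1)
    return net(torch.cat([t, data[:, None]], dim=1)).sum().item()

theta = np.array([0.25, -0.2])
ll = surrogate_loglik(theta)
samples = []
for _ in range(3000):
    prop = theta + rng.normal(0, 0.02, 2)
    if np.all(prop > lo) and np.all(prop < hi):
        ll_prop = surrogate_loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    samples.append(theta.copy())
print("posterior mean:", np.mean(samples, axis=0))
```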


2020
Author(s): Alexander Fengler, Lakshmi N. Govindarajan, Tony Chen, Michael J. Frank



Proceedings, 2019, Vol 33 (1), pp. 18
Author(s): Scott Cameron, Hans Eggers, Steve Kroon

Existing algorithms like nested sampling and annealed importance sampling are able to produce accurate estimates of the marginal likelihood of a model, but tend to scale poorly to large data sets. This is because these algorithms need to recalculate the log-likelihood for each iteration by summing over the whole data set. Efficient scaling to large data sets requires that algorithms only visit small subsets (mini-batches) of data on each iteration. To this end, we estimate the marginal likelihood via a sequential decomposition into a product of predictive distributions p(y_n | y_{<n}). Predictive distributions can be approximated efficiently through Bayesian updating using stochastic gradient Hamiltonian Monte Carlo, which approximates likelihood gradients using mini-batches. Since each data point typically contains little information compared to the whole data set, the convergence to each successive posterior only requires a short burn-in phase. This approach can be viewed as a special case of sequential Monte Carlo (SMC) with a single particle, but differs from typical SMC methods in that it uses stochastic gradients. We illustrate how this approach scales favourably to large data sets with some simple models.
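The decomposition log p(y) = Σ_n log p(y_n | y_{<n}) is easy to demonstrate on a model where the exact answer is available. The sketch below uses a conjugate Gaussian mean model and, for simplicity, replaces the paper's stochastic gradient HMC with a plain random-walk Metropolis sampler; it illustrates the decomposition itself, not the authors' implementation.

```python
# Sequential estimate of the log marginal likelihood:
#   log Z = sum_n log p(y_n | y_<n),
# with each predictive approximated by averaging the likelihood of y_n
# over posterior samples given y_<n. Conjugate Gaussian model, so the
# exact value is available for comparison.
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, 50)            # data: y_i ~ N(mu, 1)
prior_mu, prior_sd = 0.0, 3.0           # prior: mu ~ N(0, 3^2)

def posterior_samples(y_seen, n_samp=2000, step=0.5):
    """Random-walk Metropolis on p(mu | y_seen); a stand-in for the
    paper's stochastic gradient HMC updates."""
    def logp(mu):
        return (norm.logpdf(mu, prior_mu, prior_sd)
                + norm.logpdf(y_seen, mu, 1.0).sum())
    mu, out = 0.0, []
    lp = logp(mu)
    for _ in range(n_samp):
        prop = mu + step * rng.normal()
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        out.append(mu)
    return np.array(out[n_samp // 2:])   # drop burn-in

log_Z = 0.0
for n in range(len(y)):
    mus = posterior_samples(y[:n])                    # samples from p(mu | y_<n)
    log_Z += np.log(norm.pdf(y[n], mus, 1.0).mean())  # predictive of y_n

# Exact log marginal likelihood: y ~ N(0, I + prior_sd^2 * 11^T).
m = len(y)
cov = np.eye(m) + prior_sd**2 * np.ones((m, m))
exact = multivariate_normal(np.zeros(m), cov).logpdf(y)
print(f"sequential estimate: {log_Z:.2f}, exact: {exact:.2f}")
```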


Author(s): Stephan Schlupkothen, Gerd Ascheid

The localization of multiple wireless agents via, for example, distance and/or bearing measurements is challenging, particularly if relying on beacon-to-agent measurements alone is insufficient to guarantee accurate localization. In these cases, agent-to-agent measurements also need to be considered to improve the localization quality. In the context of particle filtering, the computational complexity of tracking many wireless agents is high when relying on conventional schemes. This is because in such schemes, all agents' states are estimated simultaneously using a single filter. To overcome this problem, the concept of multiple particle filtering (MPF), in which an individual filter is used for each agent, has been proposed in the literature. However, due to the necessity of considering agent-to-agent measurements, additional effort is required to derive information on each individual filter from the available likelihoods. This is necessary because the distance and bearing measurements naturally depend on the states of two agents, which, in MPF, are estimated by two separate filters. Because the required likelihood cannot be analytically derived in general, an approximation is needed. To this end, this work extends current state-of-the-art likelihood approximation techniques based on Gaussian approximation under the assumption that the number of agents to be tracked is fixed and known. Moreover, a novel likelihood approximation method is proposed that enables efficient and accurate tracking. The simulations show that the proposed method achieves up to 22% higher accuracy with the same computational complexity as that of existing methods. Thus, efficient and accurate tracking of wireless agents is achieved.
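To make the coupling concrete, here is a minimal MPF sketch with two agents in 2-D, one filter per agent. The agent-to-agent distance likelihood for one filter's particles is evaluated by averaging over a subsample of the other filter's particles, a simple Monte Carlo marginalization standing in for the Gaussian approximation techniques the paper extends; the dynamics, measurement model, and all constants are illustrative.

```python
# Multiple particle filtering sketch: two agents, random-walk motion,
# one beacon range measurement for agent 0 and one agent-to-agent range
# measurement. Each agent has its own particle filter; the inter-agent
# likelihood marginalizes the other filter's particles.
import numpy as np

rng = np.random.default_rng(3)
N, steps, q, r = 500, 40, 0.05, 0.1     # particles, time steps, noise levels

true = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
beacon = np.array([1.0, -1.0])
parts = [rng.normal(t, 0.5, (N, 2)) for t in true]   # one filter per agent

def resample(p, w):
    return p[rng.choice(N, N, p=w / w.sum())]

def pair_weights(own, other, d, n_sub=50):
    """Average the range likelihood over a subsample of the other
    filter's particles (Monte Carlo marginalization)."""
    sub = other[rng.choice(N, n_sub)]
    dists = np.linalg.norm(own[:, None, :] - sub[None, :, :], axis=2)
    return np.exp(-0.5 * ((d - dists) / r) ** 2).mean(axis=1)

for _ in range(steps):
    # simulate motion and the two range measurements
    true = [t + rng.normal(0, q, 2) for t in true]
    d_beacon = np.linalg.norm(true[0] - beacon) + rng.normal(0, r)
    d_pair = np.linalg.norm(true[0] - true[1]) + rng.normal(0, r)

    # propagate each filter independently
    parts = [p + rng.normal(0, q, (N, 2)) for p in parts]

    # agent 0: standard beacon likelihood (own state only) times pair term
    pred = np.linalg.norm(parts[0] - beacon, axis=1)
    w0 = np.exp(-0.5 * ((d_beacon - pred) / r) ** 2)
    w0 *= pair_weights(parts[0], parts[1], d_pair)
    parts[0] = resample(parts[0], w0)

    # agent 1 has no beacon coverage: only the pair term localizes it,
    # using agent 0's freshly resampled particles
    parts[1] = resample(parts[1], pair_weights(parts[1], parts[0], d_pair))

print("agent 0 estimate:", parts[0].mean(axis=0), "true:", true[0])
print("agent 1 estimate:", parts[1].mean(axis=0), "true:", true[1])
```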


2019, Vol 137, pp. 115-132
Author(s): Alexander Litvinenko, Ying Sun, Marc G. Genton, David E. Keyes

2019, Vol 16 (153), pp. 20180967
Author(s): Zhixing Cao, Ramon Grima

Bayesian and non-Bayesian moment-based inference methods are commonly used to estimate the parameters defining stochastic models of gene regulatory networks from noisy single-cell or population snapshot data. However, a systematic investigation of the accuracy of the predictions of these methods remains missing. Here, we present the results of such a study using synthetic noisy data of a negative auto-regulatory transcriptional feedback loop, one of the most common building blocks of complex gene regulatory networks. We study the error in parameter estimation as a function of (i) the number of cells in each sample; (ii) the number of time points; (iii) the highest-order moment of protein fluctuations used for inference; and (iv) the moment-closure method used for likelihood approximation. We find that for sample sizes typical of flow cytometry experiments, parameter estimation by maximizing the likelihood is as accurate as using Bayesian methods but with a much reduced computational time. We also show that the choice of moment-closure method is the crucial factor determining the maximum achievable accuracy of moment-based inference methods. Common likelihood approximation methods based on the linear noise approximation or the zero cumulants closure perform poorly for feedback loops with large protein–DNA binding rates or large protein bursts; this is exacerbated for highly heterogeneous cell populations. By contrast, approximating the likelihood using the linear-mapping approximation or conditional derivative matching leads to highly accurate parameter estimates for a wide range of conditions.
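As a small, hedged illustration of moment-based inference with an approximated likelihood (not the paper's setup): for a linear birth-death gene expression model the stationary protein copy number is Poisson, so the moments are exact and no closure step is needed; for the negative feedback loop studied above, a moment-closure method would supply the model moments instead. The sketch approximates the likelihood of the observed sample moments as Gaussian, treating the two moment estimators as independent, a crude but common simplification.

```python
# Moment-based inference sketch: Gaussian approximation to the likelihood
# of sample moments from a population snapshot. Birth-death model with
# production rate k and degradation rate g: stationary copy number is
# Poisson(k/g), so mean = variance = k/g exactly (no closure needed here).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

n_cells, g = 1000, 1.0                     # snapshot size; degradation fixed
k_true = 20.0
data = rng.poisson(k_true / g, n_cells)    # synthetic single-cell snapshot

m1_hat = data.mean()                       # observed mean
m2_hat = data.var(ddof=1)                  # observed variance

def neg_loglik(k):
    """Gaussian likelihood approximation for the two sample moments,
    treated as independent. For Poisson(lam): Var(sample mean) = lam/n;
    Var(sample variance) ~ (mu4 - mu2^2)/n = (2*lam^2 + lam)/n."""
    lam = k / g
    v_mean = lam / n_cells
    v_var = (2 * lam**2 + lam) / n_cells
    return (0.5 * (m1_hat - lam) ** 2 / v_mean + 0.5 * np.log(v_mean)
            + 0.5 * (m2_hat - lam) ** 2 / v_var + 0.5 * np.log(v_var))

res = minimize_scalar(neg_loglik, bounds=(1.0, 100.0), method="bounded")
print(f"true k = {k_true}, moment-based estimate = {res.x:.2f}")
```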

