Quantifying uncertainty in parameter estimates for stochastic models of collective cell spreading using approximate Bayesian computation

2015 ◽  
Vol 263 ◽  
pp. 133-142 ◽  
Author(s):  
Brenda N. Vo ◽  
Christopher C. Drovandi ◽  
Anthony N. Pettitt ◽  
Matthew J. Simpson
2019 ◽  
Author(s):  
Evgeny Tankhilevich ◽  
Jonathan Ish-Horowicz ◽  
Tara Hameed ◽  
Elisabeth Roesch ◽  
Istvan Kleijn ◽  
...  

Abstract: Approximate Bayesian computation (ABC) is an important framework within which to infer the structure and parameters of a systems biology model. It is especially suitable for biological systems with stochastic and nonlinear dynamics, for which the likelihood functions are intractable. However, the associated computational cost often limits ABC to models that are relatively quick to simulate in practice. We here present a Julia package, GpABC, that implements parameter inference and model selection for deterministic or stochastic models using (i) standard rejection ABC or ABC-SMC, or (ii) ABC with Gaussian process emulation. The latter significantly reduces the computational cost. URL: https://github.com/tanhevg/GpABC.jl
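The standard rejection-ABC scheme that packages like GpABC build on can be sketched in a few lines. The following is a minimal, generic illustration in Python (not GpABC's Julia API); the toy model, prior, summary statistic, and tolerance are invented for the example:

```python
import numpy as np

def rejection_abc(observed, simulate, prior_sample, distance, eps,
                  n_draws=10_000, rng=None):
    """Standard rejection ABC: keep parameter draws whose simulated
    data fall within distance eps of the observed data."""
    rng = np.random.default_rng(rng)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate(theta, rng)
        if distance(sim, observed) <= eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal distribution with known sd = 1,
# using the sample mean as the summary statistic.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)

posterior = rejection_abc(
    observed=data.mean(),
    simulate=lambda th, r: r.normal(th, 1.0, 50).mean(),
    prior_sample=lambda r: r.uniform(-5, 5),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
print(len(posterior), posterior.mean())  # accepted draws cluster near 2.0
```

ABC-SMC and GP emulation refine this basic loop (shrinking tolerances, emulated simulators) but the accept/reject core is the same.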


2020 ◽  
Vol 5 ◽  
Author(s):  
Nikolai Bode

Simulation models for pedestrian crowds are a ubiquitous tool in research and industry. It is crucial that the parameters of these models are calibrated carefully, and ultimately it will be of interest to compare competing models to decide which model is best suited for a particular purpose. In this contribution, I demonstrate how Approximate Bayesian Computation (ABC), which is already a popular tool in other areas of science, can be used for model fitting and model selection in a pedestrian dynamics context. I fit two different models for pedestrian dynamics to data on a crowd passing in one direction through a bottleneck. One model describes movement in continuous space; the other model is a cellular automaton and thus describes movement in discrete space. In addition, I compare models to data using two metrics. The first is based on egress times and the second on the velocity of pedestrians in front of the bottleneck. My results show that while model fitting is successful, a substantial degree of uncertainty about the values of some model parameters remains after model fitting. Importantly, the choice of metric in model fitting can influence parameter estimates. Model selection is inconclusive for the egress time metric but supports the continuous-space model for the velocity-based metric. These findings show that ABC is a flexible approach and highlight the difficulties associated with model fitting and model selection for pedestrian dynamics. ABC requires many simulation runs, and choosing appropriate metrics for comparing data to simulations requires careful attention. Despite this, I suggest ABC is a promising tool, because it is versatile and easily implemented for the growing number of openly available crowd simulators and data sets.
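ABC model selection as described here can be sketched as joint rejection ABC over a model index and its parameters: draw a model uniformly, then a parameter from its prior, and the accepted share of each index approximates its posterior model probability. A minimal Python sketch with invented stand-in "models" (Gaussian simulators) and summary statistics, not the pedestrian simulators or egress-time/velocity metrics of the study:

```python
import numpy as np

def abc_model_selection(observed, models, distance, eps, n_draws=20_000, rng=None):
    """Joint rejection ABC over (model index, parameters): the accepted
    proportion of each model index approximates its posterior probability."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(len(models))
    for _ in range(n_draws):
        m = rng.integers(len(models))
        prior_sample, simulate = models[m]
        if distance(simulate(prior_sample(rng), rng), observed) <= eps:
            counts[m] += 1
    return counts / counts.sum()

def summary(x):
    """Summary statistics standing in for egress-time / velocity metrics."""
    return np.array([x.mean(), x.std()])

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 200)   # "observed" data, generated by model 0
observed = summary(data)

models = [
    # model 0: unknown mean, sd 1 (matches the data-generating process)
    (lambda r: r.uniform(-2, 2), lambda th, r: summary(r.normal(th, 1.0, 200))),
    # model 1: unknown mean, sd 3 (mismatched spread)
    (lambda r: r.uniform(-2, 2), lambda th, r: summary(r.normal(th, 3.0, 200))),
]
probs = abc_model_selection(observed, models,
                            lambda a, b: np.abs(a - b).sum(), eps=0.2)
print(probs)  # posterior model probabilities; mass concentrates on model 0
```

As the abstract notes, the outcome depends on the chosen metric: a summary that ignored the spread (e.g. the mean alone) would leave these two models indistinguishable.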


2020 ◽  
Author(s):  
Yannik Schälte ◽  
Jan Hasenauer

Abstract. Motivation: Approximate Bayesian Computation (ABC) is an increasingly popular method for likelihood-free parameter inference in systems biology and other fields of research, since it allows analysing complex stochastic models. However, the introduced approximation error is often not clear. It has been shown that ABC actually gives exact inference under the implicit assumption of a measurement noise model. Noise being common in biological systems, it is intriguing to exploit this insight. But this is difficult in practice, since ABC is in general highly computationally demanding. Thus, the question we want to answer here is how to efficiently account for measurement noise in ABC. Results: We illustrate exemplarily how ABC yields erroneous parameter estimates when neglecting measurement noise. Then, we discuss practical ways of correctly including the measurement noise in the analysis. We present an efficient adaptive sequential importance sampling based algorithm applicable to various model types and noise models. We test and compare it on several models, including ordinary and stochastic differential equations, Markov jump processes, and stochastically interacting agents, and noise models including normal, Laplace, and Poisson noise. We conclude that the proposed algorithm could improve the accuracy of parameter estimates for a broad spectrum of applications. Availability: The developed algorithms are made publicly available as part of the open-source python toolbox pyABC (https://github.com/icb-dcm/pyabc). Contact: [email protected] Supplementary information: Supplementary information is available at bioRxiv online. Supplementary code and data are available online at http://doi.org/10.5281/zenodo.3631120.


Open Biology ◽  
2014 ◽  
Vol 4 (9) ◽  
pp. 140097 ◽  
Author(s):  
Stuart T. Johnston ◽  
Matthew J. Simpson ◽  
D. L. Sean McElwain ◽  
Benjamin J. Binder ◽  
Joshua V. Ross

Quantifying the impact of biochemical compounds on collective cell spreading is an essential element of drug design, with various applications including developing treatments for chronic wounds and cancer. Scratch assays are a technically simple and inexpensive method used to study collective cell spreading; however, most previous interpretations of scratch assays are qualitative and do not provide estimates of the cell diffusivity, D , or the cell proliferation rate, λ . Estimating D and λ is important for investigating the efficacy of a potential treatment and provides insight into the mechanism through which the potential treatment acts. While a few methods for estimating D and λ have been proposed, these previous methods lead to point estimates of D and λ , and provide no insight into the uncertainty in these estimates. Here, we compare various types of information that can be extracted from images of a scratch assay, and quantify D and λ using discrete computational simulations and approximate Bayesian computation. We show that it is possible to robustly recover estimates of D and λ from synthetic data, as well as a new set of experimental data. For the first time, our approach also provides a method to estimate the uncertainty in our estimates of D and λ . We anticipate that our approach can be generalized to deal with more realistic experimental scenarios in which we are interested in estimating D and λ , as well as additional relevant parameters such as the strength of cell-to-cell adhesion or the strength of cell-to-substrate adhesion.
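The central idea of recovering a parameter distribution, not just a point estimate, can be sketched with rejection ABC on a toy diffusion model. The simulator, summary statistic (mean squared displacement), prior, and tolerance below are invented stand-ins, far simpler than the paper's discrete scratch-assay simulations:

```python
import numpy as np

def simulate_spread(D, n_cells, t, rng):
    """Toy stand-in for a cell-spreading simulation: unbiased random
    walkers with diffusivity D; the summary is the mean squared
    displacement, whose expectation is 2 * D * t in one dimension."""
    x = rng.normal(0.0, np.sqrt(2.0 * D * t), size=n_cells)
    return np.mean(x ** 2)

rng = np.random.default_rng(2)
true_D = 0.5
observed_msd = simulate_spread(true_D, 500, t=10.0, rng=rng)  # "experimental" summary

# Rejection ABC over a uniform prior on D.
accepted = []
for _ in range(20_000):
    D = rng.uniform(0.0, 2.0)
    if abs(simulate_spread(D, 500, 10.0, rng) - observed_msd) <= 0.5:
        accepted.append(D)
accepted = np.array(accepted)

# The accepted sample gives both a point estimate and its uncertainty.
print(accepted.mean(), np.percentile(accepted, [2.5, 97.5]))
```

The percentile interval on the accepted sample is the kind of uncertainty quantification that point-estimation methods for D and λ do not provide.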


2020 ◽  
Vol 36 (Supplement_1) ◽  
pp. i551-i559 ◽  
Author(s):  
Yannik Schälte ◽  
Jan Hasenauer

Abstract. Motivation: Approximate Bayesian computation (ABC) is an increasingly popular method for likelihood-free parameter inference in systems biology and other fields of research, as it allows analyzing complex stochastic models. However, the introduced approximation error is often not clear. It has been shown that ABC actually gives exact inference under the implicit assumption of a measurement noise model. Noise being common in biological systems, it is intriguing to exploit this insight. But this is difficult in practice, as ABC is in general highly computationally demanding. Thus, the question we want to answer here is how to efficiently account for measurement noise in ABC. Results: We illustrate exemplarily how ABC yields erroneous parameter estimates when neglecting measurement noise. Then, we discuss practical ways of correctly including the measurement noise in the analysis. We present an efficient adaptive sequential importance sampling-based algorithm applicable to various model types and noise models. We test and compare it on several models, including ordinary and stochastic differential equations, Markov jump processes and stochastically interacting agents, and noise models including normal, Laplace and Poisson noise. We conclude that the proposed algorithm could improve the accuracy of parameter estimates for a broad spectrum of applications. Availability and implementation: The developed algorithms are made publicly available as part of the open-source python toolbox pyABC (https://github.com/icb-dcm/pyabc). Supplementary information: Supplementary data are available at Bioinformatics online.
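The key insight, that ABC is exact when the acceptance kernel equals the measurement-noise density, can be illustrated with a static importance sampler that weights every proposal by the Gaussian noise likelihood instead of hard-rejecting. This is only a sketch of the idea, not the adaptive sequential algorithm implemented in pyABC; the toy model and settings are invented:

```python
import numpy as np

def noise_weighted_abc(y_obs, simulate, prior_sample, noise_sd,
                       n_draws=5000, rng=None):
    """Static importance sampler where the usual hard ABC acceptance kernel
    is replaced by the Gaussian measurement-noise density: each proposal
    gets weight prod_i N(y_obs_i | y_sim_i, noise_sd), so the weighted
    sample targets the exact posterior under that noise model."""
    rng = np.random.default_rng(rng)
    thetas, log_w = [], []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        resid = (y_obs - simulate(theta, rng)) / noise_sd
        thetas.append(theta)
        log_w.append(-0.5 * np.sum(resid ** 2))  # Gaussian log-density up to a constant
    thetas, log_w = np.array(thetas), np.array(log_w)
    w = np.exp(log_w - log_w.max())              # stabilise before normalising
    return thetas, w / w.sum()

# Toy deterministic model: the model output is the constant theta,
# observed under additive Gaussian noise.
rng = np.random.default_rng(3)
true_theta, noise_sd = 1.5, 0.3
y_obs = true_theta + rng.normal(0.0, noise_sd, size=40)

thetas, w = noise_weighted_abc(
    y_obs,
    simulate=lambda th, r: np.full(40, th),
    prior_sample=lambda r: r.uniform(-3, 3),
    noise_sd=noise_sd,
)
print(np.sum(w * thetas))  # weighted posterior-mean estimate, near true_theta
```

Replacing this one-shot sampler with an adaptive sequential scheme, as the paper does, addresses the low effective sample size that a static version suffers from in higher dimensions.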


Author(s):  
Anubhav Gupta ◽  
Owen Petchey

1) Food web models explain and predict the trophic interactions in a food web, and they can infer missing interactions among the organisms. The allometric diet breadth model (ADBM) is a food web model based on foraging theory. In the ADBM, the foraging parameters are allometrically scaled to the body sizes of predators and prey. In Petchey et al. (2008), the parameterisation of the ADBM had two limitations: (a) the model parameters were point estimates, and (b) food web connectance was not estimated.
2) The novelty of our current approach is: (a) we consider multiple predictions from the ADBM by parameterising it with approximate Bayesian computation, to estimate parameter distributions and not point estimates; (b) connectance emerges from the parameterisation, by measuring model fit using the true skill statistic, which takes into account prediction of both the presences and absences of links.
3) We fit the ADBM using approximate Bayesian computation to 16 observed food webs from a wide variety of ecosystems. Connectance was consistently overestimated in the new parameterisation method. In some of the food webs, considerable variation in estimated parameter distributions occurred, and resulted in considerable variation (i.e. uncertainty) in predicted food web structure.
4) We conclude that the observed food web data are likely missing some trophic links that do actually occur, and that the ADBM likely predicts some links that do not exist. The latter could be addressed by accounting in the ADBM for additional traits other than body size. Further work could also address the significance of uncertainty in parameter estimates for predicted food web responses to environmental change.
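The true skill statistic used here to score predicted food webs is simple to compute from the binary link matrices. A minimal sketch (the 3-species example is invented):

```python
import numpy as np

def true_skill_statistic(observed, predicted):
    """TSS = sensitivity + specificity - 1, computed on binary link
    matrices. Rewarding correct prediction of both presences and absences
    means a model cannot score well by predicting a fully connected (or
    empty) web. Assumes at least one present and one absent link."""
    observed = np.asarray(observed, bool)
    predicted = np.asarray(predicted, bool)
    tp = np.sum(observed & predicted)     # links correctly predicted present
    tn = np.sum(~observed & ~predicted)   # links correctly predicted absent
    fp = np.sum(~observed & predicted)
    fn = np.sum(observed & ~predicted)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

# Tiny 3-species example: entry [i][j] = 1 means species i eats species j.
obs = [[0, 1, 1],
       [0, 0, 1],
       [0, 0, 0]]
pred = [[0, 1, 0],
        [0, 0, 1],
        [0, 0, 0]]
print(true_skill_statistic(obs, pred))  # 2/3: one true link missed, no false links
```

TSS ranges from -1 to 1, with 1 for a perfect prediction and 0 for a prediction no better than random.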


Author(s):  
Cecilia Viscardi ◽  
Michele Boreale ◽  
Fabio Corradi

Abstract. We consider the problem of sample degeneracy in Approximate Bayesian Computation. It arises when proposed values of the parameters, once given as input to the generative model, rarely lead to simulations resembling the observed data and are hence discarded. Such “poor” parameter proposals do not contribute at all to the representation of the parameter’s posterior distribution. This leads to a very large number of required simulations and/or a waste of computational resources, as well as to distortions in the computed posterior distribution. To mitigate this problem, we propose an algorithm, referred to as the Large Deviations Weighted Approximate Bayesian Computation algorithm, where, via Sanov’s Theorem, strictly positive weights are computed for all proposed parameters, thus avoiding the rejection step altogether. In order to derive a computable asymptotic approximation from Sanov’s result, we adopt the information theoretic “method of types” formulation of the method of Large Deviations, thus restricting our attention to models for i.i.d. discrete random variables. Finally, we experimentally evaluate our method through a proof-of-concept implementation.
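The weighting step suggested by Sanov's theorem can be sketched for i.i.d. discrete data: each proposed parameter receives weight exp(-n * KL(empirical type || p_theta)), which is strictly positive, so no proposal is discarded. A minimal Python sketch with an invented Bernoulli toy model (this is an illustration of the principle, not the paper's implementation):

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, with 0 * log 0 = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def large_deviations_weights(observed, model_dist, prior_draws):
    """Weight each proposed parameter by exp(-n * KL(empirical type || p_theta)),
    the leading-order probability (Sanov's theorem / the method of types) that
    n i.i.d. draws from p_theta reproduce the observed empirical distribution.
    Every proposal receives a strictly positive weight; nothing is rejected."""
    observed = np.asarray(observed)
    n = len(observed)
    type_obs = np.bincount(observed) / n     # empirical type of the data
    log_w = np.array([-n * kl_divergence(type_obs, model_dist(th))
                      for th in prior_draws])
    w = np.exp(log_w - log_w.max())          # stabilise before normalising
    return w / w.sum()

# Toy i.i.d. Bernoulli model: p_theta = (1 - theta, theta).
observed = np.array([1] * 18 + [0] * 12)     # 18 successes in n = 30 trials
rng = np.random.default_rng(4)
prior_draws = rng.uniform(0.05, 0.95, 4000)  # draws from a uniform prior
w = large_deviations_weights(observed,
                             lambda th: np.array([1 - th, th]),
                             prior_draws)
print(np.sum(w * prior_draws))               # weighted posterior-mean estimate
```

For i.i.d. discrete data this weight is proportional in theta to the exact likelihood (the entropy term of the type is constant across parameters), which is why the weighted average tracks the posterior mean here.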

