Systematically Interrogating Agent-Based Models

Author(s):  
Michael Laver ◽  
Ernest Sergenti

This chapter develops methods for designing, executing, and analyzing large suites of computer simulations that generate stable and replicable results. It starts with a discussion of different methods of experimental design, such as grid sweeping and Monte Carlo parameterization. Next, it demonstrates how to calculate mean estimates of the output variables of interest, first discussing stochastic processes, Markov chain representations, and model burn-in. It focuses on three stochastic process representations: nonergodic deterministic processes that converge on a single state; nondeterministic stochastic processes for which a time average provides a representative estimate of the output variables; and nondeterministic stochastic processes for which a time average does not provide a representative estimate. The estimation strategy employed depends on which of these processes the simulation follows. Finally, the chapter presents a set of diagnostic checks for establishing a sample size appropriate for estimating the means.
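As a concrete illustration of the two experimental designs named above, the following Python sketch contrasts a grid sweep with Monte Carlo parameterization and computes a burn-in-discarded time average of one output variable. The function run_model, its parameters alpha and beta, and all numeric settings are hypothetical stand-ins, not the chapter's model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

def run_model(alpha, beta, n_steps=2000, burn_in=500):
    """Toy stand-in for one simulation run: returns the time average of a
    single output variable after discarding the burn-in period."""
    x, trace = 0.0, []
    for _ in range(n_steps):
        x = alpha * x + rng.normal(scale=beta)    # placeholder for the model's update rule
        trace.append(x)
    return float(np.mean(trace[burn_in:]))

# Grid sweep: run every combination of pre-specified parameter values,
# with repetitions to average over the model's stochasticity.
alphas, betas = np.linspace(0.1, 0.9, 5), np.linspace(0.5, 2.0, 4)
grid_results = {
    (a, b): np.mean([run_model(a, b) for _ in range(10)])
    for a, b in itertools.product(alphas, betas)
}

# Monte Carlo parameterization: draw parameter vectors at random from
# their plausible ranges instead of sweeping a fixed grid.
mc_results = [
    ((a, b), run_model(a, b))
    for a, b in (
        (rng.uniform(0.1, 0.9), rng.uniform(0.5, 2.0)) for _ in range(50)
    )
]
```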

1987 ◽  
Vol 24 (02) ◽  
pp. 347-354 ◽  
Author(s):  
Guy Fayolle ◽  
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a discrete-time stochastic process (Yn), n ≥ 0, when either the upward or the downward jumps are majorized by i.i.d. random variables. This situation arises in many practical settings in which the (Yn) are functionals of some Markov chain with a countable state space. An application to the exponential back-off protocol is described.
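For orientation only, here is a classical drift-style criterion in the same spirit (a converse of Foster's theorem), stated as an illustration rather than as the paper's own criteria, which weaken the uniform jump condition to majorization by i.i.d. random variables:

```latex
% Illustration only: a classical non-ergodicity (converse Foster) criterion.
Let $(Y_n)_{n\ge 0}$ be an irreducible Markov chain on an unbounded countable
subset of $[0,\infty)$. Suppose there exist constants $C$ and $B$ such that
\[
  \mathbb{E}\bigl[\,Y_{n+1}-Y_n \mid Y_n=y\,\bigr] \;\ge\; 0
  \quad\text{for all } y > C,
  \qquad
  \mathbb{E}\bigl[\,\lvert Y_{n+1}-Y_n\rvert \mid Y_n=y\,\bigr] \;\le\; B
  \quad\text{for all } y.
\]
Then $(Y_n)$ is not positive recurrent, hence not ergodic.
```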


2012 ◽  
Vol 12 (3) ◽  
pp. 691-702 ◽  
Author(s):  
Alexandros Sopasakis

We introduce a lattice-free hard-sphere exclusion stochastic process. The resulting stochastic rates are distance based instead of cell based. The corresponding Markov chain built for this many-particle system is updated using an adaptation of the kinetic Monte Carlo method. It quickly becomes apparent that, due to the lattice-free environment alone, the dynamics behave differently from those in a lattice-based environment. This difference grows with particle density/temperature. The well-known packing problem and its solution (the Palásti conjecture) seem to validate the resulting lattice-free dynamics.
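A minimal Python sketch of the kind of update described above: distance-based (rather than cell-based) rates drive a kinetic Monte Carlo event, and trial moves that would violate hard-sphere exclusion are rejected. The rate function, the 1D periodic box, and all numbers here are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

L, radius, n = 20.0, 0.5, 15                    # periodic box length, sphere radius, particle count
pos = np.linspace(0.0, L, n, endpoint=False)    # non-overlapping initial configuration

def min_image(d):
    """Minimum-image distance on a periodic interval of length L."""
    return np.abs((d + L / 2) % L - L / 2)

def hop_rate(i):
    """Distance-based rate: a crowded particle hops more slowly.
    This functional form is an arbitrary stand-in for the paper's rates."""
    gap = min_image(np.delete(pos, i) - pos[i]).min() - 2 * radius
    return np.exp(-1.0 / max(gap, 1e-6))

def kmc_step(max_jump=0.5):
    """One kinetic Monte Carlo (Gillespie-type) event with hard-sphere exclusion."""
    rates = np.array([hop_rate(i) for i in range(n)])
    total = rates.sum()
    dt = rng.exponential(1.0 / total)            # waiting time to the next event
    i = rng.choice(n, p=rates / total)           # which particle attempts a move
    trial = (pos[i] + rng.uniform(-max_jump, max_jump)) % L
    if min_image(np.delete(pos, i) - trial).min() >= 2 * radius:
        pos[i] = trial                           # accept only non-overlapping moves
    return dt

t = sum(kmc_step() for _ in range(1000))         # simulated time after 1000 events
```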


Author(s):  
Thomas Lux

Over the last decade, agent-based models in economics have reached a state of maturity that has brought the tasks of statistical inference and goodness-of-fit of such models onto the agenda of the research community. While most available papers have pursued a frequentist approach adopting either likelihood-based algorithms or simulated moment estimators, here we explore Bayesian estimation using a Markov chain Monte Carlo (MCMC) approach. One major problem in the design of MCMC estimators is finding a parametrization that leads to a reasonable acceptance probability for new draws from the proposal density. With agent-based models, the appropriate choice of the proposal density and its parameters becomes even more complex, since such models often require a numerical approximation of the likelihood. This brings in additional factors affecting the acceptance rate, as it will also depend on the approximation error of the likelihood. In this paper, we take advantage of a number of recent innovations in MCMC: we combine Particle Filter Markov Chain Monte Carlo, as proposed by Andrieu et al. (J R Stat Soc B 72(Part 3):269–342, 2010), with adaptive choice of the proposal distribution and delayed rejection in order to identify an appropriate design of the MCMC estimator. We illustrate the methodology using two well-known behavioral asset pricing models.
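A compact sketch of the particle-filter-inside-MCMC idea (particle marginal Metropolis–Hastings), using a toy linear state-space model as a stand-in for an agent-based asset-pricing model. The adaptive proposal and delayed-rejection refinements discussed in the paper are omitted for brevity, and all names and settings here (pf_loglik, the parameter phi, particle counts) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear Gaussian state-space model standing in for an agent-based model
# whose likelihood must be approximated numerically:
#   x_t = phi * x_{t-1} + v_t,   y_t = x_t + w_t,   v_t, w_t ~ N(0, 1).
true_phi, T = 0.7, 100
x = np.zeros(T)
for t in range(1, T):
    x[t] = true_phi * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

def pf_loglik(phi, y, n_particles=200):
    """Bootstrap particle filter estimate of the log-likelihood of phi."""
    particles = rng.normal(size=n_particles)
    loglik = 0.0
    for obs in y:
        particles = phi * particles + rng.normal(size=n_particles)      # propagate
        logw = -0.5 * (obs - particles) ** 2 - 0.5 * np.log(2 * np.pi)  # N(obs; particle, 1)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())                                   # marginal likelihood term
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = particles[idx]                                       # resample
    return loglik

# Particle marginal Metropolis-Hastings with a fixed random-walk proposal
# (the paper's adaptive proposal and delayed rejection are not shown).
phi, ll, draws = 0.0, pf_loglik(0.0, y), []
for _ in range(1500):
    prop = phi + rng.normal(scale=0.1)
    if abs(prop) < 1.0:                          # flat prior on (-1, 1)
        ll_prop = pf_loglik(prop, y)
        if np.log(rng.uniform()) < ll_prop - ll:
            phi, ll = prop, ll_prop
    draws.append(phi)

print("posterior mean of phi:", np.mean(draws[300:]))
```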


2019 ◽  
Vol 62 (3) ◽  
pp. 577-586 ◽  
Author(s):  
Garnett P. McMillan ◽  
John B. Cannon

Purpose: This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides.
Method: First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we show how one would calculate the posterior distribution from the prior distribution and the likelihood of the parameter. Next, we move to an example from auditory research by considering the effect of sound therapy on the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions that allow for easy calculation no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions offered by a straightforward Bayesian approach.
Conclusion: Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to include existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly.
Supplemental Material: https://doi.org/10.23641/asha.7822592
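A small Python sketch of the conjugate case described in the Method section: a normal prior on an unknown mean with known observation variance, updated iteratively so that each posterior becomes the next prior. The treatment-effect numbers and batch sizes are made up for illustration; real analyses, as the article notes, typically require MCMC instead.

```python
import numpy as np

def update_normal(prior_mean, prior_var, data, data_var):
    """Conjugate normal-normal update with known observation variance:
    returns the posterior mean and variance of the unknown mean."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / data_var)
    return post_mean, post_var

# Iterative updating: each posterior becomes the prior for the next batch.
rng = np.random.default_rng(3)
mean, var = 0.0, 10.0                                  # vague prior on the treatment effect
for batch in range(3):
    data = rng.normal(loc=-1.5, scale=2.0, size=20)    # simulated loudness-change scores
    mean, var = update_normal(mean, var, data, data_var=4.0)
    print(f"after batch {batch + 1}: posterior mean = {mean:.2f}, sd = {var ** 0.5:.2f}")
```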

