Emulating Target Trials to Improve Causal Inference from Agent-Based Models

Author(s): Eleanor J. Murray, Brandon D. L. Marshall, Ashley L. Buchanan

Abstract: Agent-based models are a key tool for investigating the emergent properties of population health settings, such as infectious disease transmission, where the exposure often violates the 'no interference' assumption of traditional causal inference under the potential outcomes framework. Agent-based models and other simulation-based modeling approaches have generally been viewed as a separate knowledge-generating paradigm from the potential outcomes framework, but this can lead to confusion about how to interpret the results of these models in real-world settings. By explicitly incorporating the target trial framework into the development of an agent-based or other simulation model, we can clarify the causal parameters of interest, as well as make explicit the assumptions required for valid causal effect estimation within or between populations. In this paper, we describe the use of the target trial framework for designing agent-based models when the goal is estimation of causal effects in the presence of interference, or spillover.
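For reference, the 'no interference' assumption mentioned above can be written compactly in generic potential-outcomes notation (a schematic rendering, not necessarily the authors' notation):

```latex
% Generic notation: Y_i(\mathbf{a}) is unit i's potential outcome under the
% full treatment vector \mathbf{a} = (a_1, \dots, a_n) for all n units.
% No interference asserts that only unit i's own treatment matters:
Y_i(\mathbf{a}) = Y_i(a_i) \quad \text{for every treatment vector } \mathbf{a}.
% Interference (spillover) is the failure of this equality, e.g. in infectious
% disease transmission, where Y_i can depend on a_j for j \neq i.
```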

2013, Vol. 1(1), pp. 1-20
Author(s): Tyler J. VanderWeele, Miguel A. Hernán

Abstract: In this article, we discuss causal inference when there are multiple versions of treatment. The potential outcomes framework, as articulated by Rubin, makes an assumption of no multiple versions of treatment, and here we discuss an extension of this potential outcomes framework to accommodate causal inference under violations of this assumption. A variety of examples are discussed in which the assumption may be violated. Identification results are provided for the overall treatment effect and the effect of treatment on the treated when multiple versions of treatment are present, and also for the causal effect comparing a version of one treatment to some other version of the same or a different treatment. Further identification and interpretative results are given for cases in which the version precedes the treatment, as when an underlying treatment variable is coarsened or dichotomized to create a new treatment variable for which there are effectively “multiple versions”. Results are also given for effects defined by setting the version of treatment to a prespecified distribution. Some of the identification results bear resemblance to identification results in the literature on direct and indirect effects. We describe some settings in which ignoring multiple versions of treatment, even when present, will not lead to incorrect inferences.
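A schematic of the setup, in generic notation that is only assumed to approximate the authors' own:

```latex
% Potential outcomes indexed by both treatment a and version k: Y(a, k).
% The usual "no multiple versions of treatment" assumption states
Y(a, k) = Y(a, k') \quad \text{for all versions } k, k' \text{ of treatment } a.
% When this fails, one can still define the effect of assigning treatment a
% with versions drawn from a prespecified distribution \pi over versions:
E\big[Y(a, \pi)\big] = \sum_{k} E\big[Y(a, k)\big]\,\pi(k).
```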


Author(s): Sebastiaan Tieleman

Abstract: Agent-based models provide a promising new tool in macroeconomic research. Questions have been raised, however, regarding the validity of such models. A methodology of macroeconomic agent-based model (MABM) validation that provides a deeper understanding of validation practices is required. This paper takes steps towards such a methodology by connecting three elements. The first is a foundation of model validation in general. The second is a classification of models according to how the model is validated; an important distinction in this classification is the difference between mechanism and target validation. The third is a framework built around the relationship between the structure of models of complex systems with emergent properties and validation in practice. Important in this framework is treating MABMs as modelling multiple non-trivial levels. Connecting these three elements provides a methodology for the validation of MABMs and supports the following conclusions. First, in MABMs, mechanisms at a lower level are distinct from, but provide input to, mechanisms at higher levels. Since mechanisms at different levels are validated in different ways, we can arrive at a specific characterization of MABMs within the model classification framework. Second, because the mechanisms of MABMs are validated in a direct way at the level of the agent, MABMs can be seen as a move towards a more realist approach to modelling compared with DSGE models.


2017, Vol. 186(2), pp. 131-142
Author(s): Eleanor J. Murray, James M. Robins, George R. Seage, Kenneth A. Freedberg, Miguel A. Hernán

2020
Author(s): Sean L. Wu, Andrew J. Dolgert, Joseph A. Lewnard, John M. Marshall, David L. Smith

Abstract: After more than a century of sustained work by mathematicians, biologists, epidemiologists, probabilists, and other experts, dynamic models have become a vital tool for understanding and describing epidemics and disease transmission systems. Such models fulfill a variety of crucial roles, including data integration, estimation of disease burden, trend forecasting, counterfactual evaluation, and parameter estimation. These models often incorporate myriad details, from age and social structure that inform population mixing patterns, to commuting and migration, to immunological dynamics, among others. This complexity can be daunting, so many researchers have turned to stochastic simulation using agent-based models. Developing agent-based models, however, can present formidable technical challenges. In particular, depending on how the model updates state, unwanted or even unnoticed approximations can be introduced into a simulation model. In this article, we present computational methods for approximating continuous-time discrete-event stochastic processes with a discrete time step, speeding up complicated simulations while converging to the true process as the time step goes to zero. Our stochastic model is constructed via hazard functions; only those hazards that depend on the state of other agents (such as infection) are approximated, whereas hazards governing dynamics internal to an agent (such as immune response) are simulated exactly. By partitioning hazards as either dependent or internal, we present a generic algorithm applicable to many models of contagion processes, with natural areas of extension and optimization.

Author summary: Stochastic simulation of epidemics is crucial to a variety of tasks in public health, encompassing intervention evaluation, trend forecasting, and estimation of epidemic parameters, among others. In many situations, due to model complexity, time constraints, unavailability of or unfamiliarity with existing software, or other reasons, agent-based models are used to simulate epidemic processes. However, many simulation algorithms are ad hoc, which may introduce unwanted or unnoticed approximations. We present a method to build approximate, agent-based models from mathematical descriptions of stochastic epidemic processes that improves simulation speed and converges to exact simulation techniques in limiting cases. The simplicity and generality of our method should make it widely applicable to various problems in mathematical epidemiology, and its connection to other methods developed in chemical physics should inspire future work and elaboration.
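To illustrate the idea of partitioning hazards into agent-internal and dependent terms, here is a minimal sketch assuming constant exponential hazards within a time step; all names and parameter values are hypothetical, and this is not the authors' implementation:

```python
import math
import random

def step_agents(agents, dt, infection_hazard, internal_hazard):
    """One discrete time step of length dt for a simple S-I-R agent model.

    Hazards that depend on other agents (infection pressure from infectious
    contacts) are approximated as constant over the step; hazards internal to
    an agent (recovery) could in principle be sampled exactly, but are shown
    here with the same first-order scheme for brevity. Illustrative sketch only.
    """
    n_infectious = sum(a["state"] == "I" for a in agents)
    for a in agents:
        if a["state"] == "S":
            # Dependent hazard: probability of at least one infection event
            # during dt, assuming the hazard is held constant over the step.
            lam = infection_hazard(n_infectious, len(agents))
            if random.random() < 1.0 - math.exp(-lam * dt):
                a["state"] = "I"
        elif a["state"] == "I":
            # Internal hazard: recovery depends only on the agent itself.
            if random.random() < 1.0 - math.exp(-internal_hazard(a) * dt):
                a["state"] = "R"

# Example usage with hypothetical parameter values.
agents = [{"state": "I"}] + [{"state": "S"} for _ in range(99)]
beta, gamma = 0.3, 0.1
for _ in range(1000):
    step_agents(agents, dt=0.1,
                infection_hazard=lambda i, n: beta * i / n,
                internal_hazard=lambda a: gamma)
```

As dt shrinks, the per-step probabilities 1 - exp(-hazard * dt) approach the instantaneous event rates, which is the sense in which such a scheme converges to the underlying continuous-time process.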


2019, Vol. 189(3), pp. 175-178
Author(s): Tyler J. VanderWeele

Abstract: There are inherent tensions between many of the social exposures examined within social epidemiology and the assumptions embedded in the quantitative potential-outcomes-based causal inference framework. The potential-outcomes framework characteristically requires a well-defined hypothetical intervention. As noted by Galea and Hernán (Am J Epidemiol. 2020;189(3):167–170), for many social exposures, such well-defined hypothetical interventions do not exist, or there is no consensus on what they might be. Nevertheless, the quantitative potential-outcomes framework can still be useful for the study of some of these social exposures through creative adaptations that 1) redefine the exposure, 2) separate the exposure from the hypothetical intervention, or 3) allow for a distribution of hypothetical interventions. These various approaches and adaptations are reviewed and discussed. However, even these approaches have their limits. For certain important historical and social determinants of health, such as social movements or wars, the quantitative potential-outcomes framework with well-defined hypothetical interventions is the wrong tool. Other modes of inquiry are needed.


2015, Vol. 3(2), pp. 207-236
Author(s): Denis Talbot, Geneviève Lefebvre, Juli Atherton

Abstract: Estimating causal exposure effects in observational studies ideally requires the analyst to have a vast knowledge of the domain of application. Investigators often bypass difficulties related to the identification and selection of confounders through the use of fully adjusted outcome regression models. However, since such models likely contain more covariates than required, the variance of the regression coefficient for exposure may be unnecessarily large. Instead of using a fully adjusted model, model selection can be attempted. Most classical statistical model selection approaches, such as Bayesian model averaging, do not readily address causal effect estimation. We present a new model-averaged approach to causal inference, Bayesian causal effect estimation (BCEE), which is motivated by the graphical framework for causal inference. BCEE aims to unbiasedly estimate the causal effect of a continuous exposure on a continuous outcome while being more efficient than a fully adjusted approach.
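For intuition only, a generic model-averaged effect estimate takes the form below; this is standard Bayesian model averaging over candidate adjustment models, not BCEE's specific estimator, whose construction of the model weights differs (see the paper):

```latex
% Generic Bayesian model averaging over candidate adjustment models m in \mathcal{M},
% where \hat{\Delta}_m is the exposure-effect estimate under model m and D is the data:
\hat{\Delta}_{\mathrm{BMA}} = \sum_{m \in \mathcal{M}} \hat{\Delta}_m \, \Pr(m \mid D).
```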


2018, Vol. 43(5), pp. 540-567
Author(s): Jiannan Lu, Peng Ding, Tirthankar Dasgupta

Assessing the causal effects of interventions on ordinal outcomes is an important objective of many educational and behavioral studies. Under the potential outcomes framework, we can define causal effects as comparisons between the potential outcomes under treatment and control. Unfortunately, the average causal effect, often the parameter of interest, is difficult to interpret for ordinal outcomes. To address this challenge, we propose to use two causal parameters, defined as the probabilities that the treatment is beneficial and strictly beneficial for the experimental units. Although well defined for any outcome and of particular interest for ordinal outcomes, these two parameters depend on the association between the potential outcomes and are therefore not identifiable from the observed data without additional assumptions. Echoing recent advances in the econometrics and biostatistics literature, we present sharp bounds for these causal parameters for ordinal outcomes, under fixed marginal distributions of the potential outcomes. Because the causal estimands and their corresponding sharp bounds are based on the potential outcomes themselves, the proposed framework can be flexibly incorporated into any chosen models of the potential outcomes and is directly applicable to randomized experiments, unconfounded observational studies, and randomized experiments with noncompliance. We illustrate our methodology via numerical examples and three real-life applications related to educational and behavioral research.
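As I read the abstract, the two causal parameters take the following form (my notation; 'beneficial' versus 'strictly beneficial'):

```latex
% For potential ordinal outcomes Y(1) under treatment and Y(0) under control:
\tau_{\geq} = \Pr\{Y(1) \geq Y(0)\} \quad \text{(treatment is beneficial)},
\qquad
\tau_{>} = \Pr\{Y(1) > Y(0)\} \quad \text{(treatment is strictly beneficial)}.
% Both depend on the joint distribution of (Y(1), Y(0)), which is never observed,
% hence only sharp bounds given the two marginal distributions are identifiable.
```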


Author(s): Ashley L. Buchanan, S. Bessey, William C. Goedel, Maximilian King, Eleanor J. Murray, ...

Abstract: Pre-exposure prophylaxis (PrEP) for HIV prevention may benefit not only the individual who uses it but also their uninfected sexual risk contacts. We developed an agent-based model using a novel trial emulation approach to quantify disseminated effects of PrEP use among men who have sex with men in Atlanta, USA from 2015 to 2017. Components (subsets of agents connected through partnerships in a sexual network, but not sharing partnerships with any other agents) were first randomized to an intervention coverage level or control; then, within intervention components, eligible agents were randomized to PrEP. We estimated direct and disseminated (indirect) effects using randomization-based estimators and reported corresponding 95% simulation intervals across scenarios ranging from 10% to 90% coverage in the intervention components. A population of 11,245 agents was simulated, with an average of 1,551 components identified. Comparing agents randomized to PrEP in 70% coverage components to control agents, there was a 15% disseminated risk reduction in HIV incidence (95% simulation intervals = 0.65, 1.05). Individuals not on PrEP may receive a protective benefit by being in a sexual network with higher PrEP coverage. Agent-based models are useful to evaluate possible direct and disseminated effects of HIV prevention modalities in sexual networks.
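One common formalization of direct and disseminated effects under this kind of two-stage randomization (generic notation in the style of the interference literature, not necessarily the exact estimators used here) is:

```latex
% \bar{Y}(a, \alpha): average potential outcome when an agent's own treatment
% is a (1 = PrEP, 0 = none) and the agent's component is assigned coverage \alpha.
% Direct effect: own PrEP use versus none, at the same coverage level:
DE(\alpha) = \bar{Y}(1, \alpha) - \bar{Y}(0, \alpha).
% Disseminated (indirect) effect: no own PrEP use, but membership in a
% component with coverage \alpha versus a control component:
IE(\alpha) = \bar{Y}(0, \alpha) - \bar{Y}(0, \text{control}).
```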


2017
Author(s): Stefan Öberg

Twin births are a well-known and widespread example of a so-called “natural experiment”. Instrumental variables based on twin births have been used in many studies to estimate the causal effect of the number of children on the parents or siblings. I use the potential outcomes framework to show that these instrumental variables do not work as intended. They are fundamentally flawed and will always lead to severely biased estimates without any meaningful interpretation. This has been overlooked in previous research because too little attention has been paid to defining the treatment in this natural experiment. I analyze three different possible interpretations of the treatment and show that they all lead to inherent violations of the necessary assumptions. The effect of the number of children on the parents or siblings is a policy-relevant and theoretically important issue. The scientific record should therefore be corrected so that it does not lead to misguided decisions.


Sociology, 2020
Author(s): Pablo Geraldo Bastías, Jennie E. Brand

Causal inference is a growing interdisciplinary subfield in statistics, computer science, economics, epidemiology, and the social sciences. In contrast with both traditional quantitative methods and cutting-edge approaches like machine learning, causal inference questions are defined in relation to potential outcomes, or variable values that are counterfactual to the observed world, and therefore cannot be answered from joint probabilities alone, even with infinite data. The fact that one can observe at most one potential outcome among those of interest is known as the “fundamental problem of causal inference.” For example, in this framework, the economic return to college education can be defined as a comparison between two potential outcomes: the wages of an individual with a college education versus the wages that the same individual would have received had he or she not attended college. In general, researchers are interested in estimating such effects for certain groups and comparing the effects for different subpopulations. Critical to causal inference is recognizing that, to answer causal questions from observed data, one has to rely on untestable assumptions about how the data were generated. In other words, there is no particular statistical method that would render a conclusion “causal”; the validity of such an interpretation depends on a combination of data, assumptions about the data-generating process based on expert judgment, and estimation techniques. In the last several decades, our understanding of causality has improved enormously, owing to a conceptual apparatus and a mathematical language that enables rigorous conceptualization of causal quantities and formal representation of causal assumptions, while still employing familiar statistical methods. Potential outcomes (the Neyman-Rubin causal model) and structural equations encoded as directed acyclic graphs (DAGs, also known as structural causal models) are two common approaches for conceptualizing causal relationships. The symbiosis of both languages offers a powerful framework to address causal questions. This review covers developments in both causal identification (i.e., deciding if a quantity of interest would be recoverable from infinite data, based on our assumptions) and causal effect estimation (i.e., the use of statistical methods to approximate that answer with finite, although potentially big, data). The literature is presented following the type of assumptions and questions frequently encountered in empirical research, ending with a discussion of promising new directions in the field.
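In standard potential-outcomes notation (a textbook rendering of the definitions discussed in this abstract), the quantities are:

```latex
% Individual causal effect and average treatment effect for a binary treatment
% A (e.g., college attendance) with outcome Y (e.g., wages):
\tau_i = Y_i(1) - Y_i(0), \qquad \mathrm{ATE} = E\big[Y(1) - Y(0)\big].
% The fundamental problem of causal inference: only Y_i = Y_i(A_i) is observed
% for each unit, so \tau_i is never directly observed and untestable assumptions
% are needed to identify averages such as the ATE from data.
```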

