Deliberation and Experimental Design

Author(s):  
Kevin Esterling

This chapter describes the methodological considerations necessary for making a causal inference regarding the effect of institutions and group contexts on deliberation. It focuses on the elements of a study's research design and the assumptions necessary to state a causal inference given a particular design; these considerations apply to randomized experimental designs, both in the lab and in the field, as well as to quasi-experimental or natural experimental designs using observational data. The chapter shows how to assess a study's internal validity for identifying a causal effect and briefly discusses external and epistemic validity considerations of particular urgency for empirical deliberation research.

2012 ◽  
Vol 3 (1) ◽  
pp. 52
Author(s):  
Donald T. Campbell ◽  
Beatrice J. Krauss

This paper provides a speculative discussion of which quasi-experimental designs might be useful in various aspects of HIV/AIDS research. The first author's expertise is in research design, not HIV, while the second author has been active in HIV prevention research. The authors hope it will help the HIV/AIDS research community discover and invent an expanded range of possibilities for valid causal inference. DOI: 10.2458/azu_jmmss_v3i1_campbell


In this chapter, students will learn the process of designing experiments. The classic experimental design is presented first. Following this, three distinct quasi-experimental designs are presented. The benefits and burdens of the classic and quasi-experimental designs are discussed in depth. By the end of this chapter, students will understand concepts related to random selection, generalizability, treatment and control groups, pre- and post-test measurement of the dependent variable, and internal validity.
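The core of the classic design described above can be illustrated with a short, hypothetical simulation (the function names and the constant treatment effect are our own invention, not material from the chapter): units are randomly assigned to treatment or control, the dependent variable is measured pre- and post-intervention, and the post-test means of the two groups are compared.

```python
import random

def run_experiment(pretest_scores, treatment_effect=2.0, seed=7):
    """Randomly assign each unit to treatment or control, then
    simulate a post-test score. Random assignment makes the two
    groups comparable on the pre-test in expectation."""
    rng = random.Random(seed)
    results = []
    for pre in pretest_scores:
        treated = rng.random() < 0.5
        post = pre + (treatment_effect if treated else 0.0)
        results.append((pre, post, treated))
    return results

def estimated_effect(results):
    """Difference in mean post-test scores, treatment minus control."""
    treat = [post for _, post, t in results if t]
    ctrl = [post for _, post, t in results if not t]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)
```

Because assignment is random, the groups' pre-test means differ only by chance, so the post-test difference recovers the treatment effect (here 2.0) up to sampling error; without random assignment, the comparison would be confounded by selection, which is the internal-validity threat the chapter's quasi-experimental designs must work around.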


1983 ◽  
Vol 4 (3) ◽  
pp. 77-83
Author(s):  
Jean R. Harber

This article stresses the importance of controlling extraneous variables when studying educational problems. Various types of research studies are described. The experimental research design, which is ideally suited to detecting causal relationships if proper controls are used, and quasi-experimental procedures, which are employed when true experimental designs cannot be used, are discussed. Threats to internal validity are presented and hypothetical examples are given to illustrate these threats and the means of controlling them. The importance of utilizing control groups is illustrated.



2021 ◽  
Vol 15 (5) ◽  
pp. 1-46
Author(s):  
Liuyi Yao ◽  
Zhixuan Chu ◽  
Sheng Li ◽  
Yaliang Li ◽  
Jing Gao ◽  
...  

Causal inference has been a critical research topic across many domains, such as statistics, computer science, education, public policy, and economics, for decades. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and the low budget requirement compared with randomized controlled trials. Spurred by the rapidly developing machine learning area, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the well-known causal inference frameworks. The methods are divided into two categories depending on whether they require all three assumptions of the potential outcome framework. For each category, both traditional statistical methods and recent machine-learning-enhanced methods are discussed and compared. Plausible applications of these methods are also presented, including applications in advertising, recommendation, medicine, and so on. Moreover, commonly used benchmark datasets and open-source code are summarized to help researchers and practitioners explore, evaluate, and apply causal inference methods.
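As a concrete illustration of one family of methods the survey covers, the sketch below estimates an average treatment effect from simulated observational data via inverse propensity weighting, under the potential-outcome assumptions of exchangeability, positivity, and consistency. The crude logistic fit and all names here are our own illustration, not code from the survey or its summarized repositories.

```python
import math

def fit_logistic(xs, ts, lr=0.5, iters=500):
    """Crude gradient-descent logistic regression for the
    propensity score P(T = 1 | X), one covariate plus intercept."""
    w0 = w1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, t in zip(xs, ts):
            p = 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))
            g0 += p - t
            g1 += (p - t) * x
        w0 -= lr * g0 / n
        w1 -= lr * g1 / n
    return w0, w1

def ipw_ate(xs, ts, ys):
    """Stabilized (Hajek) inverse-propensity-weighted estimate
    of the average treatment effect from observational data."""
    w0, w1 = fit_logistic(xs, ts)
    num1 = den1 = num0 = den0 = 0.0
    for x, t, y in zip(xs, ts, ys):
        e = 1.0 / (1.0 + math.exp(-(w0 + w1 * x)))  # propensity score
        if t:
            num1 += y / e
            den1 += 1.0 / e
        else:
            num0 += y / (1.0 - e)
            den0 += 1.0 / (1.0 - e)
    return num1 / den1 - num0 / den0
```

On confounded data, the naive difference in group means is biased, while the weighted contrast approximately recovers the true effect, provided the propensity model is correct and every unit has a nonzero probability of either treatment status.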


2019 ◽  
Vol 188 (9) ◽  
pp. 1682-1685 ◽  
Author(s):  
Hailey R Banack

Authors aiming to estimate causal effects from observational data frequently discuss 3 fundamental identifiability assumptions for causal inference: exchangeability, consistency, and positivity. However, too often, studies fail to acknowledge the importance of measurement bias in causal inference. In the presence of measurement bias, the aforementioned identifiability conditions are not sufficient to estimate a causal effect. The most fundamental requirement for estimating a causal effect is knowing who is truly exposed and unexposed. In this issue of the Journal, Caniglia et al. (Am J Epidemiol. 2019;000(00):000–000) present a thorough discussion of methodological challenges when estimating causal effects in the context of research on distance to obstetrical care. Their article highlights empirical strategies for examining nonexchangeability due to unmeasured confounding and selection bias and potential violations of the consistency assumption. In addition to the important considerations outlined by Caniglia et al., authors interested in estimating causal effects from observational data should also consider implementing quantitative strategies to examine the impact of misclassification. The objective of this commentary is to emphasize that you can't drive a car with only three wheels, and you also cannot estimate a causal effect in the presence of exposure misclassification bias.
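One simple quantitative strategy of the kind the commentary recommends is a back-calculation bias analysis for nondifferential exposure misclassification: given an assumed sensitivity and specificity of exposure classification, reconstruct the true 2x2 table and recompute the risk ratio. The sketch below is a generic textbook correction, not code from the commentary, and the sensitivity and specificity values would have to come from validation data or expert judgment.

```python
def true_exposed(exposed_obs, total, sensitivity, specificity):
    """Invert observed_exposed = Se * E + (1 - Sp) * (total - E)
    to recover the true number exposed, E."""
    return ((exposed_obs - (1.0 - specificity) * total)
            / (sensitivity + specificity - 1.0))

def corrected_risk_ratio(exp_cases_obs, exp_noncases_obs,
                         n_cases, n_noncases, se, sp):
    """Risk ratio after correcting exposure misclassification,
    assumed nondifferential with respect to the outcome."""
    a = true_exposed(exp_cases_obs, n_cases, se, sp)        # exposed cases
    b = true_exposed(exp_noncases_obs, n_noncases, se, sp)  # exposed noncases
    c = n_cases - a                                         # unexposed cases
    d = n_noncases - b                                      # unexposed noncases
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    return risk_exposed / risk_unexposed
```

Applying the correction to a table generated from a known truth recovers the true risk ratio exactly, which makes the point of the commentary concrete: with imperfect classification, the uncorrected table alone cannot identify the causal effect.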


2015 ◽  
Vol 29 (119) ◽  
pp. 19 ◽  
Author(s):  
Brittany Gorrall ◽  
Jacob Curtis ◽  
Todd Little ◽  
Pavel Panko

<p><span>Randomized Controlled Trial (RCT) designs are typically viewed as the best design in psychological research. However, it is not always possible to meet the specifications of an RCT, and many studies are therefore conducted in a quasi-experimental framework. Although quasi-experimental designs are considered less desirable than RCT designs, with proper guidelines they can produce equally valid inferences. In this article we present three quasi-experimental designs that are alternatives to RCT designs: Regression Point Displacement (RPD), Regression Discontinuity (RD), and Propensity Score Matching (PSM). Additionally, we describe several methodological improvements for use with these types of designs. </span></p>
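Of the three designs, regression discontinuity lends itself to a compact illustration. The sketch below is our own toy code, not material from the article: it implements a sharp RD estimate by fitting a separate least-squares line on each side of the cutoff within a bandwidth and taking the gap between the two fitted lines at the cutoff as the treatment effect.

```python
def rdd_estimate(running, outcome, cutoff, bandwidth):
    """Sharp regression-discontinuity estimate: local linear fits
    on each side of the cutoff, evaluated at the cutoff."""
    def fit_line(xs, ys):
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        slope = cov / var
        return my - slope * mx, slope  # intercept, slope

    left = [(x, y) for x, y in zip(running, outcome)
            if cutoff - bandwidth <= x < cutoff]
    right = [(x, y) for x, y in zip(running, outcome)
             if cutoff <= x <= cutoff + bandwidth]
    bl, sl = fit_line([x for x, _ in left], [y for _, y in left])
    br, sr = fit_line([x for x, _ in right], [y for _, y in right])
    return (br + sr * cutoff) - (bl + sl * cutoff)
```

Inference in practice also requires principled bandwidth selection and robust standard errors, which this toy omits.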


2021 ◽  
Author(s):  
Eric W. Bridgeford ◽  
Michael Powell ◽  
Gregory Kiar ◽  
Ross Lawrence ◽  
Brian Caffo ◽  
...  

Batch effects, undesirable sources of variance across multiple experiments, present a substantial hurdle for scientific and clinical discoveries. Specifically, batch effects can both create spurious discoveries and hide veridical signals, contributing to the ongoing reproducibility crisis. Typical approaches to dealing with batch effects conceptualize 'batches' as an associational effect rather than a causal one, despite the fact that the sources of variance that comprise the batch, potentially including experimental design and population demographics, causally impact downstream inferences. We therefore cast batch effects as a causal problem rather than an associational problem. This reformulation makes explicit the assumptions and limitations of existing approaches for dealing with batch effects. We then develop causal batch effect strategies, CausalDcorr for discovering batch effects and CausalComBat for mitigating them, which build upon existing associational statistical methods by incorporating modern causal inference techniques. We apply these strategies to a large mega-study of human connectomes assembled by the Consortium for Reliability and Reproducibility, consisting of 24 batches and over 1,700 individuals, to illustrate that existing approaches create more spurious discoveries (false positives) and miss more veridical signals (true positives) than our proposed approaches. Our work thus introduces a conceptual framing, as well as open-source code, for combining multiple distinct datasets to increase confidence in claims of scientific and clinical discoveries.
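To make the contrast with associational correction concrete, the sketch below implements a deliberately simplified location/scale batch harmonization: standardize each batch against its own mean and standard deviation, then map it onto the pooled mean and standard deviation. This is a toy stand-in for ComBat-style adjustment, not the authors' CausalDcorr or CausalComBat; as the paper argues, such purely associational correction can remove veridical signal whenever relevant covariates are confounded with batch.

```python
import statistics

def harmonize_batches(values, batches):
    """Location/scale harmonization: rescale each batch to the
    pooled mean and standard deviation. Assumes every batch has
    at least two distinct values (nonzero within-batch spread)
    and ignores covariates entirely."""
    pooled_mean = statistics.mean(values)
    pooled_sd = statistics.pstdev(values)
    out = list(values)
    for b in set(batches):
        idx = [i for i, lab in enumerate(batches) if lab == b]
        m = statistics.mean(values[i] for i in idx)
        s = statistics.pstdev([values[i] for i in idx])
        for i in idx:
            out[i] = pooled_mean + pooled_sd * (values[i] - m) / s
    return out
```

After harmonization every batch has the same mean, which removes the batch-level shift but also any real between-batch difference, illustrating why the paper treats batch correction as a causal problem rather than a purely associational one.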

