Causal Inference and Mediation: Cross-sectional Biases of Longitudinal Processes

2006 ◽  
Author(s):  
Jonathan N. Frecceri ◽  
Scott E. Maxwell


Author(s):  
Deena Costa ◽  
Olga Yakusheva

Since the early 1990s, researchers have steadily built a broad evidence base for the association between nurse staffing and patient outcomes. However, the majority of studies in the literature employ designs that cannot robustly examine causal pathways to meaningful improvement in patient outcomes. A focus on causal inference is essential to moving the field of nursing research forward and is part of the essential skill set for all nurses as consumers of research. In this article, we describe the importance of causal inference in nursing research and discuss study designs that are more likely to produce causal findings. We first review the conceptual framework supporting this discussion and then use selected examples from the literature typifying three key study designs: cross-sectional studies, longitudinal studies, and randomized controlled trials (RCTs). The discussion illustrates the strengths and limitations of the existing evidence, focusing on the causal pathway between nurse staffing and patient outcomes. The article concludes with implications for future research.
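
A minimal simulation sketch (my own illustration, not from the article) of the design issue the abstract raises: an unmeasured confounder, here a hypothetical unit-acuity variable, can bias a cross-sectional estimate of the staffing-outcome association, while randomized assignment of staffing recovers the assumed true effect. All variable names and effect sizes are invented for the example.

```python
# Illustrative simulation: cross-sectional (observational) vs. randomized estimates
# of a staffing effect when an unmeasured confounder (unit acuity) is present.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = -0.5          # assumed causal effect of staffing on adverse outcomes

# Hypothetical confounder: higher acuity lowers staffing ratios and worsens outcomes.
acuity = rng.normal(size=n)

# Observational world: staffing depends on acuity.
staffing_obs = -0.8 * acuity + rng.normal(size=n)
outcome_obs = true_effect * staffing_obs + 1.0 * acuity + rng.normal(size=n)

# Randomized world: staffing assigned independently of acuity.
staffing_rct = rng.normal(size=n)
outcome_rct = true_effect * staffing_rct + 1.0 * acuity + rng.normal(size=n)

def ols_slope(x, y):
    """Bivariate ordinary-least-squares slope of y on x."""
    xc = x - x.mean()
    return float(np.dot(xc, y - y.mean()) / np.dot(xc, xc))

print("cross-sectional estimate:", round(ols_slope(staffing_obs, outcome_obs), 3))
print("randomized estimate:     ", round(ols_slope(staffing_rct, outcome_rct), 3))
print("true effect:             ", true_effect)
```

On this synthetic data the cross-sectional slope is roughly twice the true effect because acuity drives both variables, whereas the randomized estimate is close to the assumed value.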


2018 ◽  
Vol 37 (75) ◽  
pp. 779-808 ◽  
Author(s):  
Alex Coad ◽  
Dominik Janzing ◽  
Paul Nightingale

This paper presents a new statistical toolkit by applying three techniques for data-driven causal inference from the machine learning community that are little known among economists and innovation scholars: a conditional-independence-based approach, additive noise models, and non-algorithmic inference by hand. We include three applications to Community Innovation Survey (CIS) data, investigating public funding schemes for R&D investment, information sources for innovation, and innovation expenditures and firm growth. Preliminary results provide causal interpretations of some previously observed correlations. Our statistical 'toolkit' could be a useful complement to existing techniques.
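
A minimal sketch (my own illustration, not the authors' code) of the additive-noise-model idea the abstract names: fit a nonlinear regression in each direction and prefer the direction whose residuals are less dependent on the putative cause. Dependence is scored here with scikit-learn's mutual_info_regression as a crude stand-in for the HSIC-style independence tests usually used; the data and function names are invented for the example.

```python
# Additive noise model (ANM) direction sketch: regress each way and compare
# how strongly the residuals depend on the candidate cause.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import mutual_info_regression

def residual_dependence(cause, effect, seed=0):
    """Fit effect = f(cause) + residual; return estimated MI between cause and residual."""
    model = GradientBoostingRegressor(random_state=seed)
    model.fit(cause.reshape(-1, 1), effect)
    residuals = effect - model.predict(cause.reshape(-1, 1))
    return mutual_info_regression(cause.reshape(-1, 1), residuals, random_state=seed)[0]

def anm_direction(x, y):
    """Return 'x->y' or 'y->x', preferring the direction with more independent residuals."""
    return "x->y" if residual_dependence(x, y) < residual_dependence(y, x) else "y->x"

# Toy data with a known direction: y is a nonlinear function of x plus noise.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=2000)
y = x ** 3 + rng.normal(scale=0.5, size=x.size)
print(anm_direction(x, y))   # expected to print 'x->y' on this synthetic example
```

The conditional-independence-based approach mentioned in the abstract works on multivariate data instead, pruning edges whose variables are conditionally independent given others; the sketch above covers only the bivariate ANM case.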


2006 ◽  
Vol 14 ◽  
pp. 17 ◽
Author(s):  
Harold Wenglinsky

The purpose of this article is to comment on the prior article, "Examining Instruction, Achievement, and Equity with NAEP Mathematics Data," by Sarah Theule Lubienski. That article claims that a prior article by the author suffered from three weaknesses: (1) an attempt to justify No Child Left Behind (NCLB); (2) drawing causal inferences from cross-sectional data; and (3) various statistical quibbles. The author responds to the first claim by indicating that any mention of NCLB was intended purely to make the article relevant to a policy journal; to the second claim by noting his own reservations about using cross-sectional data to draw causal inferences; and to the third claim by noting potential issues of quantitative methodology in the Lubienski article. He concludes that studies that use advanced statistical methods are often so opaque as to be difficult to compare, and he suggests some advantages to the quantitative transparency of findings from randomized controlled field trials.

