Generalized spatiotemporal modeling and causal inference for assessing treatment effects for multiple groups with an ordinal outcome.

2018 ◽  
Author(s):  
Soutik Ghosal


2020 ◽  
Vol 9 (10) ◽  
pp. 737-750
Author(s):  
Elyse Swallow ◽  
Oscar Patterson-Lomba ◽  
Rajeev Ayyagari ◽  
Corey Pelletier ◽  
Rina Mehta ◽  
...  

Aim: To illustrate that bias associated with indirect treatment comparisons and network meta-analyses can be reduced by adjusting for outcomes on common reference arms. Materials & methods: Approaches to adjusting for reference-arm effects are presented within a causal inference framework. Bayesian and frequentist approaches are applied to three real data examples. Results: Reference-arm adjustment can significantly impact estimated treatment differences, improve model fit, and align indirectly estimated treatment effects with those observed in randomized trials; in some cases it can even reverse the direction of estimated treatment effects. Conclusion: Accumulating theoretical and empirical evidence underscores the importance of adjusting for reference-arm outcomes in indirect treatment comparisons and network meta-analyses, both to make full use of the data and to reduce the risk of bias in estimated treatment effects.
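The idea can be sketched in a few lines. Below is a minimal, illustrative rendering of an anchored indirect comparison with a simple regression-based reference-arm adjustment: study-level effects versus a shared reference arm are regressed on the outcome observed in that reference arm, so the two drugs are compared at a common reference-arm response. The numbers and the regression form are assumptions for illustration, not the authors' Bayesian or frequentist estimators.

```python
# Hedged sketch: anchored indirect comparison with a reference-arm
# covariate. All trial summaries below are invented for illustration.
import numpy as np
import statsmodels.api as sm

# Hypothetical trial-level summaries: effect vs. reference (e.g., a log
# odds ratio) and the response rate in the shared reference (placebo) arm.
effect   = np.array([-0.60, -0.45, -0.70, -0.20, -0.10])
ref_rate = np.array([ 0.30,  0.25,  0.35,  0.15,  0.10])
drug     = np.array(["A", "A", "A", "B", "B"])

# Unadjusted anchored comparison: difference of mean effects vs. reference.
d_unadj = effect[drug == "A"].mean() - effect[drug == "B"].mean()
print("Unadjusted A vs B:", d_unadj)

# Reference-arm adjustment: include the reference-arm outcome as a
# covariate so A and B are compared at the same reference response.
X = np.column_stack([
    (drug == "A").astype(float),   # indicator for drug A
    (drug == "B").astype(float),   # indicator for drug B
    ref_rate,                      # reference-arm outcome
])
fit = sm.OLS(effect, X).fit()
print("Adjusted A vs B:", fit.params[0] - fit.params[1])
```

If reference-arm outcomes differ systematically between the A trials and the B trials, the adjusted and unadjusted contrasts can diverge substantially, which is the mechanism behind the sign reversals the abstract describes.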


2013 ◽  
Vol 53 (8) ◽  
pp. 643 ◽  
Author(s):  
R. Murison ◽  
J. M. Scott

The present paper explains the statistical inference that can be drawn from an unreplicated field experiment that investigated three different pasture and grazing management strategies. The experiment was intended to assess these three strategies as whole-farmlet systems, where the scale of the experiment precluded replication. It was planned so that farmlets were allocated to matched paddocks on the basis of background variables measured across each paddock before the start of the experiment. These conditioning variables were used in the statistical model so that farmlet effects could be discerned from the longitudinal profiles of the responses. The purpose is to explain the principles by which longitudinal data collected from the experiment were interpreted. Two datasets, (1) botanical composition and (2) hogget liveweights, are used as examples. Inferences from the experiment are guarded because we acknowledge that the use of conditioning variables and matched paddocks does not provide the same power as replication. We nevertheless conclude that the differences observed are more likely to have been due to treatment effects than to random variation or bias.
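A minimal sketch of this kind of analysis follows: a longitudinal mixed model in which a pre-experimental conditioning covariate stands in for the replication the design could not provide, with a random intercept per paddock. The variable names (liveweight, farmlet, baseline_fertility, paddock) and the simulated data are hypothetical, not the authors' dataset or exact model.

```python
# Hedged sketch: longitudinal model with a conditioning covariate and a
# paddock-level random intercept, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_paddocks, n_times = 12, 8
df = pd.DataFrame({
    "paddock": np.repeat(np.arange(n_paddocks), n_times),
    "time": np.tile(np.arange(n_times), n_paddocks),
    "farmlet": np.repeat(["A", "B", "C"], (n_paddocks // 3) * n_times),
    "baseline_fertility": np.repeat(rng.normal(0, 1, n_paddocks), n_times),
})
df["liveweight"] = (
    30 + 0.8 * df["time"] + 2.0 * (df["farmlet"] == "B")
    + 1.5 * df["baseline_fertility"] + rng.normal(0, 1, len(df))
)

# The random intercept captures paddock-to-paddock variation; the
# baseline covariate adjusts for pre-existing differences so farmlet
# profiles are compared on a like-for-like basis.
model = smf.mixedlm(
    "liveweight ~ farmlet * time + baseline_fertility",
    df, groups=df["paddock"],
).fit()
print(model.summary())
```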


2013 ◽  
Vol 21 (2) ◽  
pp. 233-251 ◽  
Author(s):  
Walter R. Mebane ◽  
Paul Poast

How a treatment causes a particular outcome is a focus of inquiry in political science. When treatment data are either nonrandomly assigned or missing, the analyst will often invoke ignorability assumptions: that is, both the treatment and missingness are assumed to be as if randomly assigned, perhaps conditional on a set of observed covariates. But what if these assumptions are wrong? What if the analyst does not know why—or even if—a particular subject received a treatment? Building on Manski, Molinari offers an approach for calculating nonparametric identification bounds for the average treatment effect of a binary treatment under general missingness or nonrandom assignment. To make these bounds substantively more informative, Molinari's technique permits adding monotonicity assumptions (e.g., assuming that treatment effects are weakly positive). Given the potential importance of these assumptions, we develop a new Bayesian method for performing sensitivity analysis regarding them. This sensitivity analysis allows analysts to interpret the assumptions' consequences quantitatively and visually. We apply this method to two problems in political science, highlighting the method's utility for applied research.
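To make the bounds concrete, here is a minimal sketch of worst-case (Manski-style) bounds on the average treatment effect for a binary treatment and an outcome bounded in [0, 1], with no ignorability assumption, plus the tightening obtained from a weak-positivity (monotonicity) assumption. The data are simulated for illustration; this is not Molinari's estimator or the authors' Bayesian sensitivity analysis.

```python
# Hedged sketch: worst-case bounds on the ATE without ignorability.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
t = rng.integers(0, 2, n)            # observed, possibly nonrandom treatment
y = np.clip(0.4 + 0.2 * t + rng.normal(0, 0.2, n), 0, 1)

p1 = t.mean()                        # share treated
y1_obs = y[t == 1].mean()            # mean outcome among the treated
y0_obs = y[t == 0].mean()            # mean outcome among the controls

# E[Y(1)] is identified for the treated but can be anywhere in [0, 1]
# for the controls; symmetrically for E[Y(0)].
ey1_lo, ey1_hi = y1_obs * p1, y1_obs * p1 + (1 - p1)
ey0_lo, ey0_hi = y0_obs * (1 - p1), y0_obs * (1 - p1) + p1

print("ATE bounds:", (ey1_lo - ey0_hi, ey1_hi - ey0_lo))

# A monotonicity assumption (weakly positive effects) truncates the
# lower bound at zero, in the spirit of Molinari's refinements.
print("With ATE >= 0:", (max(ey1_lo - ey0_hi, 0.0), ey1_hi - ey0_lo))
```

Without further assumptions the bounds always have width one for an outcome in [0, 1], which is why assumptions such as monotonicity, and sensitivity analysis about them, carry so much weight.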


2009 ◽  
Vol 31 (4) ◽  
pp. 463-479 ◽  
Author(s):  
Steffi Pohl ◽  
Peter M. Steiner ◽  
Jens Eisermann ◽  
Renate Soellner ◽  
Thomas D. Cook

Adjustment methods such as propensity scores and analysis of covariance are often used for estimating treatment effects in nonexperimental data. Shadish, Clark, and Steiner used a within-study comparison to test how well these adjustments work in practice. They randomly assigned participating students to a randomized or nonrandomized experiment. Treatment effects were then estimated in the experiment and compared to the adjusted nonexperimental estimates. Most of the selection bias in the nonexperiment was reduced. The present study replicates the study of Shadish et al. despite some differences in design and in the size and direction of the initial bias. The results show that the selection of covariates matters considerably for bias reduction in nonexperiments but that the choice of analysis matters less.
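The two adjustment strategies compared in such within-study designs can be sketched briefly: analysis of covariance, and propensity-score weighting fitted by logistic regression. The covariates, effect sizes, and selection model below are invented for illustration, not the study's data.

```python
# Hedged sketch: ANCOVA vs. inverse-probability weighting on simulated
# nonexperimental data with self-selection into treatment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2_000
x = rng.normal(0, 1, (n, 2))                       # pretest covariates
p = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, p)                             # self-selected treatment
y = 1.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, n)

# ANCOVA: regress the outcome on treatment plus the covariates.
X = sm.add_constant(np.column_stack([t, x]))
print("ANCOVA estimate:", sm.OLS(y, X).fit().params[1])

# Propensity scores: model treatment from covariates, then weight each
# group by the inverse probability of its observed treatment.
ps = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict()
w = t / ps + (1 - t) / (1 - ps)
ate = np.average(y[t == 1], weights=w[t == 1]) \
    - np.average(y[t == 0], weights=w[t == 0])
print("IPW estimate:", ate)
```

Both estimators recover the simulated effect of 1.0 here because the covariates that drive selection are measured, which is exactly the condition the within-study comparison probes: with the wrong covariates, neither analysis removes the bias.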


2021 ◽  
Author(s):  
Shuo Sun ◽  
Erica E. M. Moodie ◽  
Johanna G. Nešlehová

2021 ◽  
Author(s):  
Linda Graefe ◽  
Sonja Hahn ◽  
Axel Mayer

In unbalanced designs, there is controversy about which type of ANOVA sums of squares should be used for testing main effects, and whether main effects should be considered at all in the presence of interactions. Looking at this problem from a causal inference perspective, we show in which designs and under which conditions ANOVA main effects correspond to average treatment effects as defined in the causal inference literature. We consider balanced, proportional, and nonorthogonal designs, and models with and without interactions. In balanced designs, main effects obtained by type I, II, and III sums of squares all correspond to the average treatment effect. The same holds for proportional designs, except for ANOVA type III, which is biased if there are interactions. In nonorthogonal designs, ANOVA type I is always highly biased, and ANOVA types II and III are biased if there are interactions. In a simulation study, we confirm our theoretical results and examine the severity of the bias under different conditions.
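The contrast can be made concrete with a small example: a nonorthogonal 2x2 design with an interaction, where the type I/II/III tests are computed and compared against the average treatment effect defined as the unweighted difference of cell means. The simulated data and factor names are assumptions for illustration, not the authors' simulation design.

```python
# Hedged sketch: ANOVA types vs. the ATE in a nonorthogonal 2x2 design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
# Unequal, non-proportional cell sizes make the design nonorthogonal.
cells = [("a1", "b1", 40), ("a1", "b2", 10), ("a2", "b1", 10), ("a2", "b2", 40)]
rows = []
for a, b, n in cells:
    mu = 1.0 * (a == "a2") + 0.5 * (b == "b2") + 1.0 * ((a == "a2") and (b == "b2"))
    rows.append(pd.DataFrame({"A": a, "B": b, "y": mu + rng.normal(0, 1, n)}))
df = pd.concat(rows, ignore_index=True)

fit = smf.ols("y ~ C(A) * C(B)", data=df).fit()
print(anova_lm(fit, typ=1))   # Type I: order-dependent
print(anova_lm(fit, typ=2))   # Type II
# Type III is only meaningful with sum-to-zero contrasts.
fit_sum = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(anova_lm(fit_sum, typ=3))

# ATE of factor A: average the a2 - a1 difference over the levels of B
# with equal weights, ignoring the unbalanced cell counts (true value 1.5).
cell_means = df.groupby(["A", "B"])["y"].mean()
print("ATE of A:", (cell_means["a2"] - cell_means["a1"]).mean())
```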


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Chinmay Belthangady ◽  
Will Stedden ◽  
Beau Norgeot

Abstract
Background: Observational studies are increasingly used to provide supplementary evidence alongside randomized controlled trials (RCTs), because they offer a scale and diversity of participants and outcomes that would be infeasible in an RCT, and they more closely reflect the settings in which the studied interventions will eventually be applied. Well-established propensity-score-based methods exist to overcome the challenges of estimating causal effects from observational data, and they provide quality-assurance diagnostics for evaluating the degree to which bias has been removed and the estimates can be trusted. In large medical datasets it is common to find the same underlying health condition being treated with a variety of distinct drugs or drug combinations. Conventional methods require a manual, iterative workflow, so they scale poorly to studies with many intervention arms. In such situations, automated causal inference methods that are compatible with traditional propensity-score-based workflows are highly desirable.
Methods: We introduce BCAUS, an automated causal inference method featuring a deep-neural-network-based propensity model trained with a loss that penalizes both incorrect prediction of the assigned treatment and the degree of imbalance between the inverse-probability-weighted covariates. The network is trained end-to-end by dynamically adjusting the loss term for each training batch so that the relative contributions of the two loss components are held fixed. Trained BCAUS models can be used in conjunction with traditional propensity-score-based methods to estimate causal treatment effects.
Results: We tested BCAUS on the semi-synthetic Infant Health & Development Program dataset with a single intervention arm, and on a real-world observational study of diabetes interventions with over 100,000 individuals spread across more than a hundred intervention arms. Compared with other recently proposed automated causal inference methods, BCAUS had competitive accuracy for estimating synthetic treatment effects and provided highly concordant estimates on the real-world dataset, while being an order of magnitude faster.
Conclusions: BCAUS is directly compatible with trusted protocols for estimating treatment effects and diagnosing the quality of those estimates, while making the established approaches automatically scalable to an arbitrary number of simultaneous intervention arms without any need for manual iteration.
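The dual-loss idea can be sketched compactly: a small propensity network trained with binary cross-entropy plus a covariate-balance penalty on the inverse-probability-weighted means, with the penalty rescaled each batch so the two terms contribute comparably. The architecture, toy data, and rescaling rule below are simplifications for illustration, not the published BCAUS implementation.

```python
# Hedged sketch of a BCAUS-style propensity model in PyTorch.
import torch

def balance_loss(x, t, ps, eps=1e-6):
    """Mean absolute difference of IPW-weighted covariate means."""
    w1 = t / (ps + eps)                 # weights for treated units
    w0 = (1 - t) / (1 - ps + eps)       # weights for control units
    m1 = (w1[:, None] * x).sum(0) / (w1.sum() + eps)
    m0 = (w0[:, None] * x).sum(0) / (w0.sum() + eps)
    return (m1 - m0).abs().mean()

# Toy data: two confounders driving treatment assignment.
torch.manual_seed(0)
x = torch.randn(512, 2)
t = torch.bernoulli(torch.sigmoid(x[:, 0] - 0.5 * x[:, 1]))

net = torch.nn.Sequential(
    torch.nn.Linear(2, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(200):
    logits = net(x).squeeze(-1)
    ps = torch.sigmoid(logits)
    l_pred = bce(logits, t)
    l_bal = balance_loss(x, t, ps)
    # Rescale the balance term so its contribution tracks the prediction
    # term, echoing BCAUS's dynamic per-batch loss weighting.
    mu = (l_pred / (l_bal + 1e-6)).detach()
    (l_pred + mu * l_bal).backward()
    opt.step()
    opt.zero_grad()

ps_final = torch.sigmoid(net(x).squeeze(-1))
print("balance after training:", balance_loss(x, t, ps_final).item())
```

The fitted scores can then be handed to any standard propensity-score workflow (weighting, matching, balance diagnostics), which is the compatibility property the abstract emphasizes.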

