A conditional test with demonstrated insensitivity to unmeasured bias in matched observational studies

Biometrika ◽  
2020 ◽  
Vol 107 (4) ◽  
pp. 827-840
Author(s):  
P R Rosenbaum

Summary In an observational study matched for observed covariates, an association between treatment received and outcome exhibited may indicate not an effect caused by the treatment, but merely some bias in the allocation of treatments to individuals within matched pairs. The evidence that distinguishes moderate biases from causal effects is unevenly dispersed among possible comparisons in an observational study: some comparisons are insensitive to larger biases than others. Intuitively, larger treatment effects tend to be insensitive to larger unmeasured biases, and perhaps matched pairs can be grouped using covariates, doses or response patterns so that groups of pairs with larger treatment effects may be identified. Even if an investigator has a reasoned conjecture about where to look for insensitive comparisons, that conjecture might prove mistaken, or, when not mistaken, it might be received sceptically by other scientists who doubt the conjecture or judge it to be too convenient in light of its success with the data at hand. In this article a test is proposed that searches for insensitive findings over many comparisons, but controls the probability of falsely rejecting a true null hypothesis of no treatment effect in the presence of a bias of specified magnitude. An example is studied in which the test considers many comparisons and locates an interpretable comparison that is insensitive to larger biases than a conventional comparison based on Wilcoxon’s signed rank statistic applied to all pairs. A simulation examines the power of the proposed test. The method is implemented in the R package dstat, which contains the example and reproduces the analysis.
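
As a rough illustration of the kind of sensitivity calculation underlying such comparisons, the sketch below bounds the one-sided p-value for Wilcoxon's signed rank statistic under a bias of magnitude Gamma, using the usual normal approximation for matched pairs with untied absolute differences. It is a minimal sketch on hypothetical data with a made-up helper name, not the interface of the dstat package.

```r
# Illustrative sketch (not the dstat package API): upper bound on the one-sided
# p-value for Wilcoxon's signed rank statistic in S matched pairs under a bias
# of magnitude Gamma, via the standard sensitivity-analysis normal approximation.
senWilcoxon <- function(d, Gamma = 1) {
  d <- d[d != 0]                        # drop zero pair differences
  S <- length(d)
  Tstat <- sum(rank(abs(d))[d > 0])     # Wilcoxon's signed rank statistic
  pplus <- Gamma / (1 + Gamma)          # worst-case probability of a positive difference
  ET <- pplus * S * (S + 1) / 2         # worst-case null expectation
  VT <- pplus * (1 - pplus) * S * (S + 1) * (2 * S + 1) / 6  # worst-case null variance
  1 - pnorm((Tstat - ET) / sqrt(VT))    # upper bound on the one-sided p-value
}

# Hypothetical treated-minus-control differences in 50 matched pairs
set.seed(1)
d <- rnorm(50, mean = 0.5)
sapply(c(1, 1.5, 2), function(g) senWilcoxon(d, g))  # bounds at Gamma = 1, 1.5, 2
```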

Biostatistics ◽  
2018 ◽  
Vol 21 (3) ◽  
pp. 384-399 ◽  
Author(s):  
Paul R Rosenbaum

Summary In observational studies of treatment effects, it is common to have several outcomes, perhaps of uncertain quality and relevance, each purporting to measure the effect of the treatment. A single planned combination of several outcomes may increase both power and insensitivity to unmeasured bias when the plan is wisely chosen, but it may miss opportunities in other cases. A method is proposed that uses one planned combination with only a mild correction for multiple testing and exhaustive consideration of all possible combinations fully correcting for multiple testing. The method works with the joint distribution of $\boldsymbol{\kappa}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\kappa}^{T}\boldsymbol{\Sigma}\boldsymbol{\kappa}}$ and $\max_{\boldsymbol{\lambda}\neq\mathbf{0}}\,\boldsymbol{\lambda}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\lambda}^{T}\boldsymbol{\Sigma}\boldsymbol{\lambda}}$, where $\boldsymbol{\kappa}$ is chosen a priori and the test statistic $\mathbf{T}$ is asymptotically $N_{L}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. The correction for multiple testing has a smaller effect on the power of $\boldsymbol{\kappa}^{T}(\mathbf{T}-\boldsymbol{\mu})/\sqrt{\boldsymbol{\kappa}^{T}\boldsymbol{\Sigma}\boldsymbol{\kappa}}$ than does switching to a two-tailed test, even though the opposite tail does receive consideration when $\boldsymbol{\lambda}=-\boldsymbol{\kappa}$. In the application, there are three measures of cognitive decline, and the a priori comparison $\boldsymbol{\kappa}$ is their first principal component, computed without reference to treatment assignments. The method is implemented in the R package sensitivitymult.
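
A rough sketch of the planned part of this idea on hypothetical data appears below: the weights kappa are the first principal component of the pooled outcomes, computed without using treatment assignments, and are then applied to the matched pair differences. This illustrates only the a priori comparison; the search over all lambda with its multiplicity correction, and the sensitivity analysis itself, are what the sensitivitymult package provides and are not reproduced here.

```r
# Illustrative sketch of the planned combination only (not the sensitivitymult API)
set.seed(2)
n  <- 100
Yt <- matrix(rnorm(n * 3, mean = 0.3), ncol = 3)   # outcomes of treated pair members (hypothetical)
Yc <- matrix(rnorm(n * 3, mean = 0.0), ncol = 3)   # outcomes of matched controls

pooled <- rbind(Yt, Yc)                                 # pool outcomes, ignoring treatment labels
kappa  <- prcomp(pooled, scale. = TRUE)$rotation[, 1]   # first principal component as kappa

std <- function(Y) scale(Y, center = colMeans(pooled), scale = apply(pooled, 2, sd))
d   <- as.numeric((std(Yt) - std(Yc)) %*% kappa)    # kappa-combined pair differences
wilcox.test(d, alternative = "greater")             # one-sided test of no treatment effect
```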


2014 ◽  
Vol 13 (5) ◽  
pp. 281-285 ◽  
Author(s):  
Heng Li ◽  
Terri Johnson

2020 ◽  
Author(s):  
Youmi Suk ◽  
Hyunseung Kang

Recently, machine learning (ML) methods have been used in causal inference to estimate treatment effects while reducing concerns about model misspecification. However, many, if not all, ML methods require that all confounders be measured in order to estimate treatment effects consistently. In this paper, we propose a family of ML methods that estimate treatment effects in the presence of cluster-level unmeasured confounders, a type of unmeasured confounder that is shared within each cluster and is common in multilevel observational studies. We show through simulation studies that our proposed methods are consistent and doubly robust when unmeasured cluster-level confounders are present. Using our methods, we also examine the effect of taking an algebra course on math achievement scores in the Early Childhood Longitudinal Study, a multilevel observational study in education. The proposed methods are available in the CURobustML R package.
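
To see why confounders shared within a cluster are special, consider a much simpler device than the proposed estimators: within-cluster centering, which removes from a linear model any variable that is constant inside a cluster. The sketch below uses simulated data and illustrates only that idea; it is not the CURobustML interface or the doubly robust ML estimators of the paper.

```r
# Illustrative sketch, not the CURobustML API: within-cluster centering sweeps out
# any confounder that takes a single value per cluster, because centering a
# cluster-constant variable yields zero.  Hypothetical simulated data.
set.seed(3)
J <- 50; m <- 20                                     # 50 clusters of 20 students
cluster <- rep(1:J, each = m)
u <- rnorm(J)[cluster]                               # unmeasured cluster-level confounder
x <- rnorm(J * m)                                    # measured student-level covariate
a <- rbinom(J * m, 1, plogis(0.8 * u + 0.5 * x))     # treatment (e.g., taking algebra)
y <- 2 * a + x + 1.5 * u + rnorm(J * m)              # outcome; true treatment effect is 2

center <- function(v) v - ave(v, cluster)            # within-cluster centering
fit <- lm(center(y) ~ 0 + center(a) + center(x))
coef(fit)["center(a)"]                               # approximately recovers the effect 2
```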


2020 ◽  
Vol 182 (5) ◽  
pp. E5-E7
Author(s):  
Rolf H H Groenwold ◽  
Olaf M Dekkers

The results of observational studies of causal effects are potentially biased due to confounding. Various methods have been proposed to control for confounding in observational studies. Eight basic aspects of confounding adjustment are described, with a focus on correction for confounding through covariate adjustment using regression analysis. These aspects should be considered when planning an observational study of causal effects or when assessing the validity of the results of such a study.
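
The sketch below illustrates the covariate-adjustment aspect on simulated data: the crude regression of outcome on treatment is biased by the confounder, while adding the confounder as a covariate recovers the effect. It is a minimal illustration, not an analysis from the paper.

```r
# Minimal sketch of confounding correction through covariate adjustment with
# regression, on hypothetical simulated data.
set.seed(4)
n <- 2000
x <- rnorm(n)                                  # measured confounder
a <- rbinom(n, 1, plogis(x))                   # treatment assignment depends on x
y <- 1 * a + 2 * x + rnorm(n)                  # outcome; true treatment effect is 1

coef(lm(y ~ a))["a"]                           # crude estimate, biased by confounding
coef(lm(y ~ a + x))["a"]                       # covariate-adjusted estimate, approximately 1
```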


1974 ◽  
Vol 5 (2) ◽  
pp. 87-97
Author(s):  
George W. Bright ◽  
L. Ray Carry

Preservice secondary school mathematics teachers were studied relative to 2 hypotheses: (a) Mathematicians and educators influence decisions in projected classroom situations; (b) Mathematicians exert more influence than educators. Ss were presented with projected classroom situations, each accompanied by 3 plausible resolutions. For each situation, labels of “mathematicians” and “educators” were attached to different resolutions to represent the alleged consensus of the respective professional group. The sample was split randomly into control (N = 33) and experimental (N = 28) groups. Randomization was used in producing the test instrument to control content effects. Kolmogorov-Smirnov statistics indicated significant influence of the label “mathematicians” (p < .03) and the label “educators” (p < .01) on the resolution choices of experimental Ss. A Wilcoxon matched-pairs, signed-rank statistic indicated no significant differential influence between the professional groups.


Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
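
As a simplified sketch of the ingredients involved (not the authors' estimator, and omitting the debiasing that yields uniformly valid confidence intervals), one can fit lasso models for the outcome and for the treatment with glmnet and combine them in a doubly robust, AIPW-type point estimate:

```r
# Simplified sketch on hypothetical high-dimensional data; not the estimator
# proposed in the article, and confidence intervals are deliberately omitted.
library(glmnet)
set.seed(6)
n <- 500; p <- 200
X <- matrix(rnorm(n * p), n, p)
a <- rbinom(n, 1, plogis(X[, 1] - 0.5 * X[, 2]))     # treatment depends on a few covariates
y <- 1 * a + X[, 1] + X[, 2] + rnorm(n)              # outcome; true treatment effect is 1

ps_fit  <- cv.glmnet(X, a, family = "binomial")      # lasso propensity score model
out_fit <- cv.glmnet(cbind(a, X), y,
                     penalty.factor = c(0, rep(1, p)))  # lasso outcome model, treatment unpenalized

e  <- predict(ps_fit,  newx = X, s = "lambda.min", type = "response")  # estimated propensity
m1 <- predict(out_fit, newx = cbind(1, X), s = "lambda.min")           # predicted outcome if treated
m0 <- predict(out_fit, newx = cbind(0, X), s = "lambda.min")           # predicted outcome if control

mean(m1 - m0 + a * (y - m1) / e - (1 - a) * (y - m0) / (1 - e))        # AIPW-type point estimate
```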


1994 ◽  
Vol 19 (3) ◽  
pp. 217-236 ◽  
Author(s):  
Paul W. Mielke ◽  
Kenneth J. Berry

In completely randomized experimental designs where population variances are equal under the null hypothesis, it is not uncommon to have multiplicative treatment effects that produce unequal variances under the alternative hypothesis. Permutation procedures are presented to test for (a) median location and scale shifts, (b) scale shifts only, and (c) mean location shifts only. Corresponding multivariate extensions are provided. Location-shift power comparisons between the parametric Bartlett-Nanda-Pillai trace test and three alternative multivariate permutation tests for five bivariate distributions are included.
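
A minimal sketch of the univariate case (c), a permutation test for a mean location shift in a completely randomized two-group design, is shown below on hypothetical data; the scale-shift and multivariate versions replace the test statistic accordingly.

```r
# Hedged sketch of a two-sample permutation test for a mean location shift in a
# completely randomized design, using hypothetical data.
set.seed(7)
y <- c(rnorm(30, mean = 0), rnorm(30, mean = 0.7))   # responses
g <- rep(c(0, 1), each = 30)                         # treatment assignment

obs <- mean(y[g == 1]) - mean(y[g == 0])             # observed difference in means
perm <- replicate(10000, {
  gp <- sample(g)                                    # re-randomize treatment labels
  mean(y[gp == 1]) - mean(y[gp == 0])
})
mean(abs(perm) >= abs(obs))                          # two-sided permutation p-value
```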


2021 ◽  
Vol 9 (1) ◽  
pp. 190-210
Author(s):  
Arvid Sjölander ◽  
Ola Hössjer

Abstract Unmeasured confounding is an important threat to the validity of observational studies. A common way to deal with unmeasured confounding is to compute bounds for the causal effect of interest, that is, a range of values that is guaranteed to include the true effect, given the observed data. Recently, bounds have been proposed that are based on sensitivity parameters, which quantify the degree of unmeasured confounding on the risk ratio scale. These bounds can be used to compute an E-value, that is, the degree of confounding required to explain away an observed association, on the risk ratio scale. We complement and extend this previous work by deriving analogous bounds, based on sensitivity parameters on the risk difference scale. We show that our bounds can also be used to compute an E-value, on the risk difference scale. We compare our novel bounds with previous bounds through a real data example and a simulation study.
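
For reference, the sketch below computes the familiar E-value on the risk ratio scale that the abstract alludes to (the VanderWeele and Ding formula); the risk-difference bounds and E-value proposed in the article are not reproduced here.

```r
# Standard E-value on the risk ratio scale: the strength of confounding, on the
# risk ratio scale, needed to explain away an observed association.
evalue_rr <- function(rr) {
  rr <- ifelse(rr < 1, 1 / rr, rr)       # orient the risk ratio away from 1
  rr + sqrt(rr * (rr - 1))
}
evalue_rr(2)                             # an observed risk ratio of 2 gives an E-value of about 3.41
```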

