Assessing the Effectiveness of Empirical Calibration Under Different Bias Scenarios

Author(s):  
Hon Hwang ◽  
Juan C Quiroz ◽  
Blanca Gallego

Abstract
Background: Estimations of causal effects from observational data are subject to various sources of bias. These biases can be adjusted for by using negative control outcomes not affected by the treatment. The empirical calibration procedure uses negative controls to calibrate p-values, and both negative and positive controls to calibrate the coverage of the 95% confidence interval of the outcome of interest. Although empirical calibration has been used in several large observational studies, there has been no systematic examination of its effect under different bias scenarios.
Methods: The effect of empirical calibration of confidence intervals was analyzed using simulated datasets with known treatment effects. The simulations covered a binary treatment and a binary outcome, with simulated biases resulting from an unmeasured confounder, model misspecification, measurement error, and lack of positivity. The performance of empirical calibration was evaluated by the change in confidence interval coverage and in the bias of the outcome of interest.
Results: Empirical calibration increased coverage of the outcome of interest by the 95% confidence interval under most settings but was inconsistent in adjusting the bias of the outcome of interest. Empirical calibration was most effective when adjusting for unmeasured confounding bias. Suitable negative controls had a large impact on the adjustment made by empirical calibration, but small improvements in the coverage of the outcome of interest were also observable when using unsuitable negative controls.
Conclusions: This work adds evidence for the efficacy of empirical calibration in calibrating the confidence intervals of treatment effects in observational studies. We recommend empirical calibration of confidence intervals, especially when there is a risk of unmeasured confounding.
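The calibration step this abstract evaluates can be reduced to a few lines: negative-control estimates (whose true log effect is zero) trace out the systematic-error distribution, which is then folded into the interval for the outcome of interest. The sketch below is a minimal illustration of that idea under a Gaussian systematic-error assumption, not the authors' simulation code, and the example numbers are made up.

```python
import math
import statistics

def calibrated_ci(log_rr, se, nc_log_rrs, z=1.96):
    """Calibrate a 95% CI using negative-control estimates.

    Negative controls have a true log effect of zero, so their estimates
    trace out the systematic-error distribution. Shift the point estimate
    by the mean systematic error and widen the interval by the extra
    spread the negative controls reveal.
    """
    mu = statistics.mean(nc_log_rrs)          # mean systematic error
    tau = statistics.stdev(nc_log_rrs)        # spread of systematic error
    total_se = math.sqrt(se ** 2 + tau ** 2)  # random + systematic error
    est = log_rr - mu
    return est - z * total_se, est + z * total_se

# negative controls centred near +0.2 suggest an upward bias of about 0.2
lo, hi = calibrated_ci(0.5, 0.1, [0.25, 0.15, 0.20, 0.30, 0.10])
```

The calibrated interval is both shifted toward the null and wider than the naive interval (0.5 ± 1.96 × 0.1), which is the behaviour the simulations assess: coverage improves, while the point-estimate bias is only partially corrected.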

Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary
Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.


2020 ◽  
Author(s):  
Youmi Suk ◽  
Hyunseung Kang

Recently, machine learning (ML) methods have been used in causal inference to estimate treatment effects and thereby reduce concerns about model misspecification. However, many, if not all, ML methods require that all confounders be measured in order to estimate treatment effects consistently. In this paper, we propose a family of ML methods that estimate treatment effects in the presence of cluster-level unmeasured confounders, a type of unmeasured confounder that is shared within each cluster and is common in multilevel observational studies. We show through simulation studies that our proposed methods are consistent and doubly robust when unmeasured cluster-level confounders are present. Using our methods, we also examine the effect of taking an algebra course on math achievement scores in the Early Childhood Longitudinal Study, a multilevel observational educational study. The proposed methods are available in the CURobustML R package.
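The doubly robust structure such estimators build on can be illustrated with the standard augmented inverse probability weighting (AIPW) formula. This is a generic toy with hand-specified nuisance models and simulated data, not the CURobustML implementation, and it ignores the cluster-level structure that is the paper's actual contribution.

```python
import random

def aipw_ate(data, prop, m1, m0):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    data: list of (x, a, y) triples; prop(x) is the propensity model
    P(A=1|x); m1(x) and m0(x) are outcome models under treatment and
    control. The estimate is consistent if either the propensity model
    or the pair of outcome models is correctly specified.
    """
    vals = []
    for x, a, y in data:
        e = prop(x)
        vals.append(m1(x) - m0(x)
                    + a * (y - m1(x)) / e
                    - (1 - a) * (y - m0(x)) / (1 - e))
    return sum(vals) / len(vals)

# simulated data with a known treatment effect of 2
rng = random.Random(0)
data = []
for _ in range(2000):
    x = rng.random()
    e = 0.3 + 0.4 * x                    # true propensity
    a = 1 if rng.random() < e else 0
    y = 2 * a + x + rng.gauss(0, 1)
    data.append((x, a, y))

ate = aipw_ate(data,
               lambda x: 0.3 + 0.4 * x,  # correct propensity model
               lambda x: 2 + x,          # correct outcome model, treated
               lambda x: x)              # correct outcome model, control
```

With both nuisance models correct, the estimate lands near the true effect of 2; double robustness means it would remain consistent if either the propensity model or the outcome models (but not both) were misspecified.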


2019 ◽  
Vol 48 (6) ◽  
pp. 674-675
Author(s):  
Lekshmi Rita Venugopal ◽  
Tom Varghese M

Negative control exposure analysis is an effective tool for evaluating the effect of unmeasured confounding in observational epidemiological studies. Several biases, including recall bias, time-varying confounding, and measurement bias, can affect the credibility of negative control exposure analysis for causal interpretation. This article focuses on the implications of differential measurement error between the exposed group and negative controls for the causal interpretation of negative control exposure analyses.


2020 ◽  
pp. 096228022097183
Author(s):  
Tao Liu ◽  
Joseph W Hogan

Confounding is a major concern when using data from observational studies to infer the causal effect of a treatment. Instrumental variables, when available, have been used to construct bounds on population average treatment effects when outcomes are binary and unmeasured confounding exists. With continuous outcomes, meaningful bounds are more challenging to obtain because the domain of the outcome is unrestricted. In this paper, we propose to unify the instrumental variable and inverse probability weighting methods, together with suitable assumptions in the context of an observational study, to construct meaningful bounds on causal treatment effects. The contextual assumptions are imposed in terms of the potential outcomes that are partially identified by the data. The inverse probability weighting component incorporates a sensitivity parameter to encode the effect of unmeasured confounding. The instrumental variable and inverse probability weighting methods are unified using principal stratification. By solving the resulting system of estimating equations, we are able to quantify both the causal treatment effect and the sensitivity parameter (i.e., the degree of unmeasured confounding). We demonstrate our method by analyzing data from the HIV Epidemiology Research Study.
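The inverse probability weighting component with a sensitivity parameter can be sketched as follows. This toy only shows how perturbing the assumed propensities sweeps out a range of effect estimates; the authors' actual method couples this with an instrumental variable through principal stratification, which is not reproduced here, and the records and sensitivity range below are made up.

```python
def ipw_effect(data, sens=1.0):
    """Hajek-style IPW treatment-effect estimate with a sensitivity knob.

    data: list of (a, y, e) with a = treatment indicator, y = outcome,
    and e = assumed propensity P(A=1 | covariates). sens rescales the
    assumed propensities to encode unmeasured confounding; sens = 1
    corresponds to no unmeasured confounding.
    """
    sy1 = sw1 = sy0 = sw0 = 0.0
    for a, y, e in data:
        e = min(max(sens * e, 0.01), 0.99)   # keep weights finite
        if a:
            sy1 += y / e
            sw1 += 1 / e
        else:
            sy0 += y / (1 - e)
            sw0 += 1 / (1 - e)
    return sy1 / sw1 - sy0 / sw0

# four made-up records: (treated, outcome, assumed propensity)
data = [(1, 3.0, 0.6), (1, 5.0, 0.4), (0, 1.0, 0.6), (0, 3.0, 0.4)]
point = ipw_effect(data)                              # effect at sens = 1
bounds = [ipw_effect(data, s) for s in (0.8, 1.25)]   # sensitivity sweep
```

Sweeping the sensitivity parameter over a plausible range yields an interval of effect estimates rather than a single point, which is the "meaningful bounds" idea the abstract describes.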


Marketing ZFP ◽  
2019 ◽  
Vol 41 (4) ◽  
pp. 33-42
Author(s):  
Thomas Otter

Empirical research in marketing is often, at least in part, exploratory. The goal of exploratory research, by definition, extends beyond the empirical calibration of parameters in well-established models to the empirical assessment of different model specifications. In this context, researchers often rely on statistical information about the parameters in a given model to learn about likely model structures. An example is the search for the 'true' set of covariates in a regression model based on confidence intervals of regression coefficients. The purpose of this paper is to illustrate and compare different measures of statistical information about model parameters in the context of a generalized linear model: classical confidence intervals, bootstrapped confidence intervals, and Bayesian posterior credible intervals from a model that adapts its dimensionality to the information in the data. I find that inference from the adaptive Bayesian model dominates inference based on classical and bootstrapped intervals in a given model.
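The contrast between the first two interval types is easy to make concrete for a single parameter. The sketch below uses a sample mean as a stand-in for a GLM coefficient, with made-up data; the adaptive Bayesian model from the paper is not reproduced.

```python
import math
import random
import statistics

def classical_ci(xs, z=1.96):
    """Normal-theory 95% confidence interval for the mean."""
    se = statistics.stdev(xs) / math.sqrt(len(xs))
    m = statistics.mean(xs)
    return m - z * se, m + z * se

def bootstrap_ci(xs, reps=2000, seed=0):
    """Percentile-bootstrap 95% confidence interval for the mean."""
    rng = random.Random(seed)
    stats = sorted(
        statistics.mean(rng.choices(xs, k=len(xs))) for _ in range(reps)
    )
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

rng = random.Random(1)
xs = [rng.gauss(1.0, 2.0) for _ in range(200)]
c_lo, c_hi = classical_ci(xs)
b_lo, b_hi = bootstrap_ci(xs)
```

For roughly symmetric data the two intervals nearly coincide; they diverge when the sampling distribution of the estimator is skewed, which is one motivation for comparing such measures across model specifications.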


Genetics ◽  
1998 ◽  
Vol 148 (1) ◽  
pp. 525-535
Author(s):  
Claude M Lebreton ◽  
Peter M Visscher

Abstract
Several nonparametric bootstrap methods are tested to obtain better confidence intervals for the quantitative trait loci (QTL) positions, i.e., with minimal width and unbiased coverage probability. Two selective resampling schemes are proposed as a means of conditioning the bootstrap on the number of genetic factors in our model inferred from the original data. The selection is based on criteria related to the estimated number of genetic factors, and only the retained bootstrapped samples will contribute a value to the empirically estimated distribution of the QTL position estimate. These schemes are compared with a nonselective scheme across a range of simple configurations of one QTL on a one-chromosome genome. In particular, the effect of the chromosome length and the relative position of the QTL are examined for a given experimental power, which determines the confidence interval size. With the test protocol used, it appears that the selective resampling schemes are either unbiased or least biased when the QTL is situated near the middle of the chromosome. When the QTL is closer to one end, the likelihood curve of its position along the chromosome becomes truncated, and the nonselective scheme then performs better inasmuch as the percentage of estimated confidence intervals that actually contain the real QTL's position is closer to expectation. The nonselective method, however, produces larger confidence intervals. Hence, we advocate use of the selective methods, regardless of the QTL position along the chromosome (to reduce confidence interval sizes), but we leave the problem open as to how the method should be altered to take into account the bias of the original estimate of the QTL's position.
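The selective resampling idea, reduced to its core: resample, re-estimate, but let only bootstrap replicates that satisfy a selection criterion tied to the original fit contribute to the empirical distribution. The criterion below (the replicate's estimate shares the original estimate's sign) is a deliberately simple stand-in for the authors' criterion based on the inferred number of genetic factors, and the data are made up.

```python
import random
import statistics

def selective_bootstrap_ci(xs, keep, reps=2000, seed=0):
    """Percentile 95% CI built only from retained bootstrap replicates.

    xs: data; keep(original_est, boot_est) -> bool decides whether a
    bootstrap estimate contributes to the empirical distribution. This
    is the conditioning step of a selective resampling scheme.
    """
    rng = random.Random(seed)
    orig = statistics.mean(xs)
    kept = []
    for _ in range(reps):
        est = statistics.mean(rng.choices(xs, k=len(xs)))
        if keep(orig, est):
            kept.append(est)
    kept.sort()
    return kept[int(0.025 * len(kept))], kept[int(0.975 * len(kept))]

rng = random.Random(2)
xs = [rng.gauss(0.5, 1.0) for _ in range(100)]
# stand-in criterion: retain replicates whose estimate shares the
# original estimate's sign
lo, hi = selective_bootstrap_ci(xs, lambda orig, boot: orig * boot > 0)
```

Because the retained replicates are a subset, the resulting interval can be narrower than the unconditional one, which mirrors the width reduction the abstract reports for the selective schemes.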


2013 ◽  
Vol 18 (1) ◽  
pp. 86-93
Author(s):  
Gustavo Antônio Martins Brandão ◽  
Rafael Menezes Simas ◽  
Leandro Moreira de Almeida ◽  
Juliana Melo da Silva ◽  
Marcelo de Castro Meneghim ◽  
...  

OBJECTIVE: To evaluate the in vitro ionic degradation and slot base corrosion of metallic brackets subjected to brushing with dentifrices, through analysis of chemical composition by Energy Dispersive Spectroscopy (EDS) and qualitative analysis by Scanning Electron Microscopy (SEM). METHODS: Thirty-eight brackets were selected and randomly divided into four experimental groups (n = 7); two further groups (n = 5) served as positive and negative controls. Simulated orthodontic braces were assembled using 0.019 x 0.025-in stainless steel wires and elastomeric rings. The groups were divided according to surface treatment: G1 (Máxima Proteção Anticáries®); G2 (Total 12®); G3 (Sensitive®); G4 (Branqueador®); positive control (artificial saliva); and negative control (no treatment). Twenty-eight brushing cycles were performed, and evaluations were made before (T0) and after (T1) the experiment. RESULTS: The Wilcoxon test showed no difference in the ionic concentrations of titanium (Ti), chromium (Cr), iron (Fe) and nickel (Ni) between groups. G2 presented a significant reduction (p < 0.05) in the concentration of aluminium (Al) ions. Groups G3 and G4 presented a significant increase (p < 0.05) in the concentration of aluminium ions. The SEM analysis showed increased characteristics indicative of corrosion in groups G2, G3 and G4. CONCLUSION: The EDS analysis revealed that the control groups and G1 did not suffer alterations in chemical composition. G2 presented degradation in the amount of Al ions. G3 and G4 showed an increase in the concentration of Al. Immersion in artificial saliva and the dentifrice Máxima Proteção Anticáries® did not alter the surface polishing. The dentifrices Total 12®, Sensitive® and Branqueador® altered the surface polishing.


2005 ◽  
Vol 127 (4) ◽  
pp. 280-284 ◽  
Author(s):  
Noah D. Manring

The objective of this paper is to analyze the uncertainty associated with pump efficiency measurements and to determine reasonable confidence intervals for these data. In the past, many industrial sales and some academic research have been based upon experimental pump-efficiency data; yet few have questioned the accuracy of the experimental data, and no one has provided a confidence interval that reflects the range of uncertainty in the measurement. In this paper, a method for calculating this confidence interval is presented, and it is shown that substantially large confidence intervals exist within the testing results of a pump. Furthermore, it is recommended that these confidence intervals be included with efficiency data whenever they are reported.
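The kind of propagation such a method formalizes can be sketched with the standard first-order (delta-method) combination of relative instrument uncertainties. The measured values and the 1% instrument uncertainties below are hypothetical, and this sketch is not the paper's full derivation.

```python
import math

def efficiency_ci(dp, q, torque, omega, rel_u, z=1.96):
    """95% confidence interval for overall pump efficiency.

    Efficiency = hydraulic output power (dp * q) over mechanical input
    power (torque * omega). rel_u maps each measured quantity to its
    relative standard uncertainty; independent errors combine in
    quadrature (first-order uncertainty propagation).
    """
    eta = (dp * q) / (torque * omega)
    rel = math.sqrt(sum(u ** 2 for u in rel_u.values()))
    half = z * eta * rel
    return eta, eta - half, eta + half

# hypothetical measurements: 20 MPa pressure rise, 1 L/s flow,
# 35 N*m torque, 600 rad/s shaft speed, each instrument good to a
# 1% relative standard uncertainty
eta, lo, hi = efficiency_ci(
    20e6, 1e-3, 35.0, 600.0,
    {"dp": 0.01, "q": 0.01, "torque": 0.01, "omega": 0.01},
)
```

Even with 1% instruments across the board, the interval spans roughly ±4% of the efficiency value, which illustrates the paper's point that reported efficiencies should carry their confidence intervals.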

