Sensitivity analysis for unmeasured confounders using an electronic spreadsheet

2007 ◽  
Vol 41 (3) ◽  
pp. 446-452 ◽  
Author(s):  
Maria Deolinda Borges Cabral ◽  
Ronir Raggio Luiz

In studies assessing the effect of a given exposure variable on a specific outcome of interest, the mistaken impression may arise that the exposure is producing the outcome when the observed effect is in fact due to an existing confounder. Nevertheless, quantitative techniques are rarely used to determine the potential influence of unmeasured confounders. Sensitivity analysis is a statistical technique for quantifying the impact of an unmeasured confounding variable on the association of interest. The purpose of this study was to make two sensitivity analysis methods from the literature, developed by Rosenbaum and Greenland, feasible to apply using an electronic spreadsheet. This should make it easier for researchers to include this quantitative tool among the procedures commonly used to validate results.
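As a rough illustration of the kind of calculation such a spreadsheet can carry out, the sketch below implements the standard external-adjustment formula for a single binary unmeasured confounder, in the spirit of Greenland's method; the function name and the example prevalences and risk ratios are illustrative assumptions, not values from the paper.

```python
def externally_adjusted_rr(rr_obs, p_exposed, p_unexposed, rr_confounder_outcome):
    """External adjustment of an observed risk ratio for a binary confounder.

    rr_obs: observed (crude) risk ratio for the exposure-outcome association
    p_exposed, p_unexposed: assumed prevalence of the unmeasured confounder
        among the exposed and unexposed, respectively
    rr_confounder_outcome: assumed risk ratio between confounder and outcome
    Returns the risk ratio adjusted for the hypothesized confounder.
    """
    bias_factor = (
        (p_exposed * (rr_confounder_outcome - 1) + 1)
        / (p_unexposed * (rr_confounder_outcome - 1) + 1)
    )
    return rr_obs / bias_factor


# Example: how much of an observed RR = 2.0 could a confounder explain?
for rr_cu in (1.5, 2.0, 4.0):
    adj = externally_adjusted_rr(2.0, p_exposed=0.6, p_unexposed=0.3,
                                 rr_confounder_outcome=rr_cu)
    print(f"confounder-outcome RR = {rr_cu}: adjusted RR = {adj:.2f}")
```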

2020 ◽  
Author(s):  
Xiang Zhang ◽  
James Stamey ◽  
Maya B Mathur

Purpose: We review statistical methods for assessing the possible impact of bias due to unmeasured confounding in real-world data analysis and provide detailed recommendations for choosing among the methods. Methods: By updating an earlier systematic review, we summarize modern statistical best practices for evaluating and correcting for potential bias due to unmeasured confounding when estimating causal treatment effects from non-interventional studies. Results: We suggest a hierarchical structure for assessing unmeasured confounding. First, for initial sensitivity analyses, we strongly recommend applying a recently developed method, the E-value, which is straightforward to apply and does not require prior knowledge or assumptions about the unmeasured confounder(s). When some such knowledge is available, the E-value could be supplemented by the rule-out or array method at this step. If these initial analyses suggest the results may not be robust to unmeasured confounding, subsequent analyses could be conducted using more specialized statistical methods, which we categorize based on whether they require access to external data on the suspected unmeasured confounder(s), internal data, or no data. Other factors for choosing the subsequent sensitivity analysis methods are also introduced and discussed, including the types of unmeasured confounders and whether the subsequent sensitivity analysis is intended to provide a corrected causal treatment effect. Conclusion: Various analytical methods have been proposed to address unmeasured confounding, but little research has discussed a structured approach to selecting appropriate methods in practice. By providing practical suggestions for choosing appropriate initial and, potentially, more specialized subsequent sensitivity analyses, we hope to facilitate the widespread reporting of such sensitivity analyses in non-interventional studies. The suggested approach also has the potential to inform pre-specification of sensitivity analyses before executing the analysis, and therefore to increase transparency and limit selective study reporting.
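For reference, the E-value for a risk ratio has a simple closed form (VanderWeele and Ding): it is the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need to have with both treatment and outcome to fully explain away the observed estimate. A minimal sketch, with illustrative inputs:

```python
import math

def e_value(rr):
    """E-value for a risk ratio: RR + sqrt(RR * (RR - 1)) after moving the
    estimate to the side of the null at or above 1."""
    rr = 1.0 / rr if rr < 1 else rr   # work on the >= 1 side of the null
    return rr + math.sqrt(rr * (rr - 1.0))

print(e_value(1.8))   # point estimate (illustrative): E-value = 3.0
print(e_value(1.2))   # e.g. the confidence limit closest to the null
```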


2007 ◽  
Vol 136 (3) ◽  
pp. 334-340 ◽  
Author(s):  
M. D. B. CABRAL ◽  
R. R. LUIZ

Summary: The objective of this study was to assess the impact of a possible unmeasured confounding variable on a previously published association between household water supply and positive hepatitis A serology. This was estimated by integrating two methods of sensitivity analysis, Rosenbaum's method and Greenland's external adjustment. The association between household water supply and positive hepatitis A serology (the outcome) was insensitive to confounding unless the odds ratio for the association between the confounder and the outcome was ≥4. The integration of the two sensitivity analysis methods proved useful when assessing the effects of a potential unmeasured confounder.
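A minimal sketch of the Rosenbaum-style side of such an analysis for matched binary data is given below: it computes the worst-case one-sided p-value of a McNemar-type test under a hidden-bias parameter gamma. The counts used are purely illustrative and are not the hepatitis A data analysed in the paper.

```python
from scipy.stats import binom

def rosenbaum_upper_p(n_discordant, n_favoring_exposed, gamma):
    """Worst-case (upper-bound) one-sided p-value for a McNemar-type test on
    matched pairs under Rosenbaum's sensitivity parameter gamma, the maximum
    odds of differential assignment within a pair due to hidden bias."""
    p_plus = gamma / (1.0 + gamma)
    # P(X >= n_favoring_exposed) when X ~ Binomial(n_discordant, p_plus)
    return binom.sf(n_favoring_exposed - 1, n_discordant, p_plus)

# Illustrative values only: 50 discordant pairs, 35 favoring the exposed group.
for gamma in (1.0, 1.5, 2.0, 3.0):
    print(gamma, round(rosenbaum_upper_p(50, 35, gamma), 4))
```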


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Pablo Martínez-Camblor ◽  
Todd A. MacKenzie ◽  
Douglas O. Staiger ◽  
Phillip P. Goodney ◽  
A. James O’Malley

Abstract: Proportional hazards Cox regression models are frequently used to analyze the impact of different factors on time-to-event outcomes. Most practitioners are familiar with and interpret research results in terms of hazard ratios. Direct differences in survival curves are, however, easier for a general audience to understand and to visualize graphically. Analyzing the difference among the survival curves for the population at risk allows easy interpretation of the impact of a therapy over the follow-up period. When the available information comes from observational studies, the observed results are potentially subject to a plethora of measured and unmeasured confounders. Although there are procedures to adjust survival curves for measured covariates, the case of unmeasured confounders has not yet been considered in the literature. In this article we provide a semi-parametric procedure for adjusting survival curves for measured and unmeasured confounders. The method augments our novel instrumental variable estimation method for survival time data in the presence of unmeasured confounding with a procedure for mapping estimates onto the survival probability and the expected survival time scales.
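The sketch below is not the authors' semi-parametric instrumental-variable procedure; it only illustrates the general idea of mapping a hazard ratio onto the survival-probability scale, using the standard proportional-hazards relation S_treated(t) = S_control(t)^HR with made-up baseline survival values.

```python
import numpy as np

def survival_difference(baseline_survival, hazard_ratio):
    """Map a hazard ratio onto the survival-probability scale.

    Under proportional hazards, S_treated(t) = S_control(t) ** HR, so the
    difference between the curves at each time point is
    S_control(t) ** HR - S_control(t).
    """
    s0 = np.asarray(baseline_survival, dtype=float)
    return s0 ** hazard_ratio - s0

# Illustrative control-arm survival probabilities at a grid of follow-up times.
s_control = np.array([0.95, 0.90, 0.82, 0.75, 0.70])
print(survival_difference(s_control, hazard_ratio=0.8))
```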


2021 ◽  
Author(s):  
Lateef Amusa ◽  
Temesgen Zewotir ◽  
Delia North

Abstract: Unmeasured confounding can cause considerable problems in observational studies and may threaten the validity of estimates of causal treatment effects. There has been discussion of the amount of bias in treatment effect estimates that can occur due to unmeasured confounding. We investigate the robustness of a relatively new causal inference technique, targeted maximum likelihood estimation (TMLE), to the impact of unmeasured confounders, and benchmark TMLE's performance against the inverse probability of treatment weighting (IPW) method. We utilize a plasmode-like simulation based on variables and parameters from the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT) and evaluate the accuracy and precision of the estimated treatment effects. Although TMLE performed better in most of the scenarios considered, our simulation results suggest that both methods performed reasonably well in estimating the marginal odds ratio in the presence of unmeasured confounding. Nonetheless, the only remedy for unobserved confounding is to control for as many available covariates as possible in an observational study, because not even TMLE can provide a safeguard against bias from unmeasured confounders.
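The toy simulation below sketches the flavour of such a comparison under an assumed data-generating process: a binary treatment and outcome both depend on a measured covariate X and an unmeasured confounder U, and the marginal odds ratio is estimated by IPW using X only, so residual confounding remains. It is not the SUPPORT-based plasmode simulation from the paper, and the TMLE arm is omitted here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# Measured covariate X and unmeasured confounder U (illustrative data only).
x = rng.normal(size=n)
u = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(0.5 * x + 0.5 * u)))
a = rng.binomial(1, p_treat)
p_out = 1 / (1 + np.exp(-(-1.0 + 0.7 * a + 0.5 * x + 0.5 * u)))
y = rng.binomial(1, p_out)

# Propensity score from X only: U stays unmeasured.
ps_model = sm.Logit(a, sm.add_constant(x)).fit(disp=0)
ps = ps_model.predict(sm.add_constant(x))
w = a / ps + (1 - a) / (1 - ps)

# Weighted outcome risks by arm, then the marginal odds ratio.
r1 = np.average(y[a == 1], weights=w[a == 1])
r0 = np.average(y[a == 0], weights=w[a == 0])
print("IPW marginal OR:", (r1 / (1 - r1)) / (r0 / (1 - r0)))
```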


2021 ◽  
pp. 014920632110064
Author(s):  
John R. Busenbark ◽  
Hyunjung (Elle) Yoon ◽  
Daniel L. Gamache ◽  
Michael C. Withers

Management research increasingly recognizes omitted variables as a primary source of endogeneity that can induce bias in empirical estimation. Methodological scholarship on the topic overwhelmingly advocates for empirical researchers to employ two-stage instrumental variable modeling, a recommendation we approach with trepidation given the challenges associated with this analytic procedure. Over the course of two studies, we leverage a statistical technique called the impact threshold of a confounding variable (ITCV) to better conceptualize what types of omitted variables might actually bias causal inference and whether they appear to have done so in published management research. In Study 1, we apply the ITCV to published studies and find that the majority of causal inferences are unlikely to be biased by omitted variables. In Study 2, we respecify an influential simulation on endogeneity and determine that only the most pervasive omitted variables appear to substantively impact causal inference. Our simulation also reveals that only the strongest instruments (perhaps unrealistically strong) attenuate bias in meaningful ways. Taken together, we offer guidelines for how scholars can conceptualize omitted variables in their research, provide a practical approach that balances the tradeoffs associated with instrumental variable models, and comprehensively describe how to implement the ITCV technique.
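A minimal sketch of the ITCV calculation in the sense of Frank (2000) is shown below; the degrees-of-freedom convention and the example inputs are assumptions and may differ slightly from published implementations such as konfound.

```python
import math
from scipy.stats import t as t_dist

def itcv(t_stat, n, n_covariates, alpha=0.05):
    """Impact threshold of a confounding variable, sketched.

    Converts the focal coefficient's t-statistic into a partial correlation,
    finds the critical correlation that would just reach significance, and
    returns the minimum product of the confounder's correlations with the
    predictor and the outcome needed to overturn the inference.
    """
    df = n - n_covariates - 2                    # df convention varies by implementation
    r_xy = t_stat / math.sqrt(t_stat ** 2 + df)  # observed partial correlation
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    r_crit = t_crit / math.sqrt(t_crit ** 2 + df)
    return (r_xy - r_crit) / (1 - abs(r_crit))

# Illustrative: t = 3.2 from a model with 4 covariates and n = 500.
print(round(itcv(3.2, n=500, n_covariates=4), 3))
```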


2020 ◽  
Vol 15 (2) ◽  
pp. 152-165
Author(s):  
Harekrishna Roy ◽  
Sisir Nandi ◽  
Ungarala Pavani ◽  
Uppuluri Lakshmi ◽  
Tamma Saicharan Reddy ◽  
...  

Background: The present study deals with the formulation and optimization of piroxicam fast-dissolving tablets and analyzes the impact of the independent variables when selecting the optimized formulation, utilizing Quality by Design (QbD) and a Box-Behnken Design (BBD). Methods: Seventeen formulations were prepared by the direct compression technique by altering the proportions of croscarmellose sodium, spray-dried lactose and hydroxypropyl methylcellulose (HPMC K4M). The BBD statistical technique was used to optimize the formulations and correlate the relationships among the variables. The powder mixture characteristics and tablet physicochemical properties such as hardness, friability, drug content, disintegration time (DT) and dissolution were also determined, with dissolution tested in 900 ml of 0.1 N HCl (pH 1.2) at 37 ± 0.5°C. Results: A significant quadratic model and second-order polynomial equations were established using the BBD. To examine the relationships between variables and responses, 3D response surface and 2D contour plots were generated. A perturbation graph was also plotted to identify the deviation of the variables from the mean point. An optimized formula was prepared based on the predicted response, and the resulting responses were observed to be close to the predicted values. Conclusion: An optimized formulation with the desired parameters and responses can be obtained by BBD, and the approach could be applied to larger experiments involving a greater number of variables and responses.
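As an illustration of the kind of second-order polynomial model a Box-Behnken design supports, the sketch below fits a full quadratic response-surface model to a coded three-factor, 17-run BBD layout; the factor labels mirror the study's excipients, but the response values are invented for the example and are not the paper's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Coded factor levels (-1, 0, +1) for a 3-factor Box-Behnken design and an
# illustrative response (e.g. disintegration time in seconds).
df = pd.DataFrame({
    "ccs":  [-1, 1, -1, 1, -1, 1, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "sdl":  [-1, -1, 1, 1, 0, 0, 0, 0, -1, 1, -1, 1, 0, 0, 0, 0, 0],
    "hpmc": [0, 0, 0, 0, -1, -1, 1, 1, -1, -1, 1, 1, 0, 0, 0, 0, 0],
    "dt":   [44, 30, 41, 28, 46, 31, 52, 37, 40, 36, 49, 43, 38, 39, 37, 38, 40],
})

# Full second-order model: main effects, two-way interactions, quadratic terms.
model = smf.ols(
    "dt ~ ccs + sdl + hpmc + ccs:sdl + ccs:hpmc + sdl:hpmc"
    " + I(ccs**2) + I(sdl**2) + I(hpmc**2)",
    data=df,
).fit()
print(model.summary())
```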


2013 ◽  
Vol 1 (2) ◽  
pp. 209-234 ◽  
Author(s):  
Pengyuan Wang ◽  
Mikhail Traskin ◽  
Dylan S. Small

Abstract: The before-and-after study with multiple unaffected control groups is widely applied to study treatment effects. Current methods usually assume that the control groups' differences between the before and after periods, i.e., the group time effects, follow a normal distribution. However, there is usually no strong a priori evidence for the normality assumption, and there are not enough control groups to check it. We propose to use a flexible skew-t distribution family to model group time effects, and consider a range of plausible skew-t distributions. Based on the skew-t distribution assumption, we propose a robust-t method that guarantees the nominal significance level under a wide range of skew-t distributions, and hence makes the inference robust to misspecification of the distribution of group time effects. We also propose a two-stage approach, which has lower power than the robust-t method but provides an opportunity to conduct sensitivity analysis. Hence, the overall method of analysis is to use the robust-t method to test over the hypothesized range of shapes of group variation; if the test fails to reject, use the two-stage method to conduct a sensitivity analysis to see whether there is a subset of group variation parameters for which we can be confident that there is a treatment effect. We apply the proposed methods to two datasets. One dataset is from the Current Population Survey (CPS), with which we study the impact of the Mariel Boatlift on Miami unemployment rates between 1979 and 1982. The other dataset contains student enrollment and grade-repetition data from West Germany in the 1960s, with which we study the impact of the short school year in 1966-1967 on grade-repetition rates.
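The sketch below shows one common way to draw group time effects from a skew-t family, the Azzalini-type construction of a skew-normal variate scaled by an independent chi-square; the parameterization and the example values are assumptions and may not match the paper's exact specification.

```python
import numpy as np
from scipy.stats import skewnorm, chi2

def skew_t_rvs(alpha, df, size, rng=None):
    """Draw from an Azzalini-type skew-t: a skew-normal variate divided by
    sqrt(chi-square/df). alpha controls skewness, df controls tail heaviness."""
    rng = np.random.default_rng(rng)
    z = skewnorm.rvs(alpha, size=size, random_state=rng)
    w = chi2.rvs(df, size=size, random_state=rng)
    return z / np.sqrt(w / df)

# Illustrative group time effects for, say, 8 control groups.
effects = skew_t_rvs(alpha=3.0, df=5, size=8, rng=42)
print(effects)
```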


Proceedings ◽  
2020 ◽  
Vol 58 (1) ◽  
pp. 31
Author(s):  
Jeremy Arancio ◽  
Ahmed Ould El Moctar ◽  
Minh Nguyen Tuan ◽  
Faradj Tayat ◽  
Jean-Philippe Roques

In the race for energy production, supplier companies are concerned with the thermal rating of offshore cables installed in a J-tube, which is not covered by the IEC 60287 standards, and are now looking for solutions to optimize this type of system. This paper presents a numerical model, based on the lumped element method, capable of calculating the temperature field of a power transmission cable installed in a J-tube. The model is validated against the existing literature. A sensitivity analysis performed using Sobol indices is then presented in order to understand the impact of the different parameters involved in the heating of the cable. This analysis provides an understanding of the thermal phenomena in the J-tube and paves the way for potential technical and economic solutions to increase the ampacity of offshore cables installed in a J-tube.
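A minimal sketch of a Sobol sensitivity analysis of such a model is shown below, assuming the SALib package is available; the input names, ranges and the toy temperature function are placeholders, not the paper's lumped-element model.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical inputs for a simplified cable-in-J-tube temperature model.
problem = {
    "num_vars": 3,
    "names": ["conductor_current", "ambient_temp", "tube_emissivity"],
    "bounds": [[500.0, 1500.0], [10.0, 40.0], [0.2, 0.9]],
}

def cable_temperature(x):
    """Toy surrogate: Joule heating plus ambient offset, damped by emissivity."""
    current, t_amb, eps = x
    return t_amb + 3.0e-5 * current**2 / (0.5 + eps)

params = saltelli.sample(problem, 1024)
temps = np.array([cable_temperature(row) for row in params])
indices = sobol.analyze(problem, temps)
print(indices["S1"])   # first-order Sobol indices
print(indices["ST"])   # total-order Sobol indices
```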

