Sensitivity Analysis for Pretreatment Confounding With Multiple Mediators

2020 ◽  
pp. 107699862093450
Author(s):  
Soojin Park ◽  
Kevin M. Esterling

The causal mediation literature has developed techniques to assess the sensitivity of an inference to pretreatment confounding, but these techniques are limited to the case of a single mediator. In this article, we extend sensitivity analysis to possible violations of pretreatment confounding in the case of multiple mediators. In particular, we develop sensitivity analyses under three alternative approaches to effect decomposition: (1) jointly considered mediators, (2) identifiable direct and indirect paths, and (3) interventional analogue effects. Under reasonable assumptions, each approach reduces to a single procedure for assessing sensitivity in the presence of simultaneous pre- and posttreatment confounding. We demonstrate our sensitivity analysis techniques with a framing experiment that examines whether anxiety mediates respondents’ attitudes toward immigration in response to an information prompt.

2009 ◽  
Vol 11 (3-4) ◽  
pp. 282-296 ◽  
Author(s):  
Srikanta Mishra

Formal uncertainty and sensitivity analysis techniques enable hydrologic modelers to quantify the range of likely outcomes, the likelihood of each outcome, and the key contributors to output uncertainty. Such information is an improvement over standard deterministic point estimates for making engineering decisions under uncertainty. This paper provides an overview of uncertainty analysis techniques that map model input uncertainty into uncertainty in model predictions: Monte Carlo simulation, first-order second-moment analysis, the point estimate method, logic tree analysis, and the first-order reliability method. Also presented is an overview of sensitivity analysis techniques that identify the parameters controlling the uncertainty in model predictions: stepwise regression, mutual information (entropy) analysis, and classification tree analysis. Two case studies demonstrate the practical applicability of these techniques, and a systematic framework for carrying out uncertainty and sensitivity analyses is also discussed.
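As a minimal illustration of the first technique listed, Monte Carlo simulation propagates input uncertainty into output uncertainty by repeated random sampling. The model and parameter ranges below are hypothetical stand-ins, not taken from the paper's case studies:

```python
import random

# Hypothetical steady-state model: output grows with recharge and
# shrinks with conductivity (illustrative only, not from the paper).
def model(conductivity, recharge):
    return recharge / conductivity

random.seed(1)
outputs = []
for _ in range(10_000):
    k = random.uniform(0.5, 2.0)   # uncertain hydraulic conductivity
    r = random.uniform(0.1, 0.3)   # uncertain recharge rate
    outputs.append(model(k, r))

# Summarize the induced output distribution instead of a point estimate.
outputs.sort()
mean = sum(outputs) / len(outputs)
p05 = outputs[int(0.05 * len(outputs))]   # 5th percentile
p95 = outputs[int(0.95 * len(outputs))]   # 95th percentile
```

The resulting percentile band is exactly the kind of "range of likely outcomes" the abstract contrasts with a single deterministic estimate.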


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii105-ii105
Author(s):  
Alexander Hulsbergen ◽  
Asad Lak ◽  
Yu Tung Lo ◽  
Nayan Lamba ◽  
Steven Nagtegaal ◽  
...  

Abstract

INTRODUCTION: In several cancers treated with immune checkpoint inhibitors (ICIs), a remarkable association between the occurrence of immune-related adverse events (irAEs) and superior oncological outcomes has been reported. This effect has hitherto not been reported in the brain. This study aimed to investigate the relation between irAEs and outcomes in brain metastases (BM) patients treated with both local treatment to the brain (LT; i.e. surgery and/or radiation) and ICIs.

METHODS: This study is a retrospective cohort analysis of patients treated for non-small cell lung cancer (NSCLC) BMs in a tertiary institution in Boston, MA. Outcomes of interest were overall survival (OS) and intracranial progression-free survival (IC-PFS), measured from the time of LT. Sensitivity analyses were performed to account for immortal time bias (i.e., patients who live longer receive more cycles of ICIs and thus have more opportunity to develop an irAE).

RESULTS: A total of 184 patients were included; 62 (33.7%) were treated with neurosurgical resection and 122 (66.3%) with upfront brain radiation. irAEs occurred in 62 patients (33.7%). After adjusting for lung-Graded Prognostic Assessment, type of LT, type of ICI, newly diagnosed vs. recurrent BM, BM size and number, targetable mutations, and smoking status, irAEs were strongly associated with better OS (HR 0.33, 95% CI 0.19 – 0.58, p < 0.0001) and IC-PFS (HR 0.41; 95% CI 0.26 – 0.65; p = 0.0001). Landmark analysis including only patients who received more than 3 cycles of ICI (n = 133) demonstrated similar results for OS and IC-PFS, as did sensitivity analysis adjusting for the number of cycles administered (HR range 0.36 – 0.51, all p-values < 0.02).

CONCLUSIONS: After adjusting for known prognostic factors, irAEs strongly predict superior outcomes after LT in NSCLC BM patients. Sensitivity analysis suggests that this is unlikely due to immortal time bias.


Author(s):  
Amin Hosseini ◽  
Touraj Taghikhany ◽  
Milad Jahangiri

In the past few years, many studies have demonstrated the efficiency of Simple Adaptive Control (SAC) in mitigating earthquake damage to building structures. Nevertheless, the weighting matrices of this controller must be selected after a large number of sensitivity analyses. This step is time-consuming and does not necessarily yield a controller with optimum performance. In the current study, an innovative method is introduced for tuning the SAC weighting matrices that dispenses with excessive sensitivity analysis. To this end, we define an optimization problem solved by an intelligent evolutionary algorithm, with control indices combined in the objective function. The efficiency of the introduced method is investigated for a 6-story building structure equipped with magnetorheological dampers under different seismic actions, with and without uncertainty in the structural model. The results indicate that the controller designed by the introduced method performs well under different conditions of model uncertainty. Furthermore, it improves the seismic performance of the structure compared to controllers designed through sensitivity analysis.


Author(s):  
Marco Doretti ◽  
Martina Raggi ◽  
Elena Stanghellini

Abstract

With reference to causal mediation analysis, a parametric expression for natural direct and indirect effects is derived for the setting of a binary outcome with a binary mediator, both modelled via logistic regression. The proposed effect decomposition operates on the odds ratio scale and does not require the outcome to be rare. It generalizes existing decompositions by allowing for interactions of both the exposure and the mediator with the confounding covariates. The derived parametric formulae are flexible, in that they readily adapt to the two different natural effect decompositions defined in the mediation literature. In parallel with results derived under the rare outcome assumption, they also make explicit the relationship between the causal effects and the corresponding pathway-specific logistic regression parameters, isolating the controlled direct effect within the natural direct effect expressions. Formulae for standard errors, obtained via the delta method, are also given. An empirical application to data from a microfinance experiment performed in Bosnia and Herzegovina is presented.
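On the odds ratio scale, the two natural effect decompositions referred to above take the standard multiplicative form (a generic sketch from the mediation literature; the paper's parametric expressions in terms of the logistic regression coefficients are not reproduced here):

```latex
\mathrm{OR}^{\mathrm{TE}}
  \;=\; \mathrm{OR}^{\mathrm{NDE}(0)} \times \mathrm{OR}^{\mathrm{NIE}(1)}
  \;=\; \mathrm{OR}^{\mathrm{NDE}(1)} \times \mathrm{OR}^{\mathrm{NIE}(0)} ,
```

where $\mathrm{NDE}(a)$ denotes the natural direct effect with the mediator held at its distribution under exposure level $a$, and $\mathrm{NIE}(a)$ the natural indirect effect with the exposure held at level $a$.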


2017 ◽  
Vol 10 (12) ◽  
pp. 4511-4523 ◽  
Author(s):  
Tarandeep S. Kalra ◽  
Alfredo Aretxabaleta ◽  
Pranay Seshadri ◽  
Neil K. Ganju ◽  
Alexis Beudin

Abstract. Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean–Atmosphere–Wave–Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
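A first-order Sobol' index can be sketched with a simple pick-freeze Monte Carlo estimator. Note this is an illustrative stand-in: the Effective Quadratures method in the paper evaluates the same indices far more cheaply via least squares polynomials, and the response function below is hypothetical, not the COAWST vegetation module:

```python
import random

def response(density, height, diameter):
    # Hypothetical response dominated by stem density and height,
    # with a weak dependence on diameter (mimicking the paper's finding).
    return density * height + 0.1 * diameter

def first_order_sobol(f, n_inputs, index, n=20_000, seed=0):
    """Pick-freeze estimate of S_index = Cov(Y, Y') / Var(Y), where Y'
    reuses the input of interest but resamples all other inputs."""
    rng = random.Random(seed)
    ys, ys_frozen = [], []
    for _ in range(n):
        a = [rng.random() for _ in range(n_inputs)]
        b = [rng.random() for _ in range(n_inputs)]
        b[index] = a[index]          # freeze the input of interest
        ys.append(f(*a))
        ys_frozen.append(f(*b))
    m1 = sum(ys) / n
    m2 = sum(ys_frozen) / n
    cov = sum((y - m1) * (z - m2) for y, z in zip(ys, ys_frozen)) / n
    var = sum((y - m1) ** 2 for y in ys) / n
    return cov / var

s_density = first_order_sobol(response, 3, 0)   # large index
s_diameter = first_order_sobol(response, 3, 2)  # small index
```

An index near 1 means that input alone explains most of the output variance; an index near 0 means the output is insensitive to it, which is how the abstract ranks density and height above diameter.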


2018 ◽  
Vol 34 (6) ◽  
pp. 576-583 ◽  
Author(s):  
Saeed Taheri ◽  
Elham Heidari ◽  
Mohammad Ali Aivazi ◽  
Mehran Shams-Beyranvand ◽  
Mehdi Varmaghani

Objectives: This study aimed to assess the cost-effectiveness of ivabradine plus standard of care (SoC) in comparison with current SoC alone from the Iranian payer perspective.

Methods: A cohort-based Markov model was developed to assess the incremental cost-effectiveness ratio (ICER) over a 10-year time horizon in a cohort of 1,000 patients. Baseline transition probabilities between New York Heart Association (NYHA) classes, the mortality rate, and the hospitalization rate were extracted from the literature. The effects of ivabradine on mortality, hospitalization, and NYHA improvement or worsening were retrieved from the SHIFT study. Effectiveness was measured in quality-adjusted life-years (QALYs) using utility values derived from the Iranian Heart Failure Quality of Life study. Direct medical costs were obtained from hospital records and national tariffs. Deterministic and probabilistic sensitivity analyses were conducted to assess the robustness of the model.

Results: Ivabradine therapy was associated with an incremental cost per QALY of USD $5,437 (incremental cost of USD $2,207 and 0.41 QALYs gained) versus SoC. The probabilistic sensitivity analysis showed that ivabradine is expected to have a 60 percent chance of being cost-effective at a threshold of USD $6,550 per QALY. Furthermore, deterministic sensitivity analysis indicated that the model is sensitive to the ivabradine drug acquisition cost.

Conclusions: The cost-effectiveness model suggested that adding ivabradine to SoC therapy was associated with improved clinical outcomes along with increased costs. The analysis indicates that the clinical benefit of ivabradine can be achieved at a reasonable cost in eligible heart failure patients with sinus rhythm and a baseline heart rate ≥ 75 beats per minute (bpm).
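The reported ICER follows directly from the incremental figures in the abstract; a quick check (the small gap to the reported $5,437 presumably reflects rounding of the published increments):

```python
def icer(delta_cost, delta_qaly):
    # Incremental cost-effectiveness ratio: extra cost per QALY gained
    return delta_cost / delta_qaly

# Increments reported in the abstract (USD, QALYs)
ratio = icer(2207, 0.41)        # ~5,383 per QALY; abstract reports $5,437
threshold = 6550                # willingness-to-pay threshold from the abstract
cost_effective = ratio < threshold
```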


2017 ◽  
Vol 28 (2) ◽  
pp. 515-531 ◽  
Author(s):  
Lawrence C McCandless ◽  
Julian M Somers

Causal mediation analysis techniques enable investigators to examine whether the effect of the exposure on an outcome is mediated by some intermediate variable. Motivated by a data example from epidemiology, we consider estimation of natural direct and indirect effects on a survival outcome. An important concern is bias from confounders that may be unmeasured. Estimating natural direct and indirect effects requires an elaborate series of assumptions in order to identify the target quantities. The analyst must carefully measure and adjust for important predictors of the exposure, mediator and outcome. Omitting important confounders may bias the results in a way that is difficult to predict. In recent years, several methods have been proposed to explore sensitivity to unmeasured confounding in mediation analysis. However, many of these methods limit complexity by relying on a handful of sensitivity parameters that are difficult to interpret, or alternatively, by assuming that specific patterns of unmeasured confounding are absent. Instead, we propose a simple Bayesian sensitivity analysis technique that is indexed by four bias parameters. Our method has the unique advantage that it is able to simultaneously assess unmeasured confounding in the mediator–outcome, exposure–outcome and exposure–mediator relationships. It is a natural Bayesian extension of the sensitivity analysis methodologies of VanderWeele, which have been widely used in the epidemiology literature. We present simulation findings, and additionally, we illustrate the method in an epidemiological study of mortality rates in criminal offenders from British Columbia.


2018 ◽  
Vol 859 ◽  
pp. 516-542 ◽  
Author(s):  
Calum S. Skene ◽  
Peter J. Schmid

A linear numerical study is conducted to quantify the effect of swirl on the response behaviour of premixed lean flames to general harmonic excitation in the inlet, upstream of combustion. This study considers axisymmetric M-flames and is based on the linearised compressible Navier–Stokes equations augmented by a simple one-step irreversible chemical reaction. Optimal frequency response gains for both axisymmetric and non-axisymmetric perturbations are computed via a direct–adjoint methodology and singular value decompositions. The high-dimensional parameter space, containing perturbation and base-flow parameters, is explored by taking advantage of generic sensitivity information gained from the adjoint solutions. This information is then tailored to specific parametric sensitivities by first-order perturbation expansions of the singular triplets about the respective parameters. Valuable flow information, at a negligible computational cost, is gained by simple weighted scalar products between direct and adjoint solutions. We find that for non-swirling flows, a mode with azimuthal wavenumber $m=2$ is the most efficiently driven structure. The structural mechanism underlying the optimal gains is shown to be the Orr mechanism for $m=0$ and a blend of Orr and other mechanisms, such as lift-up, for other azimuthal wavenumbers. Further to this, velocity and pressure perturbations are shown to make up the optimal input and output showing that the thermoacoustic mechanism is crucial in large energy amplifications. For $m=0$ these velocity perturbations are mainly longitudinal, but for higher wavenumbers azimuthal velocity fluctuations become prominent, especially in the non-swirling case. Sensitivity analyses are carried out with respect to the Mach number, Reynolds number and swirl number, and the accuracy of parametric gradients of the frequency response curve is assessed. 
The sensitivity analysis reveals that increases in Reynolds and Mach numbers yield higher gains, through a decrease in temperature diffusion. A rise in mean-flow swirl is shown to diminish the gain, with increased damping for higher azimuthal wavenumbers. This leads to a reordering of the most effectively amplified mode, with the axisymmetric ($m=0$) mode becoming the dominant structure at moderate swirl numbers.


2018 ◽  
Vol 128 (6) ◽  
pp. 1792-1798 ◽  
Author(s):  
Gurpreet S. Gandhoke ◽  
Yash K. Pandya ◽  
Ashutosh P. Jadhav ◽  
Tudor Jovin ◽  
Robert M. Friedlander ◽  
...  

OBJECTIVE: The price of coils used for intracranial aneurysm embolization has continued to rise despite increased competition in the marketplace. Coils on the US market range in list price from $500 to $3000. The purpose of this study was to investigate potential cost savings with the use of a price capitation model.

METHODS: The authors built a clinical decision analytical tree and compared their institution’s current expenditure on endovascular coils to the costs if a capped-price model were implemented. They retrospectively reviewed coil and cost data for 148 patients who underwent coil embolization from January 2015 through September 2016. Data on the length and number of coils used in all patients were collected and analyzed. The probabilities of a treated aneurysm being ≤/> 10 mm in maximum dimension, the total number of coils used for a case being ≤/> 5, and the total length of coils used for a case being ≤/> 50 cm were calculated, as was the mean cost of the currently used coils for all possible combinations of these events. Using the same probabilities, the authors calculated the expected value of the capped-price strategy in comparison with the current one. They also conducted multiple 1-way sensitivity analyses by applying plausible ranges to the probability and cost variables. The robustness of the results was confirmed by assigning distributions to all studied variables and conducting probabilistic sensitivity analysis.

RESULTS: Ninety-five (64%) of 148 patients presented with a rupture, and 53 (36%) were treated on an elective basis. The mean aneurysm size was 6.7 mm. A total of 1061 coils were used, from 4 different providers. Companies A (72%) and B (16%) accounted for the major share of coil consumption. The mean number of coils per case was 7.3, the mean cost per case (for all coils) was $10,434, and the median total coil length per case was 42 cm. The calculated probability of treating an aneurysm 10 mm or less in maximum dimension was 0.83; of using 5 coils or fewer per case, 0.42; and of a total coil length of 50 cm or less, 0.89. The expected cost per case under the capped policy was calculated to be $4000, a saving of $6564 per case compared with Company A’s pricing. Multiple 1-way sensitivity analyses revealed that the capped policy was cost saving as long as its capped cost remained below $10,500. In probabilistic sensitivity analyses, the lowest cost difference between the current and capped policies was $2750.

CONCLUSIONS: In comparison with the cost of coils from the authors’ current provider, the decision model and probabilistic sensitivity analysis predicted a minimum of $407,000 and a maximum of $1,799,976 in cost savings across 148 cases by adopting the capped-price policy for coils.
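A quick arithmetic check ties the per-case and cohort-level figures together: the lowest per-case difference from the probabilistic sensitivity analysis, applied across all 148 cases, reproduces the reported $407,000 minimum (the $1,799,976 maximum arises from a scenario not detailed in the abstract):

```python
cases = 148
min_saving_per_case = 2750      # lowest per-case difference (probabilistic SA)
base_saving_per_case = 6564     # base-case per-case saving vs. Company A pricing

min_total = cases * min_saving_per_case     # reported minimum cohort saving
base_total = cases * base_saving_per_case   # base-case cohort saving
```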

