A unified framework for efficient estimation of general treatment models

2021 ◽  
Vol 12 (3) ◽  
pp. 779-816 ◽  
Author(s):  
Chunrong Ai ◽  
Oliver Linton ◽  
Kaiji Motegi ◽  
Zheng Zhang

This paper presents a weighted optimization framework that unifies binary, multivalued, and continuous treatments, as well as mixtures of discrete and continuous treatments, under an unconfounded treatment assignment. With a general loss function, the framework includes the average, quantile, and asymmetric least squares causal effects of treatment as special cases. For this general framework, we first derive the semiparametric efficiency bound for the causal effect of treatment, extending the existing bound results to a wider class of models. We then propose a generalized optimization estimator for the causal effect with weights estimated by solving an expanding set of equations. Under some sufficient conditions, we establish the consistency and asymptotic normality of the proposed estimator and show that it attains the semiparametric efficiency bound, thereby extending the existing literature on efficient estimation of causal effects to a wider class of applications. Finally, we discuss estimation of some causal effect functionals, such as the treatment effect curve and the average outcome. To evaluate the finite sample performance of the proposed procedure, we conduct a small-scale simulation study and find that the proposed estimator has practical value. In an empirical application, we detect a significant causal effect of political advertisements on campaign contributions in the binary treatment model, but not in the continuous treatment model.
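As a concrete point of reference, the binary-treatment, squared-error-loss special case of such a weighted optimization problem reduces to an inverse-probability-weighted comparison of treated and control outcomes. The sketch below is only illustrative: it plugs in logistic-regression propensity scores rather than the paper's weights, which are obtained by solving an expanding set of equations, and the simulated data and variable names are ours.

```python
# Minimal sketch of the weighted-optimization idea for the binary-treatment,
# squared-error-loss special case (average treatment effect). The paper's
# estimator obtains the weights from an expanding set of moment equations;
# here we simply plug in logistic-regression propensity scores for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                      # observed confounders
p = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
d = rng.binomial(1, p)                           # binary treatment
y = 1.0 * d + x @ np.array([1.0, -1.0]) + rng.normal(size=n)  # true effect = 1

# Estimated inverse-propensity weights
p_hat = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
w = d / p_hat + (1 - d) / (1 - p_hat)

# Weighted squared-error loss: argmin over (mu0, mu1) of sum_i w_i (y_i - mu_{d_i})^2,
# which reduces to weighted means of the two treatment arms.
mu1 = np.sum(w * d * y) / np.sum(w * d)
mu0 = np.sum(w * (1 - d) * y) / np.sum(w * (1 - d))
print("estimated ATE:", mu1 - mu0)               # should be close to 1
```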

Methodology ◽  
2012 ◽  
Vol 8 (1) ◽  
pp. 23-38 ◽  
Author(s):  
Manuel C. Voelkle ◽  
Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Oftentimes, researchers regard the LCM as a “new” method to analyze change, with little attention paid to the fact that the technique was originally introduced as an “alternative to standard repeated measures ANOVA and first-order auto-regressive methods” (Meredith & Tisak, 1990, p. 107). In the first part of the paper, this close relationship is reviewed, and it is demonstrated how “traditional” methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique, compared to “traditional” finite-sample approaches, the second part of the paper uses a Monte Carlo simulation to address the question of to what degree the more flexible LCMs can actually replace some of the older tests. In addition, a structural equation modeling alternative to Mauchly’s (1940) test of sphericity is explored. Although “traditional” methods may be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, however, no approach always outperformed the others in terms of power and Type I error, so the best method depends on the situation. We provide detailed recommendations on when to use which method.
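To make the LCM/repeated-measures connection concrete, the illustrative sketch below simulates longitudinal data and fits a linear growth model as a mixed model with random intercepts and slopes, which is the regression formulation of a linear latent curve model; the simulation settings and variable names are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a linear growth model fitted as a mixed model
# (random intercept and slope), the regression counterpart of a linear latent
# curve model. Simulation settings are made up for the example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_time = 200, 4
subj = np.repeat(np.arange(n_subj), n_time)
time = np.tile(np.arange(n_time), n_subj)
intercepts = rng.normal(10.0, 2.0, n_subj)       # latent intercept factor
slopes = rng.normal(0.5, 0.3, n_subj)            # latent slope factor
y = intercepts[subj] + slopes[subj] * time + rng.normal(0, 1, n_subj * n_time)
df = pd.DataFrame({"id": subj, "time": time, "y": y})

# Random-intercept, random-slope model: y_it = b0 + b1*time + u0_i + u1_i*time + e_it
fit = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time").fit()
print(fit.summary())
```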


Author(s):  
Lloyd Whitesell

This chapter explores special cases where glamour conventions demonstrate aestheticist values, that is, the exaltation of style for its own sake. At such times, the aesthetic intensity of glamour seems to offer an escape to a world of pure artifice, beauty, and style. The discussion identifies the central values of aestheticism as expressed in the high-art milieu and illustrates the same values at work in glamorous numbers. To analyze ultrastylishness in musical arrangement, it considers finesse on a small scale (e.g., contrapuntal ornamentation, textural and harmonic ingenuity) before turning to ingenuity of overall design in numbers such as “Dancing in the Dark,” from the film The Band Wagon, and “This Heart of Mine,” from Ziegfeld Follies.


2015 ◽  
Vol 3 (2) ◽  
pp. 157-175 ◽  
Author(s):  
Peter B. Gilbert ◽  
Erin E. Gabriel ◽  
Ying Huang ◽  
Ivan S.F. Chan

Abstract A common problem of interest within a randomized clinical trial is the evaluation of an inexpensive response endpoint as a valid surrogate endpoint for a clinical endpoint, where a chief purpose of a valid surrogate is to provide a way to make correct inferences on clinical treatment effects in future studies without needing to collect the clinical endpoint data. Within the principal stratification framework for addressing this problem based on data from a single randomized clinical efficacy trial, a variety of definitions and criteria for a good surrogate endpoint have been proposed, all based on or closely related to the “principal effects” or “causal effect predictiveness (CEP)” surface. We discuss CEP-based criteria for a useful surrogate endpoint, including (1) the meaning and relative importance of proposed criteria including average causal necessity (ACN), average causal sufficiency (ACS), and large clinical effect modification; (2) the relationship between these criteria and the Prentice definition of a valid surrogate endpoint; and (3) the relationship between these criteria and the consistency criterion (i.e., assurance against the “surrogate paradox”). This includes the result that ACN plus a strong version of ACS generally do not imply the Prentice definition or the consistency criterion, but they do have these implications in special cases. Moreover, the converse does not hold except in a special case with a binary candidate surrogate. The results highlight that assumptions about the treatment effect on the clinical endpoint before the candidate surrogate is measured are influential for the ability to draw conclusions about the Prentice definition or consistency. In addition, we emphasize that in some scenarios that occur commonly in practice, the principal strata subpopulations for inference are identifiable from the observable data, in which case the principal stratification framework has relatively high utility for the purpose of effect modification analysis and is closely connected to the treatment marker selection problem. The results are illustrated with application to a vaccine efficacy trial, where ACN and ACS for an antibody marker are found to be consistent with the data and hence support the Prentice definition and consistency.


2006 ◽  
Vol 226 (1) ◽  
Author(s):  
Anton L. Flossmann ◽  
Winfried Pohlmeier

Summary This paper surveys the empirical evidence on causal effects of education on earnings for Germany and compares alternative studies in the light of their underlying identifying assumptions. We work out the different assumptions taken by the various studies, which lead to rather different interpretations of the estimated causal effect. In particular, we are interested in the question of to what extent causal return estimates are informative for educational policy advice. Despite the substantial methodological differences, we have to conclude that the empirical findings for Germany are quite robust and do not deviate substantially from each other. This also holds for the few studies which rely on ignorability conditions, regardless of whether they use educational attainment as a continuous treatment variable or as a discrete treatment indicator. Our own estimates based on the matching approach indicate that the selection into upper secondary schooling is suboptimal.
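For readers unfamiliar with the matching approach invoked in the last sentence, the sketch below illustrates propensity score matching for a binary schooling indicator under an ignorability assumption; the data, covariates, and 1:1 nearest-neighbour rule are purely illustrative and not the authors' specification.

```python
# A minimal matching sketch under an ignorability assumption: nearest-neighbour
# matching on an estimated propensity score for a binary "upper secondary
# schooling" indicator. All data and the matching rule are made up for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=(n, 3))                       # family background, ability proxies
p = 1 / (1 + np.exp(-x @ np.array([0.8, 0.4, -0.3])))
d = rng.binomial(1, p)                            # upper secondary schooling (0/1)
log_wage = 0.15 * d + x @ np.array([0.2, 0.1, 0.05]) + rng.normal(0, 0.3, n)

p_hat = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]

# Match each treated unit to the control with the closest propensity score.
treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(p_hat[control].reshape(-1, 1))
_, idx = nn.kneighbors(p_hat[treated].reshape(-1, 1))
att = np.mean(log_wage[treated] - log_wage[control[idx.ravel()]])
print("matching estimate of the schooling return (ATT):", att)   # true value 0.15
```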


Author(s):  
Bart Jacobs ◽  
Aleks Kissinger ◽  
Fabio Zanasi

Abstract Extracting causal relationships from observed correlations is a growing area in probabilistic reasoning, originating with the seminal work of Pearl and others from the early 1990s. This paper develops a new, categorically oriented view based on a clear distinction between syntax (string diagrams) and semantics (stochastic matrices), connected via interpretations as structure-preserving functors. A key notion in the identification of causal effects is that of an intervention, whereby a variable is forcefully set to a particular value independent of any prior propensities. We represent the effect of such an intervention as an endo-functor which performs ‘string diagram surgery’ within the syntactic category of string diagrams. This diagram surgery in turn yields a new, interventional distribution via the interpretation functor. While in general there is no way to compute interventional distributions purely from observed data, we show that this is possible in certain special cases using a calculational tool called comb disintegration. We demonstrate the use of this technique on two well-known toy examples: one in which we predict the causal effect of smoking on cancer in the presence of a confounding common cause, showing that the technique provides simple sufficient conditions for computing interventions that apply to a wide variety of situations considered in the causal inference literature; the other illustrates counterfactual reasoning, where the same interventional techniques are used but now in a ‘twinned’ set-up, with two versions of the world – one factual and one counterfactual – joined together via exogenous variables that capture the uncertainties at hand.
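In the smoking example, the interventional distribution ultimately agrees with the familiar adjustment formula, which the paper recovers through string-diagram surgery and comb disintegration. The numerical sketch below computes the observational and interventional distributions for a made-up confounded model; all probabilities are invented for illustration.

```python
# Illustrative numerical counterpart of the smoking example: with an observed
# confounder H, the interventional distribution P(C | do(S)) is obtained by the
# standard adjustment sum_h P(C | S, h) P(h). The paper derives such formulas
# categorically via string-diagram surgery; the numbers here are made up.
import numpy as np

p_h = np.array([0.3, 0.7])                 # P(H): hidden common cause
p_s_given_h = np.array([[0.8, 0.2],        # P(S | H=h): rows index h, cols index s
                        [0.3, 0.7]])
p_c_given_sh = np.array([[[0.9, 0.1],      # P(C | S=s, H=h): axes are [h, s, c]
                          [0.7, 0.3]],
                         [[0.6, 0.4],
                          [0.2, 0.8]]])

# Observational P(C | S) mixes in the confounder through P(H | S).
p_sh = p_s_given_h * p_h[:, None]                       # joint P(H, S)
p_h_given_s = p_sh / p_sh.sum(axis=0, keepdims=True)    # P(H | S)
p_c_given_s_obs = np.einsum('hs,hsc->sc', p_h_given_s, p_c_given_sh)

# Interventional P(C | do(S)) averages over the marginal P(H) instead.
p_c_do_s = np.einsum('h,hsc->sc', p_h, p_c_given_sh)

print("observational P(C=1 | S):", p_c_given_s_obs[:, 1])
print("interventional P(C=1 | do(S)):", p_c_do_s[:, 1])
```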


2016 ◽  
Vol 33 (5) ◽  
pp. 1218-1241 ◽  
Author(s):  
Hiroaki Kaido

This paper studies the identification and estimation of weighted average derivatives of conditional location functionals including conditional mean and conditional quantiles in settings where either the outcome variable or a regressor is interval-valued. Building on Manski and Tamer (2002, Econometrica 70(2), 519–546) who study nonparametric bounds for mean regression with interval data, we characterize the identified set of weighted average derivatives of regression functions. Since the weighted average derivatives do not rely on parametric specifications for the regression functions, the identified set is well-defined without any functional-form assumptions. Under general conditions, the identified set is compact and convex and hence admits characterization by its support function. Using this characterization, we derive the semiparametric efficiency bound of the support function when the outcome variable is interval-valued. Using mean regression as an example, we further demonstrate that the support function can be estimated in a regular manner by a computationally simple estimator and that the efficiency bound can be achieved.
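The following sketch illustrates only the Manski and Tamer (2002) building block mentioned above: with an interval-valued outcome, the mean regression is bracketed pointwise by the regressions of the two interval endpoints. Bounds on weighted average derivatives, the object studied in this paper, require the support-function characterization and are not simply the derivatives of these two curves; the data and smoother below are ours.

```python
# Minimal sketch of the interval-outcome bounds the paper builds on: with an
# interval-valued outcome [Y_L, Y_U], the conditional mean E[Y | X = x] is
# bracketed pointwise by the endpoint regressions. Data and the simple
# Nadaraya-Watson smoother are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n = 3000
x = rng.uniform(0, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)
y_lo = np.floor(y)            # outcome only reported in unit-wide brackets
y_hi = y_lo + 1.0

def nw_regression(xs, ys, grid, h=0.05):
    """Nadaraya-Watson kernel regression evaluated on a grid (Gaussian kernel)."""
    k = np.exp(-0.5 * ((grid[:, None] - xs[None, :]) / h) ** 2)
    return (k @ ys) / k.sum(axis=1)

grid = np.linspace(0.1, 0.9, 9)
lower, upper = nw_regression(x, y_lo, grid), nw_regression(x, y_hi, grid)
print(np.column_stack([grid, lower, upper]))   # E[Y | X = x] lies between the two columns
```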


Biostatistics ◽  
2018 ◽  
Vol 21 (3) ◽  
pp. 545-560 ◽  
Author(s):  
Michal Juraska ◽  
Ying Huang ◽  
Peter B Gilbert

Summary An objective in randomized clinical trials is the evaluation of “principal surrogates,” which consists of analyzing how the treatment effect on a clinical endpoint varies over principal strata subgroups defined by an intermediate response outcome under both or one of the treatment assignments. The latter effect modification estimand has been termed the marginal causal effect predictiveness (mCEP) curve. This objective was addressed in two randomized placebo-controlled Phase 3 dengue vaccine trials for an antibody response biomarker whose three-phase sampling design rendered previously developed inferential methods highly inefficient. In this design, the biomarker was measured in a case-cohort sample, and a key baseline auxiliary variable strongly associated with the biomarker (the “baseline surrogate measure”) was only measured in a further sub-sample. We propose a novel approach to estimation of the mCEP curve in such three-phase sampling designs that avoids the restrictive “placebo structural risk” modeling assumption common to past methods and that further improves robustness by the use of non-parametric kernel smoothing for biomarker density estimation. Additionally, we develop bootstrap-based procedures for pointwise and simultaneous confidence intervals and for testing four relevant hypotheses about the mCEP curve. We investigate the finite-sample properties of the proposed methods and compare them to those of an alternative method making the placebo structural risk assumption. Finally, we apply the novel and alternative procedures to the two dengue vaccine trial data sets.


Entropy ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. 186 ◽  
Author(s):  
Ki-Soon Yu ◽  
Sung-Hyun Kim ◽  
Dae-Woon Lim ◽  
Young-Sik Kim

In this paper, we propose an intrusion detection system based on the estimation of the Rényi entropy with multiple orders. The Rényi entropy is a generalized notion of entropy that includes the Shannon entropy and the min-entropy as special cases. In 2018, Kim proposed an efficient estimation method for the Rényi entropy with an arbitrary real order α. In this work, we utilize this method to construct a multiple-order Rényi entropy based intrusion detection system (IDS) for vehicular systems with various network connections. The proposed method estimates the Rényi entropies simultaneously with three distinct orders (two, three, and four) based on the controller area network (CAN) IDs of consecutively generated frames. The collected frames are split into blocks with a fixed number of frames, and the entropies are evaluated based on these blocks. For more accurate detection of each type of attack, we also propose a retrospective sliding window method that decides whether an attack has occurred based on the estimated entropies. For a fair comparison, we used the CAN-ID attack data set generated by a research team from Korea University. Our results show that the proposed method achieves false negative and false positive error rates of less than 1% simultaneously.
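A minimal sketch of the block-wise computation described above follows; it uses a simple plug-in (frequency) estimate of the Rényi entropy rather than Kim's (2018) estimator, and the block size, orders, and toy CAN-ID stream are illustrative only.

```python
# Sketch of the block-wise entropy computation: a plug-in (frequency) estimate
# of the Renyi entropy of orders 2, 3, and 4 over fixed-size blocks of CAN IDs.
# The paper additionally applies a retrospective sliding-window decision rule
# on top of such values; block size and the toy data are made up.
from collections import Counter
import math

def renyi_entropy(can_ids, alpha):
    """Plug-in Renyi entropy (base 2) of order alpha for one block of CAN IDs."""
    counts = Counter(can_ids)
    n = len(can_ids)
    probs = [c / n for c in counts.values()]
    if alpha == 1:                       # Shannon entropy as the alpha -> 1 limit
        return -sum(p * math.log2(p) for p in probs)
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

def block_entropies(frames, block_size=200, orders=(2, 3, 4)):
    """Split the CAN-ID stream into fixed-size blocks and score each block."""
    blocks = [frames[i:i + block_size]
              for i in range(0, len(frames) - block_size + 1, block_size)]
    return [{a: renyi_entropy(b, a) for a in orders} for b in blocks]

# Example: a flooding attack that repeats one CAN ID sharply lowers the entropy.
normal = [0x100, 0x1A0, 0x2B0, 0x3C0] * 50
attack = normal[:100] + [0x000] * 100
print(block_entropies(normal + attack, block_size=200))
```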

