A Generally Efficient Targeted Minimum Loss Based Estimator based on the Highly Adaptive Lasso

Author(s):  
Mark van der Laan

Abstract Suppose we observe $n$ independent and identically distributed observations of a finite dimensional bounded random variable. This article is concerned with the construction of an efficient targeted minimum loss-based estimator (TMLE) of a pathwise differentiable target parameter of the data distribution based on a realistic statistical model. The only smoothness condition we will enforce on the statistical model is that the nuisance parameters of the data distribution that are needed to evaluate the canonical gradient of the pathwise derivative of the target parameter are multivariate real-valued cadlag functions (right-continuous with left-hand limits; G. Neuhaus, On weak convergence of stochastic processes with multidimensional time parameter, Ann Stat 1971;42:1285–1295) with finite supremum norm and (sectional) variation norm. Each nuisance parameter is defined as a minimizer of the expectation of a loss function over all functions in its parameter space. For each nuisance parameter, we propose a new minimum loss-based estimator that minimizes the loss-specific empirical risk over the functions in its parameter space under the additional constraint that the variation norm of the function is bounded by a set constant. The constant is selected with cross-validation. We show that such an MLE can be represented as the minimizer of the empirical risk over linear combinations of indicator basis functions under the constraint that the sum of the absolute values of the coefficients is bounded by the constant: i.e., the variation norm corresponds with the $L_1$-norm of the vector of coefficients. We will refer to this estimator as the highly adaptive Lasso (HAL)-estimator. We prove that for all models the HAL-estimator converges to the true nuisance parameter value at a rate faster than $n^{-1/4}$ w.r.t. the square root of the loss-based dissimilarity. We also show that if this HAL-estimator is included in the library of an ensemble super-learner, then the super-learner will at minimum achieve the rate of convergence of the HAL-estimator, but, by previous results, it will actually be asymptotically equivalent to the oracle (i.e., in some sense best) estimator in the library. Subsequently, we establish that a one-step TMLE using such a super-learner as initial estimator for each of the nuisance parameters is asymptotically efficient at any data generating distribution in the model, under weak structural conditions on the target parameter mapping and model and a strong positivity assumption (e.g., the canonical gradient is uniformly bounded). We demonstrate our general theorem by constructing such a one-step TMLE of the average causal effect in a nonparametric model, and establishing that it is asymptotically efficient.
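
The $L_1$ representation described in this abstract is easy to make concrete. Below is a minimal one-dimensional Python sketch, under our own naming: the indicator (zero-order spline) basis is placed at the observed knot points, and the cross-validated L1 penalty of LassoCV stands in for the cross-validated variation-norm bound. It illustrates the representation only, not the paper's full multivariate construction.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def hal_fit_1d(x, y):
    """Minimal 1-d HAL-style fit: indicator basis at observed knots + L1 penalty."""
    knots = np.sort(x)
    # basis matrix: phi[i, j] = 1{x_i >= knot_j}
    phi = (x[:, None] >= knots[None, :]).astype(float)
    # the L1 bound on the coefficients corresponds to the variation norm;
    # LassoCV selects the penalty (equivalently, the bound) by cross-validation
    model = LassoCV(cv=5).fit(phi, y)
    return model, knots

def hal_predict_1d(model, knots, x_new):
    phi_new = (x_new[:, None] >= knots[None, :]).astype(float)
    return model.predict(phi_new)
```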

2016 · Vol 12 (1) · pp. 351-378
Author(s):  
Mark van der Laan
Susan Gruber

Abstract Consider a study in which one observes $n$ independent and identically distributed random variables whose probability distribution is known to be an element of a particular statistical model, and one is concerned with estimation of a particular real-valued pathwise differentiable target parameter of this data probability distribution. The targeted maximum likelihood estimator (TMLE) is an asymptotically efficient substitution estimator obtained by constructing a so-called least favorable parametric submodel through an initial estimator whose score, at zero fluctuation of the initial estimator, spans the efficient influence curve, and iteratively maximizing the corresponding parametric likelihood until no more updates occur, at which point the updated initial estimator solves the so-called efficient influence curve equation. In this article we construct a one-dimensional universal least favorable submodel for which the TMLE only takes one step, and thereby requires minimal extra data fitting to achieve its goal of solving the efficient influence curve equation. We generalize this construction to universal least favorable submodels through the relevant part of the data distribution, as required for targeted minimum loss-based estimation. Finally, remarkably, given a multidimensional target parameter, we develop a universal canonical one-dimensional submodel such that the one-step TMLE, maximizing the log-likelihood over only a univariate parameter, solves the multivariate efficient influence curve equation. This allows us to construct a one-step TMLE, based on a one-dimensional parametric submodel through the initial estimator, that solves any desired multivariate set of estimating equations.
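
A hedged illustration of the targeting step for the familiar average-treatment-effect case: a single logistic fluctuation with the standard clever covariate, taking user-supplied initial fits Qbar and g (names ours). This shows the mechanics of a one-step update only; the paper's universal least favorable submodel is a more general construction.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import logit, expit

def tmle_ate_one_step(W, A, Y, Qbar, g, clip=1e-6):
    """One targeting step for the ATE with binary Y; Qbar(a, W) and g(W)
    are user-supplied initial estimates of E[Y|A=a,W] and P(A=1|W)."""
    Q_A = np.clip(Qbar(A, W), clip, 1 - clip)
    Q_1 = np.clip(Qbar(np.ones_like(A), W), clip, 1 - clip)
    Q_0 = np.clip(Qbar(np.zeros_like(A), W), clip, 1 - clip)
    g_W = np.clip(g(W), clip, 1 - clip)

    # clever covariate whose score at zero fluctuation spans the
    # efficient influence curve
    H = A / g_W - (1 - A) / (1 - g_W)

    # logistic fluctuation through the initial fit, offset = logit(Q_A)
    eps = sm.GLM(Y, H[:, None], family=sm.families.Binomial(),
                 offset=logit(Q_A)).fit().params[0]

    # updated outcome regressions and the plug-in (substitution) estimate
    Q_1s = expit(logit(Q_1) + eps / g_W)
    Q_0s = expit(logit(Q_0) - eps / (1 - g_W))
    return float(np.mean(Q_1s - Q_0s))
```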


Author(s):  
Dylan J. Foster
Vasilis Syrgkanis

We provide excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate a target parameter depends on an unknown parameter that must be estimated from data (a "nuisance parameter"). We analyze a two-stage sample-splitting meta-algorithm that takes as input two arbitrary estimation algorithms: one for the target parameter and one for the nuisance parameter. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from the statistical learning and machine learning literatures to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can give guarantees under weaker assumptions than in previous works and accommodate the case where the target parameter belongs to a complex nonparametric class. We characterize conditions on the metric entropy such that oracle rates (rates of the same order as if we knew the nuisance parameter) are achieved. We also analyze the rates achieved by specific estimation algorithms such as variance-penalized empirical risk minimization, neural network estimation, and sparse high-dimensional linear model estimation. We highlight the applicability of our results in four settings of central importance in the literature: 1) heterogeneous treatment effect estimation, 2) offline policy optimization, 3) domain adaptation, and 4) learning with missing data.
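
The meta-algorithm itself is short. The sketch below instantiates it for heterogeneous treatment effect estimation with an R-learner-style orthogonal square loss (one of the four settings listed); the particular learners are placeholders of our choosing, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

def orthogonal_two_stage(X, A, Y, seed=0):
    """Two-stage sample splitting: nuisances on fold 1, target ERM on fold 2.
    The residual-on-residual square loss is Neyman orthogonal, so first-stage
    estimation error enters the excess risk only at second order."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    s1, s2 = idx[: len(Y) // 2], idx[len(Y) // 2:]

    # stage 1: nuisance estimates m(x) = E[Y|X] and e(x) = P(A=1|X)
    m_hat = GradientBoostingRegressor().fit(X[s1], Y[s1])
    e_hat = GradientBoostingClassifier().fit(X[s1], A[s1])

    # stage 2: fit a linear CATE model theta(x) = x @ b by minimizing
    # sum_i (Y_res_i - theta(x_i) * A_res_i)^2 on the held-out fold
    Y_res = Y[s2] - m_hat.predict(X[s2])
    A_res = A[s2] - e_hat.predict_proba(X[s2])[:, 1]
    return LinearRegression(fit_intercept=False).fit(X[s2] * A_res[:, None], Y_res)
```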


2019 · Vol 488 (4) · pp. 5093-5103
Author(s):  
Justin Alsing
Benjamin Wandelt

ABSTRACT We show how nuisance parameter marginalized posteriors can be inferred directly from simulations in a likelihood-free setting, without having to first jointly infer the higher-dimensional interesting and nuisance parameter posterior and marginalize a posteriori. The result is that for an inference task with a given number of interesting parameters, the number of simulations required to perform likelihood-free inference can be kept (roughly) the same irrespective of the number of additional nuisances to be marginalized over. To achieve this, we introduce two extensions to the standard likelihood-free inference set-up. First, we show how nuisance parameters can be recast as latent variables and hence automatically marginalized over in the likelihood-free framework. Second, we derive an asymptotically optimal compression from $N$ data to $n$ summaries (one per interesting parameter) such that the Fisher information is (asymptotically) preserved, but the summaries are insensitive to the nuisance parameters. This means that the nuisance marginalized inference task involves learning $n$ interesting parameters from $n$ 'nuisance hardened' data summaries, regardless of the presence or number of additional nuisance parameters to be marginalized over. We validate our approach on two examples from cosmology: supernova and weak-lensing data analyses with nuisance parametrized systematics. For the supernova problem, high-fidelity posterior inference of $\Omega_m$ and $w_0$ (marginalized over systematics) can be obtained from just a few hundred data simulations. For the weak-lensing problem, six cosmological parameters can be inferred from just $\mathcal{O}(10^3)$ simulations, irrespective of whether 10 additional nuisance parameters are included in the problem or not.
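
A sketch of the hardening step as we read it: the nuisance directions are projected out of the score-compressed summaries blockwise, and the information retained for the interesting parameters is the Schur complement of the nuisance block. The blockwise formula below is our reconstruction and should be treated as an assumption rather than the paper's exact derivation.

```python
import numpy as np

def harden_summaries(t, F, n_interest):
    """Project nuisance directions out of score-compressed summaries t,
    given the joint Fisher matrix F (interesting parameters first)."""
    F_tt = F[:n_interest, :n_interest]
    F_te = F[:n_interest, n_interest:]
    F_ee = F[n_interest:, n_interest:]
    t_theta, t_eta = t[:n_interest], t[n_interest:]
    # hardened summaries: insensitive to the nuisances to first order
    t_hard = t_theta - F_te @ np.linalg.solve(F_ee, t_eta)
    # Fisher information retained for the interesting parameters
    F_hard = F_tt - F_te @ np.linalg.solve(F_ee, F_te.T)
    return t_hard, F_hard
```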


2014 · Vol 2 (1) · pp. 13-74
Author(s):  
Mark J. van der Laan

Abstract Suppose that we observe a population of causally connected units. On each unit, at each time-point on a grid, we observe a set of other units the unit is potentially connected with, and a unit-specific longitudinal data structure consisting of baseline and time-dependent covariates, a time-dependent treatment, and a final outcome of interest. The target quantity of interest is defined as the mean outcome for this group of units if the exposures of the units would be probabilistically assigned according to a known specified mechanism, where the latter is called a stochastic intervention. Causal effects of interest are defined as contrasts of the mean of the unit-specific outcomes under different stochastic interventions one wishes to evaluate. This covers a large range of estimation problems, from independent units, to independent clusters of units, to a single cluster of units in which each unit has a limited number of connections to other units. The allowed dependence includes treatment allocation in response to data on multiple units and so-called causal interference as special cases. We present a few motivating classes of examples, propose a structural causal model, define the desired causal quantities, address the identification of these quantities from the observed data, and define maximum likelihood based estimators based on cross-validation. In particular, we present maximum likelihood based super-learning for this network data. Nonetheless, such smoothed/regularized maximum likelihood estimators are not targeted and will thereby be overly biased w.r.t. the target parameter, and, as a consequence, generally not result in asymptotically normally distributed estimators of the statistical target parameter. To formally develop estimation theory, we focus on the simpler case in which the longitudinal data structure is a point-treatment data structure. We formulate a novel targeted maximum likelihood estimator of this estimand and show that the double robustness of the efficient influence curve implies that the bias of the targeted minimum loss-based estimator (TMLE) will be a second-order term involving squared differences of two nuisance parameters. In particular, the TMLE will be consistent if either one of these nuisance parameters is consistently estimated. Due to the causal dependencies between units, the data set may correspond to the realization of a single experiment, so that establishing a (e.g., normal) limit distribution for the targeted maximum likelihood estimators, and corresponding statistical inference, is a challenging topic. We prove two formal theorems establishing the asymptotic normality using advances in weak-convergence theory. We conclude with a discussion and refer to an accompanying technical report for extensions to general longitudinal data structures.
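
For orientation, the second-order bias term has a familiar closed form in the simpler i.i.d. point-treatment case. For the treatment-specific mean $\Psi(P)=E_P[\bar{Q}(1,W)]$ with outcome regression $\bar{Q}$ and treatment mechanism $g$, the exact remainder of the efficient influence curve expansion is

$$\Psi(P)-\Psi(P_0)+P_0 D^*(P)=E_0\left[\frac{g(W)-g_0(W)}{g(W)}\,\bigl(\bar{Q}(1,W)-\bar{Q}_0(1,W)\bigr)\right],$$

a product of two nuisance differences: it vanishes when either nuisance is consistent (double robustness) and is second order when both converge. This standard identity is stated here for intuition only; the network setting of the paper requires its own weak-convergence analysis.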


2014 · Vol 31 (6) · pp. 1192-1228
Author(s):  
Firmin Doko Tchatoka

This paper explores the sensitivity of plug-in subset tests to instrument exclusion in structural models. Identification-robust statistics based on the plug-in principle have been developed for testing hypotheses specified on subsets of the structural parameters. However, their robustness to instrument exclusion has not been investigated. This paper proposes an analysis of the asymptotic distributions of the limited information maximum likelihood (LIML) estimator and plug-in statistics when potential instruments are omitted. Our results provide several new insights and extensions of earlier studies. We show that the exclusion of instruments can eliminate the first stage, thus weakening identification and invalidating plug-in subset inference. However, when instrument omission does not affect LIML consistency, it preserves the validity of the plug-in subset tests, although LIML is no longer asymptotically efficient. Unlike the instrumental variable (IV) estimator, the LIML estimator of the identified linear combination of the nuisance parameter is not asymptotically a Gaussian mixture, even without instrument exclusion.
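
For readers who want the estimator itself, here is a compact numpy sketch of LIML in its k-class form, simplified (by our choice) to the case with no included exogenous regressors.

```python
import numpy as np

def liml(y, Y, Z):
    """LIML for y = Y @ beta + u with instrument matrix Z.
    k-class form with k = kappa, the smallest generalized eigenvalue."""
    n = len(y)
    W = np.column_stack([y, Y])
    M_Z = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)
    # kappa: smallest eigenvalue of (W' M_Z W)^{-1} (W' W)
    kappa = np.min(np.real(np.linalg.eigvals(
        np.linalg.solve(W.T @ M_Z @ W, W.T @ W))))
    # k-class estimator with k = kappa (k = 0 gives OLS, k = 1 gives 2SLS)
    P = np.eye(n) - kappa * M_Z
    return np.linalg.solve(Y.T @ P @ Y, Y.T @ P @ y), kappa
```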


2016 · Vol 12 (1) · pp. 253-282
Author(s):  
Karel Vermeulen
Stijn Vansteelandt

Abstract Doubly robust estimators have now been proposed for a variety of target parameters in the causal inference and missing data literature. These consistently estimate the parameter of interest under a semiparametric model when one of two nuisance working models is correctly specified, regardless of which. The recently proposed bias-reduced doubly robust estimation procedure aims to partially retain this robustness in more realistic settings where both working models are misspecified. These so-called bias-reduced doubly robust estimators make use of special (finite-dimensional) nuisance parameter estimators that are designed to locally minimize the squared asymptotic bias of the doubly robust estimator in certain directions of these finite-dimensional nuisance parameters under misspecification of both parametric working models. In this article, we extend this idea to incorporate the use of data-adaptive estimators (infinite-dimensional nuisance parameters), by exploiting the bias reduction estimation principle in the direction of only one nuisance parameter. We additionally provide an asymptotic linearity theorem which gives the influence function of the proposed doubly robust estimator under correct specification of a parametric nuisance working model for the missingness mechanism/propensity score but a possibly misspecified (finite- or infinite-dimensional) outcome working model. Simulation studies confirm the desirable finite-sample performance of the proposed estimators relative to a variety of other doubly robust estimators.
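
The base object in this literature is the standard augmented IPW (doubly robust) estimator; a minimal sketch for the mean outcome under treatment is shown below as a reference point. The bias-reduced procedure of the abstract concerns how the nuisance fits Qbar and g (supplied by the caller in this sketch) are chosen.

```python
import numpy as np

def aipw_treated_mean(W, A, Y, Qbar, g):
    """Augmented IPW estimate of E[Y(1)]: consistent if either the outcome
    regression Qbar or the propensity score g is correctly specified."""
    Q_1 = Qbar(np.ones_like(A), W)
    g_W = g(W)
    return float(np.mean(A / g_W * (Y - Q_1) + Q_1))
```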


1998 · Vol 14 (3) · pp. 295-325
Author(s):  
Maxwell B. Stinchcombe
Halbert White

The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly extend the nuisance parameter approach. How and why the nuisance parameter approach works and how it can be extended bear closely on recent developments in artificial neural networks. Statistical content is provided by viewing specification tests with nuisance parameters as tests of hypotheses about Banach-valued random elements and applying the Banach central limit theorem and law of iterated logarithm, leading to simple procedures that can be used as a guide to when computationally more elaborate procedures may be warranted.


Author(s):  
Linh Tran
Constantin Yiannoutsos
Kara Wools-Kaloustian
Abraham Siika
Mark van der Laan
...  

Abstract A number of sophisticated estimators of longitudinal effects have been proposed for estimating the intervention-specific mean outcome. However, there is a relative paucity of research comparing these methods directly to one another. In this study, we compare various approaches to estimating a causal effect in a longitudinal treatment setting using both simulated data and data measured from a human immunodeficiency virus cohort. Six distinct estimators are considered: (i) an iterated conditional expectation representation, (ii) an inverse propensity weighted method, (iii) an augmented inverse propensity weighted method, (iv) a double robust iterated conditional expectation estimator, (v) a modified version of the double robust iterated conditional expectation estimator, and (vi) a targeted minimum loss-based estimator. The details of each estimator and its implementation are presented along with nuisance parameter estimation details, which include potentially pooling the observed data across all subjects regardless of treatment history and using data-adaptive machine learning algorithms. Simulations are constructed over six time points, with each time point steadily increasing in positivity violations. Estimation is carried out for both the simulations and the applied example using each of the six estimators under both stratified and pooled approaches to nuisance parameter estimation. Simulation results show that the double robust estimators remained without meaningful bias as long as at least one of the two nuisance parameters was estimated with a correctly specified model. Under full misspecification, the bias of the double robust estimators remained smaller than that of the misspecified inverse propensity estimator, but larger than that of the iterated conditional expectation estimator. Weighted estimators tended to show better performance than the covariate-based estimators. As positivity violations increased, the mean squared error and bias of all estimators considered became worse, with covariate-based double robust estimators especially susceptible. Applied analyses showed similar estimates at most time points, with the important exception of the inverse propensity estimator, which deviated markedly as positivity violations increased. Given its efficiency, ability to respect the parameter space, and observed performance, we recommend the pooled and weighted targeted minimum loss-based estimator.
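
As an illustration of estimator (i), the following is a minimal sketch of iterated conditional expectation (sequential) g-computation for a two time-point static regime (set both treatments to 1); the linear learner is a placeholder for the data-adaptive algorithms discussed in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def ice_gcomp_two_points(L0, A0, L1, A1, Y, make_learner=LinearRegression):
    """Sequential regression: regress backwards in time, evaluating each
    fit with treatment set to the regime of interest, then average."""
    # step 1: regress Y on the full history, predict under A1 = 1
    m1 = make_learner().fit(np.column_stack([L0, A0, L1, A1]), Y)
    Q1 = m1.predict(np.column_stack([L0, A0, L1, np.ones_like(A1)]))

    # step 2: regress the iterated outcome on earlier history, predict under A0 = 1
    m0 = make_learner().fit(np.column_stack([L0, A0]), Q1)
    Q0 = m0.predict(np.column_stack([L0, np.ones_like(A0)]))
    return float(np.mean(Q0))
```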


2020 · Vol 3 (2) · pp. 119-148
Author(s):  
H. S. Battey
D. R. Cox

Abstract Parametric statistical problems involving both large amounts of data and models with many parameters raise issues that are explicitly or implicitly differential geometric. When the number of nuisance parameters is comparable to the sample size, alternative approaches to inference on interest parameters treat the nuisance parameters either as random variables or as arbitrary constants. The two approaches are compared in the context of parametric survival analysis, with emphasis on the effects of misspecification of the random effects distribution. Notably, we derive a detailed expression for the precision of the maximum likelihood estimator of an interest parameter when the assumed random effects model is erroneous, recovering simply derived results based on the Fisher information in the correctly specified situation but otherwise illustrating complex dependence on other aspects. Methods of assessing model adequacy are given. The results are both directly applicable and illustrate general principles of inference when there is a high-dimensional nuisance parameter. Open problems with an information geometrical bearing are outlined.


Author(s):  
Mark J. van der Laan

Abstract In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special case, we also demonstrate the required targeting of the propensity score for the inverse probability of treatment weighted estimator using super-learning to fit the propensity score.
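
A greatly simplified sketch of the collaborative selection idea, under our own naming: each candidate propensity fit is paired with a TMLE update of the outcome regression, and the candidate whose targeted fit attains the smallest cross-validated loss is kept. The `tmle_update` hook and the squared-error loss below are placeholders for the paper's loss-based machinery, not its exact algorithm.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_propensity_ctmle_style(W, A, Y, Qbar, g_candidates, tmle_update):
    """Pick the propensity fit whose paired TMLE update of Qbar has the
    smallest cross-validated risk; tmle_update returns a function
    Qstar(a, W) giving the targeted outcome regression."""
    best, best_risk = None, np.inf
    for g in g_candidates:
        risks = []
        for train, test in KFold(n_splits=5).split(W):
            Qstar = tmle_update(W[train], A[train], Y[train], Qbar, g)
            risks.append(np.mean((Y[test] - Qstar(A[test], W[test])) ** 2))
        if np.mean(risks) < best_risk:
            best, best_risk = g, np.mean(risks)
    return best
```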

