Asymptotic Normality of an Estimator of an Infinite-Dimensional Parameter in a Model with a C¹-Smooth Regression Function

1995 · pp. 251-256
2014 · Vol 30 (5) · pp. 1021-1076
Author(s): Herman J. Bierens

This paper considers sieve maximum likelihood estimation of seminonparametric (SNP) models with an unknown density function as a non-Euclidean parameter, alongside a finite-dimensional parameter vector. The density involved is modeled via an infinite series expansion, so that the actual parameter space is infinite-dimensional. It is shown that under low-level conditions the sieve estimators of these parameters are consistent, and the estimators of the Euclidean parameters are $\sqrt{N}$ asymptotically normal, given a random sample of size $N$. The latter result is derived in a different way than in the sieve estimation literature; it is in essence the same asymptotic normality result as in the finite-dimensional case. The approach is motivated and illustrated by an SNP discrete choice model.
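The series-expansion idea can be illustrated with a minimal sketch (hypothetical, not Bierens's actual estimator or conditions; all names here are invented for illustration): model the density as a squared truncated polynomial times a Gaussian kernel, normalized to integrate to one, and maximize the likelihood over the truncated coefficients. The truncation order `K` plays the role of the sieve.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def snp_density(x, theta):
    """Hypothetical SNP-type density: squared truncated polynomial times a
    standard normal kernel, normalized numerically to integrate to one."""
    grid = np.linspace(-10.0, 10.0, 2001)
    pg = np.polyval(theta[::-1], grid)            # theta[0] + theta[1]*u + ...
    const = np.sum(pg**2 * norm.pdf(grid)) * (grid[1] - grid[0])
    return np.polyval(theta[::-1], x) ** 2 * norm.pdf(x) / const

def neg_loglik(theta, x):
    return -np.sum(np.log(snp_density(x, theta) + 1e-300))

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=500)                # sample from a shifted normal

K = 3                                             # sieve (truncation) order
theta0 = np.zeros(K + 1)
theta0[0] = 1.0                                   # start at the standard normal
res = minimize(neg_loglik, theta0, args=(x,), method="Nelder-Mead")
fhat = snp_density(np.array([0.5]), res.x)[0]     # fitted density at the true mode
```

In a full sieve analysis `K` would grow with the sample size; here it is fixed purely for illustration.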


2021 · Vol 47 (2) · pp. 1-34
Author(s): Umberto Villa, Noemi Petra, Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point; its construction is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
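The low-rank Laplace-approximation structure described above can be sketched in a small finite-dimensional setting (plain NumPy, not hIPPYlib's API; sizes and symbols are illustrative assumptions). For a linear-Gaussian problem with prior covariance gamma*I, the posterior covariance is the inverse Hessian of the negative log-posterior at the MAP point, and a truncated eigendecomposition of the prior-preconditioned data-misfit Hessian recovers it at a cost set by the (low) rank, not the parameter dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par, r = 20, 50, 20          # few observations -> rank-deficient misfit Hessian
A = rng.standard_normal((n_obs, n_par))
sigma2, gamma = 0.1**2, 1.0           # noise variance; prior covariance is gamma*I

m_true = rng.standard_normal(n_par)
d = A @ m_true + 0.1 * rng.standard_normal(n_obs)

# MAP point: minimizer of the quadratic negative log-posterior
# (for a linear-Gaussian problem a single Newton step is exact)
H = A.T @ A / sigma2 + np.eye(n_par) / gamma
m_map = np.linalg.solve(H, A.T @ d / sigma2)

# Low-rank Laplace approximation of the posterior covariance:
# eigendecompose the prior-preconditioned data-misfit Hessian gamma*A^T A/sigma2
lam, V = np.linalg.eigh(gamma * A.T @ A / sigma2)
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]          # keep the r dominant eigenpairs
Gamma_post = gamma * (np.eye(n_par) - V @ np.diag(lam / (1 + lam)) @ V.T)

exact = np.linalg.inv(H)                           # full inverse Hessian, for comparison
err = np.linalg.norm(Gamma_post - exact) / np.linalg.norm(exact)
```

Because the data-misfit Hessian here has rank at most `n_obs`, keeping `r = n_obs` eigenpairs reproduces the exact posterior covariance; in the PDE setting the same identity is applied matrix-free.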


2016 · Vol 219 (5) · pp. 731-742
Author(s): V. A. Ershov, I. A. Ibragimov

2002 · Vol 18 (2) · pp. 420-468
Author(s): Oliver Linton, Yoon-Jae Whang

We introduce a kernel-based estimator of the density function and regression function for data that have been grouped into family totals. We allow for a common intrafamily component but require that observations from different families be independent. We establish consistency and asymptotic normality for our procedures. As usual, the rates of convergence can be very slow depending on the behavior of the characteristic function at infinity. We investigate the practical performance of our method in a simple Monte Carlo experiment.
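The role of the characteristic function, and why its decay at infinity governs the convergence rate, can be sketched with a simplified deconvolution-style estimator (a hypothetical toy version, not the paper's estimator: families of size two, i.i.d. components symmetric about zero, no intrafamily common component, known family size). The component characteristic function is recovered as the square root of the empirical characteristic function of the totals and then Fourier-inverted with a spectral cutoff:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Observed data: family totals Y = X1 + X2, with individual X ~ N(0, 1)
Y = rng.standard_normal(n) + rng.standard_normal(n)

T = 2.0                                  # spectral cutoff: larger T = less bias, more variance
t = np.linspace(-T, T, 401)
phi_Y = np.mean(np.exp(1j * np.outer(t, Y)), axis=1)   # empirical c.f. of the totals
# For i.i.d. components symmetric about 0, phi_X = sqrt(phi_Y) with phi_Y real >= 0;
# clip the noisy empirical version before taking the square root
phi_X = np.sqrt(np.clip(phi_Y.real, 0.0, None))

# Fourier inversion on a grid (plain Riemann sum over the truncated band)
x = np.linspace(-4, 4, 161)
dt = t[1] - t[0]
fhat = np.real(np.exp(-1j * np.outer(x, t)) @ phi_X) * dt / (2 * np.pi)

f_true = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
max_err = np.max(np.abs(fhat - f_true))
```

The square root amplifies sampling noise wherever the characteristic function of the totals is small, which is why a faster decay of the characteristic function at infinity forces a smaller cutoff and a slower rate.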


Test
2019 · Vol 29 (4) · pp. 966-988
Author(s): Francesco Bravo

This paper considers estimation and inference for a class of varying coefficient models in which some of the responses and some of the covariates are missing at random and outliers are present. The paper proposes two general estimators, and a computationally attractive and asymptotically equivalent one-step version of them, that combine inverse probability weighting and robust local linear estimation. The paper also considers inference for the unknown infinite-dimensional parameter and proposes two Wald statistics that are shown to have power under a sequence of local Pitman drifts and are consistent as the drifts diverge. The results of the paper are illustrated with three examples: robust local generalized estimating equations, robust local quasi-likelihood and robust local nonlinear least squares estimation. A simulation study shows that the proposed estimators and test statistics have competitive finite sample properties, whereas two empirical examples illustrate the applicability of the proposed estimation and testing methods.
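The combination of inverse probability weighting and robust local linear estimation can be sketched minimally (a hypothetical simplification, not the paper's estimators: a single scalar coefficient function, a known missingness propensity rather than an estimated one, and Huber weights fitted by iteratively reweighted least squares; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
U = rng.uniform(-2, 2, n)
Y = np.sin(U) + 0.3 * rng.standard_normal(n)   # true coefficient function m(u) = sin(u)
Y[rng.random(n) < 0.01] += 5.0                 # a few outliers

pi = 1 / (1 + np.exp(-(0.5 + U)))              # known missingness propensity (assumed)
delta = (rng.random(n) < pi).astype(float)     # delta = 1: response observed

def fit_at(u0, h=0.4, c=1.345, iters=10):
    """IPW robust local linear fit at u0: Huber IRLS with kernel * IPW weights."""
    z = U - u0
    w = delta / pi * np.exp(-0.5 * (z / h) ** 2)   # Gaussian kernel * inverse-prob. weight
    X = np.column_stack([np.ones(n), z])           # local linear design
    beta = np.zeros(2)
    for _ in range(iters):
        r = Y - X @ beta
        s = np.maximum(np.median(np.abs(r[delta == 1])) / 0.6745, 1e-8)  # robust scale
        hub = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))      # Huber weights
        W = w * hub
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * Y))
    return beta[0]                                 # intercept = estimate of m(u0)

m_hat = fit_at(1.0)                                # compare with sin(1) ~ 0.84
```

Missing responses enter with weight zero while the inverse-probability factor reweights the observed ones, and the Huber step downweights the outliers; the paper's one-step estimators replace the full IRLS iteration with a single update from a preliminary fit.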

