On asymptotic normality of the least square estimators of an infinite-dimensional parameter

1993 ◽ Vol 45 (1) ◽ pp. 48-58 ◽ Author(s): A. Ya. Dorogovtsev
2014 ◽ Vol 30 (5) ◽ pp. 1021-1076 ◽ Author(s): Herman J. Bierens

This paper considers sieve maximum likelihood estimation of seminonparametric (SNP) models with an unknown density function as a non-Euclidean parameter, next to a finite-dimensional parameter vector. The density function involved is modeled via an infinite series expansion, so that the actual parameter space is infinite-dimensional. It will be shown that under low-level conditions the sieve estimators of these parameters are consistent, and the estimators of the Euclidean parameters are $\sqrt{N}$-asymptotically normal, given a random sample of size $N$. The latter result is derived in a different way than in the sieve estimation literature. It appears that this asymptotic normality result is in essence the same as for the finite-dimensional case. This approach is motivated and illustrated by an SNP discrete choice model.
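To make the series-expansion idea concrete, here is a minimal sketch (not Bierens's actual implementation; all names are illustrative): model the unknown density as a squared Hermite series times a Gaussian weight, truncate the series at a finite order (the sieve), and maximize the sample log-likelihood over the truncated coefficient vector.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.optimize import minimize
from scipy.stats import norm


def snp_logpdf(x, theta):
    """Log of an SNP density f(x) = P(x)^2 phi(x) / ||theta||^2, where
    P is a series in the orthonormal probabilists' Hermite basis
    He_k / sqrt(k!).  Orthonormality under phi makes the normalizing
    constant exactly sum(theta_k^2)."""
    scale = np.array([math.sqrt(math.factorial(k)) for k in range(len(theta))])
    P = hermeval(x, theta / scale)
    return 2.0 * np.log(np.abs(P) + 1e-300) + norm.logpdf(x) - np.log(theta @ theta)


def fit_snp(sample, order=3):
    """Sieve MLE: maximize the log-likelihood over the first `order`+1
    series coefficients (the sieve truncation level)."""
    def nll(theta):
        return -snp_logpdf(sample, theta).sum()

    theta0 = np.zeros(order + 1)
    theta0[0] = 1.0  # start at the pure Gaussian density
    res = minimize(nll, theta0, method="Nelder-Mead")
    return res.x / np.linalg.norm(res.x)  # the density is scale-invariant in theta


rng = np.random.default_rng(0)
sample = rng.normal(size=500)        # true density is N(0,1)
theta_hat = fit_snp(sample, order=3)
# For Gaussian data the fitted series should load mostly on He_0,
# i.e. |theta_hat[0]| should be close to 1 after normalization.
```

In the paper's setting the truncation order grows with the sample size $N$, which is what makes the parameter space effectively infinite-dimensional; the fixed `order=3` here is purely for illustration.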


2021 ◽ Vol 47 (2) ◽ pp. 1-34 ◽ Author(s): Umberto Villa, Noemi Petra, Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
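The low-rank posterior-covariance construction mentioned above can be sketched in a few lines of dense linear algebra. This is a toy Gaussian linear inverse problem standing in for the PDE case, not hIPPYlib's API: with prior covariance $\Gamma_{\mathrm{prior}} = LL^T$ and data-misfit Hessian $H_{\mathrm{misfit}}$, one retains the dominant eigenpairs of the prior-preconditioned misfit Hessian $L^T H_{\mathrm{misfit}} L = V\Lambda V^T$ and applies the Sherman-Morrison-Woodbury identity $\Gamma_{\mathrm{post}} = \Gamma_{\mathrm{prior}} - (LV)\,\mathrm{diag}\!\big(\tfrac{\lambda_i}{1+\lambda_i}\big)\,(LV)^T$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 200, 20, 20           # parameter dim, observation dim, retained rank

# Toy Gaussian linear inverse problem (hypothetical stand-in for the PDE case).
A = rng.standard_normal((n, n))
Gamma_prior = A @ A.T / n + np.eye(n)   # SPD prior covariance
J = rng.standard_normal((m, n))         # linearized parameter-to-observable map
noise_prec = 4.0                        # Gamma_noise^{-1} = noise_prec * I
H_misfit = noise_prec * J.T @ J         # Hessian of the negative log-likelihood

# Dominant eigenpairs of the prior-preconditioned misfit Hessian.
L = np.linalg.cholesky(Gamma_prior)     # Gamma_prior = L L^T
lam, V = np.linalg.eigh(L.T @ H_misfit @ L)
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]   # keep the r largest eigenvalues

# Sherman-Morrison-Woodbury low-rank update of the prior covariance.
D = np.diag(lam / (1.0 + lam))
Gamma_post_lr = Gamma_prior - (L @ V) @ D @ (L @ V).T

# Compare against the dense formula (H_misfit + Gamma_prior^{-1})^{-1}.
Gamma_post = np.linalg.inv(H_misfit + np.linalg.inv(Gamma_prior))
err = np.linalg.norm(Gamma_post_lr - Gamma_post) / np.linalg.norm(Gamma_post)
```

Because the misfit Hessian here has rank at most `m = 20`, retaining `r = 20` eigenpairs makes the low-rank formula exact up to round-off; in the PDE setting the eigenvalues decay rapidly and a small `r`, independent of the parameter dimension, already gives an accurate approximation.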

