A robust conditional maximum likelihood estimator for generalized linear models with a dispersion parameter

Test ◽ 2018 ◽ Vol 28 (1) ◽ pp. 223-241
Author(s): Alfio Marazzi, Marina Valdora, Victor Yohai, Michael Amiguet

2013 ◽ Vol 55 (3) ◽ pp. 643-652
Author(s): Gauss M. Cordeiro, Denise A. Botter, Alexsandro B. Cavalcanti, Lúcia P. Barroso

2001 ◽ Vol 17 (5) ◽ pp. 913-932
Author(s): Jinyong Hahn

In this paper, I calculate the semiparametric information bound in two dynamic panel data logit models with individual-specific effects. In such a model without any other regressors, it is well known that the conditional maximum likelihood estimator yields a √n-consistent estimator. In the case where the model includes strictly exogenous continuous regressors, Honoré and Kyriazidou (2000, Econometrica 68, 839–874) suggest a consistent estimator whose rate of convergence is slower than √n. The information bounds calculated in this paper suggest that the conditional maximum likelihood estimator is not efficient for models without any other regressors and that √n-consistent estimation is infeasible in more general models.
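As background for why conditioning delivers a fixed-effect-free criterion in logit models, the static two-period case (a standard textbook illustration, not Hahn's dynamic setup) makes the mechanism explicit: conditioning on the sufficient statistic $y_{i1}+y_{i2}$ cancels the individual effect $\alpha_i$,

$$
P\bigl(y_{i1}=0,\,y_{i2}=1 \mid y_{i1}+y_{i2}=1,\,x_i,\,\alpha_i\bigr)
= \frac{\exp(x_{i2}'\beta)}{\exp(x_{i1}'\beta)+\exp(x_{i2}'\beta)},
$$

so the conditional likelihood depends on $\beta$ alone. In the dynamic models Hahn studies, the analogous conditioning events are more delicate, which is what drives the efficiency and rate results summarized above.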


Biometrika ◽ 2020
Author(s): Ioannis Kosmidis, David Firth

Summary: Penalization of the likelihood by Jeffreys’ invariant prior, or a positive power thereof, is shown to produce finite-valued maximum penalized likelihood estimates in a broad class of binomial generalized linear models. The class of models includes logistic regression, where the Jeffreys-prior penalty is known additionally to reduce the asymptotic bias of the maximum likelihood estimator, and models with other commonly used link functions, such as probit and log-log. Shrinkage towards equiprobability across observations, relative to the maximum likelihood estimator, is established theoretically and studied through illustrative examples. Some implications of finiteness and shrinkage for inference are discussed, particularly when inference is based on Wald-type procedures. A widely applicable procedure is developed for computation of maximum penalized likelihood estimates, by using repeated maximum likelihood fits with iteratively adjusted binomial responses and totals. These theoretical results and methods underpin the increasingly widespread use of reduced-bias and similarly penalized binomial regression models in many applied fields.
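For the logistic-regression special case, the "adjusted responses and totals" idea in the summary corresponds to the well-known Firth adjustment: an ML fit with pseudo-responses y + h/2 and totals 1 + h, where h are the hat-matrix leverages. The sketch below is a minimal NumPy implementation of that adjusted-score iteration; it is an illustration of the general idea, not the authors' own algorithm, and the function name and convergence settings are choices made here.

```python
import numpy as np

def firth_logistic(X, y, max_iter=200, tol=1e-10):
    """Jeffreys-prior (Firth) penalized logistic regression.

    Newton steps on the adjusted score
        U*(beta) = X' (y - mu + h * (1/2 - mu)),
    which equals the ML score for responses y + h/2 and totals 1 + h.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        w = mu * (1.0 - mu)                       # logistic IRLS weights
        XtWX_inv = np.linalg.inv(X.T @ (X * w[:, None]))
        # leverages h_i = w_i * x_i' (X'WX)^{-1} x_i of the weighted hat matrix
        h = w * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)
        score = X.T @ (y - mu + h * (0.5 - mu))   # Jeffreys-adjusted score
        step = XtWX_inv @ score
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

A useful check of the finiteness result: on completely separated data, where the ML estimate diverges, this iteration still converges to finite coefficients with fitted probabilities shrunk away from 0 and 1.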

