Closed-form maximum likelihood estimator for generalized linear models in the case of categorical explanatory variables: application to insurance loss modeling

2019 ◽ Vol 35 (2) ◽ pp. 689-724
Author(s): Alexandre Brouste ◽ Christophe Dutang ◽ Tom Rohmer

2013 ◽ Vol 55 (3) ◽ pp. 643-652
Author(s): Gauss M. Cordeiro ◽ Denise A. Botter ◽ Alexsandro B. Cavalcanti ◽ Lúcia P. Barroso

Biometrika ◽ 2020
Author(s): Ioannis Kosmidis ◽ David Firth

Summary
Penalization of the likelihood by Jeffreys' invariant prior, or a positive power thereof, is shown to produce finite-valued maximum penalized likelihood estimates in a broad class of binomial generalized linear models. The class of models includes logistic regression, where the Jeffreys-prior penalty is known additionally to reduce the asymptotic bias of the maximum likelihood estimator, and models with other commonly used link functions, such as probit and log-log. Shrinkage towards equiprobability across observations, relative to the maximum likelihood estimator, is established theoretically and studied through illustrative examples. Some implications of finiteness and shrinkage for inference are discussed, particularly when inference is based on Wald-type procedures. A widely applicable procedure is developed for computation of maximum penalized likelihood estimates, by using repeated maximum likelihood fits with iteratively adjusted binomial responses and totals. These theoretical results and methods underpin the increasingly widespread use of reduced-bias and similarly penalized binomial regression models in many applied fields.
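For the logistic link, the "repeated maximum likelihood fits with iteratively adjusted binomial responses and totals" described in the summary can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function names, the plain Newton inner solver, and the convergence tolerances are my own choices. It uses the pseudo-data form of the Jeffreys-penalized score for binomial logistic regression, in which responses become y + a*h and totals become m + 2*a*h, with h the leverages of the weighted fit and a the power of the Jeffreys prior (a = 1/2 for the Jeffreys prior itself).

```python
import numpy as np

def ml_logistic(X, y, m, beta=None, tol=1e-10, max_iter=200):
    """Plain Newton-Raphson ML fit for binomial logistic regression.

    y and m may be non-integer: the outer loop below passes adjusted
    pseudo-responses and pseudo-totals.
    """
    beta = np.zeros(X.shape[1]) if beta is None else beta.copy()
    for _ in range(max_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = m * mu * (1.0 - mu)                  # binomial working weights
        step = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - m * mu))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

def jeffreys_logistic(X, y, m, a=0.5, tol=1e-8, max_outer=100):
    """Maximum penalized likelihood for logistic regression with a
    Jeffreys-prior penalty raised to the power a.

    Repeated ML fits with iteratively adjusted responses y + a*h and
    totals m + 2*a*h, where h are the leverages of the weighted fit.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_outer):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = m * mu * (1.0 - mu)
        XtWX = X.T @ (W[:, None] * X)
        # leverage h_i = W_i * x_i' (X'WX)^{-1} x_i
        h = W * np.einsum('ij,ji->i', X, np.linalg.solve(XtWX, X.T))
        beta_new = ml_logistic(X, y + a * h, m + 2.0 * a * h, beta=beta)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

On completely separated data, where the ordinary maximum likelihood slope diverges to infinity, the adjusted pseudo-responses stay strictly inside (0, m*), so each inner ML fit exists and the outer fixed point is finite, illustrating the finiteness and shrinkage-towards-equiprobability properties stated in the summary.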

