The gradient test and its finite sample size properties in a conditional maximum likelihood and psychometric modeling context

Author(s):  
Clemens Draxler ◽  
Andreas Kurz ◽  
Artur J. Lemonte


2001 ◽
Vol 17 (5) ◽  
pp. 913-932 ◽  
Author(s):  
Jinyong Hahn

In this paper, I calculate the semiparametric information bound in two dynamic panel data logit models with individual specific effects. In such a model without any other regressors, it is well known that the conditional maximum likelihood estimator yields a √n-consistent estimator. In the case where the model includes strictly exogenous continuous regressors, Honoré and Kyriazidou (2000, Econometrica 68, 839–874) suggest a consistent estimator whose rate of convergence is slower than √n. Information bounds calculated in this paper suggest that the conditional maximum likelihood estimator is not efficient for models without any other regressor and that √n-consistent estimation is infeasible in more general models.
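The conditioning idea behind the estimator discussed above can be illustrated in the simplest static two-period panel logit: conditioning on the sufficient statistic y1 + y2 eliminates the individual-specific effect from the likelihood. The following is a minimal sketch of that algebraic fact, not the dynamic model analysed in the paper; all function names and parameter values are illustrative assumptions.

```python
import math

def joint_prob(y1, y2, alpha, x1, x2, beta):
    """Joint probability of (y1, y2) in a two-period panel logit with
    individual effect alpha: P(y_t = 1) = logistic(alpha + beta * x_t)."""
    def p(y, x):
        pr = 1.0 / (1.0 + math.exp(-(alpha + beta * x)))
        return pr if y == 1 else 1.0 - pr
    return p(y1, x1) * p(y2, x2)

def conditional_prob(alpha, x1, x2, beta):
    """P(y1 = 0, y2 = 1 | y1 + y2 = 1): conditioning on the sufficient
    statistic y1 + y2 removes alpha from the likelihood."""
    num = joint_prob(0, 1, alpha, x1, x2, beta)
    den = num + joint_prob(1, 0, alpha, x1, x2, beta)
    return num / den

# The conditional probability is free of the individual effect: the same
# value results for very different alpha (illustrative numbers assumed).
vals = [conditional_prob(alpha, x1=0.2, x2=1.1, beta=0.7)
        for alpha in (-2.0, 0.0, 3.5)]
```

Analytically the conditional probability reduces to exp(beta * (x2 - x1)) / (1 + exp(beta * (x2 - x1))), which is why the conditional maximum likelihood estimator of beta can dispense with the fixed effects entirely.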


Metrika ◽  
2019 ◽  
Vol 83 (2) ◽  
pp. 243-254
Author(s):  
Mathias Lindholm ◽  
Felix Wahl

Abstract In the present note we consider general linear models where the covariates may be both random and non-random, and where the only restrictions on the error terms are that they are independent and have finite fourth moments. For this class of models we analyse the variance parameter estimator. In particular we obtain finite sample size bounds for the variance of the variance parameter estimator which are independent of covariate information regardless of whether the covariates are random or not. For the case with random covariates this immediately yields bounds on the unconditional variance of the variance estimator—a situation which in general is analytically intractable. The situation with random covariates is illustrated in an example where a certain vector autoregressive model which appears naturally within the area of insurance mathematics is analysed. Further, the obtained bounds are sharp in the sense that both the lower and upper bound will converge to the same asymptotic limit when scaled with the sample size. By using the derived bounds it is simple to show convergence in mean square of the variance parameter estimator for both random and non-random covariates. Moreover, the derivation of the bounds for the above general linear model is based on a lemma which applies in greater generality. This is illustrated by applying the used techniques to a class of mixed effects models.
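As a concrete point of reference for the covariate-independence discussed above: in the special case of Gaussian errors, the classical variance of the variance parameter estimator, 2 * sigma^4 / (n - p), contains no covariate information at all. The sketch below checks this by Monte Carlo for a simple linear model with random covariates; it is an illustration under assumed parameter values, not the paper's general fourth-moment setting.

```python
import random

def ols_variance_estimator(x, y):
    """Unbiased variance parameter estimator s^2 = RSS / (n - p) for the
    simple linear model y = b0 + b1 * x + e (p = 2 parameters)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    rss = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    return rss / (n - 2)

# Monte Carlo: with true sigma^2 = 4 and n = 50, the mean of s^2 should be
# near 4 and its sampling variance near 2 * sigma^4 / (n - p) = 2/3,
# a quantity that does not involve the (random) covariates.
rng = random.Random(0)
n, sigma2 = 50, 4.0
estimates = []
for _ in range(2000):
    x = [rng.uniform(0.0, 10.0) for _ in range(n)]          # random covariates
    y = [1.0 + 0.5 * xi + rng.gauss(0.0, sigma2 ** 0.5) for xi in x]
    estimates.append(ols_variance_estimator(x, y))
mean_s2 = sum(estimates) / len(estimates)
var_s2 = sum((e - mean_s2) ** 2 for e in estimates) / len(estimates)
```

The note's contribution is to bound this variance without the normality assumption, requiring only independent errors with finite fourth moments.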


1999 ◽  
Vol 11 (2) ◽  
pp. 541-563 ◽  
Author(s):  
Anders Krogh ◽  
Søren Kamaric Riis

A general framework for hybrids of hidden Markov models (HMMs) and neural networks (NNs) called hidden neural networks (HNNs) is described. The article begins by reviewing standard HMMs and estimation by conditional maximum likelihood, which is used by the HNN. In the HNN, the usual HMM probability parameters are replaced by the outputs of state-specific neural networks. As opposed to many other hybrids, the HNN is normalized globally and therefore has a valid probabilistic interpretation. All parameters in the HNN are estimated simultaneously according to the discriminative conditional maximum likelihood criterion. The HNN can be viewed as an undirected probabilistic independence network (a graphical model), where the neural networks provide a compact representation of the clique functions. An evaluation of the HNN on the task of recognizing broad phoneme classes in the TIMIT database shows clear performance gains compared to standard HMMs tested on the same task.
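The key structural point of the abstract above, global normalisation, can be sketched in a toy model: replace the HMM's probability parameters with arbitrary positive scores (stand-ins for the state-specific network outputs) and divide every path score by a single partition function over all state paths. This is a brute-force illustrative sketch, not the HNN's actual architecture or training procedure; the scoring function and weights are assumptions.

```python
import itertools
import math

STATES = (0, 1)

def emission_score(state, obs):
    # Hypothetical stand-in for a state-specific neural network output:
    # any positive score is allowed, since normalisation happens globally.
    return math.exp(0.5 * obs if state == 1 else -0.3 * obs)

# Unnormalised transition weights (illustrative values).
TRANS = {(0, 0): 1.2, (0, 1): 0.4, (1, 0): 0.5, (1, 1): 1.1}

def path_score(path, observations):
    """Product of emission and transition scores along one state path."""
    s = emission_score(path[0], observations[0])
    for t in range(1, len(path)):
        s *= TRANS[(path[t - 1], path[t])] * emission_score(path[t], observations[t])
    return s

def sequence_distribution(observations):
    """Global normalisation: dividing each path score by the partition
    function (sum over all state paths) yields a valid distribution."""
    T = len(observations)
    scores = {p: path_score(p, observations)
              for p in itertools.product(STATES, repeat=T)}
    Z = sum(scores.values())
    return {p: s / Z for p, s in scores.items()}

dist = sequence_distribution([0.2, 1.5, -0.7])
```

Enumerating all paths is exponential in the sequence length; a practical implementation would compute the partition function with the forward algorithm, but the probabilistic interpretation is the same.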

