On the Robustness of LISREL (Maximum Likelihood Estimation) Against Small Sample Size and Non-normality.

1984 ◽  
Vol 79 (386) ◽  
pp. 480 ◽  
Author(s):  
Robert M. Pruzek ◽  
Anne Boomsma


1987 ◽  
Vol 1 (3) ◽  
pp. 349-366
Author(s):  
Jaxk H. Reeves ◽  
Ashim Mallik ◽  
William P. McCormick

A sequential procedure to select optimal prices based on maximum likelihood estimation is considered. Asymptotic properties of the pricing scheme and the concomitant estimation problem are examined. For small sample sizes, simulation results show that the proposed procedure has high efficiency relative to the best procedure when the parameter is known.
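
The abstract does not specify the demand model, so the following is only a minimal sketch under assumed ingredients: a hypothetical Bernoulli purchase model P(buy | price) = expit(a + b·price), parameters re-estimated by maximum likelihood after each customer, and the next price set at the revenue maximizer implied by the current estimate. All names and values are illustrative, not taken from the paper.

    # Minimal sketch of a sequential ML-based pricing scheme (illustrative assumptions only)
    import numpy as np
    from scipy.special import expit
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    a_true, b_true = 2.0, -1.0                      # hypothetical true demand parameters
    grid = np.linspace(0.5, 5.0, 200)               # admissible prices

    def neg_loglik(params, prices, buys):
        # Bernoulli log-likelihood of the assumed purchase model
        prices, buys = np.asarray(prices), np.asarray(buys)
        p = np.clip(expit(params[0] + params[1] * prices), 1e-9, 1 - 1e-9)
        return -np.sum(buys * np.log(p) + (1 - buys) * np.log(1 - p))

    # pilot phase at two fixed prices so that (a, b) is identifiable
    prices = [1.0] * 20 + [4.0] * 20
    buys = [rng.binomial(1, expit(a_true + b_true * p)) for p in prices]

    # sequential phase: re-estimate (a, b) by ML, then price at the estimated revenue maximizer
    for t in range(60):
        fit = minimize(neg_loglik, x0=np.zeros(2), args=(prices, buys), method="Nelder-Mead")
        a_hat, b_hat = fit.x
        next_price = grid[np.argmax(grid * expit(a_hat + b_hat * grid))]
        prices.append(next_price)
        buys.append(rng.binomial(1, expit(a_true + b_true * next_price)))

    oracle_price = grid[np.argmax(grid * expit(a_true + b_true * grid))]
    print(f"final chosen price {prices[-1]:.2f} vs oracle price {oracle_price:.2f}")

The efficiency comparison reported in the abstract refers to the known-parameter procedure (the oracle price above); the sketch only shows the general structure of such a scheme.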


1996 ◽  
Vol 12 (1) ◽  
pp. 1-29 ◽  
Author(s):  
Richard A. Davis ◽  
William T.M. Dunsmuir

This paper considers maximum likelihood estimation for the moving average parameter θ in an MA(1) model when θ is equal to or close to 1. A derivation of the limit distribution of the estimate θ_LM, defined as the largest of the local maximizers of the likelihood, is given here for the first time. The theory presented covers, in a unified way, cases where the true parameter is strictly inside the unit circle as well as the noninvertible case where it is on the unit circle. The asymptotic distribution of the maximum likelihood estimator θ_MLE is also described and shown to differ, but only slightly, from that of θ_LM. Of practical significance is the fact that the asymptotic distribution for either estimate is surprisingly accurate even for small sample sizes and for values of the moving average parameter considerably far from the unit circle.
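
As an illustration only (not the authors' derivation), the sketch below assumes Gaussian noise and uses a grid search standing in for the exact maximizer: it evaluates the exact MA(1) Gaussian log-likelihood with σ² profiled out and records how often the maximizer lands exactly on the unit circle ("pile-up") when the true θ equals 1. Sample size, grid, and replication count are illustrative.

    # Minimal simulation sketch of MA(1) maximum likelihood near the unit circle
    import numpy as np

    def ma1_profile_loglik(x, theta):
        # exact Gaussian log-likelihood of an MA(1), with sigma^2 concentrated out
        n = len(x)
        R = np.eye(n) * (1.0 + theta ** 2)          # autocorrelation structure of MA(1)
        off = np.arange(n - 1)
        R[off, off + 1] = theta
        R[off + 1, off] = theta
        _, logdet = np.linalg.slogdet(R)
        sigma2_hat = x @ np.linalg.solve(R, x) / n   # profiled innovation variance
        return -0.5 * (n * np.log(sigma2_hat) + logdet + n)

    rng = np.random.default_rng(0)
    n, theta_true, reps = 50, 1.0, 200
    grid = np.linspace(0.0, 1.0, 101)
    estimates = []
    for _ in range(reps):
        z = rng.standard_normal(n + 1)
        x = z[1:] + theta_true * z[:-1]              # MA(1) series with theta on the unit circle
        loglik = np.array([ma1_profile_loglik(x, t) for t in grid])
        estimates.append(grid[np.argmax(loglik)])
    estimates = np.array(estimates)
    print(f"fraction of estimates exactly at theta = 1: {np.mean(estimates == 1.0):.2f}")
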


2020 ◽  
Vol 29 (11) ◽  
pp. 3166-3178 ◽  
Author(s):  
Ben Van Calster ◽  
Maarten van Smeden ◽  
Bavo De Cock ◽  
Ewout W Steyerberg

When developing risk prediction models on datasets with limited sample size, shrinkage methods are recommended. Earlier studies showed that shrinkage results in better predictive performance on average. This simulation study aimed to investigate the variability of regression shrinkage on predictive performance for a binary outcome. We compared standard maximum likelihood with the following shrinkage methods: uniform shrinkage (likelihood-based and bootstrap-based), penalized maximum likelihood (ridge) methods, LASSO logistic regression, adaptive LASSO, and Firth’s correction. In the simulation study, we varied the number of predictors and their strength, the correlation between predictors, the event rate of the outcome, and the events per variable. In terms of results, we focused on the calibration slope. The slope indicates whether risk predictions are too extreme (slope < 1) or not extreme enough (slope > 1). The results can be summarized into three main findings. First, shrinkage improved calibration slopes on average. Second, the between-sample variability of calibration slopes was often increased relative to maximum likelihood. In contrast to other shrinkage approaches, Firth’s correction had a small shrinkage effect but showed low variability. Third, the correlation between the estimated shrinkage and the optimal shrinkage to remove overfitting was typically negative, with Firth’s correction as the exception. We conclude that, despite improved performance on average, shrinkage often worked poorly in individual datasets, in particular when it was most needed. The results imply that shrinkage methods do not solve problems associated with small sample size or low number of events per variable.
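
The following sketch is not the paper's simulation design; it only illustrates how a calibration slope of the kind studied can be computed, using hypothetical predictor strengths, sample sizes, and penalty settings: develop a model on a small sample, compute its linear predictor on a large validation sample, and take the slope from a logistic refit of the outcome on that linear predictor.

    # Minimal sketch of the calibration slope for maximum likelihood vs ridge (illustrative settings)
    import numpy as np
    import statsmodels.api as sm
    from scipy.special import expit
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    p, beta = 10, np.full(10, 0.3)                    # assumed true coefficient vector

    def simulate(n):
        X = rng.standard_normal((n, p))
        return X, rng.binomial(1, expit(X @ beta))

    def calibration_slope(model, X_val, y_val):
        # linear predictor on validation data, then the slope of a logistic refit on it
        lp = X_val @ model.coef_.ravel() + model.intercept_[0]
        refit = sm.Logit(y_val, sm.add_constant(lp)).fit(disp=0)
        return refit.params[1]                        # slope < 1: predictions too extreme

    X_dev, y_dev = simulate(100)                      # small development sample
    X_val, y_val = simulate(50_000)                   # large validation sample

    # C=1e6 makes the ridge penalty negligible, approximating plain maximum likelihood
    ml    = LogisticRegression(C=1e6, max_iter=1000).fit(X_dev, y_dev)
    ridge = LogisticRegression(C=0.5, max_iter=1000).fit(X_dev, y_dev)

    print("calibration slope, maximum likelihood:", round(calibration_slope(ml, X_val, y_val), 3))
    print("calibration slope, ridge:             ", round(calibration_slope(ridge, X_val, y_val), 3))

Repeating this over many development samples would show the between-sample variability of the slope that the study emphasizes; a single run only shows the metric itself.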


1987 ◽  
Vol 12 (4) ◽  
pp. 369-381 ◽  
Author(s):  
Kathy E. Green ◽  
Richard M. Smith

This paper compares two methods of estimating component difficulties for dichotomous test data. Simulated data are used to study the effects of sample size, collinearity, a measurement disturbance, and multidimensionality on the estimation of component difficulties. The two methods of estimation used in this study were conditional maximum likelihood estimation of parameters specified by the linear logistic test model (LLTM) and regression of estimated Rasch item difficulties on component frequencies. The analysis indicates that the two methods produce similar results in all comparisons. Neither method worked well in the presence of an incorrectly specified structure or collinearity in the component frequencies. However, both methods appear to be fairly robust in the presence of measurement disturbances as long as the number of cases is large (n = 1,000). For the case of fitting data with uncorrelated component frequencies, 30 cases were sufficient to recover the generating parameters accurately.
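
A minimal sketch of the second, regression-based method only, with a hypothetical component structure: simulated item difficulties (standing in here for estimated Rasch difficulties) are regressed on the matrix of component frequencies to recover the component difficulties. The number of items, components, and the noise level are illustrative assumptions.

    # Minimal sketch of regressing item difficulties on component frequencies
    import numpy as np

    rng = np.random.default_rng(7)
    n_items, n_components = 30, 4
    eta_true = np.array([0.5, -0.3, 1.0, 0.2])        # hypothetical component difficulties

    # Q[i, k] = number of times component k is required by item i (component frequencies)
    Q = rng.integers(0, 3, size=(n_items, n_components))

    # item difficulties generated from the components; these stand in for Rasch estimates
    item_difficulty = Q @ eta_true + rng.normal(0.0, 0.1, n_items)

    # LLTM-style decomposition by ordinary least squares regression
    eta_hat, *_ = np.linalg.lstsq(Q, item_difficulty, rcond=None)
    print("true component difficulties:     ", eta_true)
    print("recovered component difficulties:", eta_hat.round(2))

If columns of the component-frequency matrix are collinear, the least-squares decomposition is not identifiable, which is consistent with the poor performance under collinearity reported above.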


2016 ◽  
Vol 21 (1) ◽  
pp. 127-135 ◽  
Author(s):  
F. A. Nava ◽  
V. H. Márquez-Ramírez ◽  
F. R. Zúñiga ◽  
L. Ávila-Barrientos ◽  
C. B. Quinteros
