Nonlinear random effects mixture models: Maximum likelihood estimation via the EM algorithm

2007 ◽  
Vol 51 (12) ◽  
pp. 6614-6623 ◽  
Author(s):  
Xiaoning Wang ◽  
Alan Schumitzky ◽  
David Z. D’Argenio

1995 ◽  
Vol 12 (5) ◽  
pp. 515-527 ◽  
Author(s):  
Jeanine J. Houwing-Duistermaat ◽  
Lodewijk A. Sandkuijl ◽  
Arthur A. B. Bergen ◽  
Hans C. van Houwelingen

2021 ◽  
Author(s):  
Masahiro Kuroda

Mixture models have become increasingly popular owing to their modeling flexibility and are applied to the clustering and classification of heterogeneous data. The EM algorithm is widely used for the maximum likelihood estimation of mixture models because it converges stably and is simple to implement. Despite these advantages, its main drawbacks are that it may converge only to a local maximum and that convergence can be slow. To avoid local convergence, the EM algorithm is usually run from several different initial values, and it may then take a large number of iterations and a long computation time to find the maximum likelihood estimates. Speeding up the computation of the EM algorithm addresses these problems. We present algorithms that accelerate the convergence of the EM algorithm and apply them to mixture model estimation. Numerical experiments examine the performance of the acceleration algorithms in terms of the number of iterations and computation time.
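As a point of reference for the acceleration discussion, the sketch below (in Python with NumPy, an assumed choice since the abstract names no software) implements one plain EM loop for a univariate Gaussian mixture; the accelerated variants studied in the paper would wrap the parameter-update map defined by a single such sweep. The function name em_gmm_1d and all default settings are illustrative, not taken from the paper.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=500, tol=1e-8, seed=0):
    """Plain EM for a k-component univariate Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                      # mixing proportions
    mu = rng.choice(x, k, replace=False)         # crude initial means
    sigma = np.full(k, x.std())                  # common initial spread
    loglik_old = -np.inf
    for it in range(n_iter):
        # E-step: responsibilities r[j, i] = P(component j | x_i) under current parameters.
        dens = np.exp(-0.5 * ((x - mu[:, None]) / sigma[:, None]) ** 2) \
               / (sigma[:, None] * np.sqrt(2 * np.pi))
        num = w[:, None] * dens
        r = num / num.sum(axis=0)
        # M-step: closed-form updates of weights, means, and standard deviations.
        nk = r.sum(axis=1)
        w = nk / x.size
        mu = (r @ x) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        # Observed-data log-likelihood at the parameters used in this E-step.
        loglik = np.log(num.sum(axis=0)).sum()
        if loglik - loglik_old < tol:            # EM increases the log-likelihood monotonically
            break
        loglik_old = loglik
    return w, mu, sigma, it + 1
```

In practice this loop is restarted from several random initial values, as the abstract notes; acceleration schemes reduce the number of sweeps each restart needs rather than the number of restarts.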


2009 ◽  
Vol 02 (01) ◽  
pp. 9-17
Author(s):  
HONGJIE WEI ◽  
WENZHUAN ZHANG

Longitudinal continuous proportional data are common in many fields, such as biomedical and psychological research. As shown in [16], such data can be fitted with simplex models. Building on the original models of [16], which assumed a fixed effect for every subject, this paper extends them by adding random effects and proposes simplex distribution nonlinear mixed models, a class of nonlinear reproductive dispersion mixed models. By treating the random effects in the models as hypothetical missing data and applying the Metropolis–Hastings (M–H) algorithm, this paper develops an EM algorithm with a Markov chain Monte Carlo method for maximum likelihood estimation in these models. For ease of comparison, the method is illustrated with the same data as in [16], from an ophthalmology study on the use of intraocular gas in retinal surgeries.
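To make the Monte Carlo E-step concrete, here is a minimal sketch of a random-walk Metropolis–Hastings sampler for a scalar random effect, assuming Python/NumPy; the names loglik_b and complete_data_loglik are hypothetical placeholders for the model-specific complete-data log-likelihood, not the simplex-model code used in the paper.

```python
import numpy as np

def mh_sample_random_effect(loglik_b, n_draws=200, step=0.5, b0=0.0, seed=0):
    """Random-walk Metropolis-Hastings draws from the conditional density of a
    scalar random effect b given the data, p(b | y, theta) proportional to exp(loglik_b(b)).

    In a Monte Carlo EM algorithm such draws replace the intractable E-step
    expectation with an empirical average (illustrative sketch only).
    """
    rng = np.random.default_rng(seed)
    b, lp = b0, loglik_b(b0)
    draws = np.empty(n_draws)
    for t in range(n_draws):
        prop = b + step * rng.standard_normal()      # symmetric random-walk proposal
        lp_prop = loglik_b(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept with probability min(1, ratio)
            b, lp = prop, lp_prop
        draws[t] = b                                  # keep current state (accepted or not)
    return draws

# Hypothetical use inside one MCEM iteration: approximate the E-step objective
# for subject i by averaging over the draws, then maximize over theta as usual.
# draws = mh_sample_random_effect(lambda b: complete_data_loglik(y_i, b, theta))
# Q_i is then approximated by np.mean([complete_data_loglik(y_i, b, theta_new) for b in draws])
```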


Psych ◽  
2020 ◽  
Vol 2 (4) ◽  
pp. 209-252
Author(s):  
Marie Beisemann ◽  
Ortrud Wartlick ◽  
Philipp Doebler

The expectation–maximization (EM) algorithm is an important numerical method for maximum likelihood estimation in incomplete data problems. However, convergence of the EM algorithm can be slow, and for this reason, many EM acceleration techniques have been proposed. After a review of acceleration techniques in a unified notation with illustrations, three recently proposed EM acceleration techniques are compared in detail: quasi-Newton methods (QN), “squared” iterative methods (SQUAREM), and parabolic EM (PEM). These acceleration techniques are applied to marginal maximum likelihood estimation with the EM algorithm in one- and two-parameter logistic item response theory (IRT) models for binary data, and their performance is compared. The QN and SQUAREM methods significantly accelerate convergence of the EM algorithm for the two-parameter logistic model in high-dimensional data problems. Compared to the standard EM, all three methods reduce the number of iterations but increase the total number of marginal log-likelihood evaluations per iteration. Efficient approximations of the marginal log-likelihood are hence an important part of implementation.
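For orientation, here is a minimal sketch of the basic SQUAREM cycle referred to above, assuming Python/NumPy and a user-supplied em_map function that performs one EM iteration on a parameter vector; the step-length safeguards and likelihood checks of the published algorithm are omitted, and the names are illustrative.

```python
import numpy as np

def squarem_step(em_map, theta):
    """One SQUAREM acceleration cycle built on top of an arbitrary EM update map.

    em_map: function mapping a parameter vector to the result of one EM iteration.
    Sketch of the basic scheme with the simplest step-length choice from
    Varadhan and Roland; production code adds safeguards on the step length.
    """
    theta1 = em_map(theta)
    theta2 = em_map(theta1)
    r = theta1 - theta                         # change after the first EM step
    v = (theta2 - theta1) - r                  # curvature of the EM trajectory
    alpha = -np.sqrt(r @ r) / np.sqrt(v @ v)   # negative step length
    theta_acc = theta - 2.0 * alpha * r + alpha**2 * v
    return em_map(theta_acc)                   # stabilizing EM step after the extrapolation

# Hypothetical driver: iterate squarem_step instead of em_map until the change in
# the marginal log-likelihood falls below a tolerance. Each cycle costs three
# EM-map evaluations, which is why the abstract stresses efficient approximations
# of the marginal log-likelihood.
```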

