A New Generalized t Distribution Based on a Distribution Construction Method

Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2413
Author(s):  
Ruijie Guan ◽  
Xu Zhao ◽  
Weihu Cheng ◽  
Yaohua Rong

In this paper, a new generalized t (new Gt) distribution based on a distribution construction approach is proposed and shown to be suitable for fitting data with both high kurtosis and heavy tails. The main innovation of this article consists of four parts. First, the main characteristics and properties of this new distribution are outlined. Second, we derive explicit expressions for the moments of order statistics as well as the corresponding variance–covariance matrix. Third, we focus on parameter estimation for the new Gt distribution and introduce several estimation methods: a modified method of moments (MMOM), maximum likelihood estimation (MLE) via the EM algorithm, a novel iterative algorithm for obtaining the MLE, and improved probability weighted moments (IPWM). Simulation studies indicate that the IPWM estimation generally outperforms both MLE via the EM algorithm and the MMOM. The newly proposed iterative algorithm performs better than the EM algorithm when the sample kurtosis is greater than 2.7. Finally, for the four parameters of the new Gt distribution, a profile maximum likelihood approach using the EM algorithm is developed to deal with the estimation problem and obtain acceptable results.
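The abstract above does not give the paper's EM updates, but the general idea of EM-based MLE for a t-type distribution can be illustrated on the standard location-scale Student-t with fixed degrees of freedom. This is a minimal sketch, not the new Gt procedure from the paper: the latent gamma mixing weights are estimated in the E-step, then used for weighted location/scale updates in the M-step.

```python
import numpy as np

def t_em(x, nu=3.0, n_iter=50):
    """EM for the location-scale Student-t with fixed degrees of
    freedom nu; the latent gamma precision weights are the
    "missing data" (illustrative sketch, not the paper's new Gt EM)."""
    mu, sigma2 = np.median(x), np.var(x)
    for _ in range(n_iter):
        # E-step: expected precision weight for each observation
        w = (nu + 1.0) / (nu + (x - mu) ** 2 / sigma2)
        # M-step: weighted location and scale updates
        mu = np.sum(w * x) / np.sum(w)
        sigma2 = np.mean(w * (x - mu) ** 2)
    return mu, sigma2

rng = np.random.default_rng(0)
# synthetic heavy-tailed sample: location 5, scale 2, df 3
data = 5.0 + 2.0 * rng.standard_t(df=3, size=4000)
mu_hat, s2_hat = t_em(data)
```

Downweighting outlying points via `w` is what makes the t-based fit robust to the heavy tails the abstract refers to.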

1995 ◽  
Vol 12 (5) ◽  
pp. 515-527 ◽  
Author(s):  
Jeanine J. Houwing-Duistermaat ◽  
Lodewijk A. Sandkuijl ◽  
Arthur A. B. Bergen ◽  
Hans C. van Houwelingen

2021 ◽  
Author(s):  
Masahiro Kuroda

Mixture models have become increasingly popular due to their modeling flexibility and are applied to the clustering and classification of heterogeneous data. The EM algorithm is widely used for maximum likelihood estimation of mixture models because it converges stably and is simple to implement. Despite these advantages, the EM algorithm has two main drawbacks: it converges only to a local maximum, and its convergence can be slow. To avoid local convergence, multiple runs from several different initial values are usually used; the algorithm may then require a large number of iterations and a long computation time to find the maximum likelihood estimates. Speeding up the computation of the EM algorithm addresses these problems. We present algorithms that accelerate the convergence of the EM algorithm and apply them to mixture model estimation. Numerical experiments examine the performance of the acceleration algorithms in terms of the number of iterations and computation time.
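The multiple-restart strategy mentioned above can be sketched for a two-component univariate Gaussian mixture: run EM from several random initial values and keep the run with the highest log-likelihood. This is a generic illustration of the standard EM recursion, not code from the paper.

```python
import numpy as np

def norm_pdf(x, mu, s2):
    return np.exp(-(x - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def em_gmm(x, rng, n_iter=100):
    """One EM run for a two-component univariate Gaussian mixture
    from a random start; returns (parameters, log-likelihood)."""
    mu = rng.choice(x, size=2, replace=False)   # random initial means
    s2 = np.full(2, np.var(x))
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component
        dens = pi * np.stack([norm_pdf(x, mu[k], s2[k]) for k in range(2)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        s2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    ll = np.log(dens.sum(axis=1)).sum()  # log-likelihood at the last E-step
    return (pi, mu, s2), ll

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
# multiple random restarts to sidestep local maxima; keep the best run
best_params, best_ll = max((em_gmm(x, rng) for _ in range(5)), key=lambda t: t[1])
```

The restarts make the local-convergence problem visible: individual runs can differ, and only the highest-likelihood run is retained, at the cost of the extra computation the abstract's acceleration methods aim to reduce.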


Psych ◽  
2020 ◽  
Vol 2 (4) ◽  
pp. 209-252
Author(s):  
Marie Beisemann ◽  
Ortrud Wartlick ◽  
Philipp Doebler

The expectation–maximization (EM) algorithm is an important numerical method for maximum likelihood estimation in incomplete data problems. However, convergence of the EM algorithm can be slow, and for this reason, many EM acceleration techniques have been proposed. After a review of acceleration techniques in a unified notation with illustrations, three recently proposed EM acceleration techniques are compared in detail: quasi-Newton methods (QN), “squared” iterative methods (SQUAREM), and parabolic EM (PEM). These acceleration techniques are applied to marginal maximum likelihood estimation with the EM algorithm in one- and two-parameter logistic item response theory (IRT) models for binary data, and their performance is compared. QN and SQUAREM methods accelerate convergence of the EM algorithm for the two-parameter logistic model significantly in high-dimensional data problems. Compared to the standard EM, all three methods reduce the number of iterations, but increase the number of total marginal log-likelihood evaluations per iteration. Efficient approximations of the marginal log-likelihood are hence an important part of implementation.
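Of the three techniques compared above, SQUAREM is the easiest to sketch generically, since it treats the EM update as a black-box fixed-point map F. The following is a minimal single-cycle version (two base EM steps, a Steffensen-type extrapolation, and a stabilizing EM step), demonstrated on a toy EM map where only a mixing weight is unknown; it is an assumption-laden illustration, not the implementation benchmarked in the article.

```python
import numpy as np

def norm_pdf(x, mu, s2):
    return np.exp(-(x - mu) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def squarem_step(theta, F):
    """One SQUAREM cycle for a fixed-point map F: two base EM steps,
    an extrapolated jump, then one more EM step to stabilize it."""
    theta1 = F(theta)
    theta2 = F(theta1)
    r = theta1 - theta           # first EM increment
    v = (theta2 - theta1) - r    # curvature of the EM path
    alpha = -np.linalg.norm(r) / max(np.linalg.norm(v), 1e-12)
    theta_acc = theta - 2.0 * alpha * r + alpha ** 2 * v
    return F(theta_acc)          # stabilizing EM step

# toy EM map: two known components N(-2,1) and N(2,1); only the
# mixing weight pi is unknown, so the EM update is a 1-d fixed point
rng = np.random.default_rng(2)
z = rng.random(2000) < 0.3                       # true weight 0.3
x = np.where(z, rng.normal(-2, 1, 2000), rng.normal(2, 1, 2000))
d1, d2 = norm_pdf(x, -2.0, 1.0), norm_pdf(x, 2.0, 1.0)

def em_map(pi):
    pi = np.clip(pi, 1e-6, 1 - 1e-6)  # keep extrapolated weight in (0,1)
    resp = pi * d1 / (pi * d1 + (1 - pi) * d2)
    return np.array([resp.mean()])    # EM update of the mixing weight

pi = np.array([0.5])
for _ in range(10):
    pi = squarem_step(pi, em_map)
```

Each cycle costs three evaluations of F rather than one, which mirrors the abstract's observation that these methods cut iterations while raising the number of (marginal) likelihood or EM-map evaluations per iteration.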

