Penalized Maximum Likelihood
Recently Published Documents


TOTAL DOCUMENTS: 117 (FIVE YEARS: 20)

H-INDEX: 20 (FIVE YEARS: 0)

2021 · Vol 7 (4) · pp. 776-787
Author(s): Weisan Wu, Xinyu Yang

Skew-Laplace-Normal (SLN) mixture models provide a more flexible framework than normal mixture models for heterogeneous data with asymmetric behavior. However, their likelihood function has undesirable mathematical properties, such as unboundedness and divergence of the skewness parameter, which can mislead statistical inference. In this paper, we propose penalizing the likelihood function to address both problems simultaneously, and we give a detailed proof that the resulting estimators are strongly consistent. We also provide a modified penalized EM-type algorithm to compute the penalized estimators.
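
To make the penalization idea concrete, below is a minimal sketch of a penalized EM iteration for a plain two-component normal mixture, not the paper's Skew-Laplace-Normal family. The variance penalty is a standard choice from the finite-mixture literature that bounds the penalized likelihood; the function name, penalty weight `a`, and toy data are illustrative assumptions.

```python
import numpy as np

def penalized_em(x, K=2, a=1.0, n_iter=200, seed=0):
    """Penalized EM for a K-component normal mixture (illustrative sketch).

    The variance penalty -a * (s2 / var_k + log(var_k / s2)) bounds the
    penalized likelihood and keeps component variances away from zero.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    s2 = np.var(x)                      # data variance anchors the penalty
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, size=K, replace=False)
    var = np.full(K, s2)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                - (x[:, None] - mu) ** 2 / (2 * var))
        logp -= logp.max(axis=1, keepdims=True)
        w = np.exp(logp)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: the penalty yields a closed-form, strictly positive update
        nk = w.sum(axis=0)
        pi = nk / n
        mu = (w * x[:, None]).sum(axis=0) / nk
        S = (w * (x[:, None] - mu) ** 2).sum(axis=0)
        var = (S + 2 * a * s2) / (nk + 2 * a)
    return pi, mu, var

x = np.concatenate([np.random.default_rng(1).normal(0, 1, 300),
                    np.random.default_rng(2).normal(4, 2, 200)])
print(penalized_em(x))
```

Because the variance update adds `2*a*s2` to the numerator and `2*a` to the denominator, no component variance can collapse to zero, which is exactly how the penalty removes the unboundedness the abstract describes.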


2021 · Vol 12
Author(s): Oliver Lüdtke, Esther Ulitzsch, Alexander Robitzsch

With small to modest sample sizes and complex models, maximum likelihood (ML) estimation of confirmatory factor analysis (CFA) models can show serious estimation problems such as non-convergence or parameter estimates outside the admissible parameter space. In this article, we distinguish different Bayesian estimators that can be used to stabilize the parameter estimates of a CFA: the mode of the joint posterior distribution, obtained from penalized maximum likelihood (PML) estimation, and the mean (EAP), median (Med), or mode (MAP) of the marginal posterior distribution, calculated using Markov chain Monte Carlo (MCMC) methods. In two simulation studies, we evaluated the performance of the Bayesian estimators from a frequentist point of view. The results show that the EAP produced more accurate estimates of the latent correlation in many conditions and outperformed the other Bayesian estimators in terms of root mean squared error (RMSE). We also argue that it is often advantageous to choose a parameterization in which the main parameters of interest are bounded, and we suggest the four-parameter beta distribution as a prior distribution for loadings and correlations. Using simulated data, we show that selecting weakly informative four-parameter beta priors can further stabilize parameter estimates, even in cases where the priors are mildly misspecified. Finally, we derive recommendations and propose directions for further research.
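
As a concrete illustration of the bounded-parameterization idea, the sketch below places a four-parameter beta prior on a correlation restricted to [-1, 1] and computes the PML estimate as the mode of likelihood times prior. The Fisher-z normal approximation to the correlation likelihood, the prior shape a = b = 2, and the toy sample values are illustrative assumptions, not the article's settings.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta

def fourparam_beta_logpdf(theta, a=2.0, b=2.0, lo=-1.0, hi=1.0):
    """Log-density of a four-parameter beta prior on [lo, hi].

    a = b = 2 is a weakly informative choice: nearly flat in the interior
    but vanishing at the bounds (illustrative values only).
    """
    return beta.logpdf(theta, a, b, loc=lo, scale=hi - lo)

# PML point estimate = mode of (log-likelihood + log-prior) for a
# correlation; toy data: sample correlation 0.95 from n = 20 cases.
n, r_obs = 20, 0.95

def neg_penalized_loglik(r):
    # Fisher-z normal approximation to the likelihood of a correlation
    z, z_obs = np.arctanh(r), np.arctanh(r_obs)
    loglik = -0.5 * (n - 3) * (z - z_obs) ** 2
    return -(loglik + fourparam_beta_logpdf(r))

res = minimize_scalar(neg_penalized_loglik, bounds=(-0.999, 0.999),
                      method="bounded")
print("PML (posterior-mode) estimate:", res.x)  # shrunk slightly toward 0
```

Because the prior density goes to zero at the bounds, the penalized estimate can never land outside the admissible range, which is the stabilizing effect the article describes.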


Author(s): Liam F. Beiser-McGrath

When separation is a problem in binary dependent variable models, many researchers use Firth's penalized maximum likelihood in order to obtain finite estimates (Firth, 1993; Zorn, 2005; Rainey, 2016). In this paper, I show that this approach can lead to inferences in the opposite direction of the separation when the number of observations is sufficiently large and both the dependent and independent variables are rare events. Because large datasets with rare events are frequently used in political science, such as dyadic data measuring interstate relations, a lack of awareness of this problem may lead to inferential issues. Simulations and an empirical illustration show that using independent "weakly informative" prior distributions centered at zero, for example the Cauchy prior suggested by Gelman et al. (2008), can avoid this issue. More generally, the results caution researchers to be aware of how the choice of prior interacts with the structure of their data when estimating models in the presence of separation.
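
A minimal sketch of the suggested fix: maximizing the logistic log-likelihood plus independent Cauchy log-priors centered at zero (scale 2.5 for slopes and 10 for the intercept, the defaults Gelman et al. (2008) suggest for standardized predictors) yields finite estimates even on completely separated toy data. The data and optimizer choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data with complete separation: y = 1 exactly when x > 0
rng = np.random.default_rng(1)
x = rng.normal(size=50)
X = np.column_stack([np.ones_like(x), x])
y = (x > 0).astype(float)

def neg_log_posterior(beta):
    """Logistic log-likelihood plus independent Cauchy(0, scale) log-priors:
    scale 10 for the intercept, 2.5 for the slope (Gelman et al., 2008)."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    scales = np.array([10.0, 2.5])
    logprior = np.sum(-np.log(np.pi * scales) - np.log1p((beta / scales) ** 2))
    return -(loglik + logprior)

res = minimize(neg_log_posterior, x0=np.zeros(2), method="BFGS")
print("finite estimates despite separation:", res.x)
```

Unpenalized ML would drive the slope to infinity on this data; the heavy-tailed prior keeps the estimate finite without imposing a strong directional belief.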


Genes · 2020 · Vol 11 (11) · pp. 1286
Author(s): Wenlong Ren, Zhikai Liang, Shu He, Jing Xiao

In genome-wide association studies, linear mixed models (LMMs) have been widely used to explore the molecular mechanisms of complex traits. However, typical association approaches suffer from several important drawbacks: estimating variance components in LMMs with large numbers of individuals is computationally slow, and the single-locus model handles complex confounding poorly, causing a loss of statistical power. To address these issues, we propose an efficient two-stage method based on a hybrid of restricted and penalized maximum likelihood, named HRePML. First, we perform restricted maximum likelihood (REML) on a single-locus LMM to remove unrelated markers, using spectral decomposition of the covariance matrix to estimate variance components quickly. Second, we carry out penalized maximum likelihood (PML) on a multi-locus LMM for markers with reasonably large effects. To validate the effectiveness of HRePML, we conducted a series of simulation studies and real-data analyses. Our method consistently had the highest average statistical power compared with the multi-locus mixed model (MLMM), fixed and random model circulating probability unification (FarmCPU), and genome-wide efficient mixed model association (GEMMA). More importantly, HRePML provides more accurate estimates of marker effects. HRePML also identifies 41 previously reported genes associated with developmental traits in Arabidopsis, more than any of the other methods detected.
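
The spectral-decomposition speed-up works by eigendecomposing the kinship matrix once, rotating the model so the covariance becomes diagonal, and profiling out the genetic variance, which leaves a one-dimensional search over the variance ratio. The sketch below shows this EMMA-style trick for ML estimation (REML adds a fixed-effects correction term); it illustrates the idea under assumed toy data, not the HRePML implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_variance_ratio(y, X, K):
    """One-dimensional ML search for the LMM y ~ N(Xb, sg2*K + se2*I).

    Eigendecomposing K once makes every likelihood evaluation O(n) work
    over the ratio delta = se2 / sg2 (EMMA-style sketch).
    """
    n = len(y)
    d, U = np.linalg.eigh(K)          # K = U diag(d) U^T, computed once
    yt, Xt = U.T @ y, U.T @ X

    def neg_loglik(log_delta):
        delta = np.exp(log_delta)
        v = d + delta                 # eigenvalues of K + delta*I
        XtV = Xt / v[:, None]         # GLS weighting in the rotated model
        b = np.linalg.solve(Xt.T @ XtV, XtV.T @ yt)
        r = yt - Xt @ b
        sg2 = np.mean(r ** 2 / v)     # profile ML estimate of sigma_g^2
        return 0.5 * (n * np.log(2 * np.pi * sg2) + np.sum(np.log(v)) + n)

    res = minimize_scalar(neg_loglik, bounds=(-10, 10), method="bounded")
    return np.exp(res.x)

# usage with a random marker-based kinship matrix (toy data)
rng = np.random.default_rng(0)
G = rng.normal(size=(100, 500))
K = G @ G.T / 500
y = rng.normal(size=100)
X = np.ones((100, 1))
print("estimated se2/sg2 ratio:", estimate_variance_ratio(y, X, K))
```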


2020
Author(s): Oliver Lüdtke, Esther Ulitzsch, Alexander Robitzsch

With small to modest sample sizes and complex models, maximum likelihood (ML) estimation of confirmatory factor analysis (CFA) models can show serious estimation problems such as non-convergence or parameter estimates outside the admissible parameter space. In the present article, we discuss two Bayesian estimation methods for stabilizing the parameter estimates of a CFA: penalized maximum likelihood (PML) estimation and Markov chain Monte Carlo (MCMC) methods. We clarify that the two approaches use different Bayesian point estimates, namely the mode of the joint posterior distribution (PML) versus the mean (EAP) or mode (MAP) of the marginal posterior distribution (MCMC), and we discuss under which conditions they produce different results. In a simulation study, we show that the MCMC method clearly outperforms PML and that these performance gains can be explained by the fact that MCMC uses the EAP as a point estimate. We also argue that it is often advantageous to choose a parameterization in which the main parameters of interest are bounded, and we suggest the four-parameter beta distribution as a prior distribution for loadings and correlations. Using simulated data, we show that selecting weakly informative four-parameter beta priors can further stabilize parameter estimates, even in cases where the priors are mildly misspecified. Finally, we derive recommendations and propose directions for further research.
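
The practical difference between these point estimates is easy to see from sampler output: given posterior draws of a bounded parameter, the EAP is the mean of the draws, the median is their median, and the MAP is the mode of their density. The sketch below uses simulated skewed draws as a stand-in for real MCMC output; the distribution and its parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated, skewed posterior draws of a correlation on [-1, 1],
# standing in for real MCMC sampler output.
draws = np.random.default_rng(2).beta(8, 3, size=5000) * 2 - 1

eap = draws.mean()                                  # posterior mean (EAP)
med = np.median(draws)                              # posterior median (Med)
grid = np.linspace(-1, 1, 2001)
map_ = grid[np.argmax(gaussian_kde(draws)(grid))]   # mode via kernel density

print(f"EAP={eap:.3f}  Med={med:.3f}  MAP={map_:.3f}")  # differ when skewed
```

When the marginal posterior is skewed, as it typically is for bounded parameters near their limits, the three estimates diverge, which is why the choice of point estimate matters for frequentist performance.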

