Mixture Models
Recently Published Documents

TOTAL DOCUMENTS: 3187 (five years: 586)
H-INDEX: 94 (five years: 11)

2021
Author(s): Tiago Dias Domingues, Helena Mourino, Nuno Sepulveda

In this work, we will apply mixture models based on distributions from the scale mixtures of skew-normal (SMSN) family to antibody data against four SARS-CoV-2 antigens. Furthermore, since the true infection status of individuals is known a priori, performance measures such as sensitivity, specificity, and accuracy will be calculated for the proposed cutoff-point estimation methods. The results of a simulation study will also be presented.
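
The cutoff-point idea generalizes beyond the SMSN family. Below is a minimal sketch, assuming a two-component Gaussian mixture as a stand-in for an SMSN fit (scikit-learn has no SMSN implementation) and simulated antibody data with known infection status; all names and data are illustrative, not the article's.

```python
# Mixture-based cutoff estimation: fit a two-component mixture to antibody
# levels, take the cutoff where the "positive" component's posterior
# probability first exceeds 0.5, then score against known status.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated log-antibody levels: seronegative vs. seropositive individuals.
neg = rng.normal(loc=0.0, scale=0.5, size=300)
pos = rng.normal(loc=2.0, scale=0.8, size=200)
x = np.concatenate([neg, pos]).reshape(-1, 1)
truth = np.concatenate([np.zeros(300), np.ones(200)])  # known infection status

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
pos_comp = int(np.argmax(gm.means_.ravel()))  # component with larger mean

# Cutoff: first grid point where the posterior of the positive component
# reaches 0.5.
grid = np.linspace(x.min(), x.max(), 10_000).reshape(-1, 1)
post = gm.predict_proba(grid)[:, pos_comp]
cutoff = grid[np.argmax(post >= 0.5)].item()

pred = (x.ravel() >= cutoff).astype(int)
sensitivity = np.mean(pred[truth == 1] == 1)
specificity = np.mean(pred[truth == 0] == 0)
accuracy = np.mean(pred == truth)
print(f"cutoff={cutoff:.3f}  sens={sensitivity:.3f}  "
      f"spec={specificity:.3f}  acc={accuracy:.3f}")
```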


2021
pp. 0193841X2110656
Author(s): Zachary K. Collier, Haobai Zhang, Bridgette Johnson

Background: Finite mixture models cluster individuals into latent subgroups based on observed traits. However, inaccurate enumeration of the clusters can have lasting implications for policy decisions and allocations of resources. Applied and methodological researchers accept no single fit statistic as obviously best, and different measures can suggest different numbers of latent clusters.
Objectives: The purpose of this article is to evaluate and compare different cluster enumeration techniques.
Research Design: Study I demonstrates how recently proposed resampling methods yield no precise number of clusters on which all fit statistics agree. As an alternative, we recommend the pre-processing method of Study II. Both studies use nationally representative data on working memory, cognitive flexibility, and inhibitory control.
Conclusions: The data-plus-priors method shows promise to resolve inconsistencies among fit measures and to help applied researchers who use finite mixture models in the future.
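
To see how fit statistics can disagree on the number of clusters, here is a hedged sketch comparing AIC and BIC across candidate cluster counts on hypothetical overlapping data. It is not the article's procedure; it only illustrates the enumeration problem the article addresses.

```python
# Cluster enumeration with information criteria: fit mixtures with
# k = 1..6 components and compare AIC and BIC. With overlapping clusters,
# the two criteria often point to different k.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Three overlapping clusters in two dimensions (illustrative data).
X = np.vstack([
    rng.normal([0.0, 0.0], 1.0, size=(150, 2)),
    rng.normal([2.5, 2.5], 1.0, size=(150, 2)),
    rng.normal([0.0, 4.0], 1.0, size=(150, 2)),
])

for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=1).fit(X)
    print(f"k={k}  AIC={gm.aic(X):10.1f}  BIC={gm.bic(X):10.1f}")
# AIC tends to favor more components than BIC; when the criteria disagree,
# choosing k becomes a judgment call, which is the gap the article's
# data-plus-priors pre-processing method targets.
```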


2021
Vol 32 (1)
Author(s): Lena Sembach, Jan Pablo Burgard, Volker Schulz

Gaussian mixture models are a powerful tool in data science and statistics, used mainly for clustering and density approximation. In practice, the task of estimating the model parameters is often solved by the expectation-maximization (EM) algorithm, which has the benefits of simplicity and low per-iteration cost. However, EM converges slowly if there is a large share of hidden information or if the clusters overlap. Recent advances in manifold optimization for Gaussian mixture models have therefore gained increasing interest. We introduce an explicit formula for the Riemannian Hessian of Gaussian mixture models and, building on it, propose a new Riemannian Newton trust-region method that outperforms current approaches in both runtime and number of iterations. We apply our method to clustering problems and density approximation tasks; compared to existing methods, it is especially powerful for data with a large share of hidden information.
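
For context, these are the fixed-point EM updates that the proposed second-order manifold method improves upon. A minimal NumPy sketch for a one-dimensional two-component mixture; the toy data and names are illustrative and this is not the article's method.

```python
# Plain EM for a 1-D Gaussian mixture: alternate E-step (responsibilities)
# and M-step (closed-form weight/mean/variance updates).
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1.0, 400), rng.normal(3, 1.5, 600)])

# Initial weights, means, variances for K = 2 components.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(200):
    # E-step: responsibilities r[i, k] ∝ w_k * N(x_i | mu_k, var_k).
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print("weights:", w.round(3), "means:", mu.round(3), "variances:", var.round(3))
```

Each iteration costs O(nK), which is why EM is cheap per step; the slow part is the number of such steps when clusters overlap, which is exactly where a second-order method can pay off.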


2021
Author(s): Ana Dodik, Marios Papas, Cengiz Öztireli, Thomas Müller

2021
Author(s): Masahiro Kuroda

Mixture models have become increasingly popular due to their modeling flexibility and are applied to the clustering and classification of heterogeneous data. The EM algorithm is widely used for the maximum likelihood estimation of mixture models because it is stable in convergence and simple to implement. Despite these advantages, its main drawbacks are local convergence and a slow convergence rate. To avoid local convergence, the EM algorithm is usually run multiple times from several different initial values; the algorithm may then take a large number of iterations and a long computation time to find the maximum likelihood estimates. Speeding up the computation of the EM algorithm addresses these problems. We present algorithms that accelerate the convergence of the EM algorithm and apply them to mixture model estimation. Numerical experiments examine the performance of the acceleration algorithms in terms of the number of iterations and computation time.
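
As one concrete instance of EM acceleration, the sketch below applies componentwise Aitken Δ² extrapolation to three successive EM iterates. The chapter's own acceleration algorithms may differ in detail; this is only a minimal illustration of the general idea on a two-component 1-D mixture with unit variances, and all names and data are assumptions.

```python
# Aitken Δ² acceleration of the EM parameter sequence: from three successive
# iterates t0, t1, t2, extrapolate toward the fixed point componentwise via
# t* ≈ t0 - (t1 - t0)^2 / (t2 - 2 t1 + t0).
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])

def em_step(theta):
    """One EM update of (weight1, mean1, mean2); variances fixed at 1."""
    w1, m1, m2 = theta
    d1 = w1 * np.exp(-0.5 * (x - m1) ** 2)
    d2 = (1 - w1) * np.exp(-0.5 * (x - m2) ** 2)
    r1 = d1 / (d1 + d2)  # responsibilities of component 1
    return np.array([r1.mean(),
                     (r1 * x).sum() / r1.sum(),
                     ((1 - r1) * x).sum() / (1 - r1).sum()])

# Three successive EM iterates from a rough start ...
t0 = np.array([0.5, -0.5, 0.5])
t1 = em_step(t0)
t2 = em_step(t1)
# ... give an Aitken-extrapolated estimate of the fixed point
# (falling back to t2 where the denominator is numerically zero).
denom = t2 - 2 * t1 + t0
accel = np.where(np.abs(denom) > 1e-12, t0 - (t1 - t0) ** 2 / denom, t2)
print("two EM steps:        ", t2.round(4))
print("Aitken extrapolation:", accel.round(4))
```

In practice such extrapolation is interleaved with ordinary EM steps and restarted from the extrapolated point, trading a little bookkeeping per iteration for fewer iterations overall.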

