Robust $L_2E$ Parameter Estimation of Gaussian Mixture Models: Comparison with Expectation Maximization

Author(s): Umashanger Thayasivam, Chinthaka Kuruwita, Ravi P. Ramachandran
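
For context: the $L_2E$ approach referred to in the title fits parameters by minimizing an estimate of the integrated squared error between the model density $f(\cdot\mid\theta)$ and the true data density. A standard formulation of the criterion (stated here as background, not quoted from the paper) is

$$\hat{\theta}_{L_2E} = \arg\min_{\theta}\left[\int f(x\mid\theta)^2\,dx \;-\; \frac{2}{n}\sum_{i=1}^{n} f(x_i\mid\theta)\right],$$

which, up to a constant, is an unbiased estimate of $\int \{f(x\mid\theta) - g(x)\}^2\,dx$ for data $x_1,\dots,x_n$ drawn from the true density $g$. For Gaussian mixtures the first integral has a closed form, which is what makes the criterion practical and gives it its robustness relative to maximum likelihood.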
2011 · Vol 23 (6) · pp. 1605-1622
Author(s): Lingyan Ruan, Ming Yuan, Hui Zou

Finite Gaussian mixture models are widely used in statistics thanks to their great flexibility. However, parameter estimation for Gaussian mixture models with high dimensionality can be challenging because of the large number of parameters that need to be estimated. In this letter, we propose a penalized likelihood estimator to address this difficulty. The $\ell_1$-type penalty we impose on the inverse covariance matrices encourages sparsity in their entries and therefore helps to reduce the effective dimensionality of the problem. We show that the proposed estimate can be efficiently computed using an expectation-maximization algorithm. To illustrate the practical merits of the proposed method, we consider its applications in model-based clustering and mixture discriminant analysis. Numerical experiments with both simulated and real data show that the new method is a valuable tool for high-dimensional data analysis.
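
The penalized EM idea in this abstract can be sketched compactly: a standard E-step, and an M-step in which each component covariance is re-estimated through an $\ell_1$-penalized inverse-covariance (graphical lasso) problem. The sketch below is an illustration of that scheme, not the authors' implementation; the penalty weight `alpha`, the initialization, and the use of scikit-learn's `graphical_lasso` solver are all assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.covariance import graphical_lasso

def penalized_gmm_em(X, K, alpha=0.1, n_iter=50, seed=0):
    """EM for a K-component GMM with an l1 penalty on each precision matrix."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                       # mixing weights
    mu = X[rng.choice(n, K, replace=False)]        # means from random points
    Sigma = np.array([np.cov(X.T) + 1e-3 * np.eye(d) for _ in range(K)])
    for _ in range(n_iter):
        # E-step: posterior responsibilities r[i, k] (log-space for stability)
        log_r = np.stack([np.log(pi[k]) +
                          multivariate_normal.logpdf(X, mu[k], Sigma[k])
                          for k in range(K)], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted updates; covariances via the graphical lasso,
        # which imposes the l1 penalty on the inverse covariance entries
        nk = r.sum(axis=0)
        pi = nk / n
        for k in range(K):
            mu[k] = r[:, k] @ X / nk[k]
            diff = X - mu[k]
            emp_cov = (r[:, k, None] * diff).T @ diff / nk[k]
            emp_cov += 1e-6 * np.eye(d)            # jitter for stability
            Sigma[k], _ = graphical_lasso(emp_cov, alpha=alpha)
    return pi, mu, Sigma
```

Larger `alpha` drives more entries of each precision matrix to zero, which is how the effective number of parameters is reduced in high dimensions.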


2003 · Vol 15 (2) · pp. 469-485
Author(s): J. J. Verbeek, N. Vlassis, B. Kröse

This article concerns the greedy learning of Gaussian mixtures. In the greedy approach, mixture components are inserted into the mixture one after the other. We propose a heuristic for searching for the optimal component to insert. In a randomized manner, a set of candidate new components is generated; for each of these candidates we find the locally optimal parameters, and the best resulting component is inserted into the existing mixture. The resulting algorithm resolves the sensitivity to initialization of state-of-the-art methods, like expectation maximization, and has running time linear in the number of data points and quadratic in the (final) number of mixture components. Due to its greedy nature, the algorithm can be particularly useful when the optimal number of mixture components is unknown. Experimental results comparing the proposed algorithm to other methods on density estimation and texture segmentation are provided.
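
A minimal sketch of the greedy insertion loop described above: grow the mixture one component at a time, score a handful of randomized candidates, and keep the one that most improves the log-likelihood. The candidate generator (a random data point as the mean, a shrunken global covariance), the fixed insertion weight `w0`, and the omission of the interleaved EM refinement steps are simplifying assumptions, not the paper's exact heuristic.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_loglik(X, pi, mu, Sigma):
    """Log-likelihood of data X under the mixture (pi, mu, Sigma)."""
    dens = sum(p * multivariate_normal.pdf(X, m, S)
               for p, m, S in zip(pi, mu, Sigma))
    return np.log(dens).sum()

def greedy_gmm(X, K_max, n_candidates=10, w0=0.2, seed=0):
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    rng = np.random.default_rng(seed)
    base_cov = np.cov(X.T) + 1e-6 * np.eye(d)
    # start from the single-component (global Gaussian) mixture
    pi, mu, Sigma = [1.0], [X.mean(axis=0)], [base_cov]
    while len(pi) < K_max:
        best = None
        for _ in range(n_candidates):
            cand_mu = X[rng.integers(n)]           # randomized candidate mean
            cand_S = base_cov / len(pi) ** 2       # shrink as mixture grows
            trial_pi = [p * (1 - w0) for p in pi] + [w0]
            ll = mixture_loglik(X, trial_pi, mu + [cand_mu], Sigma + [cand_S])
            if best is None or ll > best[0]:
                best = (ll, cand_mu, cand_S)
        # insert the winning candidate with weight w0, renormalizing the rest
        pi = [p * (1 - w0) for p in pi] + [w0]
        mu.append(best[1])
        Sigma.append(best[2])
        # (the paper interleaves EM refinement after each insertion; omitted)
    return np.array(pi), np.array(mu), np.array(Sigma)
```

Because each of the `K_max` insertions scores a constant number of candidates against all `n` points, the cost of this loop is consistent with the linear-in-`n`, quadratic-in-`K` running time claimed in the abstract.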


2016 · Vol 24 (2) · pp. 293-317
Author(s): Thiago Ferreira Covões, Eduardo Raul Hruschka, Joydeep Ghosh

This paper describes the evolutionary split and merge for expectation maximization (ESM-EM) algorithm and eight of its variants, which are based on the use of split and merge operations to evolve Gaussian mixture models. Asymptotic time complexity analysis shows that the proposed algorithms are competitive with the state-of-the-art genetic-based expectation maximization (GA-EM) algorithm. Experiments performed on 35 data sets showed that ESM-EM can be computationally more efficient than the widely used approach of multiple EM runs (over different numbers of components and initializations). Moreover, a variant of ESM-EM that is free of critical parameters was shown to provide results competitive with those of GA-EM, even when the GA-EM parameters were fine-tuned a priori.
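
The split and merge operators at the heart of ESM-EM-style algorithms can be illustrated directly on GMM parameters. Splitting along the principal eigenvector and moment-matched merging, as sketched below, are common choices and are assumptions here rather than the paper's exact operators; in the full algorithm, each candidate mixture produced this way would be refined by a few EM steps and scored by a model-selection criterion.

```python
import numpy as np

def split_component(pi, mu, Sigma, k):
    """Split component k in two along its principal axis.

    pi: (K,) weights, mu: (K, d) means, Sigma: (K, d, d) covariances.
    """
    w, V = np.linalg.eigh(Sigma[k])                # ascending eigenvalues
    offset = 0.5 * np.sqrt(w[-1]) * V[:, -1]       # along the principal axis
    mu_a, mu_b = mu[k] + offset, mu[k] - offset
    S_child = Sigma[k] - np.outer(offset, offset)  # reduce principal variance
    pi2 = np.concatenate([np.delete(pi, k), [pi[k] / 2, pi[k] / 2]])
    mu2 = np.vstack([np.delete(mu, k, axis=0), mu_a, mu_b])
    Sigma2 = np.concatenate([np.delete(Sigma, k, axis=0), [S_child, S_child]])
    return pi2, mu2, Sigma2

def merge_components(pi, mu, Sigma, i, j):
    """Merge components i and j into one by moment matching."""
    w = pi[i] + pi[j]
    m = (pi[i] * mu[i] + pi[j] * mu[j]) / w
    S = (pi[i] * (Sigma[i] + np.outer(mu[i] - m, mu[i] - m)) +
         pi[j] * (Sigma[j] + np.outer(mu[j] - m, mu[j] - m))) / w
    keep = [t for t in range(len(pi)) if t not in (i, j)]
    return (np.concatenate([pi[keep], [w]]),
            np.vstack([mu[keep], m]),
            np.concatenate([Sigma[keep], [S]]))
```

Because split and merge change the number of components by one in each direction, an evolutionary loop built on them can search over model orders directly, which is what lets such methods replace multiple independent EM runs over different numbers of components.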

