Bayesian k-Means as a “Maximization-Expectation” Algorithm

2009 ◽  
Vol 21 (4) ◽  
pp. 1145-1172 ◽  
Author(s):  
Kenichi Kurihara ◽  
Max Welling

We introduce a new class of “maximization-expectation” (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.
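The core idea above, maximizing over hard cluster assignments while marginalizing over the cluster parameters, can be sketched for spherical Gaussian clusters with a conjugate zero-mean prior on the cluster means, where the marginal likelihood of each cluster is available in closed form. This is an illustrative reconstruction, not the authors' implementation (which uses kd-trees and conga lines for speed); the values of `sigma2` and `tau2` and the greedy single-point reassignment loop are assumptions.

```python
import numpy as np

def cluster_log_marginal(X, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of points X (n x d) forming one cluster,
    under x_i ~ N(mu, sigma2*I) with conjugate prior mu ~ N(0, tau2*I),
    the mean mu integrated out analytically (dimensions are independent)."""
    n, d = X.shape
    A = n / sigma2 + 1.0 / tau2           # posterior precision per dimension
    b = X.sum(axis=0) / sigma2            # scaled sufficient statistic, d-vector
    return (-0.5 * n * d * np.log(2 * np.pi * sigma2)
            - 0.5 * d * np.log(tau2 * A)
            - (X ** 2).sum() / (2 * sigma2)
            + (b ** 2).sum() / (2 * A))

def me_kmeans(X, K, iters=20, seed=0):
    """Maximization-expectation sketch: coordinate ascent on hard labels z,
    scoring each candidate assignment by the sum of per-cluster marginals."""
    rng = np.random.default_rng(seed)
    z = rng.integers(K, size=len(X))
    for _ in range(iters):
        changed = False
        for i in range(len(X)):
            best_k, best_score = z[i], -np.inf
            for k in range(K):
                z_try = z.copy()
                z_try[i] = k
                score = sum(cluster_log_marginal(X[z_try == j])
                            for j in range(K) if (z_try == j).any())
                if score > best_score:
                    best_k, best_score = k, score
            if best_k != z[i]:
                z[i], changed = best_k, True
        if not changed:
            break
    return z
```

The naive loop rescores every cluster per candidate move, which is O(N·K) marginal evaluations per sweep; the data structures discussed in the abstract exist precisely to avoid this cost.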

2020 ◽  
Vol 39 (2) ◽  
pp. 464-471
Author(s):  
J.A. Adeyiga ◽  
S.O. Olabiyisi ◽  
E.O. Omidiora

Several criminal profiling systems have been developed to assist law enforcement agencies in solving crimes, but the techniques employed in most of these systems lack the ability to cluster criminals based on their behavioral characteristics. This paper reviews different clustering techniques used in criminal profiling and then selects one soft (fuzzy) clustering algorithm (Expectation Maximization) and two hard clustering algorithms (K-means and Hierarchical). The selected algorithms were implemented using the WEKA software package and tested on real-life data to produce "profiles" of criminal activity and criminal behavior. Their performance was evaluated using cluster accuracy and time complexity. The results show that the Expectation Maximization algorithm achieved 90.5% cluster accuracy in 8.5 s, while K-means achieved 62.6% in 0.09 s and Hierarchical clustering 51.9% in 0.11 s. In conclusion, soft clustering algorithms perform better than hard clustering algorithms in analyzing criminal data. Keywords: Clustering Algorithm, Profiling, Crime, Membership value
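The abstract reports cluster accuracy without giving a formula; one standard way to score an unsupervised clustering against ground-truth classes is purity, where each cluster is credited with its majority class. A minimal NumPy sketch, assuming purity is an acceptable proxy for the paper's "cluster accuracy" (the abstract does not specify its exact metric):

```python
import numpy as np

def cluster_purity(true_labels, cluster_ids):
    """Cluster accuracy as purity: each cluster votes for its majority
    ground-truth class; the score is the fraction of points so matched."""
    true_labels = np.asarray(true_labels)
    cluster_ids = np.asarray(cluster_ids)
    matched = 0
    for c in np.unique(cluster_ids):
        members = true_labels[cluster_ids == c]
        matched += np.bincount(members).max()   # size of the majority class
    return matched / len(true_labels)
```

Purity is trivially maximized by putting every point in its own cluster, so it is only meaningful when the number of clusters is fixed, as it is in the comparison above.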


2019 ◽  
Author(s):  
Daniel Tward ◽  
Michael Miller

Abstract. We examine the problem of mapping dense 3D atlases onto censored, sparsely sampled 2D target sections at micron and meso scales. We introduce a new class of large deformation diffeomorphic metric mapping (LDDMM) algorithms that generate dense atlas correspondences onto sparse 2D samples by introducing a field of hidden variables, which must be estimated, representing a large class of target-image uncertainties, including (i) unknown parameters representing cross-stain contrasts, (ii) censoring of tissue due to localized measurements of target subvolumes, and (iii) sparse sampling of target tissue sections. To predict the hidden fields we introduce a generalized expectation-maximization (EM) algorithm in which the E-step calculates the conditional mean of the hidden variates while simultaneously computing the diffeomorphic correspondences between atlas and target coordinate systems. The algorithm is run to a fixed point, guaranteeing that the estimators satisfy the necessary maximizer conditions when interpreted as likelihood estimators. The dense mapping is an injective correspondence to the sparse targets, implying that all 3D variation is carried on the atlas side, with only 2D manipulations on the target side.
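Two of the hidden-variable types above, unknown cross-stain contrast and censored tissue, can be illustrated with a toy generalized-EM loop on intensities alone, leaving out the diffeomorphic registration entirely: the E-step fills censored target pixels with their conditional mean under the current contrast model, and the M-step re-estimates the contrast by least squares. The affine contrast model and all names here are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def em_contrast_censored(atlas, target, observed, iters=50):
    """Toy generalized-EM sketch: estimate an unknown affine contrast
    (a, b) mapping atlas intensities to a target, where some target
    pixels are censored (observed == False).
    E-step: replace censored target values by their conditional mean
            under the current model, a * atlas + b.
    M-step: closed-form least-squares update of (a, b) on the
            completed target."""
    a, b = 1.0, 0.0
    design = np.column_stack([atlas, np.ones_like(atlas)])
    for _ in range(iters):
        # E-step: conditional mean of the hidden (censored) intensities.
        J = np.where(observed, target, a * atlas + b)
        # M-step: affine regression of the completed target on the atlas.
        (a, b), *_ = np.linalg.lstsq(design, J, rcond=None)
    return a, b
```

Because the censored pixels are filled by the model itself, they contribute no residual at the current parameters, and the iteration contracts toward the least-squares fit on the observed pixels alone; this is the standard missing-data behavior of EM.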


2020 ◽  
Vol 57 (4) ◽  
pp. 1260-1275
Author(s):  
Celeste R. Pavithra ◽  
T. G. Deepak

Abstract. We introduce a multivariate class of distributions with support I, a k-orthotope in $[0,\infty)^{k}$, which is dense in the set of all k-dimensional distributions with support I. We call this new class ‘multivariate finite-support phase-type distributions’ (MFSPH). Though we generally define MFSPH distributions on any finite k-orthotope in $[0,\infty)^{k}$, here we mainly deal with MFSPH distributions with support $[0,1)^{k}$. The distribution function of an MFSPH variate is computed by using that of a variate in the MPH$^{*}$ class, the multivariate class of distributions introduced by Kulkarni (1989). The marginal distributions of MFSPH variates are found as FSPH distributions, the class studied by Ramaswami and Viswanath (2014). Some properties, including the mixture property, of MFSPH distributions are established. Estimates of the parameters of a particular class of bivariate finite-support phase-type distributions are found by using the expectation-maximization algorithm. Simulated samples are used to demonstrate how this class could be used as approximations for bivariate finite-support distributions.
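The construction above builds MFSPH distribution functions from those of MPH$^{*}$ variates; as background, the standard univariate phase-type distribution function $F(x) = 1 - \alpha e^{Tx}\mathbf{1}$, with initial row vector $\alpha$ and sub-generator $T$, can be sketched as follows. The truncated-Taylor matrix exponential is a simplification chosen to keep the sketch dependency-free (it is adequate only for small, well-scaled sub-generators), and the sketch does not implement the paper's finite-support construction itself.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via truncated Taylor series; fine for the
    small, well-scaled sub-generators used in this sketch."""
    result = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

def ph_cdf(x, alpha, T):
    """Phase-type CDF: F(x) = 1 - alpha @ expm(T*x) @ 1, the probability
    that the underlying Markov chain has been absorbed by time x."""
    return 1.0 - alpha @ expm(T * x) @ np.ones(len(T))
```

For example, a single phase with rate 2 recovers the exponential CDF $1 - e^{-2x}$, and the two-phase Erlang sub-generator recovers $1 - e^{-x}(1 + x)$.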


2005 ◽  
Vol 25 (1_suppl) ◽  
pp. S678-S678
Author(s):  
Yasuhiro Akazawa ◽  
Yasuhiro Katsura ◽  
Ryohei Matsuura ◽  
Piao Rishu ◽  
Ansar M D Ashik ◽  
...  
