An Expectation-Maximization Algorithm for Blind Separation of Noisy Mixtures Using Gaussian Mixture Model

2016 ◽  
Vol 36 (7) ◽  
pp. 2697-2726 ◽  
Author(s):  
Fanglin Gu ◽  
Hang Zhang ◽  
Wenwu Wang ◽  
Shan Wang


2021 ◽  
Vol 87 (9) ◽  
pp. 615-630
Author(s):  
Longjie Ye ◽  
Ka Zhang ◽  
Wen Xiao ◽  
Yehua Sheng ◽  
Dong Su ◽  
...  

This paper proposes a ground filtering method based on a Gaussian mixture model with hierarchical curvature constraints. Firstly, the thin plate spline function is iteratively applied to interpolate the reference surface. Secondly, a gradually changing grid size and curvature threshold are used to construct the hierarchical constraints. Finally, an adaptive height-difference classifier based on the Gaussian mixture model is proposed. Using the latent variables obtained by the expectation-maximization algorithm, the posterior probability of each point is computed, so that ground and object points can be labelled separately according to the calculated probabilities. Fifteen data samples provided by the International Society for Photogrammetry and Remote Sensing are used to verify the proposed method, which is also compared with eight classical filtering algorithms. Experimental results demonstrate that the average total error and average Cohen's kappa coefficient of the proposed method are 6.91% and 80.9%, respectively. In general, the method performs better in areas with terrain discontinuities and bridges.
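As a rough illustration of the classification step described above, the sketch below fits a two-component Gaussian mixture to hypothetical height differences and labels each point by its posterior probability. The data, the assignment of the low-mean component to "ground", and the 0.5 threshold are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): classify points as ground
# or object from their height differences using a two-component Gaussian
# mixture fitted by EM. Data, component roles, and the 0.5 threshold are
# illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical height differences to the interpolated reference surface:
# ground points cluster near zero, object points have larger residuals.
ground = rng.normal(0.0, 0.1, size=500)
objects = rng.normal(2.0, 0.8, size=200)
dh = np.concatenate([ground, objects]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(dh)
post = gmm.predict_proba(dh)              # EM posterior (responsibility) per point

ground_comp = int(np.argmin(gmm.means_))  # assume the low-mean component is ground
is_ground = post[:, ground_comp] > 0.5    # label by posterior probability
print(f"labelled {is_ground.sum()} of {len(dh)} points as ground")
```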


2005 ◽  
Vol 128 (3) ◽  
pp. 479-483
Author(s):  
Hani Hamdan ◽  
Gérard Govaert

In this paper, we present a new and original mixture model approach for acoustic emission (AE) data clustering. AE techniques have been used in a variety of applications in industrial plants. These techniques can provide highly sophisticated monitoring tests and can generally be carried out with the plant/pressure equipment operating under several conditions. Since the AE clusters may present several constraints (different proportions, volumes, orientations, and shapes), we propose to base the AE cluster analysis on Gaussian mixture models, which are a powerful approach in such situations. Furthermore, the diagonal Gaussian mixture model seems well adapted to the detection and monitoring of defect classes, since the welds of cylindrical pressure equipment are elongated horizontally and vertically (cluster shapes elongated along the axes). The EM (Expectation-Maximization) algorithm applied to a diagonal Gaussian mixture model provides a satisfactory solution, but the real-time constraints imposed by our problem make its application impossible if the number of points becomes too large. The solution that we propose is to use the CEM (Classification Expectation-Maximization) algorithm, which converges faster and generates comparable solutions in terms of the resulting partition. The practical results on real data are very satisfactory from the experts' point of view.
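To make the EM/CEM distinction concrete, here is a hedged sketch (assumed variable names and initialization, not the authors' code) of one iteration of each algorithm for a diagonal Gaussian mixture. The only structural difference is the C-step, which hardens the responsibilities into a partition before the M-step.

```python
# Minimal sketch: one iteration of EM vs. CEM for a diagonal Gaussian mixture.
# CEM replaces the soft responsibilities with a hard one-hot partition before
# the M-step, which is what makes it converge faster in practice.
import numpy as np

def log_gauss_diag(X, mean, var):
    """Log-density of a diagonal Gaussian at each row of X."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var, axis=1)

def e_step(X, weights, means, variances):
    """Soft responsibilities r[i, k] = P(component k | x_i)."""
    log_r = np.stack([np.log(w) + log_gauss_diag(X, m, v)
                      for w, m, v in zip(weights, means, variances)], axis=1)
    log_r -= log_r.max(axis=1, keepdims=True)      # numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

def m_step(X, r, eps=1e-6):
    """Update weights, means, and diagonal variances from (soft or hard) assignments."""
    n_k = r.sum(axis=0) + eps
    weights = n_k / len(X)
    means = (r.T @ X) / n_k[:, None]
    variances = np.stack([(r[:, k, None] * (X - means[k]) ** 2).sum(axis=0) / n_k[k]
                          for k in range(r.shape[1])]) + eps
    return weights, means, variances

def em_iteration(X, weights, means, variances):
    """Classical EM: the M-step uses the soft responsibilities directly."""
    return m_step(X, e_step(X, weights, means, variances))

def cem_iteration(X, weights, means, variances):
    """CEM: E-step, then hard classification (C-step), then M-step on the partition."""
    r = e_step(X, weights, means, variances)
    hard = np.zeros_like(r)
    hard[np.arange(len(X)), r.argmax(axis=1)] = 1.0   # C-step: one-hot assignment
    return m_step(X, hard)
```

The hard assignment means each M-step update only sums over the points currently classified into a component, which is cheaper per iteration and typically reaches a stable partition in fewer iterations than EM, at the cost of a slightly coarser likelihood optimum.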


Author(s):  
Gustav Zickert ◽  
Can Evren Yarman

We propose a greedy variational method for decomposing a non-negative multivariate signal as a weighted sum of Gaussians, which, borrowing the terminology from statistics, we refer to as a Gaussian mixture model. Notably, our method has the following features: (1) It accepts multivariate signals, i.e., sampled multivariate functions, histograms, time series, images, etc., as input. (2) The method can handle general (i.e., ellipsoidal) Gaussians. (3) No prior assumption on the number of mixture components is needed. To the best of our knowledge, no previous method for Gaussian mixture model decomposition simultaneously enjoys all these features. We also prove an upper bound, which cannot be improved by a global constant, for the distance from any mode of a Gaussian mixture model to the set of corresponding means. For mixtures of spherical Gaussians with common variance $\sigma^2$, the bound takes the simple form $\sqrt{n}\,\sigma$. We evaluate our method on one- and two-dimensional signals. Finally, we discuss the relation between clustering and signal decomposition, and compare our method to the baseline expectation-maximization algorithm.
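As a purely illustrative numerical check of the stated bound (an assumption-laden sketch, not code or data from the paper), the snippet below builds a small mixture of spherical Gaussians with common variance $\sigma^2$, locates a mode by gradient ascent on the density, and verifies that its distance to the nearest component mean stays below $\sqrt{n}\,\sigma$. The means, weights, and starting point are arbitrary choices.

```python
# Minimal numerical illustration: for a mixture of spherical Gaussians with
# common variance sigma^2 in R^n, a mode found by gradient ascent on the density
# should lie within sqrt(n) * sigma of the nearest component mean.
import numpy as np
from scipy.stats import multivariate_normal

n = 2                                   # dimension
sigma = 1.0
means = np.array([[0.0, 0.0], [2.5, 0.0], [1.0, 2.0]])   # arbitrary example
weights = np.array([0.5, 0.3, 0.2])
comps = [multivariate_normal(m, sigma ** 2 * np.eye(n)) for m in means]

def density(x):
    return sum(w * c.pdf(x) for w, c in zip(weights, comps))

def grad(x):
    # Gradient of the mixture density: sum_k w_k * N_k(x) * (mu_k - x) / sigma^2.
    return sum(w * c.pdf(x) * (m - x) / sigma ** 2
               for w, c, m in zip(weights, comps, means))

x = np.array([1.2, 0.7])                # arbitrary starting point
for _ in range(2000):                   # plain gradient ascent toward a local mode
    x = x + 0.1 * grad(x) / max(density(x), 1e-12)

dist = np.min(np.linalg.norm(means - x, axis=1))
print(f"mode {x}, distance to nearest mean {dist:.3f}, bound {np.sqrt(n) * sigma:.3f}")
```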

