Unsupervised Learning in RSS-Based DFLT Using an EM Algorithm

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5549
Author(s):  
Ossi Kaltiokallio ◽  
Roland Hostettler ◽  
Hüseyin Yiğitler ◽  
Mikko Valkama

Received signal strength (RSS) changes of static wireless nodes can be used for device-free localization and tracking (DFLT). Most RSS-based DFLT systems require access to calibration data, either RSS measurements from a time period when the area was not occupied by people, or measurements while a person stands in known locations. Such calibration periods can be very expensive in terms of time and effort, making system deployment and maintenance challenging. This paper develops an Expectation-Maximization (EM) algorithm based on Gaussian smoothing for estimating the unknown RSS model parameters, liberating the system from supervised training and calibration periods. To fully use the EM algorithm’s potential, a novel localization-and-tracking system is presented to estimate a target’s arbitrary trajectory. To demonstrate the effectiveness of the proposed approach, it is shown that: (i) the system requires no calibration period; (ii) the EM algorithm improves the accuracy of existing DFLT methods; (iii) it is computationally very efficient; and (iv) the system outperforms a state-of-the-art adaptive DFLT system in terms of tracking accuracy.
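As a rough illustration of the EM-with-Gaussian-smoothing recipe (not the paper's full DFLT model, which involves RSS measurement models and target trajectories), the sketch below runs EM on a scalar linear-Gaussian state-space model: the E-step is a Kalman filter plus RTS smoother, and the M-step updates the unknown measurement-noise variance in closed form. The random-walk state and known process noise are simplifying assumptions.

```python
import numpy as np

def kalman_smoother(y, Q, R):
    n = len(y)
    xf = np.zeros(n); Pf = np.zeros(n)   # filtered mean / variance
    xp = np.zeros(n); Pp = np.zeros(n)   # predicted mean / variance
    x, P = 0.0, 10.0                     # vague prior on the initial state
    for t in range(n):
        xp[t], Pp[t] = x, P + Q          # predict (random-walk state)
        K = Pp[t] / (Pp[t] + R)          # Kalman gain (H = 1)
        x = xp[t] + K * (y[t] - xp[t])   # measurement update
        P = (1 - K) * Pp[t]
        xf[t], Pf[t] = x, P
    xs = xf.copy(); Ps = Pf.copy()       # RTS backward smoothing pass
    for t in range(n - 2, -1, -1):
        G = Pf[t] / Pp[t + 1]
        xs[t] = xf[t] + G * (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + G**2 * (Ps[t + 1] - Pp[t + 1])
    return xs, Ps

def em_noise_variance(y, Q=0.01, R=1.0, iters=20):
    for _ in range(iters):
        xs, Ps = kalman_smoother(y, Q, R)    # E-step: Gaussian smoothing
        R = np.mean((y - xs) ** 2 + Ps)      # M-step: closed-form R update
    return R

rng = np.random.default_rng(0)
state = np.cumsum(rng.normal(0.0, 0.1, 500))   # latent slowly drifting level
y = state + rng.normal(0.0, 0.5, 500)          # noisy observations
print(em_noise_variance(y))                    # should approach 0.25
```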

2012 ◽  
Vol 532-533 ◽  
pp. 1445-1449
Author(s):  
Ting Ting Tong ◽  
Zhen Hua Wu

The EM algorithm is a common method for estimating mixture model parameters in the statistical classification of remote sensing images. This paper presents an EM algorithm based on fuzzification, in which each training sample is represented by a fuzzy set. Through weighted degrees of membership, samples contribute differently during iteration, which reduces the impact of noise on parameter learning and increases the algorithm's convergence rate. As a result, the classification of image data can be performed with improved accuracy.
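A minimal sketch of the weighting idea, assuming the simplest possible setting (a one-dimensional two-component Gaussian mixture); the paper's fuzzification scheme for remote sensing imagery is richer, but the mechanics are the same: each sample's fuzzy weight scales its contribution to the M-step statistics.

```python
import numpy as np

def weighted_em_gmm(x, w, K=2, iters=50):
    mu = np.quantile(x, np.linspace(0.25, 0.75, K))   # crude initial means
    var = np.full(K, x.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities r[i, k] from current parameters
        d = (x[:, None] - mu) ** 2
        logp = np.log(pi) - 0.5 * (np.log(2 * np.pi * var) + d / var)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: every sufficient statistic is scaled by the fuzzy weight w[i]
        rw = r * w[:, None]
        Nk = rw.sum(axis=0)
        mu = (rw * x[:, None]).sum(axis=0) / Nk
        var = (rw * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
        pi = Nk / Nk.sum()
    return pi, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
w = np.ones_like(x)
w[::10] = 0.2            # down-weight every 10th sample as suspected noise
print(weighted_em_gmm(x, w))
```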


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Xianghui Yuan ◽  
Feng Lian ◽  
Chongzhao Han

Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared, and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation-maximization (EM) algorithm is derived, in both batch and recursive forms. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
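For reference, the first model in the comparison, the CT model with known turn rate, has a standard closed-form transition matrix; a small sketch follows, assuming the state ordering [x, vx, y, vy].

```python
import numpy as np

def ct_transition(w, T):
    """Coordinated-turn transition matrix for turn rate w (rad/s), step T (s)."""
    if abs(w) < 1e-9:                        # limit w -> 0: constant velocity
        return np.array([[1, T, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 1, T],
                         [0, 0, 0, 1.0]])
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([[1, s / w,       0, -(1 - c) / w],
                     [0, c,           0, -s],
                     [0, (1 - c) / w, 1,  s / w],
                     [0, s,           0,  c]])

x = np.array([0.0, 10.0, 0.0, 0.0])          # moving along +x at 10 m/s
for _ in range(5):                            # turn at 0.1 rad/s, 1 s steps
    x = ct_transition(0.1, 1.0) @ x
print(x)                                      # trajectory curves toward -y
```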


Mathematics ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 373
Author(s):  
Branislav Panić ◽  
Jernej Klemenc ◽  
Marko Nagode

A commonly used tool for estimating the parameters of a mixture model is the Expectation-Maximization (EM) algorithm, an iterative procedure that serves as a maximum-likelihood estimator. The EM algorithm has well-documented drawbacks, such as the need for good initial values and the possibility of being trapped in local optima. Nevertheless, because of its appealing properties, EM plays an important role in estimating the parameters of mixture models. To overcome its initialization problems, this paper proposes the Rough-Enhanced-Bayes mixture estimation (REBMIX) algorithm as a more effective initialization method. Three strategies are derived for dealing with the unknown number of components in the mixture model. These strategies are thoroughly tested on artificial datasets, density-estimation datasets, and image-segmentation problems and compared with state-of-the-art initialization methods for EM. The proposal shows promising results in clustering and density-estimation performance as well as in computational efficiency. All the improvements are implemented in the rebmix R package.
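The rebmix R package implements the method itself; the Python sketch below only illustrates the general spirit of density-based initialization: take a rough histogram density estimate, place initial means at its tallest local modes, and let EM refine from there. The bin count and peak selection are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(m, 0.5, 250) for m in (0.0, 3.0, 6.0)])

counts, edges = np.histogram(x, bins=40)              # rough density estimate
centers = (edges[:-1] + edges[1:]) / 2
is_peak = (counts[1:-1] > counts[:-2]) & (counts[1:-1] > counts[2:])
peaks = centers[1:-1][is_peak]                        # local histogram modes
means0 = peaks[np.argsort(counts[1:-1][is_peak])][-3:]   # 3 tallest modes

gm = GaussianMixture(n_components=3, means_init=np.sort(means0)[:, None],
                     random_state=0).fit(x[:, None])  # EM refines from modes
print(gm.means_.ravel())                              # approximately 0, 3, 6
```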


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Natee Thong-un ◽  
Minoru K. Kurosawa

Overlapping signals are a significant problem in multiple-object localization. Doppler velocity is sensitive to the echo shape and can also be related to the physical properties of moving objects, especially for pulse compression ultrasonic signals. The expectation-maximization (EM) algorithm is able to achieve signal separation, so applying it to overlapping pulse compression signals is of interest. This paper describes a proposed method, based on the EM algorithm, for Doppler velocity estimation of overlapping linear-period-modulated (LPM) ultrasonic signals. Simulations are used to validate the proposed method.
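As context for why the echoes overlap, here is a hypothetical sketch of the signal side: two delayed LPM echoes whose raw waveforms overlap in time, compressed by matched filtering. The EM-based separation and Doppler estimation that the paper contributes sit on top of signals like these; every parameter below is illustrative.

```python
import numpy as np

fs = 1e6                                   # sample rate (Hz), illustrative
T0, a = 1e-5, 5e-3                         # initial period 10 us, period sweep
t = np.arange(0, 2e-3, 1 / fs)             # 2 ms transmit signal
phase = (2 * np.pi / a) * np.log1p(a * t / T0)   # LPM phase: f(t) = 1/(T0 + a*t)
s = np.cos(phase)

rx = np.zeros(len(t) + 4000)
rx[:len(t)] += s                           # echo 1 at delay 0
rx[800:800 + len(t)] += 0.7 * s            # echo 2 overlaps echo 1 in time
rx += 0.1 * np.random.default_rng(3).normal(size=len(rx))

compressed = np.correlate(rx, s, mode="valid")   # matched-filter compression
p1 = int(np.argmax(compressed))
masked = compressed.copy()
masked[max(0, p1 - 100):p1 + 100] = -np.inf      # blank out the first peak
p2 = int(np.argmax(masked))
print(sorted([p1, p2]))                          # ~[0, 800]: the two delays
```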


2016 ◽  
Vol 12 (1) ◽  
pp. 65-77
Author(s):  
Michael D. Regier ◽  
Erica E. M. Moodie

We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when a standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm to a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension in the presence of missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems permits a broader implementation of the EM algorithm, allows the use of software packages that implement and/or automate it, and makes it accessible to a wider and more general audience.


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Yupeng Li ◽  
Jianhua Zhang ◽  
Ruisi He ◽  
Lei Tian ◽  
Hewen Wei

In this paper, the Gaussian mixture model (GMM) is introduced for channel multipath clustering. In the GMM setting, the expectation-maximization (EM) algorithm is usually used to estimate the model parameters; however, EM often converges to a local optimum. To address this issue, a hybrid differential evolution (DE) and EM (DE-EM) algorithm is proposed. Specifically, DE is employed to initialize the GMM parameters, which are then estimated with the EM algorithm. Thanks to the global searching ability of DE, the proposed hybrid DE-EM algorithm is more likely to reach the global optimum. Simulations demonstrate that the proposed DE-EM clustering algorithm can significantly improve clustering performance.
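A simplified sketch of the hybrid scheme under strong assumptions (one-dimensional data; DE searches only over component means, with unit variances and equal weights in the objective): DE supplies a globally searched initialization, and a standard EM run refines it locally.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(0, 1, 300),
                    rng.normal(4, 1, 300)])
K = 3

def neg_loglik(means):                 # DE objective: mixture NLL in the means
    d = (x[:, None] - means) ** 2
    comp = np.exp(-0.5 * d) / np.sqrt(2 * np.pi)
    return -np.log(comp.mean(axis=1)).sum()

res = differential_evolution(neg_loglik, bounds=[(-10, 10)] * K, seed=0)
gm = GaussianMixture(n_components=K, means_init=np.sort(res.x)[:, None],
                     random_state=0).fit(x[:, None])   # EM refinement
print(np.sort(res.x), gm.means_.ravel())   # DE initialization vs. EM result
```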


Author(s):  
Chandan K. Reddy ◽  
Bala Rajaratnam

In the field of statistical data mining, the Expectation-Maximization (EM) algorithm is one of the most popular methods for solving parameter estimation problems in the maximum likelihood (ML) framework. Compared to traditional methods such as steepest descent, conjugate gradient, or Newton-Raphson, which are often too complicated to use for these problems, EM has become popular because it takes advantage of problem-specific properties (Xu et al., 1996). The EM algorithm converges to a local maximum of the log-likelihood function under very general conditions (Dempster et al., 1977; Redner et al., 1984). Efficient maximization of the likelihood by augmenting it with latent variables, together with guarantees of convergence, are among the important hallmarks of the EM algorithm. EM-based methods have been applied successfully to a wide range of problems in pattern recognition, clustering, information retrieval, computer vision, bioinformatics (Reddy et al., 2006; Carson et al., 2002; Nigam et al., 2000), and other fields. Given an initial set of parameters, the EM algorithm computes parameter estimates that locally maximize the likelihood function of the data. In spite of its strong theoretical foundations, wide applicability, and importance for solving real-world problems, the standard EM algorithm suffers from certain fundamental drawbacks in practical settings. Some of the main difficulties of using the EM algorithm on a general log-likelihood surface are as follows (Reddy et al., 2008):
• The EM algorithm for mixture modeling converges to a local maximum of the log-likelihood function very quickly.
• Many other promising locally optimal solutions lie in the close vicinity of the solutions obtained from methods that provide good initial guesses.
• Model selection criteria usually assume that the global optimum of the log-likelihood function can be obtained, but achieving this is computationally intractable.
• Some regions of the search space contain no promising solutions. Promising and non-promising regions co-exist, and it becomes challenging to avoid wasting computational resources searching the non-promising regions.
Of all these concerns, the fact that most local maxima are not distributed uniformly makes it important to develop algorithms that not only avoid inefficient search over low-likelihood regions but also explore promising subspaces more thoroughly (Zhang et al., 2004). Such a subspace search also makes the solution less sensitive to the initial set of parameters. In this chapter, we discuss the theoretical aspects of the EM algorithm and demonstrate its use in obtaining optimal estimates of the parameters of mixture models. We also discuss some practical concerns of using the EM algorithm and present a few results on the performance of various algorithms that address these problems. The first two difficulties listed above are easy to see empirically, as shown in the sketch that follows.
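In the sketch below, the same mixture fit is restarted from 20 random initializations; EM converges each time, but to different local maxima, so the converged log-likelihood bounds spread out.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 0.4, (150, 2))
               for m in [(0, 0), (2, 2), (4, 0), (2, -2)]])

# One EM run per random start; lower_bound_ is the converged log-likelihood bound.
lls = [GaussianMixture(n_components=4, init_params="random", n_init=1,
                       random_state=s).fit(X).lower_bound_
       for s in range(20)]
print(f"best {max(lls):.3f}  worst {min(lls):.3f}  spread {max(lls) - min(lls):.3f}")
```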


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Lijuan Zhang ◽  
Dongming Li ◽  
Wei Su ◽  
Jinhua Yang ◽  
Yutong Jiang

To improve the restoration of adaptive optics (AO) images, we put forward a multiframe joint-deconvolution algorithm based on expectation-maximization theory. First, a mathematical model of the degraded multiframe AO images is built, and a model of the time-varying point spread function (PSF) is derived from the phase error; the AO images are denoised using the image power spectral density and a support constraint. Second, the EM algorithm is improved by incorporating the AO imaging system parameters and a regularization technique; a cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for the parameter estimates is built. Finally, image-restoration experiments on both simulated and real AO images are performed to verify the recovery performance of the algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our method reduces the number of iterations by 14.3% and improves the estimation accuracy. The model identifies the PSF of the AO images and recovers the observed target images clearly.
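One of the paper's baselines, RL-IBD, builds on Richardson-Lucy iteration, which is itself the EM algorithm for image deconvolution under Poisson noise. A minimal single-frame sketch with a synthetic Gaussian PSF follows; the paper's method goes well beyond this, with multiframe data, PSF estimation, and regularization.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, iters=30):
    est = np.full_like(img, img.mean())            # flat initial estimate
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iters):
        conv = fftconvolve(est, psf, mode="same")
        ratio = img / np.maximum(conv, 1e-12)      # data / model ratio image
        est *= fftconvolve(ratio, psf_mirror, mode="same")   # EM update
    return est

rng = np.random.default_rng(6)
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 100.0              # two point sources
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()                                   # synthetic Gaussian PSF
blurred = fftconvolve(truth, psf, mode="same") + 0.01
restored = richardson_lucy(blurred, psf)
print(np.unravel_index(restored.argmax(), restored.shape))  # near a source
```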


2006 ◽  
pp. 57-64 ◽  
Author(s):  
A. Uribe ◽  
R. Barrera ◽  
E. Brieva

The EM algorithm is a powerful tool for solving the membership problem in open clusters when a mixture density model of two heteroscedastic bivariate normal components is fitted to the cloud of relative proper motions of the stars in a region of the sky where a cluster is supposed to be. A membership study of 1866 stars located in the region of the very old open cluster M67 is carried out via the Expectation-Maximization algorithm using the EMMIX software of McLachlan, Peel, Basford, and Adams.
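A sketch of the membership computation on synthetic proper motions (the study itself uses 1866 stars and the EMMIX software): fit a two-component heteroscedastic bivariate normal mixture, then read cluster membership probabilities off the posterior responsibilities from the E-step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
cluster = rng.normal([0, 0], [0.5, 0.5], (400, 2))   # concentrated members
field = rng.normal([1, -1], [3.0, 3.0], (600, 2))    # dispersed field stars
pm = np.vstack([cluster, field])                     # relative proper motions

gm = GaussianMixture(n_components=2, covariance_type="full",
                     random_state=0).fit(pm)
membership = gm.predict_proba(pm)                    # P(component | star)
tight = np.argmin(gm.covariances_.trace(axis1=1, axis2=2))  # cluster component
print((membership[:, tight] > 0.5).sum(), "probable members")
```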


1994 ◽  
Vol 6 (2) ◽  
pp. 181-214 ◽  
Author(s):  
Michael I. Jordan ◽  
Robert A. Jacobs

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
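A flat, single-level mixture-of-experts sketch of the EM scheme, with two linear experts, a logistic gate, and a fixed, known noise standard deviation for brevity; the paper's architecture nests the gating recursively into a tree and uses GLIMs throughout.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
x = rng.uniform(-3, 3, 400)
y = np.where(x < 0, 2 * x + 1, -x + 1) + rng.normal(0, 0.2, 400)

w = rng.normal(0, 1, (2, 2))                  # each expert: [slope, intercept]
gate = LogisticRegression()
g = np.full((400, 2), 0.5)                    # initial gating probabilities
for _ in range(30):
    # E-step: responsibilities = gate prior x expert likelihood (sigma fixed)
    pred = w[:, 0] * x[:, None] + w[:, 1]     # (400, 2) expert predictions
    lik = np.exp(-0.5 * ((y[:, None] - pred) / 0.2) ** 2)
    r = g * lik + 1e-12
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per expert, weighted logistic gate fit
    for k in range(2):
        w[k] = np.polyfit(x, y, 1, w=np.sqrt(r[:, k]))
    gate.fit(np.concatenate([x, x])[:, None], np.repeat([0, 1], 400),
             sample_weight=r.T.ravel())
    g = gate.predict_proba(x[:, None])
print(w)   # rows should approach [2, 1] and [-1, 1], in some order
```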

