Finite-dimensional models for hidden Markov chains

1995 ◽  
Vol 27 (1) ◽  
pp. 146-160
Author(s):  
Lakhdar Aggoun ◽  
Robert J. Elliott

A continuous-time, non-linear filtering problem is considered in which both signal and observation processes are Markov chains. New finite-dimensional filters and smoothers are obtained for the state of the signal, for the number of jumps from one state to another, for the occupation time in any state of the signal, and for joint occupation times of the two processes. These estimates are then used in the expectation maximization algorithm to improve the parameters in the model. Consequently, our filters and model are adaptive, or self-tuning.
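For intuition, the discrete-time analogue of such a state filter is a short normalized recursion over the chain's state distribution. The sketch below is illustrative only: the transition matrix `A`, emission matrix `B`, and initial distribution `pi0` are hypothetical values, not taken from the paper, and the paper's filters are for the continuous-time setting.

```python
import numpy as np

def forward_filter(A, B, pi0, obs):
    """Recursive state filter for a finite-state hidden Markov chain.

    A[i, j] : transition probability from state i to state j
    B[i, k] : probability of emitting observation symbol k from state i
    pi0     : initial state distribution
    obs     : sequence of observed symbols
    Returns the filtered distribution p(x_t | y_1..y_t) at each step.
    """
    p = pi0.copy()
    out = []
    for y in obs:
        p = (p @ A) * B[:, y]  # predict one step, then correct by the likelihood
        p /= p.sum()           # normalize: this keeps the recursion finite-dimensional
        out.append(p.copy())
    return np.array(out)

# Hypothetical two-state example
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
pi0 = np.array([0.5, 0.5])
filt = forward_filter(A, B, pi0, [0, 0, 1, 1, 1])
```

In an EM (Baum–Welch) pass, quantities analogous to those filtered in the paper, such as expected jump counts and occupation times, are accumulated from these filtered and smoothed distributions to re-estimate `A` and `B`.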


2020 ◽  
Vol 26 (2) ◽  
pp. 113-129
Author(s):  
Hamza M. Ruzayqat ◽  
Ajay Jasra

In the following article, we consider the non-linear filtering problem in continuous time, and in particular the solution to Zakai’s equation or the normalizing constant. We develop a methodology to produce finite-variance, almost surely unbiased estimators of the solution to Zakai’s equation. That is, given access only to a first-order discretization of the solution to the Zakai equation, we present a method which can remove this discretization bias. The approach is proved, under assumptions, to have finite variance, and is numerically compared to a particular multilevel Monte Carlo method.
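The debiasing principle can be illustrated away from the Zakai equation on a toy one-dimensional problem: a single-term randomized-level (Rhee–Glynn-type) estimator built from Euler discretizations of x' = -x has no discretization bias in expectation, even though every fixed-step discretization is biased. Everything below (the ODE, the level distribution, the step counts) is illustrative and is not the paper's construction:

```python
import random

def euler(n):
    """Euler scheme for x' = -x, x(0) = 1 on [0, 1]: a biased estimate of e^{-1}."""
    x, h = 1.0, 1.0 / n
    for _ in range(n):
        x -= h * x
    return x

def debiased_sample(rng, r=0.5, max_level=20):
    """Single-term randomized estimator: E[estimate] = lim_l euler(2**(l+1)) = e^{-1}.

    A random level L is drawn with P(L = l) = (1 - r) * r**l, and the level-l
    increment Delta_l = euler(2**(l+1)) - euler(2**l) is reweighted by 1/P(L = l).
    Since Delta_l^2 decays like 4**(-l) while P(L = l) decays like 2**(-l),
    the estimator has finite variance.
    """
    l = 0
    while rng.random() < r and l < max_level:
        l += 1
    # exact probability of the sampled level (the cap absorbs the geometric tail)
    p_l = r ** l if l == max_level else (1.0 - r) * r ** l
    fine = euler(2 ** (l + 1))
    coarse = euler(2 ** l) if l > 0 else 0.0
    return (fine - coarse) / p_l

rng = random.Random(0)  # fixed seed for reproducibility
est = sum(debiased_sample(rng) for _ in range(20_000)) / 20_000
```

Averaging many such samples recovers e^{-1} ≈ 0.3679 despite each `euler(n)` being biased; the paper applies the same randomization idea to discretizations of the Zakai equation itself.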


1987 ◽  
Vol 1 (3) ◽  
pp. 251-264 ◽  
Author(s):  
Sheldon M. Ross

In this paper we propose a new approach for estimating the transition probabilities and mean occupation times of continuous-time Markov chains. Our approach is to approximate the probability of being in a state (or the mean time already spent in a state) at time t by the probability of being in that state (or the mean time already spent in that state) at a random time that is gamma distributed with mean t.
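The gamma-randomized-time idea admits a closed matrix form: if Y is gamma distributed with shape n and mean t (a sum of n independent exponentials, each with rate n/t), then E[P(Y)] = (I - (t/n)Q)^{-n}, where Q is the chain's generator. A sketch with an assumed two-state generator, checked against the known closed-form transition probability for that chain:

```python
import numpy as np

# Hypothetical two-state chain: leaves state 0 at rate a, leaves state 1 at rate b
a, b, t, n = 1.0, 2.0, 1.0, 500
Q = np.array([[-a, a], [b, -b]])

# Inspecting the chain at a Gamma(n, n/t) time (mean t) gives, in closed form,
# E[P(Y)] = (I - (t/n) Q)^{-n}, which converges to P(t) = exp(tQ) as n grows.
M = np.linalg.inv(np.eye(2) - (t / n) * Q)
approx = np.linalg.matrix_power(M, n)

# Exact transition probability P_00(t) for the two-state chain, for comparison
exact_p00 = b / (a + b) + (a / (a + b)) * np.exp(-(a + b) * t)
```

Each row of `approx` still sums to one (the resolvent of a generator is stochastic), and the entrywise error shrinks as n increases, matching the concentration of the gamma time around t.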


1988 ◽  
Vol 2 (2) ◽  
pp. 267-268
Author(s):  
Sheldon M. Ross

In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote respectively the probability that it is in state j at time t, and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables each with rate λ = n/t, which are also independent of the Markov chain.


1988 ◽  
Vol 2 (4) ◽  
pp. 471-474 ◽  
Author(s):  
Nico M. van Dijk

Recently, Ross [1] proposed an elegant method of approximating transition probabilities and mean occupation times in continuous-time Markov chains based upon recursively inspecting the process at exponential times. The method turned out to be amazingly efficient for the examples investigated. However, no formal rough error bound was provided. Any error bound even though robust is of practical interest in engineering (e.g., for determining truncation criteria or setting up an experiment). This note primarily aims to show that by a simple and standard comparison relation a rough error bound of the method is secured. Also, some alternative approximations are inspected.


1991 ◽  
Vol 334 (1271) ◽  
pp. 357-384 ◽  

Techniques for characterizing very small single-channel currents buried in background noise are described and tested on simulated data to give confidence when applied to real data. Single-channel currents are represented as a discrete-time, finite-state, homogeneous Markov process, and the noise that obscures the signal is assumed to be white and Gaussian. The various signal-model parameters, such as the Markov state levels and transition probabilities, are unknown. In addition to white Gaussian noise, the signal can be corrupted by deterministic interferences of known form but unknown parameters, such as a sinusoidal disturbance stemming from AC interference and a drift of the baseline owing to a slow development of liquid-junction potentials. To characterize the signal buried in such stochastic and deterministic interferences, the problem is first formulated in the framework of a Hidden Markov Model, and the Expectation Maximization algorithm is then applied to obtain the maximum likelihood estimates of the model parameters (state levels, transition probabilities), the signal, and the parameters of the deterministic disturbances. Using fictitious channel currents embedded in the idealized noise, we first show that the signal-processing technique can characterize the signal quite accurately even when the amplitude of the currents is as small as 5-10 fA. The statistics of the signal estimated by the processing technique include the amplitude, the mean open and closed durations, open-time and closed-time histograms, dwell-time probabilities, and the transition probability matrix.
With a periodic interference composed, for example, of 50 Hz and 100 Hz components, or a linear drift of the baseline added to the segment containing channel currents and white noise, the parameters of the deterministic interference, such as the amplitude and phase of the sinusoidal wave, or the rate of linear drift, as well as all the relevant statistics of the signal, are accurately estimated with the algorithm we propose. Also, if the frequencies of the periodic interference are unknown, they can be accurately estimated. Finally, we provide a technique by which channel currents originating from the sum of two or more independent single channels are decomposed so that each process can be separately characterized. This process is also formulated as a Hidden Markov Model problem and solved by applying the Expectation Maximization algorithm. The scheme relies on the fact that the transition matrix of the summed Markov process can be construed as a tensor product of the transition matrices of individual processes.
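The decomposition of summed channels relies on a standard fact: for independent chains, the transition matrix of the joint process is the Kronecker (tensor) product of the individual transition matrices. A small numeric check with hypothetical two-state channels:

```python
import numpy as np

# Hypothetical transition matrices for two independent two-state channels
P1 = np.array([[0.9, 0.1], [0.3, 0.7]])
P2 = np.array([[0.8, 0.2], [0.4, 0.6]])

# Joint chain over the 4 combined states: (0,0), (0,1), (1,0), (1,1)
P = np.kron(P1, P2)

# Each joint entry factors: P[(i1,i2) -> (j1,j2)] = P1[i1,j1] * P2[i2,j2]
assert np.isclose(P[0, 3], P1[0, 1] * P2[0, 1])
```

The summed current only observes the sum of the state levels, so states of the joint chain with equal level sums are aliased; the factored structure of `P` is what lets the EM procedure pull the individual processes apart.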

