Normalized scoring of Hidden Markov Models by on-line learning and its application to gesture-sequence perception

Author(s): Mio Nishiyama, Tadashi Shibata

1994, Vol 6 (2), pp. 307-318
Author(s): Pierre Baldi, Yves Chauvin

A simple learning algorithm for Hidden Markov Models (HMMs) is presented together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most likely path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
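As a rough illustration of the idea described in the abstract, the sketch below performs one on-line gradient step on the log-likelihood of a single sequence, with the transition and emission matrices stored in a normalized-exponential (softmax) representation. This is a minimal sketch under stated assumptions, not the paper's exact formulation: it assumes discrete emissions, a fixed initial distribution, and a plain scaled forward-backward pass, and the names (`online_step`, `W_trans`, `W_emit`, the learning rate `lr`) are illustrative.

```python
import numpy as np

def softmax(W):
    """Row-wise normalized exponential: each row sums to one."""
    E = np.exp(W - W.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass; returns per-state posteriors (gamma)
    and expected transition counts summed over time (xi)."""
    T, N = len(obs), A.shape[0]
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    xi = np.zeros((N, N))
    for t in range(T - 1):
        xi += (alpha[t][:, None] * A
               * (B[:, obs[t + 1]] * beta[t + 1])[None, :]) / c[t + 1]
    return gamma, xi

def online_step(W_trans, W_emit, pi, obs, lr=0.1):
    """One on-line update: gradient ascent on the log-likelihood of a single
    sequence with respect to the softmax parameters (hypothetical helper)."""
    A, B = softmax(W_trans), softmax(W_emit)
    gamma, xi = forward_backward(A, B, pi, obs)
    # For softmax rows, the log-likelihood gradient is
    # (expected count) - (row total of expected counts) * (current probability).
    n_trans = xi                        # expected i -> j transition counts
    n_emit = np.zeros_like(B)
    for t, o in enumerate(obs):
        n_emit[:, o] += gamma[t]        # expected state-i, symbol-o emissions
    W_trans += lr * (n_trans - n_trans.sum(axis=1, keepdims=True) * A)
    W_emit += lr * (n_emit - n_emit.sum(axis=1, keepdims=True) * B)
    return W_trans, W_emit

# Tiny usage example: 2 hidden states, 3 observation symbols, one update per sequence.
rng = np.random.default_rng(0)
W_trans = rng.normal(size=(2, 2)); W_emit = rng.normal(size=(2, 3))
pi = np.array([0.5, 0.5])
for seq in ([0, 1, 2, 2, 1], [2, 2, 0, 1, 0]):
    W_trans, W_emit = online_step(W_trans, W_emit, pi, np.array(seq))
```

Because the parameters live in softmax form, every gradient step keeps the transition and emission matrices valid probability distributions without any explicit renormalization, which is what makes the updates smooth and usable after each example presentation.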


Biometrics, 2013, Vol 69 (3), pp. 703-713
Author(s): D. L. Borchers, W. Zucchini, M. P. Heide-Jørgensen, A. Cañadas, R. Langrock
