Observable Operator Models

2016 ◽  
Vol 36 (1) ◽  
Author(s):  
Ilona Spanczér

This paper describes a new approach to modeling discrete stochastic processes, called observable operator models (OOMs). OOMs were introduced by Jaeger as a generalization of hidden Markov models (HMMs). The theory of OOMs draws on both probabilistic and linear-algebraic tools, which has an important advantage: with the tools of linear algebra, a very simple and efficient learning algorithm can be developed for OOMs, one that appears to outperform the known algorithms for HMMs. This learning algorithm is presented in detail in the second part of the article.
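
To make the operator view concrete, the following is a minimal sketch (not taken from the article) of how an OOM assigns probabilities to sequences: a small set of observable operator matrices and a starting vector, with the probability of a sequence obtained by applying the matrices to the starting vector and summing the result. The particular matrices are illustrative choices only; note that general OOM operators need not have nonnegative entries, unlike HMM parameters.

```python
import numpy as np

# Toy OOM over two symbols {0, 1} in a 2-dimensional state space.
# The matrices are illustrative; they only need to satisfy the OOM constraints
#   ones @ (tau[0] + tau[1]) == ones   and   ones @ w0 == 1.
tau = [np.array([[0.4, 0.1],
                 [0.2, 0.3]]),
       np.array([[0.3, 0.4],
                 [0.1, 0.2]])]
w0 = np.array([0.5, 0.5])
ones = np.ones(2)

def seq_prob(seq):
    """P(a_1 ... a_k) = ones @ tau[a_k] @ ... @ tau[a_1] @ w0."""
    w = w0
    for a in seq:
        w = tau[a] @ w
    return ones @ w

# Sanity check: probabilities of all length-3 sequences sum to one.
total = sum(seq_prob((a, b, c)) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(total)  # ~1.0
```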

2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Yanxue Zhang ◽  
Dongmei Zhao ◽  
Jinxing Liu

The biggest difficulty in applying hidden Markov models to multistep attacks is the determination of the observations. Research on how to determine the observations is still lacking, and current choices show a certain degree of subjectivity. In this regard, we integrate attack intentions with the hidden Markov model (HMM) and propose a method for forecasting multistep attacks based on HMMs. First, we train the existing hidden Markov models with the Baum-Welch algorithm. Then we recognize which attack scenario an alert belongs to with the forward algorithm. Finally, we forecast the next possible attack sequence with the Viterbi algorithm. Simulation experiments show that the trained hidden Markov models outperform the untrained ones in both recognition and prediction.
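
For context, here is a minimal NumPy sketch of the two inference steps the abstract relies on: the forward recursion for scoring an alert sequence against a trained model, and Viterbi decoding for recovering the most likely hidden attack steps. The transition matrix, observation matrix, and alert alphabet below are hypothetical placeholders, not the paper's actual attack models.

```python
import numpy as np

A = np.array([[0.7, 0.3],          # state-transition probabilities
              [0.2, 0.8]])
B = np.array([[0.6, 0.3, 0.1],     # per-state observation probabilities
              [0.1, 0.4, 0.5]])
pi = np.array([0.9, 0.1])          # initial state distribution
obs = [0, 1, 2, 2]                 # an observed alert sequence (symbol ids)

def forward_loglik(obs, A, B, pi):
    """Log P(obs | model) via the scaled forward recursion."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def viterbi(obs, A, B, pi):
    """Most likely hidden state path for obs (log-space Viterbi)."""
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = delta[:, None] + logA          # scores[i, j]: end in j via i
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) + logB[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

print(forward_loglik(obs, A, B, pi), viterbi(obs, A, B, pi))
```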


Author(s):  
Sarah Creer ◽  
Phil Green ◽  
Stuart Cunningham ◽  
Junichi Yamagishi

For an individual with a speech impairment, it may be necessary to use a device that produces synthesized speech to assist communication. To fully support all functions of human speech communication (communication of information, maintenance of social relationships, and display of identity), the voice must be intelligible and natural-sounding. Ideally, it must also be capable of conveying the speaker’s vocal identity. A new approach based on hidden Markov models (HMMs) has been proposed as a way of capturing sufficient information about an individual’s speech to enable a personalized speech synthesizer to be developed. This approach adapts a statistical model of speech towards the vocal characteristics of an individual. This chapter describes the approach and how it can be implemented using the HTS toolkit. Results are reported from a study that built personalized synthetic voices for two individuals with dysarthria. An evaluation of the voices by the participants themselves suggests that this technique shows promise for building personalized voices for individuals with progressive dysarthria, even when their speech has begun to deteriorate.


Author(s):  
Intan Nurma Yulita ◽  
Houw Liong The ◽  
Adiwijaya

Indonesia has many tribes and therefore many dialects. Speech classification is difficult when the database contains speech signals from various people with different characteristics of gender and dialect. These different characteristics influence the frequency, intonation, amplitude, and period of the speech, so the system must be trained on a variety of reference templates of the speech signal. Therefore, this study develops a classifier for Indonesian speech. The proposed solution is a new combination of fuzzy logic and hidden Markov models. The results show that the new fuzzy hidden Markov model outperforms the standard hidden Markov model.


1994 ◽  
Vol 6 (2) ◽  
pp. 307-318 ◽  
Author(s):  
Pierre Baldi ◽  
Yves Chauvin

A simple learning algorithm for hidden Markov models (HMMs) is presented together with a number of variations. Unlike other classical algorithms such as the Baum-Welch algorithm, the algorithms described are smooth and can be used on-line (after each example presentation) or in batch mode, with or without the usual Viterbi most likely path approximation. The algorithms have simple expressions that result from using a normalized-exponential representation for the HMM parameters. All the algorithms presented are proved to be exact or approximate gradient optimization algorithms with respect to likelihood, log-likelihood, or cross-entropy functions, and as such are usually convergent. These algorithms can also be cast in the more general EM (Expectation-Maximization) framework, where they can be viewed as exact or approximate GEM (Generalized Expectation-Maximization) algorithms. The mathematical properties of the algorithms are derived in the appendix.
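
As an illustration of the parametrization the abstract refers to, the sketch below uses a softmax (normalized-exponential) representation of the transition and emission probabilities and applies one gradient-style update to the transition logits. The sizes, initialization, and learning rate are illustrative assumptions, not the paper's settings; the expected transition counts would come from the usual forward-backward posteriors.

```python
import numpy as np

def softmax(w, axis=-1):
    e = np.exp(w - w.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical logits for a 3-state, 4-symbol HMM.
rng = np.random.default_rng(0)
W_trans = rng.normal(size=(3, 3))   # transition logits w_ij
W_emit = rng.normal(size=(3, 4))    # emission logits

A = softmax(W_trans, axis=1)        # row-stochastic transition matrix
B = softmax(W_emit, axis=1)         # row-stochastic emission matrix

def transition_logit_step(W_trans, expected_counts, lr=0.1):
    """One gradient-style update on the transition logits.

    With the normalized-exponential parametrization, the derivative of the
    log-likelihood with respect to a logit w_ij reduces to
        n_ij - (sum_k n_ik) * a_ij,
    where n_ij is the expected number of i -> j transitions given the
    observations (from the forward-backward posteriors). Because the
    normalization constraint is built into the softmax, the update stays on
    the probability simplex and can be applied on-line or in batch.
    """
    A = softmax(W_trans, axis=1)
    grad = expected_counts - expected_counts.sum(axis=1, keepdims=True) * A
    return W_trans + lr * grad
```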


Genetics ◽  
2009 ◽  
Vol 181 (4) ◽  
pp. 1567-1578 ◽  
Author(s):  
Simon Boitard ◽  
Christian Schlötterer ◽  
Andreas Futschik

2000 ◽  
Vol 12 (6) ◽  
pp. 1371-1398 ◽  
Author(s):  
Herbert Jaeger

A widely used class of models for stochastic systems is hidden Markov models. Systems that can be modeled by hidden Markov models are a proper subclass of linearly dependent processes, a class of stochastic systems known from mathematical investigations carried out over the past four decades. This article provides a novel, simple characterization of linearly dependent processes, called observable operator models. The mathematical properties of observable operator models lead to a constructive learning algorithm for the identification of linearly dependent processes. The core of the algorithm has a time complexity of O(N + nm³), where N is the size of the training data, n is the number of distinguishable outcomes of observations, and m is the model state-space dimension.
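
For illustration, the sketch below shows the flavor of the constructive identification step under simplifying assumptions: the indicative and characteristic events are taken to be single symbols, so the model dimension m equals the alphabet size n. Empirical pair and triple probabilities are collected into matrices V and W_a, and the operators are recovered as tau_a = W_a V^{-1}. Counting is linear in the data size and the inversion is cubic in the dimension, which matches the stated complexity up to constants. This is an outline only, not the article's exact procedure.

```python
import numpy as np
from collections import Counter

def estimate_ooms(data, n_symbols):
    # Empirical probabilities of pairs P(j then i) and triples P(j, a, i).
    pairs = Counter(zip(data, data[1:]))
    triples = Counter(zip(data, data[1:], data[2:]))
    n_pairs = max(len(data) - 1, 1)
    n_triples = max(len(data) - 2, 1)

    # V[i, j] = P(first j, then i);  W[a][i, j] = P(first j, then a, then i).
    V = np.zeros((n_symbols, n_symbols))
    for (j, i), c in pairs.items():
        V[i, j] = c / n_pairs
    W = [np.zeros((n_symbols, n_symbols)) for _ in range(n_symbols)]
    for (j, a, i), c in triples.items():
        W[a][i, j] = c / n_triples

    # Observable operators: tau_a = W_a @ V^{-1} (pseudo-inverse for safety),
    # recovered up to an equivalence transformation of the state space.
    V_inv = np.linalg.pinv(V)
    return [W_a @ V_inv for W_a in W]

# Usage on a toy symbol stream:
rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=10_000).tolist()
taus = estimate_ooms(data, n_symbols=2)
```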

