Abstract. We propose a new algorithm for spectral learning of Hidden Markov Models (HMMs).
In contrast to the standard approach, we do not estimate the
parameters of the HMM directly; instead, we
construct an estimate of the joint probability distribution of observations.
The idea is based on representing the joint probability distribution
as a d-th-order tensor with low ranks in the tensor train (TT) format.
Using the TT format, we obtain an approximation by minimizing the Frobenius distance between the empirical
joint probability distribution and the set of tensors with low TT-ranks, subject to normalization constraints on the core tensors.
We propose an algorithm for solving this optimization problem that is based on the
alternating least squares (ALS) approach, and we develop a fast version of it for sparse tensors.
The tensor order d is a parameter of our algorithm. We compared the performance of our algorithm
with the spectral algorithm of Hsu, Kakade and Zhang (2009) and found that ours is significantly more robust when the number of hidden states is overestimated.
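To make the TT/ALS idea concrete, here is a minimal NumPy sketch of fitting a tensor with low TT-ranks by minimizing the Frobenius distance via alternating least squares. This is an illustration under simplifying assumptions, not the paper's algorithm: it omits the core normalization constraints and the fast sparse version, and the names `tt_full` and `tt_als` are hypothetical.

```python
import numpy as np

def tt_full(cores):
    # Contract a list of TT cores G_k of shape (r_{k-1}, n, r_k) into the full tensor.
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=(out.ndim - 1, 0))
    return out.squeeze(axis=(0, out.ndim - 1))

def tt_als(T, rank, sweeps=25, seed=0):
    # Alternating least squares: with all other cores fixed, each core update
    # solves min_{G_k} ||L G_k R - T||_F exactly via pseudoinverses.
    d, n = T.ndim, T.shape[0]
    rng = np.random.default_rng(seed)
    r = [1] + [rank] * (d - 1) + [1]
    cores = [rng.standard_normal((r[k], n, r[k + 1])) for k in range(d)]
    for _ in range(sweeps):
        for k in range(d):
            # Left interface: contraction of cores 0..k-1, shape (n^k, r_k).
            L = np.ones((1, 1))
            for G in cores[:k]:
                L = (L @ G.reshape(G.shape[0], -1)).reshape(-1, G.shape[2])
            # Right interface: contraction of cores k+1..d-1, shape (r_{k+1}, n^{d-k-1}).
            R = np.ones((1, 1))
            for G in reversed(cores[k + 1:]):
                R = (G.reshape(-1, G.shape[2]) @ R).reshape(G.shape[0], -1)
            # Least-squares update of core k, one matrix slice per mode index.
            Lp, Rp = np.linalg.pinv(L), np.linalg.pinv(R)
            B = T.reshape(L.shape[0], n, R.shape[1])
            cores[k] = np.stack([Lp @ B[:, i, :] @ Rp for i in range(n)], axis=1)
    return cores
```

In the paper's setting, `T` would be the empirical joint probability tensor of length-d observation windows; a practical implementation would exploit its sparsity rather than materialize it densely as done here.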