Hierarchical Mixtures of Experts and the EM Algorithm
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum-likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot-dynamics domain.
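To make the EM treatment concrete, the following is a minimal, illustrative sketch of a *single-level* mixture of linear experts, not the paper's full hierarchical architecture: Gaussian linear experts, a softmax gating network, an E-step that computes posterior responsibilities, and an M-step that refits each expert by weighted least squares. For simplicity the gate is updated by a few gradient steps on the expected log-likelihood rather than the IRLS fit a full GLIM treatment would use; all variable names and the synthetic data are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a piecewise-linear target, one regime per expert.
n = 400
x = rng.uniform(-2.0, 2.0, size=n)
y = np.where(x < 0, -1.0 * x, 2.0 * x) + 0.1 * rng.normal(size=n)
X = np.column_stack([x, np.ones(n)])           # design matrix with bias term

K = 2                                           # number of experts
W_exp = rng.normal(scale=0.1, size=(K, 2))      # expert weights (linear models)
W_gate = rng.normal(scale=0.1, size=(K, 2))     # gating-network weights
sigma2 = 1.0                                    # shared Gaussian noise variance

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def log_likelihood():
    g = softmax(X @ W_gate.T)                   # gating probabilities, (n, K)
    mu = X @ W_exp.T                            # expert means, (n, K)
    lik = g * np.exp(-(y[:, None] - mu) ** 2 / (2 * sigma2)) \
            / np.sqrt(2 * np.pi * sigma2)
    return np.log(lik.sum(axis=1)).sum()

ll_before = log_likelihood()
for _ in range(50):
    # E-step: posterior responsibility of each expert for each data point.
    g = softmax(X @ W_gate.T)
    mu = X @ W_exp.T
    lik = g * np.exp(-(y[:, None] - mu) ** 2 / (2 * sigma2))
    h = lik / lik.sum(axis=1, keepdims=True)

    # M-step (experts): weighted least squares per expert.
    for k in range(K):
        wk = h[:, k]
        A = X.T @ (wk[:, None] * X) + 1e-8 * np.eye(2)
        W_exp[k] = np.linalg.solve(A, X.T @ (wk * y))
    sigma2 = (h * (y[:, None] - X @ W_exp.T) ** 2).sum() / n

    # M-step (gate): gradient ascent on the expected log-likelihood
    # (a stand-in for the IRLS step of a multinomial GLIM).
    for _ in range(10):
        g = softmax(X @ W_gate.T)
        W_gate += 0.1 * (h - g).T @ X / n

ll_after = log_likelihood()
print(ll_after > ll_before)   # EM should raise the log-likelihood
```

Each pass through the loop is one EM iteration; the weighted least-squares solve is exact for Gaussian linear experts, so the likelihood is non-decreasing apart from the approximate gate update.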