Exact Bayesian Inference and Model Selection for Stochastic Models of Epidemics Among a Community of Households

2007 · Vol 34 (2) · pp. 259-274
Author(s): DAMIAN CLANCY, PHILIP D. O'NEILL
Author(s): Masaaki Imaizumi, Ryohei Fujimaki

This paper proposes a novel direct policy search (DPS) method with model selection for partially observed Markov decision processes (POMDPs). DPS methods have become standard for learning in POMDPs owing to their computational efficiency and their natural ability to maximize total rewards. An important open challenge in the best use of DPS methods is model selection, i.e., determining the proper dimensionality of the hidden states and the complexity of the policy functions, so as to mitigate overfitting in highly flexible model representations of POMDPs. This paper bridges Bayesian inference and reward maximization by deriving a marginalized weighted log-likelihood (MWL) for POMDPs that combines the advantages of Bayesian model selection and DPS. We then propose factorized asymptotic Bayesian policy search (FABPS), which explores the model and the policy that jointly maximize the MWL, by extending recently developed factorized asymptotic Bayesian inference. Experimental results show that FABPS outperforms state-of-the-art model selection methods for POMDPs, both in model selection and in expected total reward.
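To make the direct-policy-search idea concrete, the sketch below shows plain DPS on a toy two-state POMDP. This is an illustrative construction only, not the authors' FABPS algorithm or MWL objective: the POMDP dynamics, the reactive policy form, and every function and parameter name (`step`, `rollout`, `theta`, etc.) are assumptions made for the example. A single policy parameter is tuned by grid search to maximize the simulated expected total reward.

```python
import random

# Toy two-state POMDP (illustrative): the agent never sees the hidden state,
# only a noisy binary observation, and is rewarded for matching the state.

def step(state, action, rng):
    """One transition: reward +1 if the action matches the hidden state,
    else -1; the state flips w.p. 0.1; the observation is correct w.p. 0.8."""
    reward = 1.0 if action == state else -1.0
    next_state = state if rng.random() > 0.1 else 1 - state
    obs = next_state if rng.random() < 0.8 else 1 - next_state
    return next_state, obs, reward

def rollout(theta, horizon=20, seed=0):
    """One episode under a reactive policy: act on the last observation
    with probability theta, otherwise take the opposite action."""
    rng = random.Random(seed)
    state, obs, total = 0, 0, 0.0
    for _ in range(horizon):
        action = obs if rng.random() < theta else 1 - obs
        state, obs, r = step(state, action, rng)
        total += r
    return total

def expected_reward(theta, episodes=200):
    """Monte Carlo estimate of the expected total reward under theta,
    using common random seeds across policies to reduce estimation noise."""
    return sum(rollout(theta, seed=s) for s in range(episodes)) / episodes

# Direct policy search: grid search over the single policy parameter.
best_theta = max((t / 10 for t in range(11)), key=expected_reward)
```

Because the observation is right more often than not, the search settles on a policy that trusts the observation (`best_theta` near 1). Model selection, the open challenge the paper addresses, would additionally have to choose the hidden-state dimensionality and the policy complexity, which this fixed toy model sidesteps.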
