Model selection for parameter identifiability problem in Bayesian inference of building energy model

2021, pp. 111059
Author(s): Dong Hyuk Yi, Cheol Soo Park

2019, Vol. 198, pp. 318-328
Author(s): Dong Hyuk Yi, Deuk Woo Kim, Cheol Soo Park

2021, pp. 110998
Author(s): Matthew J. Simpson, Alexander P. Browning, David J. Warne, Oliver J. Maclaren, Ruth E. Baker

Author(s): Danlin Hou, Ibrahim Galal Hassan, Liangzhu (Leon) Wang

Abstract The building sector accounts for nearly 40% of global energy consumption and plays a critical role in societal energy security and sustainability. A building energy model (BEM) simulates complex building physics and provides insight into the performance of various energy-saving measures. Analysis based on BEMs has become an essential approach to slowing the growth of building energy consumption, and the reliability and accuracy of BEMs strongly affect decision-making. However, how to calibrate a building energy model remains a challenge. In this study, Bayesian inference was applied to the calibration of an office building model under the arid weather conditions of Doha, Qatar. The coefficient of variation of the root-mean-square error (CV(RMSE)) for calibration and validation is 1.1% and 1.5%, respectively, well within the monthly calibration tolerance of 15% required by ASHRAE Guideline 14. Additionally, the calibrated parameters are expressed as probability distributions with degrees of confidence, making them more reasonable and comprehensive than those obtained from traditional deterministic calibration methods. A sensitivity analysis was conducted to select the model's dominant parameters under hot/arid weather conditions. This study is among the first stochastic calibrations based on Bayesian inference of building energy performance in arid weather.
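The CV(RMSE) metric referenced against ASHRAE Guideline 14 is the RMSE between simulated and measured energy use, normalized by the mean of the measured values. A minimal sketch of the computation (the monthly figures below are hypothetical, for illustration only, not data from the study):

```python
import math

def cv_rmse(measured, simulated):
    """CV(RMSE) in percent: RMSE of simulated vs. measured values,
    normalized by the mean of the measured values. ASHRAE Guideline 14
    allows up to 15% for monthly calibration data."""
    n = len(measured)
    rmse = math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100.0 * rmse / (sum(measured) / n)

# Hypothetical monthly energy use (e.g., MWh) -- illustrative numbers only.
measured  = [120.0, 110.0, 130.0, 125.0, 140.0, 150.0]
simulated = [118.0, 112.0, 128.0, 126.0, 141.0, 149.0]
print(round(cv_rmse(measured, simulated), 2))  # well under the 15% tolerance
```

A calibrated model in the sense of the abstract is one whose CV(RMSE) over the calibration (and validation) period falls below the guideline's tolerance.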


Author(s): Masaaki Imaizumi, Ryohei Fujimaki

This paper proposes a novel direct policy search (DPS) method with model selection for partially observable Markov decision processes (POMDPs). DPS methods have become standard for learning POMDPs because of their computational efficiency and natural ability to maximize total rewards. An important open challenge for the best use of DPS methods is model selection, i.e., determining the proper dimensionality of hidden states and the complexity of policy functions, to mitigate overfitting in highly flexible model representations of POMDPs. This paper bridges Bayesian inference and reward maximization and derives the marginalized weighted log-likelihood (MWL) for POMDPs, which combines the advantages of Bayesian model selection and DPS. We then propose factorized asymptotic Bayesian policy search (FABPS), which explores the model and policy that maximize MWL by extending recently developed factorized asymptotic Bayesian inference. Experimental results show that FABPS outperforms state-of-the-art model selection methods for POMDPs, with respect both to model selection and to expected total rewards.
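The Bayesian model-selection idea that MWL builds on, marginalizing over parameters so that the evidence automatically penalizes over-flexible models, can be sketched with a toy coin-flip example (an illustration of marginal likelihood only, not the paper's POMDP setting or its FAB derivation):

```python
from math import comb, log

def log_ml_fixed(heads, n, p=0.5):
    """M0: coin bias fixed at p -- log-likelihood of one specific flip sequence."""
    return heads * log(p) + (n - heads) * log(1 - p)

def log_ml_uniform(heads, n):
    """M1: uniform prior over the bias -- the marginal likelihood of a specific
    sequence is the Beta integral k!(n-k)!/(n+1)! = 1/((n+1)*C(n,k))."""
    return -log((n + 1) * comb(n, heads))

# With balanced data the simpler model M0 has higher evidence; only when
# the data are skewed enough does the flexible model M1 overtake it.
for heads in (10, 17):
    better = "M0" if log_ml_fixed(heads, 20) > log_ml_uniform(heads, 20) else "M1"
    print(heads, better)
```

The same mechanism, applied at scale via factorized asymptotic approximations, is what lets FABPS trade off hidden-state dimensionality and policy complexity against reward-weighted fit.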

