Safe Continuous Control with Constrained Model-Based Policy Optimization

Author(s): Moritz A. Zanger, Karam Daaboul, J. Marius Zollner

Author(s): Yinlam Chow, Brandon Cui, Moonkyung Ryu, Mohammad Ghavamzadeh

Model-based reinforcement learning (RL) algorithms allow us to combine model-generated data with data collected from interaction with the real system, alleviating RL's data-efficiency problem. However, designing such algorithms is often challenging because the bias in simulated data may overshadow the ease of data generation. A potential solution to this challenge is to jointly learn and improve the model and policy using a single universal objective function. In this paper, we leverage the connection between RL and probabilistic inference and formulate such an objective function as a variational lower bound of a log-likelihood. This allows us to use expectation maximization (EM): we iteratively fix a baseline policy and learn a variational distribution, consisting of a model and a policy (E-step), and then improve the baseline policy given the learned variational distribution (M-step). We propose model-based and model-free policy iteration (actor-critic) style algorithms for the E-step and show how the variational distribution they learn can be used to optimize the M-step in a fully model-based fashion. Our experiments on a number of continuous control tasks show that our model-based (E-step) algorithm, called variational model-based policy optimization (VMBPO), is more sample-efficient and robust to hyper-parameter tuning than its model-free (E-step) counterpart. On the same control tasks, we also compare VMBPO with several state-of-the-art model-based and model-free RL algorithms and demonstrate its sample efficiency and performance.
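For orientation, the variational lower bound referred to in this abstract typically takes the standard control-as-inference form sketched below; this is a generic sketch, and the paper's exact trajectory distributions, temperature \(\eta\), and factorization may differ.

\[
\log p(\mathcal{O}=1) \;\ge\; \mathbb{E}_{\tau \sim q}\Big[ \tfrac{1}{\eta} \sum_t r(s_t, a_t) \Big] \;-\; \mathrm{KL}\big( q(\tau) \,\|\, p_{\pi}(\tau) \big)
\]

Here \(q(\tau)\) is the variational trajectory distribution (induced by the learned model and policy), \(p_{\pi}(\tau)\) is the trajectory distribution under the baseline policy, and the optimality likelihood is \(p(\mathcal{O}=1 \mid \tau) \propto \exp\!\big(\tfrac{1}{\eta}\sum_t r(s_t, a_t)\big)\). The E-step tightens this bound with respect to \(q\); the M-step improves the baseline policy \(\pi\) given the learned \(q\).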


2021, Vol 54 (5), pp. 19-24
Author(s): Tyler Westenbroek, Ayush Agrawal, Fernando Castañeda, S Shankar Sastry, Koushil Sreenath

2022, pp. 1-12
Author(s): Shuailong Li, Wei Zhang, Huiwen Zhang, Xin Zhang, Yuquan Leng

Model-free reinforcement learning methods have been applied successfully to practical problems such as decision-making in Atari games. However, these methods have inherent shortcomings, such as high variance and low sample efficiency. To improve policy performance and sample efficiency in model-free reinforcement learning, we propose proximal policy optimization with model-based methods (PPOMM), which fuses model-based and model-free reinforcement learning. PPOMM considers not only information from past experience but also predictive information about the future state: through a model-based method, it adds information about the next state to the objective function of the proximal policy optimization (PPO) algorithm. The policy is optimized with two components, the PPO loss and a model-based loss; the latter is used to train a latent transition model that predicts the next state. When evaluated across 49 Atari games in the Arcade Learning Environment (ALE), this method outperforms the state-of-the-art PPO algorithm on most games, and the experimental results show that PPOMM performs at least as well as the original algorithm in 33 games.
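As a rough illustration of the combined objective described above, the sketch below adds an auxiliary latent-transition prediction error to a standard clipped PPO loss. This is a minimal PyTorch sketch under assumed interfaces; the function name, loss weights, and latent-model details are illustrative and not taken from the paper.

import torch
import torch.nn.functional as F

def ppomm_style_loss(ratio, advantage, value_pred, value_target,
                     latent_next_pred, latent_next_target,
                     clip_eps=0.2, value_coef=0.5, model_coef=0.5):
    # Standard clipped PPO surrogate objective (negated for minimization).
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    policy_loss = -torch.min(unclipped, clipped).mean()

    # Usual PPO value-function regression term.
    value_loss = F.mse_loss(value_pred, value_target)

    # Auxiliary model-based term: prediction error of a latent transition
    # model on the encoded next state, as described in the abstract.
    model_loss = F.mse_loss(latent_next_pred, latent_next_target)

    return policy_loss + value_coef * value_loss + model_coef * model_loss

In this sketch the total loss is a weighted sum of the usual PPO terms and the transition-model error, so gradients from the model-based component shape the shared representation alongside the policy update.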


2021, pp. 174-189
Author(s): Jian Shen, Mingcheng Chen, Zhicheng Zhang, Zhengyu Yang, Weinan Zhang, ...

2020, Vol 124 (1), pp. 295-304
Author(s): Raz Leib, Marta Russo, Andrea d’Avella, Ilana Nisky

While ballistic hand-reaching movements are characterized by smooth position and velocity signals, muscle activity exhibits bursts and silent periods. Here, we propose that a model based on bang-bang control links these abrupt changes in muscle activity to the smooth reaching trajectory. Using bang-bang control instead of continuous control may simplify the design of prostheses and other physical human-robot interaction systems.
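To make the link concrete, the toy simulation below (not the paper's model; the point-mass dynamics, parameters, and single mid-movement switch are assumed for illustration) drives a reach with a discontinuous bang-bang acceleration command and still yields a continuous velocity profile and a smooth position trajectory.

import numpy as np

def bang_bang_reach(distance=0.2, duration=0.6, dt=1e-3):
    # Rest-to-rest reach driven by a bang-bang command: constant +u_max
    # acceleration for the first half of the movement, -u_max for the second.
    # With a single switch at duration/2, the required magnitude is
    # u_max = 4 * distance / duration**2.
    n = int(duration / dt)
    u_max = 4.0 * distance / duration**2
    u = np.where(np.arange(n) < n // 2, u_max, -u_max)  # discontinuous command
    v = np.cumsum(u) * dt  # velocity: continuous, triangular profile
    x = np.cumsum(v) * dt  # position: smooth, sigmoid-like reach trajectory
    return x, v, u

x, v, u = bang_bang_reach()
print(f"final position ≈ {x[-1]:.3f} m, final velocity ≈ {v[-1]:.3f} m/s")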


2018, Vol 51 (15), pp. 515-520
Author(s): Marco Quaglio, Eric S. Fraga, Federico Galvanin
