Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals

Author(s):  
Navid Aghasadeghi ◽  
Timothy Bretl
2019 ◽  
Vol 2019 ◽  
pp. 1-8
Author(s):  
Xi-liang Chen ◽  
Lei Cao ◽  
Zhi-xiong Xu ◽  
Jun Lai ◽  
Chen-xi Li

IRL assumes that the demonstrations come from an agent acting optimally in its environment. Most previous work on IRL requires computing optimal policies for many candidate reward functions, a requirement that is difficult to satisfy in tasks with large or continuous state spaces, let alone continuous action spaces. We propose a continuous maximum entropy deep inverse reinforcement learning algorithm for continuous state and action spaces, which builds a deep model of the environment by reconstructing the reward function from the demonstrations, together with a demonstration-based hot-start mechanism that makes training faster and more effective. We compare this new approach with well-known baselines, including Maximum Entropy IRL, DDPG, and hot-start DDPG. Empirical results on the classical control environment MountainCarContinuous-v0 from OpenAI Gym show that our approach learns policies faster and achieves better final performance.
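As a rough illustration of the reward reconstruction described above (not the authors' exact algorithm), the sketch below performs the standard maximum entropy IRL weight update for a linear reward r(s) = theta^T phi(s): the weights are pushed toward the demonstrations' feature expectations and away from those of the current policy. The feature map, trajectories, and learning rate are hypothetical placeholders, and the deep reward network and hot-start DDPG training loop are omitted.

import numpy as np

# Minimal sketch of a maximum entropy IRL weight update with a linear
# reward r(s) = theta . phi(s). Hypothetical features and data, not the
# paper's deep reward network or hot-start DDPG training.

def feature_expectations(trajectories, phi):
    # Average feature vector over all states in a set of trajectories.
    feats = [phi(s) for traj in trajectories for s in traj]
    return np.mean(feats, axis=0)

def maxent_irl_step(theta, demo_trajs, policy_trajs, phi, lr=0.05):
    # Gradient of the max-entropy objective: E_demo[phi] - E_policy[phi].
    grad = feature_expectations(demo_trajs, phi) - feature_expectations(policy_trajs, phi)
    return theta + lr * grad

# Toy usage with random 2-D states (e.g. position and velocity, as in
# MountainCarContinuous-v0); the feature map is a hypothetical choice.
rng = np.random.default_rng(0)
phi = lambda s: np.array([s[0], s[1], s[0] * s[1], 1.0])
demo_trajs = [rng.normal(0.4, 0.1, size=(50, 2)) for _ in range(5)]
policy_trajs = [rng.normal(0.0, 0.3, size=(50, 2)) for _ in range(5)]

theta = np.zeros(4)
for _ in range(100):
    theta = maxent_irl_step(theta, demo_trajs, policy_trajs, phi)
print("learned reward weights:", theta)

In a full implementation the policy trajectories would be regenerated after each reward update (here, by the hot-started learner), so the two feature expectations converge as the reconstructed reward comes to explain the demonstrations.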


Author(s):  
Takumi Umemoto ◽  
Tohgoroh Matsui ◽  
Atsuko Mutoh ◽  
Koichi Moriyama ◽  
Nobuhiro Inuzuka

Author(s):  
Kazuteru Miyazaki ◽  
Shigenobu Kobayashi

Reinforcement learning involves learning to adapt to an environment through the presentation of rewards, special inputs that serve as clues. To obtain rational policies quickly, profit sharing (PS) [6], the rational policy making algorithm (RPM) [7], the penalty avoiding rational policy making algorithm (PARP) [8], and PS-r* [9] are used; these are collectively called PS-based methods. When applying reinforcement learning to real problems, continuous-valued input sometimes has to be handled. A method [10] based on RPM has been proposed as a PS-based method for continuous-valued input, but it deals only with rewards and cannot suitably handle penalties. We studied the treatment of continuous-valued input for a PS-based method in an environment that includes both rewards and penalties. Specifically, we propose extending PARP to continuous-valued input while simultaneously targeting the attainment of rewards and the avoidance of penalties. We applied our proposal to the pole-cart balancing problem and confirmed its validity.
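For readers unfamiliar with PS-based methods, the following toy sketch shows profit-sharing-style credit assignment with both rewards and penalties. It is a simplified illustration under assumed discrete states and actions, not the PARP extension to continuous-valued input described in the abstract: the terminal outcome of an episode is distributed backward over the visited state-action pairs with geometric decay, so penalized pairs are suppressed and rewarded pairs are reinforced.

from collections import defaultdict

# Toy profit-sharing-style credit assignment (simplified illustration,
# not PARP): the terminal outcome of an episode (+reward or -penalty)
# is spread backward over the episode's (state, action) pairs with a
# geometrically decaying credit.

def update_profit_sharing(weights, episode, outcome, decay=0.5):
    credit = outcome
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= decay
    return weights

def greedy_action(weights, state, actions):
    # Pick the action with the largest accumulated weight
    # (ties broken by list order).
    return max(actions, key=lambda a: weights[(state, a)])

# Toy usage with hypothetical discrete states and actions.
weights = defaultdict(float)
update_profit_sharing(weights, [("s0", "left"), ("s1", "left"), ("s2", "right")], outcome=-1.0)  # penalty episode
update_profit_sharing(weights, [("s0", "right"), ("s1", "right")], outcome=+1.0)                 # reward episode
print(greedy_action(weights, "s0", ["left", "right"]))  # prints "right"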

