Multiresolution State-Space Discretization Method for Q-Learning for One or More Regions of Interest

Author(s): Amanda Lampton, John Valasek
2016, Vol 37 (14), pp. 1251-1258

Author(s): Hanzhong Liu, Minghai Li, Jue Fan, Shuanghong Huo

Robotica, 2018, Vol 37 (3), pp. 445-468
Author(s): Rupeng Yuan, Fuhai Zhang, Yu Wang, Yili Fu, Shuguo Wang

SUMMARY
A Q-learning approach is often used for navigation in static environments, where the state space is easy to define. In this paper, a new Q-learning approach is proposed for navigation in dynamic environments by imitating human reasoning. As a model-free method, Q-learning does not require a model of the environment in advance. The state space and the reward function in the proposed approach are defined according to human perception and human evaluation, respectively. Specifically, approximate regions, rather than accurate measurements, are used to define states. Moreover, because of the limitations of robot dynamics, the actions available in each state are computed by introducing a dynamic window that takes the robot's dynamics into account. The conducted tests show that the obstacle avoidance rate of the proposed approach reaches 90.5% after training, and that the robot always operates within its dynamics limits.
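The abstract's three ingredients, region-based states from approximate perception, a dynamic window that restricts actions to dynamically feasible ones, and a tabular Q-learning update, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the region thresholds, action set, and dynamics limits (`V_MAX`, `A_MAX`, `DT`) are all assumed values for the sake of the example.

```python
# Hypothetical sketch of the approach described above. All thresholds,
# action values, and dynamics limits are illustrative assumptions.

ACTIONS = [-0.2, -0.1, 0.0, 0.1, 0.2]  # candidate velocity increments (m/s)
V_MAX, A_MAX, DT = 1.0, 0.15, 1.0      # assumed speed/acceleration limits, step

def dynamic_window(v):
    """Actions reachable in one step given acceleration and speed limits."""
    return [a for a in ACTIONS
            if abs(a) <= A_MAX * DT and 0.0 <= v + a <= V_MAX]

def region_state(distance):
    """Coarse, perception-style state: which region the nearest obstacle is in."""
    if distance < 1.0:
        return "near"
    if distance < 3.0:
        return "mid"
    return "far"

def q_update(Q, s, a, r, s_next, v_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup, maximizing only over feasible actions."""
    feasible = dynamic_window(v_next)
    best_next = max(Q.get((s_next, a_n), 0.0) for a_n in feasible)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```

The key point the sketch captures is that the `max` in the Q-learning target ranges only over the dynamic window, so the learned policy never commits to an action the robot's dynamics cannot execute.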

