Shifting Deep Reinforcement Learning Algorithm towards Training Directly in Transient Real-World Environment: A Case Study in Powertrain Control

Author(s): Bo Hu, Jiaxi Li

Author(s): Y. Han, A. Yilmaz

Abstract. In this work, we propose an approach for an autonomous agent that learns to navigate an unknown map in a real-world environment. Recognizing that real-world environments change over time, for example through road closures caused by construction work, a key contribution of our paper is to exploit the dynamic-adaptation characteristic of reinforcement learning and develop a dynamic routing ability for our agent. Our method is based on the Q-learning algorithm, which we modify into a double-critic Q-learning model (DCQN) that uses only visual input, without other aids such as GPS. Our treatment of the problem enables the agent to learn the navigation policy while interacting with the environment. We demonstrate that the agent can learn to navigate to a destination kilometers away from the starting point in a real-world scenario, and that it can respond to environment changes by dynamically adjusting its routing plan, revising its old knowledge as it learns. The supplementary video can be accessed at the following link: https://www.youtube.com/watch?v=tknsxVuNwkg.
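The abstract does not spell out the DCQN update rule, but the name suggests two critics that cross-check each other's value estimates, in the spirit of tabular double Q-learning. The sketch below is an assumption-laden illustration of that idea only: the state/action sizes, learning rate, and tabular form are invented for the example, and the paper's deep, vision-based model will differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
gamma, alpha = 0.99, 0.1

# Two independent critics, as in tabular double Q-learning.
q_a = np.zeros((n_states, n_actions))
q_b = np.zeros((n_states, n_actions))

def double_q_update(s, a, r, s_next, done):
    """Update one critic using the other's estimate of the greedy
    action's value, which curbs the maximization bias of plain
    Q-learning (illustrative only; not the paper's exact DCQN)."""
    if rng.random() < 0.5:
        best = int(np.argmax(q_a[s_next]))                   # A selects the action
        target = r + (0.0 if done else gamma * q_b[s_next, best])  # B evaluates it
        q_a[s, a] += alpha * (target - q_a[s, a])
    else:
        best = int(np.argmax(q_b[s_next]))                   # B selects the action
        target = r + (0.0 if done else gamma * q_a[s_next, best])  # A evaluates it
        q_b[s, a] += alpha * (target - q_b[s, a])
```

Decoupling action selection from action evaluation is what lets the learned values stay closer to the true returns, which matters when old knowledge (e.g. a now-closed road) must be overwritten by new experience.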


2021, Vol. 54 (3-4), pp. 417-428

Author(s): Yanyan Dai, KiDong Lee, SukGyu Lee

Rotary inverted pendulum systems are a standard benchmark for nonlinear control in real applications. Without a deep understanding of control theory, it is difficult to control a rotary inverted pendulum platform using classic control-engineering models, as shown in Section 2.1. Therefore, instead of relying on classic control theory, this paper controls the platform by training and testing a reinforcement learning algorithm. Reinforcement learning (RL) has seen many recent achievements, but there is little research on quickly testing high-frequency RL algorithms in a real hardware environment. In this paper, we propose a real-time hardware-in-the-loop (HIL) control system to train and test a deep reinforcement learning algorithm from simulation through to real hardware implementation. The agent is implemented with the Double Deep Q-Network (DDQN) algorithm with prioritized experience replay, requiring no deep understanding of classical control engineering. For the real experiment, to swing up the rotary inverted pendulum and move it smoothly, we define 21 actions for swing-up and balancing. Compared with the Deep Q-Network (DQN), the DDQN with prioritized experience replay reduces the overestimation of Q-values and decreases the training time. Finally, the paper presents experimental results comparing classic control theory and different reinforcement learning algorithms.
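The two ingredients named in the abstract, the Double-DQN target and TD-error-based priorities, can be sketched compactly. Everything below other than those two formulas is an assumption for illustration: the linear "networks", the 4-dimensional state, and the hyperparameters are stand-ins, with only the 21-action discretization taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.99

# Toy "networks": linear Q-functions over a 4-dim state with 21 discrete
# actions (the paper uses 21 actions; the features here are assumptions).
n_actions, state_dim = 21, 4
w_online = rng.normal(size=(state_dim, n_actions))
w_target = w_online.copy()

def q_values(w, s):
    return s @ w  # one Q-value per action

def ddqn_target(r, s_next, done):
    """Double-DQN target: the online net selects the greedy action,
    the target net evaluates it, reducing Q-value overestimation."""
    a_star = int(np.argmax(q_values(w_online, s_next)))
    q_next = q_values(w_target, s_next)[a_star]
    return r + (1.0 - done) * gamma * q_next

def td_priority(s, a, r, s_next, done, eps=1e-2):
    """Proportional prioritized replay: sample transitions with
    probability proportional to |TD error| + eps, so surprising
    transitions are replayed more often."""
    td = ddqn_target(r, s_next, done) - q_values(w_online, s)[a]
    return abs(td) + eps
```

In a full HIL setup these priorities would drive the sampling distribution of the replay buffer, and `w_target` would be periodically synchronized with `w_online`; both details are omitted here for brevity.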

