Multimodal Deep Reinforcement Learning with Auxiliary Task for Obstacle Avoidance of Indoor Mobile Robot

Sensors ◽ 2021 ◽ Vol 21 (4) ◽ pp. 1363
Author(s):  
Hailuo Song ◽  
Ao Li ◽  
Tong Wang ◽  
Minghui Wang

It is an essential capability of indoor mobile robots to avoid various kinds of obstacles. Recently, multimodal deep reinforcement learning (DRL) methods have demonstrated great capability for learning control policies in robotics by using different sensors. However, due to the complexity of indoor environments and the heterogeneity of different sensor modalities, it remains an open challenge to obtain reliable and robust multimodal information for obstacle avoidance. In this work, we propose a novel multimodal DRL method with an auxiliary task (MDRLAT) for obstacle avoidance of an indoor mobile robot. In MDRLAT, a powerful bilinear fusion module is proposed to fully capture the complementary information from two-dimensional (2D) laser range findings and depth images, and the generated multimodal representation is subsequently fed into a dueling double deep Q-network to output control commands for the mobile robot. In addition, an auxiliary task of velocity estimation is introduced to further improve representation learning in DRL. Experimental results show that MDRLAT achieves remarkable performance in terms of average accumulated reward, convergence speed, and success rate. Moreover, experiments in both virtual and real-world testing environments further demonstrate the outstanding generalization capability of our method.
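As a rough illustration of the architecture this abstract describes, the sketch below (in PyTorch) wires a laser encoder and a depth encoder into a bilinear fusion module, a dueling Q-head, and an auxiliary velocity-estimation head. The layer sizes, the input dimensions, and the two-dimensional velocity target are assumptions for illustration; the abstract does not give the paper's exact dimensions.

import torch
import torch.nn as nn

class MDRLATSketch(nn.Module):
    """Hypothetical sketch of the MDRLAT architecture; sizes are assumed."""
    def __init__(self, laser_dim=360, depth_channels=1, n_actions=5, feat_dim=128):
        super().__init__()
        # Encode the 2D laser range findings with a small MLP.
        self.laser_enc = nn.Sequential(
            nn.Linear(laser_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        # Encode the depth image with a small CNN.
        self.depth_enc = nn.Sequential(
            nn.Conv2d(depth_channels, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Bilinear fusion of the two modality embeddings.
        self.fusion = nn.Bilinear(feat_dim, feat_dim, feat_dim)
        # Dueling heads: state value V(s) and action advantages A(s, a).
        self.value = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.advantage = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
        # Auxiliary head regressing an assumed (linear, angular) velocity pair.
        self.velocity = nn.Linear(feat_dim, 2)

    def forward(self, laser, depth):
        z = torch.relu(self.fusion(self.laser_enc(laser), self.depth_enc(depth)))
        v, a = self.value(z), self.advantage(z)
        q = v + a - a.mean(dim=1, keepdim=True)   # standard dueling aggregation
        return q, self.velocity(z)

The auxiliary velocity loss would be trained jointly with the Q-learning loss, sharing the fused representation, which is the mechanism the abstract credits for improved representation learning.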

Author(s):  
Shumin Feng ◽  
Hailin Ren ◽  
Xinran Wang ◽  
Pinhas Ben-Tzvi

Obstacle avoidance is one of the core problems in the field of mobile robot autonomous navigation. This paper aims to solve the obstacle avoidance problem using deep reinforcement learning. In previous work, various mathematical models have been developed to plan collision-free paths for such robots. In contrast, our method enables the robot to learn from its own experience and then fit a mathematical model by updating the parameters of a neural network. The resulting model can choose an action directly from the robot's input sensor data. In this paper, we develop an obstacle avoidance framework based on deep reinforcement learning, along with a 3D simulator that provides the training and testing environments. In addition, we develop and compare obstacle avoidance methods based on different deep reinforcement learning strategies, namely Deep Q-Network (DQN), Double Deep Q-Network (DDQN), and DDQN with Prioritized Experience Replay (DDQN-PER), using our simulator.
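The key difference between the DQN and DDQN variants this paper compares is the target computation. A minimal sketch, assuming PyTorch and a discrete action space (the prioritized replay buffer of DDQN-PER is omitted here):

import torch

def dqn_target(reward, next_state, done, target_net, gamma=0.99):
    # Vanilla DQN: the target network both selects and evaluates the next
    # action, which tends to overestimate Q-values.
    with torch.no_grad():
        return reward + gamma * (1.0 - done) * target_net(next_state).max(dim=1).values

def ddqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    # Double DQN: the online network selects the action and the target
    # network evaluates it, reducing the overestimation bias.
    with torch.no_grad():
        best = online_net(next_state).argmax(dim=1, keepdim=True)
        q_next = target_net(next_state).gather(1, best).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next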


Mathematics ◽ 2020 ◽ Vol 8 (8) ◽ pp. 1254
Author(s):  
Cheng-Hung Chen ◽  
Shiou-Yun Jeng ◽  
Cheng-Jian Lin

In this study, a fuzzy logic controller with a reinforcement-improved differential search algorithm (FLC_R-IDS) is proposed for solving a mobile robot wall-following control problem. The study uses the reward and punishment mechanisms of reinforcement learning to train the wall-following controller. The proposed improved differential search algorithm uses parameter adaptation to adjust the control parameters, and the number of superorganisms is varied as they move between stopover sites to improve the exploration of the algorithm. Reinforcement learning guides the behavior of the robot: when the mobile robot satisfies all three reward conditions, it receives a reward of +1. The accumulated reward value is used to evaluate each candidate controller and to determine which controllers are carried into the next round of training. Experimental results show that, compared with the traditional differential search algorithm and the chaos differential search algorithm, the average error of the proposed FLC_R-IDS in the three experimental environments is reduced by 12.44%, 22.54%, and 25.98%, respectively. Finally, the experimental results also show that a real mobile robot using the proposed method can effectively implement wall-following control.
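A minimal sketch of the accumulated-reward scoring this abstract describes. The three reward conditions used here (staying within a distance band of the wall, making forward progress, and avoiding collision) and the distance band itself are assumptions, since the abstract does not list the paper's actual conditions:

def wall_following_reward(dist_to_wall, moved_forward, collided,
                          d_min=0.05, d_max=0.25):
    # +1 only when all three conditions hold; the conditions and the
    # distance band (in meters) are assumptions, not the paper's.
    in_band = d_min <= dist_to_wall <= d_max
    return 1 if (in_band and moved_forward and not collided) else 0

def evaluate_controller(rollout):
    # rollout: iterable of (dist_to_wall, moved_forward, collided) records
    # logged while one candidate fuzzy controller drives the robot; the
    # accumulated reward scores that candidate for the differential search.
    return sum(wall_following_reward(d, f, c) for d, f, c in rollout)

Each candidate fuzzy controller produced by the improved differential search would be driven through the environment, and its accumulated reward used as the fitness value for selecting the next generation.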


2012 ◽ Vol 151 ◽ pp. 498-502
Author(s):  
Jin Xue Zhang ◽  
Hai Zhu Pan

This paper is concerned with Q-learning, a very popular reinforcement learning algorithm, applied to obstacle avoidance through neural networks. The guiding principle is that the focus must always be on both ecologically suitable tasks and behaviors when designing a robot. Many robot systems have used behavior-based architectures since the 1980s. In this paper, the Khepera robot is trained for the task of obstacle avoidance with the proposed Q-learning algorithm using neural networks. Experiments with real and simulated robots show that the neural network approach enables Q-learning to handle changes in the environment.
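A minimal sketch of a neural-network Q-learning update of the kind this abstract describes, assuming PyTorch. The network shape is illustrative, not the paper's: the input here stands for the Khepera's eight infrared proximity sensors, and the three outputs for an assumed discrete action set (e.g., turn left, go straight, turn right).

import torch
import torch.nn as nn

q_net = nn.Sequential(                  # illustrative network: 8 proximity
    nn.Linear(8, 32), nn.ReLU(),        # sensor readings in, 3 actions out
    nn.Linear(32, 3),
)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def q_learning_step(s, a, r, s_next, gamma=0.9):
    # Q-learning target: r + gamma * max_a' Q(s', a'), fitted by regression
    # so the network generalizes across sensor states instead of a table.
    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Replacing the Q-table with this function approximator is what lets the learned policy cope with sensor states never seen during training, which is the adaptability claim the abstract makes.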

