Autonomous Navigation Using Deep Reinforcement Learning in ROS

Author(s):  
Ganesh Khekare ◽  
Shahrukh Sheikh

For an autonomous robot to move safely in an environment where people are present and moving dynamically, without knowing their goal positions, navigation rules and models of human behavior are required. The problem is challenging because human behavior is highly stochastic. Previous methods attempt to model features of human behavior, but these features vary from person to person. This method instead focuses on encoding social norms that tell the robot what not to do. With deep reinforcement learning, it becomes possible to learn a time-efficient navigation policy that respects these social norms. The resulting solution gives a mobile robot full autonomy, together with collision avoidance, in people-rich environments.
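The "what not to do" idea described above is usually realized as reward shaping: the agent is rewarded for progress toward its goal and penalized for collisions and for violating a social norm (for example, overtaking a pedestrian on the wrong side). The sketch below is only illustrative; the function name, the norm signal, and the weights `w_progress`, `w_collision`, and `w_norm` are assumptions, not taken from the paper.

```python
def social_reward(d_goal_prev, d_goal, collided, norm_violation,
                  w_progress=1.0, w_collision=-10.0, w_norm=-0.5):
    """Illustrative reward shaping for socially aware navigation.

    Rewards progress toward the goal, heavily penalizes collisions, and
    adds a smaller penalty whenever the robot violates a social norm.
    """
    r = w_progress * (d_goal_prev - d_goal)  # positive when the robot got closer
    if collided:
        r += w_collision
    if norm_violation:
        r += w_norm
    return r
```

The collision penalty is much larger than the norm penalty so that safety dominates etiquette; the exact ratio would be a tuning choice.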

2020 ◽  
Vol 39 (7) ◽  
pp. 856-892 ◽  
Author(s):  
Tingxiang Fan ◽  
Pinxin Long ◽  
Wenxi Liu ◽  
Jia Pan

Developing a safe and efficient collision-avoidance policy for multiple robots is challenging in decentralized scenarios where each robot generates its path with limited observation of the other robots’ states and intentions. Prior distributed multi-robot collision-avoidance systems often require frequent inter-robot communication or agent-level features to plan a local collision-free action, which is not robust and is computationally prohibitive. In addition, the performance of these methods is not comparable with their centralized counterparts in practice. In this article, we present a decentralized sensor-level collision-avoidance policy for multi-robot systems, which shows promising results in practical applications. In particular, our policy directly maps raw sensor measurements to an agent’s steering commands in terms of movement velocity. As a first step toward reducing the performance gap between decentralized and centralized methods, we present a multi-scenario multi-stage training framework to learn an optimal policy. The policy is trained over a large number of robots in rich, complex environments simultaneously using a policy-gradient-based reinforcement-learning algorithm. The learning algorithm is also integrated into a hybrid control framework to further improve the policy’s robustness and effectiveness. We validate the learned sensor-level collision-avoidance policy in a variety of simulated and real-world scenarios with thorough performance evaluations for large-scale multi-robot systems. The generalization of the learned policy is verified in a set of unseen scenarios, including the navigation of a group of heterogeneous robots and a large-scale scenario with 100 robots.
Although the policy is trained using simulation data only, we have successfully deployed it on physical robots whose shapes and dynamic characteristics differ from the simulated agents, demonstrating the controller’s robustness against simulation-to-real modeling error. Finally, we show that the collision-avoidance policy learned from multi-robot navigation tasks provides an excellent solution for safe and effective autonomous navigation of a single robot working in a dense human crowd. Our learned policy enables a robot to make effective progress through a crowd without getting stuck. More importantly, the policy has been successfully deployed on different types of physical robot platforms without tedious parameter tuning. Videos are available at https://sites.google.com/view/hybridmrca.
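"Directly maps raw sensor measurements to steering commands" means the policy is a single learned function from the laser scan, the relative goal position, and the current velocity to a (linear, angular) velocity command, with no intermediate agent-level features. A minimal sketch of such a mapping, assuming a tiny MLP whose shapes and weights are placeholders (in the article's setup the weights come from policy-gradient training, and the real network is larger):

```python
import numpy as np

def sensor_level_policy(scan, goal_vec, vel, W1, b1, W2, b2, v_max=1.0):
    """Map raw laser ranges + relative goal + current velocity directly
    to a (linear, angular) velocity command via a small MLP.

    scan:     1-D array of range readings
    goal_vec: 2-D relative goal position in the robot frame
    vel:      current (linear, angular) velocity
    """
    x = np.concatenate([scan, goal_vec, vel])  # sensor-level input, no agent features
    h = np.maximum(0.0, W1 @ x + b1)           # hidden ReLU layer
    v_lin, v_ang = np.tanh(W2 @ h + b2)        # squash both outputs to [-1, 1]
    return v_max * v_lin, v_max * v_ang
```

The `tanh` squashing bounds the commands, so whatever the learned weights are, the emitted velocities stay within the platform's limits.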


2021 ◽  
Vol 2138 (1) ◽  
pp. 012011
Author(s):  
Yanwei Zhao ◽  
Yinong Zhang ◽  
Shuying Wang

Abstract Path planning means that a mobile robot, using the sensors it carries, obtains information about the surrounding environment and its own state, avoids obstacles, and moves toward a target point. Deep reinforcement learning combines reinforcement learning and deep learning; it is mainly used for perception and decision-making problems and has become an important research branch in the field of artificial intelligence. This paper first introduces the basic knowledge of deep learning and reinforcement learning. It then surveys the research status of value-function-based and policy-gradient-based deep reinforcement learning algorithms in path planning, and reviews applications of deep reinforcement learning in computer games, video games, and autonomous navigation. Finally, it gives a brief summary and outlook on the algorithms and applications of deep reinforcement learning.
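The value-function branch the survey describes descends from tabular Q-learning; deep methods such as DQN replace the table with a neural network, but the temporal-difference update is the same. A toy sketch on a grid-world path-planning task (the grid size, rewards, and hyperparameters here are illustrative choices, not from the paper):

```python
import random

def q_learning_grid(episodes=500, size=5, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a size x size grid: start at (0, 0),
    learn to reach the goal at (size-1, size-1) while paying a small
    step cost. Returns the learned Q-table {state: [q per action]}."""
    actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    goal = (size - 1, size - 1)
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            q = Q.setdefault(s, [0.0] * 4)
            # epsilon-greedy action selection
            a = random.randrange(4) if random.random() < eps else q.index(max(q))
            nx, ny = s[0] + actions[a][0], s[1] + actions[a][1]
            s2 = (min(max(nx, 0), size - 1), min(max(ny, 0), size - 1))
            r = 1.0 if s2 == goal else -0.01          # step cost shapes short paths
            q2 = max(Q.setdefault(s2, [0.0] * 4))
            q[a] += alpha * (r + gamma * q2 - q[a])   # temporal-difference update
            s = s2
    return Q
```

A policy-gradient method, the other branch the survey covers, would instead parameterize the action distribution directly and follow the gradient of expected return.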


In this project, we have designed and developed an autonomous robot powered by the Robot Operating System (ROS). The robot’s capabilities include autonomous navigation, image tracking, and mapping. OpenCV has been implemented on the on-board microprocessor to process the images captured by the robot’s general-purpose webcams, and a microcontroller is used to control the motors. The ultimate aim of this project is to develop a mobile robot capable of making its own decisions based on the images it receives.
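A common building block for the image-tracking step described above is color-threshold centroid tracking: mask the pixels that match the target's color and steer toward the mask's centroid. The project uses OpenCV for this; the sketch below uses plain NumPy so it stays self-contained, and the function name and thresholds are illustrative assumptions.

```python
import numpy as np

def track_target(frame, lower, upper):
    """Return the centroid (row, col) of pixels whose RGB values fall
    inside [lower, upper], or None if no pixel matches.

    frame:        H x W x 3 array of RGB values
    lower, upper: length-3 per-channel color thresholds
    """
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    if not mask.any():
        return None  # target not visible in this frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```

In the real pipeline the centroid's horizontal offset from the image center would feed the microcontroller's motor commands, turning the robot to keep the target centered.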


Author(s):  
Olusanya Agunbiade ◽  
Tranos Zuva

An important capability for autonomous navigation is the ability of a mobile robot to concurrently construct a map of an unknown environment and localize itself within that same environment. This computational problem is known as Simultaneous Localization and Mapping (SLAM). Researchers have studied this approach extensively in the literature and proposed many improvements to it, and we are seeing a steady transition of the technology to industry. However, setbacks still limit its full acceptance, even though research has been conducted over the last 30 years. To identify the problems facing SLAM, this paper reviews both foundational and recent SLAM algorithms. Challenges and open issues, alongside research directions for the area, are discussed. Toward addressing the problems discussed, a novel SLAM technique will be proposed.
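To make the mapping half of SLAM concrete: occupancy-grid approaches keep a log-odds belief per cell and nudge it toward "occupied" or "free" with each sensor reading, while localization simultaneously estimates the robot's pose. A minimal sketch of the standard log-odds update, with the sensor-model probabilities `p_hit` and `p_miss` chosen here purely for illustration:

```python
import math

def update_cell(log_odds, hit, p_hit=0.7, p_miss=0.4):
    """One log-odds occupancy update for a grid cell: a 'hit' reading
    pushes the cell toward occupied, a 'miss' toward free."""
    p = p_hit if hit else p_miss
    return log_odds + math.log(p / (1.0 - p))

def probability(log_odds):
    """Convert a cell's log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))
```

Working in log-odds makes repeated sensor fusion a simple addition, which is part of why grid mapping scales to the long runs SLAM systems need.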


2017 ◽  
Vol 92 (2) ◽  
pp. 359-380 ◽  
Author(s):  
Eddie Clemente ◽  
Marlen Meza-Sánchez ◽  
Eusebio Bugarin ◽  
Ana Yaveni Aguilar-Bustos
