Robot Arm Control Technique using Deep Reinforcement Learning based on Dueling and Bottleneck Structure

2021, Vol 70(12), pp. 1906-1913
Author(s): Seong Joon Kim, Byung Wook Kim

Sensors, 2021, Vol 21(7), pp. 2534
Author(s): Oualid Doukhi, Deok-Jin Lee

Autonomous navigation and collision avoidance are significant challenges for robotic systems, which generally operate in dynamic environments that demand a high level of autonomy and flexible decision-making. The challenge is even more acute for micro aerial vehicles (MAVs) because of their limited size and computational power. This paper presents a novel approach that enables a micro aerial vehicle equipped with a laser range finder to autonomously navigate among obstacles and reach a user-specified goal location in a GPS-denied environment, without mapping or path planning. The proposed system uses an actor–critic reinforcement learning technique to train the aerial robot in the Gazebo simulator to perform a point-goal navigation task by directly mapping the MAV's noisy state and laser scan measurements to continuous motion controls. The learned policy, trained entirely in a 3D simulator, performs collision-free flight in the real world. Intensive simulations and real-time experiments were conducted and compared against a nonlinear model predictive control technique to demonstrate generalization to unseen environments and robustness against localization noise. The results show that the system flies safely and reaches the desired points by planning smooth forward linear velocities and heading rates.
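
The mapping described above, from a laser scan and the MAV's goal-relative state to a continuous forward velocity and heading rate, can be pictured with a minimal actor–critic sketch. The following is an illustrative outline only, not the authors' implementation: the scan resolution, state layout, network widths, and action normalization are all assumptions.

```python
# Minimal actor-critic sketch (illustrative, not the paper's code): maps a
# downsampled laser scan plus a goal-relative MAV state to continuous
# [forward velocity, heading rate] commands. Dimensions are assumptions.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, scan_dim=36, state_dim=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(scan_dim + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh(),  # actions normalized to [-1, 1]
        )

    def forward(self, scan, state):
        return self.net(torch.cat([scan, state], dim=-1))

class Critic(nn.Module):
    def __init__(self, scan_dim=36, state_dim=4, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(scan_dim + state_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar action value Q(s, a)
        )

    def forward(self, scan, state, action):
        return self.net(torch.cat([scan, state, action], dim=-1))

# Usage: one forward pass with normalized laser ranges and a hypothetical
# goal-relative state (distance/bearing to goal, current velocities).
scan = torch.rand(1, 36)
state = torch.tensor([[2.0, 0.3, 0.5, 0.0]])
action = Actor()(scan, state)  # -> [v_forward, yaw_rate] in [-1, 1]
```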


Author(s): Yusuke Wakita, Noboru Takizawa, Kentaro Nagata, Kazushige Magatani

2021, pp. 1-1
Author(s): Reshma Kar, Lidia Ghosh, Amit Konar, Aruna Chakraborty, Atulya K. Nagar
