Model Mediated Teleoperation with a Hand-Arm Exoskeleton in Long Time Delays Using Reinforcement Learning

Author(s):  
Hadi Beik-Mohammadi ◽  
Matthias Kerzel ◽  
Benedikt Pleintinger ◽  
Thomas Hulin ◽  
Philipp Reisich ◽  
...  
Author(s):  
Nicolas Bougie ◽  
Ryutaro Ichise

Deep reinforcement learning (DRL) methods traditionally struggle with tasks where environment rewards are sparse or delayed, so exploration remains one of the key challenges of DRL. Instead of relying solely on extrinsic rewards, many state-of-the-art methods use intrinsic curiosity as an exploration signal. While such methods hold the promise of better local exploration, discovering global exploration strategies is beyond the reach of current approaches. We propose a novel end-to-end intrinsic reward formulation that introduces high-level exploration in reinforcement learning. Our curiosity signal is driven by a fast reward that handles local exploration and a slow reward that incentivizes long-horizon exploration strategies. We formulate curiosity as the error in an agent's ability to reconstruct observations given their contexts. Experimental results show that this high-level exploration enables our agents to outperform prior work in several Atari games.
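The abstract describes an intrinsic reward built from reconstruction error, with a fast component for local novelty and a slow component for long-horizon exploration. A minimal sketch of that idea is below; it is an illustration assuming simple mean-squared reconstruction error and an exponentially decayed long-horizon context, not the authors' actual architecture (the class name `DualCuriosity`, the weights `beta_fast`/`beta_slow`, and the decay scheme are all hypothetical):

```python
import numpy as np


def reconstruction_error(obs, recon):
    """Curiosity as the error in reconstructing an observation from its context."""
    return float(np.mean((np.asarray(obs) - np.asarray(recon)) ** 2))


class DualCuriosity:
    """Hypothetical sketch: combine a fast (local) and a slow (long-horizon)
    intrinsic reward, each driven by reconstruction error."""

    def __init__(self, beta_fast=1.0, beta_slow=0.1, slow_decay=0.99):
        self.beta_fast = beta_fast    # weight on the local-novelty term
        self.beta_slow = beta_slow    # weight on the long-horizon term
        self.slow_decay = slow_decay  # how slowly the long-horizon context moves
        self.slow_context = None      # running long-horizon summary of observations

    def intrinsic_reward(self, obs, local_recon):
        obs = np.asarray(obs, dtype=float)
        # Fast reward: error of a short-context reconstruction (local novelty);
        # in practice local_recon would come from a learned model.
        r_fast = reconstruction_error(obs, local_recon)
        # Slow reward: error against a slowly updated long-horizon context,
        # so observations far from recent experience stay rewarding for longer.
        if self.slow_context is None:
            self.slow_context = obs.copy()
        r_slow = reconstruction_error(obs, self.slow_context)
        self.slow_context = (self.slow_decay * self.slow_context
                             + (1.0 - self.slow_decay) * obs)
        return self.beta_fast * r_fast + self.beta_slow * r_slow
```

In use, the combined signal would be added to the environment's extrinsic reward at each step; a perfectly reconstructed, familiar observation yields a reward near zero, while a novel observation scores on both terms.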

