Transfer Reinforcement Learning for Autonomous Driving: From WiseMove to WiseSim
2021 ◽ Vol 31 (3) ◽ pp. 1-26
Author(s): Aravind Balakrishnan, Jaeyoung Lee, Ashish Gaurav, Krzysztof Czarnecki, Sean Sedwards

Reinforcement learning (RL) is an attractive way to implement high-level decision-making policies for autonomous driving, but learning directly from a real vehicle or a high-fidelity simulator is variously infeasible. We therefore consider the problem of transfer reinforcement learning and study how a policy learned in a simple environment using WiseMove can be transferred to our high-fidelity simulator, WiseSim. WiseMove is a framework to study safety and other aspects of RL for autonomous driving. WiseSim accurately reproduces the dynamics and software stack of our real vehicle. We find that the accurately modelled perception errors in WiseSim contribute the most to the transfer problem. Even when these errors are modelled only naively in WiseMove, the resulting RL policy performs better in WiseSim than a hand-crafted rule-based policy. Applying domain randomization to the environment in WiseMove yields an even better policy. The final RL policy reduces the failures due to perception errors from 10% to 2.75%. We also observe that the RL policy relies significantly less on velocity than the rule-based policy does, having learned that its measurement is unreliable.
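
The abstract does not include code, but the two ideas it leans on — injecting modelled perception errors into the low-fidelity training environment and domain-randomizing that error model — can be sketched as a gym-style observation wrapper. This is a minimal illustration, not code from the paper or the WiseMove framework; the environment id, observation layout, and noise ranges below are assumptions.

```python
# Minimal sketch (not from the paper): naive perception-error modelling plus
# domain randomization as a gym-style observation wrapper.
import numpy as np
import gym


class NoisyPerceptionWrapper(gym.ObservationWrapper):
    """Adds zero-mean Gaussian noise to selected observation channels.

    The noise scales are resampled at every reset, which is the essence of
    domain randomization: the policy never trains against a single fixed
    error model, so it learns to rely less on unreliable channels.
    """

    def __init__(self, env, pos_noise_range=(0.0, 0.5), vel_noise_range=(0.0, 2.0)):
        super().__init__(env)
        # Ranges (in metres and m/s) from which per-episode noise scales are drawn;
        # the values are illustrative, not taken from the paper.
        self.pos_noise_range = pos_noise_range
        self.vel_noise_range = vel_noise_range
        self._pos_sigma = 0.0
        self._vel_sigma = 0.0

    def reset(self, **kwargs):
        # Resample the perception-error model for this episode.
        self._pos_sigma = np.random.uniform(*self.pos_noise_range)
        self._vel_sigma = np.random.uniform(*self.vel_noise_range)
        return super().reset(**kwargs)

    def observation(self, obs):
        # Assumed flat layout: obs = [x, y, heading, velocity, ...];
        # only position and velocity are perturbed here, purely for illustration.
        noisy = np.array(obs, dtype=np.float32)
        noisy[0:2] += np.random.normal(0.0, self._pos_sigma, size=2)
        noisy[3] += np.random.normal(0.0, self._vel_sigma)
        return noisy


# Usage sketch: wrap the low-fidelity training environment before training,
# then evaluate the learned policy in the high-fidelity simulator.
# env = NoisyPerceptionWrapper(gym.make("LaneChange-v0"))  # hypothetical env id
```

Resampling the noise scales at every episode forces the policy not to over-trust any single channel, which is consistent with the abstract's observation that the learned policy depends less on velocity than the rule-based one.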

