Reinforcement Learning and Deep Learning Based Lateral Control for Autonomous Driving [Application Notes]

2019 ◽  
Vol 14 (2) ◽  
pp. 83-98 ◽  
Author(s):  
Dong Li ◽  
Dongbin Zhao ◽  
Qichao Zhang ◽  
Yaran Chen
Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2375
Author(s):  
Jingjing Xiong ◽  
Lai-Man Po ◽  
Kwok Wai Cheung ◽  
Pengfei Xian ◽  
Yuzhi Zhao ◽  
...  

Deep reinforcement learning (DRL) has been utilized in numerous computer vision tasks, such as object detection and autonomous driving. However, relatively few DRL methods have been proposed for image segmentation, particularly left ventricle (LV) segmentation. Reinforcement learning-based methods in earlier works often rely on learning suitable thresholds to perform segmentation, and the results are inaccurate because of this sensitivity to the threshold. To tackle this problem, a novel DRL agent is designed to imitate the human process of performing LV segmentation. For this purpose, we formulate the segmentation problem as a Markov decision process and optimize it through DRL. The proposed agent consists of two neural networks, First-P-Net and Next-P-Net. First-P-Net locates the initial edge point, and Next-P-Net locates the remaining edge points successively, ultimately producing a closed segmentation contour. Experimental results show that the proposed model outperforms previous reinforcement learning methods and achieves performance comparable to deep learning baselines on two widely used LV endocardium segmentation datasets, the Automated Cardiac Diagnosis Challenge (ACDC) 2017 dataset and the Sunnybrook 2009 dataset. Moreover, the proposed model achieves higher F-measure accuracy than deep learning methods when trained with a very limited number of samples.
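The two-stage edge-tracing loop described in this abstract can be sketched as follows. This is a rough illustration only, not the authors' code: `first_p_net` and `next_p_net` are stand-in functions for the actual networks, and the toy policy simply walks a small square so the episode terminates.

```python
# Hypothetical sketch of the MDP episode described in the abstract:
# First-P-Net picks the initial edge point, Next-P-Net then emits
# successive edge points until the contour closes on itself.

def first_p_net(image):
    """Stand-in for First-P-Net: return an initial edge point (row, col)."""
    return (0, 0)

# Toy transition table so the sketch runs; a real Next-P-Net would predict
# the next point along the LV endocardium boundary from image features.
_TOY_PATH = {(0, 0): (0, 1), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 0)}

def next_p_net(image, current_point):
    """Stand-in for Next-P-Net: return the next edge point (the MDP action)."""
    return _TOY_PATH[current_point]

def trace_contour(image, max_steps=50):
    """Run one episode: state = current edge point, action = next edge point.

    The episode ends when the trace returns to the start (closed contour)
    or the step budget runs out.
    """
    start = first_p_net(image)
    contour = [start]
    point = start
    for _ in range(max_steps):
        point = next_p_net(image, point)
        contour.append(point)
        if len(contour) > 2 and point == start:
            break  # contour closed -> segmentation result
    return contour
```

With the toy transition table, `trace_contour(None)` returns the closed square `[(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]`, illustrating the closure condition the abstract describes.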


Author(s):  
Yao Deng ◽  
Tiehua Zhang ◽  
Guannan Lou ◽  
Xi Zheng ◽  
Jiong Jin ◽  
...  

Author(s):  
Sangseok Yun ◽  
Jae-Mo Kang ◽  
Jeongseok Ha ◽  
Sangho Lee ◽  
Dong-Woo Ryu ◽  
...  

2021 ◽  
Vol 31 (3) ◽  
pp. 1-26
Author(s):  
Aravind Balakrishnan ◽  
Jaeyoung Lee ◽  
Ashish Gaurav ◽  
Krzysztof Czarnecki ◽  
Sean Sedwards

Reinforcement learning (RL) is an attractive way to implement high-level decision-making policies for autonomous driving, but learning directly from a real vehicle or a high-fidelity simulator is infeasible for various reasons. We therefore consider the problem of transfer reinforcement learning and study how a policy learned in a simple environment using WiseMove can be transferred to our high-fidelity simulator. WiseMove is a framework for studying safety and other aspects of RL for autonomous driving; the high-fidelity simulator accurately reproduces the dynamics and software stack of our real vehicle. We find that the accurately modelled perception errors in the high-fidelity simulator contribute the most to the transfer problem. Even when these errors are modelled only naively in WiseMove, the resulting RL policy performs better in the high-fidelity simulator than a hand-crafted rule-based policy. Applying domain randomization to the WiseMove environment yields an even better policy. The final RL policy reduces failures due to perception errors from 10% to 2.75%. We also observe that the RL policy relies significantly less on velocity than the rule-based policy, having learned that its measurement is unreliable.
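The domain randomization step mentioned above can be illustrated with a minimal sketch. All names and the noise model here are hypothetical, not the paper's implementation: the idea is simply that each training episode samples a fresh perception-error profile, so the policy cannot overfit to any single error model.

```python
import random

def randomized_perception(true_velocity, rng):
    """Hypothetical domain-randomized sensor model: the noise scale and
    dropout probability are resampled per episode, so the policy trains
    against a different perception-error profile each time."""
    noise_scale = rng.uniform(0.0, 2.0)  # m/s std-dev, sampled per episode
    dropout_p = rng.uniform(0.0, 0.3)    # chance a measurement is missing

    def observe():
        if rng.random() < dropout_p:
            return None                  # simulated sensor dropout
        return true_velocity + rng.gauss(0.0, noise_scale)

    return observe

# Per-episode usage: build a fresh observation function, then read from it.
rng = random.Random(0)
observe = randomized_perception(10.0, rng)
samples = [observe() for _ in range(5)]
```

Training against observations like these (sometimes noisy, sometimes missing) is one plausible way a policy could learn the abstract's final observation: that the velocity measurement is unreliable and should be down-weighted.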


Author(s):  
Khan Muhammad ◽  
Amin Ullah ◽  
Jaime Lloret ◽  
Javier Del Ser ◽  
Victor Hugo C. de Albuquerque
