Reinforcement learning based energy efficient robot relay for unmanned aerial vehicles against smart jamming

2021, Vol 65 (1)
Author(s): Xiaozhen Lu, Jingfang Jie, Zihan Lin, Liang Xiao, Jin Li, ...
2021
Author(s): Zhenyu Hu, Pengcheng Pi, Zhenyu Wu, Yunhe Xue, Jiayi Shen, ...

2018, Vol 3 (3), pp. 39-48
Author(s): Geyi Sheng, Minghui Min, Liang Xiao, Sicong Liu

2019, Vol 9 (4), pp. 669
Author(s): Qiao Cheng, Xiangke Wang, Jian Yang, Lincheng Shen

This paper focuses on one of the collision avoidance scenarios for unmanned aerial vehicles (UAVs), in which the UAV must avoid colliding with an enemy UAV along its flight path to a goal point. This type of problem is defined here as the enemy avoidance problem. To deal with it, a learning-based framework is proposed: the enemy avoidance problem is formulated as a Markov Decision Process (MDP), and maneuver policies for the UAV are learned with a temporal-difference reinforcement learning method called Sarsa. To handle the continuous state space, the Cerebellar Model Arithmetic Computer (CMAC) function approximation technique is incorporated into the framework. Furthermore, a hardware-in-the-loop (HITL) simulation environment is established. Simulation results show that the UAV agent learns a satisfactory policy under the proposed framework; compared with a random policy and a fixed-rule policy, the learned policy achieves a far higher probability of reaching the goal point without colliding with the enemy UAV.
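The abstract combines Sarsa (on-policy temporal-difference learning) with CMAC-style tile coding to approximate action values over a continuous state space. A minimal sketch of that combination is shown below on a hypothetical 1-D reach-the-goal task; the toy environment, constants, and function names are illustrative assumptions, not the paper's actual setup, which involves an enemy UAV and an HITL simulator.

```python
import random

# Illustrative toy task (NOT the paper's environment): an agent at position
# x in [0, 1] must reach x >= 1.0; per-step cost -0.01, terminal reward +1.
N_TILINGS = 4          # number of overlapping tilings, as in CMAC
TILES_PER_DIM = 8      # resolution of each tiling
ACTIONS = [-0.05, 0.05]  # move left / move right

def features(x, a_idx):
    """Active tile indices for continuous state x and a discrete action."""
    idxs = []
    for t in range(N_TILINGS):
        offset = t / (N_TILINGS * TILES_PER_DIM)  # shift each tiling slightly
        tile = int(min(x + offset, 0.999) * TILES_PER_DIM)
        idxs.append((t, tile, a_idx))
    return idxs

def q(w, x, a_idx):
    """Q(x, a) = sum of weights of the active tiles (linear approximation)."""
    return sum(w.get(f, 0.0) for f in features(x, a_idx))

def epsilon_greedy(w, x, eps, rng):
    if rng.random() < eps:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q(w, x, a))

def train(episodes=300, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """On-policy Sarsa update: delta = r + gamma*Q(x', a') - Q(x, a)."""
    rng = random.Random(seed)
    w = {}  # sparse weight vector indexed by active tiles
    for _ in range(episodes):
        x = 0.0
        a = epsilon_greedy(w, x, eps, rng)
        for _ in range(100):
            x2 = min(max(x + ACTIONS[a], 0.0), 1.0)
            done = x2 >= 1.0
            r = 1.0 if done else -0.01
            if done:
                delta = r - q(w, x, a)
            else:
                a2 = epsilon_greedy(w, x2, eps, rng)  # Sarsa: next action on-policy
                delta = r + gamma * q(w, x2, a2) - q(w, x, a)
            # spread the step size over the tilings, standard for tile coding
            for f in features(x, a):
                w[f] = w.get(f, 0.0) + (alpha / N_TILINGS) * delta
            if done:
                break
            x, a = x2, a2
    return w
```

After training, a greedy rollout over the learned weights moves the agent toward the goal; the overlapping offset tilings let nearby states share weights, which is the generalization property CMAC provides over a plain lookup table.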


2017, Vol 16 (11), pp. 7574-7589
Author(s): Mohammad Mozaffari, Walid Saad, Mehdi Bennis, Merouane Debbah
