Reinforcement Learning Based Autonomous Air Combat with Energy Budgets

2022
Author(s):
Hasan Isci,
Emre Koyuncu

2021
Vol 32 (6)
pp. 1421-1438
Author(s):
Zhang Jiandong,
Yang Qiming,
Shi Guoqing,
Lu Yi,
Wu Yong

Electronics
2018
Vol 7 (11)
pp. 279
Author(s):
Xianbing Zhang,
Guoqing Liu,
Chaojie Yang,
Jiang Wu

With the development of information technology, air combat is becoming increasingly intelligent, and the demand for automated intelligent decision-making systems is growing. Based on the characteristics of over-the-horizon air combat, this paper constructs an over-the-horizon air combat training environment, which includes aircraft dynamics modeling, air combat scenario design, enemy aircraft strategy design, and reward/penalty signal design. To improve the efficiency with which the reinforcement learning algorithm explores the strategy space, the paper proposes a heuristic Q-Network method that integrates expert experience, using expert experience as a heuristic signal to guide the search process and combining heuristic exploration with random exploration. For the over-the-horizon air combat maneuver decision problem, the heuristic Q-Network method is used to train a neural network model in the over-the-horizon air combat training environment; through continuous interaction with the environment, the air combat maneuver strategy is learned autonomously. Simulation experiments verify the efficiency of the heuristic Q-Network method and the effectiveness of the resulting air combat maneuver strategy.
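The abstract describes exploration that is guided partly by an expert heuristic and partly by random action selection. Below is a minimal sketch of such a heuristic epsilon-greedy action-selection rule, assuming a discrete maneuver library; the `expert_action` heuristic, the action count, and the split between expert-guided and random exploration are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

N_ACTIONS = 7  # hypothetical size of the discrete maneuver library


def expert_action(state):
    """Hypothetical expert heuristic standing in for the paper's expert-experience signal.

    Here it simply turns toward the opponent based on an assumed bearing-error feature.
    """
    bearing_error = state[0]          # assumed: relative bearing to the enemy aircraft
    return 0 if bearing_error < 0 else 1  # assumed maneuvers: 0 = turn left, 1 = turn right


def select_action(q_values, state, epsilon=0.3, expert_share=0.5, rng=None):
    """Heuristic epsilon-greedy selection.

    With probability (1 - epsilon) exploit the Q-network's greedy action;
    otherwise explore, splitting exploration between the expert heuristic
    (probability expert_share) and a uniformly random maneuver.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < 1.0 - epsilon:
        return int(np.argmax(q_values(state)))   # exploit learned Q-values
    if rng.random() < expert_share:
        return expert_action(state)              # heuristic (expert-guided) exploration
    return int(rng.integers(N_ACTIONS))          # random exploration


# Usage with a stand-in Q-function (the paper trains a neural network instead):
dummy_q = lambda s: np.zeros(N_ACTIONS)
action = select_action(dummy_q, state=np.array([-0.2, 0.0]), rng=np.random.default_rng(0))
```

In a training loop, epsilon would typically be annealed so that expert-guided and random exploration dominate early and the learned Q-values take over later.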


Author(s):
Haiyin Piao,
Zhixiao Sun,
Guanglei Meng,
Hechang Chen,
Bohao Qu,
...

IEEE Access
2020
Vol 8
pp. 363-378
Author(s):
Qiming Yang,
Jiandong Zhang,
Guoqing Shi,
Jinwen Hu,
Yong Wu
