Real-Time Strategy Game
Recently Published Documents


TOTAL DOCUMENTS

71
(FIVE YEARS 16)

H-INDEX

12
(FIVE YEARS 1)

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3332
Author(s):  
Wenzhen Huang ◽  
Qiyue Yin ◽  
Junge Zhang ◽  
Kaiqi Huang

StarCraft is a real-time strategy game that provides a complex environment for AI research. Macromanagement, i.e., selecting appropriate units to build depending on the current state, is one of the most important problems in this game. To reduce the requirement for expert knowledge and to enhance the coordination of the systematic bot, we select reinforcement learning (RL) to tackle the problem of macromanagement. We propose a novel deep RL method, Mean Asynchronous Advantage Actor-Critic (MA3C), which computes the approximate expected policy gradient instead of the gradient of the sampled action to reduce the variance of the gradient, and encodes the history queue with a recurrent neural network to tackle the problem of imperfect information. The experimental results show that MA3C achieves a very high win rate of approximately 90% against the weaker opponents and improves the win rate by about 30% against the stronger opponents. We also propose a novel method to visualize and interpret the policy learned by MA3C. Combining the visualized results with snapshots of games, we find that the learned macromanagement not only adapts to the game rules and the policy of the opponent bot, but also cooperates well with the other modules of MA3C-Bot.
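The variance-reduction idea behind the expected policy gradient can be illustrated numerically: for a softmax policy, averaging the gradient over all actions weighted by their probabilities yields the same mean as the single-sampled-action estimator, but with zero sampling variance. The toy state, logits, and advantages below are illustrative assumptions, not the paper's MA3C implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Toy single-state policy with 3 actions and fixed per-action advantages.
logits = np.array([0.2, -0.1, 0.3])
advantages = np.array([1.0, -0.5, 0.25])
pi = softmax(logits)

def sampled_gradient():
    # Standard estimator: grad log pi(a) * A(a) for ONE sampled action a.
    a = rng.choice(3, p=pi)
    grad_log = -pi.copy()
    grad_log[a] += 1.0          # d(log softmax)/d(logits) at action a
    return grad_log * advantages[a]

def expected_gradient():
    # "Mean" estimator: average over ALL actions, weighted by pi(a).
    g = np.zeros(3)
    for a in range(3):
        grad_log = -pi.copy()
        grad_log[a] += 1.0
        g += pi[a] * grad_log * advantages[a]
    return g

samples = np.stack([sampled_gradient() for _ in range(5000)])
print("mean of sampled estimator:", samples.mean(axis=0))
print("expected estimator:       ", expected_gradient())
print("sampled variance (sum):   ", samples.var(axis=0).sum())
```

The two estimators agree in expectation, but the expected version is deterministic given the policy, which is the variance reduction the abstract refers to.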


Heliyon ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. e06724
Author(s):  
Natalia Jakubowska ◽  
Paweł Dobrowolski ◽  
Natalia Rutkowska ◽  
Maciej Skorko ◽  
Monika Myśliwiec ◽  
...  

2020 ◽  
Vol 11 (4) ◽  
Author(s):  
Leandro Vian ◽  
Marcelo De Gomensoro Malheiros

In recent years, Machine Learning techniques have become the driving force behind the worldwide emergence of Artificial Intelligence, producing cost-effective and precise tools for pattern recognition and data analysis. A particular approach to the training of neural networks, Reinforcement Learning (RL), achieved prominence by creating almost unbeatable artificial opponents in board games like Chess and Go, as well as in video games. This paper gives an overview of Reinforcement Learning and tests this approach on a very popular real-time strategy game, StarCraft II. Our goal is to examine the tools and algorithms readily available for RL, also addressing different scenarios in which a neural network can be linked to StarCraft II to learn by itself. This work describes both the technical issues involved and the preliminary results obtained by applying two specific training strategies, A2C and DQN.
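As a toy illustration of the value-based update underlying DQN (the temporal-difference target `r + gamma * max_a' Q(s', a')`), here is a tabular sketch on a small chain environment. The chain, rewards, and hyperparameters are assumptions standing in for the far richer StarCraft II setting, not the paper's setup:

```python
import numpy as np

# Tabular Q-learning with the DQN-style TD target on a 4-state chain.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
gamma, alpha = 0.9, 0.5

def td_update(s, a, r, s_next, done):
    # Bootstrap from the best next action unless the episode ended.
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# Repeatedly walk right (action 1); reward 1.0 on reaching the final state.
for _ in range(200):
    for s in range(n_states - 1):
        done = (s + 1 == n_states - 1)
        td_update(s, 1, 1.0 if done else 0.0, s + 1, done)

print(np.round(Q[:, 1], 3))  # values grow toward the rewarding end of the chain
```

DQN replaces the table with a neural network and adds experience replay and a target network, but the TD target it regresses toward is the same.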


2020 ◽  
Vol 46 (4) ◽  
pp. 349-355
Author(s):  
Insung Baek ◽  
Hyungu Kahng ◽  
Yoon Sang Cho ◽  
Youngjae Lee ◽  
Young Joon Park ◽  
...  

Author(s):  
Cong Fei ◽  
Bin Wang ◽  
Yuzheng Zhuang ◽  
Zongzhang Zhang ◽  
Jianye Hao ◽  
...  

Generative adversarial imitation learning (GAIL) has shown promising results by taking advantage of generative adversarial nets, especially in the field of robot learning. However, the requirement of isolated single-modal demonstrations limits the scalability of the approach to real-world scenarios such as autonomous vehicles' need for a proper understanding of human drivers' behavior. In this paper, we propose a novel multi-modal GAIL framework, named Triple-GAIL, that is able to learn skill selection and imitation jointly from both expert demonstrations and continuously generated experiences, used for data augmentation, by introducing an auxiliary selector. We provide theoretical guarantees on the convergence to optima for both the generator and the selector. Experiments on real driver trajectories and real-time strategy game datasets demonstrate that Triple-GAIL fits multi-modal behaviors closer to the demonstrators and outperforms state-of-the-art methods.
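For readers unfamiliar with GAIL, a minimal sketch of its core ingredients: a discriminator trained with binary cross-entropy to separate expert from generated state-action pairs, and a surrogate reward derived from the discriminator's output. The linear discriminator and synthetic data below are illustrative assumptions; Triple-GAIL's auxiliary selector and multi-modal machinery are not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(w, expert_sa, generated_sa):
    # D(s,a) = sigmoid(w . [s;a]); expert pairs labeled 1, generated labeled 0.
    d_expert = sigmoid(expert_sa @ w)
    d_gen = sigmoid(generated_sa @ w)
    return -(np.log(d_expert).mean() + np.log(1.0 - d_gen).mean())

def imitation_reward(w, sa):
    # Generator's surrogate reward: -log(1 - D(s,a)) is larger when the
    # discriminator mistakes generated pairs for expert ones.
    return -np.log(1.0 - sigmoid(sa @ w))

rng = np.random.default_rng(0)
expert = rng.normal(1.0, 0.3, size=(64, 2))      # toy "expert" pairs
generated = rng.normal(-1.0, 0.3, size=(64, 2))  # toy "generated" pairs
w = np.array([1.0, 1.0])                         # a discriminator that separates them

print("discriminator loss:", discriminator_loss(w, expert, generated))
```

In full GAIL the generator is a policy optimized with RL against this surrogate reward while the discriminator is retrained, an adversarial loop that Triple-GAIL extends with its selector.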


Author(s):  
Guillaume Lorthioir ◽  
Katsumi Inoue

Digital games have proven to be valuable simulation environments for plan and goal recognition. However, goal recognition is a hard problem, especially in digital games, where players unintentionally achieve goals through exploratory actions, abandon goals with little warning, or adopt new goals based upon recent or prior events. In this paper, we describe a method that uses simulation and Bayesian programming to infer the player's strategy in a real-time strategy (RTS) game, and discuss how it could be used to build more adaptive AI for this kind of game, and thus more challenging and entertaining games for players.
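The Bayesian core of such an approach can be sketched as a posterior update over candidate strategies as actions are observed. The strategy names and observation probabilities below are illustrative assumptions, not the paper's model:

```python
# Infer a posterior over hypothetical RTS strategies from observed build actions.
priors = {"rush": 0.5, "economy": 0.5}

# Assumed observation model P(action | strategy).
likelihood = {
    "rush":    {"build_barracks": 0.7, "build_worker": 0.3},
    "economy": {"build_barracks": 0.2, "build_worker": 0.8},
}

def update(posterior, action):
    # Bayes rule: multiply by the likelihood of the observed action, renormalize.
    unnorm = {s: p * likelihood[s][action] for s, p in posterior.items()}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

posterior = dict(priors)
for action in ["build_barracks", "build_barracks", "build_worker"]:
    posterior = update(posterior, action)

print(posterior)  # two early barracks shift the belief strongly toward "rush"
```

Each observed action sharpens the belief, which is what lets an adaptive AI commit to a counter-strategy before the player's plan fully unfolds.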

