Coevolving influence maps for spatial team tactics in a RTS game

Author(s): Phillipa Avery, Sushil Louis
2020 · Vol 34 (10) · pp. 13849-13850
Author(s): Donghyeon Lee, Man-Je Kim, Chang Wook Ahn

In the real-time strategy (RTS) game StarCraft II, players need to anticipate the consequences of a combat before committing to it. We propose a combat outcome predictor that utilizes terrain information as well as squad information. To train the model, we generated a StarCraft II combat dataset by simulating diverse, large-scale combat situations. The overall accuracy of our model was 89.7%. Our predictor can be integrated into an artificial intelligence agent for RTS games as a short-term decision-making module.
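As a rough illustration of the idea only (not the paper's actual model, features, or dataset), a combat outcome predictor of this kind can be sketched as a binary classifier over squad and terrain features; the feature names and synthetic labels below are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature vector per combat: aggregate squad strength for each
# side plus one terrain feature (whether the ally holds the high ground).
n = 2000
ally_strength = rng.uniform(0, 100, n)
enemy_strength = rng.uniform(0, 100, n)
high_ground = rng.integers(0, 2, n)            # 1 if ally holds high ground
X = np.column_stack([ally_strength, enemy_strength, high_ground])

# Synthetic label: ally wins when its effective strength (boosted on high
# ground) exceeds the enemy's -- a stand-in for a simulated-combat dataset.
y = (ally_strength * (1 + 0.2 * high_ground) > enemy_strength).astype(int)

# Train on the first 1500 combats, evaluate on the held-out 500.
model = LogisticRegression().fit(X[:1500], y[:1500])
accuracy = model.score(X[1500:], y[1500:])
print(f"held-out accuracy: {accuracy:.2f}")
```

A real agent would call such a model's `predict` before each engagement and retreat when the predicted win probability is low.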


2011 · Vol 2011 · pp. 1-17
Author(s): Kurt Weissgerber, Gary B. Lamont, Brett J. Borghetti, Gilbert L. Peterson

The underlying goal of a competing agent in a discrete real-time strategy (RTS) game is to defeat an adversary. Strategic agents must define an a priori plan to maneuver their resources in order to destroy the adversary and the adversary's resources, as well as to secure physical regions of the environment. This a priori plan can be generated by leveraging collected historical knowledge about the environment, which is then employed to build a classification model for real-time decision-making in the RTS domain. The best way to generate a classification model for a complex problem domain depends on the characteristics of the solution space. One experimental method for determining solution-space (search-landscape) characteristics is to analyze historical algorithm performance on the specific problem. We select a deterministic search technique and a stochastic search method for a priori classification-model generation. These approaches are designed, implemented, and tested for a specific complex RTS game, Bos Wars. Their performance allows us to draw various conclusions about deploying a competing agent in the complex search landscapes associated with RTS games.
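The deterministic-versus-stochastic comparison can be sketched on a toy version of classification-model generation; the threshold model and win/loss data below are illustrative assumptions, far simpler than the paper's Bos Wars models:

```python
import random

random.seed(0)

# Toy stand-in for a priori model generation: learn a single threshold that
# separates "win" from "loss" game states. Exhaustive grid search plays the
# deterministic role; uniform random search plays the stochastic one.
xs = [random.uniform(0, 1) for _ in range(200)]
data = [(x, int(x > 0.6)) for x in xs]

def accuracy(threshold):
    """Fraction of states the threshold model classifies correctly."""
    return sum(int(x > threshold) == y for x, y in data) / len(data)

# Deterministic: sweep a fixed grid of 101 candidate thresholds.
grid_best = max((i / 100 for i in range(101)), key=accuracy)

# Stochastic: sample 100 candidate thresholds uniformly at random.
rand_best = max((random.uniform(0, 1) for _ in range(100)), key=accuracy)

print(f"grid:   t={grid_best:.2f} acc={accuracy(grid_best):.2f}")
print(f"random: t={rand_best:.2f} acc={accuracy(rand_best):.2f}")
```

Comparing how quickly and reliably each search finds a high-accuracy model is the kind of evidence the paper uses to characterize the search landscape.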


Author(s): Supaphon Kamon, Tung Due Nguyen, Tomohiro Harada, Ruck Thawonmas, Ikuko Nishikawa

2014 · Vol 5 (4) · pp. 251-258
Author(s): R. Lara-Cabrera, C. Cotta, A.J. Fernández-Leiva

Author(s): José A. García Gutiérrez, Carlos Cotta, Antonio J. Fernández Leiva

Author(s): Andrew Anderson, Jonathan Dodge, Amrita Sadarangani, Zoe Juozapaitis, Evan Newman, ...

We present a user study investigating the impact of explanations on non-experts' understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent explanation type, reward-decomposition bars (predictions of future types of rewards). We designed a 124-participant, four-treatment experiment to compare participants' mental models of an RL agent in a simple real-time strategy (RTS) game. Our results show that the combination of both saliency maps and reward bars was needed to achieve a statistically significant improvement in mental-model score over the control. In addition, our qualitative analysis of the data reveals a number of effects that merit further study.
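To make the reward-decomposition explanation type concrete: one common construction (assumed here; the study's exact agent architecture is not given in the abstract) keeps a separate Q-value per reward type, so the "bars" shown to a participant are just the per-type components whose sum is the overall action value. The reward types and Q-values below are hypothetical:

```python
import numpy as np

# Hypothetical reward types for an RTS agent.
reward_types = ["damage_dealt", "damage_taken", "resources"]

# Hypothetical per-type Q-values for 3 actions in some state s:
# rows are actions, columns are reward types.
q_per_type = np.array([
    [5.0, -2.0, 1.0],   # action 0: attack
    [0.5, -0.1, 0.0],   # action 1: retreat
    [0.0,  0.0, 3.0],   # action 2: harvest
])

q_total = q_per_type.sum(axis=1)   # overall Q(s, a) per action
best = int(np.argmax(q_total))     # the action the agent would take

# The reward-decomposition "bars" displayed for the chosen action:
for name, q in zip(reward_types, q_per_type[best]):
    print(f"{name:>13}: {q:+.1f}")
```

The bars let a viewer see, for example, that an attack was chosen because expected damage dealt outweighed expected damage taken, which a single scalar Q-value cannot convey.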

