RTS Game
Recently Published Documents


TOTAL DOCUMENTS: 49 (last five years: 11)

H-INDEX: 7 (last five years: 2)

2021, pp. 1-15
Author(s): Adam Dachowicz, Kshitij Mall, Prajwal Balasubramani, Apoorv Maheshwari, Jitesh H. Panchal, et al.

Abstract In this paper, we adapt computational design approaches, widely used by the engineering design community, to address the unique challenges associated with mission design using RTS games. Specifically, we present a modeling approach that combines experimental design techniques, meta-modeling using convolutional neural networks (CNNs), uncertainty quantification, and explainable AI (XAI). We illustrate the approach using an open-source real-time strategy (RTS) game called microRTS. The modeling approach consists of microRTS player agents (bots), a design of experiments that arranges games between identical agents with asymmetric initial conditions, and an AI-infused layer comprising CNNs, XAI, and uncertainty analysis through Monte Carlo Dropout, which together enable analysis of game balance. A sample balanced game and the corresponding predictions and SHapley Additive exPlanations (SHAP) are presented in this study. Three additional perturbations were then introduced to this balanced gameplay, and observations about important game features, derived using SHAP, are reported. Our results show that this analysis can successfully predict the probability of a win for self-play microRTS games, and can capture uncertainty in predictions, which can be used to guide additional data collection to improve the model or to refine the game-balance measure.
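The Monte Carlo Dropout technique mentioned in the abstract can be illustrated with a minimal sketch: dropout is kept active at inference time, several stochastic forward passes are run, and the spread of the predicted win probabilities serves as an uncertainty estimate. The toy linear model, feature values, and weights below are hypothetical stand-ins for the paper's CNN.

```python
# Monte Carlo Dropout sketch (toy model, not the authors' CNN):
# run T stochastic forward passes with dropout enabled and treat
# the spread of the predicted win probabilities as uncertainty.
import math
import random

def forward(features, weights, drop_rate, rng):
    """One stochastic forward pass: randomly drop inputs, then sigmoid."""
    z = 0.0
    for x, w in zip(features, weights):
        if rng.random() >= drop_rate:          # keep this input
            z += x * w / (1.0 - drop_rate)     # inverted-dropout scaling
    return 1.0 / (1.0 + math.exp(-z))          # predicted P(win)

def mc_dropout_predict(features, weights, drop_rate=0.2, passes=200, seed=0):
    rng = random.Random(seed)
    preds = [forward(features, weights, drop_rate, rng) for _ in range(passes)]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, math.sqrt(var)                # mean P(win), uncertainty

mean_p, std_p = mc_dropout_predict([0.5, -1.0, 2.0], [1.2, 0.4, 0.9])
```

A high standard deviation across passes flags games where the model is unsure, which is exactly where additional data collection would be targeted.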


Author(s): Guillaume Lorthioir, Katsumi Inoue

Digital games have proven to be valuable simulation environments for plan and goal recognition. However, goal recognition is a hard problem, especially in digital games, where players unintentionally achieve goals through exploratory actions, abandon goals with little warning, or adopt new goals based on recent or prior events. In this paper, we describe a method that uses simulation and Bayesian programming to infer the player's strategy in a real-time strategy (RTS) game, and discuss how it could be used to build more adaptive AI for this kind of game, and thus more challenging and entertaining games for players.
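The Bayesian inference at the core of this approach can be sketched as a posterior update over candidate strategies given observed player actions. The strategy names and likelihood values below are hypothetical illustrations, not the paper's actual model.

```python
# Toy Bayesian strategy inference: maintain a posterior over
# candidate strategies and update it after each observed action.
# P(action | strategy): hypothetical likelihoods for illustration.
likelihood = {
    "rush":   {"build_barracks": 0.7, "expand": 0.1, "scout": 0.2},
    "turtle": {"build_barracks": 0.2, "expand": 0.2, "scout": 0.6},
    "boom":   {"build_barracks": 0.1, "expand": 0.8, "scout": 0.1},
}

def update_posterior(prior, action):
    """One Bayes update: posterior proportional to prior * P(action | strategy)."""
    unnorm = {s: prior[s] * likelihood[s][action] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

posterior = {s: 1 / 3 for s in likelihood}        # uniform prior
for act in ["build_barracks", "build_barracks"]:  # observed actions
    posterior = update_posterior(posterior, act)

best = max(posterior, key=posterior.get)          # most probable strategy
```

An adaptive AI could then condition its counter-strategy on `best` (or on the whole posterior, to hedge while uncertainty is still high).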


2020, Vol 34 (10), pp. 13849-13850
Author(s): Donghyeon Lee, Man-Je Kim, Chang Wook Ahn

In the real-time strategy (RTS) game StarCraft II, players need to know the consequences of their actions before making a decision in combat. We propose a combat outcome predictor that utilizes terrain information as well as squad information. To train the model, we generated a StarCraft II combat dataset by simulating diverse, large-scale combat situations. The overall accuracy of our model was 89.7%. Our predictor can be integrated into an artificial intelligence agent for RTS games as a short-term decision-making module.
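The dataset-generation step described above can be sketched with a crude Lanchester-style attrition model standing in for StarCraft II combat: each sample pairs squad and terrain features with the simulated winner. All parameters below (attrition rate, terrain bonuses, unit-count ranges) are hypothetical.

```python
# Hedged sketch of generating a labeled combat dataset by simulation
# (a toy attrition model, not StarCraft II): each sample pairs
# squad/terrain features with the simulated winner as the label.
import random

def simulate_combat(a_units, b_units, a_terrain_bonus, rng):
    """Crude attrition loop: each side loses units in proportion to the
    opponent's strength; terrain multiplies side A's damage output."""
    a, b = float(a_units), float(b_units)
    while a > 0.5 and b > 0.5:
        a_dmg = a * 0.1 * a_terrain_bonus * rng.uniform(0.8, 1.2)
        b_dmg = b * 0.1 * rng.uniform(0.8, 1.2)
        a, b = a - b_dmg, b - a_dmg
    return 1 if a > b else 0      # label: 1 = side A wins

def make_dataset(n, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a = rng.randint(5, 30)
        b = rng.randint(5, 30)
        bonus = rng.choice([0.9, 1.0, 1.1])   # e.g. low/flat/high ground
        data.append(((a, b, bonus), simulate_combat(a, b, bonus, rng)))
    return data

dataset = make_dataset(1000)
```

A classifier trained on such (features, label) pairs plays the role of the combat outcome predictor; the real dataset would come from the game engine rather than a toy model.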


Author(s): Andrew Anderson, Jonathan Dodge, Amrita Sadarangani, Zoe Juozapaitis, Evan Newman, et al.

We present a user study investigating the impact of explanations on non-experts' understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent explanation type, reward-decomposition bars (predictions of future types of rewards). We designed a 124-participant, four-treatment experiment to compare participants' mental models of an RL agent in a simple real-time strategy (RTS) game. Our results show that the combination of saliency maps and reward bars was needed to achieve a statistically significant improvement in mental-model score over the control. In addition, our qualitative analysis of the data reveals a number of effects for further study.
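The idea behind reward-decomposition bars can be sketched as follows: instead of a single scalar Q(s, a), the agent maintains one Q-component per reward type, and the bars shown to users are these components. The reward types and values below are hypothetical; the study's RTS agent is not reproduced.

```python
# Reward-decomposition sketch (hypothetical reward types and values):
# the scalar Q-value is the sum of per-reward-type components, and
# those components are what a reward-decomposition bar chart shows.
REWARD_TYPES = ["damage_dealt", "damage_taken", "resources", "territory"]

def q_components(action):
    """Hypothetical per-type Q-value lookup for one (state, action)."""
    table = {
        "attack":  {"damage_dealt": 3.0, "damage_taken": -1.5,
                    "resources": 0.0, "territory": 0.5},
        "harvest": {"damage_dealt": 0.0, "damage_taken": -0.2,
                    "resources": 2.5, "territory": 0.0},
    }
    return table[action]

def total_q(action):
    """The scalar Q-value is the sum of its typed components."""
    return sum(q_components(action).values())

# The agent still picks argmax over total Q; the decomposition only
# changes what is *explained* to the user, not what is chosen.
best_action = max(["attack", "harvest"], key=total_q)
```

Showing the user that, say, `harvest` wins mostly on its `resources` component makes the agent's trade-off legible in a way the bare scalar cannot.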


Author(s): Tianyu Liu, Zijie Zheng, Hongchang Li, Kaigui Bian, Lingyang Song

Game AI is of great importance, as games are simulations of reality. Recent research on game AI has shown much progress in various kinds of games, such as console games, board games, and MOBA games. However, RTS games remain a challenge because of their huge state spaces, imperfect information, sparse rewards, and varied strategies. Moreover, typical card-based RTS games have complex card features and still lack solutions. We present a deep model, SEAT (selection-attention), to play card-based RTS games. The SEAT model comprises two parts, a selection part for card choice and an attention part for card usage, and it learns from scratch via deep reinforcement learning. Comprehensive experiments were performed on Clash Royale, a popular mobile card-based RTS game. Empirical results show that the SEAT agent reaches a high win rate against both rule-based and decision-tree-based agents.
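The two-part structure of such a policy can be sketched at its simplest: a selection head scores the cards in hand, and an attention head scores board positions for the chosen card. The scores below are hypothetical head outputs; SEAT itself is a deep RL model and is not reproduced here.

```python
# Minimal selection-then-attention sketch for a card-based game
# (toy numbers standing in for learned network outputs).
import math

def softmax(xs):
    m = max(xs)                        # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def select_card(hand_scores):
    """Selection part: pick the card with the highest probability."""
    probs = softmax(hand_scores)
    return max(range(len(probs)), key=probs.__getitem__)

def place_card(position_scores):
    """Attention part: pick the board cell with the highest weight."""
    weights = softmax(position_scores)
    return max(range(len(weights)), key=weights.__getitem__)

card = select_card([0.2, 1.5, -0.3, 0.9])   # hypothetical head outputs
cell = place_card([0.1, 0.4, 2.0, 0.0])
```

In training, the softmax outputs would be sampled from (rather than argmaxed) so the agent explores card choices and placements before its policy converges.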


2019, Vol 14 (3), pp. 8-18
Author(s): Nicolas A. Barriga, Marius Stanescu, Felipe Besoain, Michael Buro
