Intra-task Curriculum Learning for Faster Reinforcement Learning in Video Games
Author(s): Nathaniel du Preez-Wilkinson, Marcus Gallagher, Xuelei Hu

2021
Author(s): Anssi Kanervisto, Christian Scheller, Yanick Schraner, Ville Hautamaki

Author(s): Wang Meng, Chen Yingfeng, Lv Tangjie, Song Yan, Guan Kai, et al.

2021, pp. 1-1
Author(s): Pei Xu, Qiyue Yin, Junge Zhang, Kaiqi Huang

2020, Vol 34 (04), pp. 4501-4510
Author(s): Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zając, Olivier Bachem, et al.

Recent progress in the field of reinforcement learning has been accelerated by virtual learning environments such as video games, where novel algorithms and ideas can be quickly tested in a safe and reproducible manner. We introduce the Google Research Football Environment, a new reinforcement learning environment where agents are trained to play football in an advanced, physics-based 3D simulator. The resulting environment is challenging, easy to use and customize, and it is available under a permissive open-source license. In addition, it provides support for multiplayer and multi-agent experiments. We propose three full-game scenarios of varying difficulty with the Football Benchmarks and report baseline results for three commonly used reinforcement learning algorithms (IMPALA, PPO, and Ape-X DQN). We also provide a diverse set of simpler scenarios with the Football Academy and showcase several promising research directions.
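Environments like the one described above are typically driven through the standard Gym-style `reset()`/`step()` interface. The sketch below illustrates that interaction loop with a self-contained stub environment standing in for the Football environment; the class, the episode length, and the 115-dimensional observation (echoing the "simple115" representation) are illustrative assumptions, not the actual package API.

```python
import random

class StubFootballEnv:
    """Toy stand-in for a football-style RL environment.

    Exposes the conventional Gym interface: reset() returns an initial
    observation, step(action) returns (obs, reward, done, info).
    """
    def __init__(self, episode_len=10):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0] * 115  # assumed 115-dim flat observation vector

    def step(self, action):
        self.t += 1
        obs = [0.0] * 115
        reward = 1.0 if action == 0 else 0.0  # dummy "goal scored" signal
        done = self.t >= self.episode_len
        return obs, reward, done, {}

def run_episode(env, policy):
    """Roll out one episode and return the total reward."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total

random.seed(0)
ret = run_episode(StubFootballEnv(), policy=lambda obs: random.choice([0, 1]))
```

Any of the baseline algorithms mentioned above (IMPALA, PPO, Ape-X DQN) plugs into this loop by supplying the `policy` callable and learning from the `(obs, reward, done)` transitions.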


Author(s): Eloi Alonso, Maxim Peter, David Goumard, Joshua Romoff

In video games, non-player characters (NPCs) are used to enhance the players' experience in a variety of ways, e.g., as enemies, allies, or innocent bystanders. A crucial component of NPCs is navigation, which allows them to move from one point to another on the map. The most popular approach for NPC navigation in the video game industry is to use a navigation mesh (NavMesh), which is a graph representation of the map, with nodes and edges indicating traversable areas. Unfortunately, complex navigation abilities that extend the character's capacity for movement, e.g., grappling hooks, jetpacks, teleportation, or double-jumps, increase the complexity of the NavMesh, making it intractable in many practical scenarios. Game designers are thus constrained to only add abilities that can be handled by a NavMesh. As an alternative to the NavMesh, we propose to use Deep Reinforcement Learning (Deep RL) to learn how to navigate 3D maps in video games using any navigation ability. We test our approach on complex 3D environments that are notably an order of magnitude larger than maps typically used in the Deep RL literature. One of these environments is from a recently released AAA video game called Hyper Scape. We find that our approach performs surprisingly well, achieving at least 90% success rate in a variety of scenarios using complex navigation abilities.
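The NavMesh approach described above reduces NPC navigation to graph search: nodes are traversable regions, edges connect adjacent ones, and a shortest-path algorithm such as A* finds the route. The toy sketch below shows that pattern; the map, node names, and costs are made up for illustration, and the admissible-but-trivial zero heuristic makes this equivalent to Dijkstra's algorithm.

```python
import heapq

def astar(graph, start, goal, heuristic):
    """A* over a NavMesh-style graph.

    graph: {node: [(neighbor, edge_cost), ...]}
    heuristic: admissible estimate of remaining cost to goal.
    Returns the path as a list of nodes, or None if unreachable.
    """
    frontier = [(heuristic(start), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in visited:
                f = g + cost + heuristic(nbr)
                heapq.heappush(frontier, (f, g + cost, nbr, path + [nbr]))
    return None

# Tiny directed mesh: the cheapest A -> D route is A -> B -> C -> D.
# An ability like a jetpack would add extra edges (e.g. a direct A -> D link),
# which is exactly the kind of growth that blows up NavMesh size.
mesh = {"A": [("B", 1.0)], "B": [("C", 1.0), ("D", 3.0)], "C": [("D", 1.0)]}
route = astar(mesh, "A", "D", heuristic=lambda n: 0.0)
```

The Deep RL alternative proposed in the abstract sidesteps this structure entirely: instead of enumerating traversable edges per ability, the agent learns a policy that maps raw observations to movement actions.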


2014, Vol 26 (1), pp. 45-63
Author(s): Matthew E. Taylor, Nicholas Carboni, Anestis Fachantidis, Ioannis Vlahavas, Lisa Torrey
