Object-Oriented State Abstraction in Reinforcement Learning for Video Games


Author(s):  
Carlos Diuk ◽  
Michael Littman

Reinforcement learning (RL) deals with the problem of an agent that must learn, through its interactions with an environment, how to behave so as to maximize its utility (Sutton & Barto, 1998; Kaelbling, Littman & Moore, 1996). RL problems are usually formalized as Markov Decision Processes (MDPs), which consist of a finite set of states and a finite set of actions that the agent can perform. At any given point in time, the agent is in some state and picks an action; it then observes the new state this action leads to and receives a reward signal. The goal of the agent is to maximize its long-term reward. In this standard formalization, no particular structure or relationship between states is assumed. However, learning in environments with extremely large state spaces is infeasible without some form of generalization. Exploiting the underlying structure of a problem can enable such generalization, and this has long been recognized as an important aspect of representing sequential decision tasks (Boutilier et al., 1999).

Hierarchical reinforcement learning is the subfield of RL that deals with the discovery and/or exploitation of this underlying structure. Two main ideas come into play. The first is to break a task into a hierarchy of smaller subtasks, each of which can be learned more quickly and easily than the whole problem; subtasks can also be performed multiple times in the course of achieving the larger task, reusing accumulated knowledge and skills. The second is to use state abstraction within subtasks: not every subtask needs to be concerned with every aspect of the state space, so some states can be abstracted away and treated as identical for the purposes of a given subtask.
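The interplay of MDPs, subtasks, and state abstraction can be illustrated with a small sketch. The grid environment, the subtask ("reach the right edge"), and the abstraction function `phi` below are hypothetical examples chosen for illustration, not taken from the paper: for this subtask only the x-coordinate matters, so `phi` discards the y-coordinate and states that differ only in y share Q-values.

```python
import random
from collections import defaultdict

# Illustrative sketch: tabular Q-learning on a grid MDP, where a
# state-abstraction function phi maps raw states to abstract states.
# All names and the environment are hypothetical examples.

def phi(state):
    """Abstraction for a 'reach the right edge' subtask:
    only the x-coordinate matters, so y is abstracted away."""
    x, y = state
    return x

def q_learning(episodes=200, width=5, height=5,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    actions = ["left", "right", "up", "down"]
    Q = defaultdict(float)  # keyed by (abstract_state, action)

    def step(state, action):
        x, y = state
        if action == "left":
            x = max(x - 1, 0)
        elif action == "right":
            x = min(x + 1, width - 1)
        elif action == "up":
            y = min(y + 1, height - 1)
        elif action == "down":
            y = max(y - 1, 0)
        done = (x == width - 1)          # subtask achieved at the right edge
        return (x, y), (1.0 if done else 0.0), done

    def greedy(s):
        best = max(Q[(s, a)] for a in actions)
        return random.choice([a for a in actions if Q[(s, a)] == best])

    for _ in range(episodes):
        state = (0, random.randrange(height))
        done = False
        while not done:
            s = phi(state)
            action = random.choice(actions) if random.random() < epsilon else greedy(s)
            next_state, reward, done = step(state, action)
            best_next = max(Q[(phi(next_state), a)] for a in actions)
            # Standard Q-learning update, but on abstract states:
            Q[(s, action)] += alpha * (reward + gamma * best_next - Q[(s, action)])
            state = next_state
    return Q
```

Because the Q-table is keyed on `phi(state)` rather than the raw state, experience gathered in any row of the grid generalizes to all rows, shrinking the effective state space for this subtask from `width * height` to `width`.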

