Reinforcement Learning in Latent Action Sequence Space

Author(s):  
Heecheol Kim ◽  
Masanori Yamada ◽  
Kosuke Miyoshi ◽  
Tomoharu Iwata ◽  
Hiroshi Yamakawa


2019 ◽ 
Author(s):  
Eric Garr

Animals engage in intricately woven and choreographed action sequences that are constructed through trial-and-error learning. The mechanisms by which the brain links individual actions into what are later recalled as fluid chains of behavior are not fully understood, but there is broad consensus that the basal ganglia play a crucial role in this process. This paper presents a comprehensive review of the role of the basal ganglia in action sequencing, focusing on whether the computational framework of reinforcement learning can capture key behavioral features of sequencing and the neural mechanisms that underlie them. While a simple neurocomputational model of reinforcement learning can capture key features of action sequence learning, it is not sufficient to capture the goal-directed control of sequences or their hierarchical representation. The hierarchical structure of action sequences, in particular, poses a challenge for building better models of action sequencing, and it is in this regard that further investigation of basal ganglia information processing may be informative.
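
To make the kind of "simple neurocomputational model" the review refers to concrete, the sketch below is a minimal, illustrative example, not the paper's model: the toy chain task, parameter names, and the chunk-caching step are all assumptions. A tabular Q-learner is trained on a short chain of states, and its greedy policy is then cached as a single reusable action chunk, mimicking the transition from step-by-step action selection to fluid sequence execution.

```python
import numpy as np

# Minimal sketch (illustrative assumptions throughout; not the review's model):
# tabular Q-learning on a toy chain task, followed by caching the greedy
# policy as one "chunk" that can be replayed without per-step evaluation.

N_STATES = 5          # states 0..4 along a chain; reward at the last state
PRIMITIVES = [0, 1]   # 0 = stay, 1 = advance one state
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(PRIMITIVES)))

def greedy(s):
    # Break ties randomly so the untrained agent still tries both actions.
    return int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))

def step(s, a):
    """Toy chain environment: action 1 advances one state; reward at the end."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else s
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    for t in range(50):                      # cap episode length
        a = rng.integers(len(PRIMITIVES)) if rng.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Standard TD(0) update toward the one-step bootstrapped target.
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() * (not done) - Q[s, a])
        s = s2
        if done:
            break

# Replay the greedy policy once and cache it as a single reusable chunk.
chunk, s, done = [], 0, False
while not done and len(chunk) < 20:
    a = greedy(s)
    chunk.append(a)
    s, _, done = step(s, a)
print("learned action chunk:", chunk)        # expected: [1, 1, 1, 1]
```

The cached chunk captures what such a flat model can and cannot do: the sequence replays fluidly as one unit, but nothing in the table represents the goal or any hierarchical grouping of sub-sequences, which is exactly the limitation the review highlights.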


2012 ◽  
Vol 05 (12) ◽  
pp. 128-133
Author(s):  
Takashi Kuremoto ◽  
Koichi Hashiguchi ◽  
Keita Morisaki ◽  
Shun Watanabe ◽  
Kunikazu Kobayashi ◽  
...  

2007 ◽  
Vol 04 (02) ◽  
pp. 211-243 ◽  
Author(s):  
Yoonsuck Choe ◽  
Huei-Fang Yang ◽  
Daniel Chern-Yeow Eng

What is available to developmental programs in autonomous mental development, and what should be learned at the very early stages of mental development? Our observation is that sensory and motor primitives are the most basic components present at the beginning, and what developmental agents need to learn from these resources is what their internal sensory states stand for. In this paper, we investigate this question in the context of a simple, biologically motivated visuomotor agent. We observe and acknowledge, as many other researchers do, that action plays a key role in providing content to the sensory state. We propose a simple yet powerful learning criterion, invariance, which simply means that the internal state does not change over time. We show that after reinforcement learning based on the invariance criterion, the property of the action sequence associated with an internal sensory state accurately reflects the property of the stimulus that triggered that internal state. In that way, the meaning of the internal sensory state can be firmly grounded in the property of that particular action sequence. We expect the framing of the problem and the proposed solution presented in this paper to help shed new light on autonomous understanding in developmental agents such as humanoid robots.
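
A minimal sketch of how the invariance criterion could drive learning follows; the orientation/gaze environment and every name in it are illustrative assumptions, not the authors' implementation. The agent is rewarded only when a gaze movement leaves its internal orientation-filter state unchanged, and the movements it comes to prefer for each state end up running parallel to the stimulus orientation, grounding the state's meaning in the action sequence.

```python
import numpy as np

# Minimal sketch (illustrative assumptions throughout; not the authors' code):
# internal states are orientation-filter indices, actions are gaze movements,
# and reward is 1 exactly when the internal state is unchanged by the movement,
# i.e. when the gaze moves along the oriented line it is looking at.

ORIENTATIONS = [0, 45, 90, 135]               # line orientations = internal states
MOVES = [0, 45, 90, 135, 180, 225, 270, 315]  # gaze-movement directions (degrees)
ALPHA, EPS = 0.2, 0.2

rng = np.random.default_rng(1)
Q = np.zeros((len(ORIENTATIONS), len(MOVES)))

def invariant(state_idx, move_idx):
    """The filter response stays the same iff the gaze moves along the line."""
    return MOVES[move_idx] % 180 == ORIENTATIONS[state_idx]

for trial in range(5000):
    s = rng.integers(len(ORIENTATIONS))       # look at a randomly oriented line
    if rng.random() < EPS:
        a = rng.integers(len(MOVES))
    else:                                     # greedy with random tie-breaking
        a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
    r = 1.0 if invariant(s, a) else 0.0       # invariance criterion as reward
    Q[s, a] += ALPHA * (r - Q[s, a])          # bandit-style value update

# For each internal state, the high-valued movements run parallel to the line,
# so the direction of the action sequence grounds what the state "means".
for i, theta in enumerate(ORIENTATIONS):
    preferred = [MOVES[j] for j in np.flatnonzero(Q[i] > 0.5)]
    print(f"filter tuned to {theta:>3} deg -> preferred moves {preferred} deg")
```

The printout pairs each orientation-tuned state with the two movement directions parallel to it, which is the grounding relation the abstract describes: the property of the invariance-preserving action sequence mirrors the property of the stimulus.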


Decision ◽  
2016 ◽  
Vol 3 (2) ◽  
pp. 115-131 ◽  
Author(s):  
Helen Steingroever ◽  
Ruud Wetzels ◽  
Eric-Jan Wagenmakers
