Multiple Action Sequence Learning and Automatic Generation for a Humanoid Robot Using RNNPB and Reinforcement Learning

2012 ◽  
Vol 05 (12) ◽  
pp. 128-133
Author(s):  
Takashi Kuremoto ◽  
Koichi Hashiguchi ◽  
Keita Morisaki ◽  
Shun Watanabe ◽  
Kunikazu Kobayashi ◽  
...  
2019 ◽  
Author(s):  
Eric Garr

Animals engage in intricately woven and choreographed action sequences that are constructed through trial-and-error learning. The mechanisms by which the brain links together individual actions that are later recalled as fluid chains of behavior are not fully understood, but there is broad consensus that the basal ganglia play a crucial role in this process. This paper presents a comprehensive review of the role of the basal ganglia in action sequencing, focusing on whether the computational framework of reinforcement learning can capture key behavioral features of sequencing and the neural mechanisms that underlie them. While a simple neurocomputational model of reinforcement learning can capture key features of action sequence learning, it is not sufficient to account for goal-directed control of sequences or their hierarchical representation. The hierarchical structure of action sequences, in particular, poses a challenge for building better models of action sequencing, and it is in this regard that further investigation of basal ganglia information processing may be informative.
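The abstract does not specify a model, but the kind of "simple neurocomputational model of reinforcement learning" it refers to can be illustrated with a minimal sketch: tabular Q-learning on a deterministic chain, where the agent is rewarded only when an entire action sequence is completed and wrong actions abort the attempt. The sequence, action count, and hyperparameters below are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical setup: the behavioral chain is a fixed sequence of actions;
# the agent is rewarded only when the whole chain is completed.
SEQUENCE = [2, 0, 1, 2]          # target action at each step of the chain
N_ACTIONS = 3
L = len(SEQUENCE)

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Optimistic initialization drives systematic exploration of each step.
q = [[1.0] * N_ACTIONS for _ in range(L)]

def greedy(state):
    return max(range(N_ACTIONS), key=lambda a: q[state][a])

for _ in range(3000):
    state, done = 0, False
    while not done:
        a = random.randrange(N_ACTIONS) if random.random() < EPS else greedy(state)
        if a == SEQUENCE[state]:
            if state + 1 == L:            # chain completed: terminal reward
                target, done = 1.0, True
            else:                         # correct link: bootstrap from next step
                target = GAMMA * max(q[state + 1])
        else:                             # wrong action aborts the sequence
            target, done = 0.0, True
        q[state][a] += ALPHA * (target - q[state][a])
        if not done:
            state += 1

policy = [greedy(s) for s in range(L)]
print(policy)   # the recovered chain of actions
```

Because reward arrives only at the end of the chain, value must propagate backward through the sequence, which is exactly the kind of "chaining" behavior the review discusses; the sketch deliberately omits goal-directed control and hierarchy, the two features the review argues such models fail to capture.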


Author(s):  
Heecheol Kim ◽  
Masanori Yamada ◽  
Kosuke Miyoshi ◽  
Tomoharu Iwata ◽  
Hiroshi Yamakawa

Author(s):  
James Cunningham ◽  
Christian Lopez ◽  
Omar Ashour ◽  
Conrad S. Tucker

Abstract In this work, a Deep Reinforcement Learning (RL) approach is proposed for Procedural Content Generation (PCG) that seeks to automate the generation of multiple related virtual reality (VR) environments for enhanced personalized learning. This allows the user to be exposed to multiple virtual scenarios that demonstrate a consistent theme, which is especially valuable in an educational context. RL approaches to PCG offer the advantage of not requiring training data, in contrast to PCG approaches built on supervised learning. This work advances the state of the art in RL-based PCG by demonstrating the ability to generate a diversity of contexts that teach the same underlying concept. A case study demonstrates the feasibility of the proposed RL-based PCG method using examples of probability distributions in both manufacturing-facility and grocery-store virtual environments. The method demonstrated in this paper has the potential to enable the automatic generation of a variety of virtual environments connected by a common concept or theme.
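The abstract gives no implementation details, but the core idea of RL-based PCG for teaching a concept can be sketched without training data: below is a minimal, illustrative example (not the authors' method) in which the shared "concept" is assumed to be a target categorical distribution, and a generator policy is trained with a REINFORCE-style update to fill slots (grocery items or machine parts, the theme being a relabeling) so that the layout reflects that distribution. All names, rewards, and parameters are invented.

```python
import math
import random

random.seed(1)

# Hypothetical setup: the shared concept is a categorical distribution;
# the generator fills SLOTS positions and is rewarded for how closely the
# generated layout reflects that distribution.
TARGET = [0.5, 0.3, 0.2]   # concept the environment should teach
N_TYPES = len(TARGET)
SLOTS = 20

def probs(logits):
    """Softmax over the generator's logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reward(layout):
    """1 minus total-variation distance between layout frequencies and TARGET."""
    freqs = [layout.count(k) / SLOTS for k in range(N_TYPES)]
    return 1.0 - 0.5 * sum(abs(f - t) for f, t in zip(freqs, TARGET))

# REINFORCE-style update of the generator's categorical policy.
logits = [0.0] * N_TYPES
baseline, lr = 0.0, 0.05
for step in range(2000):
    p = probs(logits)
    layout = random.choices(range(N_TYPES), weights=p, k=SLOTS)
    r = reward(layout)
    baseline += 0.05 * (r - baseline)        # running-mean baseline
    counts = [layout.count(k) for k in range(N_TYPES)]
    for k in range(N_TYPES):                 # score function of a categorical sample
        logits[k] += lr * (r - baseline) * (counts[k] - SLOTS * p[k])

print(probs(logits))   # learned generation frequencies
```

Because the reward depends only on the distribution of generated content, not on any labeled examples, the sketch illustrates the abstract's point that RL-based PCG needs no training data, and the same trained policy can be rendered under different themes.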

