CREATING NOVEL GOAL-DIRECTED ACTIONS AT CRITICALITY: A NEURO-ROBOTIC EXPERIMENT

2009 ◽  
Vol 05 (01) ◽  
pp. 307-334 ◽  
Author(s):  
HIROAKI ARIE ◽  
TETSURO ENDO ◽  
TAKAFUMI ARAKAKI ◽  
SHIGEKI SUGANO ◽  
JUN TANI

The present study examines the possible roles of cortical chaos in generating novel actions for achieving specified goals. The proposed neural network model consists of a sensory-forward model responsible for parietal lobe functions, a chaotic network model for premotor functions, and a prefrontal cortex model responsible for manipulating the initial state of the chaotic network. Experiments using a humanoid robot were performed with the model and showed that action plans satisfying specific novel goals can be generated by diversely modulating and combining previously learned behavioral patterns at critical dynamical states. Although this criticality resulted in fragile goal achievement in the robot's physical environment, reinforcement of the successful trials provided a substantial gain in robustness. The discussion leads to the hypothesis that the consolidation of numerous sensory-motor experiences into memory, the mediation of diverse imagery in memory by cortical chaos, and the repeated enaction and reinforcement of newly generated effective trials are indispensable for realizing an open-ended development of cognitive behaviors.
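The key property the abstract relies on, sensitivity to the initial state near a critical dynamical regime, can be illustrated with a toy system. The sketch below is not the paper's network; it uses a logistic map tuned near the edge of chaos to show how small changes in the initial state (the role played by the prefrontal module in the model) yield different trajectories.

```python
# Toy illustration (not the paper's model): a logistic map near the
# edge of chaos. Two nearby initial states produce trajectories that
# drift apart -- the property a prefrontal module could exploit by
# setting the chaotic network's initial state to select behaviors.
def logistic_trajectory(x0, r=3.56995, steps=50):
    """Iterate x -> r*x*(1-x) from initial state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200)
b = logistic_trajectory(0.201)  # slightly different initial state
divergence = max(abs(x - y) for x, y in zip(a, b))
print(f"max divergence over 50 steps: {divergence:.4f}")
```

The parameter value r = 3.56995 (near the period-doubling accumulation point) is chosen only to place the toy system near criticality; the actual model in the paper is a trained recurrent network.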

2000 ◽  
Vol 14 (17) ◽  
pp. 1815-1824
Author(s):  
M. ANDRECUT ◽  
M. K. ALI

We describe a new biologically motivated model of the sensory-motor mechanism. The model is based on a self-organizing neural network with modifiable lateral interactions and a "master-slave" connection between the sensory and motor modules. The results show that the described model provides a useful capability that can be exploited by autonomous agents. An example implementation for the case of a "moving virtual creature" is also presented.
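The "master-slave" coupling described here can be sketched minimally: a self-organizing (Kohonen-style) sensory layer selects a winning unit for each input, and a motor vector attached to that winner is read out as the action. All names, dimensions, and learning rules below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Hypothetical minimal sketch of a master-slave sensory-motor net:
# the sensory layer (master) self-organizes via winner-take-all
# adaptation; the motor layer (slave) stores one command per unit.
rng = np.random.default_rng(0)
n_units, n_in, n_out = 8, 3, 2
W_sense = rng.random((n_units, n_in))   # sensory weights (master)
W_motor = rng.random((n_units, n_out))  # motor commands (slave)

def step(x, lr=0.2):
    """One sensory-motor cycle: find the winning sensory unit,
    adapt it toward the input, return its motor command."""
    winner = np.argmin(np.linalg.norm(W_sense - x, axis=1))
    W_sense[winner] += lr * (x - W_sense[winner])  # self-organization
    return W_motor[winner]

x = rng.random(n_in)
motor = step(x)
print("motor command:", motor)
```

Lateral interactions between sensory units (neighborhood adaptation, as in a full Kohonen map) are omitted here for brevity.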


2014 ◽  
Vol 41 ◽  
pp. 32-39 ◽  
Author(s):  
Suhas E. Chelian ◽  
Matthias D. Ziegler ◽  
Peter Pirolli ◽  
Rajan Bhattacharyya

2018 ◽  
Author(s):  
Zhongqiao Lin ◽  
Chechang Nie ◽  
Yuanfeng Zhang ◽  
Yang Chen ◽  
Tianming Yang

Abstract
Value-based decision making is a process in which humans or animals maximize their gain by selecting appropriate options and performing the corresponding actions to acquire them. Whether the evaluation of options in the brain can be independent of their action contingency has been hotly debated. To address the question, we trained rhesus monkeys to make decisions by integrating evidence and studied whether the integration occurred in the stimulus or the action domain in the brain. After the monkeys learned the task, we recorded from both the orbitofrontal cortex (OFC) and the dorsolateral prefrontal cortex (DLPFC). We found that OFC neurons encoded the value associated with a single piece of evidence in the stimulus domain. Importantly, the representation of value in the OFC was transient, and the information was not integrated across time for decisions. The integration of evidence was observed only in the DLPFC, and only in the action domain. We further used a neural network model to show how the stimulus-to-action transition of value information may be computed in the DLPFC. Our results indicate that decision making in the brain is computed in the action domain, without an intermediate stimulus-based decision stage.
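The contrast the abstract draws, transient per-stimulus value signals versus accumulation in the action domain, can be sketched with a simple bounded accumulator. This is a hypothetical illustration, not the recorded circuitry or the authors' network model: each stimulus contributes a signed value sample (OFC-like transient signal), and a DLPFC-like accumulator sums that evidence toward one of two actions.

```python
# Hypothetical sketch: evidence integration in the action domain.
# Each sample is signed evidence, positive favouring action A and
# negative favouring action B; the accumulator commits at a bound.
def decide(stimulus_values, bound=2.0):
    """Return 'A' or 'B' after integrating signed evidence samples."""
    total = 0.0
    for v in stimulus_values:          # transient per-stimulus value
        total += v                     # integration in action domain
        if total >= bound:
            return "A"
        if total <= -bound:
            return "B"
    return "A" if total >= 0 else "B"  # forced choice at trial end

print(decide([0.9, 0.8, 0.7]))  # cumulative evidence crosses +bound
```

The point of the sketch is that no intermediate stimulus-based decision is ever formed: only the running action-domain total is kept.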

