Evidence for a specific internal representation of motion-force relationships during object manipulation

2003 ◽  
Vol 88 (1) ◽  
pp. 60-72 ◽  
Author(s):  
Christopher D. Mah ◽  
Ferdinando A. Mussa-Ivaldi

1973 ◽
Author(s):  
J. Barsaloux ◽  
T. J. Bouchard ◽  
S. Bush

2012 ◽  
Author(s):  
Daniel J. Weiss ◽  
Kate Chapman

Author(s):  
Joseph F. Boudreau ◽  
Eric S. Swanson

Built-in datatypes and C++ classes are introduced in this chapter and discussed in relation to the important notion of encapsulation: the separation between a datatype's internal representation and the operations to which it responds. Encapsulation later becomes an important consideration in the design of custom C++ classes that programmers develop themselves. It is illustrated with the built-in floating-point datatypes float and double and with the complex class from the C++ standard library. While a sophisticated programmer is aware of the internal representation of data and of its resulting limitations, encapsulation allows these to be treated as details, freeing one to think at a higher level of program design. Some simple numerical examples are discussed in the text and in the exercises.


2009 ◽  
Vol 80 (6) ◽  
pp. 065106 ◽  
Author(s):  
Mohd Nashrul Mohd Zubir ◽  
Bijan Shirinzadeh ◽  
Yanling Tian

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2678
Author(s):  
Sergey A. Lobov ◽  
Alexey I. Zharinov ◽  
Valeri A. Makarov ◽  
Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN's function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of global network memory based on the synaptic vector field approach, used to validate the results and to calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. The learning is not perfect, however: the robot eventually revisits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps the positive and negative areas, allowing it to escape the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.

