Developmental Network-2: The Autonomous Generation of Optimal Internal-Representation Hierarchy

Author(s):  
Xiang Wu ◽  
Zejia Zheng ◽  
Juyang Weng
2016 ◽  
Vol 25 (04) ◽  
pp. 1650022 ◽  
Author(s):  
Dongshu Wang ◽  
Yihai Duan

Natural language understanding plays an important role in our daily life, and it is significant to study how a computer can understand human language and produce the corresponding action or response. Most prior language acquisition models adopt handcrafted internal representations; they are not sufficiently brain-based, nor comprehensive enough to account for all branches of psychology and cognitive science. Here, an emergent developmental network (DN) is used to learn, infer from, and think about a knowledge base represented as a finite automaton, from sensory and motor experience grounded in its operational environment. This work differs in that it emphasizes the mechanisms that enable a system to develop emergent representations from its operational experience. By emergent, we mean a pattern of responses across multiple elements that corresponds to an event outside the closed skull, even though each element of the pattern (e.g. a pixel, muscle, or neuron) typically has no meaning on its own. Internal, unsupervised neurons of the DN are used to represent short contexts, and competition among the internal neurons enables them to represent different short contexts. By internal, we mean that the neurons inside a brain are not directly supervised by the external environment outside the skull. We analyze how internal neurons represent temporal contexts and how the feature neurons of the DN represent earlier contexts. For a relatively complex training sequence (denoted as DN-2 in this work), the accuracy of $Z$ state inferring and $X$ thinking reaches 100% and 75%, respectively. Comparative experiments between this emergent method and the symbolic method (taking DN-6 in this work as the example), in which the corresponding $Z$ state inferring and $X$ thinking accuracies are 100% and 82.1%, and 85.7% and 75%, respectively, demonstrate the efficiency of the DN for natural language inferring and thinking. The complexity of the finite automaton is low, as are the temporal contexts, but the same principle is potentially applicable to more complex cases.
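As a loose illustration of the competition principle described above (a sketch of ours, not the authors' DN implementation), the code below shows a single layer of internal neurons that receives a concatenated sensory–motor pattern (X, Z), selects a top-1 winner, and updates only the winner with a simple averaging Hebbian-like rule; the layer size, initialization, and learning rate are illustrative assumptions.

```cpp
// Minimal sketch (not the authors' DN code): one layer of internal neurons
// receives a concatenated (X, Z) input, competes via top-1 winner-take-all,
// and updates the winning neuron with an averaging Hebbian-like rule.
#include <cmath>
#include <cstddef>
#include <vector>

struct InternalLayer {
    std::vector<std::vector<double>> w;  // one weight vector per neuron
    std::vector<double> age;             // per-neuron firing count

    InternalLayer(std::size_t neurons, std::size_t inputDim)
        : w(neurons, std::vector<double>(inputDim, 0.01)), age(neurons, 0.0) {}

    static double dot(const std::vector<double>& a, const std::vector<double>& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
        return s;
    }

    static void normalize(std::vector<double>& v) {
        double n = std::sqrt(dot(v, v));
        if (n > 0.0) for (double& x : v) x /= n;
    }

    // One update step: p is the concatenated (X, Z) input pattern.
    std::size_t step(std::vector<double> p) {
        normalize(p);
        std::size_t winner = 0;
        double best = -2.0;
        for (std::size_t j = 0; j < w.size(); ++j) {   // competition among internal neurons
            std::vector<double> wj = w[j];
            normalize(wj);
            double r = dot(wj, p);                      // pre-response of neuron j
            if (r > best) { best = r; winner = j; }
        }
        age[winner] += 1.0;
        double lr = 1.0 / age[winner];                  // simple averaging learning rate
        for (std::size_t i = 0; i < p.size(); ++i)      // Hebbian-like update of the winner
            w[winner][i] = (1.0 - lr) * w[winner][i] + lr * p[i];
        return winner;                                   // index of the firing internal neuron
    }
};
```

Under such a rule, each neuron tends to settle on a recurring (X, Z) combination, which is one way a short temporal context can come to be represented by an unsupervised internal unit.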


Author(s):  
Joseph F. Boudreau ◽  
Eric S. Swanson

Built-in datatypes and C++ classes are introduced in this chapter, and discussed in relation to the important notion of encapsulation, which refers to the separation between the internal representation of the datatype and the operations to which it responds. Encapsulation later becomes an important consideration in the design of custom C++ classes that programmers develop themselves. It is illustrated with built-in floating-point datatypes float and double and with the complex class from the C++ standard library. While a sophisticated programmer is aware of the internal representation of data and its resulting limitations, encapsulation allows one to consider these as details and frees one to think at a higher level of program design. Some simple numerical examples are discussed in the text and in the exercises.
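As a brief illustration of the point about encapsulation (an example of ours, not one from the chapter), the snippet below uses double and std::complex<double> purely through their operations, then queries std::numeric_limits to show where the internal representation still surfaces as finite precision.

```cpp
// Illustrative sketch (not from the chapter): encapsulated use of built-in
// floating-point types and std::complex, plus a peek at representation limits.
#include <complex>
#include <iostream>
#include <limits>

int main() {
    // Used through their operations, not their bit layouts:
    double x = 0.1 + 0.2;
    std::complex<double> z(3.0, 4.0);
    std::cout << "x = " << x << ", |z| = " << std::abs(z) << '\n';

    // The internal representation still leaks through as finite precision:
    std::cout << "0.1 + 0.2 == 0.3 ? " << std::boolalpha << (x == 0.3) << '\n';
    std::cout << "float digits10 = "  << std::numeric_limits<float>::digits10
              << ", double digits10 = " << std::numeric_limits<double>::digits10 << '\n';
    std::cout << "double epsilon = "  << std::numeric_limits<double>::epsilon() << '\n';
    return 0;
}
```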


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2678
Author(s):  
Sergey A. Lobov ◽  
Alexey I. Zharinov ◽  
Valeri A. Makarov ◽  
Victor B. Kazantsev

Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. We propose a measure of global network memory based on the synaptic vector field approach, which we use to validate the results and to calculate information characteristics, including learning curves. We show that after training, the SNN can effectively control the robot’s cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect, and the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps the positive and negative areas, allowing it to escape the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
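The Hebbian-type plasticity that shapes the initially non-specific architecture can be sketched with a pair-based spike-timing-dependent plasticity (STDP) rule; the amplitudes, time constant, and weight bound below are generic assumptions for illustration, not the parameters of the authors' SNN.

```cpp
// Illustrative sketch (assumed parameters, not the authors' SNN): a pair-based
// STDP update that strengthens a synapse when the presynaptic spike precedes
// the postsynaptic one and weakens it otherwise.
#include <algorithm>
#include <cmath>
#include <iostream>

struct StdpRule {
    double aPlus  = 0.01;   // potentiation amplitude (assumed)
    double aMinus = 0.012;  // depression amplitude (assumed)
    double tau    = 20.0;   // STDP time constant in ms (assumed)
    double wMax   = 1.0;    // upper bound on the synaptic weight

    // dt = t_post - t_pre in milliseconds; returns the updated weight.
    double update(double w, double dt) const {
        double dw = (dt >= 0.0) ?  aPlus  * std::exp(-dt / tau)   // pre before post: potentiation
                                 : -aMinus * std::exp( dt / tau); // post before pre: depression
        return std::clamp(w + dw, 0.0, wMax);
    }
};

int main() {
    StdpRule stdp;
    double w = 0.5;
    w = stdp.update(w, +5.0);   // causal pairing strengthens the synapse
    w = stdp.update(w, -15.0);  // anti-causal pairing weakens it
    std::cout << "weight after two pairings: " << w << '\n';
    return 0;
}
```

In a network of such synapses, repeated stimulation at specific loci biases the weights toward particular directions, which is the kind of shaping the synaptic vector field measure summarizes at the network level.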


Perception ◽  
1998 ◽  
Vol 27 (1) ◽  
pp. 69-86 ◽  
Author(s):  
Michel-Ange Amorim ◽  
Jack M Loomis ◽  
Sergio S Fukusima

An unfamiliar configuration lying in depth and viewed from a distance is typically seen as foreshortened. The hypothesis motivating this research was that a change in an observer's viewpoint, even when the configuration is no longer visible, induces an imaginal updating of the internal representation and thus reduces the degree of foreshortening. In experiment 1, observers attempted to reproduce configurations defined by three small glowing balls on a table 2 m distant, under conditions of darkness, following ‘viewpoint change’ instructions. In one condition, observers reproduced the continuously visible configuration using three other glowing balls on a nearer table while imagining standing at the distant table. In the other condition, observers viewed the configuration, it was then removed, and they walked in darkness to the far table and reproduced it there. Even though the observers received no additional information about the stimulus configuration while walking to the table, they were more accurate (less foreshortening) than in the other condition. In experiment 2, observers reproduced distant configurations on a nearer table more accurately when doing so from memory than when doing so while viewing the distant stimulus configuration. In experiment 3, observers performed both the real and the imagined perspective change after memorizing the remote configuration. The results of the three experiments indicate that the continued visual presence of the target configuration impedes imagined perspective-change performance and that an actual change in viewpoint does not increase reproduction accuracy substantially over that obtained with an imagined change in viewpoint.


2013 ◽  
Vol 110 (4) ◽  
pp. 984-998 ◽  
Author(s):  
Wilsaan M. Joiner ◽  
Jordan B. Brayanov ◽  
Maurice A. Smith

The way that a motor adaptation is trained, for example, the manner in which it is introduced or the duration of the training period, can influence its internal representation. However, recent studies examining the gradual versus abrupt introduction of a novel environment have produced conflicting results. Here we examined how these factors determine the effector specificity of motor adaptation during visually guided reaching. After adaptation to velocity-dependent dynamics in the right arm, we estimated the amount of adaptation transferred to the left arm, using error-clamp measurement trials to directly measure changes in learned dynamics. We found that a small but significant amount of generalization to the untrained arm occurs under three different training schedules: a short-duration (15 trials) abrupt presentation, a long-duration (160 trials) abrupt presentation, and a long-duration gradual presentation of the novel dynamic environment. Remarkably, we found essentially no difference in the amount of interlimb generalization between these schedules, with 9–12% transfer of the trained adaptation for all three. However, the duration of training had a pronounced effect on the stability of the interlimb transfer: the transfer elicited from short-duration training decayed rapidly, whereas the transfer from both long-duration training schedules was considerably more persistent (<50% vs. >90% retention over the first 20 trials). These results indicate that the amount of interlimb transfer is similar for gradual versus abrupt training, and that interlimb transfer of learned dynamics can occur after even a brief training period, but longer training is required for an enduring effect.
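For readers unfamiliar with the paradigm, "velocity-dependent dynamics" in reaching experiments of this kind usually refers to a viscous curl force field applied to the hand by a robotic manipulandum; the abstract does not specify the field, so the form below is the standard one used in many force-field studies rather than the authors' exact protocol:

$$ \mathbf{f} = B\,\dot{\mathbf{x}}, \qquad B = b \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, $$

so the force is proportional in magnitude and perpendicular in direction to the hand velocity $\dot{\mathbf{x}}$, with a viscosity-like gain $b$ (values on the order of 10–15 N·s/m are common in such studies). Error-clamp trials then constrain the hand to a straight channel and measure the lateral force the arm produces against it, which quantifies the learned compensation directly.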

