Organizing memories for generalization in complementary learning systems

2021 ◽  
Author(s):  
Weinan Sun ◽  
Madhu Advani ◽  
Nelson Spruston ◽  
Andrew Saxe ◽  
James E Fitzgerald

Our ability to remember the past is essential for guiding our future behavior. Psychological and neurobiological features of declarative memories are known to transform over time in a process known as systems consolidation. While many theories have sought to explain the time-varying role of hippocampal and neocortical brain areas, the computational principles that govern these transformations remain unclear. Here we propose a theory of systems consolidation in which hippocampal-cortical interactions serve to optimize generalizations that guide future adaptive behavior. We use mathematical analysis of neural network models to characterize fundamental performance tradeoffs in systems consolidation, revealing that memory components should be organized according to their predictability. The theory shows that multiple interacting memory systems can outperform just one, normatively unifying diverse experimental observations and making novel experimental predictions. Our results suggest that the psychological taxonomy and neurobiological organization of declarative memories reflect a system optimized for behaving well in an uncertain future.
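To make the proposed tradeoff concrete, here is a minimal Python sketch (an illustrative toy, not the authors' actual model): a fast "hippocampal" store keeps experiences verbatim, while a slow "cortical" linear network extracts their predictable structure through replay. What the slow system can learn is exactly the generalizable component; the unpredictable residual is what would remain worth keeping episodically.

import numpy as np

rng = np.random.default_rng(0)

# Toy world: each experience pairs features x with an outcome y.
# y has a predictable part (a linear map of x) plus unpredictable noise.
W_true = rng.normal(size=(1, 5))
X = rng.normal(size=(200, 5))
y = X @ W_true.T + 0.5 * rng.normal(size=(200, 1))   # noise = unpredictable part

# Fast "hippocampal" system: verbatim episodic storage.
hippocampus = list(zip(X, y))

# Slow "cortical" system: a linear network trained by replaying episodes.
W_cortex = np.zeros((1, 5))
lr = 0.01
for night in range(100):                      # interleaved replay
    for x_i, y_i in hippocampus:
        err = y_i - W_cortex @ x_i
        W_cortex += lr * np.outer(err, x_i)   # gradient step on squared error

# The cortex recovers only the predictable map; the residual noise is
# unlearnable by generalization and stays episodic.
print("distance to predictable map:", np.linalg.norm(W_cortex - W_true))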

2013 ◽  
Vol 4 (1) ◽  
pp. 32-64 ◽  
Author(s):  
Elisa C. Castro ◽  
Ricardo R. Gudwin

In this paper the authors present the development of a scene-based episodic memory module for the cognitive architecture controlling an autonomous virtual creature in a simulated 3D environment. The scene-based episodic memory improves the creature's navigation system by evoking the objects to be considered during planning, based on episodic recollection of earlier scenes witnessed by the creature in which those objects were present. The authors introduce the main background on human memory systems and episodic memory research, and describe the main ideas behind the experiment.
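A minimal sketch of how such a module might look, assuming a scene reduces to a timestamp, a location, and the set of objects observed there (the paper's representation is likely richer):

from dataclasses import dataclass

@dataclass(frozen=True)
class Scene:
    time: float            # when the scene was witnessed
    location: tuple        # (x, y, z) position of the creature
    objects: frozenset     # labels of objects observed in the scene

class EpisodicMemory:
    def __init__(self):
        self.scenes = []

    def store(self, scene):
        self.scenes.append(scene)

    def recall_objects(self, location, radius=5.0):
        """Evoke objects from past scenes witnessed near the queried
        location, so the planner can take them into account."""
        def dist(a, b):
            return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        evoked = set()
        for scene in self.scenes:
            if dist(scene.location, location) <= radius:
                evoked |= scene.objects
        return evoked

memory = EpisodicMemory()
memory.store(Scene(0.0, (1, 0, 2), frozenset({"apple", "rock"})))
memory.store(Scene(3.5, (40, 0, 7), frozenset({"door"})))
print(memory.recall_objects((0, 0, 0)))   # -> {'apple', 'rock'}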


1997 ◽  
Vol 119 (2) ◽  
pp. 247-254 ◽  
Author(s):  
J. Mou

A method using artificial neural networks and inverse kinematics for machine tool error correction is presented. A generalized error model is derived using rigid-body kinematics to describe the error motion between the cutting tool and the workpiece at discrete temperature conditions. Neural network models are then built to track the time-varying machine tool errors under various thermal conditions. The output of the neural network models can be used to periodically modify, via an inverse kinematics technique, the error model's coefficients as the cutting process proceeds. Thus, the time-varying positioning errors at other points within the designated workspace can be estimated. Experimental results show that the time-varying machine tool errors can be estimated and corrected with the desired accuracy. The errors estimated by the proposed methodology can be used to adjust the depth of cut on the finish pass, or to correct the probing data of process-intermittent inspection, improving workpiece accuracy.
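As an illustration of the approach, the Python sketch below trains a small network to map temperature-sensor readings to error-model coefficients; the sensor count, coefficient parameterization, and training loop are assumptions for the example, not the paper's exact setup.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 temperature sensors on the machine structure,
# and an error model with 3 coefficients (both numbers are assumptions).
n_sensors, n_coeffs, n_hidden = 4, 3, 16

# Two-layer network: sensor temperatures -> error-model coefficients.
W1 = rng.normal(scale=0.1, size=(n_hidden, n_sensors)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_coeffs, n_hidden)); b2 = np.zeros(n_coeffs)

def predict(temps):
    h = np.tanh(W1 @ temps + b1)
    return W2 @ h + b2

def train(data, lr=1e-3, epochs=500):
    """Fit the network to calibration pairs (temps, coeffs) measured at
    discrete temperature conditions, e.g. by interferometry."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        for temps, coeffs in data:
            h = np.tanh(W1 @ temps + b1)
            err = (W2 @ h + b2) - coeffs          # squared-error gradient
            dh = (W2.T @ err) * (1 - h ** 2)      # backprop through tanh
            W2 -= lr * np.outer(err, h); b2 -= lr * err
            W1 -= lr * np.outer(dh, temps); b1 -= lr * dh

# In use, predict(current_temps) periodically refreshes the kinematic
# error model's coefficients, and the controller offsets the tool path
# (or corrects probing data) by the estimated error.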


2016 ◽  
Vol 57 ◽  
pp. 345-420 ◽  
Author(s):  
Yoav Goldberg

Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks, and recursive networks, as well as the computation-graph abstraction for automatic gradient computation.
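As a taste of the material covered, here is a minimal Python sketch of one input-encoding scheme from the tutorial, a continuous-bag-of-words encoder feeding a softmax classifier (left untrained for brevity; the vocabulary and dimensions are arbitrary):

import numpy as np

rng = np.random.default_rng(2)

# Continuous-bag-of-words encoding + feed-forward softmax classifier,
# one of the input-encoding schemes the tutorial surveys.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "awful": 4}
emb_dim, n_classes = 8, 2

E = rng.normal(scale=0.1, size=(len(vocab), emb_dim))   # word embeddings
W = rng.normal(scale=0.1, size=(n_classes, emb_dim))    # output layer
b = np.zeros(n_classes)

def encode(tokens):
    """CBOW encoding: average the embeddings of the known tokens."""
    ids = [vocab[t] for t in tokens if t in vocab]
    return E[ids].mean(axis=0)

def predict(tokens):
    logits = W @ encode(tokens) + b
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()

print(predict(["the", "movie", "was", "great"]))   # class probabilities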


2006 ◽  
Vol 2006 ◽  
pp. 1-25 ◽  
Author(s):  
Xiaofeng Liao ◽  
Xiaofan Yang ◽  
Wei Zhang

We study the dynamical behavior of a class of neural network models with time-varying delays. By constructing suitable Lyapunov functionals, we obtain sufficient delay-dependent criteria ensuring local and global asymptotic stability of the equilibrium of the neural network. Our results are applied to a two-neuron system with delayed connections between neurons, and some novel asymptotic stability criteria are also derived. The obtained conditions are shown to be less conservative and less restrictive than those reported in the existing literature. Some numerical examples are included to demonstrate our results.
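A representative member of this model class (the system analyzed in the paper may differ in details) is the Hopfield-type network with time-varying delays:

\dot{x}_i(t) = -c_i x_i(t)
             + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
             + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t - \tau_{ij}(t))\bigr)
             + I_i, \qquad i = 1, \dots, n,

with activation functions f_j, external inputs I_i, and bounded time-varying delays 0 \le \tau_{ij}(t) \le \tau satisfying \dot{\tau}_{ij}(t) \le \mu < 1. A typical Lyapunov-Krasovskii functional for delay-dependent criteria augments the state norm with integrals over the delay interval,

V(t) = \sum_{i=1}^{n} x_i^2(t)
     + \sum_{i=1}^{n}\sum_{j=1}^{n} \beta_{ij}
       \int_{t - \tau_{ij}(t)}^{t} f_j^2\bigl(x_j(s)\bigr)\, ds,

and stability follows when \dot{V}(t) < 0 along trajectories, a condition the bound \mu on the delay derivative makes verifiable.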


2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Weisong Zhou ◽  
Zhichun Yang

A class of dynamical neural network models with time-varying delays is considered. By employing the Lyapunov-Krasovskii functional method and the linear matrix inequality (LMI) technique, some new sufficient conditions ensuring the input-to-state stability (ISS) property of the nonlinear network systems are obtained. Finally, numerical examples are provided to illustrate the effectiveness of the derived results.
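For reference, input-to-state stability formalizes the requirement that bounded inputs produce bounded state deviations; in the standard formulation for delayed systems (the paper's precise definition may add detail), the state satisfies

\|x(t)\| \le \beta\bigl(\|x_{t_0}\|_{\tau},\, t - t_0\bigr)
         + \gamma\Bigl(\sup_{t_0 \le s \le t} \|u(s)\|\Bigr),
         \qquad t \ge t_0,

where \beta is a class-\mathcal{KL} function, \gamma is a class-\mathcal{K} function, u is the external input, and \|x_{t_0}\|_{\tau} = \sup_{-\tau \le s \le 0} \|x(t_0 + s)\| is the norm of the initial history segment. An estimate of this form is what LMI-based ISS criteria of this kind guarantee.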

