Multistability Analysis for Recurrent Neural Networks with Unsaturating Piecewise Linear Transfer Functions

2003 · Vol 15 (3) · pp. 639-662
Author(s): Zhang Yi, K. K. Tan, T. H. Lee

Multistability is a property that neural networks need for certain applications, such as decision making, where monostable networks can be computationally restrictive. This article analyzes multistability for a class of recurrent neural networks with unsaturating piecewise linear transfer functions, dealing fully with the three basic properties of a multistable network: boundedness, global attractivity, and complete convergence. The article makes the following contributions: conditions based on local inhibition are derived that guarantee the boundedness of some multistable networks; conditions for global attractivity are established, and bounds on the global attractive sets are obtained; complete convergence conditions are developed using novel energy-like functions; and simulation examples illustrate the theory.
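To make the setting concrete, here is a minimal simulation sketch of the additive recurrent model dx/dt = -x + W max(0, x) + h with an unsaturating piecewise linear (linear threshold) transfer function. This is illustrative only, not the paper's code: the two-neuron weights and inputs are hypothetical, chosen so that different initial conditions settle into different equilibria.

```python
import numpy as np

def simulate_lt_network(W, h, x0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = -x + W @ max(0, x) + h, the additive
    recurrent model with an unsaturating piecewise linear
    (linear threshold) transfer function."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ np.maximum(0.0, x) + h)
    return x

# Hypothetical two-neuron network: weak self-excitation plus mutual
# inhibition (values chosen for illustration, not from the paper).
W = np.array([[0.5, -1.0],
              [-1.0, 0.5]])
h = np.array([1.0, 1.0])

# Different initial conditions settle into different equilibria,
# the signature of multistability.
for x0 in ([2.0, 0.1], [0.1, 2.0]):
    print(x0, "->", simulate_lt_network(W, h, x0))
```

With these weights, the run started near [2.0, 0.1] settles at approximately (2, -1) and the mirrored start at approximately (-1, 2): two coexisting stable equilibria, while the state stays bounded despite the unsaturating transfer function.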

2001 · Vol 13 (8) · pp. 1811-1825
Author(s): Heiko Wersing, Wolf-Jürgen Beyn, Helge Ritter

We establish two conditions that ensure the nondivergence of additive recurrent networks with unsaturating piecewise linear transfer functions, also called linear threshold or semilinear transfer functions. As Hahnloser, Sarpeshkar, Mahowald, Douglas, and Seung (2000) showed, networks of this type can be efficiently built in silicon and exhibit the coexistence of digital selection and analog amplification in a single circuit. To obtain this behavior, the network must be multistable and nondivergent, and our conditions allow one to determine the regimes where this can be achieved with maximal recurrent amplification. The first condition applies to nonsymmetric networks and has a simple interpretation: the strength of local inhibition must match the sum of the excitatory weights converging onto a neuron. The second condition is restricted to symmetric networks but can also take into account the stabilizing effect of nonlocal inhibitory interactions. We demonstrate the application of the conditions on a simple example and on the orientation-selectivity model of Ben-Yishai, Lev Bar-Or, and Sompolinsky (1995), and show that they can be used to identify regions of maximal orientation-selective amplification and symmetry breaking in that model.
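The first condition lends itself to a simple numerical check. The sketch below is one schematic reading (an assumption on my part, not the paper's exact inequality): for each neuron, compare the local inhibition it receives against the sum of the excitatory weights converging onto it. The weight matrix and inhibition strengths are hypothetical.

```python
import numpy as np

def local_inhibition_sufficient(W_exc, inhibition):
    """Schematic check of the first (nonsymmetric) condition: for each
    neuron, the local inhibition it receives should match the sum of
    the excitatory weights converging onto it. This simplified reading
    tests whether inhibition is at least that sum; it is not the
    paper's exact inequality."""
    incoming_excitation = W_exc.sum(axis=1)  # row i: total excitation onto neuron i
    return np.all(inhibition >= incoming_excitation)

# Hypothetical excitatory weights and per-neuron local inhibition strengths.
W_exc = np.array([[0.0, 0.6, 0.3],
                  [0.4, 0.0, 0.5],
                  [0.2, 0.7, 0.0]])
inhibition = np.array([1.0, 0.9, 1.0])
print(local_inhibition_sufficient(W_exc, inhibition))  # True for these values
```

A check of this form is attractive in practice because it needs only the signs and magnitudes of the weights, with no symmetry assumption on the connectivity.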


2013 · Vol 2013 · pp. 1-16
Author(s): Xiaohong Wang, Huan Qi

This paper is concerned with the robust dissipativity problem for interval recurrent neural networks (IRNNs) with general activation functions, continuous time-varying delays, and infinite distributed time delays. By employing a new differential inequality, constructing two different kinds of Lyapunov functions, and dropping the requirement that the activation functions be bounded, monotonic, and differentiable, several sufficient conditions are established that guarantee the global robust exponential dissipativity of the addressed IRNNs. The conditions are expressed as linear matrix inequalities (LMIs), which can be checked easily with the LMI Control Toolbox in MATLAB. Furthermore, specific estimates of the positive invariant and globally exponentially attractive sets of the addressed system are derived. Compared with previous results in the literature, the results obtained here improve and extend earlier global dissipativity conclusions. Finally, two numerical examples demonstrate the effectiveness of the proposed results.
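To illustrate the LMI-checking workflow in Python rather than MATLAB, the sketch below tests the feasibility of a standard textbook delay-independent stability LMI with CVXPY. The paper's dissipativity LMIs are more elaborate; the matrices A and B here are hypothetical, and the LMI shown is a generic one standing in for theirs.

```python
import numpy as np
import cvxpy as cp

# Textbook delay-independent stability LMI for x'(t) = A x(t) + B x(t - tau):
# find P > 0 and Q > 0 with [[A'P + PA + Q, PB], [B'P, -Q]] < 0.
# Generic stand-in for the paper's dissipativity conditions; A, B hypothetical.
A = np.array([[-2.0, 0.1],
              [0.2, -3.0]])
B = np.array([[0.3, -0.2],
              [0.1, 0.4]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ B],
               [B.T @ P, -Q]])
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print("LMI feasible:", problem.status == cp.OPTIMAL)
```

A feasibility problem of this kind (constant objective, LMI constraints) is exactly what the MATLAB LMI Control Toolbox solves; any semidefinite programming solver can play the same role.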


2021
Author(s): Daniel B. Ehrlich, John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings specifying how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations and not with traditional sensory models of working memory. In task-optimized recurrent neural networks, we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions for identifying contingency representations in neural data and for dissociating different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.

