Adding Lateral Inhibition to a Simple Feedforward Network Enables It to Perform Exclusive-Or

1998 ◽  
Vol 10 (2) ◽  
pp. 277-280
Author(s):  
Leslie S. Smith

A simple laterally inhibited recurrent network that implements exclusive-or is demonstrated. The network consists of two mutually inhibitory units with logistic output functions, each receiving one external input and each connected to a simple threshold output unit. The mutually inhibitory units settle into a point attractor. We investigate the range of logistic steepness and the range of inhibitory weights for which the network can perform exclusive-or.
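As a rough illustration of the dynamics described here, the sketch below relaxes two mutually inhibitory logistic units to their point attractor for each input pattern. The update rule, the steepness k, and the inhibitory weight w are illustrative assumptions, not the paper's values; the paper's contribution is precisely the characterization of which (k, w) ranges let a threshold readout of the settled activities compute exclusive-or.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def settle(x1, x2, k=1.3, w=3.0, dt=0.05, steps=5000):
    """Relax two mutually inhibitory logistic units to a point attractor.

    Each unit sees its own external input minus w times the other unit's
    output, passed through a logistic with steepness k. The values of k
    and w are illustrative placeholders, not taken from the paper.
    """
    y1 = y2 = 0.5
    for _ in range(steps):
        t1 = logistic(k * (x1 - w * y2))
        t2 = logistic(k * (x2 - w * y1))
        # leaky relaxation toward the instantaneous logistic targets
        y1 += dt * (t1 - y1)
        y2 += dt * (t2 - y2)
    return y1, y2

for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y1, y2 = settle(*pattern)
    print(pattern, round(y1, 3), round(y2, 3))
```

With asymmetric input the driven unit wins and suppresses its partner, while symmetric inputs settle to a symmetric state; whether a single threshold on the settled activities then yields exclusive-or is exactly the parameter question the paper addresses.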

1997 ◽  
Author(s):  
William T. Farrar ◽  
Guy C. Van Orden

1982 ◽  
Vol 99 (1) ◽  
pp. 61-90 ◽  
Author(s):  
DONALD H. EDWARDS

1. The responses of the cockroach descending contralateral movement detector (DCMD) neurone to moving light stimuli were studied under both light- and dark-adapted conditions.
2. With light-adaptation the response of the DCMD to two moving 2° (diam.) spots of white light is less than the response to a single spot when the two spots are separated by less than 10° (Fig. 2).
3. With light-adaptation the response of the DCMD to a single moving light spot is a sigmoidally shaped function of the logarithm of the light intensity (Fig. 3a). With dark-adaptation the response of a DCMD to a single moving light spot is a bell-shaped function of the logarithm of the stimulus intensity (Fig. 3b). The absolute intensity that evokes a threshold response is about one-and-a-half log units less in the dark-adapted eye than in the light-adapted eye.
4. The decrease in the DCMD's response that occurs when two stimuli are closer than 10°, and when a single bright stimulus is made brighter, indicates that lateral inhibition operates among the afferents to the DCMD.
5. It is shown that this inhibition cannot be produced by a recurrent lateral inhibitory network. A model of the afferent path that contains a non-recurrent lateral inhibitory network can account for the response/intensity plots of the DCMD recorded under both light-adapted and dark-adapted conditions.
6. The threshold intensity of the DCMD is increased if a stationary pattern of light is present near the path of the moving spot stimulus. This is shown to be due to a peripheral tonic lateral inhibition that is distinct from the non-recurrent lateral inhibition described earlier.
7. It is suggested that the peripheral lateral inhibition acts to adjust the threshold of afferents to local background light levels, while the proximal non-recurrent network acts to enhance the acuity of the eye to small objects in the visual field, and to filter out whole-field stimuli.
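The non-recurrent inhibition invoked in point 5 — inhibition driven directly by the afferent inputs rather than fed back from the network's outputs — can be caricatured in a few lines. The sketch below is a generic one-pass lateral-inhibition filter on a ring of cells with an assumed nearest-neighbour weight of 0.3; it is not the paper's fitted model, but it reproduces the qualitative effect in point 2: two nearby spots evoke less total response than two well-separated ones.

```python
import numpy as np

def lateral_inhibition_ff(e, w=0.3):
    """Non-recurrent (one-pass) lateral inhibition on a ring of cells.

    Each cell's output is its excitatory input minus w times the
    *inputs* of its two neighbours -- no feedback from outputs, so a
    single pass suffices. The weight 0.3 is an illustrative assumption.
    """
    inhib = w * (np.roll(e, 1) + np.roll(e, -1))
    return np.maximum(e - inhib, 0.0)   # rectify: rates are non-negative

n = 12
single = np.zeros(n); single[3] = 1.0
near   = np.zeros(n); near[3] = near[4] = 1.0   # two adjacent spots
far    = np.zeros(n); far[3]  = far[8]  = 1.0   # two well-separated spots

print(lateral_inhibition_ff(single).sum())  # 1.0
print(lateral_inhibition_ff(near).sum())    # 1.4  (< 2 x single response)
print(lateral_inhibition_ff(far).sum())     # 2.0
```

Adjacent spots inhibit each other's cells, so their summed response falls below the sum of the two isolated responses, while well-separated spots sum linearly.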


2007 ◽  
Vol 97 (6) ◽  
pp. 3859-3867 ◽  
Author(s):  
Hiroshi Okamoto ◽  
Yoshikazu Isomura ◽  
Masahiko Takada ◽  
Tomoki Fukai

Temporal integration of externally or internally driven information is required for a variety of cognitive processes. This computation is generally linked with graded rate changes in cortical neurons, which typically appear during the delay period of a cognitive task in the prefrontal and other cortical areas. Here, we present a neural network model that produces graded (climbing or descending) neuronal activity. Model neurons are interconnected randomly by AMPA-receptor–mediated fast excitatory synapses and are subject to noisy background excitatory and inhibitory synaptic inputs. In each neuron, a prolonged afterdepolarizing potential follows every spike. Driven by an external input, the individual neurons then display bimodal rate changes between a baseline state and an elevated firing state, the latter sustained by regenerated afterdepolarizing potentials. When the variance of the background input and the uniform weight of the recurrent synapses are adequately tuned, we show that stochastic noise and reverberating synaptic input organize these bimodal changes into a sequence that exhibits graded population activity with a nearly constant slope. To test the validity of the proposed mechanism, we analyzed the graded activity of anterior cingulate cortex neurons in monkeys performing delayed conditional Go/No-go discrimination tasks. The delay-period activities of cingulate neurons exhibited bimodal activity patterns and trial-to-trial variability similar to those predicted by the proposed model.
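The core idea — graded population activity built from bimodal single-neuron transitions — can be caricatured without any biophysics: if each neuron is a bistable unit that jumps from baseline to an elevated rate with a small constant hazard, the population average climbs gradually even though every individual unit changes abruptly. The hazard, rates, and population size below are illustrative assumptions, and this toy omits the recurrent synapses and afterdepolarization that produce the bistability in the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, n_steps, dt = 5000, 100, 0.01   # a 1 s delay period
hazard = 0.5                              # per-second up-transition rate (illustrative)
base, elevated = 5.0, 40.0                # firing rates (Hz) of the two modes

state = np.zeros(n_units, dtype=bool)     # False = baseline, True = elevated
pop_rate = []
for _ in range(n_steps):
    # each still-baseline unit jumps up with probability hazard*dt this step
    flips = (~state) & (rng.random(n_units) < hazard * dt)
    state |= flips                        # transitions are one-way in this toy
    pop_rate.append(base + (elevated - base) * state.mean())

pop_rate = np.array(pop_rate)
print(pop_rate[0], pop_rate[-1])
```

Each unit's rate is strictly bimodal, yet the population mean ramps smoothly — close to linearly early in the delay — because the up-transitions are spread stochastically across units and time.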


2012 ◽  
Vol 108 (2) ◽  
pp. 513-527 ◽  
Author(s):  
Mark A. Bourjaily ◽  
Paul Miller

Animals must often make opposing responses to similar complex stimuli. Multiple sensory inputs from such stimuli combine to produce stimulus-specific patterns of neural activity. It is the differences between these activity patterns, even when small, that provide the basis for any differences in behavioral response. In the present study, we investigate three tasks with differing degrees of overlap in the inputs, each with just two response possibilities. We simulate behavioral output via winner-takes-all activity in one of two pools of neurons forming a biologically based decision-making layer. The decision-making layer receives inputs either in a direct stimulus-dependent manner or via an intervening recurrent network of neurons that form the associative layer, whose activity helps distinguish the stimuli of each task. We show that synaptic facilitation of synapses to the decision-making layer improves performance in these tasks, robustly increasing accuracy and speed of responses across multiple configurations of network inputs. Conversely, we find that synaptic depression worsens performance. In a linearly nonseparable task with exclusive-or logic, the benefit of synaptic facilitation lies in its superlinear transmission: effective synaptic strength increases with presynaptic firing rate, which enhances the already present superlinearity of presynaptic firing rate as a function of stimulus-dependent input. In linearly separable single-stimulus discrimination tasks, we find that facilitating synapses are always beneficial because synaptic facilitation always enhances any differences between inputs. Thus we predict that for optimal decision-making accuracy and speed, synapses from sensory or associative areas to decision-making or premotor areas should be facilitating.
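The superlinear-transmission argument can be sketched with a standard Tsodyks–Markram-style facilitation rule (an assumption for illustration; the paper's synapse model may differ in detail). The steady-state per-spike efficacy grows with presynaptic rate, so effective drive — efficacy times rate — grows faster than linearly.

```python
import math

def u_steady(rate, U=0.1, tau_f=1.0):
    """Steady-state facilitation variable of a Tsodyks-Markram-style
    facilitating synapse driven at a fixed presynaptic rate (Hz).

    Between spikes u decays back to U with time constant tau_f; at each
    spike u jumps by U*(1 - u). U and tau_f are illustrative values.
    """
    if rate == 0:
        return U
    decay = math.exp(-1.0 / (rate * tau_f))
    return U / (1.0 - (1.0 - U) * decay)

def drive(rate, **kw):
    # effective synaptic drive ~ per-spike efficacy times spike rate
    return u_steady(rate, **kw) * rate

# doubling the presynaptic rate more than doubles the drive:
print(drive(10.0), drive(20.0))
```

Because u_steady is an increasing function of rate (saturating at 1), facilitation amplifies whatever rate difference the inputs already carry — the property the paper exploits for the exclusive-or task.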


2010 ◽  
Vol 22 (3) ◽  
pp. 621-659 ◽  
Author(s):  
Bryan P. Tripp ◽  
Chris Eliasmith

Temporal derivatives are computed by a wide variety of neural circuits, but the problem of performing this computation accurately has received little theoretical study. Here we systematically compare the performance of diverse networks that calculate derivatives using cell-intrinsic adaptation and synaptic depression dynamics, feedforward network dynamics, and recurrent network dynamics. Examples of each type of network are compared by quantifying the errors they introduce into the calculation and their rejection of high-frequency input noise. This comparison is based on both analytical methods and numerical simulations with spiking leaky-integrate-and-fire (LIF) neurons. Both adapting and feedforward-network circuits provide good performance for signals with frequency bands that are well matched to the time constants of postsynaptic current decay and adaptation, respectively. The synaptic depression circuit performs similarly to the adaptation circuit, although strictly speaking, precisely linear differentiation based on synaptic depression is not possible, because depression scales synaptic weights multiplicatively. Feedback circuits introduce greater errors than functionally equivalent feedforward circuits, but they have the useful property that their dynamics are determined by feedback strength. For this reason, these circuits are better suited for calculating the derivatives of signals that evolve on timescales outside the range of membrane dynamics and, possibly, for providing the wide range of timescales needed for precise fractional-order differentiation.
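A minimal feedforward differentiator of the kind compared here subtracts a low-pass-filtered copy of the signal from the signal itself: y = (x - lowpass(x)) / τ has transfer function s/(1 + τs), which approximates a derivative for frequencies well below 1/τ. The time constant and test signal below are illustrative choices, not the paper's.

```python
import math

tau, dt = 0.05, 0.001   # filter time constant and Euler step (illustrative)
z = 0.0                  # low-pass filter state
max_err = 0.0
for i in range(int(10.0 / dt)):
    t = i * dt
    x = math.sin(t)              # test signal; true derivative is cos(t)
    z += dt * (x - z) / tau      # first-order low-pass of x
    y = (x - z) / tau            # fast pathway minus slow pathway
    if t > 1.0:                  # skip the filter's initial transient
        max_err = max(max_err, abs(y - math.cos(t)))
print(max_err)
```

For ωτ = 0.05 the gain and phase errors are a few percent; pushing the signal band toward 1/τ degrades the approximation, which is the band-matching limitation the comparison in the paper quantifies.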


2009 ◽  
Vol 21 (4) ◽  
pp. 1038-1067 ◽  
Author(s):  
Takuma Tanaka ◽  
Takeshi Kaneko ◽  
Toshio Aoyagi

Recently, multineuronal recording has allowed us to observe patterned firings, synchronization, oscillation, and global state transitions in the recurrent networks of central nervous systems. We propose a learning algorithm based on the process of information maximization in a recurrent network, which we call recurrent infomax (RI). RI maximizes information retention and thereby minimizes information loss through time in a network. We find that feeding external inputs consisting of information obtained from photographs of natural scenes into an RI-based model of a recurrent network results in the appearance of Gabor-like selectivity quite similar to that existing in simple cells of the primary visual cortex. We find that without external input, this network exhibits cell assembly–like and synfire chain–like spontaneous activity as well as a critical neuronal avalanche. In addition, we find that RI embeds externally input temporal firing patterns into the network so that it spontaneously reproduces these patterns after learning. RI provides a simple framework to explain a wide range of phenomena observed in in vivo and in vitro neuronal networks, and it will provide a novel understanding of experimental results for multineuronal activity and plasticity from an information-theoretic point of view.


1991 ◽  
Vol 138 (2) ◽  
pp. 93 ◽  
Author(s):  
W.H. Debany ◽  
C.R.P. Hartmann ◽  
T.J. Snethen

2020 ◽  
Author(s):  
Luca Rade

Emulators are internal models, first evolved for prediction in perception to shorten the feedback loop on motor action. However, the selective pressure on perception is to improve the fitness of decision-making, driving the evolution of emulators towards context-dependent payoff representation and integration of action planning, not enhanced prediction as is generally assumed. The result is a set of integrated perceptual, memory, representational, and imaginative capacities that process external input and stored internal input for decision-making, while simultaneously updating stored information. Perception, recall, imagination, theory of mind, and dreaming are the same process with different inputs. Learning proceeds via scaffolding on existing conceptual infrastructure, a weak form of embodied cognition. Discrete concepts are emergent from continuous dynamics and are in a perceptual, not representational, format. Language is also in perceptual format and enables precise abstract thought. In sum, what was initially a primitive system for short-term prediction in perception has evolved to perform abstract thought, store and retrieve memory, understand others, hold embedded action plans, build stable narratives, simulate scenarios, and integrate context dependence into perception. Crucially, emulators co-evolved with the emergence of societies, producing a mind-society system in which emulators are dysfunctional unless integrated into a society, which enables their complexity. The Target Emulator System, evolved initially for honest signaling, produces the emergent dynamics of the mind-society system and spreads variation-testing of behavior and thought patterns across a population. The human brain is the most dysfunctional in isolation, but the most effective given its context.

