Existence and Stability of Traveling Fronts in a Lateral Inhibition Neural Network

2012 ◽  
Vol 11 (4) ◽  
pp. 1543-1582 ◽  
Author(s):  
Yixin Guo

2013 ◽
Vol 43 (6) ◽  
pp. 2082-2092 ◽  
Author(s):  
Bruno Jose Torres Fernandes ◽  
George D. C. Cavalcanti ◽  
Tsang Ing Ren

2002 ◽  
Vol 14 (9) ◽  
pp. 2157-2179 ◽  
Author(s):  
M. W. Spratling ◽  
M. H. Johnson

A large and influential class of neural network architectures uses postintegration lateral inhibition as a mechanism for competition. We argue that these algorithms are computationally deficient in that they fail to generate, or learn, appropriate perceptual representations under certain circumstances. An alternative neural network architecture is presented here in which nodes compete for the right to receive inputs rather than for the right to generate outputs. This form of competition, implemented through preintegration lateral inhibition, does provide appropriate coding properties and can be used to learn such representations efficiently. Furthermore, this architecture is consistent with both neuroanatomical and neurophysiological data. We thus argue that preintegration lateral inhibition has computational advantages over conventional neural network architectures while remaining equally biologically plausible.
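The distinction drawn above, between nodes competing to generate outputs and nodes competing to receive inputs, can be caricatured in a few lines of NumPy. The sketch below is an illustrative reading, not the paper's actual model: `postintegration` lets nodes inhibit one another's outputs after integrating all their inputs, while `preintegration` damps each input line to a node in proportion to rival nodes' weighted claims on that input. All update rules, gains, and names here are assumptions.

```python
import numpy as np

def postintegration(x, W, steps=10, alpha=0.5):
    # Nodes integrate all inputs, then compete by inhibiting each
    # other's *outputs* (a soft winner-take-all on the activations).
    y = W @ x
    for _ in range(steps):
        inhib = alpha * (y.sum() - y)          # inhibition from rival outputs
        y = np.maximum(0.0, W @ x - inhib)
    return y

def preintegration(x, W, steps=10, alpha=0.5):
    # Nodes compete for the *inputs*: each input line to node j is damped
    # by the strongest rival claim on that input before integration.
    y = np.zeros(W.shape[0])
    for _ in range(steps):
        for j in range(W.shape[0]):
            # rival claim on input i: competing weight times rival activity
            rivals = np.delete(W, j, axis=0) * np.delete(y, j)[:, None]
            claim = rivals.max(axis=0) if rivals.size else 0.0
            y[j] = W[j] @ np.maximum(0.0, x - alpha * claim)
    return y
```

With two overlapping weight templates and an input matching the first, the preintegration scheme assigns the input entirely to the matching node, while the losing node is cut off from the shared input line rather than merely having its output suppressed.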


2014 ◽  
Vol 24 (06) ◽  
pp. 1165-1195 ◽  
Author(s):  
Emeric Bouin ◽  
Vincent Calvez ◽  
Grégoire Nadin

We perform the analysis of a hyperbolic model which is the analog of the Fisher-KPP equation. This model accounts for particles that move at maximal speed ϵ⁻¹ (ϵ > 0), and proliferate according to a reaction term of monostable type. We study the existence and stability of traveling fronts. We exhibit a transition depending on the parameter ϵ: for small ϵ the behavior is essentially the same as for the diffusive Fisher-KPP equation. However, for large ϵ the traveling front with minimal speed is discontinuous and travels at the maximal speed ϵ⁻¹. The traveling fronts with minimal speed are linearly stable in weighted L² spaces. We also prove local nonlinear stability of the traveling front with minimal speed when ϵ is smaller than the transition parameter.
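In the small-ϵ regime the abstract compares the model to the diffusive Fisher-KPP equation u_t = D u_xx + r u(1 − u), whose traveling fronts have the classical minimal speed c* = 2√(rD). The sketch below checks that prediction numerically for the diffusive equation only; it is not the paper's hyperbolic model, and the scheme, grid, and parameters are illustrative choices.

```python
import numpy as np

def fisher_kpp_front_speed(r=1.0, D=1.0, L=200.0, n=1000, T=40.0, dt=0.01):
    # Explicit finite-difference run of u_t = D u_xx + r u (1 - u) from a
    # step initial condition; the front relaxes toward the minimal speed
    # c* = 2 sqrt(r D) predicted by linearising about the u = 0 state.
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]                          # D*dt/dx^2 ~ 0.25: stable
    u = (x < L / 10).astype(float)            # step profile seeding the front

    def level_pos(v):                         # position of the v = 0.5 level set
        return x[np.argmax(v < 0.5)]

    steps = int(T / dt)
    pos_half = 0.0
    for k in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        u += dt * (D * lap + r * u * (1 - u))
        u[0], u[-1] = 1.0, 0.0                # pin the far-field states
        if k == steps // 2:
            pos_half = level_pos(u)
    # average front speed over the second half of the run
    return (level_pos(u) - pos_half) / (T / 2)
```

For r = D = 1 the measured speed approaches c* = 2 from below, consistent with the well-known slow relaxation of Fisher-KPP fronts toward their minimal speed.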


1996 ◽  
Vol 75 (3) ◽  
pp. 967-985 ◽  
Author(s):  
F. C. Rind ◽  
D. I. Bramwell

1. We describe a four-layered neural network (Fig. 1), based on the input organization of a collision-signaling neuron in the visual system of the locust, the lobula giant movement detector (LGMD). The 250 photoreceptors ("P" units) in layer 1 are excited by any change in illumination, generated when an image edge passes over them. Layers 2 and 3 incorporate both excitatory and inhibitory interactions, and layer 4 consists of a single output element, equivalent to the locust LGMD.

2. The output element of the neural network, the "LGMD", responds directionally when challenged with approaching versus receding objects, preferring approaching objects (Figs. 2-4). The time course and shape of the "LGMD" response match those of the LGMD (Fig. 4). Directionality is maintained with objects of various sizes and approach velocities. The network is tuned to direct approach (Fig. 5). The "LGMD" shows no directional selectivity for translatory motion at a constant velocity across the "eye", but its response increases with edge velocity (Figs. 6 and 9).

3. The critical image cues for a selective response to object approach by the "LGMD" are edges that change in extent or in velocity as they move (Fig. 7). Lateral inhibition is crucial to the selectivity of the "LGMD": the selective response is abolished, or much reduced, if lateral inhibition is removed from the network (Fig. 7). We conclude that lateral inhibition in the neuronal network for the locust LGMD also underlies the experimentally observed critical image cues for its directional response.

4. Lateral inhibition shapes the velocity tuning of the network for objects moving in the X and Y directions without approaching the eye (see Fig. 1). As an edge moves over the eye at a constant velocity, a race occurs between the excitation generated by edge movement, which passes down the network, and the inhibition that spreads laterally. Excitation must win this race for units in layer 3 to reach threshold (Fig. 8). The faster the edge moves over the eye, the more units in layer 3 reach threshold and pass excitation on to the "LGMD" (Fig. 9).

5. Lateral inhibition shapes the tuning of the network for objects moving in the Z direction, toward or away from the eye (see Fig. 1). As an object approaches the eye, excitation builds up in the "LGMD" throughout the movement, whereas the response to object recession is often brief, particularly at high velocities. During object motion, a critical race occurs between excitation passing down the network and inhibition directed laterally; excitation must win this race to produce the rapid buildup of excitation in the "LGMD" seen in the final stages of object approach (Figs. 10-12). The buildup is eliminated if, during object approach, excitation cannot win this race (as happens when the lateral spread of inhibition takes < 1 ms; Fig. 13, D and E). Removing all lateral inhibition increases the "LGMD" response to object approach, but overall directional selectivity is reduced because substantial residual network excitation follows object recession (Fig. 13B).

6. Directional selectivity for rapidly approaching objects is further enhanced at the level of the "LGMD" by the timing of a feed-forward inhibitory loop onto the "LGMD", activated when a large number of receptor units are excited in a short time. The inhibitory loop is activated at the end of object approach, truncating the excitatory "LGMD" response after approach has ceased but at the initiation of object recession (Figs. 2, 3, and 13). Eliminating the feed-forward inhibitory loop prolongs the "LGMD" response to both receding and approaching objects (Fig. 13F).
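The race between feed-forward excitation and delayed lateral inhibition described in points 4 and 5 can be caricatured in a few lines. The sketch below is deliberately minimal and is not the paper's simulation: parameters, the 1-D geometry, and the all-or-none firing rule are assumptions. It captures the threshold character of the race; fast edges excite units before neighbouring inhibition arrives, slow edges do not.

```python
import numpy as np

def fired_count(edge_speed, n=40, inhib_delay=2.0, inhib_gain=0.6,
                theta=0.8, dt=0.25, t_max=200.0):
    # Toy race model, loosely after the layer-3 description above.
    # An edge moving left-to-right excites unit k at t = k / edge_speed;
    # each excited unit sends inhibition to its neighbours after
    # `inhib_delay`. A unit "fires" if its net drive ever reaches theta,
    # i.e. if its excitation beats the inhibition racing toward it.
    t_on = np.arange(n) / edge_speed              # edge arrival times
    fired = np.zeros(n, dtype=bool)
    for t in np.arange(0.0, t_max, dt):
        e = (t >= t_on).astype(float)             # sustained excitation
        delayed = (t - inhib_delay >= t_on).astype(float)
        inh = np.zeros(n)
        inh[1:] += inhib_gain * delayed[:-1]      # inhibition from the left
        inh[:-1] += inhib_gain * delayed[1:]      # and from the right
        fired |= (e - inh) >= theta
    return int(fired.sum())
```

With these illustrative numbers, a fast edge lets every unit fire before its neighbour's inhibition arrives, while a slow edge is suppressed everywhere except at the leading unit; the graded velocity tuning of the real network comes from temporal summation in the "LGMD", which this toy omits.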


1989 ◽  
Vol 01 (02) ◽  
pp. 177-186 ◽
Author(s):  
Atilla E. Gunhan ◽  
László P. Csernai ◽  
Jørgen Randrup

We study an idealized neural network that may approximate certain neurophysiological features of natural neural systems. The network incorporates mutual lateral inhibition and is subjected to unsupervised learning by means of a Hebb-type learning principle. Its learning ability is analysed as a function of the strength of the lateral inhibition and of the training set.
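Hebb-type learning under mutual lateral inhibition can be sketched in its strong-inhibition limit, where competition reduces to winner-take-all. The code below is an assumption-laden illustration, not the model analysed in the paper: the winner rule, learning rate, and weight normalisation are all choices made here for concreteness.

```python
import numpy as np

def train_competitive_hebb(patterns, W_init, eta=0.2, epochs=50):
    # Unsupervised Hebb-type learning under strong mutual lateral
    # inhibition, idealised as winner-take-all: for each input, lateral
    # inhibition silences all but the most strongly driven unit, and only
    # that unit's weights move toward the input (with normalisation
    # playing the role of weight decay).
    W = W_init / np.linalg.norm(W_init, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patterns:
            winner = np.argmax(W @ x)           # competition outcome
            W[winner] += eta * (x - W[winner])  # Hebbian move toward input
            W[winner] /= np.linalg.norm(W[winner])
    return W
```

Starting from weights that weakly prefer distinct patterns, the two units specialise, each converging onto one training pattern; weakening the inhibition (e.g. replacing the hard argmax with a soft competition) blurs this specialisation, which is the kind of dependence on inhibition strength the abstract refers to.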


2016 ◽  
Author(s):  
Daniele Avitabile ◽  
Kyle C. A. Wedgwood

We study coarse pattern formation in a cellular automaton modelling a spatially-extended stochastic neural network. The model, originally proposed by Gong and Robinson [36], is known to support stationary and travelling bumps of localised activity. We pose the model on a ring and study the existence and stability of these patterns in various limits using a combination of analytical and numerical techniques. In a purely deterministic version of the model, posed on a continuum, we construct bumps and travelling waves analytically using standard interface methods from neural field theory. In a stochastic version with Heaviside firing rate, we construct approximate analytical probability mass functions associated with bumps and travelling waves. In the full stochastic model posed on a discrete lattice, where a coarse analytic description is unavailable, we compute patterns and their linear stability using equation-free methods. The lifting procedure used in the coarse time-stepper is informed by the analysis in the deterministic and stochastic limits. In all settings, we identify the synaptic profile as a mesoscopic variable, and the width of the corresponding activity set as a macroscopic variable. Stationary and travelling bumps have similar meso- and macroscopic profiles but different microscopic structure; hence we propose lifting operators which use microscopic motifs to disambiguate between them. We provide numerical evidence that waves are supported by a combination of high synaptic gain and long refractory times, while meandering bumps are elicited by short refractory times.
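The restriction from mesoscopic to macroscopic variables described above, from a synaptic profile to the width of its activity set, is simple to sketch. The function below is an illustrative restriction operator in the spirit of the equation-free setup, not the authors' implementation; the threshold and grid are assumptions.

```python
import numpy as np

def activity_set_width(u, x, theta):
    # Macroscopic restriction: from a synaptic profile u(x) (mesoscopic
    # variable) recover the width of the activity set {x : u(x) > theta}
    # by locating its outermost suprathreshold grid points. Accuracy is
    # limited to one grid spacing; interpolation would refine it.
    above = u > theta
    if not above.any():
        return 0.0
    idx = np.flatnonzero(above)
    return x[idx[-1]] - x[idx[0]]
```

For a Gaussian bump u(x) = exp(−x²) with threshold θ = 0.5, the activity set is |x| < √(ln 2), so the recovered width should be close to 2√(ln 2) on a fine grid.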

