On Stationary Distributions of Stochastic Neural Networks

2014 ◽  
Vol 51 (3) ◽  
pp. 837-857
Author(s):  
K. Borovkov ◽  
G. Decrouez ◽  
M. Gilson

The paper deals with nonlinear Poisson neuron network models with bounded memory dynamics, which can include both Hebbian learning mechanisms and refractory periods. The state of the network is described by the times elapsed since its neurons fired within the post-synaptic transfer kernel memory span, and by the current strengths of synaptic connections, the state spaces of our models being hierarchies of finite-dimensional components. We prove the ergodicity of the stochastic processes describing the behaviour of the networks, establish the existence of continuously differentiable stationary distribution densities (with respect to the Lebesgue measures of the corresponding dimensionality) on the components of the state space, and find upper bounds for them. For the density components, we derive a system of differential equations that can be solved explicitly only in a few of the simplest cases. Approaches to the approximate computation of the stationary density are discussed. One approach is to reduce the dimensionality of the problem by modifying the network so that a neuron cannot fire if the number of spikes it emitted within the post-synaptic transfer kernel memory span has reached a given threshold. We show that the stationary distribution of this ‘truncated’ network converges to that of the unrestricted network as the threshold increases, and that the convergence is at a superexponential rate. A complementary approach uses discrete Markov chain approximations to the network process.
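
The truncation idea lends itself to direct simulation. Below is a minimal, hypothetical Python sketch for a single neuron: spikes arrive as a Poisson process whose rate depends on the ages of the spikes emitted within the memory span, and firing is suppressed once the spike count within the span reaches the threshold. The rate function and all parameter values are illustrative assumptions, and the cross-neuron coupling and plastic synaptic strengths of the full model are omitted.

```python
import numpy as np

# Hypothetical sketch of the 'truncated' network idea for a single neuron:
# a Poisson process whose rate depends on the ages of recent spikes, with
# firing suppressed once K spikes fall within the memory span T. The rate
# function and parameters are illustrative; cross-neuron coupling and
# synaptic plasticity from the full model are omitted.
rng = np.random.default_rng(0)
T, K = 1.0, 4                  # memory span, spike-count threshold
dt, steps = 1e-3, 100_000      # time discretisation

def rate(ages):
    """Illustrative nonlinear rate: baseline plus a kernel over spike ages."""
    return 2.0 + sum(3.0 * np.exp(-a / 0.2) for a in ages)

spikes, t = [], 0.0            # spike times within the last T time units
for _ in range(steps):
    t += dt
    spikes = [s for s in spikes if t - s < T]   # forget spikes older than T
    if len(spikes) >= K:
        continue                                # truncation: cannot fire
    if rng.random() < rate([t - s for s in spikes]) * dt:
        spikes.append(t)
```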


AIP Advances ◽  
2016 ◽  
Vol 6 (11) ◽  
pp. 111305 ◽  
Author(s):  
A. Sboev ◽  
D. Vlasov ◽  
A. Serenko ◽  
R. Rybka ◽  
I. Moloshnikov

1994 ◽  
Vol 1 (1) ◽  
pp. 1-33
Author(s):  
P R Montague ◽  
T J Sejnowski

Some forms of synaptic plasticity depend on the temporal coincidence of presynaptic activity and postsynaptic response. This requirement is consistent with the Hebbian, or correlational, type of learning rule used in many neural network models. Recent evidence suggests that synaptic plasticity may depend in part on the production of a membrane-permeant, diffusible signal, so that spatial volume may also be involved in correlational learning rules. This latter form of synaptic change has been called volume learning. In both Hebbian and volume learning rules, interaction among synaptic inputs depends on the degree of coincidence of the inputs and is otherwise insensitive to their exact temporal order. Conditioning experiments and psychophysical studies have shown, however, that most animals are highly sensitive to the temporal order of sensory inputs. Although these experiments assay the behavior of the entire animal or perceptual system, they raise the possibility that nervous systems may be sensitive to temporally ordered events at many spatial and temporal scales. We suggest here the existence of a new class of learning rule, called a predictive Hebbian learning rule, that is sensitive to the temporal ordering of synaptic inputs. We show how this predictive learning rule could act at single synaptic connections and through diffuse neuromodulatory systems.
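
One common way to formalise such a temporally ordered rule is a temporal-difference-style prediction error broadcast as a diffuse signal that gates Hebbian updates. The Python sketch below illustrates this form under stated assumptions; the specific equations and parameter values are illustrative, not the paper's.

```python
import numpy as np

# Sketch of a predictive Hebbian update: weight changes are gated by a
# temporal-difference error delta = r(t) + gamma * V(t) - V(t-1), so that
# inputs active *before* the outcome receive the credit. The functional
# form and parameters are illustrative assumptions.
rng = np.random.default_rng(1)
n_inputs, eta, gamma = 10, 0.05, 0.9
w = np.zeros(n_inputs)
x_prev = np.zeros(n_inputs)

for step in range(5000):
    x = rng.random(n_inputs)          # presynaptic activity at time t
    r = float(x[0] > 0.8)             # 'reward' follows one specific input
    delta = r + gamma * (w @ x) - (w @ x_prev)   # prediction-error signal
    w += eta * delta * x_prev         # credit the temporally earlier inputs
    x_prev = x
```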


eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Salvador Dura-Bernal ◽  
Benjamin A Suter ◽  
Padraig Gleeson ◽  
Matteo Cantarelli ◽  
Adrian Quintana ◽  
...  

Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces to develop data-driven multiscale network models in NEURON. NetPyNE clearly separates model parameters from implementation code. Users provide high-level specifications via a standardized declarative language; connectivity rules, for example, can expand into millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis – connectivity matrices, voltage traces, spike raster plots, local field potentials, and information theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience and is employed by modelers to investigate brain regions and phenomena.
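
For flavour, here is a minimal NetPyNE sketch in the declarative style the abstract describes, following the pattern of the tool's public tutorials; all parameter values are illustrative, and running it requires NEURON with NetPyNE installed.

```python
from netpyne import specs, sim

# Minimal declarative model in the style of the NetPyNE tutorials; all
# parameter values are illustrative.
netParams = specs.NetParams()

netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 40}
netParams.popParams['I'] = {'cellType': 'PYR', 'numCells': 10}

netParams.cellParams['PYRrule'] = {          # Hodgkin-Huxley point cell
    'conds': {'cellType': 'PYR'},
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036,
                                       'gl': 0.003, 'el': -70}}}}}

netParams.synMechParams['exc'] = {'mod': 'Exp2Syn',
                                  'tau1': 0.1, 'tau2': 5.0, 'e': 0}

# background drive so the network actually spikes
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {'source': 'bkg',
    'conds': {'pop': 'E'}, 'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

# one high-level rule expands into many cell-to-cell connections
netParams.connParams['E->I'] = {
    'preConds': {'pop': 'E'}, 'postConds': {'pop': 'I'},
    'probability': 0.3, 'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000                    # ms
simConfig.analysis['plotRaster'] = True

sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```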


1998 ◽  
Vol 08 (02) ◽  
pp. 315-327
Author(s):  
HOWARD C. CARD ◽  
DEAN K. McNEILL ◽  
CHRISTIAN R. SCHNEIDER ◽  
ROLAND S. SCHNEIDER ◽  
BRION K. DOLENKO

An investigation is made of the tolerance of various in-circuit learning algorithms to component imprecision and other circuit limitations in artificial neural networks. In contrast with most previous work, the various circuit limitations are treated separately for their effects on learning. Supervised learning mechanisms, including backpropagation and contrastive Hebbian learning, as well as unsupervised soft competitive learning, were found to be sufficiently tolerant of the levels of arithmetic inaccuracy, noise, nonlinearity, weight decay, and statistical fabrication variation that we have experienced in 1.2 μm analog CMOS circuits employing Gilbert multipliers as the primary computational element. These learning circuits also function properly in the presence of offset errors in analog multipliers and adders, provided that the computed weight updates are constrained by the circuitry to be made only when they exceed certain minimum or threshold values. These results may also be relevant for other analog circuit approaches and for compact (low bit rate) digital implementations, although in this case the minimum weight increment defined by the bit precision could necessitate stochastic updating.
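
The thresholded-update scheme the abstract describes can be stated in a few lines. Below is a sketch under stated assumptions; the threshold value and the array-based formulation are illustrative, not the paper's circuitry.

```python
import numpy as np

# Sketch of threshold-constrained weight updating: an update is applied only
# when its computed magnitude reaches a minimum value, so small spurious
# updates caused by multiplier/adder offset errors do not accumulate into
# weight drift. THETA is an illustrative value, not taken from the paper.
THETA = 1e-3

def apply_updates(weights: np.ndarray, updates: np.ndarray) -> np.ndarray:
    """Apply only those updates whose magnitude reaches the threshold."""
    return weights + np.where(np.abs(updates) >= THETA, updates, 0.0)
```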


2004 ◽  
Vol 41 (04) ◽  
pp. 1237-1242 ◽  
Author(s):  
Offer Kella ◽  
Wolfgang Stadje

We consider a Brownian motion with time-reversible Markov-modulated speed and two reflecting barriers. A methodology based on a certain multidimensional martingale, together with some linear algebra, is applied to compute explicitly the stationary distribution of the joint process of the content level and the state of the underlying Markov chain. It is shown that, under the stationary distribution, the two quantities are independent. The long-run average push at the two barriers in each of the states is also computed.
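
The stationary distribution computed in the paper can be checked against simulation. Below is a hedged Euler-scheme sketch (not the paper's martingale computation): a two-state modulating chain (automatically time-reversible) switches the drift and volatility, and the path is reflected at 0 and b; a histogram of the sampled level approximates the stationary density. All parameter values are illustrative.

```python
import numpy as np

# Simulation sketch (not the paper's martingale method): Euler steps of a
# Brownian motion whose drift and volatility are modulated by a two-state
# Markov chain, reflected at the barriers 0 and b. All values illustrative.
rng = np.random.default_rng(2)
b = 1.0                              # upper barrier
mu = np.array([0.5, -0.5])           # state-dependent drift
sigma = np.array([1.0, 0.6])         # state-dependent volatility
q_leave = np.array([1.0, 2.0])       # rate of leaving each state
dt, steps = 1e-3, 200_000

x, s = 0.5, 0
levels = np.empty(steps)
for k in range(steps):
    if rng.random() < q_leave[s] * dt:          # modulating-chain switch
        s = 1 - s
    x += mu[s] * dt + sigma[s] * np.sqrt(dt) * rng.standard_normal()
    x = abs(x)                                  # reflect at the lower barrier
    if x > b:
        x = 2 * b - x                           # reflect at the upper barrier
    levels[k] = x

# empirical approximation to the stationary density of the content level
density, edges = np.histogram(levels, bins=50, range=(0.0, b), density=True)
```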


1992 ◽  
Vol 29 (4) ◽  
pp. 781-791 ◽  
Author(s):  
Masaaki Kijima

Let N(t) be an exponentially ergodic birth-death process on the state space {0, 1, 2, …} governed by the parameters {λ_n, μ_n}, where μ_0 = 0, such that λ_n = λ and μ_n = μ for all n ≥ N, N ≥ 1, with λ < μ. In this paper, we develop an algorithm to determine the decay parameter of such a specialized exponentially ergodic birth-death process, based on van Doorn's representation (1987) of eigenvalues of sign-symmetric tridiagonal matrices. The decay parameter is important since it is indicative of the speed of convergence to ergodicity. Some comparability results for the decay parameters are given, followed by a discussion of the decay parameter of a birth-death process governed by parameters such that lim_{n→∞} λ_n = λ and lim_{n→∞} μ_n = μ. The algorithm is also shown to be a useful tool for determining the quasi-stationary distribution, i.e. the limiting distribution conditioned to stay in {1, 2, …}, of such specialized birth-death processes.
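
As a numerical illustration of the tridiagonal structure (a finite truncation, not the paper's algorithm), the decay parameter can be approximated by the smallest eigenvalue of −Q restricted to {1, …, M}, after the symmetrisation that underlies van Doorn's sign-symmetric representation; for constant rates the known limit (√μ − √λ)² is recovered. All values are illustrative.

```python
import numpy as np

# Truncation sketch (not the paper's exact algorithm): approximate the decay
# parameter by the smallest eigenvalue of -Q restricted to states {1,...,M},
# where Q is the birth-death generator and transitions to state 0 act as
# killing. Symmetrising the tridiagonal matrix is where the sign-symmetric
# structure used by van Doorn enters. Rates are illustrative (lambda_n = 1,
# mu_n = 2 for all n), for which the known decay parameter is
# (sqrt(mu) - sqrt(lambda))**2.
M = 500
lam = np.full(M, 1.0)                    # lambda_1, ..., lambda_M
mu = np.full(M, 2.0)                     # mu_1, ..., mu_M

diag = lam + mu                          # -Q diagonal: lambda_n + mu_n
off = -np.sqrt(lam[:-1] * mu[1:])        # symmetrised off-diagonal entries
A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

decay = np.linalg.eigvalsh(A).min()
print(decay, (np.sqrt(2.0) - 1.0) ** 2)  # ~0.17157 in both cases
```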

