Robust coding with spiking networks: a geometric perspective

2020 ◽  
Author(s):  
Nuno Calaim ◽  
Florian Alexander Dehmelt ◽  
Pedro J. Gonçalves ◽  
Christian K. Machens

Abstract The interactions of large groups of spiking neurons have been difficult to understand or visualise. Using simple geometric pictures, we here illustrate the spike-by-spike dynamics of networks based on efficient spike coding, and we highlight the conditions under which they can preserve their function against various perturbations. We show that their dynamics are confined to a small geometric object, a ‘convex polytope’, in an abstract error space. Changes in network parameters (such as number of neurons, dimensionality of the inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this polytope. Using these insights, we show that the core functionality of these network models, just like their biological counterparts, is preserved as long as perturbations do not destroy the shape of the geometric object. We suggest that this single principle—efficient spike coding—may be key to understanding the robustness of neural systems at the circuit level.
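
A minimal sketch of the greedy efficient-spike-coding rule such networks are built on: each neuron fires when the coding error, projected onto its decoding weight, exceeds a threshold, and every spike nudges the readout. All names, sizes, and constants here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, T = 20, 2, 500
W = rng.standard_normal((N, D))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # unit decoding weights
thresh = 0.5 * np.sum(W**2, axis=1)            # = 0.5 for unit weights

x = np.array([1.0, -0.5])                      # constant target signal
x_hat = np.zeros(D)                            # readout decoded from spikes
scale = 0.1                                    # size of one spike's readout jump
for t in range(T):
    err = x - x_hat                            # point in the abstract error space
    drive = W @ err                            # error projected on each neuron
    i = int(np.argmax(drive))                  # most strongly driven neuron
    if drive[i] > scale * thresh[i]:           # threshold crossing -> spike
        x_hat += scale * W[i]                  # spike updates the readout
```

The firing condition guarantees every spike strictly shrinks the error, so the readout settles near the target; geometrically, the thresholds are the faces of the polytope that confines the error.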

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 155
Author(s):  
Bruno Cessac ◽  
Ignacio Ampuero ◽  
Rodrigo Cofré

We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of weak-amplitude, time-dependent external stimuli on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in spike dynamics can extend arbitrarily far into the past. Using this approach, we show how the linear response is explicitly related to the collective effect of the stimuli, intrinsic neuronal dynamics, and network connectivity on spike train statistics. We illustrate our results with numerical simulations of a discrete-time integrate-and-fire model.
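
A toy numerical illustration of the linear-response idea (not the authors' chains-with-unbounded-memory formalism): for a discrete-time leaky integrate-and-fire neuron, a weak constant stimulus shifts the firing rate roughly in proportion to its amplitude, so doubling the stimulus roughly doubles the rate change. The model and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
noise = 0.5 * rng.standard_normal(T)   # shared noise: common random numbers

def firing_rate(eps):
    """Spikes per step of a discrete-time leaky IF neuron with constant drive eps."""
    v, n = 0.0, 0
    for xi in noise:
        v = 0.9 * v + eps + xi          # leaky integration of drive plus noise
        if v > 1.0:                     # threshold crossing -> spike and reset
            n += 1
            v = 0.0
    return n / T

r0 = firing_rate(0.0)                   # spontaneous statistics (no stimulus)
r1 = firing_rate(0.02)                  # weak stimulus
r2 = firing_rate(0.04)                  # twice the weak stimulus
ratio = (r2 - r0) / (r1 - r0)           # ≈ 2 in the linear-response regime
```

Reusing the same noise realisation for all three runs isolates the stimulus effect from sampling fluctuations.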


2021 ◽  
Author(s):  
Yusi Chen ◽  
Burke Q Rosen ◽  
Terrence J Sejnowski

Investigating causal neural interactions is essential to understanding subsequent behaviors. Many statistical methods have been used for analyzing neural activity, but efficiently and correctly estimating the direction of network interactions remains difficult. Here, we derive dynamical differential covariance (DDC), a new method based on dynamical network models that detects directional interactions with low bias and high noise tolerance without the stationarity assumption. The method is first validated on networks with false positive motifs and multiscale neural simulations where the ground truth connectivity is known. Then, applying DDC to recordings of resting-state functional magnetic resonance imaging (rs-fMRI) from over 1,000 individual subjects, DDC consistently detected regional interactions with strong structural connectivity. DDC can be generalized to a wide range of dynamical models and recording techniques.
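
A sketch of the covariance-of-derivatives idea behind directional estimators like DDC, assuming a linear stochastic system; the paper's exact DDC estimator may differ in detail. For dx/dt = Ax + noise, the cross-covariance of the derivative with the state satisfies cov(dx/dt, x) = A·cov(x, x), so A can be recovered, including the direction of coupling.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 200_000
A = np.array([[-1.0, 0.0],
              [0.8, -1.0]])            # ground truth: node 0 drives node 1

x = np.zeros((T, 2))
for t in range(T - 1):                  # Euler simulation of the noisy system
    x[t + 1] = x[t] + dt * (A @ x[t]) + np.sqrt(dt) * rng.standard_normal(2)

dx = np.diff(x, axis=0) / dt            # finite-difference derivative
xc = x[:-1] - x[:-1].mean(axis=0)
dxc = dx - dx.mean(axis=0)
C_dx = dxc.T @ xc / len(xc)             # cov(dx/dt, x)
C_xx = xc.T @ xc / len(xc)              # cov(x, x)
A_hat = C_dx @ np.linalg.inv(C_xx)      # recovered directed coupling
```

Note the asymmetry: the estimator assigns a large weight from node 0 to node 1 and a near-zero weight in the reverse direction, which plain covariance (symmetric by construction) cannot do.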


2010 ◽  
Vol 2010 ◽  
pp. 1-6 ◽  
Author(s):  
Richard Stafford

Biological organisms do not evolve to perfection, but to outcompete others in their ecological niche, and therefore survive and reproduce. This paper reviews the constraints imposed on imperfect organisms, particularly on their neural systems and their ability to capture and process information accurately. By understanding the biological constraints of the physical properties of neurons, simpler and more efficient artificial neural networks can be made (e.g., spiking networks transmit less information than graded-potential networks; spikes occur in nature only because of the limitations of carrying electrical charge over large distances). Furthermore, understanding the behavioural and ecological constraints on animals clarifies not only the limitations of bio-inspired solutions, but also why bio-inspired solutions may fail and how to correct these failures.


2012 ◽  
Vol 24 (7) ◽  
pp. 1669-1694 ◽  
Author(s):  
Emre Ozgur Neftci ◽  
Bryan Toth ◽  
Giacomo Indiveri ◽  
Henry D. I. Abarbanel

Neuroscientists often propose detailed computational models to probe the properties of the neural systems they study. With the advent of neuromorphic engineering, there is an increasing number of hardware electronic analogs of biological neural systems being proposed as well. However, for both biological and hardware systems, it is often difficult to estimate the parameters of the model so that they are meaningful to the experimental system under study, especially when these models involve a large number of states and parameters that cannot be simultaneously measured. We have developed a procedure to solve this problem in the context of interacting neural populations using a recently developed dynamic state and parameter estimation (DSPE) technique. This technique uses synchronization as a tool for dynamically coupling experimentally measured data to its corresponding model to determine its parameters and internal state variables. Typically, experimental data are obtained from the biological neural system and the model is simulated in software; here we show that this technique is also efficient in validating proposed network models for neuromorphic spike-based very large-scale integration (VLSI) chips and that it is able to systematically extract network parameters such as synaptic weights, time constants, and other variables that are not accessible by direct observation. Our results suggest that this method can become a very useful tool for model-based identification and configuration of neuromorphic multichip VLSI systems.
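
A toy sketch of the synchronization idea behind DSPE (not the authors' implementation): couple a model to the measured trace with a gain term, then pick the parameter value at which the coupled model tracks the data with the least coupling effort. The scalar system dx/dt = p·sin(t) − x and all constants are hypothetical.

```python
import numpy as np

dt, T = 0.001, 20_000
sints = np.sin(np.arange(T) * dt)        # precomputed input waveform
p_true, k = 2.0, 5.0                     # true parameter and coupling gain

def simulate(p, data=None):
    """Integrate dx/dt = p*sin(t) - x, optionally coupled to a data trace."""
    x, out = 0.0, np.empty(T)
    for i in range(T):
        out[i] = x
        dxdt = p * sints[i] - x
        if data is not None:
            dxdt += k * (data[i] - x)    # synchronization coupling to the data
        x += dt * dxdt
    return out

data = simulate(p_true)                  # stands in for the measured recording
grid = np.arange(1.0, 3.01, 0.1)
costs = [np.mean((data - simulate(p, data)) ** 2) for p in grid]
p_hat = float(grid[int(np.argmin(costs))])
```

Only at the true parameter does the coupled model follow the data without the coupling term doing any work, so the mismatch cost dips to (near) zero there.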


2019 ◽  
Author(s):  
Michael Kleinman ◽  
Chandramouli Chandrasekaran ◽  
Jonathan C. Kao

Abstract Cognition emerges from coordinated computations across multiple brain areas. However, elucidating these computations within and across brain regions is challenging because intra- and inter-area connectivity are typically unknown. To study coordinated computation, we trained multi-area recurrent neural networks (RNNs) to discriminate the dominant color of a checkerboard and output decision variables reflecting a direction decision, a task previously used to investigate decision-related dynamics in dorsal premotor cortex (PMd) of monkeys. We found that multi-area RNNs, trained with neurophysiological connectivity constraints and Dale’s law, recapitulated decision-related dynamics observed in PMd. The RNN solved this task by a dynamical mechanism where the direction decision was computed and outputted, via precisely oriented dynamics, on an axis that was nearly orthogonal to checkerboard color inputs. This orthogonal direction information was preferentially propagated through alignment with inter-area connections; in contrast, color information was filtered. These results suggest that cortex uses modular computation to generate minimal sufficient representations of task information. Finally, we used multi-area RNNs to produce experimentally testable hypotheses for computations that occur within and across multiple brain areas, enabling new insights into distributed computation in neural systems.
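
A minimal sketch of how a Dale's-law constraint is commonly imposed when training sign-constrained RNNs (an illustrative 80/20 excitatory/inhibitory split; this is not the paper's code). With the update convention r ← Wr, column j holds the outgoing weights of presynaptic neuron j, so forcing each column to a single sign makes every neuron purely excitatory or purely inhibitory.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_exc = 10, 8
sign = np.ones(N)
sign[n_exc:] = -1.0                        # last neurons are inhibitory

W_raw = rng.standard_normal((N, N))        # unconstrained trainable parameter
W = np.abs(W_raw) * sign[np.newaxis, :]    # column j carries neuron j's sign
```

During training, gradients flow through `W_raw` while the network always sees the sign-constrained `W`.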


2006 ◽  
Vol 16 (01) ◽  
pp. 19-37 ◽  
Author(s):  
SELIM G. AKL

A new computational paradigm is described which offers the possibility of superlinear (and sometimes unbounded) speedup, when parallel computation is used. The computations involved are subject only to given mathematical constraints and hence do not depend on external circumstances to achieve superlinear performance. The focus here is on geometric transformations. Given a geometric object A with some property, it is required to transform A into another object B which enjoys the same property. If the transformation requires several steps, each resulting in an intermediate object, then each of these intermediate objects must also obey the same property. We show that in transforming one triangulation of a polygon into another, a parallel algorithm achieves a superlinear speedup. In the case where a convex decomposition of a set of points is to be transformed, the improvement in performance is unbounded, meaning that a parallel algorithm succeeds in solving the problem as posed, while all sequential algorithms fail.
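
The elementary step in transforming one triangulation of a convex polygon into another is the diagonal flip: remove a diagonal and insert the other diagonal of the quadrilateral it spanned. A small sketch, with a triangulation represented as a set of sorted diagonal pairs (function names are illustrative, not from the paper):

```python
def connected(i, j, n, diags):
    """True if vertices i, j of the convex n-gon share an edge or a diagonal."""
    i, j = min(i, j), max(i, j)
    return j - i == 1 or (i == 0 and j == n - 1) or (i, j) in diags

def flip(diag, n, diags):
    """Replace diagonal `diag` by the other diagonal of its quadrilateral."""
    a, c = diag
    # In a non-crossing triangulation, exactly two vertices (one per side of
    # the diagonal) are joined to both of its endpoints.
    b, d = [v for v in range(n) if v not in diag
            and connected(a, v, n, diags) and connected(v, c, n, diags)]
    new = set(diags)
    new.remove(diag)
    new.add((min(b, d), max(b, d)))
    return new

# Pentagon 0..4 with triangulation {(0,2),(0,3)}: flipping (0,2) gives {(0,3),(1,3)}.
result = flip((0, 2), 5, {(0, 2), (0, 3)})
```

A sequence of such flips is exactly the kind of multi-step transformation the abstract describes, each intermediate set of diagonals again being a valid triangulation.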


2021 ◽  
pp. 1-19
Author(s):  
Akke Mats Houben

Abstract Neurons are connected to other neurons by axons and dendrites that conduct signals with finite velocities, resulting in delays between the firing of a neuron and the arrival of the resultant impulse at other neurons. Since delays greatly complicate the analytical treatment and interpretation of models, they are usually neglected or taken to be uniform, leading to a lack in the comprehension of the effects of delays in neural systems. This letter shows that heterogeneous transmission delays make small groups of neurons respond selectively to inputs with differing frequency spectra. By studying a single integrate-and-fire neuron receiving correlated time-shifted inputs, it is shown how the frequency response is linked to both the strengths and delay times of the afferent connections. The results show that incorporating delays alters the functioning of neural networks, and changes the effect that neural connections and synaptic strengths have.
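
A toy illustration of the frequency selectivity described above (not the letter's model): a unit sums a direct and a 5 ms-delayed copy of the same oscillation. At 100 Hz the delay equals half a period and the two copies cancel; at 200 Hz it equals a full period and they reinforce, so a simple threshold-crossing "spike" detector responds only to the latter. All constants are illustrative.

```python
import numpy as np

d = 0.005                                 # 5 ms transmission-delay difference
t = np.arange(0.0, 1.0, 1e-4)             # 1 s at 0.1 ms resolution

def drive(f):
    """Summed afferent input: direct plus delayed copy of a sinusoid."""
    return np.sin(2 * np.pi * f * t) + np.sin(2 * np.pi * f * (t - d))

def spikes(f, thr=1.5):
    """Count upward threshold crossings of the summed drive (toy spikes)."""
    s = drive(f)
    return int(np.sum((s[1:] > thr) & (s[:-1] <= thr)))

n_cancel = spikes(100)                    # delay = half period -> cancellation
n_boost = spikes(200)                     # delay = full period -> reinforcement
```

The same afferents, with the same strengths, thus act as a frequency filter purely because of their heterogeneous delays.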


Author(s):  
Jason Yust

The associahedron is a geometric object, a multidimensional solid, whose vertices can be understood to represent the possible temporal structures on a given number of events. We can use it to relate temporal structures, such as two temporal structures on the same events in different modalities. Comparison of disjunct tonal and formal structures can be understood, for instance, as structural appoggiaturas or anticipations depending on which direction they point within the associahedron. This is illustrated with a number of analytical examples from previous chapters. A more symmetrical solid, the permutohedron, is embedded within the associahedron, and distance from the center of the permutohedron can be used as a measure of evenness for a temporal structure.
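
For orientation, the vertex counts involved are a standard combinatorial fact: the n-dimensional associahedron has one vertex per triangulation of a convex (n+3)-gon (equivalently, per binary bracketing of a sequence of events), and these are counted by the Catalan number C(n+1).

```python
from math import comb

def catalan(n):
    """n-th Catalan number, C(n) = (2n choose n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# Vertex counts of the associahedra of dimensions 1 through 4: C(2)..C(5).
vertex_counts = [catalan(n) for n in range(2, 6)]   # 2, 5, 14, 42
```

The 2-dimensional case is the familiar pentagon: five triangulations of a 5-gon, hence five vertices.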


2018 ◽  
Vol 2020 (7) ◽  
pp. 1992-2006
Author(s):  
Michael Gene Dobbins ◽  
Heuna Kim ◽  
Luis Montejano ◽  
Edgardo Roldán-Pensado

Abstract A shadow of a geometric object A in a given direction v is the orthogonal projection of A on the hyperplane orthogonal to v. We show that any topological embedding of a circle into Euclidean d-space can have at most two shadows that are simple paths in linearly independent directions. The proof is topological and uses an analog of basic properties of degree of maps on a circle to relations on a circle. This extends a previous result that dealt with the case d = 3.
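
The shadow operation defined in the abstract is the linear map P = I − vvᵀ/(v·v) applied to each point. A small sketch, using an arbitrary embedded circle in 3-space for illustration (the curve and direction are assumptions, not the paper's examples):

```python
import numpy as np

def shadow(points, v):
    """Orthogonal projection of points onto the hyperplane orthogonal to v."""
    v = np.asarray(v, dtype=float)
    P = np.eye(len(v)) - np.outer(v, v) / (v @ v)
    return points @ P.T

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
curve = np.stack([np.cos(theta), np.sin(theta), np.sin(2 * theta)], axis=1)
s = shadow(curve, [0.0, 0.0, 1.0])        # shadow in the z direction
```

Whether such a shadow is a simple path or a closed curve with self-intersections is exactly the property the theorem above constrains across directions.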

