Embedding optimization reveals long-lasting history dependence in neural spiking activity

2021 ◽  
Vol 17 (6) ◽  
pp. e1008927
Author(s):  
Lucas Rudelt ◽  
Daniel González Marx ◽  
Michael Wibral ◽  
Viola Priesemann

Information processing can leave distinct footprints on the statistics of neural spiking. For example, efficient coding minimizes the statistical dependencies on the spiking history, while temporal integration of information may require the maintenance of information over different timescales. To investigate these footprints, we developed a novel approach to quantify history dependence within the spiking of a single neuron, using the mutual information between the entire past and current spiking. This measure captures how much past information is necessary to predict current spiking. In contrast, classical time-lagged measures of temporal dependence, such as the autocorrelation, capture how long (potentially redundant) past information can still be read out. Strikingly, we find for model neurons that our method disentangles the strength and timescale of history dependence, whereas the two are mixed in classical approaches. When applying the method to experimental data, which are necessarily of limited size, a reliable estimation of mutual information is only possible for a coarse temporal binning of past spiking, a so-called past embedding. To still account for the vastly different spiking statistics and potentially long history dependence of living neurons, we developed an embedding-optimization approach that varies not only the number and size of past bins, but also allows the bins to stretch exponentially into the past. For extra-cellular spike recordings, we found that the strength and timescale of history dependence can indeed vary independently across experimental preparations. While hippocampus showed strong and long history dependence, visual cortex showed weak and short history dependence, and in vitro the history dependence was strong but short. This work enables an information-theoretic characterization of history dependence in recorded spike trains, which captures a footprint of information processing that lies beyond time-lagged measures of temporal dependence. To facilitate the application of the method, we provide practical guidelines and a toolbox.
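
As a rough illustration of the quantity described above, the following Python sketch computes a naive plug-in estimate of the mutual information between a short, uniformly binned past embedding and current spiking. It is not the authors' regularized estimator (provided in their toolbox); the function name, bin width, and embedding depth are illustrative choices.

```python
import numpy as np

def history_dependence_plugin(spike_times, t_bin=0.005, n_past_bins=5):
    """Naive plug-in estimate of I(past embedding; current spiking) in bits.

    spike_times : 1D array of spike times (s)
    t_bin       : width of each uniform past/present bin (s)
    n_past_bins : number of past bins in the embedding
    """
    t_total = spike_times.max() + t_bin
    n_bins = int(np.floor(t_total / t_bin))
    # Binarize the spike train: 1 if at least one spike falls in the bin.
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, n_bins * t_bin))
    binary = (counts > 0).astype(int)

    # Collect joint samples of (past embedding, current bin).
    pasts = np.array([binary[t - n_past_bins:t] for t in range(n_past_bins, n_bins)])
    presents = binary[n_past_bins:n_bins].reshape(-1, 1)
    joint = np.hstack([pasts, presents])

    def plugin_entropy(samples):
        # Entropy (bits) of the empirical distribution over rows.
        _, cnt = np.unique(samples, axis=0, return_counts=True)
        p = cnt / cnt.sum()
        return -np.sum(p * np.log2(p))

    # I(past; present) = H(past) + H(present) - H(past, present)
    return plugin_entropy(pasts) + plugin_entropy(presents) - plugin_entropy(joint)
```

Note that a plug-in estimate like this is positively biased on short recordings, which is precisely the problem the regularized embedding-optimization approach is designed to handle.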

2020 ◽  
Author(s):  
Lucas Rudelt ◽  
Daniel González Marx ◽  
Michael Wibral ◽  
Viola Priesemann

Abstract
Information processing can leave distinct footprints on the statistical history dependence in single neuron spiking. Statistical history dependence can be quantified using information theory, but its estimation from experimental recordings is only possible for a reduced representation of past spiking, a so-called past embedding. Here, we present a novel embedding-optimization approach that optimizes the temporal binning of past spiking to capture most history dependence, while a reliable estimation is ensured by regularization. The approach not only quantifies non-linear and higher-order dependencies, but also provides an estimate of the temporal depth that history dependence reaches into the past. We benchmarked the approach on simulated spike recordings of a leaky integrate-and-fire neuron with long-lasting spike-frequency adaptation, where it accurately estimated history dependence over hundreds of milliseconds. In a diversity of extra-cellular spike recordings, including highly parallel recordings using a Neuropixels probe, we found some neurons with surprisingly strong history dependence, which could last up to seconds. Both aspects, the magnitude and the temporal depth of history dependence, showed interesting differences between recorded systems, which points to systematic differences in information processing between these systems. We provide practical guidelines in this paper and a toolbox for Python3 at https://github.com/Priesemann-Group/hdestimator for readers interested in applying the method to their data.

Author summary
Even with exciting advances in recording techniques of neural spiking activity, experiments only provide a comparatively short glimpse into the activity of only a tiny subset of all neurons. How can we learn from these experiments about the organization of information processing in the brain? To that end, we exploit the fact that different properties of information processing leave distinct footprints on the firing statistics of individual spiking neurons. In our work, we focus on a particular statistical footprint: how much a single neuron's spiking depends on its own preceding activity, which we call history dependence. By quantifying history dependence in neural spike recordings, one can, in turn, infer some of the properties of information processing. Because recording lengths are limited in practice, a direct estimation of history dependence from experiments is challenging. The embedding-optimization approach that we present in this paper aims at extracting a maximum of history dependence within the limits set by reliable estimation. The approach is highly adaptive and thereby enables a meaningful comparison of history dependence between neurons with vastly different spiking statistics, which we exemplify on a diversity of spike recordings. In conjunction with recent, highly parallel spike recording techniques, the approach could yield valuable insights into how hierarchical processing is organized in the brain.
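
The exponential stretching of past bins described above can be sketched as follows. This is not the hdestimator API; the function names and parameters (tau_first, scaling, n_bins) are hypothetical and simply show how bin widths can grow with distance into the past, so that a handful of bins covers both fine recent structure and long timescales.

```python
import numpy as np

def stretched_bin_edges(tau_first, scaling, n_bins):
    """Edges of past bins (seconds before the present); widths grow geometrically.

    tau_first : width of the bin closest to the present (s)
    scaling   : factor by which each successive bin is wider (>= 1)
    n_bins    : number of past bins
    """
    widths = tau_first * scaling ** np.arange(n_bins)
    return np.concatenate(([0.0], np.cumsum(widths)))

def embed_past(spike_times, t, edges):
    """Binary embedding of spiking in the stretched bins preceding time t."""
    # Bin i covers the interval [t - edges[i + 1], t - edges[i]).
    return np.array([
        int(np.any((spike_times >= t - edges[i + 1]) & (spike_times < t - edges[i])))
        for i in range(len(edges) - 1)
    ])

# Example: 5 past bins, first bin 5 ms wide, each bin 1.8x wider than the last,
# so the embedding reaches about 110 ms into the past with only 5 bins.
edges = stretched_bin_edges(0.005, 1.8, 5)
```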


2020 ◽  
Vol 4 (3) ◽  
pp. 678-697
Author(s):  
Samantha P. Sherrill ◽  
Nicholas M. Timme ◽  
John M. Beggs ◽  
Ehren L. Newman

Neural information processing is widely understood to depend on correlations in neuronal activity. However, whether correlation is favorable or not is contentious. Here, we sought to determine how correlated activity and information processing are related in cortical circuits. Using recordings of hundreds of spiking neurons in organotypic cultures of mouse neocortex, we asked whether mutual information between neurons that feed into a common third neuron increased synergistic information processing by the receiving neuron. We found that mutual information and synergistic processing were positively related at synaptic timescales (0.05–14 ms), where mutual information values were low. This effect was mediated by the increase in information transmission—of which synergistic processing is a component—that resulted as mutual information grew. However, at extrasynaptic windows (up to 3,000 ms), where mutual information values were high, the relationship between mutual information and synergistic processing became negative. In this regime, greater mutual information resulted in a disproportionate increase in redundancy relative to information transmission. These results indicate that the emergence of synergistic processing from correlated activity differs according to timescale and correlation regime. In a low-correlation regime, synergistic processing increases with greater correlation, and in a high-correlation regime, synergistic processing decreases with greater correlation.
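
For readers unfamiliar with the basic quantity, the sketch below shows one way to estimate the mutual information between two spike trains after binarizing them at a chosen timescale (a few milliseconds for synaptic windows, hundreds to thousands of milliseconds for extrasynaptic windows). It is a minimal plug-in illustration, not the authors' analysis pipeline; names and defaults are illustrative.

```python
import numpy as np

def pairwise_mi(spikes_a, spikes_b, bin_width, t_total):
    """Plug-in mutual information (bits) between two binarized spike trains."""
    n_bins = int(np.floor(t_total / bin_width))
    rng = (0.0, n_bins * bin_width)
    a = (np.histogram(spikes_a, bins=n_bins, range=rng)[0] > 0).astype(int)
    b = (np.histogram(spikes_b, bins=n_bins, range=rng)[0] > 0).astype(int)

    # Empirical joint distribution over the four (a, b) states.
    joint = np.zeros((2, 2))
    for x, y in zip(a, b):
        joint[x, y] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)

    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            if joint[x, y] > 0:
                mi += joint[x, y] * np.log2(joint[x, y] / (pa[x] * pb[y]))
    return mi
```

Sweeping bin_width from milliseconds up to seconds reproduces, in spirit, the move from synaptic to extrasynaptic windows discussed above.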


2019 ◽  
Author(s):  
Samantha P. Sherrill ◽  
Nicholas M. Timme ◽  
John M. Beggs ◽  
Ehren L. Newman

Abstract
Neural information processing is widely understood to depend on correlations in neuronal activity. However, whether correlation is favorable or not is contentious. Here, we sought to determine how correlated activity and information processing are related in cortical circuits. Using recordings of hundreds of spiking neurons in organotypic cultures of mouse neocortex, we asked whether mutual information between neurons that feed into a common third neuron increased synergistic information processing by the receiving neuron. We found that mutual information and synergistic processing were positively related at synaptic timescales (0.05–14 ms), where mutual information values were low. This effect was mediated by the increase in information transmission, of which synergistic processing is a component, that resulted as mutual information grew. However, at extrasynaptic windows (up to 3,000 ms), where mutual information values were high, the relationship between mutual information and synergistic processing became negative. In this regime, greater mutual information resulted in a disproportionate increase in redundancy relative to information transmission. These results indicate that the emergence of synergistic processing from correlated activity differs according to timescale and correlation regime. In a low-correlation regime, synergistic processing increases with greater correlation, and in a high-correlation regime, synergistic processing decreases with greater correlation.

Author summary
In the present work, we address the question of whether correlated activity in functional networks of cortical circuits supports neural computation. To do so, we combined network analysis with information-theoretic tools to analyze the spiking activity of hundreds of neurons recorded from organotypic cultures of mouse somatosensory cortex. We found that, at timescales most relevant to direct neuronal communication, neurons with more correlated activity predicted greater computation, suggesting that correlated activity does support computation in cortical circuits. Importantly, this result reversed at timescales less relevant to direct neuronal communication, where even greater correlated activity predicted decreased computation. Thus, the relationship between correlated activity and computation depends on the timescale and the degree of correlation in neuronal interactions.
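
Synergy and redundancy in this line of work are typically separated with a partial information decomposition. The sketch below uses the Williams-Beer I_min redundancy measure on a discrete joint distribution p(x1, x2, y) of two source neurons and one receiving neuron; this is one common choice and may differ from the exact decomposition the authors use.

```python
import numpy as np

def pid_imin(p):
    """Williams-Beer decomposition of I(X1, X2; Y) for a 3D array p[x1, x2, y]."""
    p = p / p.sum()
    p_y = p.sum(axis=(0, 1))
    p_x1y = p.sum(axis=1)   # joint p(x1, y)
    p_x2y = p.sum(axis=0)   # joint p(x2, y)

    def specific_info(p_xy, y):
        # I_spec(Y = y; X) = sum_x p(x|y) * log2( p(y|x) / p(y) )
        p_x = p_xy.sum(axis=1)
        val = 0.0
        for x in range(p_xy.shape[0]):
            if p_xy[x, y] > 0:
                val += (p_xy[x, y] / p_y[y]) * np.log2((p_xy[x, y] / p_x[x]) / p_y[y])
        return val

    def mi(p_xy):
        p_x = p_xy.sum(axis=1, keepdims=True)
        p_yk = p_xy.sum(axis=0, keepdims=True)
        nz = p_xy > 0
        return np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_yk)[nz]))

    # Redundancy: the minimum specific information, averaged over target states.
    redundancy = sum(p_y[y] * min(specific_info(p_x1y, y), specific_info(p_x2y, y))
                     for y in range(p_y.size) if p_y[y] > 0)
    mi_1, mi_2 = mi(p_x1y), mi(p_x2y)
    mi_12 = mi(p.reshape(-1, p.shape[2]))   # treat (x1, x2) as one joint source
    unique_1, unique_2 = mi_1 - redundancy, mi_2 - redundancy
    synergy = mi_12 - unique_1 - unique_2 - redundancy
    return {"redundancy": redundancy, "unique_1": unique_1,
            "unique_2": unique_2, "synergy": synergy}

# Example: a noiseless XOR target is purely synergistic.
p_xor = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p_xor[x1, x2, x1 ^ x2] = 0.25
print(pid_imin(p_xor))   # synergy = 1 bit, all other terms 0
```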


2001 ◽  
Vol 85 (5) ◽  
pp. 2213-2223 ◽  
Author(s):  
Mark W. Doyle ◽  
Michael C. Andresen

The timing of events within the nervous system is a critical feature of signal processing and integration. In neurotransmission, the synaptic latency, the time between stimulus delivery and the appearance of the synaptic event, is generally thought to be directly related to the complexity of the pathway. In horizontal brain stem slices, we examined synaptic latency and its shock-to-shock variability (synaptic jitter) in medial nucleus tractus solitarius (NTS) neurons in response to solitary tract (ST) electrical activation. Using a visualized patch recording approach, we activated the ST 1–3 mm from the recorded neuron with short trains (50–200 Hz) and measured synaptic currents under voltage clamp. Latencies ranged from 1.5 to 8.6 ms, and jitter values (SD of intraneuronal latency) ranged from 26 to 764 μs (n = 49). Surprisingly, the frequency of synaptic failure was not correlated with either latency or jitter (P > 0.147; n = 49). Despite conventional expectations, no clear divisions in latency were found from the earliest arriving excitatory postsynaptic currents (EPSCs) to late, pharmacologically polysynaptic responses. The shortest-latency EPSCs (<3 ms) were mediated by non-N-methyl-D-aspartate (non-NMDA) glutamate receptors. Longer-latency responses were a mix of excitatory and inhibitory currents, including non-NMDA EPSCs and GABAA receptor-mediated currents (IPSCs). All synaptic responses exhibited prominent frequency-dependent depression. In a subset of neurons, we labeled sensory boutons from aortic nerve baroreceptors with the anterograde fluorescent tracer DiA and then recorded from anatomically identified second-order neurons. In identified second-order NTS neurons, ST activation evoked EPSCs with short to moderate latency (1.9–4.8 ms) but uniformly minimal jitter (31 to 61 μs) that were mediated by non-NMDA receptors but had failure rates as high as 39%. These monosynaptic EPSCs in identified second-order neurons differed significantly from GABAergic IPSCs in latency and jitter (latency, 2.95 ± 0.71 vs. 5.56 ± 0.74 ms, mean ± SE, P = 0.027; jitter, 42.3 ± 6.5 vs. 416.3 ± 94.4 μs, P = 0.013, n = 4, 6, respectively), but failure rates were similar (27.8 ± 9.0 vs. 9.7 ± 4.4%, P = 0.08, respectively). These results suggest that jitter, and not absolute latency or failure rate, is the most reliable discriminator of mono- versus polysynaptic pathways. They also suggest that brain stem sensory pathways may differ in their principles of integration from cortical models, and that this importantly shapes synaptic performance. The unique performance properties of the sensory-NTS pathway may reflect stronger axosomatic synaptic processing in the brain stem compared with the dendritically weighted models typical of cortical structures, and thus very different strategies of spatio-temporal integration in this NTS region and for autonomic regulation.
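
A minimal sketch of how latency, jitter (the SD of per-shock latency), and failure rate could be tabulated from shock times and detected EPSC onsets is given below; the response window and function name are illustrative, and this is not the authors' analysis code.

```python
import numpy as np

def latency_jitter_failures(stim_times, epsc_onsets, window=0.010):
    """Return (mean latency, jitter = SD of latency, failure rate).

    stim_times  : array of ST shock times (s)
    epsc_onsets : array of detected EPSC onset times (s)
    window      : response window searched after each shock (s)
    """
    latencies, failures = [], 0
    for t in stim_times:
        in_win = epsc_onsets[(epsc_onsets > t) & (epsc_onsets <= t + window)]
        if in_win.size:
            latencies.append(in_win[0] - t)   # first EPSC after the shock
        else:
            failures += 1                     # no detected response: a failure
    latencies = np.asarray(latencies)
    return latencies.mean(), latencies.std(ddof=1), failures / len(stim_times)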


2022 ◽  
Author(s):  
Xuan Yan ◽  
Niccolo Calcini ◽  
Payam Safavi ◽  
Asli Ak ◽  
Koen Kole ◽  
...  

Background: The recent release of two large intracellular electrophysiological databases now allows high-dimensional, systematic analysis of the mechanisms of information processing in the neocortex. Here, to complement these efforts, we introduce a freely and publicly available database that provides comparative insight into the role of various neuromodulatory transmitters in controlling neural information processing. Findings: We introduce a database of in vitro whole-cell patch-clamp recordings from primary somatosensory and motor cortices (layers 2/3) of adult mice (2–15 months old) of both sexes. A total of 464 current-clamp experiments from identified excitatory and inhibitory neurons are provided. Experiments include recordings with (i) a Step-and-Hold protocol, during which the current was transiently held at 10 steps of gradually increasing amplitude, and (ii) 'Frozen Noise' injections that model the amplitude and time-varying nature of synaptic inputs to a neuron in biological networks. All experiments follow a within-neuron, across-drug design that includes a vehicle control and modulation of one of the following targets in the same neuron: dopamine and its receptors D1R and D2R, the serotonin 5-HT1F receptor, the norepinephrine alpha-1 receptor, and the acetylcholine M1 receptor. Conclusions: This dataset is the first to provide systematic and comparative insight into the role of the selected neuromodulators in controlling cellular excitability. The data will help to mechanistically address how bottom-up information processing can be modulated, providing a reference for studying neural coding characteristics and revealing the contribution of neuromodulation to information processing.
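
A "frozen noise" stimulus of the kind described here is commonly generated as an Ornstein-Uhlenbeck current with a fixed random seed, so the identical trace can be replayed across cells and drug conditions. The sketch below is only an assumption about the general form; the amplitude, correlation time, and seed are placeholders, and the database documentation specifies the actual stimulus used.

```python
import numpy as np

def frozen_noise_current(duration=10.0, dt=1e-4, tau=0.003,
                         mean_pa=50.0, sd_pa=30.0, seed=42):
    """Ornstein-Uhlenbeck current trace (pA), identical for a fixed seed."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    current = np.empty(n)
    current[0] = mean_pa
    for k in range(1, n):
        # Euler-Maruyama step; sd_pa sets the stationary standard deviation.
        current[k] = (current[k - 1]
                      + (mean_pa - current[k - 1]) * dt / tau
                      + sd_pa * np.sqrt(2.0 * dt / tau) * rng.standard_normal())
    return current
```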


2021 ◽  
Vol 14 ◽  
Author(s):  
Timothy Bellay ◽  
Woodrow L. Shew ◽  
Shan Yu ◽  
Jessica J. Falco-Walter ◽  
Dietmar Plenz

Neuronal avalanches are scale-invariant neuronal population activity patterns in the cortex that emerge in vivo in the awake state and in vitro during balanced excitation and inhibition. Theory and experiments suggest that avalanches indicate a state of cortex that improves numerous aspects of information processing by allowing for the transient and selective formation of local as well as system-wide neuronal groups. If avalanches are indeed involved with information processing, one might expect that single neurons would participate in avalanche patterns selectively. Alternatively, all neurons could participate proportionally to their own activity in each avalanche, as would be expected for a population rate code. Distinguishing these hypotheses, however, has been difficult, as robust avalanche analysis requires technically challenging measures of their intricate organization in space and time at the population level, while also recording sub- or suprathreshold activity from individual neurons with high temporal resolution. Here, we identify repeated avalanches in the ongoing local field potential (LFP) measured with high-density microelectrode arrays in the cortex of awake nonhuman primates and in acute cortex slices from young and adult rats. We studied extracellular unit firing in vivo and intracellular responses of pyramidal neurons in vitro. We found that single neurons participate selectively in specific LFP-based avalanche patterns. Furthermore, we show in vitro that manipulating the balance of excitation and inhibition abolishes this selectivity. Our results support the view that avalanches represent the selective, scale-invariant formation of neuronal groups in line with the idea of Hebbian cell assemblies underlying cortical information processing.
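
For orientation, the standard operational definition of an avalanche (a run of consecutive, non-empty time bins of population events, bounded by empty bins) can be sketched as follows. This illustrates the general method only, not the authors' LFP-specific pipeline; the function name and inputs are illustrative.

```python
import numpy as np

def avalanche_sizes(event_times, bin_width):
    """Sizes (event counts) of avalanches in a pooled population event train."""
    t_max = event_times.max() + bin_width
    counts, _ = np.histogram(event_times, bins=int(np.ceil(t_max / bin_width)))
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c            # the avalanche continues
        elif current > 0:
            sizes.append(current)   # an empty bin closes the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)
```

In the criticality literature, the resulting size distribution is then tested for scale-invariant (power-law) behavior.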


2021 ◽  
Author(s):  
Sophie Girardin ◽  
Blandine Clément ◽  
Stephan J. Ihle ◽  
Sean Weaver ◽  
Jana B. Petr ◽  
...  

Bottom-up neuroscience, which consists of building and studying controlled networks of neurons in vitro, is a promising method to investigate information processing at the neuronal level. However, in vitro studies tend to use cells of animal origin rather than human neurons, leading to conclusions that might not be generalizable to humans and limiting the possibilities for relevant studies on neurological disorders. Here we present a method to build arrays of topologically controlled circuits of human induced pluripotent stem cell (iPSC)-derived neurons. The circuits consist of 4 to 50 neurons with mostly unidirectional connections, confined by microfabricated polydimethylsiloxane (PDMS) membranes. Such circuits were characterized using optical imaging and microelectrode arrays (MEAs). Electrophysiology recordings were performed on circuits of human iPSC-derived neurons for at least 4.5 months. We believe that the capacity to build small and controlled circuits of human iPSC-derived neurons holds great promise for better understanding the fundamental principles of information processing and storage in the brain.


2019 ◽  
Vol 3 (5) ◽  
pp. 517-527 ◽  
Author(s):  
Oliver R. Maguire ◽  
Wilhelm T.S. Huck

The goal of creating a synthetic cell necessitates the development of reaction networks that will underlie all of its behaviours. Recent developments in in vitro systems, based upon both DNA and enzymes, have created networks capable of a range of behaviours, e.g. information processing, adaptation, and diffusive signalling. These networks are based upon reaction motifs that, when combined, produce more complex behaviour. We highlight why it is inevitable that networks based on enzymes or enzyme-like catalysts will be required for the construction of a synthetic cell. We outline several of the challenges, including (a) timing, (b) regulation, and (c) energy distribution, that must be overcome in order to transition from the simple networks we have today to much more complex networks capable of a variety of behaviours and which could one day find application within a synthetic cell.


1997 ◽  
Vol 20 (1) ◽  
pp. 155-156 ◽  
Author(s):  
Ernst Pöppel

States of being conscious (S) can be defined on the basis of temporal information processing. A high-frequency mechanism provides atemporal system states with periods of approximately 30 msec to implement the functional connection of distributed activities, allowing the construction of primordial events; a low-frequency mechanism characterized by automatic temporal integration sets up temporal windows of approximately 3 seconds duration. This integration mechanism can be used to define S. P-consciousness and A-consciousness, as conceived of by Block, can be mapped onto these neuronal mechanisms.

