Modelling the neural code in large populations of correlated neurons

eLife, 2021, Vol 10
Author(s): Sacha Sokoloski, Amir Aschner, Ruben Coen-Cagli

Neurons respond selectively to stimuli, and thereby define a code that associates stimuli with population response patterns. Certain correlations within population responses (noise correlations) significantly impact the information content of the code, especially in large populations. Understanding the neural code thus necessitates response models that quantify the coding properties of modelled populations, while fitting large-scale neural recordings and capturing noise correlations. In this paper we propose a class of response models based on mixture models and exponential families. We show how to fit our models with expectation-maximization, and that they capture diverse variability and covariability in recordings of macaque primary visual cortex. We also show how they facilitate accurate Bayesian decoding, provide a closed-form expression for the Fisher information, and are compatible with theories of probabilistic population coding. Our framework could allow researchers to quantitatively validate the predictions of neural coding theories against both large-scale neural recordings and cognitive performance.
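A minimal sketch of the modelling idea described above, assuming (purely for illustration, not as the authors' implementation) a finite mixture of independent Poisson populations per stimulus, fit with expectation-maximization and then used for Bayesian decoding; all function names and hyperparameters are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp, gammaln

def fit_poisson_mixture(spikes, n_components=3, n_iter=100, seed=0):
    """EM for a K-component mixture of independent Poissons.
    spikes: (trials, neurons) array of spike counts recorded for ONE stimulus."""
    rng = np.random.default_rng(seed)
    T, N = spikes.shape
    log_pi = np.full(n_components, -np.log(n_components))          # mixing weights
    rates = spikes.mean(0) * rng.uniform(0.5, 1.5, (n_components, N)) + 1e-6
    for _ in range(n_iter):
        # E-step: responsibilities p(component | trial) under the current model
        log_lik = (spikes @ np.log(rates).T - rates.sum(1)
                   - gammaln(spikes + 1).sum(1, keepdims=True))
        log_resp = log_pi + log_lik
        log_resp -= logsumexp(log_resp, axis=1, keepdims=True)
        resp = np.exp(log_resp)
        # M-step: re-estimate mixing weights and per-component firing rates
        nk = resp.sum(0) + 1e-12
        log_pi = np.log(nk / T)
        rates = (resp.T @ spikes) / nk[:, None] + 1e-6
    return log_pi, rates

def posterior_over_stimuli(r, models, log_prior=None):
    """Bayesian decoding: p(stimulus | response r) given one fitted mixture per stimulus."""
    log_p = np.array([logsumexp(log_pi + (r @ np.log(rates).T
                                          - rates.sum(1) - gammaln(r + 1).sum()))
                      for log_pi, rates in models])
    if log_prior is not None:
        log_p = log_p + log_prior
    return np.exp(log_p - logsumexp(log_p))

# Hypothetical usage:
# models = [fit_poisson_mixture(counts) for counts in spike_counts_by_stimulus]
# posterior = posterior_over_stimuli(single_trial_counts, models)
```

Although each mixture component is conditionally independent across neurons, the shared latent component induces covariability in the pooled responses, which is the basic mechanism by which such models can capture noise correlations.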

2020
Author(s): Sacha Sokoloski, Amir Aschner, Ruben Coen-Cagli

The activity of a neural population encodes information about the stimulus that caused it, and decoding population activity reveals how neural circuits process that information. Correlations between neurons strongly impact both encoding and decoding, yet we still lack models that simultaneously capture stimulus encoding by large populations of correlated neurons and allow for accurate decoding of stimulus information, thus limiting our quantitative understanding of the neural code. To address this, we propose a class of models of large-scale population activity based on the theory of exponential family distributions. We apply our models to macaque primary visual cortex (V1) recordings, and show they capture a wide range of response statistics, facilitate accurate Bayesian decoding, and provide interpretable representations of fundamental properties of the neural code. Ultimately, our framework could allow researchers to quantitatively validate predictions of theories of neural coding against both large-scale response recordings and cognitive performance.


2008, Vol 20 (1), pp. 146-175
Author(s): F. Klam, R. S. Zemel, A. Pouget

The codes obtained from the responses of large populations of neurons are known as population codes. Several studies have shown that the amount of information conveyed by such codes, and the format of this information, are highly dependent on the pattern of correlations. However, very little is known about the impact of response correlations (as found in actual cortical circuits) on neural coding. To address this problem, we investigated the properties of population codes obtained from motion energy filters, which provide one of the best models for motion selectivity in early visual areas. It is therefore likely that the correlations that arise among energy filters also arise among motion-selective neurons. We adopted an ideal observer approach to analyze filter responses to three sets of images: noisy sine gratings, random dot kinematograms, and images of natural scenes. We report that in our model, the structure of the population code varies with the type of image. We also show that for all sets of images, correlations convey a large fraction of the information: 40% to 90% of the total information. Moreover, ignoring those correlations when decoding leads to considerable information loss, ranging from 50% to 93% depending on the image type. Finally, we show that it is important to consider a large population of motion energy filters in order to see the impact of correlations. Studying pairs of neurons, as is often done experimentally, can underestimate the effect of correlations.
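A self-contained toy comparison, not the paper's motion-energy analysis, of the central point: decoding correlated population responses once with the full covariance and once with a correlation-blind (diagonal) covariance; all numbers and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 5000
mu_a = rng.normal(5.0, 1.0, n_neurons)          # mean response to stimulus A
mu_b = rng.normal(5.5, 1.0, n_neurons)          # mean response to stimulus B
A = rng.normal(0, 0.3, (n_neurons, n_neurons))
cov = A @ A.T + np.eye(n_neurons)               # shared covariance with correlations

resp_a = rng.multivariate_normal(mu_a, cov, n_trials)
resp_b = rng.multivariate_normal(mu_b, cov, n_trials)

def accuracy(cov_model):
    """Percent correct of a Gaussian maximum-likelihood decoder that assumes cov_model."""
    prec = np.linalg.inv(cov_model)
    def loglik(r, mu):
        d = r - mu
        return -0.5 * np.einsum('ij,jk,ik->i', d, prec, d)
    correct_a = loglik(resp_a, mu_a) > loglik(resp_a, mu_b)
    correct_b = loglik(resp_b, mu_b) > loglik(resp_b, mu_a)
    return np.mean(np.concatenate([correct_a, correct_b]))

print("full-covariance decoder accuracy:", accuracy(cov))
print("correlation-blind decoder accuracy:", accuracy(np.diag(np.diag(cov))))
```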


2021
Author(s): Charles R Heller, Stephen V David

Rapidly developing technology for large-scale neural recordings has allowed researchers to measure the activity of hundreds to thousands of neurons at single-cell resolution in vivo. Neural decoding analyses are a widely used tool for investigating what information is represented in this complex, high-dimensional neural population activity. Most population decoding methods assume that correlated activity between neurons has been estimated accurately. In practice, this requires large amounts of data, both across observations and across neurons. Unfortunately, most experiments are fundamentally constrained by practical variables that limit the number of times the neural population can be observed under a single stimulus and/or behavior condition. Therefore, new analytical tools are required to study neural population coding while taking these limitations into account. Here, we present a simple and interpretable method for dimensionality reduction that allows neural decoding metrics to be calculated reliably, even when experimental trial numbers are limited. We illustrate the method using simulations and compare its performance to standard approaches for dimensionality reduction and decoding by applying it to single-unit electrophysiological data collected from auditory cortex.
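A minimal sketch of this kind of trial-limited, decoding-oriented dimensionality reduction, assuming (as one illustrative choice, not necessarily the authors' exact method) a two-dimensional subspace spanned by the stimulus-difference axis and the leading noise component.

```python
import numpy as np

def decoding_subspace(resp_a, resp_b):
    """Return a (neurons x 2) orthonormal basis: signal axis plus first noise PC.
    resp_a, resp_b: (trials, neurons) responses to two stimulus/behavior conditions."""
    signal = resp_a.mean(0) - resp_b.mean(0)
    signal /= np.linalg.norm(signal)
    # Trial-to-trial residuals pooled across conditions define the noise space
    noise = np.vstack([resp_a - resp_a.mean(0), resp_b - resp_b.mean(0)])
    _, _, vt = np.linalg.svd(noise, full_matrices=False)
    pc1 = vt[0]
    pc1 -= (pc1 @ signal) * signal          # orthogonalize against the signal axis
    pc1 /= np.linalg.norm(pc1)
    return np.stack([signal, pc1], axis=1)

# Hypothetical usage: project trial-limited data into the 2-D subspace first, so the
# decoder only needs a 2x2 covariance estimate rather than a full neuron-by-neuron one.
# basis = decoding_subspace(resp_a, resp_b)
# low_d_a, low_d_b = resp_a @ basis, resp_b @ basis
```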


1969, Vol 8 (1), pp. 7-11
Author(s): H. B. Newcombe

Methods are described for deriving personal and family histories of birth, marriage, procreation, ill health and death, for large populations, from existing civil registrations of vital events and the routine records of ill health. Computers have been used to group together and »link« the separately derived records pertaining to successive events in the lives of the same individuals and families, rapidly and on a large scale. Most of the records employed are already available as machine-readable punchcards and magnetic tapes, for statistical and administrative purposes, and only minor modifications have been made to the manner in which these are produced. As applied to the population of the Canadian province of British Columbia (currently about 2 million people) these methods have already yielded substantial information on the risks of disease: a) in the population, b) in relation to various parental characteristics, and c) as correlated with previous occurrences in the family histories.
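A hypothetical sketch of the record-linkage idea, assuming records are compared field by field and agreement weights are summed to score candidate links; the fields, weights and example names are illustrative, not Newcombe's actual procedure.

```python
from dataclasses import dataclass

@dataclass
class Record:
    surname: str
    given_name: str
    birth_year: int
    birthplace: str

# Illustrative agreement weights: rarer, more discriminating fields count more.
WEIGHTS = {"surname": 4.0, "given_name": 2.0, "birth_year": 3.0, "birthplace": 1.0}

def link_score(a: Record, b: Record) -> float:
    """Sum agreement weights over fields; higher scores mean a likelier link."""
    score = 0.0
    for field, w in WEIGHTS.items():
        score += w if getattr(a, field) == getattr(b, field) else -w / 2
    return score

birth = Record("Ayre", "John", 1921, "Victoria BC")
marriage = Record("Ayre", "John", 1921, "Vancouver BC")
print(link_score(birth, marriage))   # 8.5 -> probably the same individual
```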


2021, Vol 70, pp. 64-73
Author(s): Cole Hurwitz, Nina Kudryashova, Arno Onken, Matthias H. Hennig

2012, Vol 4 (4), pp. 475-504
Author(s): Lindsey N. Kingston, Saheli Datta

Norms of global responsibility have changed significantly since the 1948 Universal Declaration of Human Rights (UDHR), and today’s international community critically considers responsibilities within and beyond state borders, as evidenced by the adoption of the Responsibility to Protect (R2P) doctrine. From this starting point, protection must be extended to large populations susceptible to structural violence – social harms resulting from the pervasive and persistent impact of economic, political and cultural violence in societies. In order to show the potential of expanded conceptions of global responsibility, this article proceeds as follows: First, a discussion of the evolving concepts of responsibility outlines a shift in thinking about sovereignty that creates a multilayered system of responsibility. This section defines key concepts and highlights an ‘unbundled R2P’ framework for approaching structural violence. Second, an overview of two vulnerable populations – internally displaced persons (IDPs) and the stateless – illustrates that large-scale cases of state abuse and neglect are not limited to acts of physical violence, and that pervasive structural violence requires further attention from the international community. Lastly, recommendations are provided for expanding the scope of global responsibility in order to assist the internally displaced and the stateless. These recommendations address who is responsible, when global responsibility is warranted, and how such responsibility should be implemented.


2018, Vol 5 (3), pp. 172265
Author(s): Alexis R. Hernández, Carlos Gracia-Lázaro, Edgardo Brigatti, Yamir Moreno

We introduce a general framework for exploring the problem of selecting a committee of representatives, with the aim of studying a networked voting rule based on a decentralized large-scale platform that can ensure strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law, and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of a few individuals with very high connectivity, who can have a marginally negative effect on the committee selection process.
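One plausible reading of the two scaling statements above, written as formulas: R(k) denotes the representativeness of a committee of size k, N the population size, and N_c a community size; the proportionality constants are left unspecified. This is a paraphrase of the abstract, not the paper's exact result.

```latex
1 - R(k) \;\propto\; \frac{1}{\sqrt{k}},
\qquad
\frac{k}{N} \;\propto\; \frac{1}{N_c}
```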


2016
Author(s): George Dimitriadis, Joana Neto, Adam R. Kampff

Electrophysiology is entering the era of ‘Big Data’. Multiple probes, each with hundreds to thousands of individual electrodes, are now capable of simultaneously recording from many brain regions. The major challenge confronting these new technologies is transforming the raw data into physiologically meaningful signals, i.e. single unit spikes. Sorting the spike events of individual neurons from a spatiotemporally dense sampling of the extracellular electric field is a problem that has attracted much attention [22, 23], but is still far from solved. Current methods still rely on human input and thus become infeasible as the size of the data sets grows exponentially. Here we introduce the t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction method [27] as a visualization tool in the spike sorting process. t-SNE embeds the n-dimensional extracellular spikes (n = number of features by which each spike is decomposed) into a low (usually two) dimensional space. We show that such embeddings, even starting from different feature spaces, form obvious clusters of spikes that can be easily visualized and manually delineated with a high degree of precision. We propose that these clusters represent single units and test this assertion by applying our algorithm to labeled data sets from both hybrid [23] and paired juxtacellular/extracellular recordings [15]. We have released a graphical user interface (GUI) written in Python as a tool for the manual clustering of the t-SNE embedded spikes, and as a tool for an informed overview and fast manual curation of results from other clustering algorithms. Furthermore, the generated visualizations offer evidence in favor of the use of probes with higher density and smaller electrodes. They also graphically demonstrate the diverse nature of the sorting problem when spikes are recorded with different methods and arise from regions with different background spiking statistics.
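A minimal sketch of the visualization step described above, assuming spikes have already been detected and reduced to per-spike feature vectors; it uses scikit-learn's t-SNE as a stand-in for the authors' released tool, and all parameter choices are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

def embed_spikes(waveforms, n_features=10, perplexity=30.0, seed=0):
    """waveforms: (n_spikes, n_channels * n_samples) array of extracted spike snippets.
    Returns an (n_spikes, 2) embedding suitable for manual cluster delineation."""
    # Decompose each spike into a modest number of features before embedding
    features = PCA(n_components=n_features).fit_transform(waveforms)
    return TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed).fit_transform(features)

# embedding = embed_spikes(waveforms)   # then scatter-plot and delineate clusters by hand
```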


1982, Vol 34 (2), pp. 374-405
Author(s): Ethan Akin

A symmetric game consists of a set of pure strategies indexed by {0, …, n} and a real payoff matrix (a_ij). When two players choose strategies i and j, the payoffs are a_ij and a_ji to the i-player and j-player respectively. In the classical game theory of von Neumann and Morgenstern [16], the payoffs are measured in units of utility, i.e., desirability, or in units of some desirable good, e.g. money. The problem of game theory is that of a rational player who seeks to choose a strategy or mixture of strategies which will maximize his return. In the evolutionary game theory of Maynard Smith and Price [13], we look at large populations of game players. Each player's opponents are selected randomly from the population, and no information about the opponent is available to the player. For each player the choice of strategy is a fixed inherited characteristic.
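The standard formalization implied by this setup: if the two players use mixed strategies p and q (probability vectors over the pure strategies 0, …, n), the p-player's expected payoff is the bilinear form

```latex
E(p, q) \;=\; \sum_{i=0}^{n} \sum_{j=0}^{n} p_i \, a_{ij} \, q_j \;=\; p^{\mathsf{T}} A q
```

so the rational player of classical game theory chooses p to maximize E(p, q), while in the evolutionary setting p is the population's inherited mix of strategies and E(e_i, p), with e_i the i-th pure strategy viewed as a mixed strategy, is the expected payoff to a pure i-strategist matched against a randomly drawn opponent.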


2020, Vol 8 (1), pp. 89-119
Author(s): Nathalie Vissers, Pieter Moors, Dominique Genin, Johan Wagemans

Artistic photography is an interesting, but often overlooked, medium within the field of empirical aesthetics. Grounded in an art–science collaboration with art photographer Dominique Genin, this project focused on the relationship between the complexity of a photograph and its aesthetic appeal (beauty, pleasantness, interest). An artistic series of 24 semi-abstract photographs that play with multiple layers, recognisability versus unrecognisability, and complexity was specifically created and selected for the project. A large-scale online study with a broad range of individuals (n = 453, varying in age, gender and art expertise) was set up. Exploratory data-driven analyses revealed two clusters of individuals, who responded differently to the photographs. Despite the semi-abstract nature of the photographs, differences seemed to be driven more consistently by the ‘content’ of the photograph than by its complexity levels. No consistent differences were found between clusters in age, gender or art expertise. Together, these results highlight the importance of exploratory, data-driven work in empirical aesthetics to complement and nuance findings from hypothesis-driven studies, as such analyses go beyond a priori assumptions, explore underlying clusters of participants with different response patterns, and point towards new avenues for future research. Data and code for the analyses reported in this article can be found at https://osf.io/2fws6/.
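A hypothetical sketch (not the authors' analysis code) of the kind of exploratory, data-driven clustering of participants described above, grouping individuals by their rating profiles across the photographs; the function name and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_raters(ratings, n_clusters=2, seed=0):
    """ratings: (n_participants, n_photographs) matrix of e.g. beauty ratings.
    Returns one cluster label per participant."""
    # Center each participant's ratings so clusters reflect response patterns,
    # not overall rating level.
    profiles = ratings - ratings.mean(axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=seed).fit_predict(profiles)
```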

