Fidelity of the Ensemble Code for Visual Motion in Primate Retina

2005 ◽  
Vol 94 (1) ◽  
pp. 119-135 ◽  
Author(s):  
E. S. Frechette ◽  
A. Sher ◽  
M. I. Grivich ◽  
D. Petrusca ◽  
A. M. Litke ◽  
...  

Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of ∼100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of ∼1%. The elementary motion signal was conveyed in ∼10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector-average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.
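The peak-versus-centroid comparison in the final result lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of the two readout schemes applied to a bank of speed-tuned units with Gaussian tuning on a log-speed axis; the preferred speeds, tuning width, and noise level are all illustrative assumptions.

```python
import numpy as np

# A bank of units with Gaussian tuning on a log-speed axis responds to a
# stimulus; "peak" readout reports the preferred speed of the most active
# unit, "centroid" readout reports the activity-weighted average.
# Preferred speeds, tuning width, and noise level are assumptions.

rng = np.random.default_rng(0)
preferred = np.logspace(-1, 2, 50)   # preferred speeds (deg/s), assumed
sigma = 0.3                          # tuning width on the log-speed axis

def unit_responses(speed, noise_sd=0.05):
    """Noisy responses of the speed-tuned population to one stimulus."""
    r = np.exp(-np.log(speed / preferred) ** 2 / (2 * sigma ** 2))
    return r + rng.normal(0.0, noise_sd, r.shape)

def peak_readout(r):
    """Winner-take-all: preferred speed of the most active unit."""
    return preferred[np.argmax(r)]

def centroid_readout(r):
    """Vector average: activity-weighted mean on the log-speed axis."""
    w = np.clip(r, 0.0, None)
    return np.exp(np.sum(w * np.log(preferred)) / np.sum(w))

r = unit_responses(10.0)
print("peak:", peak_readout(r), "centroid:", centroid_readout(r))
```

With broad tuning, the vector average is pulled toward the population's center of mass by activity in the tails, which is one intuition for why the abstract finds the peak readout more precise.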

2018 ◽  
Vol 115 (13) ◽  
pp. 3267-3272 ◽  
Author(s):  
Christophe Gardella ◽  
Olivier Marre ◽  
Thierry Mora

The brain has no direct access to physical stimuli but only to the spiking activity evoked in sensory organs. It is unclear how the brain can learn representations of the stimuli based on those noisy, correlated responses alone. Here we show how to build an accurate distance map of responses solely from the structure of the population activity of retinal ganglion cells. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.
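As a rough illustration of how a latent-variable model of population activity can induce a distance between spike trains, the sketch below maps binary response vectors through a Restricted Boltzmann Machine's hidden layer and compares them there. This is a simplified stand-in for the paper's Temporal RBM: the weights are random placeholders for a trained model, and the layer sizes are assumptions.

```python
import numpy as np

# Sketch: a trained RBM maps a binary population response v to hidden-unit
# activation probabilities h(v); a distance between two spike patterns can
# then be defined in that latent space. Weights below are placeholders
# standing in for a trained model, not the authors' parameters.

rng = np.random.default_rng(1)
n_visible, n_hidden = 100, 20                  # assumed sizes
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # "trained" weights (placeholder)
b = np.zeros(n_hidden)                         # hidden biases (placeholder)

def hidden_probs(v):
    """P(h = 1 | v) for a binary response vector v."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b)))

def response_distance(v1, v2):
    """Distance between spike patterns in the model's latent space."""
    return np.linalg.norm(hidden_probs(v1) - hidden_probs(v2))

v1 = rng.integers(0, 2, n_visible)
v2 = rng.integers(0, 2, n_visible)
print(response_distance(v1, v2))
```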


eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Matthew D Golub ◽  
Byron M Yu ◽  
Steven M Chase

To successfully guide limb movements, the brain takes in sensory information about the limb, internally tracks the state of the limb, and produces appropriate motor commands. It is widely believed that this process uses an internal model, which describes our prior beliefs about how the limb responds to motor commands. Here, we leveraged a brain-machine interface (BMI) paradigm in rhesus monkeys and novel statistical analyses of neural population activity to gain insight into moment-by-moment internal model computations. We discovered that a mismatch between subjects’ internal models and the actual BMI explains roughly 65% of movement errors, as well as long-standing deficiencies in BMI speed control. We then used the internal models to characterize how the neural population activity changes during BMI learning. More broadly, this work provides an approach for interpreting neural population activity in the context of how prior beliefs guide the transformation of sensory input to motor output.
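A toy simulation can make the mismatch idea concrete. In the sketch below (all quantities assumed, not taken from the paper), the BMI maps neural activity to cursor velocity through one linear mapping while the subject plans with a slightly different one; the angle between intended and actual cursor motion is the error attributable to the internal-model mismatch.

```python
import numpy as np

# Toy illustration: the BMI maps neural activity u to cursor velocity via
# matrix A, while the subject plans with an internal model B != A. The
# subject picks u so that B @ u points at the target; the cursor actually
# moves along A @ u, and the angle between the two is the movement error
# attributable to the model mismatch. All matrices are assumptions.

rng = np.random.default_rng(2)
n_units = 20
A = rng.normal(size=(2, n_units))        # actual BMI mapping (placeholder)
B = A + 0.3 * rng.normal(size=A.shape)   # subject's mismatched internal model

target_dir = np.array([1.0, 0.0])        # unit vector toward the target
u = np.linalg.pinv(B) @ target_dir       # command that is correct under B
actual = A @ u                           # velocity the BMI actually produces
cos_err = actual @ target_dir / np.linalg.norm(actual)
print("angular error (deg):", np.degrees(np.arccos(cos_err)))
```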


Author(s):  
Min Guo ◽  
Yinghua Yu ◽  
Jiajia Yang ◽  
Jinglong Wu

To perceive our world, we make full use of multiple sources of sensory information derived from different modalities, which include five basic sensory systems: visual, auditory, tactile, olfactory, and gustatory. In the real world, we normally acquire information from different sensory receptors simultaneously, so multisensory integration in the brain plays an important role in performance and perception. This review focuses on crossmodal interactions between vision and touch. Many previous studies have indicated that visual information affects tactile perception and, in return, that tactile stimulation also activates MT, the main visual motion-processing area. However, few studies have explored how crossmodal information between vision and touch is processed. Here, the authors highlight the brain's crossmodal processing mechanism. They show that integration between vision and touch proceeds in two stages: combination and integration.


2021 ◽  
Author(s):  
Jesús Pérez-Ortega ◽  
Joaquín Araya ◽  
Cristobal Ibaceta ◽  
Rubén Herzog ◽  
María-José Escobar ◽  
...  

Abstract Even though the retinal microcircuit organization has been described in detail at the single-cell level, little is known about how the coordinated activity of groups of retinal cells encodes and processes parallel streams of information representing the spatial and temporal structure of changing environmental conditions. To describe the population dynamics of retinal neuronal ensembles, we used microelectrode array recordings that capture the simultaneous activity of hundreds of retinal ganglion cells in response to a short movie captured in the natural environment in which our subject develops its visual behaviors. The vectorization of population activity allowed the identification of retinal neuronal ensembles that synchronize to specific segments of natural stimuli. These synchronous retinal neuronal ensembles were reliably activated by the same stimuli across trials, indicating a robust population response of retinal microcircuits. The generation of asynchronous events required integration over a physiologically meaningful time window larger than 80 ms, demonstrating that the time integration of retinal neuronal ensembles filters out non-structured visual information. Interestingly, individual neurons could be part of several ensembles, indicating that parallel circuits could encode changes in environmental conditions. We conclude that parallel neuronal ensembles could represent the functional unit of retinal computations and propose that further study of retinal neuronal ensembles could reveal emergent properties of retinal circuits that the activity of individual cells cannot explain.
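The vectorization step described above can be illustrated with a short sketch: spike trains are binned into binary population vectors, and bins whose vectors are sufficiently similar are grouped as candidate ensemble activations. The cosine-similarity threshold below is a simple stand-in for whatever clustering procedure the authors used; the 80 ms window comes from the abstract, everything else is assumed toy data.

```python
import numpy as np

# Bin spikes into binary population vectors, then collect the time bins
# whose vectors resemble a seed pattern. A real analysis would cluster
# all pairwise similarities; this threshold test is a minimal stand-in.

rng = np.random.default_rng(3)
n_cells, n_bins = 200, 500
bin_ms = 80                                    # integration window (abstract)
spikes = rng.random((n_cells, n_bins)) < 0.05  # binary activity (toy data)

def cosine_sim(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

seed = spikes[:, 0].astype(float)              # population vector of bin 0
ensemble_bins = [t for t in range(n_bins)
                 if cosine_sim(seed, spikes[:, t].astype(float)) > 0.3]
print(len(ensemble_bins), "bins share the seed ensemble pattern")
```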


2018 ◽  
Vol 4 (1) ◽  
pp. 165-192 ◽  
Author(s):  
Wei Wei

Visual motion on the retina activates a cohort of retinal ganglion cells (RGCs). This population activity encodes multiple streams of information extracted by parallel retinal circuits. Motion processing in the retina is best studied in the direction-selective circuit. The main focus of this review is the neural basis of direction selectivity, which has been investigated in unprecedented detail using state-of-the-art functional, connectomic, and modeling methods. Mechanisms underlying the encoding of other motion features by broader RGC populations are also discussed. Recent discoveries at both single-cell and population levels highlight the dynamic and stimulus-dependent engagement of multiple mechanisms that collectively implement robust motion detection under diverse visual conditions.


2017 ◽  
Author(s):  
Christophe Gardella ◽  
Olivier Marre ◽  
Thierry Mora

The brain has no direct access to physical stimuli, but only to the spiking activity evoked in sensory organs. It is unclear how the brain can structure its representation of the world based on differences between those noisy, correlated responses alone. Here we show how to build a distance map of responses from the structure of the population activity of retinal ganglion cells, allowing for the accurate discrimination of distinct visual stimuli from the retinal response. We introduce the Temporal Restricted Boltzmann Machine to learn the spatiotemporal structure of the population activity, and use this model to define a distance between spike trains. We show that this metric outperforms existing neural distances at discriminating pairs of stimuli that are barely distinguishable. The proposed method provides a generic and biologically plausible way to learn to associate similar stimuli based on their spiking responses, without any other knowledge of these stimuli.


1999 ◽  
Vol 13 (2) ◽  
pp. 117-125 ◽  
Author(s):  
Laurence Casini ◽  
Françoise Macar ◽  
Marie-Hélène Giard

Abstract The experiment reported here was aimed at determining whether the level of brain activity can be related to performance in trained subjects. Two tasks were compared: a temporal task and a linguistic task. An array of four letters appeared on a screen. In the temporal task, subjects had to decide whether the letters remained on the screen for a short or a long duration, as learned in a practice phase. In the linguistic task, they had to determine whether the four letters could form a word or not (anagram task). These tasks allowed us to compare the level of brain activity obtained for correct and incorrect responses. The current density measures recorded over prefrontal areas showed a relationship between performance and the level of activity in the temporal task only: the level of activity obtained with correct responses was lower than that obtained with incorrect responses. This suggests that good temporal performance could be the result of an efficacious but economical information-processing mechanism in the brain. In addition, the absence of this relation in the anagram task raises the question of whether the relation is specific to the processing of sensory information only.


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features, and that the stimulus structures the perceptual object. The problem for this view is posed by perceptual biases, which are responsible for distortions and for the subjectivity of perceptual experience. These biases are increasingly studied as constitutive factors of brain processes in recent neuroscience. In neural network models, the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective in which smells are thought of as stable percepts computationally linked to external objects such as odorous molecules. Perception is presented instead as a measure of changing signal ratios in an environment, informed by expectancy effects from top-down processes.


2007 ◽  
Vol 46 (6) ◽  
pp. 742-756 ◽  
Author(s):  
Gyu Won Lee ◽  
Alan W. Seed ◽  
Isztar Zawadzki

Abstract The information on the time variability of drop size distributions (DSDs) as seen by a disdrometer is used to illustrate the structure of uncertainty in radar estimates of precipitation. Based on this, a method to generate the space–time variability of raindrop size distributions is developed. The model generates one moment of the DSD conditioned on another; in particular, radar reflectivity Z is used to obtain rainfall rate R. Because two moments of the DSD are sufficient to capture most of its variability, the model can be used to calculate DSDs and any other moments of interest. A deterministic component of the precipitation field is obtained from a fixed R–Z relationship. Two components of DSD variability are then added to the deterministic precipitation field. The first represents the systematic departures from the fixed R–Z relationship that are expected from different precipitation regimes and is generated using a simple broken-line model. The second represents the fluctuations around the R–Z relationship within a particular regime and uses a space–time multiplicative cascade model. The temporal structure of the stochastic fluctuations is investigated using disdrometer data. Assuming the Taylor hypothesis, the spatial structure of the fluctuations is obtained, and a stochastic model of the spatial distribution of the DSD variability is constructed. The consistency of the model is validated using concurrent radar and disdrometer data.
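The deterministic-plus-stochastic structure described above can be sketched compactly: rain rate is obtained from reflectivity through a fixed R–Z power law and then multiplied by a lognormal fluctuation standing in for the broken-line and cascade components. The Marshall–Palmer coefficients (a = 200, b = 1.6) are a standard choice, not the paper's fitted values, and the fluctuation scale is an assumption.

```python
import numpy as np

# Deterministic R from a fixed Z = a * R**b power law, times a lognormal
# fluctuation that stands in for the paper's two stochastic components
# (systematic regime departures and within-regime cascade fluctuations).

rng = np.random.default_rng(4)

def rain_rate(Z_dBZ, a=200.0, b=1.6, fluct_sd=0.2):
    """R (mm/h) from reflectivity in dBZ, with multiplicative variability."""
    Z_lin = 10.0 ** (Z_dBZ / 10.0)           # dBZ -> linear Z (mm^6 m^-3)
    R_det = (Z_lin / a) ** (1.0 / b)         # deterministic R-Z inversion
    return R_det * np.exp(rng.normal(0.0, fluct_sd))  # stochastic departure

print(rain_rate(35.0))                       # ~5.6 mm/h before fluctuation
```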


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
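Since the framework is built on forward models and Kalman filters, a minimal emulator loop is easy to sketch: the internal model is driven by an efference copy of the motor command to predict the next limb state, and noisy sensory feedback corrects the prediction. All dynamics and noise parameters below are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Minimal Kalman-filter emulator: predict the next limb state from the
# efference copy of the motor command, then correct the prediction with
# noisy sensory feedback. Dynamics and noise levels are assumptions.

rng = np.random.default_rng(5)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (position, velocity)
B = np.array([[0.0], [0.1]])             # effect of the motor command
H = np.array([[1.0, 0.0]])               # sensors report position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[1e-2]])                   # sensory noise covariance

x = np.zeros((2, 1))                     # true limb state
x_hat, P = np.zeros((2, 1)), np.eye(2)   # emulator's estimate
for _ in range(50):
    u = np.array([[1.0]])                # motor command (efference copy)
    x = A @ x + B @ u + rng.multivariate_normal([0, 0], Q).reshape(2, 1)
    # predict: drive the forward model in parallel with the body
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Q
    # update: reconcile the prediction with noisy sensory feedback
    z = H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), (1, 1))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
print("true:", x.ravel(), "estimate:", x_hat.ravel())
```

Running the same loop with the sensory update skipped corresponds to the off-line mode the abstract describes, where the emulator alone generates imagery and predicted outcomes.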

