Expectation violations produce error signals in mouse V1

2022 ◽  
Author(s):  
Byron H Price ◽  
Cambria M Jensen ◽  
Anthony A Khoudary ◽  
Jeffrey P Gavornik

Repeated exposure to visual sequences changes the form of evoked activity in the primary visual cortex (V1). Predictive coding theory provides a potential explanation: plasticity shapes cortical circuits to encode spatiotemporal predictions, and subsequent responses are modulated by the degree to which actual inputs match these expectations. Here we use a recently developed statistical modeling technique called Model-Based Targeted Dimensionality Reduction (MbTDR) to study visually evoked dynamics in mouse V1 in the context of a previously described experimental paradigm called "sequence learning". We report that evoked spiking activity changed significantly with training, in a manner generally consistent with the predictive coding framework. Following training, neural responses to expected stimuli were suppressed in a late window (100–150 ms) after stimulus onset, while responses to novel stimuli were not. Omitting predictable stimuli led to increased firing at the expected time of stimulus onset, but only in trained mice. Substituting a novel stimulus for a familiar one led to changes in firing that persisted for at least 300 ms. In addition, we show that spiking data can be used to accurately decode time within the sequence. Our findings are consistent with the idea that plasticity in early visual circuits is involved in coding spatiotemporal information.
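
The decoding claim invites a concrete illustration. The sketch below is not the authors' MbTDR model; it is a generic multinomial logistic-regression decoder over binned spike counts, with all array shapes, bin counts, and the synthetic Poisson data standing in as assumptions.

```python
# Minimal sketch: decode time-within-sequence from binned spike counts.
# NOT the authors' MbTDR pipeline; a generic decoding baseline with
# illustrative shapes and synthetic (Poisson) data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 200, 50, 12      # assumed sizes

# X: spike counts for each (trial, time-bin) sample; y: time-bin label to decode.
X = rng.poisson(3.0, size=(n_trials * n_bins, n_neurons)).astype(float)
y = np.tile(np.arange(n_bins), n_trials)

clf = LogisticRegression(max_iter=1000)        # softmax over the 12 time bins
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_bins:.2f})")
```

With the synthetic data above, accuracy sits at chance; the abstract's claim is that on real V1 spiking data it is well above it.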

2007 ◽  
Vol 98 (4) ◽  
pp. 2110-2121 ◽  
Author(s):  
Stephanie A. McMains ◽  
Hilda M. Fehd ◽  
Tatiana-Aloi Emmanouil ◽  
Sabine Kastner

Selective attention modulates neural activity in the visual system both in the presence and in the absence of visual stimuli. When subjects direct attention to a particular location in a visual scene in anticipation of stimulus onset, there is an increase in baseline activity. How do such baseline increases relate to the attentional modulation of stimulus-driven activity? Using functional magnetic resonance imaging, we demonstrate that baseline increases related to the expectation of motion or color stimuli at a peripheral target location do not predict the modulation of neural responses evoked by these stimuli when attended. In areas such as MT and TEO that were more effectively activated by one stimulus type than the other, attentional modulation of visually evoked activity depended on the stimulus preference of a visual area and was stronger for the effective than for the noneffective stimulus. In contrast, baseline increases did not reflect the stimulus preference of a visual area. Rather, these signals were shown to be spatially specific and, in the experimental paradigms under study, appeared to be dominated by the cue's location information rather than its feature information. These findings provide evidence that baseline increases in visual cortex during cue periods do not reflect the activation of a memory template that includes particular stimulus properties of the expected target, but rather carry information about the location of an expected target stimulus. In addition, when the stimulus contained both color and motion, an object-based attention effect was observed, with significant attentional modulation in the area that responded preferentially to the unattended feature.
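
The dissociation being tested, cue-period baseline shifts versus attentional modulation of stimulus-evoked responses, reduces to two simple contrasts on a region-of-interest time course. A toy sketch, with all BOLD values invented:

```python
# Toy contrast between the two quantities the study dissociates.
# All BOLD values are invented; units are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
fixation = rng.normal(0.0, 0.1, 20)    # pre-cue fixation period
cue = rng.normal(0.3, 0.1, 20)         # attending the location, no stimulus yet
attended = rng.normal(1.2, 0.1, 20)    # stimulus-evoked response, attended
unattended = rng.normal(0.8, 0.1, 20)  # stimulus-evoked response, unattended

baseline_increase = cue.mean() - fixation.mean()
attentional_modulation = attended.mean() - unattended.mean()

print(f"baseline increase:      {baseline_increase:.2f}")
print(f"attentional modulation: {attentional_modulation:.2f}")
```

The study's point is that, across visual areas, the first quantity did not predict the second.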


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Betina Korka ◽  
Erich Schröger ◽  
Andreas Widmann

Our brains continuously build and update predictive models of the world, with predictions drawn, for example, from sensory regularities and/or from our own actions. Yet recent results in the auditory system indicate that stochastic regularities may not be easily encoded when a rare medium-pitch deviant is presented between frequent high- and low-pitch standard sounds in random order, as reflected in the lack of sensory prediction error event-related potentials [i.e., mismatch negativity (MMN)]. We wanted to test the implication of predictive coding theory that predictions based on higher-order generative models (here, based on action intention) are fed top-down in the hierarchy to sensory levels. Participants produced random sequences of high- and low-pitch sounds by button presses in two conditions: in a "specific" condition, one button produced high- and the other low-pitch sounds; in an "unspecific" condition, both buttons randomly produced high- or low-pitch sounds. Rare medium-pitch deviants elicited larger MMN and N2 responses in the "specific" compared to the "unspecific" condition, despite equal sound probabilities. These results thus demonstrate that action-effect predictions can boost stochastic regularity-based predictions and engage higher-order deviance detection processes, extending previous notions on the role of action predictions at sensory levels.
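
The MMN measure at the centre of this result is a difference wave: the average event-related response to deviants minus the average response to standards, summarized in a latency window. A minimal sketch, with sampling rate, window, and synthetic epochs as assumptions:

```python
# Sketch of the standard MMN computation: deviant-average minus
# standard-average, then mean amplitude in a typical MMN window.
# Epoch counts, sampling rate, and window are assumptions.
import numpy as np

fs = 500                                       # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.4, 1 / fs)               # epoch: -100 to +400 ms

rng = np.random.default_rng(2)
standards = rng.normal(0, 1, (300, t.size))    # epochs x samples, one channel
deviants = rng.normal(0, 1, (50, t.size))

difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)

window = (t >= 0.10) & (t <= 0.25)             # typical MMN latency range
mmn_amplitude = difference_wave[window].mean()
print(f"mean amplitude in MMN window: {mmn_amplitude:.3f} (a.u.)")
```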


2021 ◽  
Author(s):  
Tao Yu ◽  
Shihui Han

Perceived cues signaling others' pain induce empathy, which in turn motivates altruistic behavior toward those who appear to be suffering. This perception-emotion-behavior reactivity is the core of human altruism but does not always occur in real-life situations. Here, by integrating behavioral and multimodal neuroimaging measures, we investigate the neural mechanisms underlying the functional role of beliefs about others' pain in modulating empathy and altruism. We show evidence that decreasing (or enhancing) beliefs about others' pain reduces (or increases) subjective estimation of others' painful emotional states and monetary donations to those who show pain expressions. Moreover, decreasing beliefs about others' pain attenuates neural responses to perceived cues signaling others' pain within 200 ms after stimulus onset and modulates neural responses to others' pain in the frontal cortices and temporoparietal junction. Our findings highlight beliefs about others' pain as a fundamental cognitive basis of human empathy and altruism and unravel the intermediate neural architecture.


2017 ◽  
Vol 372 (1714) ◽  
pp. 20160105 ◽  
Author(s):  
Rosy Southwell ◽  
Anna Baumann ◽  
Cécile Gal ◽  
Nicolas Barascud ◽  
Karl Friston ◽  
...  

In this series of behavioural and electroencephalography (EEG) experiments, we investigate the extent to which repeating patterns of sounds capture attention. Work in the visual domain has revealed attentional capture by statistically predictable stimuli, consistent with predictive coding accounts which suggest that attention is drawn to sensory regularities. Here, stimuli comprised rapid sequences of tone pips, arranged in regular (REG) or random (RAND) patterns. EEG data demonstrate that the brain rapidly recognizes predictable patterns, manifested as an increase in responses to REG relative to RAND sequences. This increase is reminiscent of the gain increase on neural responses to attended stimuli often seen in the neuroimaging literature, and is thus consistent with the hypothesis that predictable sequences draw attention. To test for attentional capture by auditory regularities behaviourally, we then used REG and RAND sequences in two different tasks. Overall, the pattern of results suggests that regularity does not capture attention. This article is part of the themed issue 'Auditory and visual scene analysis'.
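
The REG/RAND stimulus construction is easy to sketch: pips drawn from a fixed frequency pool, either cycling a randomly chosen subset (REG) or sampled independently (RAND). The pool size, cycle length, and pip duration below are illustrative assumptions, not the paper's exact parameters:

```python
# Sketch of REG vs RAND tone-pip sequences in the style of this paradigm.
# Pool size, cycle length, and pip duration are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
pool = np.logspace(np.log10(220), np.log10(2200), 20)  # candidate frequencies (Hz)

def reg_sequence(n_pips=60, cycle_len=10):
    """Regular pattern: a fixed random cycle of frequencies, repeated."""
    cycle = rng.choice(pool, size=cycle_len, replace=False)
    return np.tile(cycle, n_pips // cycle_len)

def rand_sequence(n_pips=60):
    """Random pattern: each pip frequency drawn independently."""
    return rng.choice(pool, size=n_pips, replace=True)

def synthesize(freqs, pip_dur=0.05, fs=44100):
    """Concatenate pure-tone pips into one waveform."""
    t = np.arange(0, pip_dur, 1 / fs)
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

reg_wave = synthesize(reg_sequence())
rand_wave = synthesize(rand_sequence())
```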


2017 ◽  
Vol 115 (1) ◽  
pp. 186-191 ◽  
Author(s):  
Matthew Chalk ◽  
Olivier Marre ◽  
Gašper Tkačik

A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. To this end, "efficient coding" posits that sensory neurons encode maximal information about their inputs given internal constraints. There exist, however, many variants of efficient coding (e.g., redundancy reduction, different formulations of predictive coding, robust coding, and sparse coding), differing in their regimes of applicability, in the relevance of signals to be encoded, and in the choice of constraints. It is unclear how these types of efficient coding relate or what is expected when different coding objectives are combined. Here we present a unified framework that encompasses previously proposed efficient coding models and extends to unique regimes. We show that optimizing neural responses to encode predictive information can lead them to either correlate or decorrelate their inputs, depending on the stimulus statistics; in contrast, at low noise, efficiently encoding the past always predicts decorrelation. Finally, we investigate coding of naturalistic movies and show that qualitatively different types of visual motion tuning and levels of response sparsity are predicted, depending on whether the objective is to recover the past or predict the future. Our approach promises a way to explain the observed diversity of sensory neural responses, as due to multiple functional goals and constraints fulfilled by different cell types and/or circuits.
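
The family of objectives being unified can be written schematically as a single constrained information maximization; the rendering below is consistent with the abstract but uses illustrative notation (Δ the temporal lag, C the coding capacity), not necessarily the paper's exact formulation:

```latex
% Schematic unified objective (notation illustrative, not the paper's):
% choose the encoding p(r_t | x) to maximize information about the
% stimulus at temporal lag \Delta, under a coding-capacity constraint.
\max_{p(r_t \mid x)} \; I(r_t ; x_{t+\Delta})
\qquad \text{s.t.} \qquad I(r_t ; x_t) \le C
% \Delta < 0: efficiently encode the past (predicts decorrelation at low noise);
% \Delta > 0: encode predictive information about the future.
```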


Author(s):  
Paul J. Whalen ◽  
Maital Neta ◽  
M. Justin Kim ◽  
Alison M. Mattek ◽  
F. C. Davis ◽  
...  

When it comes to being social, no nonverbal environmental cue is more important for humans than another person's facial expression. Here we consider facial expressions as naturally conditioned stimuli that, when presented as images in an experimental paradigm, evoke neural and behavioral responses that serve to decipher the predictive meaning of the expression. We will cover data showing that the expressions of others alter our attention to the environment, our biases in interpreting these facial expressions, and our neural responses within an amygdala-prefrontal circuitry related to normal variations in reported anxiety.


2020 ◽  
Author(s):  
Yingcan Carol Wang ◽  
Ediz Sohoglu ◽  
Rebecca A. Gilbert ◽  
Richard N. Henson ◽  
Matthew H. Davis

Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g. TRACE) propose that word recognition is achieved through direct inhibitory connections between units representing candidate words that share segments (e.g. hygiene and hijack share /haɪdʒ/). Manipulations that increase lexical uncertainty should therefore increase neural responses associated with word recognition when words cannot be uniquely identified (during the first syllable). In contrast, predictive-selection accounts (e.g. Predictive Coding) propose that spoken word recognition involves comparing heard and predicted speech sounds and using prediction error to update lexical representations. Increased lexical uncertainty in words like hygiene and hijack should then increase prediction error, and hence neural activity, only at later time points when different segments are predicted (during the second syllable). We collected MEG data to distinguish these two mechanisms and used a competitor-priming manipulation to change the prior probability of specific words. Lexical decision responses showed delayed recognition of target words (hygiene) following presentation of a neighbouring prime word (hijack) several minutes earlier. However, this effect was not observed with pseudoword primes (higent) or targets (hijure). Crucially, MEG responses in the STG showed greater neural responses for word-primed words after the point at which they were uniquely identified (after /haɪdʒ/ in hygiene) but not before, while similar changes were again absent for pseudowords. These findings are consistent with accounts of spoken word recognition in which neural computations of prediction error play a central role.

Significance statement: Effective speech perception is critical to daily life and involves computations that combine speech signals with prior knowledge of spoken words; that is, Bayesian perceptual inference. This study specifies the neural mechanisms that support spoken word recognition by testing two distinct implementations of Bayesian perceptual inference. Most established theories propose direct competition between lexical units, such that inhibition of irrelevant candidates leads to selection of critical words. Our results instead support predictive-selection theories (e.g. Predictive Coding): by comparing heard and predicted speech sounds, neural computations of prediction error can help listeners continuously update lexical probabilities, allowing for more rapid word identification.
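
The predictive-selection computation can be caricatured in a few lines of code: maintain a posterior over candidate words, let each candidate predict the next segment, and score the heard segment's surprisal (the prediction error), which is large only where candidates diverge. The two-word lexicon, ASCII segment coding, and likelihood values below are toy assumptions:

```python
# Toy sketch of predictive-selection word recognition: lexical probabilities
# are updated segment by segment via Bayes' rule; prediction error (surprisal)
# is large exactly where candidates diverge (the second syllable).
# Lexicon, priors, segment coding, and likelihoods are illustrative.
import math

lexicon = {
    "hygiene": ["hai", "dZ", "i:n"],   # shared opening segments...
    "hijack":  ["hai", "dZ", "ak"],    # ...diverging only at the end
}

def recognize(heard_segments, prior):
    posterior = dict(prior)
    for i, seg in enumerate(heard_segments):
        # Each candidate predicts its own i-th segment; score the match.
        likelihood = {w: 0.9 if segs[i] == seg else 0.1
                      for w, segs in lexicon.items()}
        # Prediction error ~ surprisal of the heard segment under the prediction.
        p_seg = sum(posterior[w] * likelihood[w] for w in lexicon)
        print(f"segment {seg!r}: surprisal = {-math.log(p_seg):.2f} nats")
        posterior = {w: posterior[w] * likelihood[w] / p_seg for w in lexicon}
    return posterior

# Competitor priming skews the prior toward the prime, delaying the target:
print(recognize(["hai", "dZ", "i:n"], {"hygiene": 0.3, "hijack": 0.7}))
```

Run on hygiene's segments with a prior skewed toward the primed competitor hijack, surprisal stays small across the shared /haɪdʒ/ and jumps at the second syllable, which is where the predictive-selection account locates the extra neural activity.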


2018 ◽  
Author(s):  
Hirokata Fukushima

Recent studies on interoception emphasize the importance of multisensory integration between interoception and exteroception. One of the methods frequently applied for assessing interoceptive sensitivity is the heartbeat discrimination task, in which individuals judge whether the timing of external stimuli (e.g., tones) is synchronized to their own heartbeat. Despite its extensive use in research, the neural dynamics underlying the temporal matching between interoceptive and exteroceptive stimuli in this task have remained unclear. The present study used electroencephalography (EEG) to examine the neural responses of healthy participants performing a heartbeat discrimination task. We analyzed differences between EEG responses to tones likely to be perceived as "heartbeat-synchronous" (presented 200 ms after the R-wave) or "heartbeat-asynchronous" (presented at the R-wave, 0 ms delay). Possible associations of these neural differentiations with task performance were also investigated. Compared with responses to heartbeat-asynchronous tones, heartbeat-synchronous tones caused a relative decrease in the early gamma-band EEG response and an increase in the later P2 event-related potential (ERP) amplitude. Condition differences in the EEG/ERP measures were not significantly correlated with the behavioral measures. The mechanisms underlying the observed neural responses and the possibility of electrophysiological measurement of interoceptive sensitivity are discussed from two perspectives: the predictive coding framework and cardiac-phase-dependent baroreceptor function.
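
The core contrast is again an ERP subtraction, this time between tone-locked epochs sorted by their delay from the preceding R-wave. A minimal sketch; the sampling rate, epoch window, P2 latency range, and synthetic data are all assumptions:

```python
# Sketch of the condition contrast: average EEG epochs locked to tone onset
# for tones at R-wave + 200 ms ("synchronous") vs. R-wave + 0 ms
# ("asynchronous"), then take the difference in an assumed P2 window.
import numpy as np

fs = 250
t = np.arange(-0.1, 0.5, 1 / fs)               # epoch relative to tone onset

rng = np.random.default_rng(4)
sync_epochs = rng.normal(0, 1, (80, t.size))   # tones at R-wave + 200 ms
async_epochs = rng.normal(0, 1, (80, t.size))  # tones at R-wave + 0 ms

erp_diff = sync_epochs.mean(axis=0) - async_epochs.mean(axis=0)

p2_window = (t >= 0.15) & (t <= 0.25)          # approximate P2 latency range
print(f"sync - async P2 amplitude: {erp_diff[p2_window].mean():.3f} (a.u.)")
```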

