Pupil Drift Rate Indexes Groove Ratings

2021 ◽  
Author(s):  
Connor Spiech ◽  
George Sioros ◽  
Tor Endestad ◽  
Anne Danielsen ◽  
Bruno Laeng

Groove, understood as a pleasurable compulsion to move to musical rhythms, typically varies along an inverted U-curve with increasing rhythmic complexity (e.g., syncopation, pickups). Predictive coding accounts posit that moderate complexity drives us to move in order to reduce sensory prediction errors and model the temporal structure. While musicologists generally distinguish the effects of pickups (anacruses) from those of syncopations, their difference remains unexplored in groove research. We used pupillometry as an index of noradrenergic arousal while subjects listened to and rated drumbeats varying in rhythmic complexity. We replicated the inverted U-shaped relationship between rhythmic complexity and groove and showed that it is modulated by musical ability, as measured by a psychoacoustic beat perception test. Pupil drift rates suggest that groovier rhythms hold attention longer than rhythms rated less groovy. Moreover, we found complementary effects of syncopations and pickups on groove ratings and pupil size, respectively, revealing a distinct predictive process related to pickups. We suggest that the brain deploys attention to pickups to sharpen subsequent strong beats, augmenting the predictive scaffolding’s focus on the beats that reduce syncopations’ prediction errors. This interpretation accords with groove envisioned as an embodied resolution of precision-weighted prediction error.
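
To make the two measures concrete, here is a minimal Python sketch of how a pupil drift rate and an inverted-U test could be computed. It assumes the drift rate is the slope of a least-squares line fit to a trial's pupil trace and that the inverted U is captured by a quadratic fit of ratings on complexity; the function names, sampling rate, and synthetic data are illustrative, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def pupil_drift_rate(pupil_trace, sample_rate_hz):
    """Slope (a.u./s) of a least-squares line fit to one trial's pupil trace.

    A shallower (less negative) drift back toward baseline is read here as
    the rhythm holding attention for longer.
    """
    t = np.arange(len(pupil_trace)) / sample_rate_hz
    slope, _intercept = np.polyfit(t, pupil_trace, deg=1)
    return slope

def inverted_u_fit(complexity, ratings):
    """Quadratic fit of groove ratings on rhythmic complexity.

    An inverted U predicts a negative coefficient on the squared term.
    """
    quad, lin, const = np.polyfit(complexity, ratings, deg=2)
    return {"quadratic": quad, "linear": lin, "intercept": const}

# Toy demonstration with synthetic data (not the study's data).
trace = 1.0 - 0.05 * np.arange(300) / 10 + rng.normal(0, 0.02, 300)
print(pupil_drift_rate(trace, sample_rate_hz=10))       # ~ -0.05 a.u./s

complexity = rng.uniform(0, 10, 200)
ratings = -0.4 * (complexity - 5) ** 2 + 8 + rng.normal(0, 1, 200)
print(inverted_u_fit(complexity, ratings))              # quadratic term ~ -0.4
```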

2018 ◽  
Author(s):  
Françoise Lecaignard ◽  
Olivier Bertrand ◽  
Anne Caclin ◽  
Jérémie Mattout

Abstract Perceptual processes are shaped by past sensory experiences, raising the question of how information processing in the brain adapts to context. In predictive coding, the tuning of precision weights (PW) applied to sensory prediction errors (PE) enables such adaptation. We test this hypothesis with an original auditory oddball design that recovers the respective neurophysiological encodings of PW and PE. The predictability of sound sequences was manipulated without the participants’ knowledge so as to influence each quantity differentially. Using highly informed EEG-MEG analyses, we employed trial-to-trial learning models together with dynamic causal models (DCM) of deviance responses. The modelling results revealed Bayesian learning within a fronto-temporal network. Essentially, we show an automatic adaptation of learning, and of its underlying connectivity, to the informational content of the context: predictability yielded more informed learning, with larger PW and lower PE, signalled by decreased neuronal self-inhibition and decreased forward connectivity, respectively. These findings strongly support the predictive coding account of perception, with automatic contextual adaptation.
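
As a reading aid, the following sketch shows the standard Gaussian form of a precision-weighted prediction error update, in which the precision weight acts as a learning rate. It is a generic Kalman-style update, not the authors' trial-to-trial or DCM models, and all names and values are illustrative; how the prior and sensory precisions map onto contextual predictability is exactly what the study tests empirically.

```python
def precision_weighted_update(mu, pi_prior, x, pi_sensory):
    """One Gaussian belief update; the precision weight acts as a learning rate.

    mu, pi_prior  : mean and precision of the current belief
    x, pi_sensory : observed sample and the assumed precision of the evidence
    """
    pe = x - mu                                # sensory prediction error (PE)
    pw = pi_sensory / (pi_prior + pi_sensory)  # precision weight (PW) in [0, 1]
    mu_new = mu + pw * pe                      # belief moves by PW * PE
    pi_new = pi_prior + pi_sensory             # posterior precision
    return mu_new, pi_new, pe, pw

# Toy run: as the belief sharpens, the same evidence earns a smaller PW.
mu, pi = 0.0, 1.0
for x in (1.0, 1.1, 0.9, 1.0):
    mu, pi, pe, pw = precision_weighted_update(mu, pi, x, pi_sensory=4.0)
    print(f"mu={mu:.3f}  pe={pe:+.3f}  pw={pw:.3f}")
```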


2020 ◽  
Author(s):  
Dongjae Kim ◽  
Jaeseung Jeong ◽  
Sang Wan Lee

Abstract The goal of learning is to maximize future rewards by minimizing prediction errors. Evidence has shown that the brain achieves this by combining model-based and model-free learning. However, prediction error minimization is challenged by a bias-variance tradeoff, which imposes constraints on each strategy’s performance. We provide new theoretical insight into how this tradeoff can be resolved through the adaptive control of model-based and model-free learning. The theory predicts that baseline correction of the prediction error reduces the lower bound of the bias-variance error by factoring out irreducible noise. Using a Markov decision task with context changes, we found behavioral evidence of such adaptive control. Model-based behavioral analyses show that the prediction error baseline signals context changes, improving adaptability. Critically, the neural results support this view, demonstrating multiplexed representations of the prediction error baseline within the ventrolateral and ventromedial prefrontal cortex, key brain regions known to guide model-based and model-free learning.

One-sentence summary: A theoretical, behavioral, computational, and neural account of how the brain resolves the bias-variance tradeoff during reinforcement learning is described.
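
A hedged sketch of one way to operationalize the idea: a slow-moving baseline of past prediction errors absorbs irreducible noise, and a departure of the current error from that baseline is treated as a context change that transiently raises the learning rate. This is an illustrative reduction, not the paper's model; all parameters and names are assumptions.

```python
import numpy as np

def adaptive_rl_step(value, baseline, reward,
                     alpha_slow=0.05, alpha_fast=0.5,
                     beta=0.02, threshold=0.5):
    """One value update with a prediction-error baseline as a context monitor.

    The baseline slowly tracks the running level of PEs (absorbing
    irreducible noise); a PE far from that baseline is read as a context
    change and triggers fast learning.
    """
    pe = reward - value                       # reward prediction error
    context_change = abs(pe - baseline) > threshold
    alpha = alpha_fast if context_change else alpha_slow
    value += alpha * pe
    baseline += beta * (pe - baseline)        # slow baseline tracking
    return value, baseline, context_change

rng = np.random.default_rng(1)
rewards = np.concatenate([1 + 0.1 * rng.standard_normal(50),   # context A
                          4 + 0.1 * rng.standard_normal(50)])  # context B
value, baseline = 0.0, 0.0
for r in rewards:
    value, baseline, switched = adaptive_rl_step(value, baseline, r)
print(f"final value = {value:.2f}")           # ~4 after adapting to context B
```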


Brain ◽  
2019 ◽  
Vol 142 (3) ◽  
pp. 662-673 ◽  
Author(s):  
Aaron L Wong ◽  
Cherie L Marvel ◽  
Jordan A Taylor ◽  
John W Krakauer

Abstract Systematic perturbations in motor adaptation tasks are primarily countered by learning from sensory-prediction errors, with secondary contributions from other learning processes. Despite the availability of these additional processes, particularly the use of explicit re-aiming to counteract observed target errors, patients with cerebellar degeneration are surprisingly unable to compensate for their sensory-prediction error deficits by spontaneously switching to another learning mechanism. We hypothesized that if the nature of the task was changed, by allowing vision of the hand (which eliminates sensory-prediction errors), patients could be induced to preferentially adopt aiming strategies to solve visuomotor rotations. To test this, we first developed a novel visuomotor rotation paradigm that provides participants with vision of their hand in addition to the cursor, effectively setting the sensory-prediction error signal to zero. We demonstrated in younger healthy control subjects that this promotes a switch to strategic re-aiming based on target errors. We then showed that, with vision of the hand, patients with cerebellar degeneration could also switch to an aiming strategy in response to visuomotor rotations, performing similarly to age-matched participants (older controls). Moreover, patients could retrieve their learned aiming solution after vision of the hand was removed (although they could not improve beyond what they retrieved) and retain it for at least one year. Both patients and older controls, however, exhibited impaired overall adaptation performance compared with younger healthy controls (age 18–33 years), likely due to age-related reductions in spatial and working memory. Patients also failed to generalize, i.e., they were unable to adopt analogous aiming strategies in response to novel rotations. Hence, there appears to be an obligatory dependence on sensory-prediction error-based learning, even when this system is impaired in patients with cerebellar disease. The persistence of sensory-prediction error-based learning effectively suppresses a switch to target error-based learning, which perhaps explains the unexpectedly poor performance of patients with cerebellar degeneration in visuomotor adaptation tasks.
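
The paradigm's logic can be summarized in a two-process toy model: implicit adaptation driven by the sensory-prediction error (SPE), and explicit re-aiming driven by the target error, with vision of the hand modelled as clamping the SPE to zero. This is a schematic reduction under stated assumptions, not the authors' analysis code; parameters and names are illustrative.

```python
def simulate_rotation(rotation=45.0, trials=100, hand_visible=False,
                      a_implicit=0.1, a_explicit=0.3):
    """Two-process sketch of learning a visuomotor rotation.

    An internal forward model keeps an estimate `rot_hat` of the
    perturbation and updates it from the SPE; explicit re-aiming updates
    from the target error. Vision of the hand zeroes the SPE, leaving
    re-aiming as the only route to compensation.
    """
    rot_hat, explicit = 0.0, 0.0
    for _ in range(trials):
        reach = explicit - rot_hat               # aim plus implicit correction
        cursor = reach + rotation                # perturbed feedback
        spe = 0.0 if hand_visible else cursor - (reach + rot_hat)
        rot_hat += a_implicit * spe              # implicit adaptation
        explicit += a_explicit * (0.0 - cursor)  # re-aim toward target at 0
    return rot_hat, explicit

for visible in (False, True):
    rot_hat, explicit = simulate_rotation(hand_visible=visible)
    print(f"hand_visible={visible}: implicit={rot_hat:.1f}, explicit={explicit:.1f}")
```

With the hand hidden the implicit estimate converges on the full rotation and the explicit component fades; with the hand visible the implicit process never engages and the explicit strategy does all the compensating, mirroring the switch the paradigm is designed to induce.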


2013 ◽  
Vol 36 (3) ◽  
pp. 221-221 ◽  
Author(s):  
Lars Muckli ◽  
Lucy S. Petro ◽  
Fraser W. Smith

Abstract Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).


2019 ◽  
Author(s):  
J. Haarsma ◽  
P.C. Fletcher ◽  
J.D. Griffin ◽  
H.J. Taverne ◽  
H. Ziauddeen ◽  
...  

Abstract Recent theories of cortical function construe the brain as performing hierarchical Bayesian inference. According to these theories, the precision of cortical unsigned prediction error (i.e., surprise) signals is thought to play a key role in learning and decision-making, to be controlled by dopamine, and to contribute to the pathogenesis of psychosis. To test these hypotheses, we studied learning with variable outcome precision in healthy individuals after dopaminergic modulation and in patients with early psychosis. Behavioural computational modelling indicated that precision-weighting of unsigned prediction errors benefits learning in health and is impaired in psychosis. fMRI revealed coding of unsigned prediction errors relative to their precision in bilateral superior frontal gyri and dorsal anterior cingulate; this coding was perturbed by dopaminergic modulation, impaired in psychosis, and associated with task performance and schizotypy. We conclude that precision-weighting of cortical prediction error signals is a key mechanism through which dopamine modulates inference and contributes to the pathogenesis of psychosis.
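
To illustrate the behavioural-modelling idea, here is a Pearce-Hall-flavoured sketch in which the unsigned prediction error is weighted by the outcome precision before it drives the learning rate, so surprise counts for less when outcomes are known to be noisy. This is an assumed simplification for illustration, not the authors' computational model; all names and parameters are hypothetical.

```python
import numpy as np

def precision_weighted_learning(outcomes, outcome_sd, kappa=0.6):
    """Value learning in which precision-weighted unsigned PEs set the
    learning rate (a Pearce-Hall-flavoured sketch)."""
    value, associability = 0.0, 0.5
    precision = 1.0 / outcome_sd ** 2          # known outcome precision
    for x in outcomes:
        pe = x - value
        surprise = precision * abs(pe)         # precision-weighted unsigned PE
        # the learning rate (associability) chases the recent surprise level
        associability = (1 - kappa) * associability + kappa * np.tanh(surprise)
        value += associability * pe
    return value

rng = np.random.default_rng(2)
for sd in (0.2, 2.0):                          # precise vs noisy outcome contexts
    outcomes = 1.0 + sd * rng.standard_normal(100)
    print(f"outcome sd={sd}: learned value = "
          f"{precision_weighted_learning(outcomes, sd):.2f}")
```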


2021 ◽  
Author(s):  
Robert Hoskin ◽  
Deborah Talmi

Background: To reduce the computational demands of determining values, the brain is thought to engage in adaptive coding, in which the sensitivity of some neurons to value is modulated by contextual information. There is good behavioural evidence that pain is coded adaptively, but controversy remains regarding the underlying neural mechanism. Additionally, there is evidence that reward prediction errors are coded adaptively, but no parallel evidence regarding pain prediction errors. Methods: We tested the hypothesis that pain prediction errors are coded adaptively by scanning 19 healthy adults with fMRI while they performed a cued pain task. Our analysis followed an axiomatic approach. Results: We found that the left anterior insula was the only region sensitive both to predicted pain magnitudes and to the unexpectedness of pain delivery, but not to the magnitude of delivered pain. Conclusions: This pattern suggests that the left anterior insula is part of a neural mechanism that codes pain prediction errors adaptively.
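
The axiomatic approach can be caricatured as a profile check over regression effects: a region coding pain prediction errors should track both the prediction and the surprise of the outcome, and under adaptive coding its sensitivity to the absolute delivered magnitude can drop out. The toy classifier below is only a schematic of that logic, with hypothetical beta values and a hypothetical significance rule.

```python
def axiomatic_profile(beta_predicted, beta_unexpectedness, beta_delivered,
                      significant):
    """Classify a region's response profile in the axiomatic spirit.

    `significant` is a hypothetical rule mapping a regression beta to a
    True/False call. A prediction-error coder should track both the
    prediction and the surprise of the outcome; under adaptive coding,
    sensitivity to the absolute delivered magnitude can vanish.
    """
    pred = significant(beta_predicted)
    surp = significant(beta_unexpectedness)
    deliv = significant(beta_delivered)
    if pred and surp and not deliv:
        return "adaptive prediction-error profile"
    if pred and surp and deliv:
        return "absolute (non-adaptive) prediction-error profile"
    return "does not satisfy the prediction-error axioms"

# Hypothetical betas in the pattern reported for the left anterior insula.
print(axiomatic_profile(0.8, 0.6, 0.05, lambda b: abs(b) > 0.3))
```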


2019 ◽  
Author(s):  
Cooper A. Smout ◽  
Matthew F. Tang ◽  
Marta I. Garrido ◽  
Jason B. Mattingley

Abstract The human brain is thought to optimise the encoding of incoming sensory information through two principal mechanisms: prediction uses stored information to guide the interpretation of forthcoming sensory events, and attention prioritises those events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction error units. Since prediction errors code for the difference between predictions and sensory signals, this model suggests that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or ‘representation’) neurons, and would therefore suggest that attention and prediction synergistically modulate selectivity for feature information in the brain. Here we applied multivariate forward encoding techniques to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task, to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain. These findings advance our understanding of how attention and prediction modulate information processing in the brain, and support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.
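
For readers unfamiliar with multivariate forward (inverted) encoding, the sketch below shows the generic two-step recipe: express each stimulus as activity over hypothetical orientation channels, estimate sensor weights on training trials, then invert the weights to reconstruct channel responses on held-out trials. The basis choice, channel count, and synthetic data are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np

def basis(orientations_deg, n_channels=6):
    """Half-wave-rectified, raised cosine basis over 0-180 deg orientation."""
    centers = np.arange(n_channels) * 180 / n_channels
    rad = np.deg2rad(2 * (np.asarray(orientations_deg)[:, None] - centers))
    return np.maximum(np.cos(rad), 0) ** (n_channels - 1)

def forward_encoding(train_eeg, train_ori, test_eeg):
    """Fit sensor weights for each channel on training trials, then invert
    them to reconstruct channel responses on held-out trials.

    train_eeg: trials x sensors. Returns test trials x channels.
    """
    C_train = basis(train_ori)                               # trials x channels
    W, *_ = np.linalg.lstsq(C_train, train_eeg, rcond=None)  # channels x sensors
    C_test, *_ = np.linalg.lstsq(W.T, test_eeg.T, rcond=None)
    return C_test.T

# Toy demo: recover channel tuning from synthetic "EEG".
rng = np.random.default_rng(3)
ori = rng.uniform(0, 180, 120)
W_true = rng.standard_normal((6, 32))                        # 6 channels, 32 sensors
eeg = basis(ori) @ W_true + 0.1 * rng.standard_normal((120, 32))
recon = forward_encoding(eeg[:100], ori[:100], eeg[100:])
print(recon.shape)                                           # (20, 6)
```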


2018 ◽  
Author(s):  
Anna C Sales ◽  
Karl J. Friston ◽  
Matthew W. Jones ◽  
Anthony E. Pickering ◽  
Rosalyn J. Moran

Abstract The locus coeruleus (LC) in the pons is the major source of noradrenaline (NA) in the brain. Two modes of LC firing have been associated with distinct cognitive states: changes in tonic rates of firing are correlated with global levels of arousal and behavioural flexibility, whilst phasic LC responses are evoked by salient stimuli. Here, we unify these two modes of firing by modelling the response of the LC as a correlate of a prediction error when inferring states for action planning under Active Inference (AI).

We simulate a classic Go/No-go reward learning task and a three-arm foraging task and show that, if LC activity is considered to reflect the magnitude of high-level ‘state-action’ prediction errors, then both tonic and phasic modes of firing are emergent features of belief updating. We also demonstrate that when contingencies change, AI agents can update their internal models more quickly by feeding back this state-action prediction error – reflected in LC firing and noradrenaline release – to optimise the learning rate, enabling large adjustments over short timescales. We propose that such prediction errors are mediated by cortico-LC connections, whilst ascending input from the LC to cortex modulates belief updating in the anterior cingulate cortex (ACC).

In short, we characterise the LC/NA system within a general theory of brain function. In doing so, we show that contrasting, behaviour-dependent firing patterns are an emergent property of the LC’s crucial role in translating prediction errors into an optimal mediation between plasticity and stability.

Author summary: The brain uses sensory information to build internal models and make predictions about the world. When errors of prediction occur, models must be updated to ensure desired outcomes are still achieved. Neuromodulator chemicals provide a possible pathway for triggering such changes in brain state. One such neuromodulator, noradrenaline, originates predominantly from a cluster of neurons in the brainstem – the locus coeruleus (LC) – and plays a key role in behaviour, for instance in determining the balance between exploiting and exploring the environment.

Here we use Active Inference (AI), a mathematical model of perception and action, to formally describe LC function. We propose that LC activity is triggered by errors in prediction and that the subsequent release of noradrenaline alters the rate of learning about the environment. Biologically, this describes an LC-cortex feedback loop promoting behavioural flexibility in times of uncertainty. We model LC output as a simulated animal performs two tasks known to elicit archetypal responses. We find that experimentally observed ‘phasic’ and ‘tonic’ patterns of LC activity emerge naturally, and that modulation of learning rates improves task performance. This provides a simple, unified computational account of noradrenergic function within a general model of behaviour.
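
One way to see how phasic and tonic modes can both fall out of a single mechanism is the toy model below: a stand-in "LC" signal combines the current surprise (phasic) with a slowly decaying trace of recent surprise (tonic), and the effective learning rate grows with that signal. This is a deliberately minimal caricature of the Active Inference account, not the authors' simulations; every name and parameter is an assumption.

```python
import numpy as np

def lc_modulated_learning(outcomes, base_alpha=0.1, gain=0.8):
    """Belief updating with an LC-like signal setting the learning rate.

    `lc` is a stand-in for noradrenergic output: the current surprise
    (phasic) plus a slowly decaying trace of recent surprise (tonic).
    A larger lc yields faster belief updating, so contingency changes
    are absorbed quickly while stable periods stay stable.
    """
    p_hat, tonic = 0.5, 0.0
    lc_trace = []
    for o in outcomes:
        pe = o - p_hat                        # state prediction error
        phasic = abs(pe)
        tonic = 0.95 * tonic + 0.05 * phasic  # slow integration of surprise
        lc = phasic + tonic
        alpha = base_alpha + gain * min(lc, 1.0) * (1 - base_alpha)
        p_hat += alpha * pe
        lc_trace.append(lc)
    return p_hat, np.array(lc_trace)

rng = np.random.default_rng(4)
outcomes = np.concatenate([(rng.random(100) < 0.8),   # arm pays off 80% ...
                           (rng.random(100) < 0.2)])  # ... then reverses to 20%
p_hat, lc = lc_modulated_learning(outcomes.astype(float))
print(f"final p_hat = {p_hat:.2f}; LC just after reversal = {lc[100:105].mean():.2f}")
```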


2020 ◽  
Vol 30 (10) ◽  
pp. 5204-5217
Author(s):  
Adrien Witon ◽  
Amirali Shirazibehehsti ◽  
Jennifer Cooke ◽  
Alberto Aviles ◽  
Ram Adapa ◽  
...  

Abstract Two important theories in cognitive neuroscience are predictive coding (PC) and the global workspace (GW) theory. A key research task is to understand how these two theories relate to one another and, in particular, how the brain transitions from a predictive early state to the eventual engagement of a brain-scale state (the GW). To address this question, we present a source localization of EEG responses evoked by the local-global task, an experimental paradigm that engages a predictive hierarchy encompassing the GW. The results of our source reconstruction suggest three phases of processing. The first phase involves the sensory (here auditory) regions of the superior temporal lobe and predicts sensory regularities over a short timeframe (as per the local effect). The third phase is brain-scale, involving inferior frontal as well as inferior and superior parietal regions, consistent with a global neuronal workspace (GNW; as per the global effect). Crucially, our analysis suggests an intermediate (second) phase, involving modulatory interactions between inferior frontal and superior temporal regions. Furthermore, sedation with propofol reduces the modulatory interactions of this second phase. This selective effect is consistent with a PC explanation of sedation, with propofol acting on descending predictions of the precision of prediction errors and thereby constraining access to the GNW.
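
For orientation, the local-global paradigm itself is easy to state in code: five-tone trials in which the local effect is a final tone that differs from the preceding four, and the global effect is a sequence that is rare within the block. The generator below is a schematic of the design (trial counts and probabilities are illustrative), not the stimulus code used in the study.

```python
import numpy as np

def make_local_global_block(n_trials=100, p_rare=0.2, habitual="xxxxY", rng=None):
    """Generate one block of the local-global paradigm (schematic).

    Each trial is five tones ('x'/'Y'). A *local* deviant is a 5th tone
    that differs from the first four; a *global* deviant is a sequence
    that is rare within the block. In an 'xxxxY' block the frequent
    sequence itself ends with a local deviant, so local and global
    surprise dissociate.
    """
    if rng is None:
        rng = np.random.default_rng()
    rare = "xxxxx" if habitual == "xxxxY" else "xxxxY"
    trials = [rare if rng.random() < p_rare else habitual for _ in range(n_trials)]
    labels = [("local-deviant" if t[4] != t[0] else "local-standard",
               "global-deviant" if t == rare else "global-standard")
              for t in trials]
    return trials, labels

trials, labels = make_local_global_block(rng=np.random.default_rng(5))
print(trials[0], labels[0])
```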


2020 ◽  
Vol 32 (1) ◽  
pp. 124-140 ◽  
Author(s):  
Hyojeong Kim ◽  
Margaret L. Schlichting ◽  
Alison R. Preston ◽  
Jarrod A. Lewis-Peacock

The human brain constantly anticipates the future based on memories of the past. Encountering a familiar situation reactivates memory of previous encounters, which can trigger a prediction of what comes next to facilitate responsiveness. However, a prediction error can lead to pruning of the offending memory, a process that weakens its representation in the brain and leads to forgetting. Our goal in this study was to evaluate whether memories are spared from such pruning in situations that allow for accurate predictions at the categorical level, despite prediction errors at the item level. Participants viewed a sequence of objects, some of which reappeared multiple times (“cues”) and were always followed by novel items. Half of the cues were followed by new items from different (unpredictable) categories, while the other half were followed by new items from a single (predictable) category. Pattern classification of fMRI data was used to identify category-specific predictions after each cue. Pruning was observed only in unpredictable contexts, while the encoding of new items was less robust in predictable contexts. These findings demonstrate that the updating of associative memories is influenced by the reliability of abstract-level predictions in familiar contexts.
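
The pattern-classification step can be sketched generically: train a category decoder on independent patterns, then read out category evidence from the activity that follows each cue, so a cue's prediction can be right at the category level while the specific item is still novel. The code below is a toy version with synthetic data standing in for voxel patterns; it is not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def category_evidence_after_cue(train_patterns, train_labels, cue_patterns):
    """Train a category decoder, then read out category evidence from the
    activity following each cue (returns trials x categories)."""
    clf = LogisticRegression(max_iter=1000).fit(train_patterns, train_labels)
    return clf.predict_proba(cue_patterns)

# Toy data standing in for voxel patterns (not the study's data).
rng = np.random.default_rng(6)
centers = rng.standard_normal((3, 50))                  # 3 categories, 50 "voxels"
X = np.vstack([c + 0.5 * rng.standard_normal((40, 50)) for c in centers])
y = np.repeat([0, 1, 2], 40)
cues = centers[1] + 0.5 * rng.standard_normal((5, 50))  # cues predicting category 1
print(category_evidence_after_cue(X, y, cues).round(2)) # high column-1 evidence
```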

