Prefrontal cortex represents heuristics that shape choice bias and its integration into future behavior

2020 ◽  
Author(s):  
Gabriela Mochol ◽  
Roozbeh Kiani ◽  
Rubén Moreno-Bote

Summary: Goal-directed behavior requires integrating sensory information with prior knowledge about the environment. Behavioral biases that arise from these priors could increase positive outcomes when the priors match the true structure of the environment, but mismatches also happen frequently and could cause unfavorable outcomes. Biases that reduce gains and fail to vanish with training indicate fundamental suboptimalities arising from ingrained heuristics of the brain. Here, we report systematic, gain-reducing choice biases in highly trained monkeys performing a motion-direction discrimination task in which only the current stimulus is behaviorally relevant. The monkeys' bias fluctuated at two distinct time scales: slow, spanning tens to hundreds of trials, and fast, arising from the choices and outcomes of the most recent trials. This finding enabled single-trial prediction of biases, which influenced choice especially on trials with weak stimuli. The pre-stimulus activity of neuronal ensembles in the monkey prearcuate gyrus represented these biases as an offset along the decision axis in state space. This offset persisted throughout the stimulus viewing period, when sensory information was integrated, leading to a biased choice. The pre-stimulus representation of history-dependent bias was functionally indistinguishable from the neural representation of the upcoming choice before stimulus onset, validating our model of single-trial biases and suggesting that the pre-stimulus representation of choice could be fully defined by biases inferred from behavioral history. Our results indicate that the prearcuate gyrus reflects intrinsic heuristics that compute bias signals, as well as the mechanisms that integrate them into the oculomotor decision-making process.
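
The summary describes choice biases that drift slowly over tens to hundreds of trials while also depending on the most recent choices and outcomes. Below is a rough sketch of how such a single-trial bias could be predicted from behavioral history; the two-component form, the time constant, and the lag weights are illustrative assumptions, not the authors' fitted model.

```python
# A rough sketch (not the authors' fitted model): predict a single-trial choice
# bias from behavioral history using a slow, exponentially weighted component
# (tens to hundreds of trials) plus a fast component driven by the last few
# choices and outcomes. Time constant and lag weights are illustrative.
import numpy as np

def predict_bias(choices, outcomes, tau_slow=100.0, fast_weights=(0.4, 0.2, 0.1)):
    """choices: +1/-1 per past trial; outcomes: 1 (rewarded) or 0 per past trial."""
    choices = np.asarray(choices, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    n = len(choices)
    # Slow component: exponentially weighted average of past choices.
    lags = np.arange(n)[::-1]                      # 0 = most recent trial
    slow_kernel = np.exp(-lags / tau_slow)
    slow = np.dot(slow_kernel, choices) / slow_kernel.sum()
    # Fast component: recent choices, signed by outcome (win-stay / lose-switch flavor).
    fast = 0.0
    for lag, w in enumerate(fast_weights, start=1):
        if lag <= n:
            fast += w * choices[-lag] * (1.0 if outcomes[-lag] else -1.0)
    return slow + fast                             # positive values favor the +1 choice

# Example with a short synthetic history
rng = np.random.default_rng(0)
print(predict_bias(rng.choice([-1, 1], 200), rng.integers(0, 2, 200)))
```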

2019 ◽  
Author(s):  
Malte Wöstmann ◽  
Mohsen Alavash ◽  
Jonas Obleser

Abstract: In principle, selective attention is the net result of target selection and distractor suppression. The way in which both mechanisms are implemented neurally has remained contested. Neural oscillatory power in the alpha frequency band (~10 Hz) has been implicated in the selection of to-be-attended targets, but there is a lack of empirical evidence for its involvement in the suppression of to-be-ignored distractors. Here, we use electroencephalography (EEG) recordings of N = 33 human participants (males and females) to test the pre-registered hypothesis that alpha power directly relates to distractor suppression and thus operates independently of target selection. In an auditory spatial pitch discrimination task, we modulated the location (left vs. right) of either a target or a distractor tone sequence, while fixing the other in the front. When the distractor was fixed in the front, alpha power relatively decreased contralaterally to the target and increased ipsilaterally. Most importantly, when the target was fixed in the front, alpha lateralization reversed in direction for the suppression of distractors on the left versus the right. These data show that target-selection-independent alpha power modulation is involved in distractor suppression. While both lateralized alpha responses, for selection and for suppression, proved reliable, they were uncorrelated, and distractor-related alpha power emerged from more anterior, frontal cortical regions. Lending functional significance to suppression-related alpha oscillations, alpha lateralization at the individual, single-trial level was predictive of behavioral accuracy. These results fuel a renewed look at neurobiological accounts of selection-independent suppressive filtering in attention. Significance statement: Although well-established models of attention rest on the assumption that irrelevant sensory information is filtered out, the neural implementation of such a filter mechanism is unclear. Using an auditory attention task that decouples target selection from distractor suppression, we demonstrate that two sign-reversed lateralized alpha responses reflect target selection versus distractor suppression. Critically, these alpha responses are reliable, independent of each other, and generated in more anterior, frontal regions for suppression than for selection. Prediction of single-trial task performance from alpha modulation after stimulus onset agrees with the view that alpha modulation bears direct functional relevance as a neural implementation of attention. These results demonstrate that the neurobiological foundation of attention implies a selection-independent alpha oscillatory mechanism for suppressing distraction.
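
The abstract centers on lateralized alpha power, often summarized as the relative difference between ipsilateral and contralateral alpha power. A minimal sketch of such an alpha lateralization index follows; the Welch-based power estimate, band limits, and channel/side labels are assumptions for illustration, not the authors' pipeline.

```python
# A minimal sketch of an alpha lateralization index (ALI), assuming Welch-based
# band power and made-up channel/side labels; this is not the authors' pipeline.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band for one channel."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))
    return pxx[(f >= band[0]) & (f <= band[1])].mean()

def alpha_lateralization(left_chan, right_chan, fs, stim_side):
    """ALI = (ipsi - contra) / (ipsi + contra) relative to the lateralized stimulus."""
    p_left, p_right = alpha_power(left_chan, fs), alpha_power(right_chan, fs)
    ipsi, contra = (p_left, p_right) if stim_side == "left" else (p_right, p_left)
    return (ipsi - contra) / (ipsi + contra)

# Example with 1 s of synthetic data at 250 Hz
fs, rng = 250, np.random.default_rng(1)
print(alpha_lateralization(rng.standard_normal(fs), rng.standard_normal(fs), fs, "left"))
```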


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2461
Author(s):  
Alexander Kuc ◽  
Vadim V. Grubov ◽  
Vladimir A. Maksimenko ◽  
Natalia Shusharina ◽  
Alexander N. Pisarchik ◽  
...  

Perceptual decision-making requires transforming sensory information into decisions. Ambiguity in the sensory input affects perceptual decisions, inducing specific time-frequency patterns in EEG (electroencephalogram) signals. This paper uses a wavelet-based method to analyze how ambiguity affects EEG features during a perceptual decision-making task. We observe that parietal and temporal beta-band wavelet power monotonically increases throughout the perceptual process. Ambiguity induces high frontal beta-band power 0.3–0.6 s after stimulus onset, which may reflect increasing reliance on top-down mechanisms that facilitate the accumulation of decision-relevant sensory features. Finally, this study analyzes the perceptual process using a mixed within-trial and within-subject design: we first identify significant percept-related changes in each subject and then test their significance at the group level. Thus, the observed beta-band biomarkers are pronounced in single EEG trials and may serve as control commands for brain-computer interfaces (BCIs).
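
For readers unfamiliar with the wavelet analysis mentioned above, the sketch below shows one common way to obtain a beta-band power time course with a complex Morlet wavelet; the wavelet parameters and band limits are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of beta-band wavelet power using a complex Morlet wavelet;
# wavelet parameters and band limits are illustrative, not taken from the paper.
import numpy as np

def morlet_power(signal, fs, freq, n_cycles=7):
    """Single-frequency power time course via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)            # temporal width of the wavelet
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)   # wavelet support (+/- 3 SD)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalization
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

def beta_band_power(signal, fs, freqs=range(15, 31)):
    """Wavelet power averaged over the beta band (here 15-30 Hz)."""
    return np.mean([morlet_power(signal, fs, f) for f in freqs], axis=0)

# Example: beta power of a noisy 20 Hz oscillation, 2 s at 250 Hz
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.default_rng(2).standard_normal(t.size)
print(beta_band_power(x, fs).mean())
```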


2011 ◽  
Vol 23 (12) ◽  
pp. 3972-3982 ◽  
Author(s):  
Mathias Scharinger ◽  
William J. Idsardi ◽  
Samantha Poe

Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information, including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral–medial, anterior–posterior, and inferior–superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom–up information but crucially involves featural–phonetic top–down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher-dimensional representations in auditory cortex.
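
The statistical model comparison mentioned above can be illustrated with a toy sketch: compare a linear model of a dipole coordinate that uses only acoustic predictors against one that adds a categorical phonetic feature, e.g., via AIC. All predictor names and data below are synthetic assumptions, not the study's actual models.

```python
# A toy illustration of such a model comparison: does adding a categorical
# phonetic feature (front vs. back) to acoustic predictors (here, hypothetical
# F1/F2 values) improve a linear model of a dipole coordinate? Compared via AIC;
# all predictors and data below are synthetic assumptions.
import numpy as np

def aic_linear(X, y):
    """AIC of an ordinary least-squares fit with Gaussian errors."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1] + 1                    # +1 for the noise variance
    sigma2 = np.sum(resid ** 2) / n
    return 2 * k + n * (np.log(2 * np.pi * sigma2) + 1)

rng = np.random.default_rng(4)
f1, f2 = rng.normal(500, 100, 64), rng.normal(1500, 300, 64)
front = rng.integers(0, 2, 64).astype(float)          # 1 = front vowel, 0 = back vowel
coord = 0.002 * f2 + 3.0 * front + rng.normal(0, 1, 64)    # synthetic dipole coordinate
print(aic_linear(np.column_stack([f1, f2]), coord))         # acoustic-only model
print(aic_linear(np.column_stack([f1, f2, front]), coord))  # acoustic + featural (lower AIC here)
```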


2020 ◽  
Vol 7 (8) ◽  
pp. 190228 ◽  
Author(s):  
Quan Wan ◽  
Ying Cai ◽  
Jason Samaha ◽  
Bradley R. Postle

How does the neural representation of visual working memory content vary with behavioural priority? To address this, we recorded electroencephalography (EEG) while subjects performed a continuous-performance 2-back working memory task with oriented-grating stimuli. We tracked the transition of the neural representation of an item (n) from its initial encoding, to the status of 'unprioritized memory item' (UMI), and back to 'prioritized memory item', with multivariate inverted encoding modelling. Results showed that the representational format was remapped from its initially encoded format into a distinctive 'opposite' representational format when it became a UMI, and then mapped back into its initial format when subsequently prioritized in anticipation of its comparison with item n + 2. Thus, contrary to the default assumption that the activity representing an item in working memory might simply get weaker when it is deprioritized, it may be that a process of priority-based remapping helps to protect remembered information when it is not in the focus of attention.
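
A minimal sketch of the multivariate inverted encoding modelling referred to above is given below: tuned basis channels are used to estimate channel-to-sensor weights on training trials, and the weights are then inverted to reconstruct channel-response profiles on held-out trials. The basis shape, channel count, and synthetic data are assumptions, not the authors' exact analysis.

```python
# A minimal sketch of an inverted encoding model (IEM) for orientation, assuming
# idealized tuned basis channels and synthetic sensor data; not the authors'
# exact analysis pipeline.
import numpy as np

def make_basis(orientations_deg, n_channels=8):
    """Predicted channel responses: rectified, raised cosines tiling 0-180 deg."""
    centers = np.arange(0, 180, 180 / n_channels)
    delta = np.deg2rad(np.asarray(orientations_deg)[:, None] - centers[None, :])
    return np.maximum(np.cos(2 * delta), 0) ** 7       # (n_trials, n_channels)

def iem_reconstruct(train_data, train_ori, test_data):
    """Fit channel-to-sensor weights on training trials, then invert them to
    recover channel-response profiles on test trials. Data: (n_trials, n_sensors)."""
    C_train = make_basis(train_ori)
    W, *_ = np.linalg.lstsq(C_train, train_data, rcond=None)      # (n_channels, n_sensors)
    C_test, *_ = np.linalg.lstsq(W.T, np.asarray(test_data).T, rcond=None)
    return C_test.T                                                # (n_trials, n_channels)

# Example with synthetic sensor data generated from the same basis
rng = np.random.default_rng(3)
ori = rng.uniform(0, 180, 100)
data = make_basis(ori) @ rng.standard_normal((8, 32)) + 0.1 * rng.standard_normal((100, 32))
recon = iem_reconstruct(data[:80], ori[:80], data[80:])
print(recon.shape)    # (20, 8): one channel-response profile per held-out trial
```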


2013 ◽  
Vol 26 (5) ◽  
pp. 483-502 ◽  
Author(s):  
Antonia Thelen ◽  
Micah M. Murray

This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity elicited by the unisensory components of these events presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent, and it can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (∼100 ms post-stimulus onset) according to whether or not they had been initially paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Leila Drissi-Daoudi ◽  
Adrien Doerig ◽  
Michael H. Herzog

Abstract: Sensory information must be integrated over time to perceive, for example, motion and melodies. Here, to study temporal integration, we used the sequential metacontrast paradigm, in which two expanding streams of lines are presented. When a line in one stream is offset, observers perceive all other lines as offset too, even though they are straight. When more lines are offset, the offsets integrate mandatorily; i.e., observers cannot report the individual offsets. We show that mandatory integration lasts for up to 450 ms, depending on the observer. Importantly, integration occurs only when offsets are presented within a discrete window of time. Even stimuli in close spatio-temporal proximity do not integrate if they fall in different windows. A window of integration starts with stimulus onset, and integration in the next window has similar characteristics. We present a two-stage computational model based on discrete time windows that captures these effects.
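
The discrete-window account can be made concrete with a small sketch: offsets are assigned to windows anchored at stimulus onset, summed mandatorily within a window, and read out separately across windows. The window length and event values below are illustrative, not the paper's two-stage model.

```python
# A small sketch of the discrete-window idea: offsets are assigned to windows
# anchored at stimulus onset, summed mandatorily within a window, and read out
# separately across windows. The 450 ms window length and offsets are illustrative.
def integrate_offsets(offset_events, window_ms=450):
    """offset_events: (time_ms, offset) pairs, times relative to stimulus onset.
    Returns one integrated offset per discrete window, in temporal order."""
    windows = {}
    for t, offset in offset_events:
        idx = int(t // window_ms)                        # which discrete window the offset falls in
        windows[idx] = windows.get(idx, 0.0) + offset    # mandatory integration within the window
    return [windows[k] for k in sorted(windows)]

# Two opposite offsets in the same window cancel; split across windows, both are perceived.
print(integrate_offsets([(100, +1.0), (300, -1.0)]))   # [0.0]
print(integrate_offsets([(100, +1.0), (500, -1.0)]))   # [1.0, -1.0]
```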


2010 ◽  
Vol 104 (5) ◽  
pp. 2573-2585 ◽  
Author(s):  
Dirk Kerzel ◽  
Sabine Born ◽  
David Souto

It is known that visual transients prolong saccadic latency and reduce saccadic frequency. The latter effect was attributed to subcortical structures because it occurred only 60–70 ms after stimulus onset. We examined the effects of large task-irrelevant transients on steady-state pursuit and the generation of catch-up saccades. Two screen-wide stripes of equal contrast (4, 20, or 100%) were briefly flashed at equal eccentricities (3, 6, or 12°) from the pursuit target. About 100 ms after flash onset, we observed that pursuit gain dropped by 6–12% and catch-up saccades were entirely suppressed. The relatively long latency of the inhibition suggests that it results from cortical mechanisms that may act by promoting fixation or the deployment of attention over the visual field. In addition, we show that a loud irrelevant sound generates the same inhibition of saccades as visual transients while inducing only a weak modulation of pursuit gain, indicating privileged access of acoustic information to the saccadic system. Finally, irrelevant changes in motion direction orthogonal to pursuit had a smaller and later inhibitory effect.
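
As a rough illustration of how a pursuit-gain drop of the reported size could be quantified, the sketch below computes gain (eye velocity divided by target velocity) in windows before and after the flash; the sampling rate, window lengths, and synthetic traces are assumptions.

```python
# A rough sketch of quantifying a pursuit-gain drop: gain = eye velocity /
# target velocity, averaged in windows before and after flash onset.
# Sampling rate, window lengths, and traces are assumptions.
import numpy as np

def pursuit_gain_drop(eye_vel, target_vel, flash_idx, fs=1000, win_ms=100):
    """eye_vel, target_vel: velocity traces (deg/s); flash_idx: sample of flash onset."""
    w = int(win_ms * fs / 1000)
    pre = slice(flash_idx - w, flash_idx)
    post = slice(flash_idx + w, flash_idx + 2 * w)       # ~100 ms latency before the drop
    gain_pre = np.mean(eye_vel[pre]) / np.mean(target_vel[pre])
    gain_post = np.mean(eye_vel[post]) / np.mean(target_vel[post])
    return 100 * (gain_pre - gain_post) / gain_pre       # percent drop in pursuit gain

# Synthetic example: target at 10 deg/s, eye velocity dips ~100 ms after the flash
target = np.full(2000, 10.0)
eye = np.full(2000, 9.5)
eye[1100:1400] = 8.8
print(pursuit_gain_drop(eye, target, flash_idx=1000))    # roughly a 7% drop
```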


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Guilhem Ibos ◽  
David J Freedman

Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at with internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) with top-down cognitive encoding inputs (what the monkeys were looking for).


2012 ◽  
Vol 107 (3) ◽  
pp. 890-901 ◽  
Author(s):  
Michael Dimitriou ◽  
David W. Franklin ◽  
Daniel M. Wolpert

Optimal feedback control postulates that feedback responses depend on the task relevance of any perturbations. We tested this prediction in a bimanual task, conceptually similar to balancing a laden tray, in which each hand could be perturbed up or down. Single-limb mechanical perturbations produced long-latency reflex responses ("rapid motor responses") in the contralateral limb of the appropriate direction and magnitude to maintain the tray horizontal. During bimanual perturbations, rapid motor responses were modulated appropriately depending on the extent to which the perturbations affected tray orientation. Specifically, despite receiving the same mechanical perturbation causing muscle stretch, the strongest responses were produced when the contralateral arm was perturbed in the opposite direction (large tray tilt) rather than in the same direction or not perturbed at all. Rapid responses from shortening extensors depended on a nonlinear summation of the sensory information from the two arms, with the response to a bimanual same-direction perturbation (orientation maintained) being less than the sum of the responses to the component unimanual perturbations (task relevant). We conclude that task-dependent tuning of reflexes can be modulated online, within a single trial, based on a complex interaction across the arms.


2016 ◽  
Author(s):  
Long Luu ◽  
Alan A Stocker

Abstract: Illusions provide a great opportunity to study how perception is affected both by the observer's expectations and by the way sensory information is represented [1–6]. Recently, Jazayeri and Movshon [7] reported a new and interesting perceptual illusion, demonstrating that the perceived motion direction of a dynamic random-dot stimulus is systematically biased when preceded by a motion discrimination judgment. The authors hypothesized that these biases emerge because the brain predominantly relies on those neurons that are most informative for solving the discrimination task [8] but then uses the same neural weighting profile for generating the percept. In other words, they argue that these biases are "mistakes" of the brain, resulting from the use of inappropriate neural read-out weights. While we were able to replicate the illusion for a different visual stimulus (orientation), our new psychophysical data suggest that this interpretation is likely incorrect: biases are caused not by a read-out profile optimized for solving the discrimination task but rather by the specific choices subjects make in the discrimination task on any given trial. We formulate this idea as a conditioned Bayesian observer model and show that it can explain both the new and the original psychophysical data. In this framework, the biases are not mistakes but rather reflect the brain's attempt to remain 'self-consistent' in its inference process. Our model establishes a direct connection between the current perceptual illusion and the well-known phenomena of cognitive consistency and dissonance [9,10].
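
The conditioned ('self-consistent') Bayesian observer described above can be sketched as follows: the observer forms a posterior over the stimulus, makes the discrimination choice, conditions the posterior on that choice, and reads out the estimate from the conditioned posterior, which repels estimates away from the decision boundary. The noise level, grid, and flat prior below are illustrative assumptions, not the fitted model.

```python
# A minimal sketch of a self-consistent ("conditioned") Bayesian observer for
# orientation: form a posterior, make the discrimination choice, condition the
# posterior on that choice, and estimate from the conditioned posterior.
# Noise level, grid, and flat prior are illustrative, not fitted parameters.
import numpy as np

def conditioned_estimate(measurement, sigma=5.0, boundary=0.0):
    """measurement: noisy internal sample of the true orientation (deg)."""
    grid = np.linspace(-45, 45, 901)
    # Posterior over the stimulus under Gaussian sensory noise and a flat prior.
    posterior = np.exp(-(grid - measurement) ** 2 / (2 * sigma ** 2))
    posterior /= posterior.sum()
    # Discrimination choice: is clockwise (grid > boundary) more probable?
    choice_cw = posterior[grid > boundary].sum() > 0.5
    # Self-consistency: condition the posterior on the choice before estimating.
    mask = (grid > boundary) if choice_cw else (grid <= boundary)
    conditioned = posterior * mask
    conditioned /= conditioned.sum()
    return float(np.sum(grid * conditioned))             # mean of the conditioned posterior

# Estimates near the boundary are repelled away from it; far from it, they are unbiased.
print(conditioned_estimate(1.0))    # noticeably larger than 1.0
print(conditioned_estimate(20.0))   # approximately 20.0
```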

