Perceptual decisions are biased toward relevant prior choices

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Helen Feigin ◽  
Shira Baror ◽  
Moshe Bar ◽  
Adam Zaidel

Abstract Perceptual decisions are biased by recent perceptual history, a phenomenon termed 'serial dependence.' Here, we investigated which aspects of perceptual decisions lead to serial dependence, and disambiguated the influences of low-level sensory information, prior choices and motor actions. Participants discriminated whether a brief visual stimulus lay to the left or right of the screen center. Following a series of biased 'prior' location discriminations, subsequent 'test' location discriminations were biased toward the prior choices, even when these were reported via different motor actions (using different keys), and when the prior and test stimuli differed in color. By contrast, prior discriminations about an irrelevant stimulus feature (color) did not substantially influence subsequent location discriminations, even though these were reported via the same motor actions. Additionally, when color (not location) was discriminated, a bias in prior stimulus locations no longer influenced subsequent location discriminations. Although low-level stimuli and motor actions did not trigger serial dependence on their own, similarity of these features across discriminations boosted the effect. These findings suggest that relevance across perceptual decisions is a key factor for serial dependence. Accordingly, serial dependence likely reflects a high-level mechanism by which the brain predicts and interprets new incoming sensory information in accordance with relevant prior choices.
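The abstract does not specify the analysis, but choice-conditioned biases of this kind are commonly quantified by fitting psychometric functions to test trials split by the preceding choice and comparing the points of subjective equality (PSE). A minimal sketch with simulated data (all names and values are illustrative, not the authors' code):

```python
# Sketch: quantify serial dependence as a PSE shift between psychometric
# curves fitted separately to trials following 'left' vs 'right' choices.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)

def psychometric(x, pse, sd):
    """Cumulative-Gaussian probability of a 'right' response."""
    return norm.cdf(x, loc=pse, scale=sd)

# Simulated test trials: stimulus offset from screen center (deg), with
# responses attracted toward the preceding 'prior' choice.
offsets = rng.uniform(-2.0, 2.0, size=2000)
prior_choice = rng.choice([-1, 1], size=2000)        # -1 = left, +1 = right
attraction = 0.3                                     # bias toward prior choice
p_right = norm.cdf(offsets, loc=-attraction * prior_choice, scale=1.0)
responses = (rng.random(2000) < p_right).astype(float)

pse = {}
for c, label in [(-1, "after left"), (1, "after right")]:
    keep = prior_choice == c
    popt, _ = curve_fit(psychometric, offsets[keep], responses[keep],
                        p0=[0.0, 1.0])
    pse[label] = popt[0]

# Attraction toward the prior choice shifts the two curves in opposite
# directions, so PSE(after right) < PSE(after left).
print(pse, "PSE shift:", pse["after left"] - pse["after right"])
```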

2019 ◽  
Author(s):  
Helen Feigin ◽  
Shira Baror ◽  
Moshe Bar ◽  
Adam Zaidel

Perceptual decisions are biased by recent perceptual history, a phenomenon termed 'serial dependence.' Using a visual location discrimination task, we investigated which aspects of perceptual decisions lead to serial dependence, and disambiguated the influences of low-level sensory information, prior choices and motor actions on subsequent perceptual decisions. Following several biased (prior) location discriminations, subsequent (test) discriminations were biased toward the prior choices, even when reported via different motor actions, and when prior and test stimuli differed in color. By contrast, biased discriminations about an irrelevant stimulus feature did not substantially influence subsequent location discriminations. Additionally, when color (rather than location) was discriminated, biased stimulus locations no longer substantially influenced subsequent location decisions. Hence, the degree of relevance between prior and subsequent perceptual decisions is a key factor for serial dependence. This suggests that serial dependence reflects a high-level mechanism by which the brain predicts and interprets incoming sensory information in accordance with relevant prior choices.


2017 ◽  
Author(s):  
Long Luu ◽  
Cheng Qiu ◽  
Alan A. Stocker

Ding et al. (1) recently proposed that the brain automatically encodes high-level, relative stimulus information (i.e., the ordinal relation between two lines), which it then uses to constrain the decoding of low-level, absolute stimulus features (i.e., the actual orientations of the lines when they are recalled). This is an interesting idea that is in line with the self-consistent Bayesian observer model (2, 3) and may have important implications for understanding how the brain processes sensory information. However, the notion suggested in Ding et al. (1) that the brain uses this decoding strategy because it improves perceptual performance is misleading. Here we clarify the decoding model and compare its perceptual performance under various noise and signal conditions.
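The performance point can be illustrated with a toy simulation, assuming Gaussian sensory noise and a flat prior (my construction, not the authors' model code): a decoder that first commits to an ordinal choice and then conditions its estimates on that choice can show higher error than direct decoding when the true difference is small relative to the noise.

```python
# Toy comparison of direct maximum-likelihood decoding vs a
# "self-consistent" decoder that conditions on its own ordinal choice.
import numpy as np

rng = np.random.default_rng(1)
sigma = 5.0                       # sensory noise (deg)
delta = 2.0                       # true orientation difference, small vs noise
n_trials, n_samples = 5000, 500

theta = np.array([delta / 2.0, -delta / 2.0])            # true orientations
m = theta + sigma * rng.standard_normal((n_trials, 2))   # noisy measurements

# Direct decoding: under a flat prior, the ML estimate is the measurement.
mse_direct = np.mean((m - theta) ** 2)

# Self-consistent decoding: commit to the ordinal choice implied by the
# measurements, then take the posterior mean restricted to that choice
# (Monte Carlo over posterior samples).
choice = m[:, 0] > m[:, 1]
samples = m[:, None, :] + sigma * rng.standard_normal((n_trials, n_samples, 2))
consistent = (samples[:, :, 0] > samples[:, :, 1]) == choice[:, None]
weights = consistent / consistent.sum(axis=1, keepdims=True)
estimates = (samples * weights[:, :, None]).sum(axis=1)
mse_selfconsistent = np.mean((estimates - theta) ** 2)

# With delta small relative to sigma, conditioning pushes the two estimates
# apart (repulsion), and the self-consistent MSE is typically the larger one.
print(f"MSE direct: {mse_direct:.2f}  self-consistent: {mse_selfconsistent:.2f}")
```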


2021 ◽  
pp. 1-15
Author(s):  
Leor Zmigrod

Abstract Ideological behavior has traditionally been viewed as a product of social forces. Nonetheless, an emerging science suggests that ideological worldviews can also be understood in terms of neural and cognitive principles. The article proposes a neurocognitive model of ideological thinking, arguing that ideological worldviews may be manifestations of individuals’ perceptual and cognitive systems. This model makes two claims. First, there are neurocognitive antecedents to ideological thinking: the brain’s low-level neurocognitive dispositions influence its receptivity to ideological doctrines. Second, there are neurocognitive consequences to ideological engagement: strong exposure and adherence to ideological doctrines can shape perceptual and cognitive systems. This article details the neurocognitive model of ideological thinking and synthesizes the empirical evidence in support of its claims. The model postulates that there are bidirectional processes between the brain and the ideological environment, and so it can address the roles of situational and motivational factors in ideologically motivated action. This endeavor highlights that an interdisciplinary neurocognitive approach to ideologies can facilitate biologically informed accounts of the ideological brain and thus reveal who is most susceptible to extreme and authoritarian ideologies. By investigating the relationships between low-level perceptual processes and high-level ideological attitudes, we can develop a better grasp of our collective history as well as the mechanisms that may structure our political futures.


2020 ◽  
Author(s):  
Haider Al-Tahan ◽  
Yalda Mohsenzadeh

Abstract While vision evokes a dense network of feedforward and feedback neural processes in the brain, visual processes are primarily modeled with feedforward hierarchical neural networks, leaving the computational role of feedback processes poorly understood. Here, we developed a generative autoencoder neural network model and adversarially trained it on a categorically diverse data set of images. We hypothesized that the feedback processes in the ventral visual pathway can be represented by reconstruction of the visual information performed by the generative model. We compared the representational similarity of the activity patterns in the proposed model with temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) visual brain responses. The proposed generative model identified two segregated neural dynamics in the visual brain: a temporal hierarchy of processes transforming low-level visual information into high-level semantics in the feedforward sweep, and a temporally later dynamic of inverse processes reconstructing low-level visual information from a high-level latent representation in the feedback sweep. Our results add to previous studies on neural feedback processes by offering new insight into the algorithmic function of, and the information carried by, the feedback processes in the ventral visual pathway.

Author summary It has been shown that the ventral visual cortex consists of a dense network of regions with feedforward and feedback connections. The feedforward path processes visual inputs along a hierarchy of cortical areas that starts in early visual cortex (an area tuned to low-level features, e.g., edges/corners) and ends in inferior temporal cortex (an area that responds to higher-level categorical contents, e.g., faces/objects). Alternatively, the feedback connections modulate neuronal responses in this hierarchy by broadcasting information from higher to lower areas. In recent years, deep neural network models trained on object recognition tasks have achieved human-level performance and shown activation patterns similar to those of the visual brain. In this work, we developed a generative neural network model that consists of encoding and decoding sub-networks. By comparing this computational model with human brain temporal (magnetoencephalography) and spatial (functional magnetic resonance imaging) response patterns, we found that the encoder processes resemble the brain's feedforward processing dynamics and the decoder shares similarity with the brain's feedback processing dynamics. These results provide algorithmic insight into the spatiotemporal dynamics of feedforward and feedback processes in biological vision.
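The model-brain comparison described here is a representational similarity analysis (RSA). A minimal sketch of the core computation, with random placeholders standing in for the model activations and the MEG/fMRI patterns (not the authors' pipeline):

```python
# Sketch of RSA: correlate a model layer's representational dissimilarity
# matrix (RDM) with a brain RDM (e.g. one MEG time point or an fMRI region).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_images = 50

layer_acts = rng.standard_normal((n_images, 512))   # model layer activations
brain_resp = rng.standard_normal((n_images, 200))   # voxel/sensor patterns

# RDM: pairwise correlation distance between per-image activity patterns.
# pdist returns the condensed upper triangle directly.
model_rdm = pdist(layer_acts, metric="correlation")
brain_rdm = pdist(brain_resp, metric="correlation")

# Representational similarity: rank correlation of the two RDMs.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA: rho = {rho:.3f} (p = {p:.3g})")
```

Repeating this per MEG time point (or per fMRI region) yields the temporal (or spatial) similarity profiles the abstract describes.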


2018 ◽  
Vol 29 (8) ◽  
pp. 3380-3389
Author(s):  
Timothy J Andrews ◽  
Ryan K Smith ◽  
Richard L Hoggart ◽  
Philip I N Ulrich ◽  
Andre D Gouws

Abstract Individuals from different social groups interpret the world in different ways. This study explores the neural basis of these group differences using a paradigm that simulates natural viewing conditions. Our aim was to determine whether group differences arise in sensory regions involved in perceiving the world or in higher-level regions that are important for interpreting sensory information. We measured brain responses from two groups of football supporters while they watched a video of matches between their teams. The time-course of response was then compared between individuals supporting the same team (within-group) or different teams (between-group). We found high intersubject correlations in low-level and high-level regions of the visual brain. However, these regions did not show any group differences. Regions that showed higher correlations for individuals from the same group were found in a network of frontal and subcortical brain regions. The interplay between these regions suggests that a range of cognitive processes, from motor control to social cognition and reward, is important in the establishment of social groups. These results suggest that group differences are primarily reflected in regions involved in the evaluation and interpretation of the sensory input.
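The within- vs between-group comparison rests on intersubject correlation (ISC) of regional time courses. A minimal sketch with placeholder data (illustrative only; group sizes, time points and the averaging scheme are my assumptions):

```python
# Sketch of the ISC comparison: correlate each subject's regional time course
# with subjects from the same group vs the other group.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n_per_group, n_timepoints = 10, 300
group_a = rng.standard_normal((n_per_group, n_timepoints))  # one ROI's time courses
group_b = rng.standard_normal((n_per_group, n_timepoints))

def mean_pairwise_r(X, Y=None):
    """Mean Pearson r over subject pairs (within X, or across X and Y)."""
    if Y is None:
        pairs = [(X[i], X[j]) for i, j in combinations(range(len(X)), 2)]
    else:
        pairs = [(x, y) for x in X for y in Y]
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])

within = 0.5 * (mean_pairwise_r(group_a) + mean_pairwise_r(group_b))
between = mean_pairwise_r(group_a, group_b)

# A region carrying group differences shows within > between; a purely
# sensory region driven by the shared video shows within ~ between.
print(f"within-group ISC: {within:.3f}  between-group ISC: {between:.3f}")
```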


2021 ◽  
Author(s):  
Meng Liu ◽  
Wenshan Dong ◽  
Shaozheng Qin ◽  
Tom Verguts ◽  
Qi Chen

Abstract Human perception and learning are thought to rely on a hierarchical generative model that is continuously updated via precision-weighted prediction errors (pwPEs). However, the neural basis of this process, and how it unfolds during decision making, remains poorly understood. To investigate this question, we combined a hierarchical Bayesian model (the Hierarchical Gaussian Filter, HGF) with electrophysiological (EEG) recording while participants performed a probabilistic reversal learning task in alternately stable and volatile environments. Behaviorally, the HGF fitted significantly better than two non-hierarchical control models. Neurally, low-level and high-level pwPEs were independently encoded by the P300 component. Low-level pwPEs were reflected in the theta (4-8 Hz) frequency band, but high-level pwPEs were not. Furthermore, the expression of high-level pwPEs was stronger for participants with a better HGF fit. These results indicate that the brain employs hierarchical learning and encodes low- and high-level learning signals separately and adaptively.
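For intuition, a deliberately simplified two-level update in the spirit of the HGF is sketched below; the full HGF adds volatility coupling across levels, and all parameter values here are illustrative, not fitted:

```python
# Simplified precision-weighted belief update for binary outcomes,
# in the spirit of the HGF (illustrative, not the full model).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(4)

# Probabilistic reversal environment: reward probability flips every 80 trials.
p_reward = np.where((np.arange(400) // 80) % 2 == 0, 0.8, 0.2)
outcomes = (rng.random(400) < p_reward).astype(float)   # binary outcomes u_t

mu, sigma = 0.0, 1.0     # belief about the log-odds of reward, and its uncertainty
omega = -4.0             # constant (log) volatility of the hidden tendency
trajectory = []
for u in outcomes:
    sigma_hat = sigma + np.exp(omega)        # predictive uncertainty
    p_hat = sigmoid(mu)                      # predicted reward probability
    delta = u - p_hat                        # low-level prediction error
    pi_hat = p_hat * (1.0 - p_hat)           # precision of the binary outcome
    sigma = 1.0 / (1.0 / sigma_hat + pi_hat) # posterior uncertainty
    mu = mu + sigma * delta                  # precision-weighted update (pwPE)
    trajectory.append((mu, sigma))

# The effective learning rate (sigma) grows after reversals, when predictions
# are uncertain, and shrinks again during stable stretches.
```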


2017 ◽  
Author(s):  
Joel Zylberberg

Abstract To study sensory representations, neuroscientists record neural activity while presenting different stimuli to the animal. From these data, we identify neurons whose activities depend systematically on each aspect of the stimulus. These neurons are said to be "tuned" to that stimulus feature. It is typically assumed that these tuned neurons represent the stimulus feature in their firing, whereas any "untuned" neurons do not contribute to its representation. Recent experimental work has questioned this assumption, showing that in some circumstances, neurons that are untuned to a particular stimulus feature can contribute to its representation. These findings suggest that, by ignoring untuned neurons, our understanding of population coding might be incomplete. At the same time, several key questions remain unanswered: Are the impacts of untuned neurons on population coding due to weak tuning that is nevertheless below the threshold the experimenters set for calling neurons tuned (vs. untuned)? Do these effects hold for different population sizes and/or correlation structures? And could neural circuit function ever benefit from having some untuned neurons, versus having all neurons tuned to the stimulus? Using theoretical calculations and analyses of in vivo neural data, I answer these questions by: a) showing how, in the presence of correlated variability, untuned neurons can enhance sensory information coding, for a variety of population sizes and correlation structures; b) demonstrating that this effect does not rely on weak tuning; and c) identifying conditions under which the neural code can be made more informative by replacing some of the tuned neurons with untuned ones. These conditions specify when there is a functional benefit to having untuned neurons.

Author summary In the visual system, most neurons' firing rates are tuned to various aspects of the stimulus (motion, contrast, etc.). For each stimulus feature, however, some neurons appear to be untuned: their firing rates do not depend on that stimulus feature. Previous work on information coding in neural populations ignored untuned neurons, assuming that only the neurons tuned to a given stimulus feature contribute to its encoding. Recent experimental work has questioned this assumption, showing that neurons with no apparent tuning can sometimes contribute to information coding. However, key questions remain unanswered. First, how do the untuned neurons contribute to information coding, and could this effect rely on those neurons having weak tuning that was overlooked? Second, does the function of a neural circuit ever benefit from having some neurons untuned? Or should every neuron be tuned (even weakly) to every stimulus feature? Here, I use mathematical calculations and analyses of data from the mouse visual cortex to answer these questions. First, I show how (and why) correlations between neurons enable untuned neurons to contribute to information coding. Second, I show that neural populations can often do a better job of encoding a given stimulus feature when some of the neurons are untuned for that stimulus feature. Thus, it may be best for the brain to segregate its tuning, leaving some neurons untuned for each stimulus feature. Along with helping to explain how the brain processes external stimuli, this work has strong implications for attempts to decode brain signals, for example to control brain-machine interfaces: better performance could be obtained if the activities of all neurons are decoded, rather than only those with strong tuning.
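The enhancement described here can be reproduced in miniature with linear Fisher information, I = f'ᵀ Σ⁻¹ f'. In this toy example (mine, not the paper's analysis), a neuron with zero tuning slope still adds information when its noise is correlated with a tuned neuron, because reading it out lets a decoder subtract the shared noise:

```python
# Toy illustration: an untuned neuron increases linear Fisher information
# when its trial-to-trial noise is correlated with a tuned neuron's.
import numpy as np

fprime = np.array([1.0, 0.0])      # tuning slopes: neuron 2 is untuned
rho = 0.6                          # noise correlation between the two neurons
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])     # noise covariance

def fisher(fp, S):
    """Linear Fisher information I = f'^T S^-1 f'."""
    return fp @ np.linalg.solve(S, fp)

info_both = fisher(fprime, Sigma)               # decode both neurons
info_tuned_only = fprime[0] ** 2 / Sigma[0, 0]  # ignore the untuned neuron

# With rho = 0.6, decoding both gives 1 / (1 - rho^2) ~ 1.56 vs 1.0:
# the untuned neuron's activity regresses out the shared noise.
print(f"tuned only: {info_tuned_only:.2f}  with untuned: {info_both:.2f}")
```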


2021 ◽  
Author(s):  
Ro Julia Robotham ◽  
Sheila Kerry ◽  
Grace E Rice ◽  
Alex Leff ◽  
Matt Lambon Ralph ◽  
...  

Much of the patient literature on the visual recognition of faces, words and objects is based on single case studies of patients selected according to their symptom profile. The Back of the Brain project aims to provide novel insights into the cerebral and cortical architecture underlying visual recognition of complex stimuli by adopting a different approach. A large group of patients was recruited according to their lesion location (in the areas supplied by the posterior cerebral artery) rather than their symptomatology. All patients were assessed with the same battery of sensitive tests of visual perception, enabling the identification of dissociations as well as associations between deficits in face, word and object recognition. This paper provides a detailed description of the extensive behavioural test battery that was developed for the Back of the Brain project and that enables assessment of low-level, intermediate and high-level visual perceptual abilities.

• Extensive behavioural test battery for assessing low-level, intermediate and high-level visual perception in patients with posterior cerebral artery stroke

• Method enabling direct comparison of visual face, word and object processing abilities in patients with posterior cerebral artery stroke


2017 ◽  
Vol 114 (43) ◽  
pp. E9115-E9124 ◽  
Author(s):  
Stephanie Ding ◽  
Christopher J. Cueva ◽  
Misha Tsodyks ◽  
Ning Qian

When a stimulus is presented, its encoding is known to progress from low- to high-level features. How these features are decoded to produce perception is less clear, and most models assume that decoding follows the same low- to high-level hierarchy of encoding. There are also theories arguing for global precedence, reversed hierarchy, or bidirectional processing, but they are descriptive and lack quantitative comparison with human perception. Moreover, observers often inspect different parts of a scene sequentially to form overall perception, suggesting that perceptual decoding requires working memory, yet few models consider how working-memory properties may affect decoding hierarchy. We probed decoding hierarchy by comparing absolute judgments of single orientations and relative/ordinal judgments between two sequentially presented orientations. We found that lower-level, absolute judgments failed to account for higher-level, relative/ordinal judgments. However, when ordinal judgment was used to retrospectively decode memory representations of absolute orientations, striking aspects of absolute judgments, including the correlation and forward/backward aftereffects between two reported orientations in a trial, were explained. We propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding.
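The retrospective-decoding idea can be sketched in a few lines (an illustration under simplifying assumptions, not the authors' model): re-estimating the remembered first orientation conditioned on the later ordinal judgment produces the repulsive aftereffect described above.

```python
# Sketch: retrospective conditional decoding of a remembered orientation.
import numpy as np

rng = np.random.default_rng(5)
sigma_mem = 6.0                    # working-memory noise (deg)
n = 5000

theta1 = rng.uniform(-10.0, 10.0, n)             # first orientation
theta2 = theta1 + rng.uniform(-4.0, 4.0, n)      # second, nearby orientation
m1 = theta1 + sigma_mem * rng.standard_normal(n) # memory of the first
m2 = theta2 + sigma_mem * rng.standard_normal(n) # memory of the second

order = m1 > m2                    # higher-level ordinal judgment

# Retrospectively re-decode the first orientation from memory, conditioned on
# the ordinal judgment: posterior mean of N(m1, sigma^2) restricted to the
# side of m2 consistent with the judgment (rejection sampling per trial).
est1 = np.empty(n)
for i in range(n):
    s = m1[i] + sigma_mem * rng.standard_normal(300)
    keep = (s > m2[i]) == order[i]
    est1[i] = s[keep].mean()

# Conditioning pushes the first report away from the second (a repulsive
# aftereffect) and couples the two reports through the shared ordinal choice.
print("mean repulsion (deg):", np.mean(np.sign(m1 - m2) * (est1 - m1)))
```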

