Choice-induced biases in perception

2016 ◽  
Author(s):  
Long Luu ◽  
Alan A Stocker

Abstract Illusions provide a great opportunity to study how perception is affected by both the observer's expectations and the way sensory information is represented [1–6]. Recently, Jazayeri and Movshon [7] reported a new and interesting perceptual illusion, demonstrating that the perceived motion direction of a dynamic random-dot stimulus is systematically biased when preceded by a motion discrimination judgment. The authors hypothesized that these biases emerge because the brain predominantly relies on those neurons that are most informative for solving the discrimination task [8], but then uses the same neural weighting profile to generate the percept. In other words, they argue that these biases are “mistakes” of the brain, resulting from inappropriate neural read-out weights. While we were able to replicate the illusion for a different visual stimulus (orientation), our new psychophysical data suggest that this interpretation is likely incorrect: the biases are caused not by a read-out profile optimized for solving the discrimination task but by the specific choices subjects make in the discrimination task on any given trial. We formulate this idea as a conditioned Bayesian observer model and show that it can explain the new as well as the original psychophysical data. In this framework, the biases are caused not by a mistake but by the brain's attempt to remain ‘self-consistent’ in its inference process. Our model establishes a direct connection between the current perceptual illusion and the well-known phenomena of cognitive consistency and dissonance [9, 10].
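The choice-conditioned read-out described above can be sketched numerically. The simulation below (a minimal sketch; all parameter values are illustrative assumptions, not taken from the paper) draws noisy orientation measurements, makes a binary clockwise/counter-clockwise discrimination, and then estimates orientation from only the posterior mass consistent with that choice, which pushes estimates away from the decision boundary:

```python
import numpy as np

def conditioned_bayes_estimate(theta_true, sigma_sensory=5.0,
                               n_trials=20000, seed=0):
    """Sketch of a self-consistent (conditioned) Bayesian observer.

    The observer first discriminates whether the stimulus is clockwise (+)
    or counter-clockwise (-) of a reference at 0 deg, then estimates the
    orientation from a posterior conditioned on that binary choice:
    posterior samples inconsistent with the choice are discarded.
    All parameter values here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    # Noisy sensory measurements of the true orientation (deg)
    m = theta_true + rng.normal(0.0, sigma_sensory, n_trials)
    # Binary discrimination choice based on the measurement sign
    choice = np.sign(m)
    # Posterior samples around each measurement (flat prior for simplicity)
    post = m[:, None] + rng.normal(0.0, sigma_sensory, (n_trials, 200))
    # Self-consistency: keep only posterior mass on the chosen side
    masked = np.where(np.sign(post) == choice[:, None], post, np.nan)
    est = np.nanmean(masked, axis=1)
    return est, choice

est, choice = conditioned_bayes_estimate(theta_true=2.0)
# On 'clockwise' trials the conditioned estimate is pushed away from the
# boundary, producing a repulsive bias relative to the true orientation.
bias_cw = est[choice > 0].mean() - 2.0
```

An unconditioned observer (averaging all posterior samples) would show no such repulsion, which is the contrast the model uses to explain the illusion.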

2017 ◽  
Author(s):  
Long Luu ◽  
Cheng Qiu ◽  
Alan A. Stocker

Ding et al. (1) recently proposed that the brain automatically encodes high-level, relative stimulus information (i.e., the ordinal relation between two lines), which it then uses to constrain the decoding of low-level, absolute stimulus features (i.e., the actual orientations of the lines when they are recalled). This is an interesting idea that is in line with the self-consistent Bayesian observer model (2, 3) and may have important implications for understanding how the brain processes sensory information. However, the suggestion in Ding et al. (1) that the brain uses this decoding strategy because it improves perceptual performance is misleading. Here we clarify the decoding model and compare its perceptual performance under various noise and signal conditions.


2020 ◽  
Author(s):  
Madeline S. Cappelloni ◽  
Sabyasachi Shivkumar ◽  
Ralf M. Haefner ◽  
Ross K. Maddox

Abstract The brain combines information from multiple sensory modalities to interpret the environment. Multisensory integration is often modeled by ideal Bayesian causal inference, a model proposing that perceptual decisions arise from a statistical weighting of information from each sensory modality based on its reliability and relevance to the observer’s task. However, ideal Bayesian causal inference fails to describe human behavior in a simultaneous auditory spatial discrimination task in which spatially aligned visual stimuli improve performance despite providing no information about the correct response. This work tests the hypothesis that humans weight auditory and visual information in this task based on their relative reliabilities, even though the visual stimuli are task-uninformative, carrying no information about the correct response, and should be given zero weight. Listeners perform an auditory spatial discrimination task with relative reliabilities modulated by the stimulus durations. By comparing conditions in which task-uninformative visual stimuli are spatially aligned with auditory stimuli or centrally located (control condition), listeners are shown to have a larger multisensory effect when their auditory thresholds are worse. Even in cases in which visual stimuli are not task-informative, the brain combines sensory information that is scene-relevant, especially when the task is difficult due to unreliable auditory information.
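The reliability-weighting component of the Bayesian causal-inference model discussed above is standard inverse-variance cue fusion. A minimal sketch, with hypothetical cue values chosen only for illustration:

```python
import numpy as np

def fuse_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Inverse-variance (reliability) weighting of an auditory and a visual
    cue, the forced-fusion component of Bayesian causal inference.
    Returns the fused location estimate and its standard deviation."""
    # Weight on audition grows as the visual cue becomes less reliable
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    # Fused variance is smaller than either single-cue variance
    sigma = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return mu, sigma

# When audition is unreliable (large sigma_a), the fused estimate is
# pulled strongly toward the visual cue.
mu, sigma = fuse_cues(mu_a=10.0, sigma_a=4.0, mu_v=0.0, sigma_v=1.0)
```

Under this rule a task-uninformative cue should receive weight only insofar as it is deemed to share a common cause with the auditory stimulus, which is the point at issue in the abstract above.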


2021 ◽  
Author(s):  
Margot C Bjoring ◽  
C Daniel Meliza

Sensory input provides incomplete and often misleading information about the physical world. To compensate, the brain uses internal models to predict what the inputs should be from context, experience, and innate biases. For example, when speech is interrupted by noise, humans perceive the missing sounds behind the noise, a perceptual illusion known as phonemic (or auditory) restoration. The neural mechanisms allowing the auditory system to generate predictions that override ascending sensory information remain poorly understood. Here, we show that the zebra finch (Taeniopygia guttata) exhibits auditory restoration of conspecific song both in a behavioral task and in neural recordings from the equivalent of auditory cortex. Decoding the responses of a population of single units to occluded songs reveals the spectrotemporal structure of the missing syllables. Surprisingly, restoration occurs under anesthesia and for songs that the bird has not heard. These results show that an internal model of the general structure of conspecific vocalizations can bias sensory processing without attention.


1999 ◽  
Vol 13 (2) ◽  
pp. 117-125 ◽  
Author(s):  
Laurence Casini ◽  
Françoise Macar ◽  
Marie-Hélène Giard

Abstract The experiment reported here aimed to determine whether the level of brain activity can be related to performance in trained subjects. Two tasks were compared: a temporal task and a linguistic task. An array of four letters appeared on a screen. In the temporal task, subjects had to decide whether the letters remained on the screen for a short or a long duration, as learned in a practice phase. In the linguistic task, they had to determine whether the four letters could form a word (anagram task). These tasks allowed us to compare the level of brain activity associated with correct and incorrect responses. The current density measures recorded over prefrontal areas showed a relationship between performance and the level of activity in the temporal task only: the level of activity for correct responses was lower than that for incorrect responses. This suggests that good temporal performance could result from an efficacious but economical information-processing mechanism in the brain. In addition, the absence of this relation in the anagram task raises the question of whether the relation is specific to the processing of sensory information only.


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features, and that the stimulus structures the perceptual object. The problem for this view is posed by perceptual biases, which are responsible for distortions and for the subjectivity of perceptual experience. These biases are increasingly studied in recent neuroscience as constitutive factors of brain processes. In neural network models, the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective in which smells are thought of as stable percepts computationally linked to external objects such as odorous molecules. Perception is instead presented as a measure of changing signal ratios in an environment, informed by expectancy effects from top-down processes.


Author(s):  
Filippo Ghin ◽  
Louise O’Hare ◽  
Andrea Pavan

Abstract There is evidence that high-frequency transcranial random noise stimulation (hf-tRNS) is effective in improving behavioural performance in several visual tasks. However, so far there has been limited research into the spatial and temporal characteristics of hf-tRNS-induced facilitatory effects. In the present study, electroencephalography (EEG) was used to investigate the spatial and temporal dynamics of cortical activity modulated by offline hf-tRNS during a motion direction discrimination task. We used EEG to measure the amplitude of motion-related VEPs over the parieto-occipital cortex, as well as oscillatory power spectral density (PSD) at rest. A time–frequency decomposition analysis was also performed to investigate the shift in event-related spectral perturbation (ERSP) in response to the motion stimuli between the pre- and post-stimulation periods. The results showed that accuracy on the motion direction discrimination task was not modulated by offline hf-tRNS. Although the motion task elicited motion-dependent VEP components (P1, N2, and P2), none of them showed any significant change between pre- and post-stimulation. We also found a time-dependent increase of PSD in the alpha and beta bands regardless of the stimulation protocol. Finally, time–frequency analysis showed a modulation of ERSP power for gamma activity in the hf-tRNS condition compared with pre-stimulation periods and sham stimulation. Overall, these results show that offline hf-tRNS may induce moderate aftereffects in brain oscillatory activity.


2004 ◽  
Vol 27 (3) ◽  
pp. 377-396 ◽  
Author(s):  
Rick Grush

The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
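The forward-model and Kalman-filter machinery the emulation framework builds on can be illustrated with a scalar example. The sketch below (constants are illustrative assumptions, not values from the paper) shows the two ingredients: a forward model driven by an efference copy, and a measurement update that blends the prediction with noisy sensory feedback; running the forward model alone, without the measurement update, corresponds to the off-line "imagery" mode:

```python
import numpy as np

def emulator_step(x_hat, P, u, z, A=1.0, B=1.0, Q=0.01, R=1.0):
    """One step of a scalar Kalman-filter 'emulator': a forward model
    driven by an efference copy u predicts the next state, and noisy
    sensory feedback z corrects the prediction."""
    # Forward model: predict the next state from the efference copy
    x_pred = A * x_hat + B * u
    P_pred = A * P * A + Q
    # Measurement update: blend prediction with sensory feedback
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(1)
x_true, x_hat, P = 0.0, 0.0, 1.0
for _ in range(50):
    u = 0.1                                  # efference copy of the command
    x_true = x_true + u + rng.normal(0, 0.1)  # body/environment dynamics
    z = x_true + rng.normal(0, 1.0)           # noisy sensory feedback
    x_hat, P = emulator_step(x_hat, P, u, z)
err = abs(x_hat - x_true)
```

Because the prediction is available before the (possibly delayed) feedback arrives, the same loop illustrates how an emulator can mitigate feedback-delay problems in motor control.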


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Helen Feigin ◽  
Shira Baror ◽  
Moshe Bar ◽  
Adam Zaidel

Abstract Perceptual decisions are biased by recent perceptual history, a phenomenon termed ‘serial dependence’. Here, we investigated what aspects of perceptual decisions lead to serial dependence, and disambiguated the influences of low-level sensory information, prior choices and motor actions. Participants discriminated whether a brief visual stimulus lay to the left/right of the screen center. Following a series of biased ‘prior’ location discriminations, subsequent ‘test’ location discriminations were biased toward the prior choices, even when these were reported via different motor actions (using different keys), and when the prior and test stimuli differed in color. By contrast, prior discriminations about an irrelevant stimulus feature (color) did not substantially influence subsequent location discriminations, even though these were reported via the same motor actions. Additionally, when color (not location) was discriminated, a bias in prior stimulus locations no longer influenced subsequent location discriminations. Although low-level stimuli and motor actions did not trigger serial dependence on their own, similarity of these features across discriminations boosted the effect. These findings suggest that relevance across perceptual decisions is a key factor for serial dependence. Accordingly, serial dependence likely reflects a high-level mechanism by which the brain predicts and interprets new incoming sensory information in accordance with relevant prior choices.
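A toy simulation illustrates the choice-driven bias the abstract describes: here the decision variable for a location discrimination blends the current noisy measurement with the running mean of relevant prior choices. The weight `w_prior` is an assumed illustrative parameter, not a value fitted to the study's data:

```python
import numpy as np

def discriminate(stim, prior_choice_mean, w_prior=0.3, sigma=1.0, rng=None):
    """Toy serial-dependence model: a left/right location discrimination
    whose decision variable mixes the current noisy measurement with the
    mean of relevant prior choices (w_prior is an assumed weight)."""
    if rng is None:
        rng = np.random.default_rng(0)
    m = stim + rng.normal(0.0, sigma)           # noisy measurement
    dv = (1.0 - w_prior) * m + w_prior * prior_choice_mean
    return 1 if dv > 0 else -1                  # +1 = right, -1 = left

rng = np.random.default_rng(0)
# A biased run of 'prior' trials (mostly rightward choices) ...
prior_mean = 1.0
# ... attracts decisions about an ambiguous 'test' stimulus at the center.
n_right = sum(discriminate(0.0, prior_mean, rng=rng) == 1
              for _ in range(5000))
frac_right = n_right / 5000
```

In this sketch, setting `w_prior` to zero (or making the prior choices irrelevant to the current discrimination) removes the bias, mirroring the relevance dependence reported above.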

