Scene Context Impairs Perception of Semantically Congruent Objects

2022 ◽  
pp. 095679762110326
Author(s):  
Eelke Spaak ◽  
Marius V. Peelen ◽  
Floris P. de Lange

Visual scene context is well-known to facilitate the recognition of scene-congruent objects. Interestingly, however, according to predictive-processing accounts of brain function, scene congruency may lead to reduced (rather than enhanced) processing of congruent objects, compared with incongruent ones, because congruent objects elicit reduced prediction-error responses. We tested this counterintuitive hypothesis in two online behavioral experiments with human participants (N = 300). We found clear evidence for impaired perception of congruent objects, both in a change-detection task measuring response times and in a bias-free object-discrimination task measuring accuracy. Congruency costs were related to independent subjective congruency ratings. Finally, we show that the reported effects cannot be explained by low-level stimulus confounds, response biases, or top-down strategy. These results provide convincing evidence for perceptual congruency costs during scene viewing, in line with predictive-processing theory.
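The reduced-response prediction at the heart of this account can be illustrated with a toy computation (my own sketch, not the authors' model): in predictive coding, the signal a unit transmits upstream is the prediction error, so an object that scene context predicts well produces a weaker response than one it does not.

```python
# Toy predictive-coding unit: the transmitted response is the
# prediction error, input minus top-down prediction. All numbers
# are illustrative.

def prediction_error_response(stimulus, prediction):
    """Error signal per feature: what the prediction failed to explain."""
    return [s - p for s, p in zip(stimulus, prediction)]

def response_magnitude(error):
    """Summed absolute error, a stand-in for neural response strength."""
    return sum(abs(e) for e in error)

# A scene-congruent object is well predicted by context...
congruent = response_magnitude(
    prediction_error_response([1.0, 0.5, 0.2], [0.9, 0.45, 0.25]))
# ...while an incongruent object is not.
incongruent = response_magnitude(
    prediction_error_response([1.0, 0.5, 0.2], [0.1, 0.9, 0.8]))

assert congruent < incongruent  # weaker response to the congruent object
```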

2020 ◽  
Author(s):  
Eelke Spaak ◽  
Marius V. Peelen ◽  
Floris P. de Lange

Abstract
Visual scene context is well-known to facilitate the recognition of scene-congruent objects. Interestingly, however, according to the influential theory of predictive coding, scene congruency should lead to reduced (rather than enhanced) processing of congruent objects, compared to incongruent ones, since congruent objects elicit reduced prediction error responses. We tested this counterintuitive hypothesis in two online behavioural experiments with human participants (N = 300). We found clear evidence for impaired perception of congruent objects, both in a change detection task measuring response times as well as in a bias-free object discrimination task measuring accuracy. Congruency costs were related to independent subjective congruency ratings. Finally, we show that the reported effects cannot be explained by low-level stimulus confounds, response biases, or top-down strategy. These results provide convincing evidence for perceptual congruency costs during scene viewing, in line with predictive coding theory.

Statement of Relevance
The theory of the ‘Bayesian brain’, the idea that our brain is a hypothesis-testing machine, has become very influential over the past decades. A particularly influential formulation is the theory of predictive coding. This theory entails that stimuli that are expected, for instance because of the context in which they appear, generate a weaker neural response than unexpected stimuli. Scene context correctly ‘predicts’ congruent scene elements, which should result in lower prediction error. Our study tests this important, counterintuitive, and hitherto not fully tested, hypothesis. We find clear evidence in favour of it, and demonstrate that these ‘congruency costs’ are indeed evident in perception, and not limited to one particular task setting or stimulus set. Since perception in the real world is never of isolated objects, but always of entire scenes, these findings are important not just for the Bayesian brain hypothesis, but for our understanding of real-world visual perception in general.


2020 ◽  
Vol 43 ◽  
Author(s):  
Martina G. Vilas ◽  
Lucia Melloni

Abstract To become a unifying theory of brain function, predictive processing (PP) must accommodate its rich representational diversity. Gilead et al. claim such diversity requires a multi-process theory, and thus is out of reach for PP, which postulates a universal canonical computation. We contest this argument and instead propose that PP fails to account for the experiential level of representations.


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 806
Author(s):  
Stephen Fox

Psychomotor experience can be based on what people predict they will experience, rather than on sensory inputs. It has been argued that disconnects between human experience and sensory inputs can be addressed better through further development of predictive processing theory. In this paper, the scope of predictive processing theory is extended through three developments. First, by going beyond previous studies that have encompassed embodied cognition but have not addressed some fundamental aspects of psychomotor functioning. Second, by proposing a scientific basis for explaining predictive processing that spans objective neuroscience and subjective experience. Third, by providing an explanation of predictive processing that can be incorporated into the planning and operation of systems involving robots and other new technologies. This is necessary because such systems are becoming increasingly common and move us farther away from the hunter-gatherer lifestyles within which our psychomotor functioning evolved. For example, beliefs that workplace robots are threatening can generate anxiety, while wearing hardware, such as augmented reality headsets and exoskeletons, can impede the natural functioning of psychomotor systems. The primary contribution of the paper is the introduction of a new formulation of hierarchical predictive processing that is focused on psychomotor functioning.


Vision ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 13
Author(s):  
Christian Valuch

Color can enhance the perception of relevant stimuli by increasing their salience and guiding visual search towards stimuli that match a task-relevant color. Using Continuous Flash Suppression (CFS), the current study investigated whether color facilitates the discrimination of targets that are difficult to perceive due to interocular suppression. Gabor patterns of two or four cycles per degree (cpd) were shown as targets to the non-dominant eye of human participants. CFS masks were presented at a rate of 10 Hz to the dominant eye, and participants had the task to report the target’s orientation as soon as they could discriminate it. The 2-cpd targets were robustly suppressed and resulted in much longer response times compared to 4-cpd targets. Moreover, only for 2-cpd targets, two color-related effects were evident. First, in trials where targets and CFS masks had different colors, targets were reported faster than in trials where targets and CFS masks had the same color. Second, targets with a known color, either cyan or yellow, were reported earlier than targets whose color was randomly cyan or yellow. The results suggest that the targets’ entry to consciousness may have been speeded by color-mediated effects relating to increased (bottom-up) salience and (top-down) task relevance.


2020 ◽  
Vol 32 (3) ◽  
pp. 527-545 ◽  
Author(s):  
Peter Kok ◽  
Lindsay I. Rait ◽  
Nicholas B. Turk-Browne

Recent work suggests that a key function of the hippocampus is to predict the future. This is thought to depend on its ability to bind inputs over time and space and to retrieve upcoming or missing inputs based on partial cues. In line with this, previous research has revealed prediction-related signals in the hippocampus for complex visual objects, such as fractals and abstract shapes. Implicit in such accounts is that these computations in the hippocampus reflect domain-general processes that apply across different types and modalities of stimuli. An alternative is that the hippocampus plays a more domain-specific role in predictive processing, with the type of stimuli being predicted determining its involvement. To investigate this, we compared hippocampal responses to auditory cues predicting abstract shapes (Experiment 1) versus oriented gratings (Experiment 2). We measured brain activity in male and female human participants using high-resolution fMRI, in combination with inverted encoding models to reconstruct shape and orientation information. Our results revealed that expectations about shape and orientation evoked distinct representations in the hippocampus. For complex shapes, the hippocampus represented which shape was expected, potentially serving as a source of top–down predictions. In contrast, for simple gratings, the hippocampus represented only unexpected orientations, more reminiscent of a prediction error. We discuss several potential explanations for this content-based dissociation in hippocampal function, concluding that the computational role of the hippocampus in predictive processing may depend on the nature and complexity of stimuli.
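The inverted-encoding approach the study relies on can be sketched generically (channel shapes, counts, and noise levels below are my illustrative assumptions, not the authors' pipeline): fit voxel weights from known channel responses, then invert those weights on a new voxel pattern to reconstruct the channel responses, whose peak decodes the stimulus.

```python
# Generic inverted encoding model (IEM) sketch for orientation.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_voxels, n_trials = 6, 40, 120
centers = np.arange(n_channels) * 180 / n_channels   # 0, 30, ... 150 deg

def channel_responses(theta):
    """Half-rectified cosine tuning curves over 180 deg of orientation."""
    d = (theta - centers + 90) % 180 - 90
    return np.maximum(np.cos(np.deg2rad(d)), 0) ** 5

# Forward model: voxel patterns are weighted sums of channel responses.
thetas = rng.uniform(0, 180, n_trials)
C = np.stack([channel_responses(t) for t in thetas])   # trials x channels
W_true = rng.normal(size=(n_channels, n_voxels))
B = C @ W_true + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Training: estimate the weights. Test: invert them on a held-out
# pattern to reconstruct channel responses and decode orientation.
W_hat = np.linalg.lstsq(C, B, rcond=None)[0]
b_test = channel_responses(72.0) @ W_true
c_rec = b_test @ np.linalg.pinv(W_hat)
decoded = centers[np.argmax(c_rec)]    # nearest channel to 72 deg is 60
```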


2015 ◽  
Vol 68 (2) ◽  
pp. 381-401 ◽  
Author(s):  
Filiz Çoşkun ◽  
Zeynep Ceyda Sayalı ◽  
Emine Gürbüz ◽  
Fuat Balcı

In the temporal bisection task, participants categorize experienced stimulus durations as short or long based on their similarity to previously acquired reference durations. Reward maximization in this task requires integrating endogenous timing uncertainty as well as exogenous probabilities of the reference durations into temporal judgements. We tested human participants on the temporal bisection task with different short and long reference duration probabilities (exogenous probability) in two separate test sessions. Incorrect categorizations were not penalized in Experiment 1 but were penalized in Experiment 2, leading to different levels of stringency in the reward functions that participants tried to maximize. We evaluated the judgements within the framework of optimality. Our participants adapted their choice behaviour in a nearly optimal fashion and earned nearly the maximum possible expected gain they could attain given their level of endogenous timing uncertainty and exogenous probabilities in both experiments. These results point to the optimality of human temporal risk assessment in the temporal bisection task. Response times (RTs) for long categorizations were overall faster than those for short categorizations, and short but not long categorization RTs were modulated by the reference-duration probability manipulations. These observations suggested an asymmetry between short and long categorizations in the temporal bisection task.
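The optimality benchmark described above can be made concrete with a small sketch (hypothetical parameters, not the study's): given Gaussian timing noise, the reward-maximizing observer places the short/long criterion so that expected gain is highest under the current reference-duration probabilities.

```python
# Reward maximization in temporal bisection under Gaussian timing
# noise. Reference durations, noise level, and the search grid are
# illustrative assumptions.
import math

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def expected_gain(criterion, p_short, mu_s=2.0, mu_l=4.0, sigma=0.5):
    # Probability of a correct "short"/"long" call given each reference.
    p_correct_short = gauss_cdf(criterion, mu_s, sigma)
    p_correct_long = 1 - gauss_cdf(criterion, mu_l, sigma)
    return p_short * p_correct_short + (1 - p_short) * p_correct_long

def best_criterion(p_short):
    grid = [2.0 + i * 0.01 for i in range(201)]   # search 2.0-4.0 s
    return max(grid, key=lambda c: expected_gain(c, p_short))

# A higher probability of the short reference shifts the optimal
# criterion upward, so more intervals get categorized as short.
assert best_criterion(0.75) > best_criterion(0.25)
```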


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9414 ◽  
Author(s):  
David Bridges ◽  
Alain Pitiot ◽  
Michael R. MacAskill ◽  
Jonathan W. Peirce

Many researchers in the behavioral sciences depend on research software that presents stimuli and records response times with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimulus timing and response times, measured with a Black Box Toolkit. We compared a range of popular packages: PsychoPy, E-Prime®, NBS Presentation®, Psychophysics Toolbox, OpenSesame, Expyriment, Gorilla, jsPsych, Lab.js and Testable. Where possible, the packages were tested on Windows, macOS, and Ubuntu, and in a range of browsers for the online studies, to try to identify common patterns in performance. Among the lab-based experiments, Psychtoolbox, PsychoPy, Presentation and E-Prime provided the best timing, all with mean precision under 1 millisecond across the visual, audio and response measures. OpenSesame had slightly less precision across the board, most notably in audio stimuli, and Expyriment had rather poor precision. Across operating systems, the pattern was that precision was generally very slightly better under Ubuntu than Windows, and that macOS was the worst, at least for visual stimuli, for all packages. Online studies did not deliver the same level of precision as lab-based systems, with slightly more variability in all measurements. That said, PsychoPy and Gorilla, broadly the best performers, achieved very close to millisecond precision on several browser/operating system combinations. For response times (measured using a high-performance button box), most of the packages achieved precision under 10 ms in all browsers, with PsychoPy achieving a precision under 3.5 ms in all. There was considerable variability between OS/browser combinations, especially in audio-visual synchrony, which is the least precise aspect of the browser-based experiments. Nonetheless, the data indicate that online methods can be suitable for a wide range of studies, with due thought about the sources of variability that result. The results, from over 110,000 trials, highlight the wide range of timing qualities that can occur even in these dedicated software packages for the task. We stress the importance of scientists making their own timing validation measurements for their own stimuli and computer configuration.
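The paper's closing advice can be illustrated with a minimal, hypothetical check (my own sketch): estimate the software clock's practical resolution on your own machine. This only characterizes the timer itself; validating actual stimulus timing requires external hardware such as the Black Box Toolkit used in the study.

```python
# Estimate the practical step size of the high-resolution clock by
# sampling successive readings and taking the median gap between
# distinct values.
import time

def clock_step_us(samples=10000):
    """Median gap between distinct successive clock readings, in microseconds."""
    gaps = []
    prev = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        if now != prev:
            gaps.append((now - prev) * 1e6)
            prev = now
    gaps.sort()
    return gaps[len(gaps) // 2]

step = clock_step_us()
# On typical hardware the software clock steps well below 1 ms, so the
# timing bottleneck is display/audio latency, not the timer itself.
assert step < 1000.0
```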


Problemos ◽  
2019 ◽  
Vol 96 ◽  
pp. 148-159 ◽  
Author(s):  
Paulius Rimkevičius

The interpretive-sensory access (ISA) theory of self-knowledge claims that one knows one’s own mind by turning one’s capacity to know other minds onto oneself. Previously, researchers mostly debated whether the theory receives the most support from the results of empirical research. They have given much less attention to the question whether the theory is the simplest of the available alternatives. I argue that the question of simplicity should be considered in light of the well-established theories surrounding the ISA theory. I claim that the ISA theory then proves to be the simplest. I reply to objections to this claim related to recent developments in this area of research: the emergence of a unified transparency theory of self-knowledge and the relative establishment of the predictive processing theory.


2019 ◽  
Author(s):  
Beren Millidge

Fixational eye movements are ubiquitous and have a large impact on visual perception. Although their physical characteristics and, to some extent, neural underpinnings are well documented, their function, with the exception of preventing visual fading, remains poorly understood. In this paper, we propose that the visual system might utilize the relatively large number of similar slightly jittered images produced by fixational eye movements to help learn robust and spatially invariant representations as a form of neural data augmentation. Additionally, we form a link between effects such as retinal stabilization and predictive processing theory, and argue that they may be best explained under such a paradigm.
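The data-augmentation idea can be made concrete with a toy sketch (my own illustration, not the paper's model): small random shifts of a single image yield many near-duplicates from which a visual system could learn shift-invariant features.

```python
# Fixational jitter as data augmentation: generate slightly shifted
# copies of one small binary image.
import random

random.seed(0)

def jitter(image, max_shift=1):
    """Return a copy of a 2D image shifted by a small random offset,
    zero-filling pixels that shift in from outside the frame."""
    h, w = len(image), len(image[0])
    dy = random.randint(-max_shift, max_shift)
    dx = random.randint(-max_shift, max_shift)
    return [[image[y - dy][x - dx]
             if 0 <= y - dy < h and 0 <= x - dx < w else 0
             for x in range(w)]
            for y in range(h)]

base = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
augmented = [jitter(base) for _ in range(20)]
# Each copy has at most the original "ink": pixels can shift out of
# frame but none are invented.
assert all(sum(map(sum, im)) <= 5 for im in augmented)
```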


2020 ◽  
Author(s):  
James Antony ◽  
Caroline Stiver ◽  
Kathryn Nicole Graves ◽  
Jarryd Osborne ◽  
Nicholas Turk-Browne ◽  
...  

Theories of memory consolidation suggest that initially rich, vivid memories become more gist-like over time. However, it is unclear whether gist-like representations reflect a loss of detail through degradation or the blending of experiences into statistical averages, and whether the strength of these representations increases, decreases, or remains stable over time. We report three behavioral experiments that address these questions by examining distributional learning during spatial navigation. In Experiment 1, human participants navigated a virtual maze to find hidden objects with locations varying according to spatial distributions. After 15 minutes, 1 day, 7 days, or 28 days, we tested their navigation performance and explicit memory. In Experiment 2, we created spatial distributions with no object at their mean locations, thereby disentangling learned object exemplars from statistical averages. In Experiment 3, we created only a single, bimodal distribution to avoid possible confusion between distributions and administered tests after 15 minutes or 28 days. Across all experiments, and for both navigation and explicit tests, representations of the spatial distributions were present soon after exposure, but then receded over time. These findings help clarify the temporal dynamics of consolidation in human learning and memory.
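The gist-versus-exemplar distinction at issue can be sketched numerically (my own illustration, with made-up parameters): for a bimodal distribution of studied locations, a statistical average collapses to the distribution mean, a location where no object ever appeared, whereas exemplar-based responses cluster at the studied locations.

```python
# Bimodal distribution of hidden-object x-positions: the mean (gist)
# falls between the modes, far from every studied exemplar.
import random
import statistics

random.seed(1)
exemplars = ([random.gauss(-2.0, 0.3) for _ in range(20)]
             + [random.gauss(2.0, 0.3) for _ in range(20)])

gist = statistics.mean(exemplars)                    # roughly 0
nearest = min(exemplars, key=lambda e: abs(e - gist))

# The statistical average sits between the modes, away from every
# location at which an object was actually found.
assert abs(gist) < 0.5
assert abs(nearest - gist) > 0.5
```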

