Visual Scene Processing in Familiar and Unfamiliar Environments

2007 ◽  
Vol 97 (5) ◽  
pp. 3670-3683 ◽  
Author(s):  
Russell A. Epstein ◽  
J. Stephen Higgins ◽  
Karen Jablonski ◽  
Alana M. Feiler

Humans and animals use information obtained from the local visual scene to orient themselves in the wider world. Although neural systems involved in scene perception have been identified, the extent to which processing in these systems is affected by previous experience is unclear. We addressed this issue by scanning subjects with functional magnetic resonance imaging (fMRI) while they viewed photographs of familiar and unfamiliar locations. Scene-selective regions in parahippocampal cortex (the parahippocampal place area, or PPA), retrosplenial cortex (RSC), and the transverse occipital sulcus (TOS) responded more strongly to images of familiar locations than to images of unfamiliar locations, with the strongest effects (>50% increase) in RSC. Examination of fMRI repetition suppression (RS) effects indicated that images of familiar and unfamiliar locations were processed with the same degree of viewpoint specificity; however, increased viewpoint invariance was observed as individual scenes became more familiar over the course of a scan session. Surprisingly, these within-scan-session viewpoint-invariant RS effects were only observed when scenes were repeated across different trials but not when scenes were repeated within a trial, suggesting that within- and between-trial RS effects may index different aspects of visual scene processing. The sensitivity to environmental familiarity observed in the PPA, RSC, and TOS supports earlier claims that these regions mediate the extraction of navigationally relevant spatial information from visual scenes. As locations become familiar, the neural representations of these locations become enriched, but the viewpoint invariance of these representations does not change.
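The repetition suppression logic used in analyses like this one can be illustrated with a minimal sketch: an RS index is the proportional drop in response to a repeated scene relative to its first presentation. The response values below are hypothetical percent-signal-change numbers, not data from the study.

```python
import numpy as np

def repetition_suppression_index(initial: np.ndarray, repeated: np.ndarray) -> float:
    """Proportional reduction in mean response for repeated vs. initial
    presentations; positive values indicate repetition suppression."""
    m_init = float(np.mean(initial))
    m_rep = float(np.mean(repeated))
    return (m_init - m_rep) / m_init

# Hypothetical percent-signal-change values for one region:
initial_views = np.array([0.80, 0.75, 0.85, 0.78])   # first presentation of each scene
repeated_views = np.array([0.55, 0.60, 0.50, 0.58])  # same scenes, repeated presentation

rs = repetition_suppression_index(initial_views, repeated_views)
print(f"RS index: {rs:.2f}")
```

Comparing this index for same-viewpoint versus different-viewpoint repetitions is what licenses conclusions about viewpoint specificity: suppression that survives a viewpoint change indicates a viewpoint-invariant representation.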

2017 ◽  
Author(s):  
Misun Kim ◽  
Eleanor A. Maguire

Abstract Humans commonly operate within 3D environments such as multi-floor buildings and yet there is a surprising dearth of studies that have examined how these spaces are represented in the brain. Here we had participants learn the locations of paintings within a virtual multi-level gallery building and then used behavioural tests and fMRI repetition suppression analyses to investigate how this 3D multi-compartment space was represented, and whether there was a bias in encoding vertical and horizontal information. We found faster response times for within-room egocentric spatial judgments and behavioural priming effects of visiting the same room, providing evidence for a compartmentalised representation of space. At the neural level, we observed a hierarchical encoding of 3D spatial information, with left anterior hippocampus representing local information within a room, while retrosplenial cortex, parahippocampal cortex and posterior hippocampus represented room information within the wider building. Of note, both our behavioural and neural findings showed that vertical and horizontal location information was similarly encoded, suggesting an isotropic representation of 3D space even in the context of a multi-compartment environment. These findings provide much-needed information about how the human brain supports spatial memory and navigation in buildings with numerous levels and rooms.


2020 ◽  
Author(s):  
Shao-Fang Wang ◽  
Valerie A. Carr ◽  
Serra E. Favila ◽  
Jeremy N. Bailenson ◽  
Thackery I. Brown ◽  
...  

Abstract The hippocampus (HC) and surrounding medial temporal lobe (MTL) cortical regions play a critical role in spatial navigation and episodic memory. However, it remains unclear how the interaction between the HC’s conjunctive coding and mnemonic differentiation contributes to neural representations of spatial environments. Multivariate functional magnetic resonance imaging (fMRI) analyses enable examination of how human HC and MTL cortical regions encode multidimensional spatial information to support memory-guided navigation. We combined high-resolution fMRI with a virtual navigation paradigm in which participants relied on memory of the environment to navigate to goal locations in two different virtual rooms. Within each room, participants were cued to navigate to four learned locations, each associated with one of two reward values. Pattern similarity analysis revealed that when participants successfully arrived at goal locations, activity patterns in HC and parahippocampal cortex (PHC) represented room-goal location conjunctions and activity patterns in HC subfields represented room-reward-location conjunctions. These results add to an emerging literature revealing hippocampal conjunctive representations during goal-directed behavior.
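Pattern similarity analysis of the kind described above correlates voxel activity patterns across trials; conjunctive coding predicts higher correlations between trials sharing a room-goal conjunction than between trials from different rooms. A minimal sketch with hypothetical voxel patterns (the values are illustrative, not data from the study):

```python
import numpy as np

def pattern_similarity(p1: np.ndarray, p2: np.ndarray) -> float:
    """Pearson correlation between two voxel activity patterns."""
    return float(np.corrcoef(p1, p2)[0, 1])

# Hypothetical voxel patterns (one value per voxel) for three navigation trials:
room1_goalA_trial1 = np.array([1.2, 0.3, -0.5, 0.9, -1.1, 0.4])
room1_goalA_trial2 = np.array([1.0, 0.4, -0.6, 1.1, -0.9, 0.5])  # same room-goal conjunction
room2_goalA_trial1 = np.array([-0.8, 1.1, 0.7, -0.2, 0.6, -1.0])  # different room, same goal

same_r = pattern_similarity(room1_goalA_trial1, room1_goalA_trial2)
diff_r = pattern_similarity(room1_goalA_trial1, room2_goalA_trial1)
print(same_r > diff_r)  # conjunctive coding predicts True
```

In practice the comparison is run over many trial pairs per region of interest, and the same-conjunction versus different-conjunction similarity difference is tested statistically across participants.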


2013 ◽  
Vol 25 (6) ◽  
pp. 961-968 ◽  
Author(s):  
Rachel E. Ganaden ◽  
Caitlin R. Mullin ◽  
Jennifer K. E. Steeves

Traditionally, it has been theorized that the human visual system identifies and classifies scenes in an object-centered approach, such that scene recognition can only occur once key objects within a scene are identified. Recent research points toward an alternative approach, suggesting that the global image features of a scene are sufficient for the recognition and categorization of a scene. We have previously shown that disrupting object processing with repetitive TMS to object-selective cortex enhances scene processing, possibly through a release of inhibitory mechanisms between object and scene pathways [Mullin, C. R., & Steeves, J. K. E. TMS to the lateral occipital cortex disrupts object processing but facilitates scene processing. Journal of Cognitive Neuroscience, 23, 4174–4184, 2011]. Here we show the effects of TMS to the transverse occipital sulcus (TOS), an area implicated in scene perception, on scene and object processing. TMS was delivered to the TOS or the vertex (control site) while participants performed an object and scene natural/nonnatural categorization task. Transiently interrupting the TOS resulted in significantly lower accuracies for scene categorization compared with control conditions. This demonstrates a causal role of the TOS in scene processing and indicates its importance, in addition to the parahippocampal place area and retrosplenial cortex, in the scene processing network. Unlike TMS to object-selective cortex, which facilitates scene categorization, disrupting scene processing through stimulation of the TOS did not affect object categorization. Further analysis revealed a higher proportion of errors for nonnatural scenes, which led us to speculate that the TOS may be involved in processing the higher spatial frequency content of a scene. This supports a nonhierarchical model of scene recognition.


2017 ◽  
Vol 372 (1714) ◽  
pp. 20160102 ◽  
Author(s):  
Iris I. A. Groen ◽  
Edward H. Silson ◽  
Chris I. Baker

Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’.


2020 ◽  
Author(s):  
Yaelan Jung ◽  
Dirk B. Walther

Abstract Natural scenes deliver rich sensory information about the world. Decades of research have shown that the scene-selective network in the visual cortex represents various aspects of scenes. It is, however, unknown how such complex scene information is processed beyond the visual cortex, such as in the prefrontal cortex. It is also unknown how task context impacts the process of scene perception, modulating which scene content is represented in the brain. In this study, we investigate these questions using scene images from four natural scene categories, which also depict two types of global scene properties: temperature (warm or cold) and sound-level (noisy or quiet). A group of healthy human subjects of both sexes participated in the present fMRI study. Participants viewed scene images under two different task conditions: temperature judgment and sound-level judgment. We analyzed how different scene attributes (scene categories, temperature, and sound-level information) are represented across the brain under these task conditions. Our findings show that global scene properties are represented in the brain, especially in the prefrontal cortex, only when they are task-relevant. However, scene categories are represented in the brain, in both the parahippocampal place area and the prefrontal cortex, regardless of task context. These findings suggest that the prefrontal cortex selectively represents scene content according to task demands, but this task selectivity depends on the type of scene content; task modulates neural representations of global scene properties but not of scene categories.


Author(s):  
Bin Wang ◽  
Tianyi Yan ◽  
Jinglong Wu

Face perception is considered the most developed visual perceptual skill in humans. Functional magnetic resonance imaging (fMRI) studies have demonstrated that multiple regions exhibit a stronger neural response to faces than to other visual object categories, and are thus specialized for face processing. These regions are the lateral side of the fusiform gyrus, the “fusiform face area” or FFA; the inferior occipital gyri, the “occipital face area” or OFA; and the posterior superior temporal sulcus (pSTS). These regions are thought to perform the visual analysis of faces and appear to participate differentially in different types of face perception. An important question is how faces are represented within these areas. In this chapter, the authors review the function, interaction, and topography of these regions relevant to face perception. They also discuss the human neural systems that mediate face perception and attempt to indicate some research directions for face perception and neural representations.


2020 ◽  
Vol 32 (10) ◽  
pp. 2013-2023 ◽  
Author(s):  
John M. Henderson ◽  
Jessica E. Goold ◽  
Wonil Choi ◽  
Taylor R. Hayes

During real-world scene perception, viewers actively direct their attention through a scene in a controlled sequence of eye fixations. During each fixation, local scene properties are attended, analyzed, and interpreted. What is the relationship between fixated scene properties and neural activity in the visual cortex? Participants inspected photographs of real-world scenes in an MRI scanner while their eye movements were recorded. Fixation-related fMRI was used to measure activation as a function of lower- and higher-level scene properties at fixation, operationalized as edge density and meaning maps, respectively. We found that edge density at fixation was most associated with activation in early visual areas, whereas semantic content at fixation was most associated with activation along the ventral visual stream including core object and scene-selective areas (lateral occipital complex, parahippocampal place area, occipital place area, and retrosplenial cortex). The observed activation from semantic content was not accounted for by differences in edge density. The results are consistent with active vision models in which fixation gates detailed visual analysis for fixated scene regions, and this gating influences both lower and higher levels of scene analysis.
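The edge-density predictor described above can be approximated, for illustration, as the fraction of pixels in a fixated patch whose luminance-gradient magnitude exceeds a threshold. A minimal sketch with hypothetical patches (the threshold and patch values are illustrative, not the operationalization used in the study):

```python
import numpy as np

def edge_density(patch: np.ndarray, threshold: float = 0.4) -> float:
    """Fraction of pixels whose luminance-gradient magnitude exceeds a
    threshold -- a simple proxy for a low-level edge-density measure."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude > threshold))

# Hypothetical luminance patches around two fixations:
flat_patch = np.ones((8, 8))                        # uniform region: no edges
stripe_patch = np.tile([0.0, 0.0, 1.0, 1.0], (8, 2))  # high-contrast stripes

print(edge_density(flat_patch), edge_density(stripe_patch))
```

In a fixation-related analysis, a value like this computed around each fixation would serve as a parametric regressor for the low-level predictor, while meaning-map values at the same locations would serve as the semantic predictor.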


Author(s):  
Chris Eliasmith

This article describes the neural engineering framework (NEF), a systematic approach to studying neural systems that has collected and extended a consistent, highly general set of methods. The NEF draws heavily on past work in theoretical neuroscience, integrating work on neural coding, population representation, and neural dynamics to enable the construction of large-scale biologically plausible neural simulations. It is based on the principles that neural representations are defined by a combination of nonlinear encoding and optimal linear decoding, and that neural dynamics are characterized by considering neural representations as control-theoretic state variables.
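The encoding/decoding principle named above can be sketched in a few lines: a minimal "on/off" pair of rectified-linear neurons nonlinearly encodes a scalar x, and least-squares regression recovers the optimal linear decoders. The tuning parameters are illustrative; the full NEF works with noisy spiking nonlinearities and implements dynamics through recurrent connections.

```python
import numpy as np

# Nonlinear encoding: two rectified-linear neurons with opposite encoders,
# a minimal "on/off" population representing a scalar variable x.
x = np.linspace(-1.0, 1.0, 101)
a_on = np.maximum(0.0, 1.0 * x)    # encoder +1, gain 1, bias 0
a_off = np.maximum(0.0, -1.0 * x)  # encoder -1, gain 1, bias 0
A = np.column_stack([a_on, a_off])  # activity matrix (samples x neurons)

# Optimal linear decoding: least-squares decoders d minimizing ||A d - x||.
d, *_ = np.linalg.lstsq(A, x, rcond=None)
x_hat = A @ d

print(np.allclose(x_hat, x))  # this on/off pair decodes x exactly: True
```

Since x = max(0, x) - max(0, -x), the least-squares solution is d = [1, -1] and the reconstruction is exact; larger, noisier populations yield approximate decodes whose error shrinks with population size.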


2019 ◽  
Vol 30 (3) ◽  
pp. 1260-1271 ◽  
Author(s):  
He Chen ◽  
Yuji Naya

Abstract While the hippocampus (HPC) is a prime candidate for combining object identity and location due to its strong connections to the ventral and dorsal pathways via surrounding medial temporal lobe (MTL) areas, recent physiological studies have reported spatial information in the ventral pathway and its downstream target in MTL. However, it remains unknown whether the object–location association proceeds along the ventral MTL pathway before HPC. To address this question, we recorded neuronal activity from MTL and area anterior inferotemporal cortex (TE) of two macaques gazing at an object to retain its identity and location in each trial. The results showed significant effects of object–location association at a single-unit level in TE, perirhinal cortex (PRC), and HPC, but not in the parahippocampal cortex. Notably, a clear area difference emerged in the association form: 1) representations of object identity were added to those of subjects’ viewing location in TE; 2) PRC signaled both the additive form and the conjunction of the two inputs; and 3) HPC signaled only the conjunction signal. These results suggest that the object and location signals are combined stepwise at TE and PRC each time primates view an object, and PRC may provide HPC with the conjunctive signal, which might be used for encoding episodic memory.

