scene processing
Recently Published Documents


TOTAL DOCUMENTS: 135 (FIVE YEARS: 46)
H-INDEX: 21 (FIVE YEARS: 3)

2021 ◽  
Author(s):  
Xenia Grande ◽  
Magdalena Sauvage ◽  
Andreas Becke ◽  
Emrah Duzel ◽  
David Berron

Cortical processing streams for item and contextual information come together in the entorhinal-hippocampal circuitry. Multiple lines of evidence suggest that information-specific pathways organize the cortical-entorhinal interaction and the circuitry's inner communication along the transversal axis. Here, we leveraged ultra-high field functional imaging to build on Maass, Berron et al. (2015), who reported two functional routes segregating the entorhinal cortex (EC) and subiculum. Our data show specific scene processing in the functionally connected posterior-medial EC and distal subiculum. The regions of another route, which connects the anterior-lateral EC and a newly identified retrosplenial-based anterior-medial EC subregion with the CA1/subiculum border, process object and scene information similarly. Our results support topographical information flow in human entorhinal-hippocampal subregions, with organized convergence of cortical processing streams and a unique route for contextual information. They characterize the functional organization of the circuitry and underpin its central role in memory function and pathological decline.


2021 ◽  
Author(s):  
Andrey Chetverikov ◽  
Árni Kristjánsson

Prominent theories of perception suggest that the brain builds probabilistic models of the world, assessing the statistics of the visual input to inform this construction. However, the evidence for this idea is often based on simple, impoverished stimuli, and the results have often been dismissed as an illusion reflecting simple "summary statistics" of visual inputs. Here we show that the visual system represents probabilistic distributions of complex heterogeneous stimuli. Importantly, we show how these statistical representations are integrated with representations of other features and bound to locations, and can therefore serve as building blocks for object and scene processing. We uncover the organization of these representations at different spatial scales by showing how expectations for incoming features are biased by neighboring locations. We also show that there is not only a bias, but also a skew in the representations, arguing against accounts positing that probabilistic representations are discarded in favor of simplified summary statistics (e.g., mean and variance). In sum, our results reveal detailed probabilistic encoding of stimulus distributions, representations that are bound with other features and to particular locations.
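The abstract's key contrast — full distributions versus "mean and variance" summaries — can be made concrete with a small sketch. The distribution below is hypothetical (the study's actual stimulus sets differ); it only illustrates that a skewed feature distribution carries a third standardized moment that a mean-plus-variance summary would discard:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distractor orientations (degrees) drawn from a skewed
# distribution, standing in for a heterogeneous stimulus display.
orientations = rng.gamma(shape=2.0, scale=10.0, size=500) + 20.0

# If observers kept only the mean and variance, these two numbers would
# exhaust the representation of the display...
mean = orientations.mean()
variance = orientations.var()

# ...but a skewed distribution carries structure beyond them: the third
# standardized moment is clearly nonzero here.
skewness = ((orientations - mean) ** 3).mean() / orientations.std() ** 3

print(f"mean={mean:.1f}  var={variance:.1f}  skew={skewness:.2f}")
```

Finding skew in observers' expectations, as the study reports, is evidence that the representation retains more of the distribution than these two summary numbers.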


2021 ◽  
pp. 1-12
Author(s):  
Daniel Kaiser ◽  
Radoslaw M. Cichy

Abstract: During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part–whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part–whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part–whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
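The "dissecting scenes into multiple arbitrary pieces" approach the review describes amounts to slicing an image into a grid of fragments that can then be shown in isolation or in shuffled configurations. A minimal sketch of such a dissection (the grid size and the placeholder image are illustrative assumptions, not the reviewed studies' actual stimuli):

```python
import numpy as np

def dissect_scene(image, n_rows, n_cols):
    """Split an (H, W, C) image array into an n_rows x n_cols grid of
    equally sized parts, in reading order. Any remainder pixels at the
    right/bottom edges are dropped for simplicity."""
    h = image.shape[0] // n_rows
    w = image.shape[1] // n_cols
    return [image[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(n_rows) for c in range(n_cols)]

scene = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder "scene"
parts = dissect_scene(scene, 2, 2)
print(len(parts), parts[0].shape)  # 4 (240, 320, 3)
```

Presenting such parts in their typical versus scrambled spatial configuration is what lets these studies isolate part–whole effects in the neural response.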



2021 ◽  
Vol 21 (9) ◽  
pp. 2973
Author(s):  
Tess Durham ◽  
Elissa Aminoff
Keyword(s):  

2021 ◽  
Vol 21 (9) ◽  
pp. 2080
Author(s):  
Gary C.-W. Shyi ◽  
Peter K.-H. Cheng ◽  
Vivian T.-Y. Peng ◽  
Cody L.-S. Wang ◽  
S.-T. Tina Huang

2021 ◽  
Vol 21 (9) ◽  
pp. 2107
Author(s):  
Vivian T.-Y. Peng ◽  
Peter K.-H. Cheng ◽  
Cody L.-S. Wang ◽  
Gary C.-W. Shyi ◽  
S.-T. Tina Huang

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Lauren E. Welbourne ◽  
Aditya Jonnalagadda ◽  
Barry Giesbrecht ◽  
Miguel P. Eckstein

Abstract: To optimize visual search, humans attend to objects with the expected size of the sought target relative to its surrounding scene (object-scene scale consistency). We investigate how the human brain responds to variations in object-scene scale consistency. We use functional magnetic resonance imaging and a voxel-wise feature encoding model to estimate tuning to different object/scene properties. We find that regions involved in scene processing (transverse occipital sulcus) and spatial attention (intraparietal sulcus) have the strongest responsiveness and selectivity to object-scene scale consistency: reduced activity to mis-scaled objects (unusually small or large relative to the scene). The findings show how and where the brain incorporates object-scene size relationships in the processing of scenes. The response properties of these brain areas might explain why, during visual search, humans often miss objects that are salient but at atypical sizes relative to the surrounding scene.
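The "voxel-wise feature encoding model" mentioned in the abstract is, in its simplest form, a regularized linear regression fit independently per voxel: stimulus features predict each voxel's response, and the fitted weights estimate that voxel's tuning. A minimal simulated sketch (the feature set, penalty, and dimensions are assumptions for illustration, not the study's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_features, n_voxels = 200, 10, 5

X = rng.normal(size=(n_trials, n_features))       # stimulus features per trial
true_w = rng.normal(size=(n_features, n_voxels))  # hidden per-voxel tuning
Y = X @ true_w + 0.1 * rng.normal(size=(n_trials, n_voxels))  # noisy responses

# Ridge regression solved in closed form for all voxels at once:
# W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# With low noise, the recovered weights approximate the simulated tuning.
print(f"max weight error: {np.abs(W - true_w).max():.3f}")
```

In the study's setting, the interesting question is which feature dimensions (e.g., object-scene scale consistency) receive large weights in which regions.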

