Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific

2015 ◽  
Vol 35 (36) ◽  
pp. 12412-12424 ◽  
Author(s):  
A. Stigliani ◽  
K. S. Weiner ◽  
K. Grill-Spector

2018 ◽  
Author(s):  
Anthony Stigliani ◽  
Brianna Jeska ◽  
Kalanit Grill-Spector

ABSTRACT How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions respond to both sustained and transient components of the visual input. Responses to sustained stimuli exhibit adaptation, whereas responses to transient stimuli are surprisingly larger for stimulus offsets than onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations.

AUTHOR SUMMARY How does the brain encode the timing of our visual experience? 
Using functional magnetic resonance imaging (fMRI) and a temporal encoding model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with stimulus onsets and offsets but not the unchanging aspects of the visual input. That is, they compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components, with the former exhibiting adaptation. Surprisingly, in these ventral regions responses to stimulus offsets were larger than onsets. We suggest that the former may reflect a memory trace of the stimulus, when it is no longer visible, and the latter may reflect rapid processing of new items at stimulus onset. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
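The two-channel idea these abstracts describe (a sustained channel that tracks the stimulus while it is on screen and a transient channel that responds to onsets and offsets alike) can be illustrated with a minimal sketch. This is not the authors' actual encoding model: the boxcar sustained channel, the absolute-derivative transient channel, and the gamma-shaped hemodynamic kernel below are all simplifying assumptions chosen for clarity.

```python
import numpy as np

def two_channel_prediction(stim_on, dt=0.001, tau=4.0):
    """Toy two-channel temporal encoding sketch (not the published model).

    stim_on: binary array (1 while a stimulus is on screen), sampled every dt seconds.
    Returns predicted sustained and transient fMRI responses after convolution
    with a crude gamma-shaped hemodynamic response function (HRF).
    """
    # Sustained channel: follows the stimulus for as long as it is visible.
    sustained = stim_on.astype(float)

    # Transient channel: responds to changes, i.e. both onsets and offsets.
    transient = np.abs(np.diff(sustained, prepend=0.0))

    # Crude HRF: gamma-like kernel peaking a few seconds after the event.
    t = np.arange(0, 20, dt)
    hrf = (t / tau) ** 2 * np.exp(-t / tau)
    hrf /= hrf.sum()

    pred_sus = np.convolve(sustained, hrf)[: len(sustained)]
    pred_trn = np.convolve(transient, hrf)[: len(transient)]
    return pred_sus, pred_trn

# A 2 s stimulus sampled at 1 ms resolution, as in the millisecond-precision framing above:
stim = np.zeros(10000)
stim[1000:3000] = 1.0
sus, trn = two_channel_prediction(stim)
```

Under this toy scheme, a lateral region driven only by the transient channel would respond at stimulus onset and offset but stay silent during a long static presentation, while a ventral region combining both channels would respond throughout.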


2007 ◽  
Vol 98 (1) ◽  
pp. 382-393 ◽  
Author(s):  
Thomas J. McKeeff ◽  
David A. Remus ◽  
Frank Tong

Behavioral studies have shown that object recognition becomes severely impaired at fast presentation rates, indicating a limitation in temporal processing capacity. Here, we studied whether this behavioral limit in object recognition reflects limitations in the temporal processing capacity of early visual areas tuned to basic features or high-level areas tuned to complex objects. We used functional MRI (fMRI) to measure the temporal processing capacity of multiple areas along the ventral visual pathway progressing from the primary visual cortex (V1) to high-level object-selective regions, specifically the fusiform face area (FFA) and parahippocampal place area (PPA). Subjects viewed successive images of faces or houses at presentation rates varying from 2.3 to 37.5 items/s while performing an object discrimination task. Measures of the temporal frequency response profile of each visual area revealed a systematic decline in peak tuning across the visual hierarchy. Areas V1–V3 showed peak activity at rapid presentation rates of 18–25 items/s, area V4v peaked at intermediate rates (9 items/s), and the FFA and PPA peaked at the slowest temporal rates (4–5 items/s). Our results reveal a progressive loss in the temporal processing capacity of the human visual system as information is transferred from early visual areas to higher areas. These data suggest that temporal limitations in object recognition likely result from the limited processing capacity of high-level object-selective areas rather than that of earlier stages of visual processing.


2016 ◽  
Vol 27 (1) ◽  
pp. 146-161 ◽  
Author(s):  
Kevin S. Weiner ◽  
Michael A. Barnett ◽  
Simon Lorenz ◽  
Julian Caspers ◽  
Anthony Stigliani ◽  
...  

2014 ◽  
Vol 14 (10) ◽  
pp. 187-187 ◽  
Author(s):  
A. Stigliani ◽  
K. S. Weiner ◽  
K. Grill-Spector

Author(s):  
Lichao Xu ◽  
Szu-Yun Lin ◽  
Andrew W. Hlynka ◽  
Hao Lu ◽  
Vineet R. Kamat ◽  
...  

Abstract There has been a strong need for simulation environments capable of modeling deep interdependencies between complex systems encountered during natural hazards, such as the interactions and coupled effects between civil infrastructure system response, human behavior, and social policies, for improved community resilience. Coupling such complex components in an integrated simulation requires continuous data exchange between the different simulators running separate models throughout the simulation process. This can be implemented by means of distributed simulation platforms or data passing tools. To provide a systematic reference for simulation tool choice, and to facilitate the development of compatible distributed simulators for deep-interdependency studies in the context of natural hazards, this article focuses on generic tools suitable for integrating simulators from different fields, rather than on platforms used mainly within specific fields. With this aim, the article provides a comprehensive review of the most commonly used generic distributed simulation platforms (Distributed Interactive Simulation (DIS), High Level Architecture (HLA), Test and Training Enabling Architecture (TENA), and the Data Distribution Service (DDS)) and data passing tools (the Robot Operating System (ROS) and Lightweight Communications and Marshalling (LCM)) and compares their advantages and disadvantages. Three specific limitations of existing platforms are identified from the perspective of natural hazard simulation. To mitigate these limitations, two platform design recommendations are provided, namely message exchange wrappers and hybrid communication, to help improve data passing capabilities in existing solutions and to provide guidance for the design of a new domain-specific distributed simulation framework.


2017 ◽  
Vol 8 (1) ◽  
Author(s):  
Ben Deen ◽  
Hilary Richardson ◽  
Daniel D. Dilks ◽  
Atsushi Takahashi ◽  
Boris Keil ◽  
...  

2021 ◽  
Vol 30 (6) ◽  
pp. 526-534
Author(s):  
Evelina Fedorenko ◽  
Cory Shain

Understanding language requires applying cognitive operations (e.g., memory retrieval, prediction, structure building) that are relevant across many cognitive domains to specialized knowledge structures (e.g., a particular language’s lexicon and syntax). Are these computations carried out by domain-general circuits or by circuits that store domain-specific representations? Recent work has characterized the roles in language comprehension of the language network, which is selective for high-level language processing, and the multiple-demand (MD) network, which has been implicated in executive functions and linked to fluid intelligence and thus is a prime candidate for implementing computations that support information processing across domains. The language network responds robustly to diverse aspects of comprehension, but the MD network shows no sensitivity to linguistic variables. We therefore argue that the MD network does not play a core role in language comprehension and that past findings suggesting the contrary are likely due to methodological artifacts. Although future studies may reveal some aspects of language comprehension that require the MD network, evidence to date suggests that those will not be related to core linguistic processes such as lexical access or composition. The finding that the circuits that store linguistic knowledge carry out computations on those representations aligns with general arguments against the separation of memory and computation in the mind and brain.


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
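The representational similarity analysis pipeline the abstract describes — measure a response pattern per category, compute pairwise neural dissimilarity, and correlate that structure with behavior — can be sketched with made-up data. All matrix sizes, the random response patterns, and the random "behavioral" dissimilarities below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal representational similarity analysis (RSA) sketch with synthetic data.
rng = np.random.default_rng(0)

n_categories, n_voxels = 8, 100
# Mean response pattern of one (hypothetical) brain region to each object category.
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural dissimilarity: 1 - Pearson correlation between category patterns.
neural_rdm = 1.0 - np.corrcoef(patterns)

# Behavioral dissimilarity, e.g. search times for each category pair (made up here).
behavior_rdm = rng.uniform(size=(n_categories, n_categories))
behavior_rdm = (behavior_rdm + behavior_rdm.T) / 2  # symmetrize

# Compare only the off-diagonal entries of the two dissimilarity matrices.
iu = np.triu_indices(n_categories, k=1)
brain_behavior_r = np.corrcoef(neural_rdm[iu], behavior_rdm[iu])[0, 1]
```

A region whose `brain_behavior_r` is high would, in this framing, have a representational geometry with the "requisite structure" to predict search times for those category pairs.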


2019 ◽  
Vol 19 (10) ◽  
pp. 34a
Author(s):  
Emily Kubota ◽  
Jason D Yeatman
