Encoding model of temporal processing in human visual cortex

2017 ◽  
Vol 114 (51) ◽  
pp. E11047-E11056 ◽  
Author(s):  
Anthony Stigliani ◽  
Brianna Jeska ◽  
Kalanit Grill-Spector

How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Unlike the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus and then derives fMRI responses from them. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings prompt a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain.
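To make the two-temporal-channel encoding approach concrete, the following minimal Python sketch builds sustained and transient neural predictors at millisecond resolution, convolves them with a hemodynamic response function (HRF), and fits one weight per channel by least squares. The channel definitions (boxcar sustained channel, rectified-derivative transient channel) and the double-gamma HRF are simplifying assumptions for illustration, not the authors' exact filters or fitting procedure.

import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1/6.0):
    """Simple double-gamma hemodynamic response function sampled at times t (s)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def two_channel_predictors(stimulus_ms, tr_s=1.0):
    """Build sustained and transient fMRI predictors from a millisecond-resolution
    binary stimulus vector (1 = stimulus on, 0 = off)."""
    stim = np.asarray(stimulus_ms, float)
    sustained = stim.copy()                          # tracks the stimulus itself
    transient = np.abs(np.diff(stim, prepend=0.0))   # responds at onsets/offsets
    t = np.arange(0.0, 30.0, 0.001)                  # HRF sampled at 1 ms
    hrf = double_gamma_hrf(t)
    pred_s = np.convolve(sustained, hrf)[: len(stim)]
    pred_t = np.convolve(transient, hrf)[: len(stim)]
    step = int(tr_s * 1000)                          # ms samples per TR
    return pred_s[::step], pred_t[::step]

def fit_channel_weights(bold, pred_s, pred_t):
    """Estimate one weight per channel (plus a constant) by least squares."""
    X = np.column_stack([pred_s, pred_t, np.ones_like(pred_s)])
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return beta[0], beta[1]   # (sustained weight, transient weight)

The relative size of the two fitted weights is what distinguishes transient-dominated regions from regions driven by both channels.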

2017 ◽  
Author(s):  
Anthony Stigliani ◽  
Brianna Jeska ◽  
Kalanit Grill-Spector

How is temporal information processed in human visual cortex? There is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a novel two-temporal-channel encoding model to estimate the contributions of each channel. The model predicts cortical responses to time-varying stimuli from milliseconds to seconds and reveals that (i) lateral occipito-temporal regions and peripheral early visual cortex are dominated by transient responses, and (ii) ventral occipito-temporal regions and central early visual cortex are driven by both channels, with transient responses exceeding the sustained. These findings resolve an outstanding debate and elucidate temporal processing in human visual cortex. Importantly, this approach has vast implications because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain.


2021 ◽  
Vol 15 ◽  
Author(s):  
Yun Lin ◽  
Xi Zhou ◽  
Yuji Naya ◽  
Justin L. Gardner ◽  
Pei Sun

The linearity of BOLD responses is a fundamental presumption in most analysis procedures for BOLD fMRI studies. Previous studies have examined the linearity of BOLD signal increments, but less is known about the linearity of BOLD signal decrements. The present study assessed the linearity of both BOLD signal increments and decrements in the human primary visual cortex using a contrast adaptation paradigm. Results showed that both BOLD signal increments and decrements remained linear for long stimuli (e.g., 3 s, 6 s) yet deviated from linearity for transient stimuli (e.g., 1 s). Furthermore, a voxel-wise analysis showed that the deviation patterns differed for BOLD signal increments and decrements: while BOLD signal increments demonstrated a consistent overestimation pattern, the patterns for BOLD signal decrements varied from overestimation to underestimation. Our results suggest that corrections for deviations from linearity of transient responses should consider the different effects of BOLD signal increments and decrements.
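A simple way to picture the superposition test implied here: if BOLD responses were linear, the response to a 3 s stimulus would equal the sum of three 1 s responses shifted by 1 s each. The sketch below illustrates that comparison on hypothetical trial-averaged time courses; the published paradigm (contrast adaptation with both increments and decrements) and its statistics are more involved.

import numpy as np

def predict_by_superposition(resp_1s, duration_s, tr_s=1.0):
    """Predict the response to a duration_s-second stimulus by shifting and
    summing the measured response to a 1 s stimulus (linearity assumption)."""
    resp_1s = np.asarray(resp_1s, float)
    n = len(resp_1s)
    shift = int(round(1.0 / tr_s))           # samples per 1 s step
    predicted = np.zeros(n)
    for k in range(duration_s):              # one shifted copy per second
        s = k * shift
        predicted[s:] += resp_1s[: n - s]
    return predicted

def linearity_deviation(measured_long, predicted_long):
    """Ratio of absolute peak amplitudes; > 1 means the linear prediction
    overestimates the measured response (works for increments and decrements)."""
    return np.abs(predicted_long).max() / np.abs(measured_long).max()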


2018 ◽  
Author(s):  
Anthony Stigliani ◽  
Brianna Jeska ◽  
Kalanit Grill-Spector

How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not to sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions respond to both sustained and transient components of the visual input. Responses to sustained stimuli exhibit adaptation, whereas responses to transient stimuli are surprisingly larger for stimulus offsets than onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations.

AUTHOR SUMMARY: How does the brain encode the timing of our visual experience? Using functional magnetic resonance imaging (fMRI) and a temporal encoding model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with stimulus onsets and offsets but not the unchanging aspects of the visual input. That is, they compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components, with the former exhibiting adaptation. Surprisingly, in these ventral regions responses to stimulus offsets were larger than onsets. We suggest that the former may reflect a memory trace of the stimulus when it is no longer visible, and the latter may reflect rapid processing of new items at stimulus onset. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
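The sketch below illustrates, under simplifying assumptions, the kind of flexible channel predictors this abstract describes: a sustained channel with a temporal integration window and a compressive nonlinearity, and a transient channel split into onset and offset components so their weights can be fit separately (allowing offset responses to exceed onset responses, as reported). The window length, power-law exponent, and boxcar integration are illustrative choices, and the resulting neural time courses would still need to be convolved with an HRF and fit to the data, as in the earlier sketch.

import numpy as np

def sustained_channel(stimulus_ms, window_ms=100, exponent=0.5):
    """Sustained channel: integrate the stimulus over a short time window
    (boxcar at ms resolution) and apply a compressive power-law nonlinearity."""
    stim = np.asarray(stimulus_ms, float)
    kernel = np.ones(window_ms) / window_ms
    integrated = np.convolve(stim, kernel)[: len(stim)]
    return np.power(integrated, exponent)

def transient_channels(stimulus_ms):
    """Transient channel split into onset and offset components so their
    weights can be fit separately (letting offset responses exceed onsets)."""
    d = np.diff(np.asarray(stimulus_ms, float), prepend=0.0)
    onsets = np.clip(d, 0, None)     # positive steps: stimulus appears
    offsets = np.clip(-d, 0, None)   # negative steps: stimulus disappears
    return onsets, offsets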


2017 ◽  
Vol 118 (6) ◽  
pp. 3194-3214 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Krystal R. Leger ◽  
John T. Serences

Identifying an object and distinguishing it from similar items depends upon the ability to perceive its component parts as conjoined into a cohesive whole, but the brain mechanisms underlying this ability remain elusive. The ventral visual processing pathway in primates is organized hierarchically: Neuronal responses in early stages are sensitive to the manipulation of simple visual features, whereas neuronal responses in subsequent stages are tuned to increasingly complex stimulus attributes. It is widely assumed that feature-coding dominates in early visual cortex whereas later visual regions employ conjunction-coding in which object representations are different from the sum of their simple feature parts. However, no study in humans has demonstrated that putative object-level codes in higher visual cortex cannot be accounted for by feature-coding and that putative feature codes in regions prior to ventral temporal cortex are not equally well characterized as object-level codes. Thus the existence of a transition from feature- to conjunction-coding in human visual cortex remains unconfirmed, and if a transition does occur its location remains unknown. By employing multivariate analysis of functional imaging data, we measure both feature-coding and conjunction-coding directly, using the same set of visual stimuli, and pit them against each other to reveal the relative dominance of one vs. the other throughout cortex. Our results reveal a transition from feature-coding in early visual cortex to conjunction-coding in both inferior temporal and posterior parietal cortices. This novel method enables the use of experimentally controlled stimulus features to investigate population-level feature and conjunction codes throughout human cortex.

NEW & NOTEWORTHY: We use a novel analysis of neuroimaging data to assess representations throughout visual cortex, revealing a transition from feature-coding to conjunction-coding along both ventral and dorsal pathways. Occipital cortex contains more information about spatial frequency and contour than about conjunctions of those features, whereas inferotemporal and parietal cortices contain conjunction-coding sites in which there is more information about the whole stimulus than its component parts.
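As a simplified illustration of pitting feature-coding against conjunction-coding with multivariate analysis, the hypothetical sketch below compares cross-validated decoding of each single feature with decoding of their conjunction from one region's voxel patterns. The published method is more careful about ruling out feature-based explanations of apparent conjunction codes; the variable names and classifier here are assumptions.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decoding_accuracy(patterns, labels, cv=5):
    """Cross-validated accuracy of a linear classifier on voxel patterns."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, patterns, labels, cv=cv).mean()

def feature_vs_conjunction(patterns, freq_labels, contour_labels):
    """Compare information about each single feature with information about
    their conjunction in one region's (n_trials x n_voxels) patterns."""
    acc_freq = decoding_accuracy(patterns, freq_labels)
    acc_contour = decoding_accuracy(patterns, contour_labels)
    conj_labels = [f"{f}|{c}" for f, c in zip(freq_labels, contour_labels)]
    acc_conj = decoding_accuracy(patterns, conj_labels)
    # Feature-coding regions: single-feature accuracies dominate.
    # Conjunction-coding regions: the conjunction is decoded better than
    # expected from the individual features alone.
    return {"spatial_frequency": acc_freq, "contour": acc_contour,
            "conjunction": acc_conj}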


Author(s):  
Daphne Maurer ◽  
Terri L. Lewis

Patterned visual input during early infancy plays a key role in constructing and/or preserving the neural architecture that will be used later for both low-level basic vision and higher-level visual decoding. The high-contrast, low spatial frequencies that newborns can extract from their environment set up the system for later development of fine acuity, expert face processing, and specialization of the visual cortex for visual processing. Nevertheless, considerable plasticity remains in adulthood for rescuing the system from earlier damage.


2018 ◽  
Author(s):  
Jesse Gomez ◽  
Zonglei Zhen ◽  
Kevin Weiner

Human visual cortex is organized with striking consistency across individuals. While recent findings demonstrate an unexpected coupling between functional and cytoarchitectonic regions relative to the folding of human visual cortex, a unifying principle linking these anatomical and functional features of cortex remains elusive. To fill this gap in knowledge, we combined independent and ground truth measurements of human cytoarchitectonic regions and genetic tissue characterization within the visual processing hierarchy. Using a data-driven approach, we examined whether differential gene expression among cortical areas could explain the organization of the visual processing hierarchy into early, middle, and late processing stages. This approach revealed that the visual processing hierarchy is explained by two opposing gene expression gradients: one that contains a series of genes with expression magnitudes that ascend from the first processing stage (e.g., area hOc1, or V1) to the last processing stage (e.g., area FG4) and another that contains a separate series of genes that show a descending gradient. In the living human brain, each of these gradients correlates strongly with anatomical variations along the visual hierarchy such as the thickness or myelination of cortex. We further reveal that these genetic gradients emerge along unique trajectories in human development: the ascending gradient is present at 10-12 gestational weeks, while the descending gradient emerges later (19-24 gestational weeks). Interestingly, it is not until early childhood (before 5 years of age) that the two expression gradients achieve their adult-like mean expression values. Finally, additional analyses in non-human primates (NHP) reveal the surprising finding that only the ascending, but not the descending, expression gradient is evolutionarily conserved. These findings create one of the first models bridging macroscopic features of human cytoarchitectonic areas in visual cortex with microscopic features of cellular organization and genetic expression, revealing that the hierarchy of human visual cortex, its cortical folding, and the cytoarchitecture underlying its computations can be described by a sparse subset (~200) of genes, roughly one-third of which are not shared with NHP. These findings help pinpoint the genes contributing to both healthy cortical development and the cortical biology distinguishing humans from other primates, establishing essential groundwork for understanding future work linking genetic mutations with the function and development of the human brain.
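The data-driven gradient analysis can be pictured with the following hypothetical sketch: given an expression matrix with cortical areas ordered along the visual hierarchy (e.g., hOc1 to FG4), each gene is correlated with hierarchy position to separate ascending from descending gradients, whose mean expression can then be related to anatomical measures such as cortical thickness or myelination. The threshold, the use of Spearman correlation, and the variable names are illustrative assumptions, not the authors' pipeline.

import numpy as np
from scipy.stats import spearmanr

def expression_gradients(expression, rho_threshold=0.7):
    """expression: (n_genes x n_areas) matrix, areas ordered along the visual
    hierarchy (e.g., hOc1 ... FG4). Returns indices of genes whose expression
    ascends or descends along the hierarchy."""
    n_genes, n_areas = expression.shape
    hierarchy_rank = np.arange(n_areas)
    rho = np.array([spearmanr(expression[g], hierarchy_rank).correlation
                    for g in range(n_genes)])
    ascending = np.where(rho >= rho_threshold)[0]    # rises from hOc1 to FG4
    descending = np.where(rho <= -rho_threshold)[0]  # falls from hOc1 to FG4
    return ascending, descending

def gradient_vs_anatomy(expression, gene_idx, anatomical_measure):
    """Correlate a gene set's mean expression with an anatomical property
    (e.g., cortical thickness or myelination) across areas."""
    mean_expr = expression[gene_idx].mean(axis=0)
    return spearmanr(mean_expr, anatomical_measure).correlation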


2021 ◽  
Author(s):  
Matthijs N. oude Lohuis ◽  
Alexis Cerván Cantón ◽  
Cyriel M. A. Pennartz ◽  
Umberto Olcese

Over the past few years, the various areas that surround the primary visual cortex in the mouse have been associated with many functions, ranging from higher-order visual processing to decision making. Recently, some studies have shown that higher-order visual areas influence the activity of the primary visual cortex, refining its processing capabilities. Here we studied how in vivo optogenetic inactivation of two higher-order visual areas with different functional properties affects responses evoked by moving bars in the primary visual cortex. In contrast with the prevailing view, our results demonstrate that distinct higher-order visual areas similarly modulate early visual processing. In particular, these areas broaden stimulus responsiveness in the primary visual cortex by amplifying sensory-evoked responses for stimuli not moving along the orientation preferred by individual neurons. Thus, feedback from higher-order visual areas amplifies V1 responses to non-preferred stimuli, which may aid their detection.
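One way the reported broadening of stimulus responsiveness could be quantified is sketched below on hypothetical trial-averaged firing rates: compare each neuron's control versus inactivation responses at its preferred and non-preferred directions; feedback that amplifies responses to non-preferred stimuli yields a larger control/inactivation gain away from the preferred direction. The array shapes and the amplification index are assumptions, not the authors' analysis code.

import numpy as np

def amplification_index(control, inactivated, preferred_idx):
    """control, inactivated: (n_neurons x n_directions) mean evoked rates
    (assumed positive) with and without higher-order-area inactivation;
    preferred_idx: preferred direction index per neuron. Returns the mean
    control/inactivation gain at preferred vs. non-preferred directions."""
    n_neurons, n_dirs = control.shape
    pref_gain, nonpref_gain = [], []
    for i in range(n_neurons):
        mask = np.ones(n_dirs, bool)
        mask[preferred_idx[i]] = False
        pref_gain.append(control[i, preferred_idx[i]]
                         / inactivated[i, preferred_idx[i]])
        nonpref_gain.append(control[i, mask].mean()
                            / inactivated[i, mask].mean())
    # Feedback that broadens responsiveness yields a larger gain for
    # non-preferred than for preferred directions.
    return float(np.mean(pref_gain)), float(np.mean(nonpref_gain))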

