Relative sensitivity to low- vs. high-level visual properties in face-sensitive regions of the human ventral occipito-temporal cortex: evidence from intra-cerebral recordings

2015 ◽  
Vol 15 (12) ◽  
pp. 751
Author(s):  
Joan Liu-Shuang ◽  
Jacques Jonas ◽  
Justin Ales ◽  
Anthony Norcia ◽  
Louis Maillard ◽  
...  
Neuron ◽  
2020 ◽  
Author(s):  
Amarender R. Bogadhi ◽  
Leor N. Katz ◽  
Anil Bollimunta ◽  
David A. Leopold ◽  
Richard J. Krauzlis

Abstract
The evolution of the primate brain is marked by a dramatic increase in the number of neocortical areas that process visual information [1]. This cortical expansion supports two hallmarks of high-level primate vision – the ability to selectively attend to particular visual features [2] and the ability to recognize a seemingly limitless number of complex visual objects [3]. Given their prominent roles in high-level vision for primates, it is commonly assumed that these cortical processes supersede the earlier versions of these functions accomplished by the evolutionarily older brain structures that lie beneath the cortex. Contrary to this view, here we show that the superior colliculus (SC), a midbrain structure conserved across all vertebrates [4], is necessary for the normal expression of attention-related modulation and object selectivity in a newly identified region of macaque temporal cortex. Using a combination of psychophysics, causal perturbations and fMRI, we identified a localized region in the temporal cortex that is functionally dependent on the SC. Targeted electrophysiological recordings in this cortical region revealed neurons with strong attention-related modulation that was markedly reduced during attention deficits caused by SC inactivation. Many of these neurons also exhibited selectivity for particular visual objects, and this selectivity was also reduced during SC inactivation. Thus, the SC exerts a causal influence on high-level visual processing in cortex at a surprisingly late stage where attention and object selectivity converge, perhaps determined by the elemental forms of perceptual processing the SC has supported since before there was a neocortex.


2017 ◽  
Author(s):  
Susan G Wardle ◽  
Kiley Seymour ◽  
Jessica Taubert

Abstract
The neural mechanisms underlying face and object recognition are understood to originate in ventral occipital-temporal cortex. A key feature of the functional architecture of the visual ventral pathway is its category-selectivity, yet it is unclear how category-selective regions process ambiguous visual input that violates category boundaries. One example is the spontaneous misperception of faces in inanimate objects, such as the Man in the Moon, in which an object belongs to more than one category and face perception is divorced from its usual diagnostic visual features. We used fMRI to investigate the representation of illusory faces in category-selective regions. The perception of illusory faces was decodable from activation patterns in the fusiform face area (FFA) and lateral occipital complex (LOC), but not from other visual areas. Further, activity in FFA was strongly modulated by the perception of illusory faces, such that even objects with vastly different visual features were represented similarly if all images contained an illusory face. The results show that the FFA is broadly tuned for face detection, not finely tuned to the homogeneous visual properties that typically distinguish faces from other objects. A complete understanding of high-level vision will require explanation of the mechanisms underlying natural errors of face detection.
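The pattern-decoding logic behind results like "the perception of illusory faces was decodable from activation patterns" can be illustrated with a minimal sketch. The toy "voxel" patterns, the leave-one-out nearest-centroid rule, and all numbers below are hypothetical stand-ins, not the authors' actual pipeline:

```python
from statistics import mean

def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

def loo_decode(patterns, labels):
    """Leave-one-out nearest-centroid decoding on multivoxel patterns:
    classify each held-out pattern by its correlation with class centroids
    built from the remaining trials; return decoding accuracy."""
    correct = 0
    for i, (p, lab) in enumerate(zip(patterns, labels)):
        centroids = {}
        for c in set(labels):
            rows = [patterns[j] for j in range(len(patterns))
                    if j != i and labels[j] == c]
            centroids[c] = [mean(col) for col in zip(*rows)]
        guess = max(centroids, key=lambda c: correlation(p, centroids[c]))
        correct += (guess == lab)
    return correct / len(patterns)

# Toy patterns: objects perceived as containing an illusory face vs. not
face = [[1.0, 0.2, 0.9, 0.1], [0.9, 0.3, 1.1, 0.0], [1.1, 0.1, 0.8, 0.2]]
nonface = [[0.1, 1.0, 0.2, 0.9], [0.0, 0.9, 0.3, 1.0], [0.2, 1.1, 0.1, 0.8]]
acc = loo_decode(face + nonface, ["face"] * 3 + ["object"] * 3)
print(acc)  # cleanly separable toy data decode perfectly
```

Above-chance accuracy on such held-out trials is what licenses the claim that a region's activity patterns carry category information.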


2019 ◽  
Vol 30 (3) ◽  
pp. 942-951 ◽  
Author(s):  
Lanfang Liu ◽  
Yuxuan Zhang ◽  
Qi Zhou ◽  
Douglas D Garrett ◽  
Chunming Lu ◽  
...  

Abstract Whether auditory processing of speech relies on reference to the articulatory motor information of the speaker remains elusive. Here, we addressed this issue under a two-brain framework. Functional magnetic resonance imaging was applied to record the brain activities of speakers while they told real-life stories, and later of listeners while they listened to audio recordings of these stories. Based on between-brain seed-to-voxel correlation analyses, we revealed that neural dynamics in listeners' auditory temporal cortex are temporally coupled with the dynamics in the speaker's larynx/phonation area. Moreover, the coupling response in the listener's left auditory temporal cortex follows the hierarchical organization for speech processing, with response lags in A1+, STG/STS, and MTG increasing linearly. Further, listeners showing greater coupling responses understand the speech better. When comprehension fails, such interbrain auditory-articulation coupling largely vanishes. These findings suggest that a listener's auditory system and a speaker's articulatory system are inherently aligned during naturalistic verbal interaction, and that this alignment is associated with high-level information transfer from the speaker to the listener. Our study provides reliable evidence that reference to the speaker's articulatory motor information facilitates speech comprehension in naturalistic settings.
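The between-brain coupling analysis described above boils down to correlating a speaker time course with a lag-shifted listener time course. A minimal sketch, with made-up signals and a plain Pearson correlation standing in for the seed-to-voxel fMRI analysis:

```python
from statistics import mean

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

def lagged_coupling(speaker, listener, max_lag):
    """Correlate the speaker signal with the listener signal shifted by
    each lag (positive lag = listener activity follows the speaker's)."""
    out = {}
    for lag in range(max_lag + 1):
        n = len(speaker) - lag
        out[lag] = corr(speaker[:n], listener[lag:lag + n])
    return out

# Toy time courses: the listener signal echoes the speaker 2 samples later
speaker = [0, 1, 0, -1, 0, 1, 0, -1, 0, 1, 0, -1]
listener = [0, 0] + speaker[:-2]

coupling = lagged_coupling(speaker, listener, max_lag=3)
best = max(coupling, key=coupling.get)
print(best)  # coupling peaks at lag 2, the built-in delay
```

The study's finding of linearly increasing lags across A1+, STG/STS, and MTG corresponds, in this sketch, to the peak lag growing as one moves up the processing hierarchy.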


2016 ◽  
Vol 28 (5) ◽  
pp. 680-692 ◽  
Author(s):  
Daria Proklova ◽  
Daniel Kaiser ◽  
Marius V. Peelen

Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake–rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate–inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.
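The "regressing out visual dissimilarity" step in the representational similarity analysis above can be sketched as a partial correlation: residualize neural dissimilarity on visual dissimilarity, then correlate the residuals with categorical dissimilarity. The dissimilarity values below are invented for illustration and stand in for the pairwise entries of the actual representational dissimilarity matrices:

```python
from statistics import mean

def corr(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

def residualize(y, x):
    """Residuals of y after simple linear regression on x."""
    mx, my = mean(x), mean(y)
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return [yi - (my + beta * (xi - mx)) for xi, yi in zip(x, y)]

# Toy pairwise dissimilarities for six object pairs
visual      = [0.2, 0.4, 0.3, 0.8, 0.7, 0.9]   # visual-search dissimilarity
categorical = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # same vs. different category
# Neural dissimilarity driven by both visual and categorical structure
neural = [v + 0.5 * c for v, c in zip(visual, categorical)]

partial = corr(residualize(neural, visual), categorical)
print(round(partial, 2))  # categorical structure survives removal of visual
```

A reliably positive value after residualization is the kind of evidence the authors use to argue that category information is not fully reducible to shape or texture.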


2018 ◽  
Author(s):  
Anthony Stigliani ◽  
Brianna Jeska ◽  
Kalanit Grill-Spector

ABSTRACT
How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not to sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions respond to both sustained and transient components of the visual input. Responses to sustained stimuli exhibit adaptation, whereas responses to transient stimuli are surprisingly larger for stimulus offsets than onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations.
AUTHOR SUMMARY
How does the brain encode the timing of our visual experience? Using functional magnetic resonance imaging (fMRI) and a temporal encoding model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with stimulus onsets and offsets but not the unchanging aspects of the visual input. That is, they compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components, with the former exhibiting adaptation. Surprisingly, in these ventral regions responses to stimulus offsets were larger than to onsets. We suggest that the former may reflect a memory trace of the stimulus when it is no longer visible, and the latter may reflect rapid processing of new items at stimulus onset. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
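The sustained/transient channel distinction can be caricatured in a few lines. This toy model (a binary stimulus, absolute frame-to-frame change as the transient channel, and free channel weights) illustrates the idea only; it is not the millisecond-precision encoding model used in the study:

```python
def two_channel_response(stimulus, w_sustained, w_transient):
    """Toy two-channel model: the sustained channel tracks whether the
    stimulus is on; the transient channel fires at onsets and offsets,
    computed as the absolute frame-to-frame change in the stimulus."""
    sustained = stimulus[:]
    transient = [abs(stimulus[t] - stimulus[t - 1]) if t else abs(stimulus[0])
                 for t in range(len(stimulus))]
    return [w_sustained * s + w_transient * tr
            for s, tr in zip(sustained, transient)]

# A stimulus that turns on for four frames, then off
stim = [0, 0, 1, 1, 1, 1, 0, 0]
# Lateral-like region: transient channel only
lateral = two_channel_response(stim, w_sustained=0.0, w_transient=1.0)
# Ventral-like region: both channels contribute
ventral = two_channel_response(stim, w_sustained=1.0, w_transient=1.0)
print(lateral)  # responds only at the onset (t=2) and offset (t=6)
print(ventral)  # responds throughout the stimulus, with an onset peak
```

The adaptation of sustained responses and the offset-greater-than-onset asymmetry reported in the study would require extra terms (e.g., decay and separate onset/offset weights) beyond this sketch.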


2019 ◽  
Author(s):  
Giovanni M. Di Liberto ◽  
Claire Pelofi ◽  
Roberta Bianco ◽  
Prachi Patel ◽  
Ashesh D. Mehta ◽  
...  

Summary
Human engagement in music rests on underlying elements such as the listeners' cultural background and general interest in music, all of which shape the way music is processed in the brain and perceived. Crucially, these factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing of music. Here we recorded electroencephalographic and electrocorticographic brain responses as participants listened to Bach melodies. We assessed the relative contributions of the acoustic versus melodic components of the music to the neural signal. Acoustic features included the envelope and its derivative. Melodic features included information on melodic progressions (pitch) and their tempo (onsets), which were extracted from a Markov model predicting the next note based on a corpus of Western music and the preceding proximal musical context. We related the music to brain activity with a linear temporal response function and demonstrated that cortical responses to music encode melodic expectations. Specifically, individual-subject neural signals were better predicted by a combination of acoustic and melodic expectation features than by either alone. This effect was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus. Finally, expectations of the pitch and onset time of musical notes exerted independent cortical effects, and these influences were modulated by the listeners' musical expertise. Overall, this study demonstrates how the interplay of experimental and theoretical approaches can yield novel insights into the cortical encoding of melodic expectations.
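The melodic-expectation features above come from a Markov model trained on a music corpus; the core quantity, the surprisal of the next note given the preceding context, can be sketched with a toy bigram model. The corpus, pitch coding, and smoothing scheme here are invented for illustration and differ from the variable-order model used in such work:

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus_melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    counts = defaultdict(Counter)
    for melody in corpus_melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def surprisal(counts, prev, nxt, alpha=1.0, vocab_size=12):
    """-log2 P(next | prev), with add-alpha smoothing over a 12-pitch-class
    vocabulary so unseen transitions get nonzero probability."""
    c = counts[prev]
    p = (c[nxt] + alpha) / (sum(c.values()) + alpha * vocab_size)
    return -math.log2(p)

# Toy corpus of pitch sequences (values stand in for pitch classes)
corpus = [[0, 2, 4, 5, 7], [0, 2, 4, 2, 0], [7, 5, 4, 2, 0]]
model = train_bigram(corpus)

# An expected continuation (0 -> 2, seen in the corpus) is less surprising
# than an unattested one (0 -> 11)
print(surprisal(model, 0, 2) < surprisal(model, 0, 11))  # True
```

Note-by-note surprisal values like these are what get regressed against the neural signal via the temporal response function.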


PeerJ ◽  
2016 ◽  
Vol 4 ◽  
pp. e1565 ◽  
Author(s):  
Pieter Moors ◽  
Johan Wagemans ◽  
Lee de-Wit

The extent to which perceptually suppressed face stimuli are still processed has been extensively studied using the continuous flash suppression (CFS) paradigm. Studies that rely on breaking CFS (b-CFS), in which the time it takes for an initially suppressed stimulus to become detectable is measured, have provided evidence for relatively complex processing of invisible face stimuli. In contrast, adaptation and neuroimaging studies have shown that perceptually suppressed faces are only processed for a limited set of features, such as their general shape. In this study, we asked whether perceptually suppressed face stimuli presented in their commonly experienced configuration would break suppression faster than when presented in an uncommonly experienced configuration. This study was motivated by a recent neuroimaging study showing that commonly experienced face configurations are more strongly represented in the fusiform face area. Our findings revealed that faces presented in commonly experienced configurations indeed broke suppression faster, yet this effect did not interact with face inversion, suggesting that, in a b-CFS context, perceptually suppressed faces are potentially not processed by specialized (high-level) face processing mechanisms. Rather, our pattern of results is consistent with an interpretation based on the processing of more basic visual properties such as convexity.


2020 ◽  
Author(s):  
Karola Schlegelmilch ◽  
Annie E. Wertz

Visual processing of a natural environment occurs quickly and effortlessly. Yet little is known about how young children visually categorize naturalistic structures, since their perceptual abilities are still developing. We addressed this question by asking 76 children (age: 4.1-6.1 years) and 72 adults (age: 18-50 years) to first sort cards with greyscale images depicting vegetation, man-made artifacts, and non-living natural elements (e.g., stones) into groups according to visual similarity. Then, they were asked to choose the images' superordinate categories. We analyzed the relevance of different visual properties to the decisions of the participant groups. Children were well able to interpret complex visual structures. However, children relied on fewer visual properties and, in general, were less likely than adults to base their categorization decisions on properties that afforded the analysis of detailed visual information, suggesting that immaturities of the still-developing visual system affected categorization. Moreover, when sorting according to visual similarity, both groups attended to the images' assumed superordinate categories—in particular to vegetation—in addition to visual properties. When controlling for overall performance differences, children showed a higher relative sensitivity to vegetation than adults did in the classification task. Taken together, these findings add to the sparse literature on the role of developing perceptual abilities in processing naturalistic visual input.

