Sensitivity to Faces with Typical and Atypical Part Configurations within Regions of the Face-processing Network: An fMRI Study

2018 · Vol 30 (7) · pp. 963-972 · Author(s): Andrew D. Engell, Na Yeon Kim, Gregory McCarthy

Perception of faces has been shown to engage a domain-specific set of brain regions, including the occipital face area (OFA) and the fusiform face area (FFA). It is commonly held that the OFA is responsible for the detection of faces in the environment, whereas the FFA is responsible for processing the identity of the face. However, an alternative model posits that the FFA is responsible for face detection and subsequently recruits the OFA to analyze the face parts in the service of identification. An essential prediction of the former model is that the OFA is not sensitive to the arrangement of internal face parts. In the current fMRI study, we test the sensitivity of the OFA and FFA to the configuration of face parts. Participants were shown faces in which the internal parts were presented in a typical configuration (two eyes above a nose above a mouth) or in an atypical configuration (the locations of individual parts were shuffled within the face outline). Perception of the atypical faces evoked a significantly larger response than typical faces in the OFA and in a wide swath of the surrounding posterior occipitotemporal cortices. Surprisingly, typical faces did not evoke a significantly larger response than atypical faces anywhere in the brain, including the FFA (although some subthreshold differences were observed). We propose that face processing in the FFA results in inhibitory sculpting of activation in the OFA, which accounts for this region's weaker response to typical than to atypical configurations.
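A minimal sketch of the kind of voxelwise contrast this design implies (atypical > typical configuration), using synthetic data and placeholder regressors rather than the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 200 volumes, 500 voxels, cell-means design with one
# regressor per condition (typical vs. atypical face blocks). Real regressors
# would be HRF-convolved; these are placeholders.
n_tr, n_vox = 200, 500
typical = rng.binomial(1, 0.5, n_tr).astype(float)
atypical = 1.0 - typical
X = np.column_stack([typical, atypical])
Y = rng.standard_normal((n_tr, n_vox))               # placeholder BOLD time series

# Ordinary least squares fit per voxel.
beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)

# Contrast: atypical > typical, expressed as a t-statistic per voxel.
c = np.array([-1.0, 1.0])
resid = Y - X @ beta
dof = n_tr - np.linalg.matrix_rank(X)
sigma2 = (resid ** 2).sum(axis=0) / dof
c_var = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * c_var)
print(t_map.shape)                                    # one t-value per voxel
```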

2019 · Vol 31 (10) · pp. 1573-1588 · Author(s): Eelke de Vries, Daniel Baldauf

We recorded magnetoencephalography using a neural entrainment paradigm with compound face stimuli that allowed for entraining the processing of various parts of a face (eyes, mouth) as well as changes in facial identity. Our magnetic resonance image-guided magnetoencephalography analyses revealed that different subnodes of the human face processing network were entrained differentially according to their functional specialization. Whereas the occipital face area was most responsive to the rate at which face parts (e.g., the mouth) changed, and face patches in the STS were mostly entrained by rhythmic changes in the eye region, the fusiform face area was the only subregion that was strongly entrained by the rhythmic changes in facial identity. Furthermore, top–down attention to the mouth, eyes, or identity of the face selectively modulated the neural processing in the respective area (i.e., occipital face area, STS, or fusiform face area), resembling behavioral cue validity effects observed in the participants' RT and detection rate data. Our results show the attentional weighting of the visual processing of different aspects and dimensions of a single face object, at various stages of the involved visual processing hierarchy.
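A minimal sketch of how entrainment at a tagged frequency can be quantified from a single ROI time course; the tagging rates, sampling rate, and signal below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ROI time course sampled at 1000 Hz for 60 s; in a frequency-tagging
# (entrainment) design, each stimulus dimension changes at its own rate.
fs, dur = 1000, 60
t = np.arange(0, dur, 1 / fs)
tag_freqs = {"mouth": 1.5, "eyes": 2.0, "identity": 0.5}   # hypothetical tagging rates
signal = 0.8 * np.sin(2 * np.pi * tag_freqs["identity"] * t) + rng.standard_normal(t.size)

# Power spectrum of the ROI signal.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Entrainment strength at each tagged frequency relative to neighboring bins (SNR).
for label, f in tag_freqs.items():
    idx = np.argmin(np.abs(freqs - f))
    neighbors = np.r_[spectrum[idx - 10:idx - 1], spectrum[idx + 2:idx + 11]]
    print(label, spectrum[idx] / neighbors.mean())
```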


2021 · Vol 12 (1) · Author(s): N. Apurva Ratan Murty, Pouya Bashivan, Alex Abate, James J. DiCarlo, Nancy Kanwisher

Abstract Cortical regions apparently selective to faces, places, and bodies have provided important evidence for domain-specific theories of human cognition, development, and evolution. But claims of category selectivity are not quantitatively precise and remain vulnerable to empirical refutation. Here we develop artificial neural network-based encoding models that accurately predict the response to novel images in the fusiform face area, parahippocampal place area, and extrastriate body area, outperforming descriptive models and experts. We use these models to subject claims of category selectivity to strong tests, by screening for and synthesizing images predicted to produce high responses. We find that these high-response-predicted images are all unambiguous members of the hypothesized preferred category for each region. These results provide accurate, image-computable encoding models of each category-selective region, strengthen evidence for domain specificity in the brain, and point the way for future research characterizing the functional organization of the brain with unprecedented computational precision.
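A minimal sketch of an image-computable encoding model of the kind described, with synthetic features standing in for activations from a pretrained artificial neural network and a cross-validated ridge readout to voxel responses (not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-ins: features from a pretrained ANN layer (n_images x n_units)
# and measured fMRI responses in a region of interest (n_images x n_voxels).
n_images, n_units, n_vox = 300, 512, 100
features = rng.standard_normal((n_images, n_units))
true_weights = rng.standard_normal((n_units, n_vox)) * 0.1
responses = features @ true_weights + rng.standard_normal((n_images, n_vox))

X_train, X_test, y_train, y_test = train_test_split(
    features, responses, test_size=0.25, random_state=0
)

# Linear readout from ANN features to voxel responses (cross-validated ridge).
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
pred = model.predict(X_test)

# Prediction accuracy per voxel: correlation between predicted and held-out responses.
r = [np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(n_vox)]
print(f"median held-out r = {np.median(r):.2f}")
```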


2020 · Vol 30 (11) · pp. 6051-6068 · Author(s): Adolfo M García, Eugenia Hesse, Agustina Birba, Federico Adolfi, Ezequiel Mikulan, ...

Abstract In construing meaning, the brain recruits multimodal (conceptual) systems and embodied (modality-specific) mechanisms. Yet, no consensus exists on how crucial the latter are for the inception of semantic distinctions. To address this issue, we combined electroencephalographic (EEG) and intracranial EEG (iEEG) recordings to examine when nouns denoting facial body parts (FBPs) and non-FBPs are discriminated in face-processing and multimodal networks. First, FBP words increased N170 amplitude (a hallmark of early facial processing). Second, they triggered fast (~100 ms) activity boosts within the face-processing network, alongside later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from face-processing hubs allowed decoding of ~80% of items before 200 ms, whereas classification based on multimodal-network activity surpassed ~70% only after 250 ms. Finally, EEG and iEEG connectivity between both networks proved greater in early (0–200 ms) than in later (200–400 ms) windows. Collectively, our findings indicate that, at least for some lexico-semantic categories, meaning is construed through fast reenactments of modality-specific experience.
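A minimal sketch of time-resolved decoding of word category from multichannel activity in sliding windows, with synthetic trials and an injected early effect; the window sizes and classifier are illustrative, not the authors' settings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic stand-in: trials x channels x time samples (e.g., -100 to 400 ms at 1 kHz).
n_trials, n_chan, n_samples = 120, 32, 500
data = rng.standard_normal((n_trials, n_chan, n_samples))
labels = rng.integers(0, 2, n_trials)          # 0 = non-FBP noun, 1 = FBP noun
data[labels == 1, :, 150:250] += 0.3           # inject a weak early "effect"

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Decode category in 50-sample windows stepped across the epoch.
win, step = 50, 25
for start in range(0, n_samples - win + 1, step):
    X = data[:, :, start:start + win].mean(axis=2)      # average within the window
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"window {start}-{start + win} samples: accuracy = {acc:.2f}")
```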


2010 · Vol 22 (1) · pp. 203-211 · Author(s): Jia Liu, Alison Harris, Nancy Kanwisher

fMRI studies have reported three regions in human ventral visual cortex that respond selectively to faces: the occipital face area (OFA), the fusiform face area (FFA), and a face-selective region in the superior temporal sulcus (fSTS). Here, we asked whether these areas respond to two first-order aspects of the face argued to be important for face perception: face parts (eyes, nose, and mouth) and the T-shaped spatial configuration of these parts. Specifically, we measured the magnitude of response in these areas to stimuli that (i) either contained real face parts or did not, and (ii) either had veridical face configurations or did not. The OFA and the fSTS were sensitive only to the presence of real face parts, not to the correct configuration of those parts, whereas the FFA was sensitive to both face parts and face configuration. Further, only in the FFA was the response to configuration and part information correlated across voxels, suggesting that the FFA contains a unified representation that includes both kinds of information. In combination with prior results from fMRI, TMS, MEG, and patient studies, our data illuminate the functional division of labor in the OFA, FFA, and fSTS.
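A minimal sketch of the two effects described (face parts and face configuration) computed per voxel from condition means and then correlated across voxels within an ROI; the condition responses below are synthetic placeholders:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)

# Synthetic per-voxel mean responses in one ROI for the 2 x 2 design:
# (real vs. fake parts) x (veridical vs. scrambled configuration).
n_vox = 400
resp = {
    ("real", "veridical"): rng.standard_normal(n_vox) + 1.0,
    ("real", "scrambled"): rng.standard_normal(n_vox) + 0.7,
    ("fake", "veridical"): rng.standard_normal(n_vox) + 0.5,
    ("fake", "scrambled"): rng.standard_normal(n_vox) + 0.4,
}

# Per-voxel effect of face parts (real minus fake), collapsed over configuration.
part_effect = (resp[("real", "veridical")] + resp[("real", "scrambled")]) / 2 - \
              (resp[("fake", "veridical")] + resp[("fake", "scrambled")]) / 2

# Per-voxel effect of configuration (veridical minus scrambled), collapsed over parts.
config_effect = (resp[("real", "veridical")] + resp[("fake", "veridical")]) / 2 - \
                (resp[("real", "scrambled")] + resp[("fake", "scrambled")]) / 2

# Are the two effects carried by the same voxels? Correlate them across voxels.
r, p = pearsonr(part_effect, config_effect)
print(f"r = {r:.2f}, p = {p:.3f}")
```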


2018 · Vol 119 (6) · pp. 2256-2264 · Author(s): Zarrar Shehzad, Gregory McCarthy

Whether category information is discretely localized or represented widely in the brain remains a contentious issue. Initial functional MRI studies supported the localizationist perspective that category information is represented in discrete brain regions. More recent fMRI studies using machine learning pattern classification techniques provide evidence for widespread distributed representations. However, these latter studies have not typically accounted for shared information. Here, we find strong support for distributed representations when brain regions are considered separately. However, localized representations are revealed by using analytical methods that separate unique from shared information among brain regions. The distributed nature of shared information and the localized nature of unique information suggest that brain connectivity may encourage the spreading of information, but that category-specific computations are carried out in distinct domain-specific regions. NEW & NOTEWORTHY Whether visual category information is localized in unique domain-specific brain regions or distributed across many domain-general brain regions is hotly contested. We resolve this debate by using multivariate analyses to parse functional MRI signals from different brain regions into unique and shared variance. Our findings support elements of both models and show that information is initially localized and then shared among other regions, leading to the distributed representations that are observed.
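A minimal sketch of regression-based variance partitioning, separating the variance a region explains uniquely from the variance shared with other regions; the regions, target, and data are synthetic stand-ins, and a continuous target is used for simplicity rather than the study's category classification:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Synthetic stand-ins: a target variable predicted from activity patterns of three regions.
n_trials = 300
regions = {name: rng.standard_normal((n_trials, 20)) for name in ["FFA", "PPA", "LOC"]}
y = regions["FFA"][:, 0] + 0.5 * regions["PPA"][:, 0] + rng.standard_normal(n_trials)

def r2(feature_blocks):
    """Cross-validated R^2 of a linear model built from the given feature blocks."""
    X = np.hstack(feature_blocks)
    return cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()

full = r2(list(regions.values()))
for name in regions:
    reduced = r2([v for k, v in regions.items() if k != name])
    unique = full - reduced          # variance explained only by this region
    print(f"{name}: unique R^2 = {unique:.3f}")
print(f"full-model R^2 = {full:.3f} (the remainder is shared across regions)")
```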


2019 · Vol 30 (5) · pp. 2986-2996 · Author(s): Xue Tian, Ruosi Wang, Yuanfang Zhao, Zonglei Zhen, Yiying Song, ...

Abstract Previous studies have shown that individuals with developmental prosopagnosia (DP) show specific deficits in face processing. However, the mechanism underlying these deficits remains largely unknown. One hypothesis suggests that DP shares the same mechanism as the normal population, although face processing is disproportionately impaired. An alternative hypothesis emphasizes a qualitatively different mechanism by which DP individuals process faces. To test these hypotheses, we asked DP and normal individuals to perceive faces and objects. Instead of averaging accuracy across stimulus items, we used the discrimination accuracy for each item to construct a multi-item discriminability pattern. We found that DP's discriminability pattern was less similar to that of normal individuals when perceiving faces than when perceiving objects, suggesting that DP involves a qualitatively different mechanism for representing faces. A functional magnetic resonance imaging study was conducted to reveal the neural basis of this difference and found that multivoxel activation patterns for faces in the right fusiform face area and occipital face area of DP individuals deviated from the mean activation pattern of normal individuals. Further, the face representation was more heterogeneous in DP, suggesting that the deficits of DP may arise from multiple sources. In short, our study provides the first direct evidence that DP individuals process faces qualitatively differently from the normal population.
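A minimal sketch of the multi-item discriminability analysis: each DP individual's item-wise accuracy pattern is correlated with the mean pattern of the control group, separately for faces and objects; the accuracies below are synthetic:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)

# Synthetic per-item discrimination accuracies: subjects x items,
# separately for faces and objects, and for DP vs. control groups.
n_items = 60
controls_faces = np.clip(rng.normal(0.85, 0.05, (20, n_items)), 0, 1)
dp_faces = np.clip(rng.normal(0.70, 0.10, (12, n_items)), 0, 1)
controls_objects = np.clip(rng.normal(0.90, 0.04, (20, n_items)), 0, 1)
dp_objects = np.clip(rng.normal(0.88, 0.05, (12, n_items)), 0, 1)

def pattern_similarity(group, reference):
    """Correlate each subject's item-wise accuracy pattern with the reference group mean."""
    ref = reference.mean(axis=0)
    return np.array([pearsonr(subj, ref)[0] for subj in group])

face_sim = pattern_similarity(dp_faces, controls_faces)
object_sim = pattern_similarity(dp_objects, controls_objects)
print(f"DP-to-control similarity: faces {face_sim.mean():.2f}, objects {object_sim.mean():.2f}")
```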


Neuron · 2011 · Vol 70 (2) · pp. 352-362 · Author(s): Shih-Pi Ku, Andreas S. Tolias, Nikos K. Logothetis, Jozien Goense

2021 · Vol 14 · Author(s): Dongya Wu, Xin Li, Jun Feng

Brain connectivity plays an important role in determining a brain region's function. Previous researchers proposed that a region's function is characterized by its input and output connectivity profiles. Following this proposal, numerous studies have investigated the relationship between connectivity and function. However, this proposal uses only direct connectivity profiles and is therefore limited in explaining individual differences in a region's function. To overcome this limitation, we proposed that a brain region's function is characterized by that region's multi-hop connectivity profile. To test this proposal, we used multi-hop functional connectivity to predict individual face activation of the right fusiform face area (rFFA) via a multi-layer graph neural network and showed that prediction performance is substantially improved. Results also indicated that a two-layer graph neural network best characterizes the rFFA's face activation and revealed a hierarchical network for the face processing of the rFFA.
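A minimal sketch of a two-layer graph convolution forward pass over a connectivity graph, illustrating how stacking layers integrates multi-hop neighborhoods; the connectivity matrix, features, and untrained weights are synthetic, and the rFFA node index is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins: a functional connectivity graph over brain regions and
# one feature vector per region (e.g., a connectivity fingerprint).
n_regions, n_feat, n_hidden = 90, 16, 8
A = np.abs(rng.standard_normal((n_regions, n_regions)))
A = (A + A.T) / 2                                   # symmetric connectivity matrix
A_hat = A + np.eye(n_regions)                       # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt            # symmetric normalization

X = rng.standard_normal((n_regions, n_feat))
W1 = rng.standard_normal((n_feat, n_hidden)) * 0.1  # layer weights (untrained here)
W2 = rng.standard_normal((n_hidden, 1)) * 0.1

# Each graph-convolution layer aggregates information one hop further,
# so two layers integrate information from two-hop neighborhoods.
H1 = np.maximum(A_norm @ X @ W1, 0)                 # layer 1 + ReLU
out = A_norm @ H1 @ W2                              # layer 2: per-region scalar output

rFFA_index = 42                                     # hypothetical index of the rFFA node
print(f"predicted rFFA activation (untrained weights): {out[rFFA_index, 0]:.3f}")
```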


2016 · Author(s): J. Swaroop Guntupalli, Kelsey G. Wheeler, M. Ida Gobbini

Abstract Neural models of a distributed system for face perception implicate a network of regions in the ventral visual stream for recognition of identity. Here, we report an fMRI neural decoding study in humans showing that this pathway culminates in a right inferior frontal cortex face area (rIFFA) with a representation of individual identities that has been disentangled from variable visual features in different images of the same person. At earlier stages in the pathway, processing begins in early visual cortex and the occipital face area (OFA) with representations of head view that are invariant across identities, and proceeds to an intermediate level of representation in the fusiform face area (FFA) in which identity is emerging but still entangled with head view. Three-dimensional, view-invariant representation of identities in the rIFFA may be the critical link to the extended system for face perception, affording activation of person knowledge and emotional responses to familiar faces. Significance Statement: In this fMRI decoding experiment, we address how face images are processed in successive stages to disentangle the view-invariant representation of identity from variable visual features. Representations in early visual cortex and the occipital face area distinguish head views, invariant across identities. An intermediate level of representation in the fusiform face area distinguishes identities but is still entangled with head view. The face-processing pathway culminates in the right inferior frontal area with a representation of view-independent identity. This paper clarifies the homologies between the human and macaque face-processing systems. The findings further show, however, the importance of the inferior frontal cortex in decoding face identity, a result that has not yet been reported in the monkey literature.
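A minimal sketch of the cross-view decoding logic that tests for view-invariant identity representations: train an identity classifier on patterns from some head views and test it on a held-out view; the patterns and dimensions are synthetic:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)

# Synthetic stand-in: multivoxel patterns for 4 identities x 3 head views x 20 trials.
n_ids, n_views, n_trials, n_vox = 4, 3, 20, 150
identity_codes = rng.standard_normal((n_ids, n_vox))           # view-invariant component
view_codes = rng.standard_normal((n_views, n_vox))             # view-specific component

patterns, ids, views = [], [], []
for i in range(n_ids):
    for v in range(n_views):
        for _ in range(n_trials):
            patterns.append(identity_codes[i] + view_codes[v]
                            + rng.standard_normal(n_vox))
            ids.append(i)
            views.append(v)
patterns, ids, views = np.array(patterns), np.array(ids), np.array(views)

# Cross-view decoding: train on views 0 and 1, test identity on the held-out view 2.
train, test = views != 2, views == 2
clf = LinearSVC(max_iter=5000).fit(patterns[train], ids[train])
acc = (clf.predict(patterns[test]) == ids[test]).mean()
print(f"cross-view identity decoding accuracy: {acc:.2f} (chance = {1 / n_ids:.2f})")
```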

