What Evidence Supports Special Processing for Faces? A Cautionary Tale for fMRI Interpretation

2013 ◽  
Vol 25 (11) ◽  
pp. 1777-1793 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Garrison W. Cottrell

We trained a neurocomputational model on six categories of photographic images that were used in a previous fMRI study of object and face processing. Multivariate pattern analyses of the activations elicited in the object-encoding layer of the model yielded results consistent with two previous, contradictory fMRI studies. Findings from one of the studies [Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430, 2001] were interpreted as evidence for the object-form topography model. Findings from the other study [Spiridon, M., & Kanwisher, N. How distributed is visual category information in human occipito-temporal cortex? An fMRI study. Neuron, 35, 1157–1165, 2002] were interpreted as evidence for neural processing mechanisms in the fusiform face area that are specialized for faces. Because the model contains no special processing mechanism or specialized architecture for faces and yet it can reproduce the fMRI findings used to support the claim that there are specialized face-processing neurons, we argue that these fMRI results do not actually support that claim. Results from our neurocomputational model therefore constitute a cautionary tale for the interpretation of fMRI data.
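The multivariate pattern analysis at issue here (the split-half correlation method of Haxby et al., 2001) can be sketched with synthetic data. All numbers and variable names below are invented for illustration; this is not the authors' model or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_categories = 100, 6

# Synthetic category "prototypes": each of six categories evokes a
# distinct multivoxel activation pattern, measured twice with noise
# (e.g., even-numbered vs. odd-numbered scanning runs).
prototypes = rng.normal(size=(n_categories, n_voxels))
even_runs = prototypes + 0.3 * rng.normal(size=(n_categories, n_voxels))
odd_runs = prototypes + 0.3 * rng.normal(size=(n_categories, n_voxels))

# Split-half MVPA: correlate each category's pattern in the even runs
# with every category's pattern in the odd runs.
corr = np.corrcoef(even_runs, odd_runs)[:n_categories, n_categories:]

# A category is correctly "identified" when its within-category
# correlation exceeds all between-category correlations.
accuracy = np.mean(np.argmax(corr, axis=1) == np.arange(n_categories))
print(f"pattern-identification accuracy: {accuracy:.2f}")
```

Above-chance identification in such an analysis shows that category information is carried by the distributed pattern, which is the kind of result the model reproduces without any face-specific machinery.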

2015 ◽  
Author(s):  
Daniel D Dilks ◽  
Peter Cook ◽  
Samuel K Weiller ◽  
Helen E Berns ◽ 
Mark H Spivak ◽  
...  

Recent behavioral evidence suggests that dogs, like humans and monkeys, are capable of visual face recognition. But do dogs also exhibit specialized cortical face regions similar to humans and monkeys? Using functional magnetic resonance imaging (fMRI) in six dogs trained to remain motionless during scanning without restraint or sedation, we found a region in the canine temporal lobe that responded significantly more to movies of human faces than to movies of everyday objects. Next, using a new stimulus set to investigate face selectivity in this predefined candidate dog face area, we found that this region responded similarly to images of human faces and dog faces, yet significantly more to both human and dog faces than to images of objects. Such face selectivity was not found in dog primary visual cortex. Taken together, these findings: 1) provide the first evidence for a face-selective region in the temporal cortex of dogs, which cannot be explained by simple low-level visual feature extraction; 2) reveal that neural machinery dedicated to face processing is not unique to primates; and 3) may help explain dogs’ exquisite sensitivity to human social cues.
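The face-selectivity test described above is, at its core, a univariate ROI contrast: is the region's mean response to faces reliably larger than to objects? A minimal sketch with synthetic data follows; the response values are invented and this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical trial-wise mean ROI responses (percent signal change)
# for a face-selective region: faces evoke larger responses.
face_resp = rng.normal(loc=1.0, scale=0.4, size=24)    # human + dog faces
object_resp = rng.normal(loc=0.3, scale=0.4, size=24)  # everyday objects

# Face selectivity as a simple two-sample contrast.
t, p = stats.ttest_ind(face_resp, object_resp)
selectivity = face_resp.mean() - object_resp.mean()
print(f"t = {t:.2f}, p = {p:.4f}, selectivity = {selectivity:.2f} %SC")
```

A region qualifies as a candidate "face area" when this contrast is significant there but not in control regions such as primary visual cortex.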


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Timothy T Rogers ◽  
Christopher R Cox ◽  
Qihong Lu ◽  
Akihiro Shimotake ◽  
Takayuki Kikuchi ◽ 
...  

How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: information about the animacy of a depicted stimulus is distributed across ventral temporal cortex in a dynamic code with feature-like elements posteriorly and elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal ‘hub’ in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.
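The distinction between a stable, feature-like code and a rapidly changing dynamic code can be illustrated with a toy temporal-generalization analysis: train a classifier at one time point and test it at another. The simulation below is entirely synthetic (it is not the authors' model or ECoG pipeline), and the nearest-centroid classifier is evaluated without cross-validation, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_chan, n_time = 40, 20, 5

def make_data(stable):
    # Two classes of trials (e.g., animate vs. inanimate). The
    # class-discriminating axis is either fixed across time (a stable,
    # feature-like code) or redrawn at every time point (a dynamic code).
    labels = np.repeat([0, 1], n_trials // 2)
    X = 0.5 * rng.normal(size=(n_trials, n_chan, n_time))
    axis = rng.normal(size=n_chan)
    for t in range(n_time):
        if not stable:
            axis = rng.normal(size=n_chan)
        X[labels == 1, :, t] += axis
    return X, labels

def accuracy(X, labels, t_train, t_test):
    # Nearest-centroid classifier fit at one time point, evaluated at
    # another (no cross-validation -- purely illustrative).
    centroids = [X[labels == c, :, t_train].mean(axis=0) for c in (0, 1)]
    dists = np.stack([np.linalg.norm(X[:, :, t_test] - c, axis=1)
                      for c in centroids])
    return np.mean(np.argmin(dists, axis=0) == labels)

def generalization(X, labels):
    # Temporal generalization matrix: train time x test time.
    return np.array([[accuracy(X, labels, tr, te) for te in range(n_time)]
                     for tr in range(n_time)])

G_stable = generalization(*make_data(stable=True))
G_dynamic = generalization(*make_data(stable=False))
off_diag = ~np.eye(n_time, dtype=bool)
print(G_stable[off_diag].mean(), G_dynamic[off_diag].mean())
```

For the stable code the classifier transfers across time points; for the dynamic code decoding succeeds only at the trained time point, which is the signature of a code that "changes radically with stimulus processing".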


2018 ◽  
Vol 30 (7) ◽  
pp. 963-972 ◽  
Author(s):  
Andrew D. Engell ◽  
Na Yeon Kim ◽  
Gregory McCarthy

Perception of faces has been shown to engage a domain-specific set of brain regions, including the occipital face area (OFA) and the fusiform face area (FFA). It is commonly held that the OFA is responsible for the detection of faces in the environment, whereas the FFA is responsible for processing the identity of the face. However, an alternative model posits that the FFA is responsible for face detection and subsequently recruits the OFA to analyze the face parts in the service of identification. An essential prediction of the former model is that the OFA is not sensitive to the arrangement of internal face parts. In the current fMRI study, we test the sensitivity of the OFA and FFA to the configuration of face parts. Participants were shown faces in which the internal parts were presented in a typical configuration (two eyes above a nose above a mouth) or in an atypical configuration (the locations of individual parts were shuffled within the face outline). Perception of the atypical faces evoked a significantly larger response than typical faces in the OFA and in a wide swath of the surrounding posterior occipitotemporal cortices. Surprisingly, typical faces did not evoke a significantly larger response than atypical faces anywhere in the brain, including the FFA (although some subthreshold differences were observed). We propose that face processing in the FFA results in inhibitory sculpting of activation in the OFA, which accounts for this region's weaker response to typical than to atypical configurations.


2020 ◽  
Author(s):  
D. Proklova ◽  
M.A. Goodale

Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects, and that could potentially explain the animate-inanimate distinction in the VTC, is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate-inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate-inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of an animal's ability to move and to think) is present in the parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that this effect is driven not by faces per se, or by the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC appears to treat the face as a proxy for agency, a ubiquitous feature of familiar animals.

Significance Statement: Many studies have shown that images of animals are processed differently from inanimate objects in the human brain, particularly in the ventral temporal cortex (VTC). However, which features drive this distinction remains unclear. One important feature that distinguishes many animals from inanimate objects is a face. Here, we used fMRI to test whether the animate/inanimate distinction is driven by the presence of faces. We found that the presence of faces did indeed boost activity related to animacy in the VTC. A more detailed analysis, however, revealed that it was the association between faces and other attributes, such as the capacity for self-movement and thinking, not the faces per se, that was driving the activity we observed.
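The logic of the ROI-based RSA used here can be sketched with synthetic data: build a neural representational dissimilarity matrix (RDM) from condition patterns, then ask which candidate model RDM (face presence vs. agency) it matches better. The conditions, patterns, and numbers below are invented; this is not the authors' stimulus set or pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# Six hypothetical conditions: two animals with faces, two faceless
# animals, two inanimate objects.
agency = np.array([1, 1, 1, 1, 0, 0])  # all animals are agents
faces = np.array([1, 1, 0, 0, 0, 0])   # only some animals have faces

# Synthetic ROI patterns (6 conditions x 50 voxels) organized by agency
# rather than by face presence, as the abstract's conclusion suggests.
patterns = np.outer(agency, rng.normal(size=50)) + 0.4 * rng.normal(size=(6, 50))

# Representational dissimilarity matrices (condensed vector form):
# the neural RDM plus two candidate model RDMs.
neural_rdm = pdist(patterns, metric="correlation")
agency_rdm = pdist(agency[:, None], metric="euclidean")
face_rdm = pdist(faces[:, None], metric="euclidean")

# RSA model comparison: which model RDM better matches the neural RDM?
rho_agency, _ = spearmanr(neural_rdm, agency_rdm)
rho_face, _ = spearmanr(neural_rdm, face_rdm)
print(f"agency model rho = {rho_agency:.2f}, face model rho = {rho_face:.2f}")
```

With patterns organized by agency, the agency model RDM correlates with the neural RDM far better than the face model RDM, mirroring the dissociation the study reports.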


2020 ◽  
Author(s):  
Iris I A Groen ◽  
Edward H Silson ◽  
David Pitcher ◽  
Chris I Baker

Human visual cortex contains three scene-selective regions in the lateral, medial and ventral cortex, termed the occipital place area (OPA), medial place area (MPA) and parahippocampal place area (PPA). In functional magnetic resonance imaging (fMRI), all three regions respond more strongly when viewing visual scenes than when viewing isolated objects or faces. To determine how these regions are functionally and causally connected, we applied theta-burst transcranial magnetic stimulation (TBS) to OPA and measured fMRI responses before and after stimulation. To test for stimulus category-selectivity, we presented a range of visual categories (scenes, buildings, objects, faces). To test whether any effects were specific to TBS of OPA, we employed two control conditions: Sham, with no TBS stimulation, and an active TBS control with TBS to a proximal face-selective cortical region (the occipital face area, OFA). We predicted that TBS to OPA (but not OFA) would lead to decreased responses to scenes and buildings (but not other categories) in other scene-selective cortical regions. Across both ROI and whole-volume analyses, we observed decreased responses to scenes in PPA as a result of TBS. However, these effects were neither category-specific, with decreased responses to all stimulus categories, nor limited to scene-selective regions, with decreases also observed in the face-selective fusiform face area (FFA). Furthermore, similar effects were observed with TBS to OFA, so the effects were not specific to the stimulation site in the lateral occipital cortex. Whilst these data are suggestive of a causal, but non-specific, relationship between lateral occipital and ventral temporal cortex, we discuss several factors that could have underpinned this result, such as the differences between TBS and online TMS, the role of anatomical distance between stimulated regions, and how TMS effects are operationalised. Our findings also highlight the importance of active control conditions in brain stimulation experiments for accurately assessing functional and causal connectivity between specific brain regions.


2000 ◽  
Vol 12 (supplement 2) ◽  
pp. 35-51 ◽  
Author(s):  
Alumit Ishai ◽  
Leslie G. Ungerleider ◽  
Alex Martin ◽  
James V. Haxby

Recently, using fMRI, we identified three bilateral regions in the ventral temporal cortex that responded preferentially to faces, houses, and chairs [Ishai, A., Ungerleider, L. G., Martin, A., Schouten, J. L., & Haxby, J. V. (1999). Distributed representation of objects in the human ventral visual pathway. Proceedings of the National Academy of Sciences, U.S.A., 96, 9379-9384]. Here, we report differential patterns of activation, similar to those seen in the ventral temporal cortex, in bilateral regions of the ventral occipital cortex. We also found category-related responses in the dorsal occipital cortex and in the superior temporal sulcus. Moreover, rather than activating discrete, segregated areas, each category was associated with its own differential pattern of response across a broad expanse of cortex. The distributed patterns of response were similar across tasks (passive viewing, delayed matching) and presentation formats (photographs, line drawings). We propose that the representation of objects in the ventral visual pathway, including both occipital and temporal regions, is not restricted to small, highly selective patches of cortex but, instead, is a distributed representation of information about object form. Within this distributed system, the representation of faces appears to be less extensive than the representations of nonface objects.


2019 ◽  
Author(s):  
Franziska E. Hildesheim ◽  
Isabell Debus ◽  
Roman Kessler ◽  
Ina Thome ◽  
Kristin M. Zimmermann ◽  
...  

Face processing is mediated by a distributed neural network commonly divided into a “core system” and an “extended system”. The core system consists of several, typically right-lateralized brain regions in the occipito-temporal cortex, including the occipital face area (OFA), the fusiform face area (FFA) and the posterior superior temporal sulcus (pSTS). It was recently proposed that the face processing network is initially bilateral and becomes right-specialized during the development of reading abilities, owing to competition for common neural resources between the FFA and language-related regions in the left occipito-temporal cortex (e.g., the visual word form area).

The goal of the present pilot study was to prepare the basis for a larger follow-up study assessing the ontogenetic development of the lateralization of the face processing network. More specifically, we aimed, on the one hand, to establish a functional magnetic resonance imaging (fMRI) paradigm suitable for assessing activation in the core system of face processing in young children at the single-subject level and, on the other hand, to calculate the necessary group size for the planned follow-up study.

Twelve children aged 7-9 years and ten adults completed a face localizer task that was specifically adapted for children. Our results showed that it is possible to localize the core system’s brain regions in children even at the single-subject level. We further found a trend, albeit non-significant, toward increased right-hemispheric lateralization of all three regions in adults compared with children, with the largest effect for the FFA (estimated effect size d=0.78, indicating a medium to large effect). Using these results as the basis for an informed power analysis, we estimated that an adequately powered (sensitivity 0.8) follow-up study testing developmental changes in FFA lateralization would require 18 children and 26 adults.
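A sample-size estimate of this kind can be approximated with the standard normal-approximation formula for a two-sample comparison. The sketch below assumes equal group sizes and a two-sided test, which is almost certainly simpler than the authors' informed power analysis (which yielded 18 children and 26 adults), so the numbers need not coincide exactly.

```python
from math import ceil
from scipy.stats import norm

# Normal-approximation sample size for a two-sample t test:
#   n per group = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
d = 0.78            # estimated effect size from the pilot data
alpha, power = 0.05, 0.8

z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)
n_per_group = ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)
print(f"approx. participants per group: {n_per_group}")
```

With d = 0.78 this approximation gives 26 participants per group, in the same range as the abstract's estimate for the larger group.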



