Functional selectivity for face processing in the temporal voice area of early deaf individuals

2017 ◽  
Vol 114 (31) ◽  
pp. E6437-E6446 ◽  
Author(s):  
Stefania Benetti ◽  
Markus J. van Ackeren ◽  
Giuseppe Rabini ◽  
Joshua Zonca ◽  
Valentina Foa ◽  
...  

Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.
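The connectivity findings above lend themselves to a simple ROI-level illustration. Below is a minimal sketch (not the authors' actual pipeline) of a seed-based functional-connectivity contrast between groups, assuming each subject's data has already been reduced to ROI time series; the ROI labels ("EVC", "TVA"), group sizes, and all data are synthetic placeholders.

```python
# Minimal sketch of a group contrast in seed-based functional connectivity.
# All data are synthetic; ROI names and coupling values are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def roi_connectivity(ts_a, ts_b):
    """Fisher z-transformed Pearson correlation between two ROI time series."""
    r, _ = stats.pearsonr(ts_a, ts_b)
    return np.arctanh(r)

def simulate_subject(coupling, n_timepoints=200):
    """Synthetic early-visual ("EVC") and temporal ("TVA") ROI time series."""
    evc = rng.standard_normal(n_timepoints)
    tva = coupling * evc + rng.standard_normal(n_timepoints)
    return evc, tva

# Hypothetical groups: stronger EVC-TVA coupling in the deaf group
deaf_z = [roi_connectivity(*simulate_subject(coupling=0.6)) for _ in range(15)]
hearing_z = [roi_connectivity(*simulate_subject(coupling=0.2)) for _ in range(15)]

t, p = stats.ttest_ind(deaf_z, hearing_z)
print(f"EVC-TVA connectivity, deaf vs hearing: t = {t:.2f}, p = {p:.4f}")
```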


2019 ◽  
Vol 31 (10) ◽  
pp. 1573-1588 ◽  
Author(s):  
Eelke de Vries ◽  
Daniel Baldauf

We recorded magnetoencephalography using a neural entrainment paradigm with compound face stimuli that allowed for entraining the processing of various parts of a face (eyes, mouth) as well as changes in facial identity. Our magnetic resonance image-guided magnetoencephalography analyses revealed that different subnodes of the human face processing network were entrained differentially according to their functional specialization. Whereas the occipital face area was most responsive to the rate at which face parts (e.g., the mouth) changed, and face patches in the STS were mostly entrained by rhythmic changes in the eye region, the fusiform face area was the only subregion that was strongly entrained by the rhythmic changes in facial identity. Furthermore, top–down attention to the mouth, eyes, or identity of the face selectively modulated the neural processing in the respective area (i.e., occipital face area, STS, or fusiform face area), resembling behavioral cue validity effects observed in the participants' RT and detection rate data. Our results show the attentional weighting of the visual processing of different aspects and dimensions of a single face object, at various stages of the involved visual processing hierarchy.
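As an illustration of the frequency-tagging logic behind this kind of entrainment paradigm, the sketch below quantifies entrainment as spectral power at the stimulation frequency assigned to each face dimension, relative to neighbouring frequency bins. The tagging frequencies and the sensor signal are synthetic assumptions, not the study's parameters.

```python
# Minimal frequency-tagging sketch: power at each hypothetical tagging frequency
# relative to neighbouring bins (a simple signal-to-noise ratio).
import numpy as np

fs = 1000.0                                                 # sampling rate (Hz)
tag_freqs = {"mouth": 1.5, "eyes": 2.0, "identity": 0.5}    # hypothetical rates
t = np.arange(0, 60, 1 / fs)                                # 60 s of data

# Synthetic sensor signal: an identity-rate response buried in noise
signal = 0.5 * np.sin(2 * np.pi * tag_freqs["identity"] * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for label, f in tag_freqs.items():
    idx = np.argmin(np.abs(freqs - f))
    # SNR: power at the tagged bin relative to surrounding bins
    neighbours = np.r_[spectrum[idx - 12:idx - 2], spectrum[idx + 3:idx + 13]]
    print(f"{label:8s} {f:.2f} Hz  SNR = {spectrum[idx] / neighbours.mean():.2f}")
```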


2018 ◽  
Vol 30 (7) ◽  
pp. 963-972 ◽  
Author(s):  
Andrew D. Engell ◽  
Na Yeon Kim ◽  
Gregory McCarthy

Perception of faces has been shown to engage a domain-specific set of brain regions, including the occipital face area (OFA) and the fusiform face area (FFA). It is commonly held that the OFA is responsible for the detection of faces in the environment, whereas the FFA is responsible for processing the identity of the face. However, an alternative model posits that the FFA is responsible for face detection and subsequently recruits the OFA to analyze the face parts in the service of identification. An essential prediction of the former model is that the OFA is not sensitive to the arrangement of internal face parts. In the current fMRI study, we test the sensitivity of the OFA and FFA to the configuration of face parts. Participants were shown faces in which the internal parts were presented in a typical configuration (two eyes above a nose above a mouth) or in an atypical configuration (the locations of individual parts were shuffled within the face outline). Perception of the atypical faces evoked a significantly larger response than typical faces in the OFA and in a wide swath of the surrounding posterior occipitotemporal cortices. Surprisingly, typical faces did not evoke a significantly larger response than atypical faces anywhere in the brain, including the FFA (although some subthreshold differences were observed). We propose that face processing in the FFA results in inhibitory sculpting of activation in the OFA, which accounts for this region's weaker response to typical than to atypical configurations.
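A minimal sketch of the kind of ROI contrast described above, assuming per-subject response estimates (e.g., GLM betas) for typical and atypical configurations have already been extracted from a face-selective region; the values below are synthetic placeholders, not the study's data.

```python
# Minimal ROI-contrast sketch: paired comparison of synthetic per-subject betas
# for atypical vs typical face configurations in a hypothetical OFA ROI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 20

ofa_typical = rng.normal(1.0, 0.4, n_subjects)    # placeholder betas
ofa_atypical = rng.normal(1.4, 0.4, n_subjects)   # placeholder betas

t, p = stats.ttest_rel(ofa_atypical, ofa_typical)
print(f"OFA atypical > typical: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```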


2007 ◽  
Vol 19 (11) ◽  
pp. 1815-1826 ◽  
Author(s):  
Roxane J. Itier ◽  
Claude Alain ◽  
Katherine Sedore ◽  
Anthony R. McIntosh

Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred only when the eyes were present, but not when eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.
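For readers unfamiliar with how N170 effects are quantified, the sketch below measures N170 amplitude as the mean voltage in a post-stimulus window at a single occipito-temporal channel and compares two conditions (here, eyes present vs eyes removed). The epochs, window, and channel are illustrative assumptions rather than the authors' exact parameters.

```python
# Minimal ERP sketch: N170 mean amplitude in a 150-190 ms window, compared
# between two synthetic conditions at one channel.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
fs = 500                                     # Hz
times = np.arange(-0.1, 0.5, 1 / fs)         # epoch from -100 to 500 ms
win = (times >= 0.150) & (times <= 0.190)    # N170 measurement window

def simulate_epochs(n170_amp, n_trials=80):
    """Synthetic single-channel epochs with an N170-like negative deflection."""
    erp = n170_amp * np.exp(-((times - 0.17) ** 2) / (2 * 0.015 ** 2))
    return erp + rng.normal(0, 2.0, (n_trials, times.size))

eyes_present = simulate_epochs(n170_amp=-6.0)
eyes_removed = simulate_epochs(n170_amp=-3.0)

amp_present = eyes_present[:, win].mean(axis=1)
amp_removed = eyes_removed[:, win].mean(axis=1)
t, p = stats.ttest_ind(amp_present, amp_removed)
print(f"N170 eyes-present vs eyes-removed: t = {t:.2f}, p = {p:.4g}")
```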


2021 ◽  
Vol 15 ◽  
Author(s):  
Zhongliang Yin ◽  
Yue Wang ◽  
Minghao Dong ◽  
Shenghan Ren ◽  
Haihong Hu ◽  
...  

Face processing is a spatiotemporal dynamic process involving widely distributed and closely connected brain regions. Although previous studies have examined the topological differences in brain networks between face and non-face processing, the time-varying patterns at different processing stages have not been fully characterized. In this study, dynamic brain networks were used to explore the mechanism of face processing in the human brain. We constructed a set of brain networks based on consecutive short EEG segments recorded during face and non-face (ketch) processing, respectively, and analyzed the topological characteristics of these brain networks using graph theory. We found that the topological differences in the backbone of the original brain networks (the minimum spanning tree, MST) between face and ketch processing changed dynamically. Specifically, during face processing, the MST was more line-like in the alpha band in the 0–100 ms time window after stimulus onset, and more star-like in the theta and alpha bands in the 100–200 and 200–300 ms time windows. These results indicate that the brain network was more efficient for information transfer and exchange during face processing than during non-face processing. In the MST, the nodes with significant differences in betweenness centrality and degree were mainly located in the left frontal area and the ventral visual pathway, both of which are face-related regions. In addition, the distinct MST patterns discriminated between face and ketch processing with an accuracy of 93.39%. Our results suggest that the MST structure of dynamic brain networks reflects the underlying mechanism of face processing in the human brain.
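The MST analysis can be illustrated with a short sketch: build the minimum spanning tree of a connectivity matrix, then compute node degree, betweenness centrality, and a simple line-versus-star index (leaf fraction). The connectivity matrix and channel count below are synthetic placeholders, not the study's EEG data.

```python
# Minimal MST sketch on a synthetic EEG-style connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_channels = 32

# Symmetric synthetic connectivity matrix with values in [0, 1]
w = rng.random((n_channels, n_channels))
conn = (w + w.T) / 2
np.fill_diagonal(conn, 0)

# MST is taken on 1 - connectivity so that the strongest links are retained
dist = 1 - conn
np.fill_diagonal(dist, 0)
g = nx.from_numpy_array(dist)
mst = nx.minimum_spanning_tree(g)

degree = dict(mst.degree())
betweenness = nx.betweenness_centrality(mst)
leaf_fraction = sum(1 for d in degree.values() if d == 1) / n_channels  # near 1 = star-like

print(f"leaf fraction = {leaf_fraction:.2f}, "
      f"max degree = {max(degree.values())}, "
      f"max betweenness = {max(betweenness.values()):.2f}")
```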


2020 ◽  
Author(s):  
Antonio Maffei ◽  
Paola Sessa

Face perception arises from a collective activation of brain regions in the occipital, parietal, and temporal cortices. Despite wide acknowledgement that these regions act as an intertwined network, the network behavior itself is poorly understood. Here we present a study in which time-varying connectivity, estimated from EEG activity elicited by the presentation of facial expressions, was characterized using graph-theoretical measures of node centrality and global network topology. Results revealed that face perception involves a dynamic reshaping of the network architecture, characterized by the emergence of hubs located in occipital and temporal regions of the scalp. The importance of these nodes can be observed from early stages of visual processing and reaches a climax in the same time window in which the face-sensitive N170 is observed. Furthermore, using Granger causality, we found that the time-evolving centrality of these nodes is associated with ERP amplitude, providing a direct link between the network state and the local neural response. Additionally, investigating global network topology by means of small-worldness and modularity, we found that face processing requires a functional network with a strong small-world organization that maximizes integration, at the cost of segregated subdivisions. Interestingly, this architecture is not static but is implemented by the network from stimulus onset to ~200 msec. Altogether, this study reveals the event-related changes underlying face processing at the network level, suggesting that a distributed processing mechanism operates by dynamically weighting the contributions of the cortical regions involved. Data availability: data and code related to this manuscript can be accessed through the OSF at https://osf.io/hc3sk/?view_only=af52bc4295c044ffbbd3be019cc083f4
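As a minimal illustration of the global topology measures mentioned above (small-worldness and modularity), the sketch below computes both on a synthetic graph standing in for one time window of the EEG network; the graph generator and its parameters are illustrative assumptions, not the authors' thresholding or connectivity estimates.

```python
# Minimal sketch: small-world coefficient and modularity of a placeholder network.
import networkx as nx
from networkx.algorithms import community

# Placeholder network: a connected small-world graph over 30 "electrodes"
g = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.2, seed=4)

# Small-world coefficient sigma: clustering / path length vs random references
sigma = nx.sigma(g, niter=10, nrand=3, seed=4)

# Modularity of a greedy community partition
communities = community.greedy_modularity_communities(g)
q = community.modularity(g, communities)

print(f"small-worldness sigma = {sigma:.2f}, modularity Q = {q:.2f}")
```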


2021 ◽  
Vol 11 (9) ◽  
pp. 1195
Author(s):  
Rosalind Hutchings ◽  
Romina Palermo ◽  
Jessica L. Hazelton ◽  
Olivier Piguet ◽  
Fiona Kumfor

Face processing relies on a network of occipito-temporal and frontal brain regions. Temporal regions are heavily involved in looking at and processing emotional faces; however, the contribution of each hemisphere to this process remains under debate. Semantic dementia (SD) is a rare neurodegenerative brain condition characterized by anterior temporal lobe atrophy, which is either predominantly left-lateralised (left-SD) or right-lateralised (right-SD). This syndrome therefore provides a unique lesion model for understanding the role of laterality in emotional face processing. Here, we investigated facial scanning patterns in 10 left-SD and 6 right-SD patients, compared with 22 healthy controls. Eye tracking was recorded via a remote EyeLink 1000 system while participants passively viewed fearful, happy, and neutral faces over 72 trials. Analyses revealed that right-SD patients had more fixations to the eyes than controls in the Fear condition only (p = 0.04). Right-SD patients also showed more fixations to the eyes than left-SD patients in all conditions: Fear (p = 0.01), Happy (p = 0.008), and Neutral (p = 0.04). In contrast, no differences between controls and left-SD patients were observed for any emotion. No group differences were observed for fixations to the mouth or the whole face. This study is the first to examine patterns of facial scanning in left- versus right-SD, demonstrating a greater focus on the eyes in right-SD. Neuroimaging analyses showed that degradation of the right superior temporal sulcus was associated with increased fixations to the eyes. Together, these results suggest that right-lateralised brain regions of the face processing network are involved in the ability to efficiently utilise changeable cues from the face.
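The fixation analysis can be illustrated with a minimal area-of-interest (AOI) sketch: count the proportion of fixations falling inside an "eyes" region of the face image. The coordinates and AOI boundaries below are synthetic placeholders and do not reflect the EyeLink output format or the authors' AOI definitions.

```python
# Minimal AOI sketch: proportion of synthetic fixations landing in an "eyes" region.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical fixations for one trial: (x, y) in screen pixels
fixations = rng.uniform(low=[300, 200], high=[700, 800], size=(40, 2))

# Rectangular AOI around the eye region: (x_min, x_max, y_min, y_max)
eyes_aoi = (380, 620, 300, 380)

in_eyes = (
    (fixations[:, 0] >= eyes_aoi[0]) & (fixations[:, 0] <= eyes_aoi[1])
    & (fixations[:, 1] >= eyes_aoi[2]) & (fixations[:, 1] <= eyes_aoi[3])
)
print(f"proportion of fixations on the eyes: {in_eyes.mean():.2f}")
```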


2010 ◽  
Vol 69 (3) ◽  
pp. 161-167 ◽  
Author(s):  
Jisien Yang ◽  
Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration, and it remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The face pairs to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that, even though the FIE has frequently been adopted as an index for examining the underlying mechanisms of face processing, it does not emerge with just any configural alteration but depends on the ratio of the alteration.
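A minimal sketch of how the FIE could be summarized per configural-alteration ratio, as the drop in same/different accuracy from upright to inverted presentation; the accuracy values are synthetic placeholders chosen only to illustrate an intermediate-range peak, not the study's data.

```python
# Minimal sketch: FIE as the upright-minus-inverted accuracy difference per ratio.
import numpy as np

ratios = np.array([0.04, 0.08, 0.12, 0.16, 0.20, 0.24])

# Hypothetical group-mean accuracies (proportion correct)
acc_upright = np.array([0.55, 0.68, 0.80, 0.86, 0.90, 0.93])
acc_inverted = np.array([0.54, 0.60, 0.70, 0.78, 0.88, 0.92])

fie = acc_upright - acc_inverted
for r, d in zip(ratios, fie):
    print(f"alteration {r:.0%}: FIE = {d:+.2f}")
```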


2014 ◽  
Vol 28 (3) ◽  
pp. 148-161 ◽  
Author(s):  
David Friedman ◽  
Ray Johnson

A cardinal feature of aging is a decline in episodic memory (EM). Nevertheless, there is evidence that some older adults may be able to "compensate" for failures in recollection-based processing by recruiting brain regions and cognitive processes not normally recruited by the young. We review the evidence suggesting that age-related declines in EM performance and recollection-related brain activity (left-parietal EM effect; LPEM) are due to altered processing at encoding. We describe results from our laboratory on differences in encoding- and retrieval-related activity between young and older adults. We then show that, relative to the young, older adults' brain activity at encoding is reduced in a 400–1,400-ms interval over a brain region believed to be crucial for successful semantic elaboration (left inferior prefrontal cortex, LIPFC; Johnson, Nessler, & Friedman, 2013; Nessler, Friedman, Johnson, & Bersick, 2007; Nessler, Johnson, Bersick, & Friedman, 2006). This reduced brain activity is associated with diminished subsequent recognition-memory performance and the LPEM at retrieval. We provide evidence for this premise by demonstrating that disrupting encoding-related processes during this 400–1,400-ms interval in young adults affords causal support for the hypothesis that the reduction over LIPFC during encoding produces the hallmarks of an age-related EM deficit: normal semantic retrieval at encoding, reduced subsequent episodic recognition accuracy and free recall, and a reduced LPEM. Finally, we show that the reduced LPEM in young adults is associated with "additional" brain activity over brain areas similar to those activated when older adults show deficient retrieval. Hence, rather than supporting the compensation hypothesis, these data are more consistent with the scaffolding hypothesis, in which the recruitment of additional cognitive processes is an adaptive response across the life span in the face of momentary increases in task demand due to poorly encoded episodic memories.

