Attention to speaking mouth is reduced by face inversion in infants only from 9 months of age

Author(s): Zuzanna Laudańska, Aleksandra Dopierała, Magdalena Szmytke, Dianna Ilyka, Anna Malinowska-Korczak, et al.

Abstract: Configural processing is a specialised perceptual mechanism that allows adult humans to quickly process facial information. It emerges before the first birthday and can be disrupted by upside-down presentation of the face (inversion). To date, little is known about the relationship of configural face processing to the emerging knowledge of audiovisual (AV) speech in infancy. Using eye-tracking, we measured attention to the speaking mouth in upright and inverted faces that were either congruent or incongruent with the speech sound. Face inversion affected looking at AV speech only in older infants (9- to 11- and 12- to 14-month-olds). The youngest group of infants (5- to 7-month-olds) showed no differences in looking durations between upright and inverted faces, whereas in both older groups face inversion led to reduced looking at the articulating mouth. We also observed a stronger interest in the eyes in the youngest infants, followed by an increase in looking time to the mouth in both older groups. Our findings suggest that configural face processing is involved in AV speech processing already in infancy, indicating early integration of face and speech processing mechanisms in cognitive development.

2016, Vol 8 (2), pp. 183-190
Author(s): Jason C. Deska, Steven M. Almaraz, Kurt Hugenberg

Recent research has demonstrated that ascribing minds to humanlike stimuli is a product of both their perceptual similarity to human faces and whether they engage configural face processing. We present the findings of two experiments in which we manipulated both the amount of humanlike features in faces (along a doll-to-human morph continuum) and perceivers’ ability to employ configural face processing (via face inversion), while measuring explicit ratings of mind ascription (Study 1) and the spontaneous activation of humanlike concepts (Study 2). In both studies, we find novel evidence that ascribing minds to entities is an interactive product of both having strong perceptual similarity to human faces and being processed using the configural processing mechanisms typical of normal face perception. In short, ascribing mind to others is bounded jointly by the featural cues of the target and by the processes employed by the perceiver.


2019
Author(s): Inês Mares, Louise Ewing, Emily K. Farran, Fraser W. Smith, Marie L. Smith

Abstract: Rapidly and accurately processing information from faces is a critical human function that is known to improve with developmental age. Understanding the underlying drivers of this improvement remains a contentious question, with debate continuing as to the presence of early vs. late maturation of face-processing mechanisms. Recent behavioural evidence suggests that an important ‘hallmark’ of expert face processing – the face inversion effect – is present in very young children, yet neural support for this remains unclear. To address this, we conducted a detailed investigation of the neural dynamics of face-selective processing in children spanning a range of ages (6–11 years) and in adults. Uniquely, we applied multivariate pattern analysis (MVPA) to the electroencephalogram (EEG) signal to test for the presence of a distinct neural profile associated with canonical upright faces when compared both to other objects (houses) and to inverted faces. Results revealed robust discrimination profiles, at the individual level, of differentiated neural activity associated with broad face categorization and, further, with its expert processing, as indexed by the face inversion effect, from the youngest ages tested. This result is consistent with an early functional maturation of broad face-processing mechanisms. Yet clear quantitative differences between the response profiles of children and adults are suggestive of age-related refinement of this system with developing face and general expertise. Standard ERP analysis also provides some support for qualitative differences in the neural response to inverted faces in children in contrast to adults. This neural profile is in line with recent behavioural studies that have reported impressively expert early face abilities during childhood, while also providing novel evidence of ongoing neural specialisation between childhood and adulthood.


2010, Vol 69 (3), pp. 161-167
Author(s): Jisien Yang, Adrian Schwaninger

Configural processing has been considered the major contributor to the face inversion effect (FIE) in face recognition. However, most researchers have obtained the FIE with only one specific ratio of configural alteration, and it remains unclear whether the ratio of configural alteration itself can mediate the occurrence of the FIE. We aimed to clarify this issue by manipulating the configural information parametrically using six different ratios, ranging from 4% to 24%. Participants were asked to judge whether a pair of faces was entirely identical or different. The paired faces to be compared were presented either simultaneously (Experiment 1) or sequentially (Experiment 2). Both experiments revealed that the FIE was observed only when the ratio of configural alteration was in the intermediate range. These results indicate that even though the FIE has frequently been adopted as an index to examine the underlying mechanisms of face processing, its emergence is not robust across all configural alterations but depends on the ratio of configural alteration.


2005, Vol 17 (10), pp. 1652-1666
Author(s): Roberto Caldara, Philippe Schyns, Eugène Mayer, Marie L. Smith, Frédéric Gosselin, et al.

One of the most impressive disorders following brain damage to the ventral occipitotemporal cortex is prosopagnosia, the inability to recognize faces. Although acquired prosopagnosia with preserved general visual and memory functions is rare, several cases have been described in the neuropsychological literature and studied at the functional and neural level over the last decades. Here we tested a brain-damaged patient (PS) presenting a deficit restricted to the category of faces, to clarify the nature of the missing and preserved components of the face-processing system when it is selectively damaged. After PS learned to identify 10 neutral and happy faces through extensive training, we investigated her recognition of faces using Bubbles, a response classification technique that samples facial information across different bandwidths of spatial frequencies [Gosselin, F., & Schyns, P. G., Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261-2271, 2001]. Although PS gradually used less information (i.e., fewer bubbles) to identify faces over testing, the total information she required was much larger than for normal controls and decreased less steeply with practice. Most importantly, the facial information used to identify individual faces differed between PS and controls. Specifically, in marked contrast to controls, PS did not use the optimal eye information to identify familiar faces, but relied instead on the lower part of the face, including the mouth and the external contours, as normal observers typically do when processing unfamiliar faces. Together, the findings reported here suggest that damage to the face-processing system is characterized by an inability to use the information that is optimal for judging identity, focusing instead on suboptimal information.


2020
Author(s): Sara Fiscella, Madeline S. Cappelloni, Ross K. Maddox

When listening is difficult, seeing the face of the talker aids speech comprehension. Faces carry both temporal cues (low-level physical correspondence between mouth movement and auditory speech) and linguistic cues (learned correspondences between mouth shape, or viseme, and speech sound, or phoneme). Listeners participated in two experiments investigating how these cues may be used to process sentences when maskers are present. In Experiment I, faces were rotated to disrupt linguistic but not temporal cue correspondence. Listeners suffered a deficit in speech comprehension when the faces were rotated, indicating that visemes are processed in a rotation-dependent manner and that linguistic cues aid comprehension. In Experiment II, listeners were asked to detect pitch modulation in the target speech with upright and inverted faces that matched either the target or the masker speech, such that performance differences could be explained by binding, an early multisensory integration mechanism distinct from traditional late integration. Performance in this task replicated previous findings that temporal integration induces binding, but there was no behavioural evidence for a role of linguistic cues in binding. Together, these experiments point to temporal cues providing a speech-processing benefit through binding, and linguistic cues providing a benefit through late integration.


1998, Vol 51 (3), pp. 449-473
Author(s): Helmut Leder, Vicki Bruce

Distinctiveness contributes strongly to the recognition and rejection of faces in memory tasks. In four experiments we examine the role played by local and relational information in the distinctiveness of upright and inverted faces. In all experiments subjects saw one of three versions of a face: original faces, which had been rated as average in distinctiveness in a previous study (Hancock, Burton, & Bruce, 1996); a more distinctive version in which local features had been changed (D-local); and a more distinctive version in which relational features had been changed (D-rel). An increase in distinctiveness was found for D-local and D-rel faces in Experiment 1 (complete faces) and Experiments 3 and 4 (face internals only) when the faces had to be rated in upright presentation, but the distinctiveness of the D-rel faces was reduced much more than that of the D-local versions when ratings were given to faces presented upside-down (Experiments 1 and 3). Recognition performance showed a similar pattern: presented upright, both D-local and D-rel faces yielded higher performance than the originals, but in upside-down presentation the D-local versions showed a much stronger distinctiveness advantage. When only internal features of faces were used (Experiments 3 and 4), the D-rel faces lost their advantage over the original versions in inverted presentation. The results suggest that at least two dimensions of facial information contribute to a face's apparent distinctiveness, and that these sources of information are differentially affected by turning the face upside-down. These findings are in accordance with a face-processing model in which face inversion effects occur because a specific type of information processing is disrupted, rather than because of a general disruption of performance.


Perception, 10.1068/p3012, 2000, Vol 29 (2), pp. 159-170
Author(s): Alejo Freire, Kang Lee, Lawrence A. Symons

We report four experiments leading to two conclusions: (i) the face-inversion effect is mainly due to deficits in processing configural information from inverted faces; and (ii) this effect occurs primarily at the encoding stage of face processing rather than at the storage stage. In Experiment 1, participants discriminated upright faces differing primarily in configuration with 81% accuracy; participants viewing the same faces presented upside down scored only 55%. In Experiment 2, the corresponding discrimination rates for faces differing mainly in featural information were 91% (upright) and 90% (inverted). In Experiments 3 and 4, the same faces were used in a memory paradigm. In Experiment 3, a delayed matching-to-sample task was used in which upright-face pairs differed either in configuration or in features. Recognition rates were comparable to those for the corresponding upright faces in the discrimination tasks of Experiments 1 and 2, and there was no effect of delay (1 s, 5 s, or 10 s). In Experiment 4, we repeated Experiment 3 with inverted faces. Results were comparable to those of the inverted conditions in Experiments 1 and 2, and again there was no effect of delay. Together, these results suggest that an ‘encoding bottleneck’ for configural information may be responsible for the face-inversion effect in particular, and for memory for faces in general.


2019
Author(s): Yasmin Allen-Davidian, Manuela Russo, Naohide Yamamoto, Jordy Kaufman, Alan J. Pegna, et al.

Face inversion effects (FIEs) – differences in responses to upside-down faces compared to upright faces – occur in both behavioural and electrophysiological responses when people view face stimuli. In EEG, the inversion of a face is often reported to evoke an enhanced amplitude and delayed latency of the N170 event-related potential, a response that has historically been attributed to the indexing of specialised face-processing mechanisms within the brain. However, inspection of the literature reveals that while the N170 is consistently delayed to photographed, schematic, Mooney, and line-drawn face stimuli, only naturally photographed faces show an enhanced amplitude upon inversion. This raises the possibility that the increased N170 amplitudes to inverted faces have origins other than the inversion of the face's structural components. In line with previous research establishing the N170 as a prediction error signal, we hypothesise that the unique N170 amplitude response to inverted photographed faces stems from multiple expectation violations, over and above structural inversion. For instance, rotating an image of a face upside down not only violates the expectation that faces appear upright, but also lifelong priors that illumination comes from above and gravity pulls from below. To test this hypothesis, we recorded EEG while participants viewed face stimuli that were upright versus inverted, illuminated from above versus below, and photographed with the models upright versus hanging upside down. The N170 amplitudes were modulated by a complex interaction between orientation, lighting, and gravity factors: amplitudes were largest when faces violated all three expectations and smallest when all three factors concurred with expectations. These results confirm our hypothesis that FIEs on N170 amplitudes are driven by a violation of the viewer's expectations across several parameters that characterise faces, rather than by a disruption in the configurational disposition of their features.


2003, Vol 15 (3), pp. 462-474
Author(s): Galit Yovel, Jerre Levy, Marcia Grabowecky, Ken A. Paller

Studies in healthy individuals and split-brain patients have shown that the representation of facial information from the left visual field (LVF) is better than that from the right visual field (RVF). To investigate the neurophysiological basis of this LVF superiority in face perception, we recorded event-related potentials (ERPs) to centrally presented face stimuli in which relevant facial information was present bilaterally (B faces) or only in the left (L faces) or the right (R faces) visual field. Behavioral findings showed the best performance for B faces and, in line with the LVF superiority, better performance for L than for R faces. Evoked potentials to B, L, and R faces at 100–150 msec poststimulus showed no evidence of asymmetric transfer of information between the hemispheres at early stages of visual processing, suggesting that this factor is not responsible for the LVF superiority. Neural correlates of the LVF superiority, however, were manifested in a shorter latency of the face-specific N170 component to L than to R faces, and in a larger amplitude to L than to R faces at 220–280 and 400–600 msec over both hemispheres. These ERP amplitude differences between L and R faces covaried across subjects with the extent to which the face-specific N170 component was larger over the right than the left hemisphere. We conclude that the two hemispheres exchange information symmetrically at early stages of face processing and together generate a shared facial representation, which is better when facial information is directly presented to the right hemisphere (RH; L faces) than to the left hemisphere (LH; R faces), and best when both hemispheres receive facial information (B faces).

