Face Inversion Effect
Recently Published Documents

TOTAL DOCUMENTS: 113 (five years: 26)
H-INDEX: 22 (five years: 2)

2022, Vol 12
Author(s): Yuki Tsuji, So Kanazawa, Masami K. Yamaguchi

Pupil contagion is the phenomenon in which an observer's pupil diameter changes in response to another person's pupils. Even chimpanzees and infants at early developmental stages show pupil contagion. This study investigated whether dynamic changes in pupil diameter would induce changes in infants' pupil diameter; we also investigated whether pupil contagion depends on face orientation. We measured the pupil diameter of 50 five- to six-month-old infants in response to changes in the pupil diameter (dilating/constricting) of upright and inverted faces. The results showed that (1) in the upright presentation condition, dilating pupils induced a change in the infants' pupil diameter whereas constricting pupils did not, and (2) pupil contagion occurred only for upright faces, not for inverted faces. These results indicate a face-inversion effect in infants' pupil contagion.
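
To make the dependent measure concrete, the sketch below shows one common way to quantify such a pupil response: a baseline-corrected change in pupil diameter within a post-stimulus window, computed per infant and condition. This is a minimal, hypothetical sketch in Python, not the authors' pipeline; the file name, column names, and time windows are assumptions.

    import pandas as pd

    # Hypothetical long-format eye-tracker export: one row per sample, with
    # columns 'infant', 'condition' (e.g. 'upright_dilate', 'inverted_constrict'),
    # 'time' (seconds from stimulus onset) and 'pupil_mm' (pupil diameter in mm).
    samples = pd.read_csv("pupil_samples.csv")

    BASELINE = (-0.5, 0.0)   # assumed pre-stimulus baseline window (s)
    RESPONSE = (0.5, 2.0)    # assumed window in which contagion would appear (s)

    def window_mean(df, lo, hi):
        """Mean pupil diameter within [lo, hi) seconds, per infant and condition."""
        win = df[(df["time"] >= lo) & (df["time"] < hi)]
        return win.groupby(["infant", "condition"])["pupil_mm"].mean()

    baseline = window_mean(samples, *BASELINE)
    response = window_mean(samples, *RESPONSE)

    # Baseline-corrected change: positive values indicate dilation relative to
    # the pre-stimulus baseline, i.e. potential pupil contagion.
    change = (response - baseline).rename("pupil_change_mm").reset_index()
    print(change.groupby("condition")["pupil_change_mm"].agg(["mean", "sem"]))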


2021
Author(s): Alexandra Krugliak, Alex Clarke

Our visual environment impacts multiple aspects of cognition, including perception, attention, and memory, yet most studies traditionally remove or control the external environment. As a result, we have a limited understanding of neurocognitive processes beyond the controlled lab environment. Here, we aim to study neural processes in real-world environments while maintaining a degree of control over perception. To achieve this, we combined mobile EEG (mEEG) with augmented reality (AR), which allows us to place virtual objects into the real world. We validated this AR and mEEG approach using a well-characterised cognitive response: the face inversion effect. Participants viewed upright and inverted faces in three EEG tasks: (1) a lab-based computer task, (2) walking through an indoor environment while seeing face photographs, and (3) walking through an indoor environment while seeing virtual faces. We find greater low-frequency EEG activity for inverted compared to upright faces in all three tasks, demonstrating that cognitively relevant signals can be extracted from mEEG and AR paradigms. This was established both in an epoch-based analysis aligned to face-onset events and in a GLM-based approach that incorporates continuous EEG signals and face perception states. Together, this research helps pave the way towards exploring neurocognitive processes in real-world environments while maintaining experimental control using AR.
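
As an illustration of the epoch-based analysis described above (EEG segments aligned to face-onset events, contrasted across upright and inverted conditions), here is a minimal sketch using MNE-Python. The file name, trigger codes, filter band, and epoch window are assumptions for illustration, not the authors' actual parameters.

    import mne

    # Hypothetical raw mEEG recording and assumed trigger codes for face onsets.
    raw = mne.io.read_raw_fif("sub-01_task-AR_raw.fif", preload=True)
    raw.filter(l_freq=1.0, h_freq=30.0)  # keep the low-frequency activity of interest

    events = mne.find_events(raw, stim_channel="STI 014")
    event_id = {"face/upright": 1, "face/inverted": 2}  # assumed event codes

    # Epoch the continuous signal around each face onset.
    epochs = mne.Epochs(raw, events, event_id=event_id,
                        tmin=-0.2, tmax=1.0, baseline=(None, 0), preload=True)

    # Average within condition and contrast inverted versus upright faces.
    evoked_upright = epochs["face/upright"].average()
    evoked_inverted = epochs["face/inverted"].average()
    contrast = mne.combine_evoked([evoked_inverted, evoked_upright],
                                  weights=[1, -1])
    contrast.plot_joint()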


2021, Vol 11 (1)
Author(s): Allie R. Geiger, Benjamin Balas

Face recognition is supported by selective neural mechanisms that are sensitive to various aspects of facial appearance. These include event-related potential (ERP) components such as the P100 and the N170, which exhibit different patterns of selectivity for facial appearance. Examining the boundary between faces and non-faces using these responses is one way to develop a more robust understanding of the representation of faces in extrastriate cortex and to determine what critical properties an image must possess to be considered face-like. Robot faces are a particularly interesting stimulus class because they can differ markedly from human faces in shape, surface properties, and the configuration of facial features, yet are also interpreted as social agents in a range of settings. In the current study, we therefore investigated how ERP responses to robot faces differ from responses to human faces and non-face objects. In two experiments, we examined how the P100 and N170 responded to human faces, robot faces, and non-face objects (clocks). In Experiment 1, we found that robot faces elicit intermediate responses from face-sensitive components, falling between non-face objects (clocks) and both real and artificial human faces (computer-generated faces and dolls). These results suggest that while human-like inanimate faces (CG faces and dolls) are processed much like real faces, robot faces are dissimilar enough from human faces to be processed differently. In Experiment 2, we found that the face inversion effect was only partly evident for robot faces. We conclude that robot faces are an intermediate stimulus class that offers insight into the perceptual and cognitive factors affecting how social agents are identified and categorized.
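
To make the ERP measures concrete, the sketch below quantifies P100 and N170 responses in a common way: the mean amplitude of a condition-averaged waveform within a component-typical latency window at a single occipito-temporal electrode. The windows, sampling rate, and data layout are illustrative assumptions, not the authors' measurement choices.

    import numpy as np

    # Hypothetical condition-averaged ERPs (µV) at one occipito-temporal electrode
    # (e.g. P8), sampled at 500 Hz from -200 ms to 600 ms around stimulus onset.
    sfreq = 500.0
    times = np.arange(-0.2, 0.6, 1.0 / sfreq)
    erp = {"human_face": np.zeros_like(times),   # placeholders for real waveforms
           "robot_face": np.zeros_like(times),
           "clock": np.zeros_like(times)}

    # Commonly used (but assumed here) measurement windows for the two components.
    WINDOWS = {"P100": (0.080, 0.130), "N170": (0.130, 0.200)}

    def mean_amplitude(waveform, lo, hi):
        """Mean amplitude of the waveform within [lo, hi) seconds."""
        mask = (times >= lo) & (times < hi)
        return waveform[mask].mean()

    for component, (lo, hi) in WINDOWS.items():
        for condition, waveform in erp.items():
            print(f"{component} {condition}: {mean_amplitude(waveform, lo, hi):.2f} µV")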


Author(s): Sarah Schroeder, Kurtis Goad, Nicole Rothner, Ali Momen, Eva Wiese

People process human faces configurally—as a Gestalt or integrated whole—but perceive objects in terms of their individual features. As a result, faces—but not objects—are more difficult to process when presented upside down versus upright. Previous research demonstrates that this inversion effect is not observed when recognizing previously seen android faces, suggesting they are processed more like objects, perhaps due to a lack of perceptual experience and/or motivation to recognize android faces. The current study aimed to determine whether negative emotions, particularly fear of androids, may lessen configural processing of android faces compared to human faces. While the current study replicated previous research showing a greater inversion effect for human compared to android faces, we did not find evidence that negative emotions—such as fear—towards androids influenced the face inversion effect. We discuss the implications of this study and opportunities for future research.
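
The two quantities at stake here, the size of each participant's inversion effect and its relation to self-reported fear of androids, can be expressed in a few lines. The sketch below is hypothetical (the file and column names are invented for illustration) and is not the authors' analysis.

    import pandas as pd
    from scipy import stats

    # Hypothetical per-participant recognition accuracy by face type and
    # orientation, plus a questionnaire-based fear-of-androids score.
    df = pd.read_csv("recognition_scores.csv")  # columns: participant, face_type,
                                                # acc_upright, acc_inverted, fear_score

    # Inversion effect: drop in recognition accuracy when the face is upside down.
    df["inversion_effect"] = df["acc_upright"] - df["acc_inverted"]

    # A larger inversion effect is expected for human than for android faces.
    print(df.groupby("face_type")["inversion_effect"].mean())

    # Does fear of androids relate to the inversion effect for android faces?
    android = df[df["face_type"] == "android"]
    r, p = stats.pearsonr(android["fear_score"], android["inversion_effect"])
    print(f"r = {r:.2f}, p = {p:.3f}")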


2021, Vol 17, pp. 1893-1906
Author(s): Yi Liu, Taiyong Bi, Qijie Kuang, Bei Zhang, Huawang Wu, ...

2021
Author(s): Sam S. Rakover, Rani A. Bar-On, Anna Gliklich

A major interest of research on face recognition lies in explaining the Face Inversion Effect (FIE), in which recognition of an inverted face is less successful than that of an upright face. However, prior research has devoted little effort to examining how the cognitive system handles comparisons between upright and inverted faces. In two experiments, such comparisons were found to be based on visual similarity rather than on mental rotation of the inverted face to upright. Visual similarity rests on certain elements common to the two faces that survive the inversion transformation. These elements are symmetrical or salient components of the face, such as round eyes or thick lips.


2021, Vol 11 (1)
Author(s): Ciro Civile, Samantha Quaglia, Emika Waguri, Maddy Ward, Rossy McLaren, ...

We believe we are now in a position to answer the question "Are faces special?", at least as it applies to the face inversion effect (better performance for upright than inverted faces). Using a double-blind, between-subjects design, in two experiments (n = 96) we applied a specific tDCS procedure targeting the Fp3 area while participants performed a matching task with faces (Experiment 1a) or checkerboards drawn from a familiar, prototype-defined category (Experiment 1b). Anodal tDCS eliminated the checkerboard inversion effect reliably obtained in the sham group, but only reduced it for faces (although the reduction was significant). Thus, there is a component of the face inversion effect that is not affected by a tDCS procedure that can eliminate the checkerboard inversion effect. We suggest that the reduction reflects the loss of an expertise-based component of the face inversion effect, and that the residual is due to a face-specific component of that effect.
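
One simple way to test whether anodal tDCS reduced the inversion effect is to compute a per-participant inversion score (upright minus inverted performance) and compare it between the anodal and sham groups; in a 2 x 2 mixed design this comparison is equivalent to the group-by-orientation interaction. The sketch below is illustrative only, with assumed column names, and is not the authors' analysis.

    import pandas as pd
    from scipy import stats

    # Hypothetical matching-task data: one row per participant, with performance
    # (e.g. d' or accuracy) for upright and inverted stimuli and the stimulation
    # group ('anodal' or 'sham').
    df = pd.read_csv("matching_task.csv")  # columns: participant, group,
                                           # perf_upright, perf_inverted

    # Per-participant inversion effect.
    df["inversion_effect"] = df["perf_upright"] - df["perf_inverted"]

    anodal = df.loc[df["group"] == "anodal", "inversion_effect"]
    sham = df.loc[df["group"] == "sham", "inversion_effect"]

    # Independent-samples t-test on inversion scores: a smaller mean in the anodal
    # group would indicate that stimulation reduced the inversion effect.
    t, p = stats.ttest_ind(anodal, sham)
    print(f"anodal = {anodal.mean():.2f}, sham = {sham.mean():.2f}, "
          f"t = {t:.2f}, p = {p:.3f}")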


2021
Author(s): Andreja Stajduhar, Tzvi Ganel, Galia Avidan, R. Shayna Rosenbaum, Erez Freud

Face perception is considered a remarkable visual ability in humans, one that follows a prolonged developmental trajectory. In response to the COVID-19 pandemic, mask-wearing has become mandatory for adults and children alike, and previous research indicates that masks adversely affect face recognition in adults. The current study sought to explore the effect of masks on face processing abilities in school-age children, given that face perception is not fully developed in this population. To this end, children (n = 72, ages 6-14 years) completed the Cambridge Face Memory Test – Kids (CFMT-K), a validated measure of face perception performance. Faces were presented with or without masks and in two orientations (upright/inverted). The inclusion of face masks led to a profound deficit in face perception abilities. This decrement was more pronounced in children than in adults, despite adjustment of task difficulty across the two age groups. Additionally, children's CFMT-K scores for upright faces correlated reliably with age in both the mask and no-mask conditions. Finally, as previously observed in adults, children also showed qualitative changes in the processing of masked faces. Specifically, holistic processing, a hallmark of face perception, was disrupted for masked faces, as suggested by a reduced face-inversion effect. Together, these findings provide evidence for substantial quantitative and qualitative alterations in the processing of masked faces in school-age children.
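
The reduced face-inversion effect for masked faces can be illustrated with a short sketch: compute each child's inversion effect separately for masked and unmasked faces, compare the two with a paired test, and check the developmental correlation with age. The column names and scoring here are assumptions for illustration, not the CFMT-K scoring procedure.

    import pandas as pd
    from scipy import stats

    # Hypothetical per-child accuracy for each mask condition and orientation.
    df = pd.read_csv("cfmt_k_scores.csv")  # columns: child, age, acc_nomask_upright,
                                           # acc_nomask_inverted, acc_mask_upright,
                                           # acc_mask_inverted

    # Inversion effect per mask condition (upright minus inverted accuracy).
    df["ie_nomask"] = df["acc_nomask_upright"] - df["acc_nomask_inverted"]
    df["ie_mask"] = df["acc_mask_upright"] - df["acc_mask_inverted"]

    # A smaller inversion effect for masked faces would suggest disrupted
    # holistic processing.
    t, p = stats.ttest_rel(df["ie_nomask"], df["ie_mask"])
    print(f"paired t = {t:.2f}, p = {p:.3f}")

    # Developmental trend: does upright-face performance increase with age?
    for cond in ("acc_nomask_upright", "acc_mask_upright"):
        r, p = stats.pearsonr(df["age"], df[cond])
        print(f"{cond}: r = {r:.2f}, p = {p:.3f}")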


2021, Vol 33 (2), pp. 303-314
Author(s): Yasmin Allen-Davidian, Manuela Russo, Naohide Yamamoto, Jordy Kaufman, Alan J. Pegna, ...

Face inversion effects occur in both behavioral and electrophysiological responses when people view faces. In EEG, inverted faces are often reported to evoke an enhanced amplitude and a delayed latency of the N170 ERP component, a response that has been attributed to specialized face processing mechanisms in the brain. However, inspection of the literature reveals that, although the N170 is consistently delayed for a variety of face representations, only photographed faces evoke enhanced N170 amplitudes upon inversion. This suggests that the increased N170 amplitude to inverted faces may have origins other than the inversion of the face's structure. We hypothesize that the unique N170 amplitude response to inverted photographed faces stems from multiple expectation violations, over and above structural inversion. For instance, rotating an image of a face upside down not only violates the expectation that faces appear upright, but also violates lifelong priors about illumination and gravity. We recorded EEG while participants viewed face stimuli (upright vs inverted) in which the faces were illuminated from above or below, and the models were photographed upright or hanging upside down. N170 amplitudes were modulated by a complex interaction between orientation, lighting, and gravity, with the largest amplitudes occurring when faces violated all three expectations at once. These results confirm our hypothesis that face inversion effects on N170 amplitudes are driven by violations of the viewer's expectations across several parameters that characterize faces, rather than by a disruption of the configural disposition of their features.
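
The three-way interaction described above (orientation by lighting by gravity on N170 amplitude) is the kind of design a repeated-measures ANOVA handles directly. The sketch below uses AnovaRM from statsmodels as one way to run such an analysis; the data layout and column names are assumptions, not the authors' pipeline.

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical long-format table: one row per participant x condition cell,
    # with the mean N170 amplitude (µV) at an occipito-temporal electrode.
    df = pd.read_csv("n170_amplitudes.csv")  # columns: subject, orientation,
                                             # lighting, gravity, n170_amp

    # 2 x 2 x 2 within-subject ANOVA: orientation (upright/inverted),
    # lighting (from above/from below), gravity (model upright/hanging upside down).
    aov = AnovaRM(df, depvar="n170_amp", subject="subject",
                  within=["orientation", "lighting", "gravity"],
                  aggregate_func="mean").fit()
    print(aov)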

