face stimuli
Recently Published Documents


TOTAL DOCUMENTS: 221 (FIVE YEARS: 74)
H-INDEX: 30 (FIVE YEARS: 3)
2022 ◽  
Author(s):  
Sebastian Korb ◽  
Nace Mikus ◽  
Claudia Massaccesi ◽  
Jack Grey ◽  
Suvarnalata Xanthate Duggirala ◽  
...  

Appraisals can be influenced by cultural beliefs and stereotypes. In line with this, past research has shown that judgments about the emotional expression of a face are influenced by the face’s sex, and, conversely, that judgments about a person’s sex depend to some extent on the person’s facial expression. For example, participants associate anger with male faces, and happiness or sadness with female faces. However, the strength and the bidirectionality of these effects remain debated. Moreover, the interplay of a stimulus’s emotion and sex remains largely unexplored in the auditory domain. To investigate these questions, we created a novel stimulus set of 121 avatar faces and 121 human voices (available at https://bit.ly/2JkXrpy) with matched, fine-scale changes along the emotional (happy to angry) and sexual (male to female) dimensions. In a first experiment (N=76), we found clear evidence for the mutual influence of facial emotion and sex cues on ratings, and, moreover, for larger implicit (task-irrelevant) effects of stimulus emotion than of stimulus sex. These findings were replicated and extended in two preregistered studies – one laboratory categorisation study using the same face stimuli (N=108; https://osf.io/ve9an), and one online study with vocalisations (N=72; https://osf.io/vhc9g). Overall, the results show that the maleness–anger and femaleness–happiness associations exist across sensory modalities, and suggest that emotions expressed in the face and voice cannot be entirely disregarded, even when attention is focused mainly on determining a stimulus’s sex. We discuss the relevance of these findings for cognitive and neural models of face and voice processing.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Runnan Cao ◽  
Xin Li ◽  
Nicholas J. Brandmeir ◽  
Shuo Wang

Abstract
Faces are salient social stimuli that attract a stereotypical pattern of eye movement. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking when participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of face feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: 1) they encoded the salient region for face recognition, and 2) they were related to perceived social trait judgments. Together, our results link eye movement with neural face processing and provide important mechanistic insights for human face perception.


2021 ◽  
Author(s):  
Jianxin Wang ◽  
Craig Poskanzer ◽  
Stefano Anzellotti

Facial expressions are critical in our daily interactions. Studying how humans recognize dynamic facial expressions is an important area of research in social perception, but progress is hampered by the difficulty of creating well-controlled stimuli. Research on the perception of static faces has made significant progress thanks to techniques that make it possible to generate synthetic face stimuli. However, synthetic dynamic expressions are more difficult to generate; methods that yield realistic dynamics typically rely on infrared markers applied to the face, making it expensive to create datasets that include large numbers of different expressions. In addition, the markers themselves might interfere with facial dynamics. In this paper, we contribute a new method to generate large amounts of realistic and well-controlled facial expression videos. We use a deep convolutional neural network with attention and asymmetric loss to extract the dynamics of action units from videos, and demonstrate that this approach outperforms a baseline model based on convolutional neural networks without attention on the same stimuli. Next, we develop a pipeline that uses the action unit dynamics to render realistic synthetic videos. This pipeline makes it possible to generate large-scale, naturalistic, and controllable facial expression datasets to facilitate future research in social cognitive science.
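The abstract names an asymmetric loss for action-unit detection but gives no formula. As an illustration only, the sketch below implements the widely used asymmetric multi-label loss idea (a stronger focusing exponent plus a probability margin on negatives, so the many easy negative labels in sparse action-unit annotations contribute less); the hyperparameter values are hypothetical and this is not the authors' exact loss.

```python
import numpy as np

def asymmetric_loss(probs, targets, gamma_pos=1.0, gamma_neg=4.0, margin=0.05):
    """Asymmetric binary loss over multi-label predictions (e.g. action units).

    Positives are focused mildly: (1 - p)^gamma_pos * log(p).
    Negatives are focused harder and shifted by a margin:
    (p - margin)^gamma_neg * log(1 - (p - margin)),
    which drives the contribution of easy negatives (p near 0) toward zero.
    """
    probs = np.clip(probs, 1e-8, 1 - 1e-8)
    # Positive-label term
    loss_pos = targets * (1 - probs) ** gamma_pos * np.log(probs)
    # Negative-label term with probability margin
    p_m = np.clip(probs - margin, 1e-8, 1 - 1e-8)
    loss_neg = (1 - targets) * p_m ** gamma_neg * np.log(1 - p_m)
    return -(loss_pos + loss_neg).mean()
```

With these settings, a confidently correct prediction incurs a much smaller loss than a confidently wrong one, while an easy negative (probability near zero) contributes almost nothing.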


2021 ◽  
Author(s):  
◽  
Gates Henderson

<p>Face perception depends on a network of brain areas that selectively respond to faces over non-face stimuli. These face-selective areas are involved in different aspects of face perception, but what specific process is implemented in a particular region remains poorly understood. A candidate process is holistic face processing, namely the integration of visual information across the whole of an upright face. In this thesis, I report two experiments that examine whether the occipital face area (OFA), a face-selective region in the inferior occipital gyrus, performs holistic processing for categorising a stimulus as a face. Both experiments were conducted using online, repetitive transcranial magnetic stimulation (TMS) to disrupt activity in the brain while participants performed face perception tasks. Experiment 1 was a localiser in which participants completed two face identification tasks while receiving TMS at OFA or vertex. Participants’ accuracy decreased for one of the tasks as a result of OFA but not vertex stimulation. This result confirms that OFA could be localised and its activity disrupted. Experiment 2 was a test of holistic processing in which participants categorised ambiguous two-tone images as faces or non-faces while TMS was delivered to OFA or vertex. Participants’ accuracy and response times were unchanged as a result of either stimulation. This result suggests that the OFA is not engaged in holistic processing for categorising a stimulus as a face. Overall, the current results are more consistent with previous studies suggesting that OFA is involved in processing of local face features/details rather than the whole face.</p>


2021 ◽  
Author(s):  
Eveline Mu ◽  
David P Crewther ◽  
Laila Elaine Hugrass

Visual processing differences in the magnocellular pathway have been reported across the autistic spectrum. On the basis that the firing of primate Type IV magnocellular cells is suppressed by diffuse red backgrounds, several groups have used red backgrounds as a means to investigate magnocellular contributions to visual processing in humans. Here, we measured emotion identification accuracy, and compared the P100 and N170 responses from groups with low (n=21; AQ<11) and high (n=22; AQ>22) Autism Spectrum Quotient (AQ) scores, in response to low (LSF) and high (HSF) spatially filtered fearful and neutral face stimuli presented on red and green backgrounds. For the LSF stimuli, the low AQ group correctly identified fearful expressions more often when presented on a red compared with a green background. The low AQ group also showed that red backgrounds reduced the effect of LSF fearful expressions on P100 amplitudes. In contrast, for the high AQ group, background colour did not significantly alter P100 responses to LSF stimuli. Interestingly, red backgrounds reduced the effects of HSF stimuli for the high AQ group. The effects of background colour on LSF and HSF facial emotion responses were not evident for the N170 component. Our findings suggest that presenting face stimuli on a red background alters both magnocellular and parvocellular contributions to the P100 waveform, and that these effects differ for groups with low and high autistic tendencies. In addition, a theoretical model for explaining the temporal differences in facial emotion processing for low and high AQ groups is proposed.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Johanna Brustkern ◽  
Markus Heinrichs ◽  
Mirella Walker ◽  
Bastian Schiller

Abstract
Trust is essential in initiating social relationships. Due to the differential evolution of sex hormones as well as the fitness burdens of producing offspring, evaluations of a potential mating partner’s trustworthiness likely differ across sexes. Here, we explore unknown sex-specific effects of facial attractiveness and threat on trusting other-sex individuals. Ninety-three participants (singles; 46 women) attracted to the other sex performed an incentivized trust game. They had to decide whether to trust individuals of the other sex represented by a priori-created face stimuli gradually varying in the intensities of both attractiveness and threat. Male and female participants trusted attractive and unthreatening-looking individuals more often. However, whereas male participants’ trust behavior was affected equally by attractiveness and threat, female participants’ trust behavior was more strongly affected by threat than by attractiveness. This indicates that a partner’s high facial attractiveness might compensate for high facial threat in male but not female participants. Our findings suggest that men and women prioritize attractiveness and threat differentially, with women paying relatively more attention to threat cues, which inversely signal parental investment, than to attractiveness cues, which signal reproductive fitness. This difference might be attributable to an evolutionary, biologically sex-specific decision regarding parental investment and reproduction behavior.


Author(s):  
Amy Dawel ◽  
Elizabeth J. Miller ◽  
Annabel Horsburgh ◽  
Patrice Ford

i-Perception ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 204166952110563
Author(s):  
Ronja Mueller ◽  
Sandra Utz ◽  
Claus-Christian Carbon ◽  
Tilo Strobach

Recognizing familiar faces requires a comparison of the incoming perceptual information with mental face representations stored in memory. Mounting evidence indicates that these representations adapt quickly to recently perceived facial changes. This becomes apparent in face adaptation studies, where exposure to a strongly manipulated face alters the perception of subsequent face stimuli: original, non-manipulated face images then appear to be manipulated, while images similar to the adaptor are perceived as “normal.” The face adaptation paradigm therefore serves as a good tool for investigating the information stored in facial memory. So far, most face adaptation studies have focused on configural (second-order relational) face information, largely neglecting non-configural face information (i.e., information that does not affect spatial face relations), such as color, although several (non-adaptation) studies have demonstrated the importance of color information in face perception and identification. The present study therefore focuses on adaptation effects for saturation, a type of color information, and compares the results with previous findings on brightness. The study reveals differences in the effect pattern and robustness, indicating that adaptation effects vary considerably even within the same class of non-configural face information.


2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Yuri Kawaguchi ◽  
Koyo Nakamura ◽  
Masaki Tomonaga ◽  
Ikuma Adachi

Impaired face recognition for certain face categories, such as faces of other species or other age classes, is known in both humans and non-human primates. A previous study found that it is more difficult for chimpanzees to differentiate infant faces than adult faces. Infant faces of chimpanzees differ from adult faces in both shape and colour, the latter being an especially salient cue for chimpanzees. Impaired differentiation of infant faces may therefore be due to their specific coloration. In the present study, we investigated which feature of infant faces has a greater effect on face identification difficulty. Adult chimpanzees were tested using a matching-to-sample task with four types of face stimuli whose shape and colour were independently manipulated to be either infant-like or adult-like. Chimpanzees’ discrimination performance decreased when they matched faces with infant coloration, regardless of shape. This study is the first to demonstrate the impairing effect of infantile coloration on face recognition in non-human primates, suggesting that the face recognition strategies of humans and chimpanzees overlap, as both species show proficient face recognition only for certain face colours.

