facial cues
Recently Published Documents


TOTAL DOCUMENTS: 236 (five years: 74)

H-INDEX: 30 (five years: 3)

Author(s): Marina L. Butovskaya, Anna Mezentseva, Audax Mabulla, Todd K. Shackelford, Katrin Schaefer, ...

2021
Author(s): Mark Paul, Sarah Gaither, William Darity

People’s social class, and perceptions of their social class, are embedded in an institutional context with important ramifications for life opportunities and outcomes. Research on first impressions has found that people are relatively accurate at judging a variety of traits, such as perceived sexual orientation and income, but there is little research investigating whether people are also accurate at judging wealth or class. In this article, we first investigate whether people understand the distinction between income and wealth (Study 1). Then, using a novel dataset, we examine whether people can accurately identify the income and wealth levels of individuals across racial and ethnic groups from facial cues alone (Study 2). We find that participants understand the meaning of income, but not of wealth. We also find that perceivers categorize class more accurately than chance using minimal facial cues, but are particularly inaccurate when categorizing high-income and high-wealth Black and Latinx subjects.


2021
Vol 8 (1)
Author(s): Andry Chowanda

Social interactions are important for us humans as social creatures, and emotions play an important part in them: they convey meaning alongside the utterances spoken to interlocutors. Automatic facial expression recognition is one technique for capturing, recognising, and understanding an interlocutor's emotions. Many techniques have been proposed to increase the accuracy of emotion recognition from facial cues. Architectures such as convolutional neural networks demonstrate promising results for emotion recognition; however, most current convolutional models require enormous computational power to train and run. This research aims to build compact networks with depthwise separable layers while maintaining performance. The proposed architecture was compared against three similar architectures on three datasets. The results show that the proposed architecture performed best among the architectures tested, achieving up to 13% higher accuracy while being 6–71% smaller than the alternatives. The best testing accuracy achieved by the architecture was 99.4%.
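The compactness claim rests on the parameter savings of depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter followed by a 1×1 pointwise channel mix. A minimal sketch of the parameter arithmetic (the layer sizes below are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise step: one k x k spatial filter per input channel,
    # pointwise step: a 1x1 convolution mixing c_in channels into c_out.
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 64 input channels, 128 output channels.
standard = conv_params(3, 64, 128)       # 73728 parameters
separable = separable_params(3, 64, 128)  # 8768 parameters
print(standard, separable, round(standard / separable, 1))
```

For this layer the separable variant uses roughly an eighth of the parameters, which is the kind of saving that lets such networks stay small without necessarily sacrificing accuracy.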


PLoS ONE
2021
Vol 16 (10)
pp. e0258470
Author(s): Elyssa M. Barrick, Mark A. Thornton, Diana I. Tamir

Faces are one of the key ways that we obtain social information about others. They allow people to identify individuals, understand conversational cues, and make judgements about others’ mental states. When the COVID-19 pandemic hit the United States, widespread mask-wearing practices were implemented, causing a shift in the way Americans typically interact. This introduction of masks into social exchanges posed a potential challenge: how would people make these important inferences about others when a large source of information was no longer available? We conducted two studies investigating the impact of mask exposure on emotion perception. In particular, we measured how participants used facial landmarks (visual cues) and expressed valence and arousal (affective cues) to make similarity judgements about pairs of emotion faces. Study 1 found that in August 2020, participants with higher levels of mask exposure relied on cues from the eyes to a greater extent when judging emotion similarity than participants with less mask exposure. Study 2 measured emotion perception in the same group of participants in both April and September 2020 (before and after widespread mask adoption) to examine changes in the use of facial cues over time. Results revealed an overall increase in the use of visual cues from April to September. Further, as mask exposure increased, people with the most social interaction showed the largest increase in the use of visual facial cues. These results provide evidence of a shift in how people process faces: the more people interact with others who are wearing masks, the more they learn to focus on visual cues from the eye area of the face.


2021
Vol 11 (1)
Author(s): Philip Furley, Florian Klingner, Daniel Memmert

The present research attempted to extend prior work showing that thin slices of the pre-performance nonverbal behavior (NVB) of professional darts players give observers valid information about subsequent performance tendencies. Specifically, we investigated what kinds of nonverbal cues were associated with success and informed thin-slice ratings. Participants (N = 61) were asked to estimate the performance of a random sample of videos showing the preparatory NVB of professional darts players (N = 47) either performing well (470 clips) or poorly (470 clips). Preparatory NVB was assessed via preparation times and Active Appearance Modeling using Noldus FaceReader. Results showed that observers could distinguish between good and poor performance based on thin slices of preparatory NVB (p = 0.001, d = 0.87). Further analyses showed that facial expressions prior to poor performance showed more arousal (p = 0.011, ηp² = 0.10), sadness (p = 0.040, ηp² = 0.04), and anxiety (p = 0.009, ηp² = 0.09), and that preparation times were shorter prior to poor performance than prior to good performance (p = 0.001, ηp² = 0.36). Lens model analyses showed preparation times (p = 0.001, rho = 0.18) as well as neutral (p = 0.001, rho = 0.13), sad (rho = 0.12), and aroused facial expressions (p = 0.001, rho = 0.11) to be correlated with observers’ performance ratings. Hence, preparation times and facial cues associated with a player’s level of arousal, neutrality, and sadness seem to be valid nonverbal cues that observers use to infer subsequent perceptual-motor performance.


2021
pp. 174702182110471
Author(s): Yongna Li, Ziwei Chen, Xun Liu, Yue Qi

People can make trustworthiness judgements based on facial characteristics, but previous findings on whether facial age influences interpersonal trust are inconsistent. Using the trust game, the current study investigated how facial age interacts with attractiveness and emotional expression in trustworthiness judgements. In Experiments 1 and 2, younger participants were asked to invest in either younger or older faces shown for 2000 ms and 33 ms, respectively. The results showed that people trust older faces more than younger faces. There was also an interaction between facial age and attractiveness: participants invested more money in older faces than in younger faces only when they perceived the faces to be less attractive. However, the interaction between facial age and emotional expression was inconsistent across the two experiments. Participants invested more money in older faces shown for 2000 ms when they perceived happy and sad emotions, but invested more money in older faces shown for 33 ms only when they perceived the happy emotion. These results reveal that people base trustworthiness judgements on multiple facial cues when viewing strangers of different ages.


2021
Author(s): Bastian Jaeger

Trustworthiness perceptions are based on facial features that are seen as trustworthy by most people (e.g., resemblance to a smile) and on features that are seen as trustworthy only by a specific perceiver (e.g., resemblance to a loved one). In other words, trustworthiness perceptions reflect consensual and idiosyncratic judgment components. Yet, when examining the influence of facial cues on social decision-making, previous studies have almost exclusively focused on consensual judgments, ignoring the potential role of idiosyncratic judgments. Results of two studies, with 491 participants making 15,656 trust decisions, showed that consensual and idiosyncratic trustworthiness judgments independently influenced participants’ likelihood to trust an interaction partner, with no significant difference in the magnitude of the effects. These results highlight the need to consider both consensual and idiosyncratic judgments. Previous work, which focused only on the effect of consensual judgments, may have underestimated the overall influence of trustworthiness perceptions on social decision-making.
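One common way to separate these two components (a sketch of the general idea, not the paper's actual analysis code) is to treat each face's mean rating across all perceivers as the consensual component, and each perceiver's deviation from that mean as their idiosyncratic component:

```python
def decompose(ratings):
    """Split a perceiver x face matrix of trustworthiness ratings into
    consensual (per-face mean across perceivers) and idiosyncratic
    (each perceiver's deviation from that mean) components."""
    n_raters = len(ratings)
    n_faces = len(ratings[0])
    consensual = [
        sum(row[j] for row in ratings) / n_raters for j in range(n_faces)
    ]
    idiosyncratic = [
        [ratings[i][j] - consensual[j] for j in range(n_faces)]
        for i in range(n_raters)
    ]
    return consensual, idiosyncratic

# Toy example: three perceivers judging two faces on a 1-7 scale.
ratings = [[6, 2], [4, 2], [5, 5]]
consensual, idiosyncratic = decompose(ratings)
print(consensual)     # [5.0, 3.0]
print(idiosyncratic)  # [[1.0, -1.0], [-1.0, -1.0], [0.0, 2.0]]
```

Both components can then be entered as separate predictors of trust decisions, which is the kind of design that lets their effects be estimated independently.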

