Are Facial Displays Social? Situational Influences in the Attribution of Emotion to Facial Expressions

2002 ◽ Vol 5 (2) ◽ pp. 119-124
Author(s): José-Miguel Fernández-Dols, Pilar Carrera, James A. Russell

Observers are remarkably consistent in attributing particular emotions to particular facial expressions, at least in Western societies. Here, we suggest that this consistency is an instance of the fundamental attribution error. We therefore hypothesized that a small variation in the procedure of the recognition study, one that emphasizes situational information, would change the participants' attributions. In two studies, participants were asked to judge whether a prototypical "emotional facial expression" was more plausibly associated with a social-communicative situation (one involving communication to another person) or with an equally emotional but nonsocial situation. Participants were more likely to associate each facial display with the social than with the nonsocial situation. This result held across all emotions presented (happiness, fear, disgust, anger, and sadness) and for both Spanish and Canadian participants.

2015 ◽ Vol 5
Author(s): Michal Olszanowski, Grzegorz Pochwatko, Krzysztof Kuklinski, Michal Scibor-Rylski, Peter Lewinski, ...

2007 ◽ Vol 21 (2) ◽ pp. 100-108
Author(s): Michela Balconi, Claudio Lucchiari

Abstract. In this study we analyze whether facial expression recognition is marked by specific event-related potential (ERP) correlates and whether conscious and unconscious elaboration of emotional facial stimuli are qualitatively different processes. ERPs elicited by supraliminal and subliminal (10 ms) stimuli were recorded while subjects viewed emotional facial expressions of four emotions or neutral stimuli. Two ERP effects (N2 and P3) were analyzed in terms of their peak amplitude and latency variations. An emotional specificity was observed for the negative deflection N2, whereas P3 was not affected by the content of the stimulus (emotional or neutral). Unaware information processing proved to be quite similar to aware processing in terms of peak morphology but not of latency. A major result of this research was that unconscious stimulation produced a more delayed peak variation than conscious stimulation did. Also, a more posterior distribution of the ERP was found for N2 as a function of the emotional content of the stimulus. By contrast, cortical lateralization (right/left) was not correlated with conscious/unconscious stimulation. We discuss the functional significance of our results in terms of subliminal processing and emotion recognition.


2020 ◽ Vol 41 (2) ◽ pp. 162-182
Author(s): Antonio González-Rodríguez, Marta Godoy-Giménez, Fernando Cañadas, Pablo Sayans-Jiménez, Angeles F. Estévez

Abstract. Schizotypy is defined as a combination of traits qualitatively similar to those found in schizophrenia, though of lesser severity, that can be found in the nonclinical population. Some studies suggest that people with schizotypal traits have problems recognising emotional facial expressions. In this research, we further explore this issue and investigate, for the first time, whether the differential outcomes procedure (DOP) may improve the recognition of emotional facial expressions. Participants were students who completed the ESQUIZO-Q-A and were assigned to two groups, high schizotypy (HS) and low schizotypy (LS). They then performed a task in which they had to recognise the emotional facial expression of a set of faces. Participants in the HS and LS groups did not differ in their performance. Importantly, all participants showed better recognition of emotional facial expressions when they were trained with differential outcomes. This novel finding might be relevant for clinical practice, since the DOP appears to be a tool that may improve the recognition of emotional facial expressions.


2020 ◽ Vol 6 (4) ◽ pp. 277-288
Author(s): Philip To Lai

Purpose – The purpose of this study is to investigate the social and affective aspects of communication in school-age children with high functioning autism (HFA) and school-age children with Williams syndrome (WS) using a micro-analytic approach. Social communication is important for success at home, school, work and in the community. Lacking the ability to effectively process and convey information can lead to deficits in social communication. Individuals with HFA and individuals with WS often have significant impairments in social communication that impact their relationships with others. Currently, little is known about how school-age children use and integrate verbal and non-verbal behaviors in the context of a social interaction.
Design/methodology/approach – A micro-analytic coding scheme was devised to reveal which channels children use to convey information. Language, eye gaze behaviors and facial expressions of the child were coded during a dyadic social interaction. These behaviors were coded throughout the entire interview, as well as when the child was the speaker and when the child was the listener.
Findings – Language results continue to pose problems for the HFA and WS groups compared to their typically developing (TD) peers. For non-verbal communicative behaviors, a qualitative difference in the use of eye gaze was found between the HFA and WS groups. For facial expression, the WS and TD groups produced more facial expressions than the HFA group.
Research limitations/implications – No differences were observed in the HFA group when playing different roles in a conversation, suggesting they are not as sensitive to the social rules of a conversation as their peers.
Originality/value – In this study, two non-verbal behaviors were assessed in multiple contexts: the entire biographical interview, when the child was the speaker and when the child was the listener. These social and expressive measures give an indication of how expressive school-age children are and provide information on their attention, affective state and communication skills when conversing with an adult. Insights from this study add knowledge toward understanding social-communicative development in school-age children.


2015 ◽ Vol 22 (9) ◽ pp. 890-899
Author(s): Giovanna Mioni, Lucia Meligrana, Simon Grondin, Francesco Perini, Luigi Bartolomei, ...

Abstract. Previous studies have demonstrated that emotional facial expressions alter temporal judgments. Moreover, while some studies conducted with Parkinson's disease (PD) patients suggest dysfunction in the recognition of emotional facial expressions, others have shown a dysfunction in time perception. In the present study, we investigated the magnitude of temporal distortions caused by the presentation of emotional facial expressions (anger, shame, and neutral) in PD patients and controls. Twenty-five older adults with PD and 17 healthy older adults took part. PD patients were divided into two sub-groups, with and without mild cognitive impairment (MCI), based on their neuropsychological performance. Participants were tested with a time bisection task with standard intervals lasting 400 ms and 1600 ms. The effect of facial emotional stimuli on time perception was evident in all participants, yet the effect was greater for PD-MCI patients. Furthermore, PD-MCI patients were more likely to underestimate long and overestimate short temporal intervals than PD-non-MCI patients and controls. The temporal impairment in PD-MCI patients seems to be mainly caused by a memory dysfunction. (JINS, 2016, 22, 890–899)
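For readers unfamiliar with the time bisection paradigm, the key summary statistic is the bisection point: the probe duration judged "short" and "long" equally often, which shifts when intervals are over- or underestimated. A minimal sketch of how it can be estimated by linear interpolation follows; the probe durations and response counts are hypothetical illustrations, not data from the study:

```python
# Probe durations (ms) spanning the 400-1600 ms standard intervals.
probes = [400, 600, 800, 1000, 1200, 1400, 1600]

# Hypothetical counts of "long" responses out of 20 trials per probe,
# shaped like a typical sigmoid response pattern.
long_counts = {400: 1, 600: 3, 800: 7, 1000: 11, 1200: 16, 1400: 18, 1600: 19}
n_trials = 20

# Proportion of "long" responses at each probe duration.
p_long = {d: long_counts[d] / n_trials for d in probes}

def bisection_point(durations, p):
    """Linearly interpolate the duration at which p('long') = 0.5."""
    for d0, d1 in zip(durations, durations[1:]):
        if p[d0] <= 0.5 <= p[d1]:
            frac = (0.5 - p[d0]) / (p[d1] - p[d0])
            return d0 + frac * (d1 - d0)
    return None  # response curve never crosses 0.5

bp = bisection_point(probes, p_long)
print(f"bisection point = {bp:.0f} ms")  # → bisection point = 950 ms
```

A bisection point below the geometric mean of the standards indicates a tendency to judge intervals as "long" (overestimation); a shift upward indicates the opposite, which is how distortions like those reported for PD-MCI patients are quantified.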


2020
Author(s): Jonathan Yi, Philip Pärnamets, Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography (EMG) signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruently) or responding opposite (incongruently) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behavior, and replicated earlier findings of faster and more accurate responses in congruent vs. incongruent conditions. Moreover, participants performed better on trials when confronted with smiling, as compared to frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
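The drift-diffusion framework mentioned above models a two-choice response as noisy evidence accumulating toward a decision threshold, with the drift rate capturing how quickly evidence builds. A miniature simulation can show why a higher drift rate on congruent trials would reproduce the faster responses reported; the drift rates and noise level below are hypothetical, chosen only for illustration, and are not the parameters the authors fit:

```python
import random

def simulate_ddm(drift, threshold=1.0, noise=0.1, dt=0.001, max_t=3.0):
    """Simulate one drift-diffusion trial; return (upper_bound_hit, RT in s).

    Evidence starts at 0 and accumulates with the given drift plus
    Gaussian noise (scaled by sqrt(dt)) until it crosses +threshold
    or -threshold, or until max_t elapses.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (x >= threshold, t)

random.seed(1)
# Hypothetical drift rates: congruent trials are assumed to accumulate
# evidence faster than incongruent trials.
congruent_rts = [simulate_ddm(drift=1.5)[1] for _ in range(500)]
incongruent_rts = [simulate_ddm(drift=0.8)[1] for _ in range(500)]

mean_c = sum(congruent_rts) / len(congruent_rts)
mean_i = sum(incongruent_rts) / len(incongruent_rts)
print(f"mean RT congruent:   {mean_c:.3f} s")
print(f"mean RT incongruent: {mean_i:.3f} s")
```

With these parameters the congruent condition reliably yields shorter mean reaction times, mirroring the congruency effect in the behavioral data; fitting such a model to real responses is what lets drift rate and threshold be separated as explanations.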


2020
Author(s): Joshua W Maxwell, Eric Ruthruff, Michael Joseph

Are facial expressions of emotion processed automatically? Some authors have not found this to be the case (Tomasik et al., 2009). Here we revisited the question with a novel experimental logic – the backward correspondence effect (BCE). In three dual-task studies, participants first categorized a sound (Task 1) and then indicated the location of a target face (Task 2). In Experiment 1, Task 2 required participants to search for one facial expression of emotion (angry or happy). We observed positive BCEs, indicating that facial expressions of emotion bypassed the central attentional bottleneck and thus were processed in a capacity-free, automatic manner. In Experiment 2, we replicated this effect but found that morphed emotional expressions (as used by Tomasik et al.) were not processed automatically. In Experiment 3, we observed similar BCEs for another type of face processing previously shown to be capacity-free – identification of familiar faces (Jung et al., 2013). We conclude that facial expressions of emotion are identified automatically when sufficiently unambiguous.


2021 ◽ pp. 174702182199299
Author(s): Mohamad El Haj, Emin Altintas, Ahmed A Moustafa, Abdel Halim Boudoukha

Future thinking, which is the ability to project oneself forward in time to pre-experience an event, is intimately associated with emotions. We investigated whether emotional future thinking can activate emotional facial expressions. We invited 43 participants to imagine future scenarios, cued by the words "happy," "sad," and "city." Future thinking was video recorded and analysed with facial analysis software to classify whether participants' facial expressions (i.e., happy, sad, angry, surprised, scared, disgusted, and neutral) were neutral or emotional. Analysis demonstrated higher levels of happy facial expressions during future thinking cued by the word "happy" than by "sad" or "city." In contrast, higher levels of sad facial expressions were observed during future thinking cued by the word "sad" than by "happy" or "city." Higher levels of neutral facial expressions were observed during future thinking cued by the word "city" than by "happy" or "sad." In all three conditions, levels of neutral facial expressions were high compared with happy and sad facial expressions. Together, emotional future thinking, at least for future scenarios cued by "happy" and "sad," seems to trigger the corresponding facial expression. Our study provides an original physiological window into the subjective emotional experience during future thinking.

