Smiling By Way of Zygomatic Electrical Stimulation: Investigating the Facial Feedback Hypothesis

2021
pp. 16-22
Author(s):  
Laura Warren

While it is widely accepted that affective states precede facial expressions, the facial feedback hypothesis (FFH) proposes the inverse. The FFH postulates that facial muscle region activity (e.g., smiling or frowning) directly influences the experience of emotion. The purpose of the present study was to evaluate the validity of the FFH: specifically, whether smiling independently enhances positive mood.

2020
Author(s):
Nicholas Alvaro Coles
Lowell Gaertner
Brooke Frohlich
Jeff T. Larsen
Dana Basnight-Brown

The facial feedback hypothesis suggests that an individual’s facial expressions can influence their emotional experience (e.g., that smiling can make one feel happier). However, a recurring concern is that demand characteristics drive this effect. Across three experiments (n = 250, 192, 131), university students in the United States and Kenya posed happy, angry, and neutral expressions and self-reported their emotions following a demand characteristics manipulation. To manipulate demand characteristics, we either (a) told participants we hypothesized their poses would influence their emotions, (b) told participants we hypothesized their poses would not influence their emotions, or (c) did not tell participants a hypothesis. Results indicated that demand characteristics moderated the effects of facial poses on self-reported emotion. However, facial poses still influenced self-reported emotion when participants were told we hypothesized their poses would not influence emotion. These results indicate that facial feedback effects are not solely an artifact of demand characteristics.
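As an illustration, a moderation test of this kind can be expressed as a pose × demand interaction in a linear model, with a follow-up simple-effects test inside the anti-demand condition. The Python sketch below is a minimal version under stated assumptions; the data frame, variable names, and simulated values are all hypothetical, not the authors' materials or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant, with the posed expression,
# the demand-characteristics condition, and self-reported positive affect.
rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "pose": rng.choice(["happy", "neutral", "angry"], n),
    "demand": rng.choice(["pro", "anti", "none"], n),
    "affect": rng.normal(5, 1, n),
})

# Moderation is tested as a pose x demand interaction.
model = smf.ols("affect ~ C(pose) * C(demand)", data=df).fit()
print(model.summary())

# Key follow-up: does the pose effect survive among participants told
# the researchers expected NO effect (the "anti" condition)?
anti = df[df["demand"] == "anti"]
print(smf.ols("affect ~ C(pose)", data=anti).fit().summary())
```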


2019
Author(s):
Nicholas Alvaro Coles
David Scott March
Fernando Marmolejo-Ramos
Nwadiogo Chisom Arinze
Izuchukwu Lawrence Gabriel Ndukaihe
...

The facial feedback hypothesis suggests that an individual’s subjective experience of emotion is influenced by their facial expressions. Researchers, however, currently face conflicting narratives about whether this hypothesis is valid. A large replication effort consistently failed to replicate a seminal demonstration of the facial feedback hypothesis, but meta-analysis suggests the effect is real. To address this uncertainty, a large team of researchers—some advocates of the facial feedback hypothesis, some critics, and some without strong beliefs either way—collaborated to specify the best ways to test this hypothesis. Two pilot tests suggested that smiling could both magnify ongoing feelings of happiness and initiate feelings of happiness in otherwise non-emotional scenarios. Next, multiple research sites will perform more extensive tests to examine whether there is a replicable facial feedback effect.


2012
Vol 30 (4)
pp. 361-367
Author(s):
Lisa P. Chan
Steven R. Livingstone
Frank A. Russo

We examined facial responses to audio-visual presentations of emotional singing. Although many studies have found evidence for facial responses to emotional stimuli, most have involved static facial expressions and none have involved singing. Singing represents a dynamic, ecologically valid emotional stimulus that places unique, emotion-independent demands on orofacial motion related to pitch and linguistic production. Observers’ facial muscles were recorded with electromyography while they saw and heard recordings of a vocalist’s performance sung with different emotional intentions (happy, neutral, and sad). Audio-visual presentations successfully elicited facial mimicry in observers that was congruent with the performer’s intended emotions. Happy singing performances elicited increased activity in the zygomaticus major muscle region of observers, while sad performances evoked increased activity in the corrugator supercilii muscle region. These spontaneous facial muscle responses occurred within the first three seconds following the onset of video presentation, indicating that emotional nuances of singing performances can elicit dynamic facial responses from observers.
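To make the measurement pipeline concrete, the sketch below shows one conventional way to quantify such an EMG response in Python: remove the DC offset and rectify the raw signal, smooth it with a short moving average, and express the first seconds after stimulus onset as a change from the pre-stimulus baseline. This is a generic illustration under stated assumptions, not the authors' processing code; sampling rate, window lengths, and channel semantics are hypothetical.

```python
import numpy as np

def emg_response(signal, fs, onset_s, baseline_s=1.0, window_s=3.0):
    """Quantify a facial EMG response relative to a pre-stimulus baseline.

    signal : 1-D raw EMG trace (e.g., a zygomaticus or corrugator channel)
    fs     : sampling rate in Hz
    onset_s: stimulus (video) onset in seconds
    """
    rectified = np.abs(signal - signal.mean())        # offset removal + rectification
    k = max(1, int(0.1 * fs))                         # 100-ms moving-average window
    smoothed = np.convolve(rectified, np.ones(k) / k, mode="same")

    onset = int(onset_s * fs)
    baseline = smoothed[onset - int(baseline_s * fs):onset].mean()
    window = smoothed[onset:onset + int(window_s * fs)].mean()
    return window - baseline                          # positive = mimicry-like increase
```

Applied to a zygomaticus channel during a happy performance, a positive return value over the first three seconds would correspond to the congruent mimicry pattern reported above.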


2017
Author(s):
Nicholas Alvaro Coles
Jeff T. Larsen
Heather Lench

The facial feedback hypothesis suggests that an individual’s experience of emotion is influenced by feedback from their facial movements. To evaluate the cumulative evidence for this hypothesis, we conducted a meta-analysis on 286 effect sizes derived from 138 studies that manipulated facial feedback and collected emotion self-reports. Using random-effects meta-regression with robust variance estimates, we found that the overall effect of facial feedback was significant, but small. Results also indicated that feedback effects are stronger in some circumstances than others. We examined 12 potential moderators, and three were associated with differences in effect sizes:

1. Type of emotional outcome: Facial feedback influenced emotional experience (e.g., reported amusement) and, to a greater degree, affective judgments of a stimulus (e.g., the funniness of a cartoon). Three publication bias detection methods did not reveal evidence of publication bias in studies examining the effects of facial feedback on emotional experience, but all three methods revealed evidence of publication bias in studies examining affective judgments.
2. Presence of emotional stimuli: Facial feedback effects on emotional experience were larger in the absence of emotionally evocative stimuli (e.g., cartoons).
3. Type of stimuli: When participants were presented with emotionally evocative stimuli, facial feedback effects were larger in the presence of some types of stimuli (e.g., emotional sentences) than others (e.g., pictures).

The available evidence supports the facial feedback hypothesis’ central claim that facial feedback influences emotional experience, although these effects tend to be small and heterogeneous.
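The pooling step of such a meta-analysis can be illustrated with the standard DerSimonian-Laird random-effects estimator. The Python sketch below is a minimal version of that estimator only; the paper additionally uses meta-regression with robust variance estimates, which this sketch does not reproduce.

```python
import numpy as np

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    effects, variances : per-study effect sizes and their sampling variances
    Returns (pooled effect, standard error, tau^2).
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)               # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2
```

Each study contributes an effect size and its sampling variance; `tau2` captures the between-study heterogeneity that the abstract describes as substantial.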


2018
Vol 115 (43)
pp. E10013-E10021
Author(s):
Chaona Chen
Carlos Crivelli
Oliver G. B. Garrod
Philippe G. Schyns
José-Miguel Fernández-Dols
...

Real-world studies show that the facial expressions produced during pain and orgasm—two different and intense affective experiences—are virtually indistinguishable. However, this finding is counterintuitive, because facial expressions are widely considered to be a powerful tool for social interaction. Consequently, debate continues as to whether the facial expressions of these extreme positive and negative affective states serve a communicative function. Here, we address this debate from a novel angle by modeling the mental representations of dynamic facial expressions of pain and orgasm in 40 observers in each of two cultures (Western, East Asian) using a data-driven method. Using a complementary approach of machine learning, an information-theoretic analysis, and a human perceptual discrimination task, we show that mental representations of pain and orgasm are physically and perceptually distinct in each culture. Cross-cultural comparisons also revealed that pain is represented by similar face movements across cultures, whereas orgasm showed distinct cultural accents. Together, our data show that mental representations of the facial expressions of pain and orgasm are distinct, which calls their presumed nondiagnosticity into question and instead suggests that they could serve communicative purposes. Our results also highlight the potential role of cultural and perceptual factors in shaping the mental representation of these facial expressions. We discuss new research directions to further explore their relationship to the production of facial expressions.
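As a rough illustration of the machine-learning component, testing whether two expression categories are physically distinct can be framed as a cross-validated classification problem over face-movement features. Everything in the Python sketch below is hypothetical (the feature dimensions, sample counts, and random data merely stand in for the study's dynamic face-movement models); above-chance decoding accuracy is what "physically distinct" would look like in this setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: each row summarizes one modeled dynamic
# expression as face-movement feature amplitudes; labels mark the category.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 42))      # 80 expression models x 42 features (illustrative)
y = rng.integers(0, 2, size=80)    # 0 = pain, 1 = orgasm

# Cross-validated accuracy reliably above 0.5 would indicate that the two
# mental representations are physically separable.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```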


Author(s):
Oryina Kingsley Akputu
Kah Phooi Seng
Yun Li Lee

This chapter describes how a machine vision approach can be used to track emotion-based learning feedback for enhanced teaching and learning with Intelligent Tutoring Systems (ITS). The chapter focuses on analyzing learners’ emotions to show how affective states support personalization and traceability of learning feedback. It achieves this goal in three ways: (1) by presenting a comprehensive review of adaptive educational learning systems, particularly those inspired by machine vision approaches; (2) by proposing an affective model for monitoring learners’ emotions and engagement with educational learning systems; and (3) by presenting a case-based technique as an experimental prototype for the proposed affective model, in which students’ facial expressions are tracked in the course of studying a composite video lecture. Results of the experiments indicate the superiority of such emotion-aware systems over emotion-unaware ones, with a significant performance increment of 71.4%.
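A minimal sketch of such a frame-by-frame expression-tracking loop in Python with OpenCV is shown below. The face detector is OpenCV's bundled Haar cascade; `classify_emotion` and the video filename are placeholders for the chapter's trained model and data, which are not reproduced here.

```python
import cv2

# OpenCV ships this pretrained frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_img):
    """Placeholder for a trained facial-expression classifier."""
    return "neutral"

cap = cv2.VideoCapture("lecture_session.mp4")  # hypothetical recorded session
log = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        log.append(classify_emotion(gray[y:y + h, x:x + w]))
cap.release()

# `log` now holds one emotion label per detected face per frame, which a
# tutoring system could aggregate into engagement feedback over the lecture.
```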

