neutral expression
Recently Published Documents

TOTAL DOCUMENTS: 44 (five years: 19)
H-INDEX: 10 (five years: 1)

Perception, 2021, pp. 030100662110270
Author(s): Kennon M. Sheldon, Ryan Goffredi, Mike Corcoran

Facial expressions of emotion have important communicative functions. It is likely that mask-wearing during pandemics disrupts these functions, especially for expressions defined by activity in the lower half of the face. We tested this by asking participants to rate both Duchenne smiles (DSs; defined by the mouth and eyes) and non-Duchenne or “social” smiles (SSs; defined by the mouth alone), within masked and unmasked target faces. As hypothesized, masked SSs were rated much lower in “a pleasant social smile” and much higher in “a merely neutral expression,” compared with unmasked SSs. Essentially, masked SSs became nonsmiles. Masked DSs were still rated as very happy and pleasant, although significantly less so than unmasked DSs. Masked DSs and SSs were both rated as displaying more disgust than the unmasked versions.


2021, Vol. 12
Author(s): Yoonji Kim, Diana Van Lancker Sidtis, John J. Sidtis

Recent studies have demonstrated that details of verbal material are retained in memory. Further, converging evidence points to a memory-enhancing effect of emotion such that memory for emotional events is stronger than memory for neutral events. Building upon this work, it appears likely that verbatim sentence forms will be remembered better when tinged with emotional nuance. Most previous studies have focused on single words. The current study examines the role of emotional nuance in the verbatim retention of longer sentences in written material. In this study, participants silently read transcriptions of spontaneous narratives, half of which had been delivered within a context of emotional expression and the other half with neutral expression. Transcripts were taken from selected narratives that received the highest, most extreme ratings, neutral or emotional. Participants identified written excerpts in a yes/no recognition test. Results revealed that participants’ verbatim memory was significantly greater for excerpts from emotionally nuanced narratives than from neutral narratives. It is concluded that the narratives, pre-rated as emotional or neutral, drove this effect of emotion on verbatim retention. These findings expand a growing body of evidence for a role of emotion in memory, and lend support to episodic theories of language and the constructionist account.
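Yes/no recognition performance of the kind used above is commonly summarised with the signal-detection statistic d′, which separates sensitivity from response bias. The abstract reports recognition differences, not d′, so this is an illustrative sketch only; the function name and the log-linear correction convention are assumptions, not the authors' method.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' from yes/no recognition counts.

    Old items answered "yes" are hits; new items answered "yes" are
    false alarms. A log-linear correction keeps rates away from 0 and 1,
    which would otherwise yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

A participant at chance (equal hit and false-alarm rates) gets d′ = 0; better memory for emotional than neutral excerpts would show up as a higher d′ for the emotional condition.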


2021, Vol. 12
Author(s): Huazhan Yin, Xiaobing Cui, Youling Bai, Gege Cao, Li Zhang, ...

Little is known about the electrophysiological basis of the effect of threat-related emotional stimuli with different motivational directions on duration perception. Event-related potentials (ERPs) were therefore employed to examine the effects of angry and fearful expressions on the perception of different durations (490–910 ms). Behavioral results showed a greater underestimation of the duration of angry expressions (approach-motivated negative stimuli) than of fearful expressions (withdrawal-motivated negative stimuli), compared with neutral expressions. ERP results showed that the area of the contingent negative variation (CNV) increased progressively from angry to fearful to neutral expressions. These results indicate that specific electrophysiological mechanisms may underlie the attentional effects of angry and fearful expressions on timing. Specifically, compared with neutral expressions, both fearful and angry expressions are likely to draw attentional resources away from the timer, with angry expressions attracting more of those resources than fearful ones. The major contribution of the current study is to provide electrophysiological evidence for a fear vs. anger divergence in time perception and to demonstrate, beyond the behavioral level, that the categorization of threat-related emotions should be refined so as to highlight the adaptability of the human defense system.
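The CNV "area" compared above is an amplitude-by-time integral of the ERP waveform over an analysis window. A minimal sketch of such a computation, using generic trapezoidal integration (this is not the authors' pipeline, and the function and parameter names are hypothetical):

```python
def cnv_area(amplitudes_uv, sampling_rate_hz):
    """Area of an ERP segment via the trapezoidal rule.

    `amplitudes_uv`: baseline-corrected samples (microvolts) within the
    CNV window. Returns the signed integral in microvolt-seconds; the
    CNV itself is a negative-going wave, so its area is typically
    negative (or reported as absolute area)."""
    dt = 1.0 / sampling_rate_hz  # seconds between successive samples
    return sum((a + b) / 2.0 * dt
               for a, b in zip(amplitudes_uv, amplitudes_uv[1:]))
```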


2021, Vol. 12
Author(s): Juliana Gioia Negrão, Ana Alexandra Caldas Osorio, Rinaldo Focaccia Siciliano, Vivian Renne Gerber Lederman, Elisa Harumi Kozasa, ...

Background: This study developed a photo and video database of 4- to 6-year-olds expressing the seven induced and posed universal emotions and a neutral expression. Children participated in photo and video sessions designed to elicit the emotions, and the resulting images were assessed by independent judges in two rounds.

Methods: In the first round, two independent judges (1 and 2), experts in the Facial Action Coding System, analysed 3,668 facial expression stimuli from 132 children. The two judges reached 100% agreement on 1,985 stimuli (124 children), which were then selected for a second round of analysis by judges 3 and 4.

Results: In total, 1,985 stimuli (51% of the photographs) were retained from 124 participants (55% girls). A kappa index of 0.70 and an accuracy of 73% between experts were observed. Accuracy was lower for emotional expressions by 4-year-olds than by 6-year-olds. Happiness, disgust and contempt had the highest agreement. After a sub-analysis by all four judges, 100% agreement was reached on 1,381 stimuli, which compose the ChildEFES database, with 124 participants (59% girls) and 51% induced photographs. The number of stimuli for each emotion was: 87 for neutrality, 363 for happiness, 170 for disgust, 104 for surprise, 152 for fear, 144 for sadness, 157 for anger, and 183 for contempt.

Conclusions: The findings show that this photo and video database can facilitate research on the mechanisms involved in facial emotion recognition in early childhood, contributing to the understanding of the facial emotion recognition deficits that characterise several neurodevelopmental and psychiatric disorders.
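The inter-judge agreement statistic reported above (kappa = 0.70) is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance given each rater's label frequencies. A minimal sketch for two raters (not the authors' code; the function name is illustrative):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)
```

In practice one would use a library implementation (e.g. `sklearn.metrics.cohen_kappa_score`), which handles weighting schemes as well.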


Author(s): Keon M. Parsa, Ish A. Talati, Haijun Wang, Eugenia Chu, Lily Talakoub, ...

Abstract: The use of filters and editing tools for perfecting selfies is increasing. While some aesthetic experts have touted the ability of this technology to help patients convey their aesthetic goals, others have expressed concern about the unrealistic expectations that may arise when individuals digitally alter their own photographs into so-called "super-selfies." The aim of this study was to determine the changes that individuals seek when enhancing selfies. Twenty subjects participated between July 25 and September 24, 2019. Subjects had two sets of headshots taken (neutral and smiling) and were given an introduction to the Facetune2 app. Subjects received a digital copy of their photographs and were asked to download the free mobile app. After 1 week of trialing the different tools for enhancing their appearance, subjects submitted the edited photographs they judged most attractive. Changes in marginal reflex distance (MRD) 1 and 2, nose height and width, eyebrow height, facial width, skin smoothness, skin hue and saturation, and overall image brightness were recorded. Paired two-tailed t-tests were used to compare pre- and post-editing facial measurements. No statistically significant changes were identified in the analysis of the altered neutral-expression photographs. Analysis of all smiling photographs revealed that subjects increased their smile angle (right: +2.92 mm, p = 0.04; left: +3.58 mm, p < 0.001). When smiling photographs were assessed by gender, females significantly increased their MRD2 (right: +0.64 mm, p = 0.04; left: +0.74 mm, p = 0.05) and their smile angle (right: +1.90 mm, p = 0.03; left: +2.31 mm, p = 0.005), while also decreasing their nose height (−2.8 mm, p = 0.04). Males did not significantly alter any of the facial measurements assessed. This study identifies the types of changes that individuals seek when enhancing selfies and specifies the aspects of image adjustment that may be sought depending on a patient's gender.
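The paired two-tailed t-test used above compares each subject's pre- and post-editing measurements by testing whether the mean within-subject difference differs from zero. A minimal sketch of the t statistic (the names are illustrative; in practice one would use, e.g., `scipy.stats.ttest_rel`, which also returns the p-value from the t distribution with n − 1 degrees of freedom):

```python
import math

def paired_t(before, after):
    """Paired-samples t statistic for pre/post measurements."""
    assert len(before) == len(after) and len(before) > 1
    diffs = [b - a for a, b in zip(before, after)]  # post minus pre
    n = len(diffs)
    mean = sum(diffs) / n
    # sample variance of the differences (Bessel's correction)
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    std_err = math.sqrt(var / n)
    return mean / std_err
```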


PLoS ONE, 2021, Vol. 16 (1), pp. e0246001
Author(s): Patricia Fernández-Sotos, Arturo S. García, Miguel A. Vicente-Querol, Guillermo Lahera, Roberto Rodriguez-Jimenez, ...

The ability to recognise facial emotions is essential for successful social interaction. The most common stimuli used when evaluating this ability are photographs. Although these stimuli have proved valid, they do not offer the level of realism that virtual humans have achieved. The objective of the present paper is to validate a new set of dynamic virtual faces (DVFs) that mimic the six basic emotions plus the neutral expression. The faces are prepared to be observed with low and high dynamism, and from front and side views. For this purpose, 204 healthy participants, stratified by gender, age and education level, were recruited to assess their facial affect recognition with the set of DVFs. Response accuracy was compared with the already validated Penn Emotion Recognition Test (ER-40). Overall accuracy in identifying emotions was higher for the DVFs (88.25%) than for the ER-40 faces (82.60%). The percentage of hits for each DVF emotion was high, especially for the neutral expression and happiness. No statistically significant differences were found regarding gender, nor between younger adults and adults over 60 years. Moreover, hits increased for avatar faces showing greater dynamism, and for front views of the DVFs compared with their profile presentations. It is concluded that DVFs are as valid as standardised natural faces for accurately recreating human-like facial expressions of emotions.


Sensors, 2020, Vol. 20 (24), pp. 7184
Author(s): Kunyoung Lee, Eui Chul Lee

Clinical studies have demonstrated that spontaneous and posed smiles differ spatiotemporally in facial muscle movements, such as laterally asymmetric movements that use different facial muscles. In this study, a model was developed that classifies videos of the two smile types using a 3D convolutional neural network (CNN) in a Siamese architecture, with a neutral expression as the reference input. The proposed model makes the following contributions. First, it overcomes the differences in appearance between individuals, because it learns the spatiotemporal differences between an individual's neutral expression and their spontaneous and posed smiles. Second, using a neutral expression as an anchor improves model accuracy compared with the conventional method using genuine and imposter pairs. Third, using a neutral expression as the anchor image makes a fully automated classification system for spontaneous and posed smiles possible. In addition, visualizations were designed for the Siamese-architecture 3D CNN to analyze the accuracy improvement, and the proposed and conventional methods were compared through feature analysis using principal component analysis (PCA).
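The anchor idea above (scoring a smile clip relative to the same person's neutral expression, rather than in isolation) can be caricatured as a nearest-prototype classifier over embedding differences. This toy sketch stands in for the actual Siamese 3D CNN; the embeddings, prototypes, and function names are entirely hypothetical:

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify_smile(neutral_emb, smile_emb, prototypes):
    """Classify a smile by the *difference* between its embedding and the
    same subject's neutral-expression anchor, then pick the nearest
    class prototype. Subtracting the anchor removes per-person
    appearance, which is the point of the neutral reference input.

    `prototypes` maps label -> mean difference vector from training data.
    """
    diff = [s - n for s, n in zip(smile_emb, neutral_emb)]
    return min(prototypes, key=lambda label: euclidean(diff, prototypes[label]))
```

In the paper the embedding comes from a 3D CNN over the video volume and the comparison is learned end to end; this sketch only illustrates why the neutral anchor normalises away individual appearance.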


2020
Author(s): Molly Anne Bowdring, Michael Sayette, Jeffrey M. Girard, William C. Woods

Physical attractiveness plays a central role in psychosocial experiences. One of the top research priorities has been to identify factors affecting perceptions of physical attractiveness (PPA). Recent work suggests PPA derives from multiple sources (e.g., target, perceiver, stimulus type). Although smiles in particular are believed to enhance PPA, supporting evidence has been surprisingly limited. This study comprehensively examines the effect of smiles on PPA and, more broadly, evaluates the roles of target, perceiver, and stimulus type in PPA variation. Perceivers (n = 181) rated both static images and 5-second videos of targets displaying smiling and neutral expressions. Smiling images were rated as more attractive than neutral-expression images, regardless of stimulus motion format. Interestingly, perceptions of physical attractiveness depended more on the perceiver than on either the target or the format in which the target was presented. The results clarify the effect of smiles, and highlight the significant role of the perceiver, in PPA.


2020, Vol. 10 (1)
Author(s): Tasmin Humphrey, Leanne Proops, Jemma Forman, Rebecca Spooner, Karen McComb

Abstract Domestic animals are sensitive to human cues that facilitate inter-specific communication, including cues to emotional state. The eyes are important in signalling emotions, with the act of narrowing the eyes appearing to be associated with positive emotional communication in a range of species. This study examines the communicatory significance of a widely reported cat behaviour that involves eye narrowing, referred to as the slow blink sequence. Slow blink sequences typically involve a series of half-blinks followed by either a prolonged eye narrow or an eye closure. Our first experiment revealed that cat half-blinks and eye narrowing occurred more frequently in response to owners’ slow blink stimuli towards their cats (compared to no owner–cat interaction). In a second experiment, this time where an experimenter provided the slow blink stimulus, cats had a higher propensity to approach the experimenter after a slow blink interaction than when they had adopted a neutral expression. Collectively, our results suggest that slow blink sequences may function as a form of positive emotional communication between cats and humans.

