facial expressions of emotion
Recently Published Documents


TOTAL DOCUMENTS: 386 (FIVE YEARS: 62)
H-INDEX: 58 (FIVE YEARS: 4)

2021 ◽  
Vol 2 (1) ◽  
Author(s):  
James Darmawan ◽  
Irwan Harnoko

In 2003, UNESCO recognized wayang as part of Indonesia's cultural heritage, yet the sense of ownership of wayang remains low among Indonesia's younger generation. Both the government and educators are therefore still working to revive knowledge and recognition of wayang among young people in Indonesia and around the world, including through game applications. In this study, the researchers created contemporary virtual wayang characters to be embedded as character stickers in chat and game applications. These virtual wayang characters can serve as tools for reintroducing wayang to young people worldwide. The method combines a classification of depicted facial expressions of emotion with a simplified visual approach to derive the corresponding visual characteristics of the virtual wayang characters. The researchers also used a basic visual character as a prototype for contemporary form deformation, so that the design can be applied to the target game applications and communication functions.


2021 ◽  
Vol 15 ◽  
Author(s):  
Huiyu Zhou ◽  
Ling Li ◽  
Shiguang Shan ◽  
Shuo Wang ◽  
Jian K. Liu

2021 ◽  
Vol 21 (9) ◽  
pp. 2932
Author(s):  
Jonas Nölle ◽  
Chaona Chen ◽  
Laura B. Hensel ◽  
Oliver G. B. Garrod ◽  
Philippe G. Schyns ◽  
...  

2021 ◽  
Vol 63 (7) ◽  
Author(s):  
Evin Aktar ◽  
Cosima A. Nimphy ◽  
Mariska E. Kret ◽  
Koraly Pérez‐Edgar ◽  
Susan M. Bögels ◽  
...  

Author(s):  
Diana Kayser ◽  
Hauke Egermann ◽  
Nick E. Barraclough

An abundance of studies on emotional experiences in response to music has been published over the past decades; however, most have been carried out in controlled laboratory settings and rely on subjective reports. Facial expressions have occasionally been assessed, but typically with intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants at a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether the emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of subjectively experienced pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces expressing sadness elicited more facial expressions of sadness (compared to happiness), whereas the pieces expressing happiness elicited more facial expressions of happiness (compared to sadness). No differences were found for the other facial expression categories (anger, fear, surprise, disgust, and neutral). Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurement of audience facial expressions in a naturalistic concert setting is indicative of the emotions expressed by the music and of the subjective experiences of the audience members themselves.
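
A minimal sketch of the kind of analysis pipeline this abstract describes, aggregating frame-level output from automated face analysis and regressing post-concert ratings on mean expression scores, is given below. The file names, column names, and the use of ordinary least squares are illustrative assumptions, not the authors' actual materials or code.

```python
# Illustrative sketch: aggregate frame-level facial-expression scores per
# participant and piece, then test whether they predict post-concert ratings.
# File names, column names, and the OLS model are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical frame-level output from automated face analysis software:
# one row per video frame with per-category expression probabilities.
frames = pd.read_csv("face_analysis_frames.csv")    # participant, piece, happy, sad, disgust, ...
ratings = pd.read_csv("post_concert_ratings.csv")   # participant, piece, pleasantness, activation

# Mean expression score per participant and piece.
agg = frames.groupby(["participant", "piece"], as_index=False)[
    ["happy", "sad", "disgust"]
].mean()
data = agg.merge(ratings, on=["participant", "piece"])

# Do facial expressions predict subjectively felt pleasantness / activation?
pleasantness_model = smf.ols("pleasantness ~ happy + sad + disgust", data=data).fit()
activation_model = smf.ols("activation ~ happy + sad + disgust", data=data).fit()
print(pleasantness_model.summary())
print(activation_model.summary())
```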


2021 ◽  
Vol 12 ◽  
Author(s):  
Shushi Namba

Facial expressions of emotion can convey information about the world and disambiguate elements of the environment, thus providing direction for other people's behavior. However, the functions of facial expressions from the perspective of learning patterns over time remain elusive. This study used the Iowa Gambling Task to investigate how feedback delivered through facial expressions influences learning under ambiguity. The results revealed that the learning rate under facial expression feedback was slower in the middle of the learning period than under symbolic feedback. No differences were observed in deck selection or in computational model parameters between the conditions, and no correlation was observed between task indicators and scores on depression questionnaires.
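
For readers unfamiliar with how "learning rate" and "computational model parameters" are typically defined in Iowa Gambling Task studies, a generic sketch of one common modeling approach (delta-rule value updating with softmax choice) is shown below. This illustrates the class of model only; it is not the specific model fitted in the study, and the toy payoff scheme is an assumption.

```python
# Minimal delta-rule + softmax model of Iowa Gambling Task choices.
# A generic illustration of the kind of model whose parameters (learning rate,
# inverse temperature) are compared across feedback conditions; not the
# specific model used in the study.
import numpy as np

def simulate_igt(payoffs, learning_rate=0.1, inverse_temp=2.0, n_trials=100, rng=None):
    """Simulate deck choices on a four-deck IGT given a payoff function."""
    if rng is None:
        rng = np.random.default_rng(0)
    values = np.zeros(4)                      # expected value per deck
    choices = []
    for _ in range(n_trials):
        # Softmax choice over current deck values.
        exp_v = np.exp(inverse_temp * (values - values.max()))
        probs = exp_v / exp_v.sum()
        deck = rng.choice(4, p=probs)
        reward = payoffs(deck, rng)           # feedback (symbolic or facial expression)
        # Delta-rule update: move the chosen deck's value toward the outcome.
        values[deck] += learning_rate * (reward - values[deck])
        choices.append(deck)
    return choices

# Toy payoff scheme: decks 0-1 are "bad" (negative mean), decks 2-3 are "good"
# (positive mean), loosely mimicking the IGT structure.
def toy_payoffs(deck, rng):
    mean = -25 if deck < 2 else 25
    return rng.normal(mean, 50)

choices = simulate_igt(toy_payoffs)
print("proportion of good-deck choices:", np.mean([c >= 2 for c in choices]))
```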


2021 ◽  
pp. 104225872110297
Author(s):  
Blakley C. Davis ◽  
Benjamin J. Warnick ◽  
Aaron H. Anglin ◽  
Thomas H. Allison

Crowdfunded microlending research implies that both communal and agentic characteristics are valued. These characteristics, however, are often viewed as being at odds with one another due to their association with gender stereotypes. Drawing upon expectancy violation theory and research on gender stereotypes, we theorize that gender-counterstereotypical facial expressions of emotion provide a means for entrepreneurs to project “missing” agentic or communal characteristics. Leveraging computer-aided facial expression analysis to analyze entrepreneur photographs from 43,210 microloan appeals, we show that women benefit from stereotypically masculine facial expressions of anger and disgust, whereas men benefit from stereotypically feminine facial expressions of sadness and happiness.
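
A minimal sketch of the kind of model implied by this design, in which the effect of each facial expression on funding outcomes is allowed to differ by entrepreneur gender via interaction terms, is shown below. The dataset, column names, binary funding outcome, and logistic specification are illustrative assumptions rather than the authors' actual analysis.

```python
# Illustrative sketch: does the effect of facial expressions of emotion on
# funding success differ by entrepreneur gender? Column names (funded, female,
# anger, disgust, sadness, happiness) and the logistic model are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

appeals = pd.read_csv("microloan_appeals.csv")
# Expected (hypothetical) columns: funded (0/1), female (0/1), and per-photo
# expression scores from computer-aided facial expression analysis.

model = smf.logit(
    "funded ~ female * (anger + disgust + sadness + happiness)",
    data=appeals,
).fit()
print(model.summary())
# Under this coding, gender-counterstereotypical benefits would appear as
# positive female:anger / female:disgust interactions and negative
# female:sadness / female:happiness interactions.
```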


2021 ◽  
Vol 12 ◽  
Author(s):  
Paula J. Webster ◽  
Shuo Wang ◽  
Xin Li

Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people from their facial expressions. This review first covers the rich body of literature on differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and the various factors behind inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER, namely facial emotion expression (FEE), the production of facial expressions of emotion. Although FEE has been studied far less, social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions, and how others perceive facial expressions of emotion in those with ASD remains an under-researched area. Finally, we propose a method for teaching FER [the FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues, or (2) teaching with a field of images that includes both posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate that autism interventionists use FER stimuli developed primarily for research purposes, to facilitate the incorporation of well-controlled stimuli into FER teaching and to bridge the gap between intervention and research in this area.
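
As a rough illustration only, the staged progression proposed here could be encoded as an ordered sequence of stimulus levels in software used to sequence teaching materials. The stage names below paraphrase the abstract; the data structure, accuracy criterion, and advancement rule are hypothetical and not part of the authors' proposal.

```python
# Illustrative encoding of a staged FER teaching sequence such as the proposed
# FERTH: progress from simple, posed, static stimuli toward complex, genuine,
# dynamic ones. Stage names paraphrase the abstract; the structure and the
# advancement rule are illustrative assumptions, not the authors' tool.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    stimuli: str          # e.g. "static drawings", "audio-visual clips"
    emotion_type: str     # "posed", "genuine", or "mixed"

FERTH_STAGES = [
    Stage("Simple drawings and cartoon characters", "static drawings", "posed"),
    Stage("Photographs of posed expressions", "static photos", "posed"),
    Stage("Mixed posed and genuine images", "static photos", "mixed"),
    Stage("Audio-visual clips of genuine expressions with context clues", "audio-visual clips", "genuine"),
]

def next_stage(current: int, accuracy: float, criterion: float = 0.8) -> int:
    """Advance to the next stage once recognition accuracy meets the criterion."""
    if accuracy >= criterion and current < len(FERTH_STAGES) - 1:
        return current + 1
    return current
```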

