intended expression
Recently Published Documents


TOTAL DOCUMENTS: 5 (FIVE YEARS: 2)

H-INDEX: 2 (FIVE YEARS: 0)

2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets combine a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose to use extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated facial expressions on both robots.
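The optimization described in this abstract can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch example, not the authors' implementation: the `generator` and `classifier` modules, the joint-configuration layout, and the hyperparameters are assumptions, and details such as clamping to the robot's joint limits are omitted.

```python
# Hypothetical sketch of an ExGenNet-style optimization loop.
# A generator maps robot joint configurations to simplified facial images;
# a classifier predicts the facial expression shown in those images.
# Both networks stay frozen; only the joint configuration is optimized by
# backpropagating the classification loss through both networks.
import torch
import torch.nn.functional as F

def optimize_joint_config(generator, classifier, target_expression,
                          num_joints, steps=500, lr=0.05):
    # Start from a neutral (zero) joint configuration and make it trainable.
    joint_config = torch.zeros(1, num_joints, requires_grad=True)
    optimizer = torch.optim.Adam([joint_config], lr=lr)

    # Index of the intended expression class.
    target = torch.tensor([target_expression])

    for _ in range(steps):
        optimizer.zero_grad()
        face_image = generator(joint_config)     # joints -> simplified face image
        logits = classifier(face_image)          # image -> expression logits
        loss = F.cross_entropy(logits, target)   # predicted vs. intended expression
        loss.backward()                          # gradients flow back to joint_config
        optimizer.step()

    return joint_config.detach()
```

Because the loss is taken between the classifier's prediction and the intended expression, different random initializations of `joint_config` can yield multiple distinct configurations for the same expression, consistent with the behavior the abstract describes.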


2021 ◽  
Author(s):  
Xiaoming Jiang

Communicative expression is a cross-species phenomenon. We investigated the perceptual attributes of social expressions encoded in human-like animal stickers commonly used as nonverbal communicative tools on social media (e.g. WeChat). One hundred and twenty animal stickers, varying across 12 categories of social expression (serving pragmatic or emotional functions), 5 animal kinds (cats, dogs, ducks, rabbits, pigs), and 2 presented forms (real animal vs. cartoon animal), were shown to social media users, who rated each sticker's human likeness, cuteness, expressiveness, and how well it matched its intended expression label. The data show that the kind of animal expected to best encode a certain expression is modulated by its presented form. The "cuteness" stereotype towards a certain kind of animal is sometimes violated as a function of the presented form. Moreover, users' gender, interpersonal sensitivity, and attitudes towards the ethical use of animals modulated various perceptual attributes. These findings highlight the factors underlying the decoding of social meanings in human-like animal stickers as nonverbal cues in virtual communication.


2004 ◽  
Vol 52 (4) ◽  
pp. 357-368 ◽  
Author(s):  
Deborah A. Sheldon

This study focused on the ability of listeners (N = 66 undergraduate and graduate music education majors) to identify nuances of musical expression using figurative language and specific music terminology. Data reviewed for accuracy in classifying general expressive categories showed that listeners successfully identified broad intended realms of expression with both figurative statements and terminology. When outcomes were reviewed for accuracy in terms of the specific intended expression rather than the general expressive category, accuracy levels dropped. Mirroring the results for general expressive identification, response type (figurative statements or terminology) did not appear to be a factor in response accuracy. Within general expressive categories, listeners selected a wider variety of responses among figurative statements than terminology, suggesting greater ambiguity of meaning in the former than the latter. Although not a primary focus, outcomes as a function of sound manipulation by the performer were given cursory review; this variable may play a role in a listener's ability to grasp a musical expression.


1996 ◽  
Vol 78 (2) ◽  
pp. 555-561
Author(s):  
Derek Scott

With the increasing popularity of low-cost video communication systems and their attendant problems of low bandwidth and resolution, receivers are likely to experience difficulty recognising the sender's intended expression and feeling. As an extension of previous work in the area, this pilot study investigated the effects of three forms of degraded imaging (and their combined effects) on the nonverbal aspects of one person's communication. Degraded video messages significantly impaired recognition of the sender's intended meaning.

