facial expression
Recently Published Documents


TOTAL DOCUMENTS: 7041 (FIVE YEARS: 2148)

H-INDEX: 115 (FIVE YEARS: 17)

2022, Vol 29 (2), pp. 1-59
Author(s): Joni Salminen, Sercan Şengün, João M. Santos, Soon-Gyo Jung, Bernard Jansen

There has been little research into whether a persona's picture should portray a happy or unhappy individual. We report a user experiment with 235 participants, testing the effects of happy and unhappy image styles on user perceptions, engagement, and the personality traits attributed to personas, using a mixed-methods analysis. Results indicate that participants' perceptions of a persona's realism and pain-point severity increase with the use of unhappy pictures. In contrast, personas with happy pictures are perceived as more extroverted, agreeable, open, conscientious, and emotionally stable. Design ideas that participants proposed for happy personas also scored higher on lexical empathy. There were significant perception changes along gender and ethnic lines regarding both empathy and perceptions of pain points. The implication is that the facial expression in a persona profile can affect the perceptions of those employing the persona; persona designers should therefore align facial expressions with the task for which the personas will be employed. In general, unhappy images emphasize realism and pain-point severity, while happy images evoke positive perceptions.


Author(s): Afizan Azman, Mohd. Fikri Azli Abdullah, Sumendra Yogarayan, Siti Fatimah Abdul Razak, Hartini Azman, et al.

Cognitive distraction is one of several contributory factors in road accidents, and a number of cognitive distraction detection methods have been developed. One of the most popular is based on physiological measurement: head orientation, gaze rotation, blinking, and pupil diameter are among the physiological parameters commonly measured to detect driver cognitive distraction. In this paper, lips and eyebrows are studied. These facial-expression features are conspicuous and easily measured when a person is cognitively distracted, and several types of lip and eyebrow movement can be captured as indicators of cognitive distraction. Correlation and classification techniques are used for performance measurement and comparison. A real-time driving experiment was set up, with faceAPI installed in the car to capture the driver's facial expressions. Linear regression, support vector machines (SVM), static Bayesian networks (SBN), and logistic regression (LR) are used in this study. Results show that lip and eyebrow movements are strongly correlated and play a significant role in improving cognitive distraction detection. A dynamic Bayesian network (DBN) with different confidence levels was also used to classify whether a driver is distracted.
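The classification step can be illustrated with a short scikit-learn sketch. This is a minimal illustration, not the paper's implementation: the lip/eyebrow feature names and the synthetic labels are assumptions standing in for the faceAPI measurements.

```python
# Minimal sketch: SVM classification of cognitive distraction from
# hypothetical lip/eyebrow movement features (synthetic data, for
# illustration only; not the paper's faceAPI output format).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Assumed per-frame features: lip stretch, lip corner angle,
# eyebrow raise, inner eyebrow distance.
X = rng.normal(size=(500, 4))
y = rng.integers(0, 2, size=500)  # 1 = cognitively distracted, 0 = attentive

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```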


Author(s): SINDA, Ahmad Jum’a Khatib Nur Ali

Music holds personal meaning for each individual and can bring happiness to those who listen to it. At the same time, in a video clip the music should correlate with the imagery: color, sound, gesture, and so on. This article investigates and explores the interpersonal meaning of the song LATHI. The study was conducted qualitatively, using a descriptive-analytical approach to examine how different semiotic modes, such as music, sound, speech, color, action, and facial expression, work together to build interpersonal meaning. LATHI has succeeded in attracting audiences' attention around the world. The song's lyrics are mainly in English, except for the bridge, which is sung in Javanese. The bridge also employs pelog, a Javanese seven-note scale used in gamelan arrangements. In addition, its instrumentation has a unique character and is easy on the ear.


2022, Vol 12 (2), pp. 807
Author(s): Huafei Xiao, Wenbo Li, Guanzhong Zeng, Yingzhang Wu, Jiyong Xue, et al.

With the development of intelligent automotive human-machine systems, driver emotion detection and recognition have become an emerging research topic. Facial expression-based emotion recognition approaches have achieved outstanding results on laboratory-controlled data; however, such studies do not represent the environment of real driving situations. To address this, this paper proposes a facial expression-based on-road driver emotion recognition network called FERDERnet. The method divides the on-road driver facial expression recognition task into three modules: a face detection module that detects the driver's face, an augmentation-based resampling module that performs data augmentation and resampling, and an emotion recognition module that adopts a deep convolutional neural network pre-trained on the FER and CK+ datasets and then fine-tuned as a backbone for driver emotion recognition. Five different backbone networks are adopted, as well as an ensemble method. To evaluate the proposed method, the paper also collected an on-road driver facial expression dataset, containing various road scenarios and the corresponding driver facial expressions during the driving task, and performed experiments on it. In terms of both efficiency and accuracy, the proposed FERDERnet with an Xception backbone identified on-road driver facial expressions effectively and obtained superior performance compared to the baseline networks and several state-of-the-art networks.
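The emotion recognition module can be approximated with a short fine-tuning sketch. This is not the authors' implementation: ImageNet weights stand in here for the paper's FER/CK+ pre-training, and the class count, input size, and hyperparameters are assumptions.

```python
# Minimal sketch: fine-tuning an Xception backbone for driver emotion
# classes (assumed configuration, not the FERDERnet training setup).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_EMOTIONS = 7  # assumption: seven basic expression classes

backbone = Xception(weights="imagenet", include_top=False,
                    input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = True  # fine-tune the whole backbone

model = models.Sequential([
    backbone,
    layers.Dropout(0.5),
    layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets
```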


PLoS ONE, 2022, Vol 17 (1), e0262344
Author(s): Maria Tsantani, Vita Podgajecka, Katie L. H. Gray, Richard Cook

The use of surgical-type face masks has become increasingly common during the COVID-19 pandemic. Recent findings suggest that it is harder to categorise the facial expressions of masked faces than of unmasked faces. To date, studies of the effects of mask-wearing on emotion recognition have used categorisation paradigms: authors have presented facial expression stimuli and examined participants’ ability to attach the correct label (e.g., happiness, disgust). While the ability to categorise particular expressions is important, this approach overlooks the fact that expression intensity is also informative during social interaction. For example, when predicting an interactant’s future behaviour, it is useful to know whether they are slightly fearful or terrified, contented or very happy, slightly annoyed or angry. Moreover, because categorisation paradigms force observers to pick a single label to describe their percept, any additional dimensionality within observers’ interpretation is lost. In the present study, we adopted a complementary emotion-intensity rating paradigm to study the effects of mask-wearing on expression interpretation. In an online experiment with 120 participants (82 female), we investigated how the presence of face masks affects the perceived emotional profile of prototypical expressions of happiness, sadness, anger, fear, disgust, and surprise. For each of these facial expressions, we measured the perceived intensity of all six emotions. We found that the perceived intensity of intended emotions (i.e., the emotion the actor intended to convey) was reduced by the presence of a mask for all expressions except anger. Additionally, for all expressions except surprise, masks increased the perceived intensity of non-intended emotions (i.e., emotions the actor did not intend to convey). Intensity ratings were unaffected by presentation duration (500 ms vs 3000 ms) or by attitudes towards mask-wearing. These findings shed light on the ambiguity that arises when interpreting the facial expressions of masked faces.
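The intensity-rating analysis can be illustrated with a small aggregation sketch: each trial yields a rated intensity for one of the six emotions on one expression, and the intended-emotion intensity is compared across mask conditions. The column names and values below are illustrative assumptions, not the study's data format.

```python
# Minimal sketch: comparing mean rated intensity of the intended emotion
# between masked and unmasked faces (synthetic example rows).
import pandas as pd

ratings = pd.DataFrame({
    "intended":  ["happiness", "happiness", "anger", "anger"],
    "rated":     ["happiness", "happiness", "anger", "anger"],
    "masked":    [False, True, False, True],
    "intensity": [8.2, 5.1, 7.0, 6.9],  # e.g. on an assumed 1-9 scale
})

# Keep trials where the rated emotion is the intended one, then average
# per intended emotion and mask condition.
intended = ratings[ratings["rated"] == ratings["intended"]]
print(intended.groupby(["intended", "masked"])["intensity"].mean())
```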


2022, Vol 8
Author(s): Niyati Rawal, Dorothea Koert, Cigdem Turan, Kristian Kersting, Jan Peters, et al.

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers potential advantages in scaling to different robot types and various expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNet combines a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classifier network and the generator network. To improve transfer between human training images and images of different robots, we propose using extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study, in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
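The joint-configuration optimization at the core of this approach can be sketched as gradient descent on the joint vector through two frozen networks. The network shapes, joint dimension, and optimizer settings below are illustrative assumptions, not the ExGenNet architecture.

```python
# Minimal sketch: optimizing a robot joint configuration by backpropagating
# the expression-classification loss through a frozen generator (joints ->
# face image) and a frozen classifier (face image -> expression logits).
# Both networks here are simple stand-ins with assumed sizes.
import torch
import torch.nn as nn

N_JOINTS, IMG_FEATS, N_EXPR = 12, 64 * 64, 6  # assumed dimensions

generator = nn.Sequential(nn.Linear(N_JOINTS, 256), nn.ReLU(),
                          nn.Linear(256, IMG_FEATS))
classifier = nn.Sequential(nn.Linear(IMG_FEATS, 128), nn.ReLU(),
                           nn.Linear(128, N_EXPR))
for net in (generator, classifier):
    net.requires_grad_(False)  # both networks stay fixed

target = torch.tensor([3])             # index of the intended expression
joints = torch.zeros(1, N_JOINTS, requires_grad=True)
opt = torch.optim.Adam([joints], lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    logits = classifier(generator(joints))
    loss = loss_fn(logits, target)     # predicted vs intended expression
    loss.backward()                    # gradients flow back to the joints
    opt.step()

print("optimized joint configuration:", joints.detach().squeeze())
```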

