Recognition of Facial Expressions and Its Application to Human Computer Interaction

Author(s):  
T. Onisawa ◽  
S. Kitazaki
Author(s):  
Qing Chen ◽  
Marius D. Cordea ◽  
Emil M. Petriu ◽  
Annamaria R. Varkonyi Koczy ◽  
Thomas E. Whalen

2012 ◽ 
Vol 3 (2) ◽ 
pp. 48-67
Author(s):  
Lena Quinto ◽  
William Forde Thompson

Most people communicate emotion through their voice, facial expressions, and gestures. However, it is commonly assumed that only "experts" can communicate emotion in music. The authors have developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in one acoustic attribute (e.g., intensity: loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness, or tenderness). Once all decisions were made, a final melody combining all choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). The results indicate that human-computer interaction can facilitate the composition of emotional music by both musically untrained and trained individuals.
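
The selection procedure described above can be read as a simple loop over acoustic attributes. The following is a minimal sketch of that loop; the attribute names and option values are illustrative assumptions, not the actual parameter set of the authors' system:

```python
# Sketch of the pairwise attribute-selection procedure: for each acoustic
# attribute, the participant hears two candidate melodies and picks the one
# that better conveys the target emotion. Attribute names are hypothetical.

ACOUSTIC_ATTRIBUTES = {
    "intensity": ("loud", "soft"),
    "tempo": ("fast", "slow"),
    "pitch_height": ("high", "low"),
    "mode": ("major", "minor"),
}

def compose_emotional_melody(target_emotion, choose):
    """Collect one choice per attribute into a final melody configuration.

    `choose(attribute, option_a, option_b, target_emotion)` stands in for
    the listening task: it returns whichever option the participant feels
    best conveys the target emotion.
    """
    choices = {}
    for attribute, (option_a, option_b) in ACOUSTIC_ATTRIBUTES.items():
        choices[attribute] = choose(attribute, option_a, option_b, target_emotion)
    return choices  # the final melody is rendered from all chosen values

# Example: an automated stand-in that always picks the first option.
print(compose_emotional_melody("sadness", lambda attr, a, b, emo: a))
```

A participant composing a "sad" melody might thus end up with a configuration such as soft intensity, slow tempo, low pitch, and minor mode, which the system renders as a single melody.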


Author(s):  
Zhe Xu ◽  
David John ◽  
Anthony C. Boucouvalas

As the popularity of the Internet has expanded, more people than ever spend time online reading news, searching for new technologies, and chatting with others. Although the Internet was originally designed as a tool for computation, it has become a social environment built on computer-mediated communication (CMC). Picard and Healey (1997) demonstrated the potential and importance of emotion in human-computer interaction, and Bates (1992) illustrated the roles that emotion plays in user interactions with synthetic agents. Is emotion communication important for human-computer interaction? Scott and Nass (2002) demonstrated that humans extrapolate their interpersonal interaction patterns onto computers: humans talk to computers, get angry with them, and even make friends with them. In our previous research, we demonstrated that the social norms of daily life still apply to human-computer interaction. Furthermore, we showed that providing emotion visualisation in the human-computer interface can significantly influence users' perceived performance and feelings. For example, in an online quiz environment, human participants answered questions, and a software agent judged the answers and presented either a positive (happy) or negative (sad) expression. Even when two participants performed identically and achieved the same number of correct answers, the perceived performance of the one in the positive-expression environment was significantly higher than that of the one in the negative-expression environment (Xu, 2005). Although human emotional processes are much more complex than in the above example, and it is difficult to build a complete computational model, various models and applications have been developed and applied in human-agent interaction environments, such as the OZ project (Bates, 1992), the Cathexis model (Velasquez, 1997), and Elliott's (1992) affective reasoner. We are interested in investigating the influence of emotions not only on human-agent communication but also on online human-human communication. The first question is: can we detect a human's emotional state automatically and intelligently? Previous work has shown that emotions can be detected in various ways, including in speech, facial expressions, and text; see, for example, investigations of the synthesis of facial and acoustic expression by Kaiser and Wehrle (2000), Wehrle, Kaiser, Schmidt, and Scherer (2000), and Zentner and Scherer (1998). As text still dominates online communication, we believe that emotion detection in textual messages is particularly important.
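
As a concrete illustration of emotion detection in text, the sketch below scores a message against a small emotion lexicon. The lexicon and the simple counting rule are assumptions made for demonstration; they do not reproduce the authors' detection engine:

```python
# Illustrative keyword-based emotion detection: count lexicon hits per
# emotion and return the best-scoring emotion, or "neutral" if none match.
# The lexicon entries here are placeholder examples.

EMOTION_LEXICON = {
    "happy": {"glad", "great", "wonderful", "love", ":)"},
    "sad": {"unhappy", "miserable", "sorry", "cry", ":("},
    "angry": {"furious", "hate", "annoyed", "outraged"},
}

def detect_emotion(message: str) -> str:
    """Return the emotion whose keywords occur most often, or 'neutral'."""
    tokens = message.lower().split()
    scores = {
        emotion: sum(token in keywords for token in tokens)
        for emotion, keywords in EMOTION_LEXICON.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I love this, it is wonderful"))  # -> happy
```

Real systems refine this idea with, for example, negation handling, intensity modifiers, and word-sense disambiguation, but the lexicon-lookup core is the same.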


Author(s):  
Shuai Ren

Facial recognition is a technology for authenticating a human face: it verifies identity by computing facial features from a single still image or a video and matching them against a facial database. The broader objective of computer vision is to identify images, detect speech, track the focus of attention, and recognize facial expressions and emotions; face identification is also an important part of audiovisual speech understanding. In this paper, an Intelligent Scheme for Human-Computer Interaction (IS-HCI) is proposed that provides a broader and more expressive range of computer-vision features for processing data from one or more sensors. In particular cases of impairment, alternative interaction methods based on computer vision can be adapted effectively. Facial identification is a category of biometric security. This paper aims to support a technology-based approach that allows a child with a debilitating neuromuscular disorder to communicate with a robot through recognized facial expressions. The proposed model was tested on photographs taken from videos of children, and the preliminary findings show that computational interaction through facial expression could break down the barriers between machines and children with limited mobility. The experimental results show that the proposed IS-HCI method enhances facial expression recognition and interacts effectively with the children, achieving a sensitivity of 94.12%, a specificity of 96.84%, a recognition rate of 94.23%, a probability of 21.78%, an accuracy of 96.69%, an expression prediction rate of 98.14%, and a muscle activity level of 79.84%, compared with other methods.
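
For reference, the sketch below shows how the binary evaluation metrics reported above (sensitivity, specificity, accuracy) are conventionally computed from confusion-matrix counts. The counts used in the example are placeholders, not the paper's data:

```python
# Standard definitions of the reported metrics from binary confusion counts:
# tp/tn = true positives/negatives, fp/fn = false positives/negatives.

def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Sensitivity, specificity, and accuracy from a confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),              # true-positive rate (recall)
        "specificity": tn / (tn + fp),              # true-negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Example with arbitrary placeholder counts:
print(evaluation_metrics(tp=80, tn=92, fp=3, fn=5))
```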

