Retargeting 3D facial expressions in real time based on Kinect

Author(s):  
Xuan Xu ◽  
Zhongke Wu ◽  
Xuesong Wang ◽  
Mingquan Zhou
2008 ◽  
Vol 381-382 ◽  
pp. 375-378
Author(s):  
K.T. Song ◽  
M.J. Han ◽  
F.Y. Chang ◽  
S.H. Chang

The capability of recognizing human facial expressions plays an important role in the development of advanced human-robot interaction. By recognizing facial expressions, a robot can interact with a user in a more natural and friendly manner. In this paper, we propose a facial expression recognition system based on an embedded image-processing platform that classifies different facial expressions on-line in real time. A low-cost embedded vision system for robotic applications has been designed and realized using a CMOS image sensor and a digital signal processor (DSP). The current design acquires thirty 640×480 image frames per second (30 fps). The proposed emotion recognition algorithm has been implemented successfully on this real-time vision system. Experimental results on a pet robot show that the robot can interact with a person in a responsive manner. The developed image-processing platform accelerates recognition to 25 recognitions per second, with an average on-line recognition rate of 74.4% for five facial expressions.
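The pipeline summarized in the abstract — per-frame acquisition followed by on-line classification among five expressions within a fixed time budget (25 recognitions per second implies roughly 40 ms per frame) — can be sketched as below. The feature vector, the placeholder classifier, and the expression labels are illustrative assumptions, not the paper's DSP implementation:

```python
import time

# Hypothetical label set; the paper reports five expressions but does not list them here.
EXPRESSIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def classify_expression(features):
    """Placeholder classifier: pick the expression whose (assumed)
    per-class score is highest. A real system would extract features
    from the 640x480 frame and run a trained model on the DSP."""
    best = max(range(len(features)), key=features.__getitem__)
    return EXPRESSIONS[best]

def recognition_loop(frames, budget_s=0.040):
    """Classify each frame and record whether it met the per-frame
    time budget (~40 ms, matching 25 recognitions per second)."""
    results = []
    for features in frames:
        start = time.perf_counter()
        label = classify_expression(features)
        elapsed = time.perf_counter() - start
        results.append((label, elapsed <= budget_s))
    return results
```

On an embedded platform the same structure would be driven by the camera's 30 fps interrupt, with the classifier dropping frames when the budget is exceeded.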


Author(s):  
Tadas Baltrusaitis ◽  
Daniel McDuff ◽  
Ntombikayise Banda ◽  
Marwa Mahmoud ◽  
Rana el Kaliouby ◽  
...  

2006 ◽  
Vol 15 (4) ◽  
pp. 359-372 ◽  
Author(s):  
Jeremy N Bailenson ◽  
Nick Yee ◽  
Dan Merget ◽  
Ralph Schroeder

The realism of avatars in terms of behavior and form is critical to the development of collaborative virtual environments (CVEs). In this study we used state-of-the-art, real-time face-tracking technology to track and render facial expressions unobtrusively in a desktop CVE. Participants in dyads interacted with each other via either videoconference (high behavioral realism and high form realism), voice only (low behavioral realism and low form realism), or an “emotibox” that rendered the dimensions of facial expressions abstractly in terms of color, shape, and orientation on a rectangular polygon (high behavioral realism and low form realism). Verbal and non-verbal self-disclosure were lowest in the videoconference condition, while self-reported copresence and the success of transmission and identification of emotions were lowest in the emotibox condition. Previous work demonstrates that avatar realism increases copresence while decreasing self-disclosure. We discuss the possibility of a hybrid-realism solution that maintains high copresence without lowering self-disclosure, and the benefits of such an avatar for applications such as distance learning and therapy.


2015 ◽  
Vol 16 (4) ◽  
pp. 272-282 ◽  
Author(s):  
Qi-rong Mao ◽  
Xin-yu Pan ◽  
Yong-zhao Zhan ◽  
Xiang-jun Shen
