Facial Feature Tracking
Recently Published Documents

Total documents: 80 (past five years: 0)
H-index: 15 (past five years: 0)

2017 · Vol 36 (8) · pp. 934-941
Author(s): Kathleen H. Miles, Bradley Clark, Julien D. Périard, Roland Goecke, Kevin G. Thompson

2017 · pp. 67-96
Author(s): Chamin Morikawa, Michael J. Lyons

Interaction methods based on computer vision hold the potential to become the next powerful technology supporting breakthroughs in human-computer interaction. Non-invasive, vision-based techniques permit unconventional interaction methods, including the use of face and head movements for intentional gestural control of computer systems. Facial gesture interfaces thus open new possibilities for assistive input technologies. This chapter gives an overview of research aimed at developing vision-based head- and face-tracking interfaces, work with important implications for future assistive input devices. To illustrate this concretely, the authors describe their own research, in which they developed two vision-based facial feature tracking algorithms for human-computer interaction and assistive input. Evaluation is a critical component of this research, and the authors provide examples of new quantitative evaluation tasks as well as the use of model real-world applications for the qualitative evaluation of new interaction styles.
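The abstract gives no implementation details, so the following is only a rough sketch of the kind of pipeline such head-tracking input interfaces build on, not the authors' algorithms: detect the face each frame and map its displacement from a calibrated neutral position to pointer deltas. It uses OpenCV's stock Haar-cascade face detector; the GAIN and DEAD_ZONE values are illustrative assumptions.

```python
# Minimal head-tracking pointer sketch (illustrative, not the chapter's method).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_center(frame):
    """Return the (x, y) center of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return (x + w // 2, y + h // 2)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
neutral = face_center(frame) if ok else None  # calibrate on the neutral pose

GAIN = 0.5       # pointer pixels per pixel of head motion (illustrative)
DEAD_ZONE = 10   # ignore small involuntary movements (illustrative)

while ok and neutral is not None:
    ok, frame = cap.read()
    if not ok:
        break
    center = face_center(frame)
    if center is None:
        continue  # lost track this frame; leave the pointer where it is
    dx, dy = center[0] - neutral[0], center[1] - neutral[1]
    if abs(dx) > DEAD_ZONE or abs(dy) > DEAD_ZONE:
        # A real assistive interface would feed these deltas to an OS-level
        # input API; here we only report them.
        print(f"pointer delta: ({GAIN * dx:+.1f}, {GAIN * dy:+.1f})")
cap.release()
```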


2017 · Vol 53 · pp. 34-44
Author(s): Md. Nazrul Islam, Manjeevan Seera, Chu Kiong Loo

2016 · Vol 45 (3) · pp. 887-911
Author(s): Md. Nazrul Islam, Chu Kiong Loo, Manjeevan Seera

2015 · Vol 15 (3) · pp. 127-139
Author(s): Qingxiang Wang, Yanhong Yu

Abstract: Facial feature tracking is widely used in face recognition, gesture recognition, expression analysis, and related tasks. The Active Appearance Model (AAM) is a powerful method for object feature localization, but it still suffers from drawbacks such as sensitivity to view-angle changes. We present a method that addresses this problem using depth data acquired from a Kinect sensor: the depth data provide head pose information, while the RGB data are used to fit the AAM. We build an approximate 3D facial grid model and use it, together with the head pose, to initialize the AAM in subsequent frames. To avoid local extrema, we partition the model into several pose-specific parts and match the facial features against the model closest to the current pose. Experimental results show improved AAM performance under head rotation.
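A minimal sketch of the pose-partitioning idea described above, assuming the Kinect supplies a per-frame head-pose estimate (here reduced to a yaw angle). The AAM fitting itself is abstracted away; the PoseModel class, the bin angles, and the blend weights are hypothetical illustrations, not details from the paper.

```python
# Pose-partitioned AAM initialization (illustrative sketch).
import numpy as np

class PoseModel:
    """One AAM trained offline on faces near a particular yaw angle."""
    def __init__(self, yaw_deg, mean_shape):
        self.yaw_deg = yaw_deg
        self.mean_shape = mean_shape  # (n_landmarks, 2) mean landmark layout

# Several pose-specific models instead of one global model, so the search
# starts near the solution and is less likely to fall into local extrema.
models = [PoseModel(yaw, np.zeros((68, 2))) for yaw in (-45, -20, 0, 20, 45)]

def closest_model(head_yaw_deg):
    """Select the pose-specific model nearest the Kinect head-pose estimate."""
    return min(models, key=lambda m: abs(m.yaw_deg - head_yaw_deg))

def initialize_landmarks(head_yaw_deg, prev_landmarks=None):
    """Initialize the next frame's AAM search from pose + previous result."""
    model = closest_model(head_yaw_deg)
    if prev_landmarks is None:
        return model.mean_shape.copy()
    # Blend the previous fit toward the selected model's mean shape, loosely
    # analogous to re-initializing with the 3D grid model and head pose.
    # The 0.7 / 0.3 weights are arbitrary illustrative values.
    return 0.7 * prev_landmarks + 0.3 * model.mean_shape
```

The design point the sketch captures is that the head pose is estimated from depth before any 2D fitting, so the RGB-domain AAM search always begins from the pose bin closest to the true view angle rather than from a single frontal template.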

