Using eye gaze, head pose, and facial expression for personalized non-player character interaction

Author(s):  
Michael Reale ◽  
Peng Liu ◽  
Lijun Yin
Keyword(s):  
Eye Gaze
Author(s):  
Zhongling Pi ◽  
Yi Zhang ◽  
Fangfang Zhu ◽  
Louqi Chen ◽  
Xin Guo ◽  
...  

2018 ◽  
Vol 9 (4) ◽  
pp. 478-490
Author(s):  
Sharifa Alghowinem ◽  
Roland Goecke ◽  
Michael Wagner ◽  
Julien Epps ◽  
Matthew Hyett ◽  
...  

Author(s):  
Chirag S Indi ◽  
Varun Pritham ◽  
Vasundhara Acharya ◽  
Krishna Prakasha

Examination malpractice is a deliberate wrongdoing, contrary to official examination rules, designed to place a candidate at an unfair advantage or disadvantage. The proposed system depicts a new use of technology to identify malpractice in e-exams, which is essential given the growth of online education. Current solutions to this problem either require complete manual labor or have various vulnerabilities that an examinee can exploit. The proposed application encompasses an end-to-end system that, with the help of visual aids, assists an examiner/evaluator in deciding whether a student has passed an online exam without any probable attempt at malpractice or cheating. The system works by categorizing the student's VFOA (visual focus of attention) data, capturing head pose and eye gaze estimates using state-of-the-art machine learning techniques. The system only requires the student (test-taker) to have a functioning internet connection and a webcam to transmit the feed. The examiner is alerted when the student's VFOA wavers from the screen more than X times, where X is a predefined threshold. If this threshold X is crossed, the application saves the data captured while the student's VFOA is off the screen and sends it to the examiner, who manually checks it and marks whether the student's action was an attempt at malpractice or just a momentary lapse in concentration. The system uses a hybrid classifier approach with two different classifiers: one is used when gaze values are being read successfully; when gaze reading fails (for various reasons, such as transmission quality or glare from spectacles), the model falls back to a default classifier that reads only the head pose values to classify the attention metric, which is used to map the student's VFOA to the likelihood of malpractice. The model achieved an accuracy of 96.04 percent in classifying the attention metric.
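The hybrid classifier and threshold logic described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the names (`Frame`, `classify_attention`, `flag_for_review`), the screen-coordinate gaze test, and the yaw/pitch cutoffs are all hypothetical stand-ins for the paper's learned classifiers.

```python
# Hypothetical sketch of the hybrid attention pipeline: a primary gaze-based
# check with a head-pose-only fallback, plus a threshold-based examiner alert.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Frame:
    head_pose: Tuple[float, float, float]   # (yaw, pitch, roll) in degrees
    gaze: Optional[Tuple[float, float]]     # normalized screen coords, or None
                                            # when the gaze read failed (glare,
                                            # transmission quality, etc.)


def classify_attention(frame: Frame) -> bool:
    """Return True if the student's VFOA is judged to be on the screen."""
    if frame.gaze is not None:
        # Primary classifier: gaze values were read successfully.
        gx, gy = frame.gaze
        return 0.0 <= gx <= 1.0 and 0.0 <= gy <= 1.0
    # Fallback classifier: head pose only (illustrative angle cutoffs).
    yaw, pitch, _ = frame.head_pose
    return abs(yaw) < 30.0 and abs(pitch) < 20.0


def flag_for_review(frames: List[Frame], threshold: int) -> List[int]:
    """Indices of off-screen frames, returned only once the count exceeds
    the predefined threshold X, so the examiner can review them manually."""
    off_screen = [i for i, f in enumerate(frames) if not classify_attention(f)]
    return off_screen if len(off_screen) > threshold else []
```

In this sketch, frames below the threshold produce no alert, mirroring the paper's point that brief deviations may be a momentary lapse in concentration rather than malpractice.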


Author(s):  
Reza Shoja Ghiass ◽  
Denis Laurendeau

This work addresses the problem of automatic head pose estimation and its application to 3D gaze estimation using low-quality RGB-D sensors, without any subject cooperation or manual intervention. Previous works on 3D head pose estimation using RGB-D sensors require either an offline step for supervised learning or the construction of a 3D head model, which may require manual intervention or subject cooperation for complete head model reconstruction. In this paper, we propose a 3D pose estimator based on low-quality depth data that is not limited by any of the aforementioned steps. Instead, the proposed technique relies on modeling the subject's face in 3D rather than the complete head, which in turn relaxes all of the constraints of the previous works. The proposed method is robust, highly accurate, and fully automatic, and it does not need any offline step. Unlike some of the previous works, the method uses only depth data for pose estimation. The experimental results on the Biwi head pose database confirm the efficiency of our algorithm in handling large pose variations and partial occlusion. We also evaluate the performance of our algorithm on the IDIAP database for 3D head pose and eye gaze estimation.
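A common building block for depth-only rigid pose estimation of the kind this abstract describes is aligning an observed 3D point set against a face model with a closed-form rigid registration (the Kabsch/Procrustes solution). The sketch below shows that building block only; it is an assumption for illustration, not the authors' algorithm, and the function name `rigid_align` is hypothetical.

```python
# Minimal Kabsch-style rigid alignment: estimate rotation R and translation t
# such that R @ model_i + t ≈ observed_i for corresponding 3D points.
import numpy as np


def rigid_align(model: np.ndarray, observed: np.ndarray):
    """model, observed: (N, 3) arrays of corresponding 3D points."""
    mu_m, mu_o = model.mean(axis=0), observed.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model - mu_m).T @ (observed - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

In a full pipeline, correspondences would come from matching the depth frame against the reconstructed face model (e.g., via nearest neighbors inside an ICP loop), and the recovered rotation would give the head pose angles.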

