Multimodal Depression Detection: Fusion Analysis of Paralinguistic, Head Pose and Eye Gaze Behaviors

2018 ◽  
Vol 9 (4) ◽  
pp. 478-490 ◽  
Author(s):  
Sharifa Alghowinem ◽  
Roland Goecke ◽  
Michael Wagner ◽  
Julien Epps ◽  
Matthew Hyett ◽  
...  
Author(s):  
Chirag S Indi ◽  
Varun Pritham ◽  
Vasundhara Acharya ◽  
Krishna Prakasha

Examination malpractice is a deliberate wrongdoing, contrary to official examination rules, designed to place a candidate at an unfair advantage or disadvantage. The proposed system depicts a new use of technology to identify malpractice in e-exams, which is essential given the growth of online education. Current solutions to this problem either require complete manual labor or have various vulnerabilities that an examinee can exploit. The proposed application encompasses an end-to-end system that, with the help of visual aids, assists an examiner/evaluator in deciding whether a student passed an online exam without any probable attempt at malpractice or cheating. The system works by categorizing the student's VFOA (visual focus of attention) data, captured as head pose and eye gaze estimates using state-of-the-art machine learning techniques. The system only requires the student (test-taker) to have a functioning internet connection and a webcam to transmit the feed. The examiner is alerted when the student's VFOA wavers from the screen more than X times, where X is a predefined threshold. Once this threshold is crossed, the application saves the data captured while the student's VFOA is off the screen and sends it to the examiner, who manually checks and marks whether the student's action was an attempt at malpractice or merely a momentary lapse in concentration. The system uses a hybrid classifier approach with two different classifiers: one is used when gaze values are being read successfully; when gaze reading fails (for various reasons, such as transmission quality or glare from the student's spectacles), the model falls back to a default classifier that reads only the head pose values. The resulting attention metric maps the student's VFOA to the likelihood of malpractice. The model achieved an accuracy of 96.04 percent in classifying the attention metric.
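The hybrid fallback and threshold-alert logic described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, feature representation (a single angle-like value per modality), and all thresholds are assumptions made for clarity.

```python
def classify_attention(head_pose, gaze=None):
    """Return True if the student's VFOA is judged to be on the screen.

    Hypothetical decision rules; the paper uses trained ML classifiers.
    """
    if gaze is not None:
        # Primary classifier: combined gaze + head-pose features.
        return abs(gaze) < 0.3 and abs(head_pose) < 0.5
    # Fallback classifier: head pose only, used when gaze reading
    # fails (e.g. transmission quality or glare from spectacles).
    return abs(head_pose) < 0.4

def monitor(frames, threshold_x=3):
    """Flag frame indices for manual review once off-screen events exceed X."""
    flagged = []
    off_screen_count = 0
    for i, (head_pose, gaze) in enumerate(frames):
        if not classify_attention(head_pose, gaze):
            off_screen_count += 1
            if off_screen_count > threshold_x:
                flagged.append(i)  # this frame's data would be sent to the examiner
    return flagged
```

For example, a feed with two attentive frames followed by five frames where gaze reading failed and the head is turned away would flag only the frames past the threshold: `monitor([(0.0, 0.0)] * 2 + [(1.0, None)] * 5)` returns `[5, 6]`.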


Author(s):  
Reza Shoja Ghiass ◽  
Denis Laurendeau

This work addresses the problem of automatic head pose estimation and its application in 3D gaze estimation using low-quality RGB-D sensors, without any subject cooperation or manual intervention. Previous works on 3D head pose estimation using RGB-D sensors require either an offline step for supervised learning or 3D head model construction, which may require manual intervention or subject cooperation for complete head model reconstruction. In this paper, we propose a 3D pose estimator based on low-quality depth data that is not limited by any of the aforementioned steps. Instead, the proposed technique relies on modeling the subject's face in 3D rather than the complete head, which in turn relaxes all of the constraints of the previous works. The proposed method is robust, highly accurate, and fully automatic. Moreover, it does not need any offline step. Unlike some of the previous works, the method uses only depth data for pose estimation. The experimental results on the Biwi head pose database confirm the efficiency of our algorithm in handling large pose variations and partial occlusion. We also evaluate the performance of our algorithm on the IDIAP database for 3D head pose and eye gaze estimation.


2008 ◽  
Vol 41 (3) ◽  
pp. 469-493 ◽  
Author(s):  
Stylianos Asteriadis ◽  
Paraskevi Tzouveli ◽  
Kostas Karpouzis ◽  
Stefanos Kollias

2021 ◽  
Vol 28 (1) ◽  
pp. 1-44
Author(s):  
Florian Mathis ◽  
John H. Williamson ◽  
Kami Vaniea ◽  
Mohamed Khamis

There is a growing need for usable and secure authentication in immersive virtual reality (VR). Established concepts (e.g., 2D authentication schemes) are vulnerable to observation attacks, and most alternatives are relatively slow. We present RubikAuth, an authentication scheme for VR where users authenticate quickly and securely by selecting digits from a virtual 3D cube that leverages coordinated 3D manipulation and pointing. We report results from three studies comparing how pointing using eye gaze, head pose, and controller tapping impact RubikAuth's usability, memorability, and observation resistance under three realistic threat models. We found that entering a four-symbol RubikAuth password is fast: 1.69–3.5 s using controller tapping, 2.35–4.68 s using head pose, and 2.39–4.92 s using eye gaze; it is also highly resilient to observations: 96–99.55% of observation attacks were unsuccessful. RubikAuth also has a large theoretical password space: 45^n for an n-symbol password. Our work underlines the importance of considering novel but realistic threat models beyond standard one-time attacks to fully assess the observation resistance of authentication schemes. We conclude with an in-depth discussion of authentication systems for VR and outline five lessons learned for designing and evaluating authentication schemes.
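The theoretical password space claimed above follows directly from 45 selectable symbols per entry: an n-symbol password yields 45^n combinations. A quick check of the figure for the four-symbol passwords evaluated in the study (the function name is illustrative):

```python
def password_space(n: int) -> int:
    """Number of possible RubikAuth passwords of length n,
    given 45 selectable symbols per entry (per the abstract)."""
    return 45 ** n

print(password_space(4))  # → 4100625
```

So a four-symbol password already admits over four million combinations, compared with 10^4 = 10,000 for a four-digit PIN.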

