Evaluation of a Mobile Head-Tracker Interface for Accessibility

Author(s):
Maria Francesca Roig-Maimó
Cristina Manresa-Yee
Javier Varona
I. Scott MacKenzie
2007
Author(s):
Phil Surman
Ian Sexton
Klaus Hopf
Richard Bates
Wing Kai Lee
...  

2019
Vol 6
pp. 205566831984130
Author(s):
Nahal Norouzi
Luke Bölling
Gerd Bruder
Greg Welch

Introduction: A large body of research in virtual reality focuses on making user interfaces more natural and intuitive by leveraging natural body movements to explore a virtual environment. For example, head-tracked user interfaces allow users to look around a virtual space naturally by moving their head. However, such approaches may not be appropriate for users with temporary or permanent limitations of their head movement. Methods: In this paper, we present techniques that allow these users to obtain the benefits of virtual rotation from a reduced range of physical movement. Specifically, we describe two techniques that augment virtual rotations relative to physical movement thresholds. Results: We describe how each of the two techniques can be implemented with either a head tracker or an eye tracker, e.g., in cases where no physical head rotations are possible. Conclusions: We discuss their differences and limitations, and we provide guidelines for the practical use of such augmented user interfaces.
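The abstract does not give the exact mapping, but a minimal sketch of the kind of threshold-based rotation amplification it describes might look as follows. The function name, threshold, gain, and clamp values are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical sketch: amplify virtual rotation once physical rotation
# exceeds a threshold, so a reduced physical range of head (or eye)
# movement can still cover the full virtual range.

def augmented_yaw(physical_yaw_deg: float,
                  threshold_deg: float = 15.0,
                  gain: float = 3.0,
                  max_virtual_deg: float = 180.0) -> float:
    """Map a physical yaw angle to a virtual yaw angle.

    Within +/- threshold_deg the mapping is 1:1; beyond the threshold,
    additional physical rotation is multiplied by `gain`. The result is
    clamped to +/- max_virtual_deg.
    """
    magnitude = abs(physical_yaw_deg)
    sign = 1.0 if physical_yaw_deg >= 0 else -1.0
    if magnitude <= threshold_deg:
        virtual = magnitude
    else:
        virtual = threshold_deg + (magnitude - threshold_deg) * gain
    return sign * min(virtual, max_virtual_deg)


if __name__ == "__main__":
    for yaw in (5.0, 15.0, 25.0, -40.0):
        print(f"physical {yaw:6.1f} deg -> virtual {augmented_yaw(yaw):7.1f} deg")
```

The same mapping could be driven by eye-tracker yaw instead of head yaw when head rotation is not possible, which is the substitution the abstract points to.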


2021
Vol 13 (1)
pp. 47-58
Author(s):
P. P. Debnath
M. G. Rashed
D. Das
M. R. Basar

The paper presents an approach that detects and tracks a suspect's focus of attention, using eye gaze and head movement direction, to build an automatic interrogation system intended in particular to detect lies. To this end, we classified interrogation conversations into different criteria and identified the critical ones. We first conducted psychological experiments on a sampled population to determine the parameters associated with the symptoms that appear when a suspect lies, and built our knowledge base from the results. This knowledge base helps the system make strategic decisions and optimize accuracy. A monitoring camera captures the interrogation continuously and feeds the frames to the proposed system. A 3D head tracker tracks the head in each image, and an Active Shape Model (ASM) localizes facial feature points. The Vector Field of Image Gradient (VFIG) is computed to track the eyeball and its rotation within the eye region. Erratic eye and head movements and changes in the eyebrows during the critical parts of the questionnaire indicate a possible lie. Finally, experiments were conducted in a controlled environment to validate our psychological findings.
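As a rough illustration of the decision step, the sketch below aggregates per-frame cues (gaze shift, head shift, eyebrow displacement) during critical questions into a simple suspicion score. The cue names, thresholds, and scoring rule are assumptions standing in for the paper's knowledge base, not the authors' actual method.

```python
# Illustrative rule-based cue aggregation for the interrogation pipeline.
# All thresholds and field names are hypothetical.

from dataclasses import dataclass


@dataclass
class FrameCues:
    gaze_shift_deg: float     # eye rotation between frames (e.g., from VFIG tracking)
    head_shift_deg: float     # head rotation between frames (e.g., from a 3D head tracker)
    eyebrow_raise: float      # normalized eyebrow displacement (e.g., from ASM landmarks)
    critical_question: bool   # whether the current question is a critical one


def suspicion_score(frames: list,
                    gaze_thresh: float = 8.0,
                    head_thresh: float = 10.0,
                    brow_thresh: float = 0.3) -> float:
    """Return the fraction of critical-question frames showing suspicious cues."""
    critical = [f for f in frames if f.critical_question]
    if not critical:
        return 0.0
    hits = sum(
        1 for f in critical
        if f.gaze_shift_deg > gaze_thresh
        or f.head_shift_deg > head_thresh
        or f.eyebrow_raise > brow_thresh
    )
    return hits / len(critical)


if __name__ == "__main__":
    sample = [
        FrameCues(2.0, 3.0, 0.1, False),   # neutral question, calm behavior
        FrameCues(12.0, 4.0, 0.4, True),   # critical question, restless gaze + eyebrow raise
        FrameCues(1.5, 14.0, 0.1, True),   # critical question, sudden head movement
    ]
    print(f"suspicion score: {suspicion_score(sample):.2f}")
```

A real system would replace the fixed thresholds with values learned from the psychological experiments that populate the knowledge base.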


Author(s):
Ten-Kai Kuo
Li-Chen Fu
Jong-Hann Jean
Pei-Ying Chen
Yu-Ming Chan
