Implicit Calibration Using Probable Fixation Targets


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 216 ◽  
Author(s):  
Pawel Kasprowski ◽  
Katarzyna Harȩżlak ◽  
Przemysław Skurowski

Proper calibration of the eye movement signal registered by an eye tracker seems to be one of the main challenges in popularizing eye trackers as yet another user-input device. Classic calibration methods, which take time and impose unnatural behavior on users, must be replaced by intelligent methods that are able to calibrate the signal without conscious cooperation by the user. Such an implicit calibration requires some knowledge about the stimulus a user is looking at and takes this information into account to predict probable gaze targets. This paper describes a possible method to perform implicit calibration: it starts by finding probable fixation targets (PFTs) and then uses these targets to build a mapping (a probable gaze path). Various algorithms that may be used for finding PFTs and mappings are presented in the paper, and errors are calculated using two datasets registered with two different types of eye trackers. The results show that although the implicit calibration currently provides worse results than the classic one, it may be comparable with it and sufficient for some applications.
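The target-based mapping described in this abstract can be sketched as a least-squares fit: assuming each uncalibrated fixation has already been assigned to a probable fixation target, an affine transform from raw gaze space to screen space falls out of a linear solve. This is an illustrative sketch only, not the paper's actual algorithm; the function names and the choice of an affine model are assumptions.

```python
import numpy as np

def fit_affine_calibration(raw_fixations, assigned_targets):
    """Least-squares affine map from raw gaze space to screen space.

    raw_fixations: (N, 2) array of uncalibrated fixation centers.
    assigned_targets: (N, 2) array of probable fixation targets (PFTs)
    assumed to be what the user actually looked at.
    Returns a (3, 2) affine matrix A such that [x, y, 1] @ A ~ target.
    """
    X = np.hstack([raw_fixations, np.ones((len(raw_fixations), 1))])  # add bias column
    # Solve X @ A ~= assigned_targets in the least-squares sense.
    A, *_ = np.linalg.lstsq(X, assigned_targets, rcond=None)
    return A

def apply_calibration(A, raw_points):
    """Map raw gaze points through the fitted affine calibration."""
    X = np.hstack([raw_points, np.ones((len(raw_points), 1))])
    return X @ A
```

In a full implicit-calibration loop one would alternate between assigning each fixation to its nearest PFT under the current mapping and refitting, since the correct assignments are themselves unknown.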


Healthcare ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Chong-Bin Tsai ◽  
Wei-Yu Hung ◽  
Wei-Yen Hsu

Optokinetic nystagmus (OKN) is an involuntary eye movement induced by motion of a large proportion of the visual field. It consists of a “slow phase (SP),” with eye movements in the same direction as the movement of the pattern, and a “fast phase (FP),” with saccadic eye movements in the opposite direction. The study of OKN can reveal valuable information in ophthalmology, neurology, and psychology. However, commercially available high-resolution, research-grade eye trackers are usually expensive. Methods & Results: We developed a novel, fast, and effective system combined with a low-cost eye-tracking device to measure OKN eye movements accurately and quantitatively. Conclusions: The experimental results indicate that the proposed method achieves fast and promising results in comparison with several traditional approaches.
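The SP/FP decomposition described above is commonly detected from sample-to-sample eye velocity; the fast phase is saccadic and therefore much faster than the pursuit-like slow phase. A minimal sketch of such a velocity-threshold labeler follows (the threshold value and function name are illustrative assumptions, not the paper's method):

```python
import numpy as np

def segment_okn_phases(positions, fs, sacc_thresh=30.0):
    """Label each sample of a 1-D horizontal eye trace as slow phase (SP)
    or fast phase (FP) using a simple velocity threshold.

    positions: eye position in degrees, sampled at fs Hz.
    sacc_thresh: velocity (deg/s) above which a sample counts as saccadic;
    an illustrative value -- real OKN work tunes this per device.
    """
    velocity = np.gradient(positions) * fs        # deg/s, central differences
    is_fast = np.abs(velocity) > sacc_thresh      # FP = fast saccadic return
    return np.where(is_fast, "FP", "SP")
```

Real pipelines add smoothing and minimum-duration constraints, since a raw threshold is noisy at phase boundaries.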


2014 ◽  
Vol 607 ◽  
pp. 664-668
Author(s):  
Zhi Hui Liu ◽  
Sheng Ze Wang ◽  
Qiong Shen ◽  
Jia Jun Feng

This study investigates the characteristics of eye movements while operating a flat knitting machine. To objectively evaluate the flat knitting machine's operation interface, we had participants complete operation tasks on the interface and then used an eye tracker to analyze and evaluate the layout design. By testing different layout designs, we obtained fixation sequences, fixation counts, heat maps, and fixation durations. The results showed that the layout design could significantly affect eye movements, especially the fixation sequences and heat maps, while fixation counts and fixation durations were mainly influenced by the operation tasks. Overall, data obtained from eye movements can not only be used to evaluate the operation interface but also to significantly enhance the layout design of the flat knitting machine.
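Fixation-based metrics such as fixation count and fixation duration are typically derived from raw gaze samples with a dispersion-based detector such as I-DT; a minimal sketch follows, with illustrative thresholds (this is a generic detector, not the study's own analysis pipeline):

```python
def idt_fixations(points, dispersion_thresh=0.05, min_samples=5):
    """Minimal I-DT (dispersion-threshold) fixation detector.

    points: list of (x, y) gaze samples in normalized screen coordinates.
    Thresholds are illustrative; real studies tune them to the device.
    Returns a list of (start_index, end_index) fixation windows, from
    which fixation count and duration follow directly.
    """
    def dispersion(window):
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i = 0
    while i + min_samples <= len(points):
        j = i + min_samples
        if dispersion(points[i:j]) <= dispersion_thresh:
            # Grow the window while the samples stay tightly clustered.
            while j < len(points) and dispersion(points[i:j + 1]) <= dispersion_thresh:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```

Fixation count is then `len(fixations)`, and each fixation's duration is its sample span divided by the sampling rate.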


Author(s):  
Shannon K. T. Bailey ◽  
Daphne E. Whitmer ◽  
Bradford L. Schroeder ◽  
Valerie K. Sims

Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, both imposing limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) to rate the naturalness of the gestures in the current study.
Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”


Author(s):  
James Kim

The purpose of this study was to examine factors that influence how people look at objects they will have to act upon while first watching others interact with them. We investigated whether including different types of task-relevant information in an observational learning task would result in participants adapting their gaze towards an object with more task-relevant information. The participant watched an actor simultaneously lift and replace two objects with two hands and was then cued to lift one of the two objects. The objects had the potential to change weight between trials. In our cue condition, participants were cued to lift one of the objects every time. In our object condition, the participants were cued equally to act on both objects; however, the weight of only one of the objects had the potential to change. The hypothesis in the cue condition was that the participant would look significantly more at the object being cued. The hypothesis for the object condition was that the participant would look significantly more at (i.e., adapt their gaze towards) the object changing weight. The rationale is that participants will learn to allocate their gaze significantly more towards that object so they can gain information about its properties (i.e., weight change). Pending results will indicate whether or not this occurred and will have implications for understanding eye movement sequences in visually guided behaviour tasks. The outcome of this study also has implications for the mechanisms of eye gaze with respect to social learning tasks.


2019 ◽  
Vol 10 (1) ◽  
pp. 14-18
Author(s):  
Lingfeng Wang

As a means of marketing communication, advertisements are widely applied in the course of enterprise operation. In practice, however, there are many problems with the effectiveness of specific advertisements, so testing and evaluating advertising effectiveness has important practical and theoretical significance. This paper therefore uses EEG and eye-tracking technology to study the EEG changes and eye movements of subjects viewing advertisements and to process and analyze the collected EEG and eye movement indexes. It is expected to provide advertisers with valuable advertising strategies based on the results of the EEG and eye movement experiments.


2012 ◽  
Vol 157-158 ◽  
pp. 410-414 ◽  
Author(s):  
Ji Feng Xu ◽  
Han Ning Zhang

The relationship between modern furniture color image and eye tracking has been of interest to academics and practitioners for many years. We propose and develop a new view and method for exploring these connections, utilizing data from a survey of 31 participants’ observed eye-tracking values. Using a Tobii X120 eye tracker to analyze eye movements over furniture samples in different hues and tones, we highlight the relative importance of the effect of furniture color on the human visual system and show the connections between furniture color features and color image.


Author(s):  
Sudheer Bayanker ◽  
Joshua D. Summers ◽  
Anand K. Gramopadhye

This paper presents an experimental investigation into input suitability for human-computer interaction during computer-aided design operations. Specifically, three types of operations (synthesis, interrogation, and modification) are examined with respect to three different types of user interfaces: mouse, direct tablet, and indirect tablet. The study, using undergraduate student participants in an introductory engineering graphics course, demonstrates that the mouse performs best on the dimensions of completion time and number of errors. However, the direct tablet, which uses a pen-like device directly on the visualization screen, shows promise.


Perception ◽  
2019 ◽  
Vol 48 (9) ◽  
pp. 835-849 ◽  
Author(s):  
Rongjuan Zhu ◽  
Xuqun You ◽  
Shuoqiu Gan ◽  
Jinwei Wang

Recently, it has been proposed that solving addition and subtraction problems can evoke horizontal shifts of spatial attention. However, prior to this study, it remained unclear whether orienting shifts of spatial attention relied on actual arithmetic processes (i.e., the activated magnitude) or the semantic spatial association of the operator. In this study, spatial–arithmetic associations were explored through three experiments using an eye tracker, which attempted to investigate the mechanism of those associations. Experiment 1 replicated spatial–arithmetic associations in addition and subtraction problems. Experiments 2 and 3 selected zero as the operand to investigate whether these arithmetic problems could induce shifts of spatial attention. Experiment 2 indicated that addition and subtraction problems (zero as the second operand, i.e., 2 + 0) do not induce shifts of spatial attention. Experiment 3 showed that addition and subtraction arithmetic (zero as the first operand, i.e., 0 + 2) do facilitate rightward and leftward eye movement, respectively. This indicates that the operator alone does not induce horizontal eye movement. However, our findings support the idea that solving addition and subtraction problems is associated with horizontal shifts of spatial attention.

