iMap4D: an Open Source Toolbox for Statistical Fixation Mapping of Eye-Tracking Data in Virtual Reality

2019 ◽  
Vol 19 (10) ◽  
pp. 127c
Author(s):  
Valentina Ticcinelli ◽  
Peter De Lissa ◽  
Denis Lalanne ◽  
Sebastien Miellet ◽  
Roberto Caldara

Author(s):  
Abner Cardoso da Silva ◽  
Cesar A. Sierra-Franco ◽  
Greis Francy M. Silva-Calpa ◽  
Felipe Carvalho ◽  
Alberto Barbosa Raposo

2019 ◽  
Vol 64 (2) ◽  
pp. 286-308
Author(s):  
El Mehdi Ibourk ◽  
Amer Al-Adwan

Abstract Recent years have witnessed the emergence of new approaches to filmmaking, including virtual reality (VR), which aims to achieve an immersive viewing experience through advanced electronic devices such as VR headsets. The VR industry develops content mainly in English and Japanese, leaving vast audiences unable to understand the original content, or even to enjoy this novel technology, because of language barriers. Using eye-tracking technology, this paper examines the impact of Arabic subtitles on the viewing experience and behaviour of eight Arab participants. It also provides insight into how viewers watch a 360-degree VR documentary and into the factors that lead them to favour one subtitling mode over another in the spherical environment. To this end, a case study was designed to produce 120-degree subtitles and Follow Head Immediately subtitles, followed by the projection of the subtitled documentary through an eye-tracking VR headset. The analysis of the eye-tracking data is combined with post-viewing interviews in order to better understand the viewing experience of the Arab audience, their cognitive reception and the reasons for favouring one type of subtitles over the other.


2016 ◽  
Vol 9 (4) ◽  
Author(s):  
Francesco Di Nocera ◽  
Claudio Capobianco ◽  
Simon Mastrangelo

This short paper describes an update of A Simple Tool For Examining Fixations (ASTEF), developed to facilitate the examination of eye-tracking data and to compute a spatial-statistics algorithm that has been validated as a measure of mental workload (namely, the Nearest Neighbor Index: NNI). The code is based on Matlab® 2013a and is currently distributed on the web as an open-source project. This implementation of ASTEF removes many functionalities included in the previous version that are no longer needed, given the wide availability of commercial and open-source software solutions for eye tracking. This makes it very easy to compute the NNI on eye-tracking data without the hassle of learning complicated tools. The software also features an export function for creating a time series of the NNI values computed on each minute of the recording. This feature is crucial, given that the spatial distribution of fixations must be used to test hypotheses about the time course of mental workload.
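The NNI itself is straightforward to sketch: it is the observed mean nearest-neighbour distance between fixations divided by the mean distance expected under complete spatial randomness over the viewing area. The following is a minimal Python sketch of that definition only (the paper's tool is Matlab-based; the function and parameter names here are illustrative, not ASTEF's API):

```python
import math

def nearest_neighbor_index(fixations, width, height):
    """Nearest Neighbor Index (NNI) for a set of (x, y) fixation points.

    NNI = mean observed nearest-neighbour distance divided by the
    expected distance under complete spatial randomness,
    0.5 * sqrt(A / n) for n points in an area A.
    Values below 1 indicate clustering; values above 1 indicate dispersion.
    """
    n = len(fixations)
    if n < 2:
        raise ValueError("need at least two fixations")
    # Mean distance from each fixation to its nearest neighbour (O(n^2)).
    total = 0.0
    for i, (xi, yi) in enumerate(fixations):
        nearest = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(fixations)
            if j != i
        )
        total += nearest
    observed = total / n
    expected = 0.5 * math.sqrt((width * height) / n)
    return observed / expected
```

For per-minute time series as in ASTEF's export function, the same computation would simply be applied to each one-minute window of fixations.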


2019 ◽  
Vol 13 (03) ◽  
pp. 329-341 ◽  
Author(s):  
Brendan John ◽  
Pallavi Raiturkar ◽  
Olivier Le Meur ◽  
Eakta Jain

Modeling and visualization of user attention in Virtual Reality (VR) is important for many applications, such as gaze prediction, robotics, retargeting, video compression, and rendering. Several methods have been proposed to model eye tracking data as saliency maps. We benchmark the performance of four such methods for 360° images. We provide a comprehensive analysis and implementations of these methods to assist researchers and practitioners. Finally, we make recommendations based on our benchmark analyses and the ease of implementation.
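A common first step shared by such methods is turning discrete fixations into a continuous saliency map by centring a Gaussian on each fixation and normalising. The sketch below illustrates that step only; it deliberately ignores the equirectangular distortion that 360° images require (which is part of what the benchmarked methods differ on), and all names are hypothetical:

```python
import math

def fixations_to_saliency(fixations, width, height, sigma=1.0):
    """Build a simple saliency map from (x, y) fixation points by summing
    an isotropic Gaussian centred on each fixation, then normalising the
    map so its peak value is 1.0."""
    sal = [[0.0] * width for _ in range(height)]
    two_sigma_sq = 2.0 * sigma * sigma
    for fx, fy in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                sal[y][x] += math.exp(-d2 / two_sigma_sq)
    peak = max(max(row) for row in sal)
    if peak > 0:
        sal = [[v / peak for v in row] for row in sal]
    return sal
```

For a real 360° pipeline, each Gaussian would instead be spread on the sphere (or the kernel stretched with latitude) before projecting back to the equirectangular image.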


2021 ◽  
Author(s):  
Tim Schneegans ◽  
Matthew D. Bachman ◽  
Scott A. Huettel ◽  
Hauke Heekeren

Recent developments in open-source online eye-tracking algorithms suggest that they may be ready for use in online studies, thereby overcoming the limitations of in-lab eye-tracking studies. However, to date there have been limited tests of the efficacy of online eye tracking for decision-making and cognitive psychology. In this online study, we explore the potential and the limitations of online eye-tracking tools for decision-making research using the webcam-based open-source library WebGazer (Papoutsaki et al., 2016). Our study had two aims. For our first aim, we assessed different variables that might affect the quality of eye-tracking data. In our experiment (N = 210) we measured a within-subjects variable of adding a provisional chin rest and a between-subjects variable of corrected vs. uncorrected vision. Contrary to our hypotheses, we found that the chin rest had a negative effect on data quality. In accordance with our hypotheses, we found lower-quality data in participants who wore glasses. Other influencing factors, such as frame rate, are discussed. For our second aim (N = 44), we attempted to replicate a decision-making paradigm in which eye-tracking data had been acquired using offline means (Amasino et al., 2019). We found some relations between choice behaviour and eye-tracking measures, such as the last fixation and the distribution of gaze points at the moment right before the choice. However, several effects could not be reproduced, such as the overall distribution of gaze points or dynamic search strategies, so our hypotheses find only partial support. This study offers practical insights into the feasibility of online eye tracking for decision-making research, both for decision scientists and for researchers from other disciplines.
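Data-quality variables of the kind assessed under the first aim, such as effective frame rate and sample-to-sample jitter, reduce to simple descriptive statistics over the recorded gaze stream. A minimal Python sketch of two standard metrics, not tied to WebGazer's actual API (function names and the millisecond-timestamp format are assumptions):

```python
import math

def sampling_rate_hz(timestamps_ms):
    """Effective sampling rate (Hz) from sample timestamps in milliseconds:
    number of inter-sample intervals divided by the recording duration."""
    if len(timestamps_ms) < 2:
        raise ValueError("need at least two samples")
    duration_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    return (len(timestamps_ms) - 1) / duration_s

def precision_rms(samples):
    """RMS of successive sample-to-sample distances for (x, y) gaze
    samples: a common precision metric; lower values mean less jitter."""
    if len(samples) < 2:
        raise ValueError("need at least two samples")
    sq_dists = [
        (x2 - x1) ** 2 + (y2 - y1) ** 2
        for (x1, y1), (x2, y2) in zip(samples, samples[1:])
    ]
    return math.sqrt(sum(sq_dists) / len(sq_dists))
```

In a webcam study, both metrics would typically be computed per participant during a validation phase and used as exclusion criteria before the main analysis.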


2021 ◽  
Vol 192 ◽  
pp. 2568-2575
Author(s):  
Leszek Bonikowski ◽  
Dawid Gruszczyński ◽  
Jacek Matulewski

2016 ◽  
Vol 9 (1) ◽  
pp. 131-144
Author(s):  
P.A. Marmalyuk ◽  
G.A. Yuryev ◽  
A.V. Zhegallo ◽  
B.Yu. Polyakov ◽  
A.S. Panfilova

This article describes a free, extensible, open-source software system designed for eye-tracking data analysis. The authors examine not only the main methods and functions of the system core, which address gaze-data import, data analysis (filtering, smoothing, oculomotor event detection, estimation of event characteristics, and more) and visualization, but also scheduled improvements to the system's functional features.


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing. Eye-tracking data have previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word predictions, past studies have usually preferred a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a large, natural and coherent discourse as stimuli in collecting reading-time data. It trains two state-of-the-art computational models (surprisal, and semantic (dis)similarity from word vectors obtained by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and paradigmatic structure of language. We develop a `dynamic approach' to computing semantic (dis)similarity; this is the first time these two computational models have been merged. The models are evaluated using advanced statistical methods. Meanwhile, to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word-vector data is used for comparison with our `dynamic' approach. The two computational models and the fixed-effect statistical models can be used to cross-verify the findings, thus ensuring that the results are reliable.
All results support the view that surprisal and semantic similarity make opposed contributions to predicting the reading time of words, although both make good predictions. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore of significance for gaining a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
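The baseline cosine method referred to above reduces to a standard vector computation: the dissimilarity of two word vectors is one minus the cosine of the angle between them. A minimal Python sketch of that baseline only (the study's `dynamic' approach is not reproduced here, and the function name is illustrative):

```python
import math

def cosine_dissimilarity(u, v):
    """Cosine dissimilarity (1 - cosine similarity) between two word
    vectors: 0.0 for vectors pointing the same way, values near 1.0
    for unrelated (orthogonal) vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        raise ValueError("zero vector has no direction")
    return 1.0 - dot / (norm_u * norm_v)
```

In a reading-time analysis, such per-word dissimilarity scores would enter a regression model alongside surprisal as predictors of fixation durations.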


2015 ◽  
Vol 23 (9) ◽  
pp. 1508
Author(s):  
Qiandong WANG ◽  
Qinggong LI ◽  
Kaikai CHEN ◽  
Genyue FU
