Affective computing with eye-tracking data in the study of the visual perception of architectural spaces

2019 ◽  
Vol 252 ◽  
pp. 03021
Author(s):  
Magdalena Chmielewska ◽  
Mariusz Dzieńkowski ◽  
Jacek Bogucki ◽  
Wojciech Kocki ◽  
Bartłomiej Kwiatkowski ◽  
...  

In the presented study, the usefulness of eye-tracking data for classifying architectural spaces as stressful or relaxing was examined. Eye movement and pupillary response data were collected with an eye tracker from 202 adult volunteers in a well-controlled laboratory experiment. Twenty features were extracted from the eye-tracking data and, after feature selection, used in automated binary classification with a variety of machine learning classifiers, including neural networks. Classification using the eye-tracking features yielded a 68% accuracy score, which can be considered satisfactory. Moreover, statistical analysis showed statistically significant differences in eye activity patterns between visualisations labelled as stressful and those labelled as relaxing.
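As a hedged illustration of the kind of binary classification the study performs, the sketch below trains a toy nearest-centroid classifier on two invented eye-tracking features; the study itself used twenty features and stronger classifiers, including neural networks.

```python
# Toy sketch of binary classification from eye-tracking features.
# Feature names and values are hypothetical, not the study's data.

def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def fit(X, y):
    """Compute one centroid per class label (0 = relaxing, 1 = stressful)."""
    return {label: centroid([x for x, t in zip(X, y) if t == label])
            for label in set(y)}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: dist(model[label]))

# Invented features: [mean fixation duration (ms), mean pupil diameter (mm)]
X = [[210.0, 3.1], [220.0, 3.0], [310.0, 4.2], [330.0, 4.4]]
y = [0, 0, 1, 1]
model = fit(X, y)
print(predict(model, [300.0, 4.0]))  # → 1 (resembles the "stressful" class)
```

In practice one would use an established library classifier; the point here is only the pipeline shape: feature vectors in, a binary label out.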

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual can be determined by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
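A minimal sketch of the late-fusion idea, assuming each modality outputs per-class probabilities; the weights and probability values here are invented, and the paper's actual fusion of a shallow CNN with an eye-tracking classifier is more elaborate.

```python
# Hedged sketch of multimodal late fusion: combine two per-class
# probability vectors with a weighted average, then renormalise.

def late_fusion(p_eeg, p_eye, w_eeg=0.6, w_eye=0.4):
    """Weighted average of per-class probabilities from two modalities."""
    fused = [w_eeg * a + w_eye * b for a, b in zip(p_eeg, p_eye)]
    s = sum(fused)
    return [p / s for p in fused]

# Classes: [target is real, target is virtual] -- values invented
p_eeg = [0.55, 0.45]   # EEG classifier alone is uncertain
p_eye = [0.20, 0.80]   # eye tracking strongly suggests "virtual"
fused = late_fusion(p_eeg, p_eye)
print(fused.index(max(fused)))  # → 1, the fused decision ("virtual")
```

Late fusion leaves each modality's classifier untouched and combines only their outputs, which is why it can be bolted onto an existing EEG model without retraining it.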


2016 ◽  
Vol 106 (5) ◽  
pp. 309-313 ◽  
Author(s):  
Joanna N. Lahey ◽  
Douglas Oxley

Eye tracking is a technology that tracks eye activity including how long and where a participant is looking. As eye tracking technology has improved and become more affordable its use has expanded. We discuss how to design, implement, and analyze an experiment using this technology to study economic theory. Using our experience fielding an experiment to study hiring decisions we guide the reader through how to choose an eye tracker, concerns with participants and set-up, types of outputs, limitations of eye tracking, data management and data analysis. We conclude with suggestions for combining eye tracking with other measurements.


Author(s):  
Jon W. Carr ◽  
Valentina N. Pescuma ◽  
Michele Furlan ◽  
Maria Ktori ◽  
Davide Crepaldi

A common problem in eye-tracking research is vertical drift—the progressive displacement of fixation registrations on the vertical axis that results from a gradual loss of eye-tracker calibration over time. This is particularly problematic in experiments that involve the reading of multiline passages, where it is critical that fixations on one line are not erroneously recorded on an adjacent line. Correction is often performed manually by the researcher, but this process is tedious, time-consuming, and prone to error and inconsistency. Various methods have previously been proposed for the automated, post hoc correction of vertical drift in reading data, but these methods vary greatly, not just in terms of the algorithmic principles on which they are based, but also in terms of their availability, documentation, implementation languages, and so forth. Furthermore, these methods have largely been developed in isolation with little attempt to systematically evaluate them, meaning that drift correction techniques are moving forward blindly. We document ten major algorithms, including two that are novel to this paper, and evaluate them using both simulated and natural eye-tracking data. Our results suggest that a method based on dynamic time warping offers great promise, but we also find that some algorithms are better suited than others to particular types of drift phenomena and reading behavior, allowing us to offer evidence-based advice on algorithm selection.
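The dynamic-time-warping idea the evaluation favours can be sketched as follows: fixation y-coordinates are aligned to the expected sequence of text-line positions, and each fixation is snapped to its aligned line. This is a simplified illustration, not the authors' implementation.

```python
# Simplified DTW-based vertical drift correction: align drifting fixation
# y-coordinates to the known y-positions of the text lines.

def dtw_assign(fix_y, line_y):
    """Return, for each fixation, the index of its aligned text line."""
    n, m = len(fix_y), len(line_y)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(fix_y[i - 1] - line_y[j - 1])
            cost[i][j] = d + min(cost[i-1][j], cost[i][j-1], cost[i-1][j-1])
    # Trace back the cheapest warping path to recover line assignments.
    i, j, assign = n, m, [0] * n
    while i > 0:
        assign[i - 1] = j - 1
        steps = [(cost[i-1][j-1], i-1, j-1), (cost[i-1][j], i-1, j),
                 (cost[i][j-1], i, j-1)]
        _, i, j = min(steps)
    return assign

# Three text lines at y = 100, 140, 180; fixations drift downwards.
fix_y = [101, 103, 108, 144, 150, 154, 188, 195]
line_y = [100, 140, 180]
print(dtw_assign(fix_y, line_y))  # → [0, 0, 0, 1, 1, 1, 2, 2]
```

The monotonic warping path is what makes DTW attractive here: reading proceeds line by line, so fixations can be forced onto a non-decreasing sequence of lines even when drift pushes them toward a neighbour.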


Author(s):  
Lim Jia Zheng et al.

Eye-tracking technology has recently become popular and is widely used in emotion recognition research owing to its usability. In this paper, we present a preliminary investigation of a novel approach to detecting emotions from eye-tracking data in virtual reality (VR), classifying the four quadrants of emotion according to Russell's circumplex model of affect. A presentation of 360° videos is used as the experimental stimulus to evoke the user's emotions in VR. An add-on eye tracker within the VR headset is used to record and collect the eye-tracking data. Fixation data is extracted and chosen as the eye feature for this investigation. The machine learning classifier is a support vector machine (SVM) with a radial basis function (RBF) kernel. The best classification accuracy achieved is 69.23%. The findings show that emotion classification using fixation data achieves promising prediction accuracy compared with four-class random classification.



Author(s):  
Vitaliy Lyudvichenko ◽  
Dmitriy Vatolin

This paper presents a new way of obtaining high-quality saliency maps for video, using a cheaper alternative to eye-tracking data. We designed a mouse-contingent video viewing system that simulates the viewer's peripheral vision based on the position of the mouse cursor. The system enables the use of mouse-tracking data recorded with an ordinary computer mouse as an alternative to real gaze fixations recorded by a more expensive eye tracker. We developed a crowdsourcing system that enables the collection of such mouse-tracking data at large scale. We showed that the collected mouse-tracking data can serve as an approximation of eye-tracking data. Moreover, to increase the efficiency of the collected mouse-tracking data, we propose a novel deep neural network algorithm that improves the quality of mouse-tracking saliency maps.
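One simple way to turn tracked cursor positions into a saliency map, sketched below, is to accumulate a small Gaussian at each sample point; the grid size, sigma, and sample points are illustrative, and the paper's pipeline (including its neural post-processing) is far more involved.

```python
# Sketch: density-style saliency map from tracked (x, y) sample points.
import math

def saliency_map(points, width, height, sigma=2.0):
    """Sum an isotropic Gaussian centred on each (x, y) sample."""
    sal = [[0.0] * width for _ in range(height)]
    for px, py in points:
        for y in range(height):
            for x in range(width):
                d2 = (x - px) ** 2 + (y - py) ** 2
                sal[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return sal

points = [(5, 5), (6, 5), (20, 10)]   # two clustered samples and one outlier
sal = saliency_map(points, 32, 16)
peak = max((v, x, y) for y, row in enumerate(sal) for x, v in enumerate(row))
print((peak[1], peak[2]))  # peak lies in the two-sample cluster, not at the outlier
```

The same accumulation works whether the points come from an eye tracker or a mouse, which is exactly why mouse-tracking can stand in for gaze data at scale.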


Author(s):  
Juni Nurma Sari ◽  
Lukito Edi Nugroho ◽  
Paulus Insap Santosa ◽  
Ridi Ferdiana

E-commerce can be used to increase companies' or sellers' profits, and it can help consumers shop faster. A weakness of e-commerce is that the catalog presents so much product information that consumers become confused. One solution is to provide product recommendations. With the development of sensor technology, an eye tracker can capture user attention during shopping. User attention was used as data on consumer interest in a product, in the form of fixation duration following the Bojko taxonomy. The fixation duration data was processed into product purchase prediction data, indicating consumers' desire to buy the products, using the Chandon method. Both data sources could be used as variables for product recommendations based on eye-tracking data. The implementation was an eye-tracking experiment at selvahouse.com, which sells hijab and women's modest wear. The result was a list of products that have similarities to other products. The product recommendation method used was item-to-item collaborative filtering. The novelty of this research is the use of eye-tracking data, namely fixation duration and product purchase prediction data, as variables for product recommendations. Product recommendations produced from eye-tracking data can address two common recommendation problems: sparsity and cold start.
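A hedged sketch of item-to-item collaborative filtering in which the usual explicit rating is replaced by total fixation duration per product; the users, products, and durations below are invented for illustration.

```python
# Item-to-item collaborative filtering on attention data: products whose
# fixation-duration profiles across users are similar get recommended together.
import math

# rows = users, columns = products A, B, C (total fixation duration in ms)
durations = {
    "user1": {"A": 900, "B": 850, "C": 60},
    "user2": {"A": 700, "B": 760, "C": 40},
    "user3": {"A": 120, "B": 90,  "C": 980},
}

def item_vector(item):
    """Per-user fixation durations for one product, in a fixed user order."""
    return [durations[u].get(item, 0) for u in sorted(durations)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(item, candidates):
    """Rank candidate products by cosine similarity to `item`."""
    v = item_vector(item)
    return max(candidates, key=lambda c: cosine(v, item_vector(c)))

print(most_similar("A", ["B", "C"]))  # → B: users who dwelt on A also dwelt on B
```

Because fixation durations are produced implicitly by every browsing session, this sidesteps the sparsity and cold-start problems that explicit ratings suffer from, which is the paper's central claim.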


Author(s):  
Ali Shahidi Zandi ◽  
Azhar Quddus ◽  
Laura Prest ◽  
Felix J. E. Comeau

Drowsy driving is one of the leading causes of motor vehicle accidents in North America. This paper presents the use of eye tracking data as a non-intrusive measure of driver behavior for detection of drowsiness. Eye tracking data were acquired from 53 subjects in a simulated driving experiment, whereas the simultaneously recorded multichannel electroencephalogram (EEG) signals were used as the baseline. A random forest (RF) and a non-linear support vector machine (SVM) were employed for binary classification of the state of vigilance. Different lengths of eye tracking epoch were selected for feature extraction, and the performance of each classifier was investigated for every epoch length. Results revealed a high accuracy for the RF classifier in the range of 88.37% to 91.18% across all epoch lengths, outperforming the SVM with 77.12% to 82.62% accuracy. A feature analysis approach was presented and top eye tracking features for drowsiness detection were identified. Altogether, this study showed a high correspondence between the extracted eye tracking features and EEG as a physiological measure of vigilance and verified the potential of these features along with a proper classification technique, such as the RF, for non-intrusive long-term assessment of drowsiness in drivers. This research would ultimately lead to development of technologies for real-time assessment of the state of vigilance, providing early warning of fatigue and drowsiness in drivers.
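The epoch-based feature extraction described above can be sketched as follows; the sampling rate, epoch length, and single pupil-diameter feature are illustrative stand-ins for the study's larger feature set and its comparison of several epoch lengths.

```python
# Split a continuous eye-tracking recording into fixed-length epochs and
# extract one feature per epoch (here: mean pupil diameter, values invented).

def epochs(samples, rate_hz, epoch_s):
    """Yield consecutive non-overlapping windows of epoch_s seconds."""
    size = int(rate_hz * epoch_s)
    for start in range(0, len(samples) - size + 1, size):
        yield samples[start:start + size]

def mean(xs):
    return sum(xs) / len(xs)

pupil = [3.0, 3.1, 3.2, 3.1, 2.9, 2.8, 2.7, 2.6]  # 8 samples at 2 Hz
features = [round(mean(e), 2) for e in epochs(pupil, rate_hz=2, epoch_s=2)]
print(features)  # → [3.1, 2.75]
```

Varying `epoch_s` trades responsiveness against feature stability, which is why the study reports classifier accuracy separately for each epoch length.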


2021 ◽  
Author(s):  
Jasmin L. Walter ◽  
Lucas Essmann ◽  
Sabine U. König ◽  
Peter König

Vision provides the most important sensory information for spatial navigation. Recent technical advances make it possible to conduct more naturalistic experiments in virtual reality (VR) while additionally gathering data on viewing behavior through eye tracking. Here, we propose a method that quantifies characteristics of visual behavior by using graph-theoretical measures to abstract eye-tracking data recorded in a 3D virtual urban environment. The analysis is based on eye-tracking data from 20 participants who freely explored the virtual city Seahaven for 90 minutes with an immersive VR headset with a built-in eye tracker. To extract what participants looked at, we defined “gaze” events, from which we created gaze graphs. To these, we applied graph-theoretical measures to reveal the underlying structure of visual attention. Applying graph partitioning, we found that our virtual environment could be treated as one coherent city. To investigate the importance of houses in the city, we applied the node degree centrality measure. Our results revealed that 10 houses had a node degree that consistently exceeded the mean node degree of all other houses by more than two standard deviations. The importance of these houses was supported by the hierarchy index, which showed a clear hierarchical structure of the gaze graphs. As these high-node-degree houses fulfilled several characteristics of landmarks, we named them “gaze-graph-defined landmarks”. Applying the rich club coefficient, we found that these gaze-graph-defined landmarks were preferentially connected to each other and that participants spent the majority of the experiment time in areas where at least two of these houses were visible. Our findings not only provide new experimental evidence for the development of spatial knowledge, but also establish a new methodology to identify and assess the function of landmarks in spatial navigation based on eye-tracking data.
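The two-sigma node-degree criterion for landmark houses can be sketched on a toy gaze graph; the graph below is invented and far smaller than the Seahaven data.

```python
# Toy sketch: houses are nodes, an edge links houses gazed at in succession,
# and a house is flagged as a "gaze-graph-defined landmark" if its node degree
# exceeds the mean degree of the remaining houses by more than two standard
# deviations.
import statistics

edges = [("hub", "h1"), ("hub", "h2"), ("hub", "h3"), ("hub", "h4"),
         ("hub", "h5"), ("h1", "h2"), ("h3", "h4")]

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def landmarks(deg):
    out = []
    for node, d in deg.items():
        rest = [v for n, v in deg.items() if n != node]
        if d > statistics.mean(rest) + 2 * statistics.stdev(rest):
            out.append(node)
    return out

print(landmarks(degrees(edges)))  # → ['hub']
```

Only the heavily connected "hub" node clears the two-sigma threshold, mirroring how a handful of highly gazed-at houses stand out against the rest of the city.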


Author(s):  
Karim Fayed ◽  
Birgit Franken ◽  
Kay Berkling

The iRead EU Project has released literacy games for Spanish, German, Greek, and English for L1 and L2 acquisition. In order to understand the impact of these games on reading skills for L1 German pupils, the authors employed an eye-tracking recording of pupils’ readings on a weekly basis as part of an after-school reading club. This work seeks to first understand how to interpret the eye-tracker data for such a study. Five pupils participated in the project and read short texts over the course of five weeks. The resulting data set was extensive enough to perform preliminary analysis on how to use the eye-tracking data to provide information on skill acquisition looking at pupils’ reading accuracy and speed. Given our set-up, we can show that the eye-tracker is accurate enough to measure relative reading speed between long and short vowels for selected 2-syllable words. As a result, eye-tracking data can visualize three different types of beginning readers: memorizers, pattern learners, and those with reading problems.

