Product Recommendation Based on Eye Tracking Data Using Fixation Duration

Author(s):  
Juni Nurma Sari ◽  
Lukito Edi Nugroho ◽  
Paulus Insap Santosa ◽  
Ridi Ferdiana

E-commerce can increase companies' or sellers' profits, and it helps consumers shop faster. Its weakness is that catalogs present so much product information that consumers become confused. One solution is to provide product recommendations. With advances in sensor technology, an eye tracker can capture user attention while shopping. This attention was used as data on consumer interest in a product, in the form of fixation duration following Bojko's taxonomy. The fixation duration data was then processed into product purchase prediction data, using Chandon's method, to gauge consumers' desire to buy the products. Both kinds of data served as variables for making product recommendations based on eye tracking data. The implementation was an eye tracking experiment at selvahouse.com, which sells hijab and women's modest wear. The result was a list of products that have similarities to other products, generated with item-to-item collaborative filtering. The novelty of this research is the use of eye tracking data, namely fixation duration and product purchase prediction data, as variables for product recommendation. Recommendations produced from eye tracking data can address two classic problems of recommender systems: sparsity and cold start.
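The abstract's final step can be sketched as item-to-item collaborative filtering in which fixation durations stand in for explicit ratings. This is a minimal illustration, not the authors' implementation; the product names, users, and durations are invented.

```python
# Item-to-item collaborative filtering with fixation durations (ms)
# used as implicit ratings. All data below is illustrative.
from math import sqrt

# user -> {product: total fixation duration in ms}
fixations = {
    "u1": {"hijab_a": 1200, "hijab_b": 900, "dress_c": 100},
    "u2": {"hijab_a": 1100, "hijab_b": 950},
    "u3": {"dress_c": 800, "hijab_b": 200},
}

def item_vector(item):
    """Vector of fixation durations for one item, indexed by user."""
    return {u: prods[item] for u, prods in fixations.items() if item in prods}

def cosine(item_x, item_y):
    """Cosine similarity over users who looked at both items."""
    vx, vy = item_vector(item_x), item_vector(item_y)
    common = vx.keys() & vy.keys()
    if not common:
        return 0.0
    dot = sum(vx[u] * vy[u] for u in common)
    nx = sqrt(sum(v * v for v in vx.values()))
    ny = sqrt(sum(v * v for v in vy.values()))
    return dot / (nx * ny)

def recommend(item, k=2):
    """Items most similar to `item`, ranked by cosine similarity."""
    items = {p for prods in fixations.values() for p in prods} - {item}
    return sorted(items, key=lambda other: cosine(item, other), reverse=True)[:k]

print(recommend("hijab_a"))  # most-similar products first
```

Here "hijab_b" ranks first because the two users who looked long at "hijab_a" also looked long at it, which is exactly the similarity signal item-to-item filtering exploits.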

2016 ◽  
Vol 106 (5) ◽  
pp. 309-313 ◽  
Author(s):  
Joanna N. Lahey ◽  
Douglas Oxley

Eye tracking is a technology that tracks eye activity, including how long and where a participant is looking. As eye tracking technology has improved and become more affordable, its use has expanded. We discuss how to design, implement, and analyze an experiment using this technology to study economic theory. Drawing on our experience fielding an experiment on hiring decisions, we guide the reader through choosing an eye tracker, concerns with participants and set-up, types of outputs, limitations of eye tracking, data management, and data analysis. We conclude with suggestions for combining eye tracking with other measurements.


Author(s):  
Jon W. Carr ◽  
Valentina N. Pescuma ◽  
Michele Furlan ◽  
Maria Ktori ◽  
Davide Crepaldi

A common problem in eye-tracking research is vertical drift—the progressive displacement of fixation registrations on the vertical axis that results from a gradual loss of eye-tracker calibration over time. This is particularly problematic in experiments that involve the reading of multiline passages, where it is critical that fixations on one line are not erroneously recorded on an adjacent line. Correction is often performed manually by the researcher, but this process is tedious, time-consuming, and prone to error and inconsistency. Various methods have previously been proposed for the automated, post hoc correction of vertical drift in reading data, but these methods vary greatly, not just in terms of the algorithmic principles on which they are based, but also in terms of their availability, documentation, implementation languages, and so forth. Furthermore, these methods have largely been developed in isolation with little attempt to systematically evaluate them, meaning that drift correction techniques are moving forward blindly. We document ten major algorithms, including two that are novel to this paper, and evaluate them using both simulated and natural eye-tracking data. Our results suggest that a method based on dynamic time warping offers great promise, but we also find that some algorithms are better suited than others to particular types of drift phenomena and reading behavior, allowing us to offer evidence-based advice on algorithm selection.
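The simplest idea in this family of drift-correction algorithms can be sketched as snapping each fixation's vertical coordinate to the nearest known line of text. This is only an illustration of the basic principle, not any of the paper's ten algorithms in full; coordinates and line positions are invented.

```python
# Minimal drift-correction sketch: snap each fixation's y coordinate
# to the nearest line of text. All coordinates are illustrative.
def correct_drift(fixations, line_ys):
    """Replace each fixation's y with the closest known line position.

    fixations: list of (x, y) pixel coordinates
    line_ys:   vertical positions of the text lines
    """
    corrected = []
    for x, y in fixations:
        nearest = min(line_ys, key=lambda ly: abs(ly - y))
        corrected.append((x, nearest))
    return corrected

lines = [100, 150, 200]  # vertical positions of three text lines
raw = [(40, 104), (90, 112), (55, 149), (120, 166), (60, 210)]
print(correct_drift(raw, lines))
# drifted fixations are pulled back onto their nearest line
```

A nearest-line rule like this fails precisely when drift grows past half the line spacing, which is why the paper evaluates more sophisticated approaches such as dynamic time warping.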


Author(s):  
Lim Jia Zheng et al.

Eye-tracking technology has recently become popular and is widely used in emotion recognition research owing to its usability. In this paper, we present a preliminary investigation of a novel approach to detecting emotions from eye-tracking data in virtual reality (VR), classifying the four quadrants of emotion according to Russell's circumplex model of affect. 360° videos are used as the experimental stimuli to evoke the user's emotions in VR. An add-on eye tracker within the VR headset records and collects the eye-tracking data. Fixation data is extracted and chosen as the eye feature used in this investigation. The machine learning classifier is a support vector machine (SVM) with a radial basis function (RBF) kernel. The best classification accuracy achieved is 69.23%. The findings show that emotion classification using fixation data yields promising prediction accuracy for a four-class problem, well above the 25% chance level.
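Before features like these can feed a classifier, raw gaze samples have to be grouped into fixations. A common way to do this is a dispersion-threshold detector; the sketch below is a generic illustration of that idea, not the authors' pipeline, and the thresholds and gaze samples are invented.

```python
# Dispersion-threshold (I-DT style) fixation detection: consecutive
# gaze samples that stay within a small spatial window form one
# fixation. Thresholds and samples below are illustrative.
def detect_fixations(samples, max_dispersion=20, min_samples=3):
    """Group consecutive gaze samples into fixations.

    samples: list of (x, y) gaze points
    returns: list of (centroid_x, centroid_y, n_samples) fixations
    """
    fixations, window = [], []
    for pt in samples:
        window.append(pt)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            done = window[:-1]  # the new point broke the dispersion limit
            if len(done) >= min_samples:
                fixations.append((sum(p[0] for p in done) / len(done),
                                  sum(p[1] for p in done) / len(done),
                                  len(done)))
            window = [pt]
    if len(window) >= min_samples:
        fixations.append((sum(p[0] for p in window) / len(window),
                          sum(p[1] for p in window) / len(window),
                          len(window)))
    return fixations

gaze = [(100, 100), (102, 101), (101, 99), (103, 100),  # fixation 1
        (300, 250), (301, 252), (299, 251)]              # fixation 2
print(detect_fixations(gaze))
```

From fixations like these one can derive counts, durations, and dispersions as the feature vector handed to an SVM.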




Author(s):  
Vitaliy Lyudvichenko ◽  
Dmitriy Vatolin

This paper presents a new way of obtaining high-quality saliency maps for video using a cheaper alternative to eye-tracking data. We designed a mouse-contingent video viewing system that simulates the viewer's peripheral vision based on the position of the mouse cursor. The system enables mouse-tracking data recorded with an ordinary computer mouse to be used in place of real gaze fixations recorded by a more expensive eye tracker. We developed a crowdsourcing system that enables large-scale collection of such mouse-tracking data, and we showed that the collected data can serve as an approximation of eye-tracking data. Moreover, to make better use of the collected mouse-tracking data, we propose a novel deep neural network algorithm that improves the quality of mouse-tracking saliency maps.
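Whether the tracked points come from a mouse or an eye tracker, the standard way to turn them into a saliency map is to accumulate a Gaussian blob at each point. The sketch below illustrates that common construction only; grid size, sigma, and points are invented.

```python
# Build a saliency map by summing a Gaussian blob at each tracked
# fixation point. Grid size, sigma, and points are illustrative.
from math import exp

def saliency_map(points, width, height, sigma=2.0):
    """Sum an isotropic Gaussian centred on each (x, y) point,
    then normalise the map to [0, 1]."""
    grid = [[0.0] * width for _ in range(height)]
    for px, py in points:
        for y in range(height):
            for x in range(width):
                d2 = (x - px) ** 2 + (y - py) ** 2
                grid[y][x] += exp(-d2 / (2 * sigma ** 2))
    # normalise so maps from different viewers are comparable
    peak = max(max(row) for row in grid)
    return [[v / peak for v in row] for row in grid]

smap = saliency_map([(3, 3), (12, 8)], width=16, height=12)
print(smap[3][3], smap[8][12])  # salience peaks at the tracked points
```

In practice the map is built at image resolution with a sigma matched to the viewer's foveal extent, but the structure is the same.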


Author(s):  
Nahumi Nugrahaningsih ◽  
Marco Porta ◽  
Aleksandra Klasnja-Milicevic

Adapting the presentation of learning material to a specific student's characteristics can improve the overall learning experience, and learning styles can play an important role in this. In this paper, we investigate the possibility of distinguishing between Visual and Verbal learning styles from gaze data. In an experiment involving first-year students of an engineering faculty, content on the basics of programming was presented in both text and graphic form, and participants' gaze data was recorded with an eye tracker. Three metrics were selected to characterize the user's gaze behavior: percentage of fixation duration, percentage of fixations, and average fixation duration. Percentages were calculated over ten intervals into which each participant's interaction time was subdivided, allowing time-based assessments. The results showed a significant relation between gaze data and Visual/Verbal learning styles for an information arrangement in which the same concept is presented graphically on the left and as text on the right. We believe this study makes a useful contribution to learning styles research carried out with eye tracking technology, as it has unique traits not found in similar investigations.
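The three per-interval metrics named above can be computed as sketched below. This is a generic illustration, assuming fixations labelled with the on-screen area ("graphic" vs "text") they landed on; the data and field layout are invented, not the study's.

```python
# Per-interval gaze metrics: percentage of fixation duration,
# percentage of fixations, and average fixation duration, computed
# over ten equal time intervals. Data below is illustrative.
def interval_metrics(fixations, total_time, n_intervals=10):
    """fixations: list of (onset_ms, duration_ms, area),
    area in {'graphic', 'text'}; returns per-interval metrics
    for the 'graphic' area."""
    step = total_time / n_intervals
    metrics = []
    for i in range(n_intervals):
        start, end = i * step, (i + 1) * step
        in_win = [f for f in fixations if start <= f[0] < end]
        graphic = [f for f in in_win if f[2] == "graphic"]
        total_dur = sum(f[1] for f in in_win)
        metrics.append({
            "pct_duration": 100 * sum(f[1] for f in graphic) / total_dur
                            if total_dur else 0.0,
            "pct_fixations": 100 * len(graphic) / len(in_win)
                             if in_win else 0.0,
            "avg_duration": sum(f[1] for f in graphic) / len(graphic)
                            if graphic else 0.0,
        })
    return metrics

fx = [(0, 200, "graphic"), (250, 300, "text"), (5200, 150, "graphic")]
m = interval_metrics(fx, total_time=10000)
print(m[0]["pct_duration"], m[5]["avg_duration"])
```

Tracking these values interval by interval is what makes time-based comparisons between Visual and Verbal learners possible.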


2019 ◽  
Vol 13 (03) ◽  
pp. 329-341 ◽  
Author(s):  
Brendan John ◽  
Pallavi Raiturkar ◽  
Olivier Le Meur ◽  
Eakta Jain

Modeling and visualization of user attention in Virtual Reality (VR) is important for many applications, such as gaze prediction, robotics, retargeting, video compression, and rendering. Several methods have been proposed to model eye tracking data as saliency maps. We benchmark the performance of four such methods for 360° images. We provide a comprehensive analysis and implementations of these methods to assist researchers and practitioners. Finally, we make recommendations based on our benchmark analyses and the ease of implementation.
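Benchmarks like this compare a predicted saliency map against a ground-truth map with standard metrics; one of the most common is the linear correlation coefficient (CC). The sketch below shows that metric in isolation, on invented flattened maps; it is not necessarily the scoring code this particular benchmark used.

```python
# Linear correlation coefficient (CC), a standard saliency-benchmark
# metric: Pearson correlation between two flattened saliency maps.
def correlation_coefficient(pred, truth):
    """Pearson correlation between two equal-length value lists."""
    n = len(pred)
    mp, mt = sum(pred) / n, sum(truth) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, truth))
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    st = sum((t - mt) ** 2 for t in truth) ** 0.5
    return cov / (sp * st)

# illustrative 2x2 maps, flattened row by row
cc = correlation_coefficient([0.1, 0.9, 0.5, 0.2], [0.2, 0.8, 0.6, 0.1])
print(cc)  # close to 1.0: the two maps agree on where salience is
```

CC of 1.0 means the maps agree perfectly up to an affine rescaling, 0 means no linear relationship, which is why it is usually reported alongside rank- and distribution-based metrics.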


2021 ◽  
Author(s):  
Jasmin L. Walter ◽  
Lucas Essmann ◽  
Sabine U. König ◽  
Peter König

Vision provides the most important sensory information for spatial navigation. Recent technical advances allow more naturalistic experiments in virtual reality (VR) while additionally gathering data on viewing behavior through eye tracking. Here, we propose a method that quantifies characteristics of visual behavior by using graph-theoretical measures to abstract eye tracking data recorded in a 3D virtual urban environment. The analysis is based on eye tracking data from 20 participants, who freely explored the virtual city Seahaven for 90 minutes with an immersive VR headset equipped with a built-in eye tracker. To extract what participants looked at, we defined "gaze" events, from which we created gaze graphs. To these we applied graph-theoretical measures to reveal the underlying structure of visual attention. Applying graph partitioning, we found that our virtual environment could be treated as one coherent city. To investigate the importance of houses in the city, we applied the node degree centrality measure. Our results revealed 10 houses whose node degree consistently exceeded the mean node degree of all other houses by more than two sigma. The importance of these houses was supported by the hierarchy index, which showed a clear hierarchical structure in the gaze graphs. As these high node degree houses fulfilled several characteristics of landmarks, we named them "gaze-graph-defined landmarks". Applying the rich club coefficient, we found that these gaze-graph-defined landmarks were preferentially connected to each other and that participants spent the majority of the experiment time in areas where at least two of these houses were visible. Our findings not only provide new experimental evidence for the development of spatial knowledge, but also establish a new methodology to identify and assess the function of landmarks in spatial navigation based on eye tracking data.
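The node-degree criterion described above can be sketched as follows: build a gaze graph from co-viewing edges, compute each house's degree, and flag houses whose degree exceeds the mean of the remaining houses by more than two standard deviations. The tiny edge list is invented for illustration; the study's graphs were far larger.

```python
# Flag "landmark" nodes in a gaze graph: nodes whose degree exceeds
# the mean degree of all other nodes by two sigma. Edges illustrative.
from statistics import mean, stdev

edges = [("h1", "h2"), ("h1", "h3"), ("h1", "h4"), ("h1", "h5"),
         ("h2", "h3"), ("h4", "h5")]

def degrees(edge_list):
    """Node -> degree for an undirected edge list."""
    deg = {}
    for a, b in edge_list:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def landmark_candidates(edge_list):
    """Nodes whose degree exceeds mean + 2*sigma of the other nodes."""
    deg = degrees(edge_list)
    out = []
    for node, d in deg.items():
        others = [v for n, v in deg.items() if n != node]
        if d > mean(others) + 2 * stdev(others):
            out.append(node)
    return out

print(landmark_candidates(edges))  # h1 is looked at alongside every other house
```

In the study this threshold was applied consistently across sessions, so a house had to stand out repeatedly, not just once, to count as a gaze-graph-defined landmark.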


Author(s):  
Karim Fayed ◽  
Birgit Franken ◽  
Kay Berkling

The iRead EU Project has released literacy games in Spanish, German, Greek, and English for L1 and L2 acquisition. To understand the impact of these games on the reading skills of L1 German pupils, the authors recorded pupils' reading with an eye tracker on a weekly basis as part of an after-school reading club. This work first seeks to understand how to interpret the eye-tracker data for such a study. Five pupils participated in the project and read short texts over the course of five weeks. The resulting data set was extensive enough for a preliminary analysis of how eye-tracking data can provide information on skill acquisition, looking at pupils' reading accuracy and speed. Given our set-up, we can show that the eye tracker is accurate enough to measure relative reading speed between long and short vowels for selected two-syllable words. As a result, eye-tracking data can visualize three different types of beginning readers: memorizers, pattern learners, and those with reading problems.


2018 ◽  
Vol 7 (4.44) ◽  
pp. 137
Author(s):  
Hanafi ◽  
Nanna Suryana ◽  
Abd Samad Hasan Basari

Online shopping requires a system that serves product information to customers and prospective buyers. In e-commerce, the component that serves such relevant information is commonly called a recommender system. Applied successfully, it increases the achievement of marketing targets. The information a recommender system serves must be specific, personalized, relevant, and fit the customer's profile. There are four kinds of recommender system models, but one model has proven especially successful in the e-commerce industry: collaborative filtering. Collaborative filtering needs a record of users' or customers' past activity to generate recommendations, for example rating records, purchase records, or product testimonials. The majority of collaborative filtering approaches rely on ratings as the fundamental input for computing product recommendations. However, only a small fraction of consumers are willing to rate products — less than one percent, according to several well-known datasets such as MovieLens. The resulting sparsity of product ratings degrades the accuracy of product recommendations; in extreme conditions, it can be impossible to generate recommendations at all. Several efforts have been made to handle sparse product ratings, including the matrix factorization family (SVD, NMF, SVD++), but they fail to generate accurate recommendations in the face of extremely sparse data. This research aims to develop a model that handles sparse user ratings using a deep stacked denoising autoencoder (SDAE). To produce better output under such sparsity, our strategy is to impute missing values with a statistical method, so that the input to the SDAE is closer to a matrix that is not too sparse.
In experiments with deep learning, TensorFlow, the MovieLens datasets, and evaluation by root mean square error (RMSE), our approach of reducing missing values in the input addressed sparse user ratings and increased robustness over several existing approaches.
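The pre-imputation step described above can be sketched as follows: fill each missing rating with its item's mean so the autoencoder's input matrix is less sparse, and score reconstructions with RMSE over observed entries. This is a minimal illustration of the imputation and the metric only (the SDAE itself is omitted); the tiny rating matrix is invented, with None marking a missing rating.

```python
# Statistical pre-imputation for a sparse rating matrix, plus the
# RMSE metric used for evaluation. The matrix below is illustrative;
# None marks a missing rating.
from math import sqrt

def impute_item_means(matrix):
    """Replace None entries with the column (item) mean."""
    n_items = len(matrix[0])
    means = []
    for j in range(n_items):
        col = [row[j] for row in matrix if row[j] is not None]
        means.append(sum(col) / len(col) if col else 0.0)
    return [[means[j] if row[j] is None else row[j]
             for j in range(n_items)] for row in matrix]

def rmse(pred, truth):
    """Root mean square error over observed (non-None) entries only."""
    errs = [(p - t) ** 2
            for prow, trow in zip(pred, truth)
            for p, t in zip(prow, trow) if t is not None]
    return sqrt(sum(errs) / len(errs))

ratings = [[5, None, 1], [4, 2, None], [None, 3, 1]]
dense = impute_item_means(ratings)
print(dense)          # missing cells now hold item means
print(rmse(dense, ratings))
```

In the full pipeline, the densified matrix would be fed to the SDAE, and RMSE would be computed between the autoencoder's reconstructed ratings and held-out observed ratings.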

