Multivariate mapping for experienced users: comparing extrinsic and intrinsic maps with univariate maps

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Jolanta Korycka-Skorupa ◽  
Izabela Gołębiowska

Abstract: Multivariate mapping is a technique in which multivariate data are encoded into a single map. Design solutions for multivariate mapping vary in the number of phenomena mapped, the map type, and the visual variables applied. Unlike other authors, who have mainly evaluated bivariate maps, in our empirical study we compared three solutions for mapping four variables: two types of multivariate maps (intrinsic and extrinsic) and a simple univariate alternative (serving as a baseline). We analysed usability performance metrics (answer time, answer accuracy, subjective rating of task difficulty) and eye-tracking data. The results suggested that experts used all the tested maps with similar answer times and accuracy, even when using four-variable intrinsic maps, which are considered a challenging solution. However, eye-tracking data provided more nuance regarding the differences in cognitive effort evoked by the tested maps across task types.

2020 ◽  
Author(s):  
Zezhong Lv ◽  
Qing Xu ◽  
Klaus Schoeffmann ◽  
Simon Parkinson

Abstract: Visual scanning plays an important role in sampling visual information from the surrounding environment during many everyday sensorimotor tasks, such as walking and car driving. In this paper, we consider the visual scanning mechanisms underpinning sensorimotor tasks in 3D dynamic environments. We exploit eye-tracking data as a behaviometric indicating visuo-motor behavioral measures in the context of virtual driving. A new metric of visual scanning efficiency (VSE), defined as a mathematical divergence between a fixation distribution and a distribution of optical flows induced by fixations, is proposed using a widely known information-theoretic tool, the square root of the Jensen-Shannon divergence. Based on the proposed efficiency metric, a cognitive effort measure (CEM) is developed using the concept of quantity of information. Psychophysical eye-tracking studies in virtual-reality-based driving are conducted to show that the new metric of visual scanning efficiency can serve well as a proxy evaluation of driving performance. In addition, the effectiveness of the proposed cognitive effort measure is demonstrated by a strong correlation between this measure and pupil size change. These results suggest that eye-tracking data provide an effective behaviometric for sensorimotor activity.
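
The information-theoretic core of the VSE metric, the square root of the Jensen-Shannon divergence between two distributions, can be sketched directly in NumPy. The histograms below are invented stand-ins for the fixation and optical-flow distributions (the paper's actual binning and data are not reproduced), and `js_distance` is our own illustrative helper, not the authors' code.

```python
import numpy as np

def js_distance(p, q):
    """Square root of the Jensen-Shannon divergence (base-2 logs),
    so the result lies in [0, 1]."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Hypothetical histograms over the same spatial bins: fixation locations
# vs. optical flows induced by fixations (values are illustrative only).
fixation_hist = [0.50, 0.30, 0.15, 0.05]
flow_hist = [0.25, 0.25, 0.25, 0.25]
vse = js_distance(fixation_hist, flow_hist)
```

Because the divergence is symmetric and bounded, the resulting efficiency score is directly comparable across drivers and trials.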


Geografie ◽  
2019 ◽  
Vol 124 (2) ◽  
pp. 163-185 ◽  
Author(s):  
Jan Brus ◽  
Michal Kučera ◽  
Stanislav Popelka

The understanding of uncertainty, or the difference between a real geographic phenomenon and the user’s understanding of that phenomenon, is essential for those who work with spatial data. From this perspective, map symbols can be used as a tool for providing information about the level of uncertainty. Nevertheless, communicating uncertainty to the user in this way can be a challenging task. The main aim of the paper is to propose intuitive symbols to represent uncertainty. This goal is achieved by user testing of specially compiled point symbol sets. Emphasis is given to the intuitiveness and easy interpretation of the proposed symbols. The symbols are part of a user-centered eye-tracking experiment designed to evaluate the suitability of the proposed solutions. Eye-tracking data are analyzed to determine the subjects’ performance in reading the map symbols. The analyses include the evaluation of observed parameters, user preferences, and cognitive metrics. Based on these, the most appropriate methods for designing point symbols are recommended and discussed.


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1949 ◽
Author(s):  
Xiang Li ◽  
Rabih Younes ◽  
Diana Bairaktarova ◽  
Qi Guo

The difficulty level of learning tasks is a concern that often needs to be considered in the teaching process. Teachers usually adjust the difficulty of exercises dynamically according to the prior knowledge and abilities of students to achieve better teaching results. In e-learning, because there is no teacher involvement, it often happens that the difficulty of the tasks is beyond the ability of the students. In attempts to solve this problem, several researchers have investigated the problem-solving process using eye-tracking data. However, although most e-learning exercises take the form of fill-in-the-blank and multiple-choice questions, previous research focused on building cognitive models from eye-tracking data collected on flexible problem forms, which may lead to impractical results. In this paper, we build models to predict the difficulty level of spatial visualization problems from eye-tracking data collected on multiple-choice questions. We use eye tracking and machine learning to investigate (1) the differences in eye movement among questions of different difficulty levels and (2) the possibility of predicting the difficulty level of problems from eye-tracking data. Our models achieved an average accuracy of 87.60% on eye-tracking data from questions the classifier had seen before and an average of 72.87% on questions it had not yet seen. The results confirm that eye movement, especially fixation duration, contains essential information about the difficulty of the questions and is sufficient for building machine-learning-based models to predict difficulty level.
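
As a rough illustration of the second research question (predicting difficulty from eye-tracking features), the sketch below fits a minimal nearest-centroid classifier on synthetic fixation features. The feature values, class separation, and model are all invented for illustration; the paper's actual features, classifiers, and 87.60%/72.87% figures are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-question features: [mean fixation duration (ms),
# fixation count]. Values are invented; they do not come from the paper.
easy = rng.normal(loc=[220.0, 30.0], scale=[20.0, 5.0], size=(40, 2))
hard = rng.normal(loc=[340.0, 55.0], scale=[25.0, 8.0], size=(40, 2))
X = np.vstack([easy, hard])
y = np.array([0] * 40 + [1] * 40)  # 0 = easy, 1 = hard

# Nearest-centroid classification: label each question with the class
# whose mean feature vector is closest in Euclidean distance.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = distances.argmin(axis=1)
accuracy = float((pred == y).mean())
```

Note that this reports resubstitution accuracy on the training questions themselves; a seen/unseen split like the one the abstract describes would require held-out questions.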


2019 ◽  
Vol 10 (3) ◽  
pp. 2127-2131
Author(s):  
Akshay S ◽  
Ashika P ◽  
Aswathy Ramesh

Eye-tracking is an emerging area of science with a wide range of computer-vision-based applications. Eye-tracking mainly deals with where a person is looking and for what duration. In this work, we propose an R-based interface to visualize eye-tracking data, depicting where the person is looking (fixations and saccades) and for how long (fixation duration). Through the eye-tracking metrics visualized in our work, one can see the differences in viewing behaviour between participants. The differences thus depicted can later be studied in order to understand the cognitive abilities of the participants. The paper contains a detailed survey of the existing literature and experimental results generated using the R interface.
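
The fixations and saccades such an interface visualizes must first be extracted from raw gaze samples. The paper builds its interface in R; the Python sketch below only illustrates one classic approach, a dispersion-threshold (I-DT style) grouping, with invented thresholds, gaze coordinates, and helper names.

```python
def _centroid(pts):
    """Mean position of a group of gaze points, plus the sample count."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n, n)

def detect_fixations(samples, max_dispersion=25.0, min_samples=3):
    """samples: list of (x, y) gaze points; returns (cx, cy, n) fixations.
    Saccades are the jumps between consecutive fixations."""
    fixations, window = [], []
    for pt in samples:
        window.append(pt)
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        # Dispersion = horizontal spread + vertical spread of the window.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            if len(window) - 1 >= min_samples:
                fixations.append(_centroid(window[:-1]))
            window = [pt]  # the outlier starts a new candidate window
    if len(window) >= min_samples:
        fixations.append(_centroid(window))
    return fixations

gaze = [(100, 100), (102, 101), (99, 103), (101, 100),   # cluster 1
        (300, 300), (302, 298), (301, 301), (299, 300)]  # cluster 2
fixations = detect_fixations(gaze)
```

With real recordings, sample timestamps would additionally give each fixation its duration, the quantity the abstract highlights.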


Field Methods ◽  
2017 ◽  
Vol 29 (4) ◽  
pp. 383-394 ◽  
Author(s):  
Cornelia E. Neuert

Previous research has shown that check-all-that-apply (CATA) and forced-choice (FC) question formats do not produce comparable results. The cognitive processes underlying respondents’ answers to both formats still require clarification. This study contributes to filling this gap by using eye-tracking data. The two formats are compared by analyzing attention processes and the cognitive effort respondents expend while answering one factual and one opinion question. No difference in cognitive effort was found for the factual question, whereas for the opinion question respondents invested more cognitive effort in the FC than in the CATA condition. The findings indicate that higher endorsement in FC questions cannot be explained by question format alone. Other possible causes are discussed.


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing. Eye-tracking data have previously been widely explored to investigate the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools applied. Although various computational models have been proposed for simulating contextual word predictions, past studies usually relied on a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a large natural and coherent discourse as the stimulus for collecting reading-time data. It trains two state-of-the-art computational models (surprisal and semantic (dis)similarity from word vectors by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and paradigmatic structure of language. We develop a `dynamic approach' to computing semantic (dis)similarity; it is the first time these two computational models have been merged. Models are evaluated using advanced statistical methods. Meanwhile, to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is used for comparison with our `dynamic' approach. The two computational models and fixed-effect statistical models can be used to cross-verify the findings, ensuring that the results are reliable.
All results support that surprisal and semantic similarity make opposing contributions to predicting the reading time of words, although both predict well. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore significant for acquiring a better understanding of how humans process words in real-world contexts and how they make predictions in language cognition and processing.
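
For context, the "popular cosine method" mentioned above scores semantic (dis)similarity as the cosine of the angle between word vectors. The sketch below uses tiny invented vectors; real studies use embeddings trained on large corpora, and neither the vectors nor the word choices here come from this paper.

```python
import numpy as np

# Toy 3-dimensional word vectors, invented purely for illustration.
vectors = {
    "coffee": np.array([0.9, 0.1, 0.3]),
    "tea": np.array([0.8, 0.2, 0.4]),
    "granite": np.array([0.1, 0.9, 0.0]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = cosine_similarity(vectors["coffee"], vectors["tea"])
sim_unrelated = cosine_similarity(vectors["coffee"], vectors["granite"])
```

Related words should score higher than unrelated ones; a "dynamic" approach like the one proposed would instead update the similarity measure as the discourse context unfolds.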


2015 ◽  
Vol 23 (9) ◽  
pp. 1508
Author(s):  
Qiandong WANG ◽  
Qinggong LI ◽  
Kaikai CHEN ◽  
Genyue FU

2019 ◽  
Vol 19 (2) ◽  
pp. 345-369 ◽  
Author(s):  
Constantina Ioannou ◽  
Indira Nurdiani ◽  
Andrea Burattin ◽  
Barbara Weber
