Effect of Adaptive Guidance and Visualization Literacy on Gaze Attentive Behaviors and Sequential Patterns on Magazine-Style Narrative Visualizations

2021 ◽  
Vol 11 (3-4) ◽  
pp. 1-46
Author(s):  
Oswald Barral ◽  
Sébastien Lallé ◽  
Alireza Iranpour ◽  
Cristina Conati

We study the effectiveness of adaptive interventions at helping users process textual documents with embedded visualizations, a form of multimodal documents known as Magazine-Style Narrative Visualizations (MSNVs). The interventions are meant to dynamically highlight in the visualization the datapoints that are described in the textual sentence currently being read by the user, as captured by eye-tracking. These interventions were previously evaluated in two user studies that involved 98 participants reading excerpts of real-world MSNVs during a 1-hour session. Participants’ outcomes included their subjective feedback about the guidance, as well as their reading time and score on a set of comprehension questions. Results showed that the interventions can increase comprehension of the MSNV excerpts for users with lower levels of a cognitive skill known as visualization literacy. In this article, we aim to further investigate this result by leveraging eye-tracking to analyze in depth how the participants processed the interventions depending on their levels of visualization literacy. We first analyzed summative gaze metrics that capture how users process and integrate the key components of the narrative visualizations. Second, we mined the salient patterns in the users’ scanpaths to contextualize how users sequentially process these components. Results indicate that the interventions succeed in guiding attention to salient components of the narrative visualizations, especially by generating more transitions between key components of the visualization (i.e., datapoints, labels, and legend), as well as between the two modalities (text and visualization). We also show that the interventions help users with lower levels of visualization literacy to better map datapoints to the legend, which likely contributed to their improved comprehension of the documents. 
These findings shed light on how adaptive interventions help users with different levels of visualization literacy, informing the design of personalized narrative visualizations.
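The transition metric this abstract describes, counting how often gaze moves between key components, can be sketched as a simple count over an AOI-labeled fixation sequence. This is a minimal illustration, not the authors' analysis pipeline; the AOI names and the example scanpath are hypothetical:

```python
from collections import Counter

def aoi_transitions(fixation_aois):
    """Count transitions between consecutive, distinct AOIs in a scanpath.

    fixation_aois: sequence of AOI labels (e.g. "text", "datapoint",
    "label", "legend"), one per fixation, in temporal order.
    Returns a Counter mapping (from_aoi, to_aoi) pairs to counts.
    """
    transitions = Counter()
    for prev, curr in zip(fixation_aois, fixation_aois[1:]):
        if prev != curr:  # ignore refixations within the same AOI
            transitions[(prev, curr)] += 1
    return transitions

# Hypothetical scanpath: the reader moves from the text to a highlighted
# datapoint, checks the legend, and returns to the text.
scanpath = ["text", "text", "datapoint", "legend", "datapoint", "text"]
counts = aoi_transitions(scanpath)
```

More text-to-visualization and datapoint-to-legend transitions under the adaptive interventions would show up directly as larger counts for those pairs.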

2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of the upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models for contextual word predictions and word processing. Eye-tracking data has previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word predictions, past studies usually relied on a single computational model. The disadvantage of this is that it often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws on a large, natural, and coherent discourse as stimuli in collecting reading-time data. This study trains two state-of-the-art computational models (surprisal and semantic (dis)similarity from word vectors by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and paradigmatic structure of language. We develop a `dynamic approach' to compute semantic (dis)similarity. It is the first time that these two computational models have been merged. Models are evaluated using advanced statistical methods. Meanwhile, in order to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is used as a point of comparison with our `dynamic' approach. The two computational and fixed-effect statistical models can be used to cross-verify the findings, thus ensuring that the result is reliable. 
All results support the finding that surprisal and semantic similarity act in opposition in predicting the reading times of words, although both make good predictions. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore of significance for acquiring a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
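The two baseline measures the study builds on have simple standard definitions that can be sketched as follows. This is a minimal illustration with hypothetical probabilities and vectors; it does not reproduce the paper's LDL-based semantic measure or its `dynamic' variant:

```python
import math

def surprisal(prob):
    """Surprisal in bits: -log2 P(word | context).

    A word assigned low probability by the context is more
    surprising, and is typically read more slowly.
    """
    return -math.log2(prob)

def cosine_dissimilarity(u, v):
    """1 - cosine similarity between two word vectors.

    Dissimilar vectors (near-orthogonal) give values near 1;
    near-identical vectors give values near 0.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical values: a word with P = 0.25 given its context
# carries 2 bits of surprisal.
bits = surprisal(0.25)  # → 2.0
```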


Author(s):  
Wenqiang Chen ◽  
Lin Chen ◽  
Meiyi Ma ◽  
Farshid Salemi Parizi ◽  
Shwetak Patel ◽  
...  

Wearable devices, such as smartwatches and head-mounted devices (HMD), demand new input methods that offer a natural, subtle, and easy-to-use way to enter commands and text. In this paper, we propose and investigate ViFin, a new technique for command input and text entry, which harnesses finger-movement-induced vibration to track continuous micro finger-level writing with a commodity smartwatch. Inspired by the recurrent neural aligner and transfer learning, ViFin recognizes continuous finger writing, works across different users, and achieves an accuracy of 90% and 91% for recognizing numbers and letters, respectively. We quantify our approach's accuracy through real-time system experiments across different arm positions, writing speeds, and smartwatch position displacements. Finally, a real-time writing system and two user studies on real-world tasks are implemented and assessed.


2014 ◽  
Vol 2 (3) ◽  
pp. 343-359 ◽  
Author(s):  
O. Kaminska ◽  
T. Foulsham

2020 ◽  
Author(s):  
Anna Kosovicheva ◽  
Abla Alaoui-Soce ◽  
Jeremy Wolfe

Many real-world visual tasks involve searching for multiple instances of a target (e.g., picking ripe berries). What strategies do observers use when collecting items in this type of search? Do they wait to finish collecting the current item before starting to look for the next target, or do they search ahead for future targets? We utilized behavioral and eye tracking measures to distinguish between these two possibilities in foraging search. Experiment 1 used a color wheel technique in which observers searched for T shapes among L shapes while all items independently cycled through a set of colors. Trials were abruptly terminated, and observers reported both the color and location of the next target that they intended to click. Using observers’ color reports to infer target-finding times, we demonstrate that observers found the next item before the time of the click on the current target. We validated these results in Experiment 2 by recording fixation locations around the time of each click. Experiment 3 utilized a different procedure, in which all items were intermittently occluded during the trial. We then calculated a distribution of when targets were visible around the time of each click, allowing us to infer when they were most likely found. In a fourth and final experiment, observers indicated the locations of multiple future targets after the search was abruptly terminated. Together, our results provide converging evidence to demonstrate that observers can find the next target before collecting the current target and can typically forage 1-2 items ahead.


2019 ◽  
Author(s):  
Gwendolyn L Rehrig ◽  
Candace Elise Peacock ◽  
Taylor Hayes ◽  
Fernanda Ferreira ◽  
John M. Henderson

The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows scene meaning predicts attention better than image salience. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions objects in the scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In three eye-tracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task-relevance. In two experiments using stimuli from a previous study, meaning explained visual attention better than graspability or salience did, and graspability explained attention better than salience. In a third experiment we quantified image salience, meaning, graspability, and reach-weighted graspability for scenes that depicted reachable spaces containing graspable objects. Graspability and meaning explained attention equally well in the third experiment, and both explained attention better than salience. We conclude that speakers use object graspability to allocate attention to plan descriptions when scenes depict graspable objects within reach, and otherwise rely more on general meaning. The results shed light on what aspects of meaning guide attention during scene viewing in language production tasks.


Author(s):  
Helene Gelderblom ◽  
Funmi Adebesin ◽  
Jacques Brosens ◽  
Rendani Kruger

In this article the authors describe how they incorporate eye tracking in a human-computer interaction (HCI) course that forms part of a postgraduate Informatics degree. The focus is on an eye tracking assignment that involves student groups performing usability evaluation studies for real-world clients. Over the past three years the authors have observed how this experience positively affected students' attitude towards usability and user experience (UX) evaluation. They therefore believe that eye tracking is a powerful tool to convince students of the importance of user-centered design. To investigate the soundness of their informal observations, the authors conducted a survey amongst 2016 HCI students and analysed student course evaluation results from 2014 to 2016. The findings confirm that students regard the eye tracking assignment as a mind-altering experience and that it is potentially an effective tool for convincing future IT professionals of the importance of usability, UX, and user-centered design.


2018 ◽  
Vol 25 (5) ◽  
pp. 819-826 ◽  
Author(s):  
Yoon Koh

Developing a thick portfolio of multiple brands across different levels of services is unique to the lodging industry. Therefore, consideration of brand diversification necessitates consideration of segment diversification in lodging portfolio development. Although various diversification strategies have been investigated in relation to a firm’s performance, segment diversification has received insufficient attention. This article aims to shed light on that gap. This article finds evidence that brand diversification increases lodging firm value more significantly when segments are diversified at the same time. When a company diversifies brands within a focused lodging segment, the increase in firm value is insignificant.


2020 ◽  
Vol 110 (4) ◽  
pp. 1206-1230 ◽  
Author(s):  
Abhijit V. Banerjee ◽  
Sylvain Chassang ◽  
Sergio Montero ◽  
Erik Snowberg

This paper studies the problem of experiment design by an ambiguity-averse decision-maker who trades off subjective expected performance against robust performance guarantees. This framework accounts for real-world experimenters’ preference for randomization. It also clarifies the circumstances in which randomization is optimal: when the available sample size is large and robustness is an important concern. We apply our model to shed light on the practice of rerandomization, used to improve balance across treatment and control groups. We show that rerandomization creates a trade-off between subjective performance and robust performance guarantees. However, robust performance guarantees diminish very slowly with the number of rerandomizations. This suggests that moderate levels of rerandomization usefully expand the set of acceptable compromises between subjective performance and robustness. Targeting a fixed quantile of balance is safer than targeting an absolute balance objective. (JEL C90, D81)
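The mechanical core of rerandomization, redrawing random assignments until covariate balance meets a criterion, can be sketched as follows. This is a minimal single-covariate illustration with a hypothetical balance threshold; it does not model the paper's decision-theoretic framework or its quantile-targeting result:

```python
import random

def covariate_imbalance(treat, control):
    """Absolute difference in covariate means between the two groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return abs(mean(treat) - mean(control))

def rerandomize(covariates, max_draws=1000, threshold=0.1, seed=0):
    """Draw random 50/50 assignments until imbalance falls below
    the threshold or the draw budget runs out.

    Returns (assignment, imbalance) for the best draw found, where
    assignment is a list of 0/1 treatment indicators.
    """
    rng = random.Random(seed)
    n = len(covariates)
    best = None
    for _ in range(max_draws):
        assignment = [1] * (n // 2) + [0] * (n - n // 2)
        rng.shuffle(assignment)
        treat = [x for x, a in zip(covariates, assignment) if a == 1]
        control = [x for x, a in zip(covariates, assignment) if a == 0]
        imb = covariate_imbalance(treat, control)
        if best is None or imb < best[1]:
            best = (assignment, imb)
        if imb < threshold:  # balance criterion met: stop redrawing
            break
    return best

# Hypothetical covariate: one value per experimental unit.
assignment, imbalance = rerandomize([float(x) for x in range(20)])
```

The trade-off the paper analyzes appears here as the choice of `threshold` and `max_draws`: tighter balance improves expected (subjective) performance of the design, while each additional rerandomization slightly erodes the robustness guarantees that pure randomization provides.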


2014 ◽  
Vol 23 (1) ◽  
pp. 51-70 ◽  
Author(s):  
Andreas Riener ◽  
Pierre Chalfoun ◽  
Claude Frasson

In the long history of subliminal messages and perception, many contradictory results have been presented. One group of researchers suggests that subliminal interaction techniques improve human–computer interaction by reducing sensory workload, whereas others have found that subliminal perception does not work. In this paper, we want to challenge this prejudice by first defining a terminology and introducing a theoretical taxonomy of mental processing states, then reviewing and discussing the potential of subliminal approaches for different sensory channels, and finally recapitulating the findings from our studies on subliminally triggered behavior change. Our objective is to mitigate driving problems caused by excessive information. Therefore, this work focuses on subliminal techniques applied to driver–vehicle interaction to induce a nonconscious change in driver behavior. Based on a survey of related work which identified the potential of subliminal cues in driving, we conducted two user studies assessing their applicability in real-world situations. The first study evaluated whether subtle (subliminal) vibrations could promote economical driving, and the second exposed drivers to very briefly flashed visual stimuli to assess their potential to improve steering behavior. Our results suggest that subliminal approaches are indeed feasible to provide drivers with added driving support without dissipating attention resources. Despite the lack of general evidence for uniform effectiveness of such interfaces in all driving circumstances, we firmly believe that such interfaces are valuable since they may eventually prevent accidents, save lives, and even reduce fuel costs and CO2 emissions for some drivers. For all these reasons, we are confident that subliminally driven interfaces will find their way into cars of the (near) future.

