Emergency, Pictogram-Based Augmented Reality Medical Communicator Prototype Using Precise Eye-Tracking Technology

2019 ◽  
Vol 22 (2) ◽  
pp. 151-157
Author(s):  
Krzysztof Wołk ◽  
Ming Tang

Signs, in all their forms and manifestations, provide visual communication for wayfinding, commerce, and public dialogue and expression. Yet how effectively a sign communicates and ultimately elicits a desired reaction begins with how well it attracts the visual attention of prospective viewers. This is especially the case in complex visual environments, both outside and inside buildings. This paper presents the results of an exploratory research design to assess the use of eye-tracking (ET) technology to explore how placement and context affect the capture of visual attention. Specifically, this research explores the use of ET hardware and software in real-world contexts to analyze how visual attention is affected by location and proximity to geometric edges, as well as by elements of contrast, intensity against context, and facial features. Researchers also used data visualization and interpretation tools in augmented reality environments to anticipate human responses to alternative placement and design. Results show that ET methods, supported by screen-based and wearable eye-tracking technologies, can produce results that are consistent with previous research on signage performance using static images in terms of cognitive load and legibility, and that ET technologies offer an advanced dynamic tool for the design and placement of signage.
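The "intensity against context" factor mentioned above can be made concrete with a toy metric. The sketch below is illustrative only, not the authors' method: it scores a candidate sign placement by how far the region's mean luminance departs from a surrounding context band. All names and values are hypothetical.

```python
import numpy as np

def contrast_salience(image, region, context_pad=20):
    """Score how strongly a sign region stands out from its surroundings.

    image:  2D array of luminance values (0-255).
    region: (row, col, height, width) of the candidate sign placement.
    Returns the absolute difference between the region's mean luminance
    and that of a surrounding context band, normalized to [0, 1].
    """
    r, c, h, w = region
    sign = image[r:r + h, c:c + w].astype(float)
    # Context band: a padded window around the sign, with the sign cut out.
    r0, c0 = max(r - context_pad, 0), max(c - context_pad, 0)
    window = image[r0:r + h + context_pad, c0:c + w + context_pad].astype(float)
    context_mean = (window.sum() - sign.sum()) / (window.size - sign.size)
    return abs(sign.mean() - context_mean) / 255.0

# A bright sign on a dark wall scores higher than a near-matched one.
wall = np.full((200, 200), 40.0)
wall[80:120, 80:160] = 220.0            # high-contrast placement
high = contrast_salience(wall, (80, 80, 40, 80))
wall[80:120, 80:160] = 50.0             # low-contrast placement
low = contrast_salience(wall, (80, 80, 40, 80))
```

The design choice of comparing against a local band rather than the whole image mirrors the paper's emphasis on context: the same sign can be salient on one wall and invisible on another.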


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research in, for example, the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye-tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is at rest, which is on par with state-of-the-art mobile eye trackers.
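The reported accuracy and precision figures can be computed from raw gaze samples under common definitions: accuracy as the mean angular offset between gaze rays and the ray to the fixation target, and precision as the RMS of sample-to-sample angular differences. The sketch below illustrates those definitions only; it is not the toolkit's code, and the function names are hypothetical.

```python
import numpy as np

def angular_error_deg(a, b):
    """Angle in degrees between two 3D direction vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def accuracy_precision(gaze_dirs, target_dir):
    """gaze_dirs: (n, 3) gaze rays; target_dir: (3,) ray to the target.

    Accuracy:  mean angular offset from the target ray.
    Precision: RMS of angular differences between successive samples.
    """
    offsets = np.array([angular_error_deg(g, target_dir) for g in gaze_dirs])
    steps = np.array([angular_error_deg(gaze_dirs[i], gaze_dirs[i + 1])
                      for i in range(len(gaze_dirs) - 1)])
    return offsets.mean(), np.sqrt(np.mean(steps ** 2))

# Simulated fixation: five identical samples offset 1 degree from the target.
target = np.array([0.0, 0.0, 1.0])
gaze = np.tile([np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))], (5, 1))
acc, prec = accuracy_precision(gaze, target)
```

With real HoloLens 2 data the gaze rays would come from the device's per-frame gaze origin/direction samples; the metric definitions are independent of the source.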


2021 ◽  
Vol 15 ◽  
pp. 183449092110004
Author(s):  
Jing Yu ◽  
Xue-Rui Peng ◽  
Ming Yan

People employ automatic inferential processing when confronting pragmatically implied claims in advertising. However, whether comprehension and memorization of pragmatic implications differ between young and older adults is unclear. In the present study, we used eye-tracking technology to investigate online cognitive processes during the reading of misleading advertisements. We found an interaction between age and advertising content: our older participants showed higher misleading rates for health-related than for health-irrelevant products, whereas this content bias did not appear in their younger counterparts. Eye-movement data further showed that the older adults spent more time processing critical claims for the health-related products than for the health-irrelevant products. Moreover, the correlations between fixation duration on pragmatic implications and misleading rates showed opposite trends in the two groups. This eye-tracking evidence provides novel support for the view that young and older adults may adopt different information-processing strategies to comprehend pragmatic implications in advertising: more reading possibly enhances young adults' gist memory, whereas it facilitates older adults' verbatim memory instead.
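The opposite-sign correlations described above can be illustrated with a toy computation. The per-participant numbers below are invented purely for illustration (they are not the study's data); the sketch only shows the Pearson correlation that would distinguish the two patterns.

```python
import numpy as np

# Hypothetical per-participant data: total fixation duration on the critical
# claim (ms) and misleading rate (proportion of implied claims accepted).
young_fix = np.array([820, 900, 1010, 1150, 1300, 1420])
young_mis = np.array([0.55, 0.50, 0.44, 0.40, 0.33, 0.30])  # longer reading, fewer errors
older_fix = np.array([900, 1000, 1120, 1260, 1380, 1500])
older_mis = np.array([0.40, 0.44, 0.49, 0.55, 0.60, 0.64])  # longer reading, more errors

def pearson_r(x, y):
    """Pearson correlation coefficient via normalized covariance."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

r_young = pearson_r(young_fix, young_mis)  # negative trend: gist-based comprehension
r_older = pearson_r(older_fix, older_mis)  # positive trend: verbatim encoding of the claim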


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this attended target is real or virtual by applying machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
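The multimodal late-fusion step can be sketched as a weighted average of the two unimodal classifiers' class probabilities. This is one simple late-fusion rule, assumed here for illustration; the paper's actual fusion method, weights, and probabilities are not reproduced.

```python
import numpy as np

def late_fusion(p_eeg, p_gaze, w_eeg=0.6):
    """Combine per-class probabilities from two unimodal classifiers by a
    weighted average, then renormalize (one simple late-fusion rule)."""
    p = w_eeg * np.asarray(p_eeg) + (1.0 - w_eeg) * np.asarray(p_gaze)
    return p / p.sum()

# Probabilities over the two classes (real target, virtual target) for one
# 3-second window. Here the EEG net is unsure; gaze features disambiguate.
p_eeg = [0.55, 0.45]
p_gaze = [0.20, 0.80]
fused = late_fusion(p_eeg, p_gaze)
label = ["real", "virtual"][int(np.argmax(fused))]
```

Fusing at the decision level like this keeps the two pipelines independent, so either modality can be dropped at runtime without retraining the other.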


2015 ◽  
Vol 43 (6) ◽  
pp. 561-574 ◽  
Author(s):  
Patricia Huddleston ◽  
Bridget K. Behe ◽  
Stella Minahan ◽  
R. Thomas Fernandez

Purpose – The purpose of this paper is to elucidate the role that visual measures of attention to product, information and price display signage have on purchase intention. The authors assessed the effect of visual attention to the product, information or price sign on purchase intention, as measured by likelihood to buy.
Design/methodology/approach – The authors used eye-tracking technology to collect data from Australian and US garden centre customers, who viewed eight plant displays in which the signs had been altered to show either price or supplemental information (16 images total). The authors compared the role of visual attention to the price and information signs, and the role of visual attention to the product when either sign was present, on likelihood to buy.
Findings – Overall, providing product information on a sign without price elicited higher likelihood to buy than providing a sign with price. The authors found a positive relationship between visual attention to price on the display sign and likelihood to buy, but an inverse relationship between visual attention to information and likelihood to buy.
Research limitations/implications – An understanding of the attention-capturing power of merchandise display elements, especially signs, has practical significance. The findings will assist retailers in creating more effective and efficient display signage content, for example, featuring the product information more prominently than the price. The study was conducted on a minimally packaged product, live plants, which may reduce the ability to generalize findings to other product types.
Practical implications – The findings will assist retailers in creating more effective and efficient display signage content. The study used only one product category (plants), which may reduce the ability to generalize findings to other product types.
Originality/value – The study is one of the first to use eye-tracking in a macro-level, holistic investigation of the attention-capturing value of display signage information and its relationship to likelihood to buy. Researchers, for the first time, now have the ability to empirically test the degree to which attention and decision-making are linked.


Heart Rhythm ◽  
2021 ◽  
Vol 18 (8) ◽  
pp. S356
Author(s):  
Heather Marie Giacone ◽  
Anne M. Dubin ◽  
Scott Ceresnak ◽  
Henry Chubb ◽  
William Rowland Goodyer ◽  
...  

Author(s):  
Sarah D’Angelo ◽  
Bertrand Schneider

The past decade has witnessed growing interest in using dual eye tracking to understand and support remote collaboration, especially through studies that have established the benefits of displaying gaze information for small groups. While this line of work is promising, we lack a consistent framework that researchers can use to organize and categorize studies on the effect of shared gaze on social interactions. There exists a wide variety of terminology and methods for describing attentional alignment, and researchers have used diverse techniques for designing gaze visualizations. The settings studied range from real-time peer collaboration to asynchronous viewing of eye-tracking video of an expert providing explanations. There has not been a conscious effort to synthesize and understand how these different approaches, techniques and applications impact the effectiveness of shared gaze visualizations (SGVs). In this paper, we summarize the related literature and the benefits of SGVs for collaboration, describe important terminology as well as appropriate measures for the dual eye-tracking space, and discuss promising directions for future research. As eye-tracking technology becomes more ubiquitous, there is a pressing need to develop a consistent approach to the evaluation and design of SGVs. The present paper makes a first and significant step in this direction.

