A Novel 3D Interaction Technique Based on the Eye Tracking for Mixed Reality Environments

Author(s):  
Giandomenico Caruso ◽  
Monica Bordegoni

The paper describes a novel 3D interaction technique based on Eye Tracking (ET) for Mixed Reality (MR) environments. We have developed a system that integrates a commercial ET technology with an MR display technology. The system processes the data coming from the ET device in order to obtain the 3D position of the user's gaze. A dedicated calibration procedure has been developed to compute the gaze position correctly for each user. The accuracy and the precision of the system have been assessed by performing several tests with a group of users. In addition, we have compared the 3D gaze position in real, virtual and mixed environments in order to check whether there are differences when the user sees real or virtual objects. The paper also presents an application example by means of which we have assessed the overall usability of the system.
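The abstract does not detail how the 3D gaze position is derived from the eye tracker's output; a common approach is to intersect the left and right gaze rays (vergence). The minimal Python sketch below illustrates that idea only; the function and variable names are hypothetical and not taken from the paper.

```python
import numpy as np

def gaze_point_3d(origin_l, dir_l, origin_r, dir_r):
    """Estimate a 3D gaze point as the midpoint of the shortest segment
    between the left and right gaze rays (a common vergence-based approach;
    the paper's actual computation may differ)."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # nearly parallel rays: no reliable vergence
        return None
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    p_l = origin_l + s * d_l
    p_r = origin_r + t * d_r
    return (p_l + p_r) / 2.0

# Example: eyes 6.4 cm apart, both looking at a point about 50 cm ahead
left  = np.array([-0.032, 0.0, 0.0])
right = np.array([ 0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.5])
print(gaze_point_3d(left, target - left, right, target - right))
```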

Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 39
Author(s):  
Julie Royo ◽  
Fabrice Arcizet ◽  
Patrick Cavanagh ◽  
Pierre Pouget

We introduce a blind spot method to create image changes contingent on eye movements. One challenge of eye movement research is triggering display changes contingent on gaze. The eye-tracking system must capture the image of the eye, detect and track the pupil and corneal reflections to estimate the gaze position, and then transfer these data to the computer that updates the display. All of these steps introduce delays that are often difficult to predict. To avoid these issues, we describe a simple blind spot method to generate gaze-contingent display manipulations without any eye-tracking system or display controls.
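For readers unfamiliar with the geometry involved, the sketch below estimates where the monocular blind spot falls on a screen relative to a fixation point, i.e. the region where a display change can be made without the viewer noticing. The eccentricity values and the function are illustrative assumptions; the actual procedure maps the blind spot per participant.

```python
import math

def blind_spot_px(fix_x, fix_y, eye, view_dist_cm, px_per_cm,
                  ecc_deg=15.5, below_deg=1.5):
    """Rough pixel location of the blind spot relative to a fixation point.

    The blind spot of each eye lies roughly 15 degrees temporal to fixation
    and slightly below the horizontal meridian (values vary per observer;
    a per-participant mapping, not done here, is needed in practice)."""
    dx_cm = math.tan(math.radians(ecc_deg)) * view_dist_cm
    dy_cm = math.tan(math.radians(below_deg)) * view_dist_cm
    sign = 1 if eye == "right" else -1   # temporal side: right of fixation for the right eye
    return (fix_x + sign * dx_cm * px_per_cm,
            fix_y + dy_cm * px_per_cm)    # screen y grows downward

# Example: fixation at screen centre, right eye viewing (left eye covered),
# 57 cm viewing distance, 38 px per cm
print(blind_spot_px(960, 540, "right", 57.0, 38.0))
```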


Author(s):  
Piercarlo Dondi ◽  
Marco Porta ◽  
Angelo Donvito ◽  
Giovanni Volpe

Interactive and immersive technologies can significantly enhance the experience of museums and exhibits. Several studies have shown that multimedia installations can attract visitors, presenting cultural and scientific information in an appealing way. In this article, we present our workflow for achieving gaze-based interaction with artwork imagery. We designed both a tool for creating interactive “gaze-aware” images and an eye-tracking application conceived to interact with those images through gaze. Users can display different pictures, perform pan and zoom operations, and search for regions of interest with associated multimedia content (text, image, audio, or video). Besides being an assistive technology for motor-impaired people (like most gaze-based interaction applications), our solution can also be a valid alternative to the touch screen panels commonly found in museums, in accordance with the new safety guidelines imposed by the COVID-19 pandemic. Experiments carried out with a panel of volunteer testers have shown that the tool is usable, effective, and easy to learn.
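The abstract does not specify how a gazed region triggers its multimedia content; a widespread pattern in gaze-based interfaces is dwell-time selection. The sketch below shows that generic pattern with hypothetical names and is not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Rectangular region of interest in image coordinates, with attached content."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx, gy):
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

def dwell_select(gaze_samples, regions, dwell_ms=800, sample_ms=16.7):
    """Return the first region fixated continuously for at least `dwell_ms`.

    `gaze_samples` is an iterable of (x, y) points at a fixed sampling rate.
    This is a generic dwell-time pattern, not the authors' exact logic."""
    current, elapsed = None, 0.0
    for gx, gy in gaze_samples:
        hit = next((r for r in regions if r.contains(gx, gy)), None)
        if hit is current and hit is not None:
            elapsed += sample_ms
            if elapsed >= dwell_ms:
                return hit
        else:
            current, elapsed = hit, 0.0
    return None
```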


2017 ◽  
Vol 17 (3) ◽  
pp. 257-266 ◽  
Author(s):  
Azam Majooni ◽  
Mona Masood ◽  
Amir Akhavan

This research investigates the effect of layout on viewers' comprehension and cognitive load in information graphics. The term ‘layout’ refers to the arrangement and organization of the visual and textual elements in a graphical design. The experiment conducted in this study is based on two stories, each presented with two different layouts. During the experiment, eye-tracking devices were used to collect gaze data, including eye movement data and pupil diameter fluctuations. When the layouts were modified, the content of each story was narrated using identical visual and textual elements. The analysis of the eye-tracking data provides quantitative evidence of how the change of layout in each story affects participants' comprehension and the variation of their cognitive load. In conclusion, the zigzag form of the layout yielded higher comprehension with a lower imposed cognitive load.
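As an illustration of how pupil-based cognitive-load comparisons of this kind are typically made, the sketch below computes a baseline-corrected mean pupil diameter per layout on made-up numbers; the authors' actual analysis is not described in the abstract.

```python
import statistics

def mean_pupil_change(samples, baseline):
    """Baseline-corrected mean pupil diameter (mm), a common cognitive-load proxy.
    This mirrors the kind of comparison described, not the authors' exact analysis."""
    return statistics.mean(d - baseline for d in samples)

# Hypothetical per-layout pupil traces recorded while viewing each story version
layout_zigzag = [3.61, 3.65, 3.70, 3.72, 3.69]
layout_other  = [3.74, 3.80, 3.83, 3.85, 3.81]
baseline = 3.55   # pre-stimulus baseline diameter

print("zigzag :", round(mean_pupil_change(layout_zigzag, baseline), 3))
print("other  :", round(mean_pupil_change(layout_other, baseline), 3))
```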


2008 ◽  
Vol 02 (02) ◽  
pp. 207-233
Author(s):  
SATORU MEGA ◽  
YOUNES FADIL ◽  
ARATA HORIE ◽  
KUNIAKI UEHARA

Many human-computer interaction systems have been developed in recent years. These systems use multimedia techniques to create Mixed-Reality environments where users can train themselves. Although most of these systems rely strongly on interactivity with users and take users' states into account, they still lack the ability to consider users' preferences when helping them. In this paper, we introduce an Action Support System for Interactive Self-Training (ASSIST) in cooking. ASSIST focuses on recognizing users' cooking actions as well as the real objects related to these actions, in order to provide accurate and useful assistance. Before the recognition and instruction processes, it collects users' cooking preferences and, by collaborative filtering, suggests one or more recipes that are likely to satisfy them. When the cooking process starts, ASSIST recognizes users' hand movements using a similarity measure algorithm called AMSS. When the recognized cooking action is correct, ASSIST instructs the user on the next cooking procedure through virtual objects. When a cooking action is incorrect, the cause of the failure is analyzed and ASSIST provides the user with support information according to that cause, so as to correct the action. Furthermore, we construct parallel transition models from cooking recipes for more flexible instruction. This enables users to perform the necessary cooking actions in any order they want, allowing more flexible learning.
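The abstract names AMSS as the similarity measure for hand-movement recognition but does not define it; the sketch below uses a simplified angle-based trajectory similarity as a stand-in, so the functions and thresholds are illustrative and not the paper's algorithm.

```python
import math

def segment_angles(traj):
    """Direction angle of each consecutive segment in a 2D trajectory."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(traj, traj[1:])]

def angular_similarity(observed, template):
    """Crude angle-based similarity in [0, 1] between two equally sampled
    trajectories. A stand-in for AMSS, whose exact formulation differs."""
    a, b = segment_angles(observed), segment_angles(template)
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    # wrap angle differences into [-pi, pi] before taking magnitudes
    diffs = [abs(math.atan2(math.sin(x - y), math.cos(x - y))) for x, y in zip(a, b)]
    return 1.0 - sum(diffs) / (n * math.pi)

# Example: compare a tracked hand path against a stored action template
observed = [(0, 0), (1, 1), (2, 1.9), (3, 3.1)]
template = [(0, 0), (1, 1), (2, 2), (3, 3)]
print(angular_similarity(observed, template))
```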


2015 ◽  
Vol 15 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Daniel Fritz ◽  
Annette Mossel ◽  
Hannes Kaufmann

In mobile applications, it is crucial to provide intuitive means for 2D and 3D interaction. A large number of techniques exist to support a natural user interface (NUI) by detecting the user's hand posture in RGB+D (depth) data. Depending on the given interaction scenario and its environmental properties, each technique has its advantages and disadvantages regarding the accuracy and robustness of posture detection. While the interaction environment in a desktop setup can be constrained to meet certain requirements, a handheld scenario has to deal with varying environmental conditions. To evaluate the performance of these techniques on a mobile device, a powerful software framework was developed that is capable of processing and fusing RGB and depth data directly on a handheld device. Using this framework, five existing hand posture recognition techniques were integrated and systematically evaluated by comparing their accuracy under varying illumination and backgrounds. Overall, the results reveal the best posture recognition rates for combined RGB+D data, at the expense of update rate. To support users in choosing the appropriate technique for their specific mobile interaction task, we derived guidelines based on our study. In a last step, an experimental study was conducted using the detected hand postures to perform the canonical 3D interaction tasks of selection and positioning in a mixed reality handheld setup.
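To give a flavour of why fusing RGB and depth helps hand segmentation under varying illumination and background, here is a minimal sketch that combines a depth band-pass with a crude skin-colour rule; the techniques evaluated in the paper are considerably more sophisticated, and all thresholds here are assumptions.

```python
import numpy as np

def hand_mask(rgb, depth, depth_near=0.2, depth_far=0.6):
    """Combine a depth band-pass with a crude skin-colour test to segment a hand.

    `rgb` is HxWx3 (uint8), `depth` is HxW in metres. This only illustrates why
    fused RGB+D is more robust than either cue alone."""
    near_mask = (depth > depth_near) & (depth < depth_far)   # hand is closest to the camera
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin_mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) \
                & ((r - np.minimum(g, b)) > 15)
    return near_mask & skin_mask

# Example with random data, just to show the call signature
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.1, 2.0, (480, 640))
print(hand_mask(rgb, depth).sum(), "pixels classified as hand")
```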


2018 ◽  
Vol 11 (2) ◽  
Author(s):  
Sarah Vandemoortele ◽  
Kurt Feyaerts ◽  
Mark Reybrouck ◽  
Geert De Bièvre ◽  
Geert Brône ◽  
...  

To date, few investigations into nonverbal communication in ensemble playing have focused on gaze behaviour. In this study, the gaze behaviour of musicians playing in trios was recorded using the recently developed technique of mobile eye-tracking. Four trios (clarinet, violin, piano) were recorded while rehearsing and while playing several runs through the same musical fragment. The current article reports on an initial exploration of the data in which we describe how often gazing at a partner occurred. On the one hand, we aim to identify possible contrasting cases; on the other, we look for tendencies across the run-throughs. We discuss the quantified gaze behaviour in relation to the existing literature and the current research design.
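Quantifying how often gazing at a partner occurred typically reduces to counting annotated gaze samples per area of interest. The sketch below shows such a computation on a hypothetical annotation format, which may differ from the study's actual coding scheme.

```python
from collections import Counter

def gaze_proportions(samples):
    """Share of samples on each area of interest ('violinist', 'pianist', 'score', ...).

    `samples` is a list of AOI labels, one per eye-tracker sample at a fixed rate;
    the study's actual annotation scheme may differ."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {aoi: n / total for aoi, n in counts.items()}

# Hypothetical annotated run-through for the clarinettist
run1 = ["score"] * 620 + ["violinist"] * 140 + ["pianist"] * 90 + ["elsewhere"] * 50
print(gaze_proportions(run1))
```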


2021 ◽  
pp. 1-19
Author(s):  
Jairo Perez-Osorio ◽  
Abdulaziz Abubshait ◽  
Agnieszka Wykowska

Understanding others' nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. Although gaze communication, a well-established nonverbal social behavior, has been shown to be important for inferring others' mental states, little is known about the effects of irrelevant gaze signals on cognitive conflict markers in collaborative settings. Here, participants completed a categorization task where they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot “moved” the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object's color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and induce larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). The results of the three studies show more oculomotor interference as measured by error rates (Study 1), larger curvatures in eye-tracking trajectories (Study 2), and higher amplitudes of the N2 ERP component of the EEG signal as well as higher event-related spectral perturbation amplitudes (Study 3) for incongruent compared with congruent trials. Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
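Trajectory curvature (Study 2) is commonly operationalized as the maximum perpendicular deviation from the straight line between a trajectory's start and end points. The sketch below computes that generic measure; it is not necessarily the authors' exact metric.

```python
import numpy as np

def max_deviation(trajectory):
    """Maximum perpendicular deviation of a 2D trajectory from the straight
    line joining its start and end points; a common curvature index in
    conflict studies, though the authors' exact metric may differ."""
    pts = np.asarray(trajectory, dtype=float)
    start, end = pts[0], pts[-1]
    chord = end - start
    length = np.linalg.norm(chord)
    if length == 0:
        return 0.0
    diffs = pts - start
    # perpendicular distance = |cross(chord, point - start)| / |chord|
    devs = np.abs(chord[0] * diffs[:, 1] - chord[1] * diffs[:, 0]) / length
    return float(devs.max())

# Example: a trajectory that bows towards the distractor side before returning
traj = [(0, 0), (1, 0.4), (2, 0.9), (3, 0.7), (4, 0.2), (5, 0)]
print(max_deviation(traj))   # 0.9
```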

