Maintaining the Sense of Agency in Semi-Autonomous Robot Conferencing

2019 ◽  
Vol 11 (7) ◽  
pp. 143
Author(s):  
Tanaka ◽  
Takenouchi ◽  
Ogawa ◽  
Yoshikawa ◽  
Nishio ◽  
...  

In semi-autonomous robot conferencing, not only does the operator control the robot, but the robot itself also moves autonomously and can thus modify the operator's movement (e.g., by adding social behaviors). However, the sense of agency, that is, the degree to which the operator feels that the robot's movement is their own, decreases if the operator becomes conscious of the discrepancy between the teleoperation and the autonomous behavior. In this study, we developed an interface that controls the robot's head by using an eye tracker. When the robot autonomously moves its eye-gaze position, the interface guides the operator's eye movement toward this autonomous movement. An experiment showed that our interface maintains the sense of agency because it provides the illusion that the robot's autonomous behavior is directed by the operator's eye movement. This study reports the conditions under which this illusion can be produced in semi-autonomous robot conferencing.
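The abstract does not give the control law; below is a minimal sketch, assuming normalized 2D gaze targets, of how a teleoperated gaze point could be blended with an autonomous one so that the operator's input still appears to drive the robot's behavior. The linear blend and the ramped weight are illustrative assumptions, not the authors' published controller.

```python
# Minimal sketch: blending a teleoperated gaze target with an autonomous one.
# The blend and ramp are illustrative assumptions, not the paper's method.

def blend_gaze_target(operator_gaze, autonomous_gaze, alpha):
    """Interpolate between the operator's gaze point and the robot's
    autonomous gaze point; alpha=0 is pure teleoperation, alpha=1 is
    fully autonomous."""
    ox, oy = operator_gaze
    ax, ay = autonomous_gaze
    return ((1 - alpha) * ox + alpha * ax,
            (1 - alpha) * oy + alpha * ay)

def step_towards_autonomy(alpha, step=0.05):
    """Ramp the autonomous weight gradually, so the robot's head drifts
    toward its autonomous target while the operator's gaze is guided
    along with it rather than abruptly overridden."""
    return min(1.0, alpha + step)

# Example: the robot head eases from the operator's target to its own.
alpha = 0.0
for _ in range(3):
    alpha = step_towards_autonomy(alpha)
    print(blend_gaze_target((0.2, 0.5), (0.8, 0.5), alpha))
```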

2020 ◽  
Author(s):  
Woochul Choi ◽  
Hyeonsu Lee ◽  
Se-Bum Paik

Bistable perception is characterized by periodic alternation between two different perceptual interpretations, a mechanism that is poorly understood. Herein, we show that perceptual decisions in bistable perception are strongly correlated with slow rhythmic eye motion, the frequency of which varies across individuals. From eye-gaze trajectory measurements during three types of bistable tasks, we found that each subject's gaze position oscillates slowly (less than 1 Hz) and that this frequency matches that of the bistable perceptual alternation. Notably, the eye apparently moves in opposite directions before the two opposite perceptual decisions, which enables prediction of the timing and direction of perceptual alternation from eye motion. We also found that the correlation between eye movement and perceptual decision is maintained when the alternation frequency is varied by intentionally switching or retaining the perceived state. This result suggests that periodic bistable perception is phase-locked with rhythmic eye motion.
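A minimal sketch of the kind of analysis this implies: recovering the dominant sub-1 Hz component of a gaze trajectory, which could then be compared with each subject's perceptual alternation frequency. The sampling rate and band limit are assumptions for illustration.

```python
# Minimal sketch: dominant slow (<1 Hz) frequency of a gaze trace via FFT.
import numpy as np

def dominant_slow_frequency(gaze_x, fs=120.0, f_max=1.0):
    """Return the strongest spectral peak of horizontal gaze position
    below f_max Hz (DC component excluded)."""
    x = np.asarray(gaze_x, dtype=float)
    x = x - x.mean()                          # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs > 0) & (freqs <= f_max)
    return freqs[band][np.argmax(spectrum[band])]

# Example: a 0.4 Hz oscillation is recovered from 60 s of synthetic gaze data.
t = np.arange(0, 60, 1 / 120.0)
gaze = np.sin(2 * np.pi * 0.4 * t) + 0.1 * np.random.randn(t.size)
print(dominant_slow_frequency(gaze))          # ~0.4
```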


2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Chiuhsiang Joe Lin ◽  
Yogi Tri Prasetyo ◽  
Retno Widyaningrum

The current study applied structural equation modeling (SEM) to simultaneously analyze the effects of index of difficulty (ID) and parallax on eye movement time (EMT), fixation duration (FD), time to first fixation (TFF), number of fixations (NF), and eye-gaze accuracy (AC). EMT, FD, TFF, NF, and AC were measured in a projection-based stereoscopic display using a Tobii eye-tracker system. SEM showed that ID had significant direct effects on EMT, NF, and FD, as well as a significant indirect effect on NF; however, ID was not a strong predictor of AC. SEM also showed that parallax had significant direct effects on EMT, NF, FD, TFF, and AC and, beyond these direct effects, significant indirect effects on NF and AC. Regarding the interrelationships among the dependent variables, FD and TFF had significant indirect effects on AC. We conclude that higher AC was achieved with lower parallax (at the screen), longer EMT, higher NF, longer FD, and longer TFF.
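The abstract does not state the SEM software or exact specification; the sketch below shows how a path model with this kind of direct/indirect structure could be written in Python with the semopy package. The column names and the particular paths are illustrative, not the authors' model.

```python
# Hedged sketch of a comparable path model in semopy; the dataset columns
# and path structure here are illustrative assumptions.
import pandas as pd
import semopy

# ID and parallax predict the eye-movement measures, which in turn
# predict accuracy, mirroring the direct/indirect effects described.
model_desc = """
EMT ~ ID + parallax
FD  ~ ID + parallax
TFF ~ parallax
NF  ~ ID + parallax + EMT
AC  ~ parallax + FD + TFF + NF
"""

data = pd.read_csv("eye_metrics.csv")   # hypothetical per-trial dataset
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                  # path estimates and p-values
```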


2020 ◽  
Vol 52 (6) ◽  
pp. 2515-2534 ◽  
Author(s):  
Diederick C. Niehorster ◽  
Raimondas Zemblys ◽  
Tanya Beelders ◽  
Kenneth Holmqvist

The magnitude of variation in the gaze position signals recorded by an eye tracker, also known as its precision, is an important aspect of an eye tracker’s data quality. However, data quality of eye-tracking signals is still poorly understood. In this paper, we therefore investigate the following: (1) How do the various available measures characterizing eye-tracking data during fixation relate to each other? (2) How are they influenced by signal type? (3) What type of noise should be used to augment eye-tracking data when evaluating eye-movement analysis methods? To support our analysis, this paper presents new measures to characterize signal type and signal magnitude based on RMS-S2S and STD, two established measures of precision. Simulations are performed to investigate how each of these measures depends on the number of gaze position samples over which they are calculated, and to reveal how RMS-S2S and STD relate to each other and to measures characterizing the temporal spectrum composition of the recorded gaze position signal. Further empirical investigations were performed using gaze position data recorded with five eye trackers from human and artificial eyes. We found that although the examined eye trackers produce gaze position signals with different characteristics, the relations between precision measures derived from simulations are borne out by the data. We furthermore conclude that data with a range of signal type values should be used to assess the robustness of eye-movement analysis methods. We present a method for generating artificial eye-tracker noise of any signal type and magnitude.
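The two established precision measures the paper builds on have standard definitions and can be computed directly from a fixation's gaze samples; the sketch below also illustrates the intuition behind a signal-type measure, since white noise gives RMS-S2S close to sqrt(2) times STD while slower, correlated noise lowers that ratio.

```python
# RMS-S2S (sample-to-sample root mean square) and STD (dispersion about the
# mean), computed over one fixation's gaze samples, per standard definitions.
import numpy as np

def rms_s2s(x, y):
    """Root mean square of sample-to-sample displacements."""
    dx, dy = np.diff(x), np.diff(y)
    return np.sqrt(np.mean(dx**2 + dy**2))

def std_precision(x, y):
    """Combined standard deviation of gaze position about its mean."""
    return np.sqrt(np.var(x) + np.var(y))

# White noise: RMS-S2S ≈ sqrt(2)·STD, so the ratio characterizes signal type.
rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
print(rms_s2s(x, y) / std_precision(x, y))   # ≈ 1.41 for white noise
```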


Healthcare ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Chong-Bin Tsai ◽  
Wei-Yu Hung ◽  
Wei-Yen Hsu

Optokinetic nystagmus (OKN) is an involuntary eye movement induced by motion of a large proportion of the visual field. It consists of a “slow phase (SP)”, in which the eye moves in the same direction as the pattern, and a “fast phase (FP)” of saccadic eye movements in the opposite direction. The study of OKN can reveal valuable information in ophthalmology, neurology, and psychology; however, commercially available high-resolution, research-grade eye trackers are usually expensive. Methods and Results: We developed a novel, fast, and effective system, combined with a low-cost eye-tracking device, to accurately and quantitatively measure OKN eye movement. Conclusions: The experimental results indicate that the proposed method achieves fast and promising results in comparison with several traditional approaches.
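The paper's algorithm is not detailed in the abstract; as a point of reference, a basic velocity-threshold segmentation of an OKN trace into slow and fast phases could look like the sketch below, with the threshold and sampling rate as illustrative assumptions.

```python
# Minimal velocity-threshold sketch for labeling OKN slow/fast phases.
# Threshold and sampling rate are illustrative, not the authors' algorithm.
import numpy as np

def label_okn_phases(position_deg, fs=60.0, fp_threshold_deg_s=40.0):
    """Label each inter-sample interval 'FP' when eye velocity exceeds
    the saccadic threshold, else 'SP'."""
    velocity = np.diff(position_deg) * fs          # deg/s
    return np.where(np.abs(velocity) > fp_threshold_deg_s, "FP", "SP")

# A sawtooth-like trace: 10 deg/s slow drift with abrupt resets.
t = np.arange(0, 2, 1 / 60.0)
trace = (t % 0.5) * 10.0
labels = label_okn_phases(trace)
print(np.count_nonzero(labels == "FP"))            # one FP per reset
```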


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1051
Author(s):  
Si Jung Kim ◽  
Teemu H. Laine ◽  
Hae Jung Suk

Presence refers to the emotional state of users in which their motivation for thinking and acting arises from the perception of entities in a virtual world. Users’ immersion levels can vary as they interact with different media content, which may result in different levels of presence, especially in a virtual reality (VR) environment. This study investigates how user characteristics, such as gender, immersion level, and emotional valence in VR, relate to three elements of presence effects (attention, enjoyment, and memory). A VR story was created and used as an immersive stimulus in an experiment and presented through a head-mounted display (HMD) equipped with an eye tracker, which collected the participants’ eye-gaze data during the experiment. A total of 53 university students (26 females, 27 males), aged 20 to 29 years (mean 23.8), participated in the experiment. Pre- and post-questionnaires were used as a subjective measure to support the evidence of relationships between the presence effects and user characteristics. The results showed that user characteristics such as gender, immersion level, and emotional valence affected the level of presence; however, there was no evidence that attention is associated with enjoyment or memory.


2014 ◽  
Vol 607 ◽  
pp. 664-668
Author(s):  
Zhi Hui Liu ◽  
Sheng Ze Wang ◽  
Qiong Shen ◽  
Jia Jun Feng

This study investigates the characteristics of eye movements while operating a flat knitting machine. To objectively evaluate the machine’s operation interface, we had participants complete operation tasks on the interface and then used an eye tracker to analyze and evaluate the layout design. By testing different layout designs, we obtained fixation sequences, fixation counts, heat maps, and fixation durations. The results showed that the layout design significantly affected eye movements, especially the fixation sequences and heat maps, whereas fixation counts and fixation durations were consistently influenced by the operation tasks. Overall, eye-movement data can not only be used to evaluate the operation interface but also to significantly enhance the layout design of the flat knitting machine.
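As an illustration of one of the measures used, a fixation heat map can be accumulated from gaze points on a coarse grid over the interface; the screen resolution and cell size below are assumptions for illustration.

```python
# Minimal sketch: accumulating fixation points into a coarse heat-map grid.
import numpy as np

def fixation_heatmap(points, width=1920, height=1080, cell=40):
    """Bin fixation points (pixel coordinates) into a 2D grid; higher
    counts mark hotter regions of the interface layout."""
    grid = np.zeros((height // cell, width // cell))
    for x, y in points:
        gx, gy = int(x) // cell, int(y) // cell
        if 0 <= gy < grid.shape[0] and 0 <= gx < grid.shape[1]:
            grid[gy, gx] += 1
    return grid

heat = fixation_heatmap([(305, 410), (310, 405), (1500, 200)])
print(heat.max())   # densest cell -> candidate hotspot in the layout
```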


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye-gaze data. While there are tools to delineate AOIs for extracting eye-movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye-movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye-movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply object detectors to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, and object instance segmentation models capture 30% of eye movements.
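The core filtering step can be illustrated with bounding-box AOIs; a segmentation-based variant would swap in a point-in-polygon test against the mask boundary. The data layout below is an assumption, not the authors' implementation.

```python
# Minimal sketch: keeping only gaze samples that land in per-frame AOIs
# produced by an object detector (boxes as (x1, y1, x2, y2)).

def gaze_in_bbox(gaze, bbox):
    """True when a gaze sample (x, y) falls inside a detected bounding box."""
    x, y = gaze
    x1, y1, x2, y2 = bbox
    return x1 <= x <= x2 and y1 <= y <= y2

def filter_gaze_by_aois(samples, detections):
    """Keep (frame, gaze, label) triples whose gaze lands in an AOI detected
    on the same video frame; `detections` maps frame -> [(label, bbox), ...]."""
    hits = []
    for frame, gaze in samples:
        for label, bbox in detections.get(frame, []):
            if gaze_in_bbox(gaze, bbox):
                hits.append((frame, gaze, label))
    return hits

samples = [(0, (120, 90)), (0, (500, 500))]
detections = {0: [("face", (100, 50, 200, 150))]}
print(filter_gaze_by_aois(samples, detections))   # only the first sample hits
```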


Author(s):  
James Kim

The purpose of this study was to examine factors that influence how people look at objects they will have to act upon while first watching others interact with them. We investigated whether including different types of task-relevant information in an observational learning task would lead participants to adapt their gaze toward the object carrying more task-relevant information. The participant watched an actor simultaneously lift and replace two objects with two hands and was then cued to lift one of the two objects. The objects could change weight between trials. In the cue condition, participants were cued to lift the same object on every trial. In the object condition, participants were cued equally often to act on both objects; however, only one of the objects could change weight. The hypothesis in the cue condition was that participants would look significantly more at the cued object. The hypothesis in the object condition was that participants would look significantly more at (i.e., adapt their gaze toward) the object that changed weight, the rationale being that participants learn to allocate their gaze toward that object to gain information about its properties (i.e., weight change). Pending results will indicate whether this occurred and will have implications for understanding eye-movement sequences in visually guided behavior tasks. The outcome of this study also has implications for the mechanisms of eye gaze in social learning tasks.


2020 ◽  
Vol 10 (5) ◽  
pp. 1668 ◽  
Author(s):  
Pavan Kumar B. N. ◽  
Adithya Balasubramanyam ◽  
Ashok Kumar Patil ◽  
Chethana B. ◽  
Young Ho Chai

Over the years, the gaze input modality has become an easy and in-demand human–computer interaction (HCI) method for various applications. Research on gaze-based interactive applications has advanced considerably, as HCIs are no longer constrained to traditional input devices. In this paper, we propose GazeGuide, a novel immersive eye-gaze-guided camera that can seamlessly control the movements of a camera mounted on an unmanned aerial vehicle (UAV) from the eye gaze of a remote user. The video stream captured by the camera is fed into a head-mounted display (HMD) with a binocular eye tracker. The user’s eye gaze is the sole input modality for maneuvering the camera. A user study with static and moving targets of interest in three-dimensional (3D) space was conducted to evaluate the proposed framework. GazeGuide was compared with a state-of-the-art input modality, a remote controller. The qualitative and quantitative results showed that GazeGuide performed significantly better than the remote controller.
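The abstract does not describe the control mapping; one plausible sketch maps gaze offsets from the HMD view center to pan/tilt rate commands for the UAV camera. The gain and dead zone are illustrative assumptions, not the GazeGuide implementation.

```python
# Hypothetical sketch: normalized gaze ([0,1] x [0,1]) to pan/tilt rates.

def gaze_to_pan_tilt(gx, gy, gain=30.0, dead_zone=0.1):
    """Offsets from the view center beyond a dead zone become proportional
    pan/tilt rate commands in deg/s; the dead zone suppresses jitter when
    the user looks near the center."""
    dx, dy = gx - 0.5, gy - 0.5
    pan = gain * dx if abs(dx) > dead_zone else 0.0
    tilt = -gain * dy if abs(dy) > dead_zone else 0.0   # up-gaze tilts up
    return pan, tilt

print(gaze_to_pan_tilt(0.8, 0.5))   # look right -> camera pans right (9.0, 0.0)
```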

