Eye Gaze Accuracy in the Projection-based Stereoscopic Display as a Function of Number of Fixation, Eye Movement Time, and Parallax

2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Chiuhsiang Joe Lin ◽  
Yogi Tri Prasetyo ◽  
Retno Widyaningrum

The current study applied Structural Equation Modeling (SEM) to analyze simultaneously the relationships of index of difficulty (ID) and parallax with eye movement time (EMT), fixation duration (FD), time to first fixation (TFF), number of fixations (NF), and eye gaze accuracy (AC). EMT, FD, TFF, NF, and AC were measured in a projection-based stereoscopic display using a Tobii eye-tracking system. SEM showed that ID had significant direct effects on EMT, NF, and FD, as well as a significant indirect effect on NF; however, ID was not a strong predictor of AC. SEM also showed that parallax had significant direct effects on EMT, NF, FD, TFF, and AC and, beyond these direct effects, significant indirect effects on NF and AC. Regarding the interrelationships among the dependent variables, FD and TFF had significant indirect effects on AC. We conclude that higher AC was achieved with lower parallax (at the screen), longer EMT, higher NF, longer FD, and longer TFF.
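
As a rough illustration, a path model of this kind could be specified in Python with the semopy package and its lavaan-style syntax; the path structure, variable names, and data file below are assumptions made for the sketch, not the authors' actual model or data.

```python
# Hypothetical SEM path model mirroring the relationships described above.
# The structural equations and the CSV file are illustrative placeholders.
import pandas as pd
import semopy

model_desc = """
EMT ~ ID + parallax
NF  ~ ID + parallax + EMT
FD  ~ ID + parallax
TFF ~ parallax
AC  ~ parallax + NF + FD + TFF
"""

data = pd.read_csv("eye_tracking_trials.csv")   # columns: ID, parallax, EMT, NF, FD, TFF, AC
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())           # path coefficients and p-values
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, ...)
```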


1997 ◽  
Vol 85 (2) ◽  
pp. 705-718 ◽  
Author(s):  
Chia-Fen Chi ◽  
Chia-Liang Lin

The current experiment examined the speed-accuracy trade-off of saccadic movement between two targets. Ten subjects looked alternately at two targets as fast and as accurately as possible for 2 min. under different conditions of target size, distance between targets, and direction of eye movement. Saccadic movement of the left eye was tracked and recorded with an infrared eye monitoring device to compute the starting position, ending position, and duration of each saccadic movement. Eye-movement time was significantly related to target size and distance between targets, but the speed-accuracy trade-off was significantly different from that predicted by Fitts' Law. Reaction time was not significantly changed by the direction of eye movement.
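
Fitts' law predicts MT = a + b·log2(2D/W); a minimal sketch of fitting that relation to eye movement times with NumPy, using made-up distance, width, and time values rather than the experiment's data.

```python
# Fit Fitts' law, MT = a + b * log2(2D / W), to saccade data by least squares.
# D, W, and MT below are illustrative values, not the experiment's data.
import numpy as np

D = np.array([5.0, 5.0, 10.0, 10.0, 20.0, 20.0])     # distance between targets (deg)
W = np.array([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])         # target width (deg)
MT = np.array([0.21, 0.19, 0.24, 0.22, 0.27, 0.25])  # eye movement time (s)

ID = np.log2(2 * D / W)                  # index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)             # slope and intercept
MT_pred = a + b * ID
r2 = 1 - np.sum((MT - MT_pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"a = {a:.3f} s, b = {b:.3f} s/bit, R^2 = {r2:.2f}")
```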


2016 ◽  
Vol 9 (5) ◽  
Author(s):  
Chiuhsiang Joe Lin ◽  
Retno Widyaningrum

This study investigated eye pointing in stereoscopic displays. Ten participants performed 18 tapping tasks in stereoscopic displays with three levels of parallax (at the screen, and 20 cm and 50 cm in front of the screen). The results showed that parallax had significant effects on hand movement time, eye movement time, and the index of performance for both hand clicks and eye gaze. Movement time was shorter and performance was better when the target was at the screen than when the targets appeared 20 cm or 50 cm in front of the screen. Furthermore, the findings support that eye movement in stereoscopic displays follows Fitts' law. The proposed eye-gaze selection algorithm was effective in improving the fit of the eye movement data in stereoscopic displays.
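
One way to summarize eye-pointing performance per parallax condition is the index of performance, IP = ID/MT; a small sketch under that assumption, with placeholder trial records rather than the study's data.

```python
# Throughput (index of performance, IP = ID / MT) averaged per parallax condition.
from collections import defaultdict
from math import log2

trials = [  # (parallax_cm, distance, width, movement_time_s) -- illustrative values
    (0, 10.0, 2.0, 0.45), (0, 20.0, 2.0, 0.55),
    (20, 10.0, 2.0, 0.60), (20, 20.0, 2.0, 0.72),
    (50, 10.0, 2.0, 0.70), (50, 20.0, 2.0, 0.85),
]

ip_by_parallax = defaultdict(list)
for parallax, d, w, mt in trials:
    ip_by_parallax[parallax].append(log2(2 * d / w) / mt)  # bits per second

for parallax, values in sorted(ip_by_parallax.items()):
    print(f"parallax {parallax:>2} cm: IP = {sum(values) / len(values):.2f} bits/s")
```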


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs and extract eye movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object instance segmentation models capture 30% of eye movements.
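
The gaze-filtering step described here amounts to a point-in-polygon test against each detected AOI; a minimal sketch using shapely, where the AOI polygon and gaze samples are placeholders rather than the paper's pipeline output.

```python
# Filter gaze samples to those inside the polygonal boundary of a detected
# dynamic AOI (e.g., an instance-segmentation mask contour). The polygon and
# gaze samples below are placeholders, not the paper's pipeline or dataset.
from shapely.geometry import Point, Polygon

aoi = Polygon([(120, 80), (300, 80), (300, 260), (120, 260)])  # per-frame AOI contour

gaze_samples = [  # (timestamp_s, x_px, y_px)
    (0.016, 150, 120), (0.033, 310, 90), (0.050, 200, 200), (0.066, 90, 70),
]

inside = [(t, x, y) for t, x, y in gaze_samples if aoi.contains(Point(x, y))]
coverage = len(inside) / len(gaze_samples)
print(f"{coverage:.0%} of gaze samples fall within the AOI")
```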


Author(s):  
James Kim

The purpose of this study was to examine factors that influence how people look at objects they will have to act upon while watching others interact with them first. We investigated whether including different types of task-relevant information in an observational learning task would lead participants to adapt their gaze toward the object carrying more task-relevant information. The participant watched an actor simultaneously lift and replace two objects with two hands and was then cued to lift one of the two objects. The objects could change weight between trials. In our cue condition, participants were cued to lift the same object on every trial. In our object condition, participants were cued equally often to act on both objects; however, only one of the objects could change weight. The hypothesis for the cue condition was that participants would look significantly more at the cued object. The hypothesis for the object condition was that participants would look significantly more at (i.e., adapt their gaze toward) the object that changed weight. The rationale is that participants will learn to allocate their gaze to that object so they can gain information about its properties (i.e., weight change). Pending results will indicate whether this occurred and will have implications for understanding eye movement sequences in visually guided behaviour tasks. The outcome of this study also has implications for the mechanisms of eye gaze in social learning tasks.


2020 ◽  
Vol 10 (3) ◽  
pp. 51
Author(s):  
DongMin Jang ◽  
IlHo Yang ◽  
SeoungUn Kim

The purpose of this study was to detect mind-wandering experienced by pre-service teachers during a physics video lecture. The lecture, on Gauss's law, was a live classroom lecture that was videotaped. We investigated whether oculomotor data and eye movements could serve as markers of a learner's mind-wandering. Data were collected from 24 pre-service teachers (16 female, 8 male) who reported mind-wandering episodes via the self-caught method while watching the 30-minute physics video lecture. A Tobii Pro Spectrum (sampling rate: 300 Hz) captured their eye gaze while they learned Gauss's law from the course video, and after the lecture we interviewed the pre-service teachers about their mind-wandering experiences. We first used the self-caught method to capture the timing of mind-wandering during the video lecture, and then identified more precise mind-wandering segments by comparing fixation duration and saccade count. We examined two types of oculomotor data (blink count, pupil size) and nine eye movement measures (average peak velocity of saccades; maximum peak velocity of saccades; standard deviation of peak velocity of saccades; average amplitude of saccades; maximum amplitude of saccades; total amplitude of saccades; saccade count per second; fixation duration; fixation dispersion). Among these measures, blink count could not be used as a marker of mind-wandering during video lecture learning, unlike in previous literature. Based on these results, we identified which of the oculomotor and eye movement measures reported in previous literature can serve as mind-wandering markers while learning from video lectures that resemble real classes. In addition, interview analysis showed that most participants' mind-wandering focused on past thoughts and that they felt unpleasant after experiencing it.
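
Several of the listed measures can be computed directly from event-labeled eye-tracking records; a small sketch of saccade count per second, peak saccade velocity, and fixation dispersion for one candidate segment, using illustrative events rather than the study's recordings.

```python
# Compute a few of the listed measures (saccade count per second, mean and
# maximum saccade peak velocity, fixation dispersion) for one segment.
# The event records are illustrative placeholders.
import numpy as np

# Each event: (type, duration_s, peak_velocity_deg_s, x_px, y_px)
events = [
    ("fixation", 0.30, None, 512, 388), ("saccade", 0.04, 310.0, None, None),
    ("fixation", 0.25, None, 640, 400), ("saccade", 0.03, 280.0, None, None),
    ("fixation", 0.40, None, 630, 410),
]
segment_duration = sum(e[1] for e in events)

saccade_v = np.array([e[2] for e in events if e[0] == "saccade"])
fix_xy = np.array([[e[3], e[4]] for e in events if e[0] == "fixation"])

print("saccade count /s:", sum(e[0] == "saccade" for e in events) / segment_duration)
print("mean / max peak velocity (deg/s):", saccade_v.mean(), saccade_v.max())
print("fixation dispersion (RMS distance from centroid, px):",
      np.sqrt(((fix_xy - fix_xy.mean(axis=0)) ** 2).sum(axis=1).mean()))
```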


2013 ◽  
Vol 2013 ◽  
pp. 1-13
Author(s):  
Satoshi Suzuki ◽  
Asato Yoshinari ◽  
Kunihiko Kuronuma

To establish a skill evaluation method for human support systems, this study presents the development of an estimating equation for machine operational skill. Eye movement factors such as saccade frequency, velocity, and movement distance were computed with the developed eye gaze measurement system, and eye movement features were derived from these factors. The estimating equation was obtained through an outlier test (to eliminate nonstandard data) and a principal component analysis (to find the dominant components). Using a cooperative carrying task (cc-task) simulator, the eye movements and operational data of machine operators were recorded, and the effectiveness of the derived estimating equation was investigated. The results confirmed that the estimating equation correlated strongly with actual simple skill levels (r = 0.56–0.84). In addition, the effects of internal conditions such as fatigue and stress on the estimating equation were analyzed using heart rate (HR) and the coefficient of variation of the R-R interval (CVRRI). Correlation analysis between these biosignal indexes and the skill-estimating equation showed that the equation reflected the effects of stress and fatigue, although it could still estimate skill level adequately.
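
A skill-estimating equation of this kind could be obtained by standardizing the eye movement features and taking the dominant principal component as the skill score; a minimal sketch with scikit-learn, where the feature set and values are assumptions rather than the paper's data.

```python
# Derive a linear skill-estimating equation from eye movement features:
# standardize, run PCA, and keep the dominant component as the skill score.
# Feature names and values are illustrative, not the paper's measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = operators, columns = (saccade frequency, saccade velocity, saccade distance)
X = np.array([
    [2.1, 180.0, 4.2], [3.4, 220.0, 5.1], [1.8, 150.0, 3.6],
    [4.0, 260.0, 6.3], [2.7, 200.0, 4.8],
])

Xz = StandardScaler().fit_transform(X)
pca = PCA(n_components=1)
skill_score = pca.fit_transform(Xz).ravel()

print("component loadings (equation weights):", pca.components_[0])
print("explained variance ratio:", pca.explained_variance_ratio_[0])
print("estimated skill scores:", skill_score)
```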


2020 ◽  
Author(s):  
Woochul Choi ◽  
Hyeonsu Lee ◽  
Se-Bum Paik

Bistable perception is characterized by periodic alternation between two different perceptual interpretations, the mechanism of which is poorly understood. Herein, we show that perceptual decisions in bistable perception are strongly correlated with slow rhythmic eye motion, the frequency of which varies across individuals. From eye gaze trajectory measurements during three types of bistable tasks, we found that each subject's gaze position oscillates slowly (below 1 Hz) and that this frequency matches that of bistable perceptual alternation. Notably, the eyes apparently move in opposite directions before the two opposite perceptual decisions, which enables the timing and direction of perceptual alternation to be predicted from eye motion. We also found that the correlation between eye movement and perceptual decision is maintained when the alternation frequency is varied by intentionally switching or retaining the perceived state. These results suggest that periodic bistable perception is phase-locked to rhythmic eye motion.
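
The slow gaze oscillation could be characterized with a power spectrum of gaze position; a small sketch using SciPy's Welch estimator on a synthetic gaze trace (the 0.3 Hz oscillation and sampling rate are assumed placeholders, not the study's recordings).

```python
# Estimate the dominant slow (< 1 Hz) oscillation frequency of horizontal gaze
# position with a Welch power spectrum. The gaze trace below is synthetic.
import numpy as np
from scipy.signal import welch

fs = 120.0                                    # eye tracker sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)                  # 60 s of samples
gaze_x = 0.8 * np.sin(2 * np.pi * 0.3 * t) + 0.1 * np.random.randn(t.size)

f, psd = welch(gaze_x, fs=fs, nperseg=int(20 * fs))  # 20 s windows -> 0.05 Hz resolution
slow = f < 1.0
dominant = f[slow][np.argmax(psd[slow])]
print(f"dominant slow oscillation: {dominant:.2f} Hz")
```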


2021 ◽  
Author(s):  
Daniel K Bjornn ◽  
Julie Van ◽  
Brock Kirwan

Pattern separation and pattern completion are generally studied in humans using mnemonic discrimination tasks such as the Mnemonic Similarity Task (MST), in which participants identify similar lures and repeated items from a series of images. Failures to correctly discriminate lures are thought to reflect a failure of pattern separation and a propensity toward pattern completion. Recent research has challenged this perspective, suggesting that poor encoding rather than pattern completion accounts for false alarm responses to similar lures. In two experiments, participants completed a continuous recognition version of the MST while eye movement data (Experiments 1 and 2) and fMRI data (Experiment 2) were collected. While we replicated the result that fixation counts at study predicted accuracy on lure trials, we found that target-lure similarity was a much stronger predictor of lure-trial accuracy across both experiments. Lastly, we found that fMRI activation changes in the hippocampus were significantly correlated with the number of fixations at study for correct but not incorrect mnemonic discrimination judgments when controlling for target-lure similarity. Our findings indicate that while eye movements during encoding predict subsequent hippocampal activation changes, mnemonic discrimination performance is better described by pattern separation and pattern completion processes that are influenced by target-lure similarity than by simply poor encoding.
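
The reported comparison of predictors could be framed as a regression of lure accuracy on fixation count and target-lure similarity; a minimal logistic-regression sketch with scikit-learn, using invented trial values rather than the experiments' data.

```python
# Predict lure-trial accuracy from fixation count at study and target-lure
# similarity. The trial records below are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: fixation count at study, target-lure similarity bin (1 = most similar)
X = np.array([[8, 1], [3, 1], [7, 3], [2, 2], [9, 5], [4, 4], [6, 2], [5, 5]])
y = np.array([1, 0, 1, 0, 1, 1, 0, 1])   # 1 = correct "similar" response to the lure

clf = LogisticRegression().fit(X, y)
print("coefficients (fixations, similarity):", clf.coef_[0])
print("P(correct) for 5 fixations, similarity bin 3:",
      clf.predict_proba([[5, 3]])[0, 1])
```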


2019 ◽  
Vol 11 (3) ◽  
pp. 168781401881953
Author(s):  
Guichao Ren ◽  
Xiaohua Zhao ◽  
Zhanzhou Lin ◽  
Wenxiang Xu

The development of freeway construction and the increasing coverage of the road network have led to increasing requirements for guide signs. This article investigated drivers' visual cognition of exit guide signs at freeway interchanges. A static visual cognition experiment with 32 participants was carried out, with route information volume (four levels) and destination information volume (seven levels) as the variables. An eye-tracking system was used to record drivers' eye movement indicators, such as eye movement time, saccade frequency, seek time, and fixation duration. The results indicated that eye movement time, saccade frequency, and seek time are highly correlated with information volume and increase significantly as information volume increases. Although fixation duration showed no correlation with information volume, the fixation duration, saccade frequency, and seek time for destination information were significantly higher than those for route information, indicating that destination information fulfills a stronger guiding function during the driver's trip. The corresponding threshold values of destination information volume are 5, 5, 4, and 3 under the four levels of route information, and the threshold value of route information volume is 3.
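
The correlation analysis described here is a straightforward indicator-versus-information-volume test; a small sketch with SciPy, where the information volumes and eye movement times are placeholders, not the experiment's measurements.

```python
# Correlate an eye movement indicator (eye movement time) with destination
# information volume. The values below are illustrative placeholders.
from scipy.stats import pearsonr

info_volume = [1, 2, 3, 4, 5, 6, 7]                       # destination information volume
eye_movement_time = [1.2, 1.5, 1.9, 2.6, 3.4, 4.1, 4.9]   # seconds per sign

r, p = pearsonr(info_volume, eye_movement_time)
print(f"r = {r:.2f}, p = {p:.3f}")
```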

