Slow rhythmic eye motion predicts periodic alternation of bistable perception

2020 ◽  
Author(s):  
Woochul Choi ◽  
Hyeonsu Lee ◽  
Se-Bum Paik

Bistable perception is characterized by periodic alternation between two different perceptual interpretations, the mechanism of which is poorly understood. Herein, we show that perceptual decisions in bistable perception are strongly correlated with slow rhythmic eye motion, the frequency of which varies across individuals. From eye gaze trajectory measurements during three types of bistable tasks, we found that each subject’s gaze position oscillates slowly (below 1 Hz) and that this frequency matches that of bistable perceptual alternation. Notably, the eye apparently moves in opposite directions before the two opposite perceptual decisions, which enables the prediction of the timing and direction of perceptual alternation from eye motion. We also found that the correlation between eye movement and perceptual decision is maintained when the alternation frequency is varied by intentionally switching or retaining the perceived state. This result suggests that periodic bistable perception is phase-locked with rhythmic eye motion.
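As an illustrative aside, the core comparison in this abstract — whether the dominant frequency of slow gaze oscillation matches the perceptual alternation rate — can be sketched with a Fourier analysis. This is a minimal sketch, not the authors' pipeline; the 0.05–1 Hz band, the sampling rate, and the synthetic data are all assumptions.

```python
import numpy as np

def dominant_frequency(signal, fs, f_lo=0.05, f_hi=1.0):
    """Return the strongest spectral peak of `signal` within [f_lo, f_hi] Hz."""
    signal = signal - np.mean(signal)          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)   # slow (<1 Hz) band only
    return freqs[band][np.argmax(spectrum[band])]

# Hypothetical data: horizontal gaze position sampled at 60 Hz for 5 minutes,
# plus a list of perceptual switch times (s) reported by the subject.
fs = 60.0
t = np.arange(0, 300, 1.0 / fs)
gaze_x = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.random.randn(t.size)
switch_times = np.arange(0, 300, 1.0 / (2 * 0.3))  # two switches per full cycle

gaze_f = dominant_frequency(gaze_x, fs)
alternation_f = 1.0 / (2 * np.mean(np.diff(switch_times)))
print(f"gaze oscillation: {gaze_f:.2f} Hz, perceptual alternation: {alternation_f:.2f} Hz")
```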

2019 ◽  
Vol 11 (7) ◽  
pp. 143
Author(s):  
Tanaka ◽  
Takenouchi ◽  
Ogawa ◽  
Yoshikawa ◽  
Nishio ◽  
...  

In semi-autonomous robot conferencing, not only does the operator control the robot, but the robot itself also moves autonomously and can thereby modify the operator’s movement (e.g., by adding social behaviors). However, the sense of agency, that is, the degree to which the operator feels that the robot’s movement is their own, decreases if the operator becomes conscious of the discrepancy between the teleoperation and the autonomous behavior. In this study, we developed an interface that controls the robot head using an eye tracker. When the robot autonomously moves its eye-gaze position, the interface guides the operator’s eye movement towards this autonomous movement. The experiment showed that our interface can maintain the sense of agency because it creates the illusion that the autonomous behavior of the robot is directed by the operator’s eye movement. This study reports the conditions under which this illusion can be provided in semi-autonomous robot conferencing.
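A minimal sketch of the kind of mapping such an interface needs: converting gaze coordinates to head pan/tilt angles and blending the operator's command with an autonomous target. All names, angle ranges, and the blending weight here are hypothetical; the paper's actual interface is not reproduced.

```python
# Hypothetical sketch: blend operator gaze with an autonomous gaze target
# into robot head pan/tilt commands.

def gaze_to_head_angles(gx, gy, pan_range=60.0, tilt_range=40.0):
    """Map normalized gaze coordinates (0..1) to head angles in degrees."""
    pan = (gx - 0.5) * pan_range
    tilt = (0.5 - gy) * tilt_range
    return pan, tilt

def blend(operator, autonomous, w=0.3):
    """Weighted blend; w is the share given to the autonomous behavior."""
    return (1 - w) * operator + w * autonomous

op_pan, op_tilt = gaze_to_head_angles(0.7, 0.4)      # operator's gaze
auto_pan, auto_tilt = gaze_to_head_angles(0.5, 0.5)  # autonomous target
print(blend(op_pan, auto_pan), blend(op_tilt, auto_tilt))
```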


2018 ◽  
Author(s):  
Woochul Choi ◽  
Se-Bum Paik

A subject-specific process of information accumulation may be responsible for variations in decision time during visual perception in humans. A detailed profile of this perceptual decision making, however, has not yet been established. Using a coherence-varying motion discrimination task, we precisely measured each subject’s perceptual decision kernel. We observed that the kernel size (decision time) is consistent within subjects and independent of stimulus dynamics, and that the observed kernel accurately predicts each subject’s performance. Interestingly, the performance of most subjects was optimized when the stimulus duration matched their kernel size. We also found that the observed kernel size was strongly correlated with perceptual alternation under bistable conditions. Our results suggest that the observed decision kernel reveals a subject-specific feature of sensory integration.
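The kernel measurement described here resembles reverse correlation: averaging the stimulus fluctuations that preceded each decision. A minimal sketch under that assumption (the authors' exact estimator may differ), with entirely hypothetical data:

```python
import numpy as np

def decision_kernel(coherence, decision_frames, window):
    """Average the coherence trace over `window` frames preceding each decision.
    Reverse-correlation sketch only; not the authors' published estimator."""
    segments = [coherence[f - window:f] for f in decision_frames if f >= window]
    return np.mean(segments, axis=0)

rng = np.random.default_rng(0)
coherence = rng.normal(0, 1, 10_000)              # frame-by-frame coherence fluctuation
decision_frames = rng.integers(200, 10_000, 300)  # frames at which decisions occurred
kernel = decision_kernel(coherence, decision_frames, window=120)  # ~2 s at 60 Hz
print(kernel.shape)  # (120,) — the temporal integration profile
```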


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs and extract eye movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object instance segmentation models capture 30%.
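Filtering gaze samples against the polygonal boundary of a detected AOI reduces to a point-in-polygon test. A minimal sketch, assuming pixel-space gaze samples and one detected object contour (both hypothetical):

```python
import numpy as np
from matplotlib.path import Path

def gaze_in_aoi(gaze_xy, polygon_xy):
    """Boolean mask of gaze samples falling inside one dynamic AOI polygon."""
    return Path(polygon_xy).contains_points(gaze_xy)

# Hypothetical data: gaze samples (pixels) and a detected object's contour.
gaze_xy = np.array([[120, 80], [300, 220], [310, 230], [50, 400]])
polygon_xy = [(280, 200), (360, 200), (360, 260), (280, 260)]

inside = gaze_in_aoi(gaze_xy, polygon_xy)
print(inside, f"{inside.mean():.0%} of samples captured by this AOI")
```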


2014 ◽  
Vol 7 (1) ◽  
Author(s):  
Vassilios Krassanakis ◽  
Vassiliki Filippakopoulou ◽  
Byron Nakos

Eye movement recordings and their analysis constitute an effective way to examine visual perception, and dedicated computer software is needed to perform the corresponding data analysis. The present study describes the development of a new toolbox, called EyeMMV (Eye Movements Metrics & Visualizations), for post-experimental eye movement analysis. Fixation events are detected with a newly introduced algorithm based on a two-step spatial dispersion threshold. Furthermore, EyeMMV is designed to support all well-known eye-tracking metrics and visualization techniques. The results of the fixation identification algorithm are compared with those of a dispersion-type algorithm with a moving window, implemented in another open-source analysis tool; the outputs are strongly correlated. EyeMMV is developed in the MATLAB scripting language, and the source code is distributed through GitHub under the third version of the GNU General Public License (https://github.com/krasvas/EyeMMV).
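EyeMMV itself is written in MATLAB; as a language-neutral illustration of the dispersion-threshold idea, below is a basic single-threshold, I-DT-style fixation detector in Python. Note that EyeMMV's actual algorithm uses a two-step spatial threshold, which this sketch does not reproduce.

```python
import numpy as np

def idt_fixations(x, y, t, dispersion_max=1.0, duration_min=0.1):
    """Basic I-DT fixation detection: grow a window while its spatial
    dispersion (horizontal range + vertical range) stays under
    `dispersion_max`; keep windows lasting at least `duration_min` s.
    Single-threshold illustration only, not EyeMMV's two-step algorithm."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i + 1
        while j < n and (np.ptp(x[i:j + 1]) + np.ptp(y[i:j + 1])) <= dispersion_max:
            j += 1
        if t[j - 1] - t[i] >= duration_min:
            # centroid and time span of the accepted fixation window
            fixations.append((np.mean(x[i:j]), np.mean(y[i:j]), t[i], t[j - 1]))
            i = j
        else:
            i += 1
    return fixations
```

The central design choice in dispersion-based detection is the trade-off between the spatial threshold and the minimum duration; EyeMMV's second spatial step refines the fixation cluster beyond what this single-pass sketch does.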


Author(s):  
James Kim

The purpose of this study was to examine the factors that influence how people look at objects they will have to act upon while first watching others interact with them. We investigated whether including different types of task-relevant information in an observational learning task would lead participants to adapt their gaze towards the object carrying more task-relevant information. Each participant watched an actor simultaneously lift and replace two objects with two hands and was then cued to lift one of the two objects. The objects could change weight between trials. In our cue condition, participants were always cued to lift the same object. In our object condition, participants were cued equally often to act on both objects; however, only one of the objects could change weight. The hypothesis in the cue condition was that participants would look significantly more at the cued object. The hypothesis in the object condition was that participants would look significantly more at (i.e., adapt their gaze towards) the object that changed weight. The rationale is that participants learn to allocate their gaze to that object in order to gain information about its properties (i.e., weight changes). Pending results will indicate whether this occurred and will have implications for understanding eye movement sequences in visually guided behaviour tasks, as well as for the mechanisms of eye gaze in social learning tasks.


2020 ◽  
Vol 10 (3) ◽  
pp. 51
Author(s):  
DongMin Jang ◽  
IlHo Yang ◽  
SeoungUn Kim

The purpose of this study was to detect mind-wandering experienced by pre-service teachers while they learned from a physics video lecture. The lecture, on Gauss's law, was videotaped from a live classroom lecture. We investigated whether oculomotor data and eye movements could serve as markers of a learner’s mind-wandering. Data were collected from 24 pre-service teachers (16 females, 8 males) who reported mind-wandering episodes through the self-caught method while watching the 30-minute physics video lecture. A Tobii Pro Spectrum (sampling rate: 300 Hz) captured their eye gaze during learning. After the video lecture, we interviewed the pre-service teachers about their mind-wandering experiences. We first used the self-caught reports to locate the timing of mind-wandering, and then refined these segments by comparing fixation duration and saccade count. We examined two types of oculomotor data (blink count, pupil size) and nine eye movement measures (average peak velocity of saccades; maximum peak velocity of saccades; standard deviation of peak velocity of saccades; average amplitude of saccades; maximum amplitude of saccades; total amplitude of saccades; saccade count per second; fixation duration; fixation dispersion). Unlike the previous literature, we found that blink count could not be used as a marker of mind-wandering during video-lecture learning. Based on these results, we identified which of the oculomotor and eye movement measures reported in the previous literature can serve as mind-wandering markers while learning from video lectures that resemble real classes. Additionally, the interview analysis showed that most participants had focused on past thoughts and felt unpleasant after experiencing mind-wandering.
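A candidate marker such as fixation duration is only useful if it separates mind-wandering segments from attentive ones. A minimal sketch of that comparison with a Welch t-test; the feature values below are hypothetical illustrations, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-segment features: mean fixation duration (ms) in
# self-caught mind-wandering windows vs. matched attentive windows.
mw_fix_dur = np.array([412, 455, 398, 430, 470, 441])
attentive_fix_dur = np.array([305, 322, 298, 340, 315, 310])

t, p = stats.ttest_ind(mw_fix_dur, attentive_fix_dur, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")  # a usable marker separates the two groups
```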


2013 ◽  
Vol 2013 ◽  
pp. 1-13
Author(s):  
Satoshi Suzuki ◽  
Asato Yoshinari ◽  
Kunihiko Kuronuma

To establish a skill evaluation method for human support systems, we develop an estimating equation for machine operational skill. Eye movement factors such as the frequency, velocity, and moving distance of saccades were computed using the developed eye gaze measurement system, and eye movement features were derived from these factors. The estimating equation was obtained through an outlier test (to eliminate nonstandard data) and a principal component analysis (to find the dominant components). Using a cooperative carrying task (cc-task) simulator, the eye movement and operational data of machine operators were recorded, and the effectiveness of the derived estimating equation was investigated. The estimating equation correlated strongly with actual skill levels (r = 0.56–0.84). In addition, the effects of internal conditions such as fatigue and stress on the estimating equation were analyzed using heart rate (HR) and the coefficient of variation of the R-R interval (CVRRI). Correlation analysis between these biosignal indexes and the estimating equation showed that the equation reflected the effects of stress and fatigue, although it could still estimate the skill level adequately.
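Deriving an estimating equation as the dominant principal component of standardized eye-movement features can be sketched as follows; the feature names, the random data, and the omission of the outlier test are all simplifications of the paper's procedure:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: rows = operators, columns = eye-movement
# features (saccade frequency, velocity, moving distance).
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))

Z = StandardScaler().fit_transform(X)   # z-score each feature
pca = PCA(n_components=1).fit(Z)
skill_score = pca.transform(Z).ravel()  # "estimating equation" output = PC1

# Linear weights of the derived estimating equation:
print(dict(zip(["sacc_freq", "sacc_vel", "sacc_dist"], pca.components_[0])))
```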


Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 39
Author(s):  
Julie Royo ◽  
Fabrice Arcizet ◽  
Patrick Cavanagh ◽  
Pierre Pouget

We introduce a blind spot method to create image changes contingent on eye movements. One challenge of eye movement research is triggering display changes contingent on gaze: the eye-tracking system must capture an image of the eye, detect and track the pupil and corneal reflections to estimate the gaze position, and then transfer these data to the computer that updates the display. All of these steps introduce delays that are often difficult to predict. To avoid these issues, we describe a simple blind spot method that generates gaze-contingent display manipulations without any eye-tracking system or display controls.
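The geometry behind a blind spot method can be sketched directly: the blind spot of the viewing eye lies roughly 15° temporal to and about 1.5° below fixation, so a display change drawn there is invisible to a monocular, fixating observer. The viewing distance, pixel density, and fixation point below are assumed values, not the paper's setup:

```python
import numpy as np

def deg_to_px(deg, viewing_distance_cm, px_per_cm):
    """Convert visual angle (degrees) to screen pixels."""
    return np.tan(np.radians(deg)) * viewing_distance_cm * px_per_cm

# Assumed setup: 57 cm viewing distance, 38 px/cm display, right eye
# viewing (left eye patched), fixation at screen center.
dist_cm, px_per_cm = 57.0, 38.0
fix_x, fix_y = 960, 540

# Typical blind spot location: ~15 deg temporal, ~1.5 deg below fixation;
# temporal means rightward for the right eye.
bs_x = fix_x + deg_to_px(15.0, dist_cm, px_per_cm)
bs_y = fix_y + deg_to_px(1.5, dist_cm, px_per_cm)
print(f"draw the change near ({bs_x:.0f}, {bs_y:.0f}) px")
```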

