Is the Eye Movement Pattern the Same? The Difference Between Automated Driving and Manual Driving

Author(s):  
Qiuyang Tang ◽  
Gang Guo
2021 ◽  
pp. 174702182199851
Author(s):  
Claudia Bonmassar ◽  
Francesco Pavani ◽  
Alessio Di Renzo ◽  
Cristina Caselli ◽  
Wieske van Zoest

Previous research on covert orienting to the periphery suggested that early profound deaf adults were less susceptible to uninformative gaze cues, though were equally or more affected by non-social arrow cues. The aim of the present work was to investigate whether spontaneous eye movement behaviour helps explain the reduced impact of the social cue in deaf adults. We tracked the gaze of 25 early profound deaf and 25 age-matched hearing observers performing a peripheral discrimination task with uninformative central cues (gaze vs. arrow), stimulus-onset asynchrony (250 vs. 750 ms) and cue-validity (valid vs. invalid) as within-subject factors. In both groups, the cue effect on RT was comparable for the two cues, although deaf observers responded significantly more slowly than hearing controls. While deaf and hearing observers' eye movement patterns looked similar when the cue was presented in isolation, deaf participants made significantly more eye movements than hearing controls once the discrimination target appeared. Notably, further analysis of eye movements in the deaf group revealed that, independent of cue-type, cue-validity affected saccade landing position, while saccade latency was not modulated by these factors. Saccade landing position was also strongly related to the magnitude of the validity effect on RT: the greater the difference in saccade landing position between invalid and valid trials, the greater the difference in manual RT between invalid and valid trials. This work suggests that the contribution of overt selection in central cueing of attention is more prominent in deaf adults and helps determine manual performance, irrespective of cue-type.


Author(s):  
XIAOWEI WANG ◽  
XIAOXU GENG ◽  
JINKE WANG ◽  
SHINICHI TAMURA

Eye movement analysis provides a new way for disease screening, quantification, and assessment. In order to track and analyze eye movement scanpaths under different conditions, this paper proposed a Gaussian Mixture-Hidden Markov Model (G-HMM) to model the eye movement scanpath during saccades, combined with the Time-Shifting Segmentation (TSS) method for model optimization; the Linear Discriminant Analysis (LDA) method was then utilized to perform the recognition and evaluation tasks based on the multi-dimensional features. In the experiments, datasets of eye-movement sequences from 800 real-scene images were used. The experimental results show that the G-HMM method has high specificity for free-search tasks and high sensitivity for prompted object-search tasks, while TSS strengthens the differences in eye movement characteristics, which is conducive to eye movement pattern recognition, especially for search tasks.
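The LDA recognition step described above can be sketched as a two-class Fisher discriminant over multi-dimensional eye-movement features. This is a minimal illustration on synthetic data; the feature names and thresholding are assumptions, not the paper's actual pipeline (which feeds G-HMM/TSS-derived features into LDA):

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: return the unit projection direction w.

    X0, X1: (n_samples, n_features) arrays for the two task classes
    (e.g. free-search vs. prompted-search eye-movement features).
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the two class scatter matrices.
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1) +
          np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)  # direction maximizing class separation
    return w / np.linalg.norm(w)

def classify(x, w, threshold):
    """Assign class 1 if the projection exceeds the threshold."""
    return int(x @ w > threshold)

# Synthetic multi-dimensional features (e.g. fixation duration, saccade
# amplitude, scanpath length) for two hypothetical task conditions.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 3))
X1 = rng.normal(1.5, 1.0, size=(100, 3))

w = fisher_lda_direction(X0, X1)
threshold = ((X0.mean(axis=0) + X1.mean(axis=0)) / 2) @ w
acc = np.mean([classify(x, w, threshold) for x in X1])
```

The midpoint threshold is the simplest decision rule; a full implementation would estimate class priors and evaluate sensitivity/specificity per task type, as the paper reports.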


2019 ◽  
Vol 24 (4) ◽  
pp. 297-311
Author(s):  
José David Moreno ◽  
José A. León ◽  
Lorena A. M. Arnal ◽  
Juan Botella

Abstract. We report the results of a meta-analysis of 22 experiments comparing the eye movement data obtained from young (Mage = 21 years) and old (Mage = 73 years) readers. The data included six eye movement measures (mean gaze duration, mean fixation duration, total sentence reading time, mean number of fixations, mean number of regressions, and mean length of progressive saccade eye movements). Estimates were obtained of the standardized mean difference, d, between the age groups in all six measures. The results showed positive combined effect size estimates in favor of the young adult group (between 0.54 and 3.66 in all measures), although the difference for the mean number of fixations was not significant. Young adults systematically make shorter gazes, fewer regressions, and shorter saccadic movements during reading than older adults, and they also read faster. The meta-analysis results statistically confirm the most common patterns observed in previous research; therefore, eye movements seem to be a useful tool to measure behavioral changes due to the aging process. Moreover, these results do not allow us to discard either of the two main hypotheses assessed for explaining the observed aging effects, namely neural degenerative problems and the adoption of compensatory strategies.
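The effect-size machinery behind such a meta-analysis can be illustrated with a short, self-contained sketch: a standardized mean difference d (Cohen's d with pooled SD) per experiment, combined across experiments with inverse-variance weighting. All numbers below are hypothetical, not values from the 22 experiments:

```python
import math

def cohens_d(m_young, m_old, sd_young, sd_old, n_young, n_old):
    """Standardized mean difference between two independent groups."""
    pooled_sd = math.sqrt(((n_young - 1) * sd_young**2 +
                           (n_old - 1) * sd_old**2) /
                          (n_young + n_old - 2))
    return (m_old - m_young) / pooled_sd

def d_variance(d, n1, n2):
    """Approximate large-sample sampling variance of d."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def fixed_effect_combine(ds, variances):
    """Inverse-variance weighted combined effect size."""
    weights = [1 / v for v in variances]
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)

# Hypothetical per-experiment ds for one measure (e.g. mean fixation
# duration) with (n_young, n_old) sample sizes.
ds = [0.6, 0.9, 0.45]
ns = [(20, 20), (30, 25), (15, 18)]
vs = [d_variance(d, n1, n2) for d, (n1, n2) in zip(ds, ns)]
combined = fixed_effect_combine(ds, vs)
```

A published meta-analysis would typically also report heterogeneity statistics and may use a random-effects model; this sketch shows only the fixed-effect combination step.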


2021 ◽  
Vol 2 (1) ◽  
pp. 17-23
Author(s):  
Subiyanto Subiyanto ◽  
Nira na Nirwa ◽  
Yuniarti Yuniarti ◽  
Yudi Nurul Ihsan ◽  
Eddy Afrianto

The purpose of this study was to determine the hydrodynamic conditions at Bojong Salawe beach. The method used in this research is a quantitative method, in which numerical data such as wind, bathymetry, and tide data are collected to support the construction of a numerical model. The hydrodynamic model was built using MIKE 21 with the Flow Model FM module to determine the current movement pattern based on the data used. In the west monsoon, the current moves with a maximum instantaneous speed of 0.04–0.08 m/s, while in the east monsoon it moves with a maximum instantaneous speed of 0.4–0.44 m/s. The dominant direction of current movement tends to the northeast. The results indicate that the current speed during the east monsoon is higher than during the west monsoon. The difference in current speed is also influenced by the tide conditions: higher during high tide and lower during low tide. Monsoons also play a role in the current movements, though the effect is not very significant.


2019 ◽  
Vol 31 (06) ◽  
pp. 1950048
Author(s):  
Takenao Sugi ◽  
Ryosuke Baba ◽  
Yoshitaka Matsuda ◽  
Satoru Goto ◽  
Naruto Egashira ◽  
...  

People with serious movement disabilities due to neurodegenerative diseases have problems communicating with others. Considerable numbers of communication aid systems have been developed in the past; in particular, systems driven by eye movements are thought to be effective for such people. The electrooculographic (EOG) signal reflects eye movement, and specific patterns of eye movement can be seen in EOG signals. This paper proposes a communication aid system based on extracting the features of EOG. The system consists of a computer, an analog-to-digital converter, a biological amplifier, and two monitors. The two monitors, one for the system user and the other for other people, display the same information. Five items are presented on the monitor, and the user selects among them according to the situation in the communication. Selection of the items is done by combining three eye movements: gaze at left, gaze at right, and successive blinks. The basic concept of the communication aid system was designed taking into account the current state of a subject's movement disability. Then, the design of the screen and the algorithm for detecting eye movement patterns from EOG were determined using data from normal healthy subjects. The system worked almost perfectly for normal healthy subjects. The developed system was then operated by a subject with a serious movement disability. Parts of the system operation were regarded as satisfactory, though some mis-operations were also observed.
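The detection of the three command patterns from EOG could be sketched as simple threshold rules on the horizontal and vertical channels. The thresholds, channel conventions, and window handling below are illustrative assumptions, not the paper's calibrated algorithm:

```python
import numpy as np

# Illustrative thresholds in arbitrary amplifier units; a real system
# would calibrate these per user from the biological amplifier output.
GAZE_THRESHOLD = 150.0   # horizontal channel
BLINK_THRESHOLD = 200.0  # vertical channel

def classify_eog_window(horizontal, vertical):
    """Classify one EOG window into the three command patterns used for
    item selection: 'gaze_left', 'gaze_right', 'blinks', or 'none'."""
    horizontal = np.asarray(horizontal, dtype=float)
    vertical = np.asarray(vertical, dtype=float)
    # Two or more sharp peaks in the vertical channel -> successive blinks.
    peaks = np.sum((vertical[1:-1] > BLINK_THRESHOLD) &
                   (vertical[1:-1] >= vertical[:-2]) &
                   (vertical[1:-1] >= vertical[2:]))
    if peaks >= 2:
        return "blinks"
    # Sustained horizontal deflection -> gaze at left or right.
    mean_h = horizontal.mean()
    if mean_h > GAZE_THRESHOLD:
        return "gaze_right"
    if mean_h < -GAZE_THRESHOLD:
        return "gaze_left"
    return "none"

# Synthetic example: a sustained rightward deflection over a 200-sample window.
command = classify_eog_window(np.full(200, 300.0), np.zeros(200))
```

A deployed system would add band-pass filtering, artifact rejection, and debouncing before thresholding; this sketch only shows the decision structure over the three eye-movement commands.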


Author(s):  
Dengbo He ◽  
Birsen Donmez

State-of-the-art vehicle automation requires drivers to visually monitor the driving environment and the automation (through interfaces and vehicle’s actions) and intervene when necessary. However, as evidenced by recent automated vehicle crashes and laboratory studies, drivers are not always able to step in when the automation fails. Research points to the increase in distraction or secondary-task engagement in the presence of automation as a potential reason. However, previous research on secondary-task engagement in automated vehicles mainly focused on experienced drivers. This issue may be amplified for novice drivers with less driving skill. In this paper, we compared secondary-task engagement behaviors of novice and experienced drivers both in manual (non-automated) and automated driving settings in a driving simulator. A self-paced visual-manual secondary task presented on an in-vehicle display was utilized. Phase 1 of the study included 32 drivers (16 novice) who drove the simulator manually. In Phase 2, another set of 32 drivers (16 novice) drove with SAE-level-2 automation. In manual driving, there were no differences between novice and experienced drivers’ rate of manual interactions with the secondary task (i.e., taps on the display). However, with automation, novice drivers had a higher manual interaction rate with the task than experienced drivers. Further, experienced drivers had shorter average glance durations toward the task than novice drivers in general, but the difference was larger with automation compared with manual driving. It appears that with automation, experienced drivers are more conservative in their secondary-task engagement behaviors compared with novice drivers.

