Combining EEG and Eye Tracking: Using Fixation-Locked Potentials in Visual Search

2013 ◽  
Vol 6 (4) ◽  
Author(s):  
Brent Winslow ◽  
Angela Carpenter ◽  
Jesse Flint ◽  
Xuezhong Wang ◽  
David Tomasetti ◽  
...  

Visual search is a complex task that involves many neural pathways to identify relevant areas of interest within a scene. Humans remain a critical component in visual search tasks, as they can effectively perceive anomalies within complex scenes. However, this task can be challenging, particularly under time pressure. In order to improve visual search training and performance, an objective, process-based measure is needed. Eye tracking technology can be used to drive real-time parsing of EEG recordings, providing an indication of the analysis process. In the current study, eye fixations were used to generate ERPs during a visual search task. Clear differences in the fixation-locked ERPs were observed as a function of performance, suggesting that neurophysiological signatures could be developed to prevent errors in visual search tasks.
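The fixation-locked approach above amounts to epoching continuous EEG around fixation onsets and averaging. A minimal sketch of that step (assuming a continuous EEG array and a list of fixation onset times; this is an illustration, not the authors' actual pipeline) might look like:

```python
import numpy as np

def fixation_locked_erp(eeg, fs, fixation_times, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to fixation onsets.

    eeg: (n_channels, n_samples) array of continuous EEG
    fs: sampling rate in Hz
    fixation_times: fixation onset times in seconds
    Returns the ERP as an (n_channels, n_epoch_samples) array.
    """
    pre = int(round(-tmin * fs))
    post = int(round(tmax * fs))
    n = eeg.shape[1]
    epochs = []
    for t in fixation_times:
        i = int(round(t * fs))
        if i - pre >= 0 and i + post <= n:  # skip fixations too close to the edges
            epochs.append(eeg[:, i - pre:i + post])
    return np.mean(epochs, axis=0)

# Synthetic example: 2 channels, 10 s of noise at 250 Hz
fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2, 10 * fs))
erp = fixation_locked_erp(eeg, fs, fixation_times=[1.0, 2.5, 4.2, 7.8])
print(erp.shape)  # (2, 150): a 0.6 s window at 250 Hz
```

In practice such epoching would also include baseline correction and artifact rejection, which are omitted here for brevity.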

Author(s):  
Tobias Rieger ◽  
Lydia Heilmann ◽  
Dietrich Manzey

Visual inspection of luggage using X-ray technology at airports is a time-sensitive task that is often supported by automated systems to increase performance and reduce workload. The present study evaluated how time pressure and automation support influence visual search behavior and performance in a simulated luggage screening task. Moreover, we also investigated how target expectancy (i.e., targets appearing in a target-often location or not) influenced performance and visual search behavior. We used a paradigm where participants used the mouse to uncover a portion of the screen, which allowed us to track how much of the stimulus participants uncovered prior to their decision. Participants were randomly assigned to either a high (5-s time per trial) or a low (10-s time per trial) time-pressure condition. In half of the trials, participants were supported by an automated diagnostic aid (85% reliability) in deciding whether a threat item was present. Moreover, within each half, in target-present trials, targets appeared in a predictable location (i.e., 70% of targets appeared in the same quadrant of the image) to investigate effects of target expectancy. The results revealed better detection performance with low time pressure and faster response times with high time pressure. There was an overall negative effect of automation support because the automation was only moderately reliable. Participants also uncovered a smaller amount of the stimulus under high time pressure in target-absent trials. Target location expectancy improved accuracy and speed and reduced the amount of the stimulus that had to be uncovered during search.

Significance Statement
Luggage screening is a safety-critical real-world visual search task which often has to be done under time pressure. The present research found that time pressure compromises performance and increases the risk of missing critical items, even with automation support. Moreover, even highly reliable automated support may not improve performance if it does not exceed the manual capabilities of the human screener. Lastly, the present research also showed that heuristic search strategies (e.g., attending to areas where targets appear more often) also seem to guide attention in luggage screening.
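Detection performance in screening tasks like the one above is commonly quantified with signal detection theory. A minimal sketch of computing sensitivity d′ from hit and false-alarm counts (an assumed metric for illustration; the abstract does not state which measure was used) is:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    Uses a log-linear correction (adding 0.5 to each cell) so that
    perfect hit or false-alarm rates do not produce infinite z-scores.
    """
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical screener: 40 hits, 10 misses, 5 false alarms, 45 correct rejections
print(round(d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45), 2))
```

Comparing d′ across the time-pressure and automation conditions would separate genuine sensitivity changes from shifts in response bias.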


2018 ◽  
Author(s):  
Alasdair D F Clarke ◽  
Jessica Irons ◽  
Warren James ◽  
Andrew B. Leber ◽  
Amelia R. Hunt

A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here we ask if an individual's strategy and performance in one search task is correlated with how they perform in the other two. We tested 64 observers in the three tasks mentioned above over two sessions. Even though the test-retest reliability of the tasks is high, an observer's performance and strategy in one task did not reliably predict their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search we therefore need to account not only for differences between individuals, but also how individuals interact with the search task and context. These context-specific but stable individual differences in strategy can account for a substantial proportion of variability in search performance.


2020 ◽  
pp. 174702182092919 ◽  
Author(s):  
Alasdair DF Clarke ◽  
Jessica L Irons ◽  
Warren James ◽  
Andrew B Leber ◽  
Amelia R Hunt

A striking range of individual differences has recently been reported in three different visual search tasks. These differences in performance can be attributed to strategy, that is, the efficiency with which participants control their search to complete the task quickly and accurately. Here, we ask whether an individual’s strategy and performance in one search task is correlated with how they perform in the other two. We tested 64 observers and found that even though the test–retest reliability of the tasks was high, an observer’s performance and strategy in one task was not predictive of their behaviour in the other two. These results suggest search strategies are stable over time, but context-specific. To understand visual search, we therefore need to account not only for differences between individuals but also how individuals interact with the search task and context.


2014 ◽  
Author(s):  
Tommy P. Keane ◽  
Nathan D. Cahill ◽  
John A. Tarduno ◽  
Robert A. Jacobs ◽  
Jeff B. Pelz

2021 ◽  
pp. 174702182110502 ◽  
Author(s):  
Azuwan Musa ◽  
Alison R Lane ◽  
Amanda Ellison

Visual search is a task often used in the rehabilitation of patients with cortical and non-cortical visual pathologies such as visual field loss. Reduced visual acuity is often comorbid with these disorders, and it remains poorly defined how low visual acuity may affect a patient’s ability to recover visual function through visual search training. The two experiments reported here investigated whether induced blurring of vision (from 6/15 to 6/60) in a neurotypical population differentially affected various types of feature search tasks, whether there is a minimal acceptable level of visual acuity required for normal search performance, and whether these factors affected the degree to which participants could improve with training. The results showed that reducing visual acuity slowed search, but only for tasks where the target was defined by shape or size (not colour), and only when acuity was worse than 6/15. Furthermore, search behaviour improved with training in all three feature search tasks, irrespective of the degree of blurring that was induced. The improvement also generalised to a non-trained search task, indicating that an enhanced search strategy had been developed. These findings have important implications for the use of visual search as a rehabilitation aid for partial visual loss, indicating that individuals with even severe comorbid blurring should still be able to benefit from such training.


2017 ◽  
Vol 36 (7) ◽  
pp. 737-744 ◽  
Author(s):  
Christina Bröhl ◽  
Sabine Theis ◽  
Peter Rasche ◽  
Matthias Wille ◽  
Alexander Mertens ◽  
...  

2017 ◽  
Author(s):  
David Yates ◽  
Tom Stafford

Recent evidence suggests that participants perform better on some visual search tasks when they are instructed to search the display passively (i.e., letting the unique item “pop” into mind) rather than actively (Smilek, Enns, Eastwood, & Merikle, 2006; Watson, Brennan, Kingstone, & Enns, 2010). We extended these findings using eye tracking, a neutral baseline condition (Experiment 1), and testing visual search over a wider range of eccentricities (10°–30°, Experiment 2). We show that the passive instructions led participants to delay their initial saccade compared to participants given active or neutral instructions. Despite taking longer to start searching the display, passive participants then found and responded to the target faster. We show that this benefit does not extend to search where items were distributed in the true periphery.


2021 ◽  
Vol 3 ◽  
Author(s):  
Mildred Loiseau-Taupin ◽  
Alexis Ruffault ◽  
Jean Slawinski ◽  
Lucile Delabarre ◽  
Dimitri Bayle

In badminton, the ability to quickly gather relevant visual information is one of the most important determinants of performance. However, gaze behavior has never been investigated in a real-game setting (with fatigue), nor related to performance. The aim of this study was to evaluate the effect of fatigue on gaze behavior during a badminton game setting, and to determine the relationship between fatigue, performance, and gaze behavior. Nineteen novice badminton players equipped with eye-tracking glasses played two badminton sets: one before and one after a fatiguing task. The duration and number of fixations for each exchange were evaluated for nine areas of interest. Performance in terms of points won or lost and successful strokes was not impacted by fatigue; however, fatigue induced more fixations per exchange on two areas of interest (the shuttlecock and the empty area after the opponent's stroke). Furthermore, two distinct gaze behaviors were found for successful and unsuccessful performance: points won were associated with fixations on the boundary lines and short fixation durations on the empty area before the participant's stroke; successful strokes were related to long fixation durations overall, short fixation durations on the empty area, and a large number of fixations on the shuttlecock, racket, opponent's upper body, and anticipation area. This is the first study to use a mobile eye-tracking system to capture gaze behavior during a real badminton game setting: fatigue induced changes in gaze behavior, and successful and unsuccessful performance were associated with two distinct gaze behaviors.


2017 ◽  
Vol 14 (132) ◽  
pp. 20170406 ◽  
Author(s):  
Tatiana A. Amor ◽  
Mirko Luković ◽  
Hans J. Herrmann ◽  
José S. Andrade

When searching for a target within an image, our brain can adopt different strategies, but which one does it choose? This question can be answered by tracking the motion of the eye while it executes the task. Following many individuals performing various search tasks, we distinguish between two competing strategies. Motivated by these findings, we introduce a model that captures the interplay of the search strategies and allows us to create artificial eye-tracking trajectories, which could be compared with the experimental ones. Identifying the model parameters allows us to quantify the strategy employed in terms of ensemble averages, characterizing each experimental cohort. In this way, we can discern with high sensitivity the relation between the visual landscape and the average strategy, disclosing how small variations in the image induce changes in the strategy.
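A toy version of such a two-strategy scanpath model (illustrative only; the mixing rule and parameters here are assumptions, not the fitted model from the paper) alternates between local inspection saccades and long exploratory relocations, and the balance between the two shapes the saccade-amplitude distribution:

```python
import numpy as np

def simulate_scanpath(n_fixations, p_explore, size=1000, seed=0):
    """Toy two-strategy scanpath: with probability p_explore make a long
    exploratory jump anywhere in the image; otherwise make a short local
    saccade around the current fixation."""
    rng = np.random.default_rng(seed)
    pos = np.empty((n_fixations, 2))
    pos[0] = rng.uniform(0, size, 2)
    for i in range(1, n_fixations):
        if rng.random() < p_explore:
            pos[i] = rng.uniform(0, size, 2)          # global relocation
        else:
            step = rng.normal(0, size * 0.02, 2)      # local inspection step
            pos[i] = np.clip(pos[i - 1] + step, 0, size)
    return pos

# Saccade-amplitude distributions differ sharply between strategy mixes
for p in (0.1, 0.9):
    path = simulate_scanpath(500, p_explore=p)
    amps = np.linalg.norm(np.diff(path, axis=0), axis=1)
    print(f"p_explore={p}: median saccade amplitude = {np.median(amps):.1f} px")
```

Fitting a mixing parameter like p_explore to observed trajectories is one simple way to reduce a cohort's scanpaths to an ensemble-average strategy, in the spirit of the model described above.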

