Neural basis of feature-based contextual effects on visual search behavior

Author(s):  
Kelly Shen ◽  
Martin Paré
2007 ◽  
Author(s):  
Madoka Takahashi ◽  
Kazunobu Fukuhara ◽  
Motonobu Ishii

Author(s):  
Tobias Rieger ◽  
Lydia Heilmann ◽  
Dietrich Manzey

Abstract
Visual inspection of luggage using X-ray technology at airports is a time-sensitive task that is often supported by automated systems to increase performance and reduce workload. The present study evaluated how time pressure and automation support influence visual search behavior and performance in a simulated luggage-screening task. We also investigated how target expectancy (i.e., targets appearing in a target-often location or not) influenced performance and visual search behavior. We used a paradigm in which participants moved the mouse to uncover a portion of the screen, which allowed us to track how much of the stimulus participants uncovered before making their decision. Participants were randomly assigned to either a high (5 s per trial) or a low (10 s per trial) time-pressure condition. In half of the trials, participants were supported by an automated diagnostic aid (85% reliability) in deciding whether a threat item was present. Moreover, within each half, targets in target-present trials appeared in a predictable location (i.e., 70% of targets appeared in the same quadrant of the image) to investigate effects of target expectancy. The results revealed better detection performance under low time pressure and faster response times under high time pressure. Automation support had an overall negative effect because the automation was only moderately reliable. Participants also uncovered a smaller amount of the stimulus under high time pressure in target-absent trials. Target-location expectancy improved accuracy, speed, and the amount of uncovered space needed for the search.

Significance Statement
Luggage screening is a safety-critical real-world visual search task that often has to be performed under time pressure. The present research found that time pressure compromises performance and increases the risk of missing critical items even with automation support. Moreover, even highly reliable automated support may not improve performance if it does not exceed the manual capabilities of the human screener. Lastly, the present research also showed that heuristic search strategies (e.g., prioritizing areas where targets appear more often) seem to guide attention in luggage screening as well.
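The trial structure described in this abstract can be sketched in code. The following is a minimal illustration only: the trial count, the 50% target-present rate, and all variable names are assumptions, while the 85% aid reliability and the 70% target-often quadrant come from the abstract.

```python
import random

random.seed(1)

def make_trials(n_trials=200, p_target=0.5, aid_reliability=0.85,
                target_often_quadrant=0, p_often=0.70):
    """Generate a hypothetical trial list mirroring the described design."""
    trials = []
    for _ in range(n_trials):
        target_present = random.random() < p_target
        if target_present:
            # 70% of targets appear in the predictable ("target-often") quadrant
            if random.random() < p_often:
                quadrant = target_often_quadrant
            else:
                quadrant = random.choice(
                    [q for q in range(4) if q != target_often_quadrant])
        else:
            quadrant = None
        # The automated diagnostic aid is correct on ~85% of trials
        aid_correct = random.random() < aid_reliability
        aid_says_target = target_present if aid_correct else not target_present
        trials.append({"target_present": target_present,
                       "quadrant": quadrant,
                       "aid_says_target": aid_says_target})
    return trials

trials = make_trials()
```

Such a generator makes the two manipulated probabilities explicit: a moderately reliable aid produces roughly 15% misleading advisories, and the spatial regularity is strong enough for participants to learn the target-often quadrant.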


2016 ◽  
Author(s):  
Johannes Jacobus Fahrenfort ◽  
Anna Grubert ◽  
Christian N. L. Olivers ◽  
Martin Eimer

Abstract
The primary electrophysiological marker of feature-based selection is the N2pc, a lateralized posterior negativity emerging around 180–200 ms. Because it relies on hemispheric differences, its ability to discriminate the locus of focal attention is severely limited. Here we demonstrate that multivariate analyses of raw EEG data provide a much more fine-grained spatial profile of feature-based target selection. When training a pattern classifier to determine target position from the EEG, we were able to decode target positions on the vertical midline, which cannot be achieved using standard N2pc methodology. Next, we used a forward encoding model to construct a channel tuning function that describes the continuous relationship between target position and multivariate EEG in an eight-position display. This model can spatially discriminate individual target positions in these displays and is fully invertible, enabling us to construct hypothetical topographic activation maps for target positions that were never used. When tested against the real pattern of neural activity obtained from a different group of subjects, the constructed maps from the forward model turned out to be statistically indistinguishable, thus providing independent validation of our model. Our findings demonstrate the power of multivariate EEG analysis to track feature-based target selection with high spatial and temporal precision.

Significance Statement
Feature-based attentional selection enables observers to find objects in their visual field. The spatiotemporal profile of this process is difficult to assess with standard electrophysiological methods, which rely on activity differences between cerebral hemispheres. We demonstrate that multivariate analyses of EEG data can track target selection across the visual field with high temporal and spatial resolution. Using a forward model, we were able to capture the continuous relationship between target position and EEG measurements, allowing us to reconstruct the distribution of cortical activity for target locations that were never shown during the experiment. Our findings demonstrate the existence of a temporally and spatially precise EEG signal that can be used to study the neural basis of feature-based attentional selection.
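The forward-encoding logic summarized in this abstract, fitting electrode weights from hypothesized channel tuning functions and then inverting the model to recover channel responses, can be sketched on simulated data. This is not the authors' pipeline: the electrode count, trial count, noise level, and the raised half-cosine basis are all illustrative assumptions; only the eight-position display comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n_positions = 8      # eight-position display, as in the study
n_channels = 8       # one basis channel per position (assumption)
n_electrodes = 32    # hypothetical electrode count
n_trials = 400       # hypothetical trial count

# Channel tuning functions: raised half-cosines centred on each position.
angles = np.arange(n_positions) * 2 * np.pi / n_positions

def channel_responses(pos_angle):
    """Response of each basis channel to a target at pos_angle."""
    d = np.angle(np.exp(1j * (angles - pos_angle)))  # wrapped angular distance
    return np.maximum(np.cos(d / 2), 0) ** 7

# Simulate: each trial's EEG pattern = channel responses @ weights + noise.
positions = rng.integers(0, n_positions, n_trials)
C = np.stack([channel_responses(angles[p]) for p in positions])     # trials x channels
W_true = rng.normal(size=(n_electrodes, n_channels))
B = C @ W_true.T + 0.5 * rng.normal(size=(n_trials, n_electrodes))  # trials x electrodes

# Forward model: estimate electrode weights by least squares (B ≈ C @ W.T)...
W_hat = np.linalg.lstsq(C, B, rcond=None)[0].T   # electrodes x channels

# ...then invert the model to reconstruct channel responses from EEG.
C_hat = B @ np.linalg.pinv(W_hat).T              # trials x channels

# Decoded position = channel with the maximal reconstructed response.
decoded = C_hat.argmax(axis=1)
accuracy = (decoded == positions).mean()
```

Because the fitted weight matrix maps channel space to electrode space, the same model can also be run forward with novel channel profiles, which is what allows topographic maps to be constructed for positions never shown, as the abstract describes.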


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Nilofar Babadi ◽  
Behrouz Abdoli ◽  
Alireza Farsi ◽  
Samira Moeinirad

Author(s):  
David Shinar ◽  
Edward D. McDowell ◽  
Nick J. Rackoff ◽  
Thomas H. Rockwell

This paper reports on two studies that examined the relationship between field dependence and on-the-road visual search behavior. In the first study, concerned with eye movements in curve negotiation, it was found that field-dependent subjects have a less effective visual search pattern. In the second study, young and aged drivers were compared on several information-processing tasks and on their ability to keep their eyes closed part of the time while driving. Of the various information-processing tasks, only field dependence and visual search time correlated significantly with the mean time the drivers needed to keep their eyes open while driving. Together, the two studies indicate that field-dependent subjects require more time to process the available visual information and are less effective in their visual search pattern.


2014 ◽  
Vol 34 (26) ◽  
pp. 8662-8664 ◽  
Author(s):  
J. J. Foster ◽  
K. C. S. Adam

NeuroImage ◽  
2009 ◽  
Vol 45 (3) ◽  
pp. 993-1001 ◽  
Author(s):  
Ping Wei ◽  
Hermann J. Müller ◽  
Stefan Pollmann ◽  
Xiaolin Zhou

2017 ◽  
Vol 40 ◽  
Author(s):  
Laurent Itti

Abstract
Hulleman & Olivers (H&O) make a much-needed stride forward for a better understanding of visual search behavior by rejecting theories based on discrete stimulus items. I propose that the framework could be further enhanced by clearly delineating distinct mechanisms for attention guidance, selection, and enhancement during visual search, instead of conflating them into a single functional field of view.

