Distracting Objects Induce Early Quitting in Visual Search

2019 ◽  
Vol 31 (1) ◽  
pp. 31-42
Author(s):  
Jeff Moher

Task-irrelevant objects can sometimes capture attention and increase the time it takes an observer to find a target. However, less is known about how these distractors impact visual search strategies. Here, I found that salient distractors reduced rather than increased response times on target-absent trials (Experiment 1; N = 200). Combined with higher error rates on target-present trials, these results indicate that distractors can induce observers to quit search earlier than they otherwise would. These effects were replicated when target prevalence was low (Experiment 2; N = 200) and with different stimuli that elicited shallower search slopes (Experiment 3; N = 75). These results demonstrate that salient distractors can produce at least two consequences in visual search: They can capture attention, and they can cause observers to quit searching early. This novel finding has implications both for understanding visual attention and for examining distraction in real-world domains where targets are often absent, such as medical image screening.
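The "search slopes" mentioned above refer to the slope of mean response time as a function of set size (ms/item), with shallower slopes indicating more efficient search. A minimal sketch of how such a slope is conventionally computed, using hypothetical data (the function name and all numbers below are illustrative, not from the article):

```python
# Toy illustration: the "search slope" is the least-squares slope of mean
# response time (RT, ms) over display set size (items). Shallower slopes
# indicate more efficient search.

def search_slope(set_sizes, mean_rts):
    """Ordinary least-squares slope of RT (ms) over set size (items)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical data: an efficient search (~5 ms/item) vs. an inefficient
# one (~40 ms/item)
efficient = search_slope([4, 8, 12, 16], [520, 540, 560, 580])      # 5.0
inefficient = search_slope([4, 8, 12, 16], [560, 720, 880, 1040])   # 40.0
```

The stimuli in Experiment 3 that "elicited shallower search slopes" would correspond to something like the first case: RT grows only slightly as more items are added to the display.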

2006 ◽  
Vol 18 (4) ◽  
pp. 604-613 ◽  
Author(s):  
Clayton Hickey ◽  
John J. McDonald ◽  
Jan Theeuwes

We investigated the ability of salient yet task-irrelevant stimuli to capture attention in two visual search experiments. Participants were presented with circular search arrays that contained a highly salient distractor singleton defined by color and a less salient target singleton defined by form. A component of the event-related potential called the N2pc was used to track the allocation of attention to lateralized positions in the arrays. In Experiment 1, a lateralized distractor elicited an N2pc when a concurrent target was presented on the vertical meridian and thus could not elicit lateralized components such as the N2pc. A similar distractor-elicited N2pc was found in Experiment 2, which was conducted to rule out certain voluntary search strategies. Additionally, in Experiment 2 both the distractor and the target elicited the N2pc component when the two stimuli were presented on opposite sides of the search array. Critically, the distractor-elicited N2pc preceded the target-elicited N2pc on these trials. These results demonstrate that participants shifted attention to the target only after shifting attention to the more salient but task-irrelevant distractor. This pattern of results is in line with theories of attention in which stimulus-driven control plays an integral role.
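The N2pc used above is conventionally measured as the contralateral-minus-ipsilateral voltage difference at posterior electrodes (e.g., PO7/PO8), relative to the visual hemifield of the stimulus of interest; a stimulus on the vertical meridian is contralateral to neither hemisphere, which is why it cannot elicit the component. A hedged sketch of that computation, with hypothetical sample values:

```python
# Sketch: the N2pc is measured as contralateral minus ipsilateral voltage
# at posterior electrode sites, relative to the side of the stimulus of
# interest. The electrode names are standard; the voltage samples below
# are hypothetical.

def n2pc_difference(left_electrode, right_electrode, stimulus_side):
    """Contralateral-minus-ipsilateral voltage, sample by sample.

    stimulus_side: 'left' or 'right' visual hemifield of the stimulus.
    """
    if stimulus_side == 'left':
        # Right hemisphere is contralateral to a left-hemifield stimulus.
        contra, ipsi = right_electrode, left_electrode
    else:
        contra, ipsi = left_electrode, right_electrode
    return [c - i for c, i in zip(contra, ipsi)]

# Hypothetical microvolt samples around 200-300 ms post-stimulus
po7 = [0.2, -0.1, -0.4]    # left-hemisphere electrode
po8 = [0.1, -0.9, -1.6]    # right-hemisphere electrode
wave = n2pc_difference(po7, po8, stimulus_side='left')
# A more negative difference indicates an attention shift to the stimulus.
```

Comparing the onset latency of this difference wave for the distractor versus the target is what allowed the authors to conclude that attention visited the salient distractor first.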


Author(s):  
Samia Hussein

The present study examined the effect of scene context on the guidance of attention during visual search in real-world scenes. Prior research has demonstrated that when searching for an object, attention is usually guided to the region of a scene most likely to contain that target object. This study examined two possible attentional mechanisms underlying efficient search: enhancement of attention (facilitation) and suppression of attention (inhibition). Participants (N=20) were shown an object name and then required to search through scenes for the target while their eye movements were tracked. Scenes were divided into target-relevant contextual regions (upper, middle, lower), and participants searched repeatedly in the same scene for different targets located either in the same region or in different regions. Comparing repeated searches within the same scene, we expect visual search to be faster and more efficient (facilitation of attention) in regions of a scene where attention was previously deployed. At the same time, when searching across different regions, we expect searches to be slower and less efficient (inhibition of attention) because those regions were previously ignored. Results from this study help to better understand how mechanisms of visual attention operate within scene contexts during visual search.


Author(s):  
Ulrich Engelke ◽  
Andreas Duenser ◽  
Anthony Zeater

Selective attention is an important cognitive resource to account for when designing effective human-machine interaction and cognitive computing systems. Much of our knowledge about attention processing stems from search tasks that are usually framed around Treisman's feature integration theory and Wolfe's Guided Search. However, search performance in these tasks has mainly been investigated using an overt attention paradigm. Covert attention, on the other hand, has hardly been investigated in this context. To gain a more thorough understanding of human attentional processing, and especially of covert search performance, the authors experimentally investigated the relationship between overt and covert visual search for targets under a variety of target/distractor combinations. The overt search results presented in this work agree well with the Guided Search studies by Wolfe et al. The authors show that response times are considerably more influenced by the target/distractor combination than by the attentional search paradigm deployed. While response times are similar between the overt and covert search conditions, they found that error rates are considerably higher in covert search. They further show that response times between participants are more strongly correlated as search task complexity increases. The authors discuss their findings and put them into the context of earlier research on visual search.


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
Tobias Feldmann-Wüstefeld ◽  
Anna Schubö

Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention has also been shown to be closely coupled with action planning: preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed to reveal common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from the target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interference.
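In the additional-singleton paradigm referenced above, attentional capture is typically quantified as the response-time cost of the salient distractor's presence. A minimal sketch of that comparison, using hypothetical numbers (the pattern mirrors the finding that capture is larger when the distractor matches WM content, but the values are invented):

```python
# Sketch: attentional capture in the additional-singleton paradigm is
# commonly quantified as mean RT with the salient distractor present
# minus mean RT with it absent. All RTs below are hypothetical.

def capture_cost(rts_distractor_present, rts_distractor_absent):
    """RT cost (ms) attributable to the salient distractor's presence."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rts_distractor_present) - mean(rts_distractor_absent)

# Hypothetical pattern: larger capture when the distractor color matches
# the memorized (WM) color than when it does not.
wm_match = capture_cost([720, 740, 760], [640, 660, 680])       # 80.0 ms
wm_mismatch = capture_cost([680, 700, 720], [640, 660, 680])    # 40.0 ms
```

A larger cost in the WM-match condition is the signature of attention guidance from working memory content.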


2020 ◽  
Vol 148 ◽  
pp. 105785
Author(s):  
Leandro L. Di Stasi ◽  
Carolina Diaz-Piedra ◽  
José M. Morales ◽  
Anton Kurapov ◽  
Mariaelena Tagliabue ◽  
...  

2013 ◽  
Vol 4 ◽  
Author(s):  
Ryoichi Nakashima ◽  
Kazufumi Kobayashi ◽  
Eriko Maeda ◽  
Takeharu Yoshikawa ◽  
Kazuhiko Yokosawa

Author(s):  
Bartholomew Elias

Since the auditory system is not spatially restricted like the visual system, spatial auditory cues can provide information regarding object position, velocity, and trajectory beyond the field of view. A laboratory experiment was conducted to demonstrate that visual displays can be augmented with dynamic spatial auditory cues that provide information regarding the motion characteristics of unseen objects. In this study, dynamic spatial auditory cues presented through headphones conveyed preview information regarding target position, velocity, and trajectory beyond the field of view in a dynamic visual search task in which subjects acquired and identified moving visual targets that traversed a display cluttered with varying numbers of moving distractors. The provision of spatial auditory preview significantly reduced response times to acquire and identify the visual targets and significantly reduced error rates, especially in cases when the visual display load was high. These findings demonstrate that providing dynamic spatial auditory preview cues is a viable mechanism for augmenting visual search performance in dynamic task environments.


2019 ◽  
Vol 19 (10) ◽  
pp. 8c
Author(s):  
Lara García-Delgado ◽  
Miguel Luengo-Oroz ◽  
Daniel Cuadrado ◽  
María Postigo
