Out with the old: New target templates impair the guidance of visual search by preexisting task goals.

2020 · Vol 149 (6) · pp. 1156-1168 · Author(s): Nick Berggren, Rebecca Nako, Martin Eimer

2020 · Vol 82 (6) · pp. 2909-2923 · Author(s): Bo-Yeong Won, Jason Haberman, Eliza Bliss-Moreau, Joy J. Geng

2020 · Vol 6 (1) · pp. 539-562 · Author(s): Jeremy M. Wolfe

In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
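
To make the notion of guided search concrete, here is a toy sketch in Python. It is not Wolfe's Guided Search model, just a minimal illustration of the serial/parallel collaboration the review describes: a noisy parallel stage scores every item's similarity to the search template, and a serial stage inspects items in descending order of that score. All feature names and parameter values are invented for illustration.

```python
import random

def guided_search(items, template, noise=0.5):
    """Toy two-stage search: a parallel stage assigns each item a noisy
    guidance score (feature overlap with the search template), then a
    serial stage inspects items in decreasing order of that score.
    Returns the number of items inspected before the target is found,
    or len(items) if every item is rejected (target absent)."""
    def guidance(features):
        overlap = sum(features.get(k) == v for k, v in template.items())
        return overlap + random.gauss(0, noise)

    ranked = sorted(items, key=lambda item: guidance(item[0]), reverse=True)
    for n, (features, is_target) in enumerate(ranked, start=1):
        if is_target:          # serial verification succeeds
            return n
    return len(items)          # quit after rejecting every item

# Example: find a red T among red and black Ls (target present).
template = {"color": "red", "shape": "T"}
display = [({"color": "red", "shape": "T"}, True)]
display += [({"color": random.choice(["red", "black"]), "shape": "L"}, False)
            for _ in range(11)]
print("Fixations until target found:", guided_search(display, template))
```

Because the red L distractors share the template's color feature, they tend to outrank the black Ls and get inspected earlier: guidance shortens search without eliminating the serial component.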


2014 · Vol 45 (3) · pp. 528-533 · Author(s): Kait Clark, Matthew S. Cain, R. Alison Adcock, Stephen R. Mitroff

2019 · Author(s): Jonas Sin-Heng Lau, Harold Pashler, Timothy F. Brady

When you search repeatedly for a set of items among very similar distractors, does that make you more efficient at locating the targets? To address this, we had observers search for two categories of targets among the same set of distractors across trials. After a few blocks of trials, the distractor set was replaced. In two experiments, we manipulated the level of discriminability between the targets and distractors before and after the distractors were replaced. Our results suggest that in the presence of repeated distractors, observers generally become more efficient. However, the difficulty of the search task does impact how efficient people are when the distractor set is replaced. Specifically, people trained on an easy search are more impaired in a difficult transfer test than people trained on a difficult one. We attribute this effect to the precision of the target template generated during training. In particular, a coarse target template is created when the target and distractors are easy to discriminate. These coarse target templates do not transfer well to a context with new distractors. This suggests that learning with more distinct targets and distractors can result in lower performance when the context changes, but observers recover from this effect quickly (within a block of search trials).
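
To illustrate the template-precision account, here is a minimal sketch under assumed conditions (a one-dimensional feature space, Gaussian perceptual noise, invented numbers, not the authors' stimuli or analysis). A wide "coarse" template suffices when distractors are dissimilar to the target but produces false matches once similar distractors are introduced, whereas a narrow "precise" template transfers better:

```python
import random

def transfer_accuracy(template_width, distractor_offset, trials=20000):
    """Toy model of target-template precision. The target's true feature
    value is 0; an item 'matches' the template if its noisy percept falls
    within +/- template_width of 0. Items are inspected in random order
    and the first match is accepted. Returns the proportion of trials on
    which the target, rather than the distractor, is selected."""
    correct = 0
    for _ in range(trials):
        items = [(random.gauss(0.0, 0.5), True),                 # target
                 (random.gauss(distractor_offset, 0.5), False)]  # distractor
        random.shuffle(items)
        for percept, is_target in items:
            if abs(percept) <= template_width:
                correct += is_target
                break
    return correct / trials

# Easy search tolerates a coarse template...
print("coarse template, dissimilar distractors:", transfer_accuracy(3.0, 4.0))
# ...but that coarse template fails when similar distractors replace them,
print("coarse template, similar distractors:  ", transfer_accuracy(3.0, 1.5))
# while a template made precise by hard training transfers much better.
print("precise template, similar distractors: ", transfer_accuracy(0.8, 1.5))
```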


2017 · Author(s): Johannes J. Fahrenfort, Jonathan Van Leeuwen, Joshua J. Foster, Edward Awh, Christian N.L. Olivers

Abstract: Working memory is the function by which we temporarily maintain information to achieve current task goals. Models of working memory typically debate where this information is stored rather than how it is stored. Here we instead ask what neural mechanisms are involved in storage, and how these mechanisms change as a function of task goals. Participants either had to reproduce the orientation of a memorized bar (continuous recall task) or identify the memorized bar in a search array (visual search task). The sensory input and retention interval were identical in both tasks. We then used decoding and forward modeling on multivariate electroencephalogram (EEG) data and on time-frequency decomposed EEG to investigate which neural signals carry informational content during the retention interval. In the continuous recall task, working memory content was preferentially carried by induced oscillatory alpha-band power, while in the visual search task it was more strongly carried by the distribution of evoked (consistently elevated, non-oscillatory) EEG activity. To demonstrate the independence of these two signals, we removed informational content from one signal without affecting informational content in the other. Finally, we show that the tuning characteristics of both signals change in opposite directions depending on the current task goal. We propose that these signals reflect oscillatory and elevated firing-rate mechanisms that support location-based and object-based maintenance, respectively. Together, these data challenge current models of working memory that place storage in particular regions, and instead emphasize the importance of different distributed maintenance signals depending on task goals.

Significance statement: Without realizing it, we are constantly moving things in and out of our mind's eye, an ability also referred to as "working memory". Where did I put my screwdriver? Do we still have milk in the fridge? A central question in working memory research is how the brain maintains this information temporarily. Here we show that different neural mechanisms are involved in working memory depending on what the memory is used for. For example, remembering what a bottle of milk looks like invokes a different neural mechanism from remembering how much milk it contains: the first primarily supports finding the object, while the second involves spatial properties such as the milk level in the bottle.
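
As a rough sketch of the two analyses contrasted above (decoding memory content from induced alpha-band power versus from the evoked voltage distribution), here is a synthetic-data example. It is not the authors' pipeline: it assumes the mne and scikit-learn packages, and all shapes, topographies, and effect sizes are invented.

```python
import numpy as np
from mne.time_frequency import tfr_array_morlet
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times, sfreq = 200, 32, 250, 250.0
labels = rng.integers(0, 2, n_epochs)            # two memorized orientations
epochs = rng.standard_normal((n_epochs, n_channels, n_times))

# Plant two label-dependent signals so both analyses have something to find:
t = np.arange(n_times) / sfreq
topo_evoked = rng.standard_normal(n_channels)    # hypothetical topographies
topo_alpha = rng.standard_normal(n_channels)
# (1) a sustained voltage shift (an "evoked", non-oscillatory signal) ...
epochs += labels[:, None, None] * topo_evoked[:, None] * 0.2
# (2) ... and a 10 Hz oscillation with random phase per trial, so that it
# shows up in induced power but averages out of the evoked response.
phase = rng.uniform(0, 2 * np.pi, n_epochs)
epochs += ((1.0 + 0.5 * labels)[:, None, None] * topo_alpha[:, None]
           * np.sin(2 * np.pi * 10 * t + phase[:, None, None]))

# Signal 1: induced alpha-band (8-12 Hz) power via Morlet wavelets.
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=np.arange(8, 13),
                         n_cycles=4.0, output='power')
alpha = power.mean(axis=(2, 3))                  # epochs x channels

# Signal 2: the distribution of mean voltage over the retention interval.
evoked = epochs.mean(axis=2)                     # epochs x channels

clf = LogisticRegression(max_iter=1000)
for name, X in [("induced alpha power", alpha), ("evoked voltage", evoked)]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: decoding accuracy = {acc:.2f}")
```

Both feature sets decode the memorized label here because both signals were planted; the paper's point is that which signal carries the content depends on what the memory will be used for.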


2019 · Author(s): Taylor R. Hayes, John M. Henderson

During scene viewing, is attention primarily guided by low-level image salience or by high-level semantics? Recent evidence suggests that overt attention in scenes is primarily guided by semantic features. Here we examined whether the attentional priority given to meaningful scene regions is involuntary. Participants completed a scene-independent visual search task in which they searched for superimposed letter targets whose locations were orthogonal to both the underlying scene semantics and image salience. Critically, the analyzed scenes contained no targets, and participants were unaware of this manipulation. We then directly compared how well the distribution of semantic features and image salience accounted for the overall distribution of overt attention. The results showed that even when the task was completely independent from the scene semantics and image salience, semantics explained significantly more variance in attention than image salience and more than expected by chance. This suggests that salient image features were effectively suppressed in favor of task goals, but semantic features were not suppressed. The semantic bias was present from the very first fixation and increased non-monotonically over the course of viewing. These findings suggest that overt attention in scenes is involuntarily guided by scene semantics.
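
The core comparison lends itself to a short sketch: correlate each predictor map with the observed fixation density map and treat the squared correlation as variance explained. The arrays below are hypothetical stand-ins, not the authors' meaning maps, saliency maps, or eye-tracking data.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 48, 64                                    # coarse scene grid

meaning = rng.random((h, w))                     # hypothetical meaning map
salience = rng.random((h, w))                    # hypothetical saliency map
# Hypothetical fixation density that tracks meaning more than salience.
fixations = 0.7 * meaning + 0.2 * salience + 0.1 * rng.random((h, w))

def variance_explained(pred, obs):
    """Squared Pearson correlation between two flattened maps."""
    r = np.corrcoef(pred.ravel(), obs.ravel())[0, 1]
    return r ** 2

print("meaning  R^2:", round(variance_explained(meaning, fixations), 3))
print("salience R^2:", round(variance_explained(salience, fixations), 3))
```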


2015 · Vol 74 (1) · pp. 55-60 · Author(s): Alexandre Coutté, Gérard Olivier, Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target's spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) a visual target whose location was congruent with the goal of the prepared response was found faster; (2) a visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects had a cumulative influence on visual search performance; and (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the speed of response execution. These results are discussed in the general framework of structural coupling between perception and motor planning.
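
For illustration only, here is a small simulation sketch of how the reported congruency effects and the goal-effect/execution-speed correlation could be quantified. Every number is hypothetical, with the reported pattern deliberately built in; this is not the authors' data or analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40                                        # hypothetical participants

base = rng.normal(620, 40, n)                 # mean RT (ms), nothing congruent
exec_speed = rng.normal(100, 15, n)           # execution speed index (higher = faster)
# Goal benefit shrinks slightly with execution speed (the reported
# marginal negative correlation), plus an independent hand benefit.
goal_benefit = 25 - 0.2 * (exec_speed - 100) + rng.normal(0, 8, n)
hand_benefit = rng.normal(15, 8, n)
rt_goal = base - goal_benefit                 # goal-congruent targets
rt_both = base - goal_benefit - hand_benefit  # goal- and hand-congruent

print("goal congruency effect (ms):      ", (base - rt_goal).mean().round(1))
print("cumulative congruency effect (ms):", (base - rt_both).mean().round(1))
r = np.corrcoef(base - rt_goal, exec_speed)[0, 1]
print("r(goal effect, execution speed):", r.round(2))  # weakly negative
```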

