Decomposing visual search: Evidence of multiple item-specific skills.

Author(s): Anne P. Hillstrom, Gordon D. Logan

2014, Vol 2014, pp. 1-14
Author(s): Dustin Venini, Roger W. Remington, Gernot Horstmann, Stefanie I. Becker

In visual search, some fixations are made between stimuli on empty regions, commonly referred to as “centre-of-gravity” fixations (henceforth: COG fixations). Previous studies have shown that observers with task expertise show more COG fixations than novices. This led to the view that COG fixations reflect simultaneous encoding of multiple stimuli, allowing more efficient processing of task-related items. The present study tested whether COG fixations also aid performance in visual search tasks with unfamiliar and abstract stimuli. Moreover, to provide evidence for the multiple-item processing view, we analysed the effects of COG fixations on the number and dwell times of stimulus fixations. The results showed that (1) search efficiency increased with increasing COG fixations even in search for unfamiliar stimuli and in the absence of special higher-order skills, (2) COG fixations reliably reduced the number of stimulus fixations and their dwell times, indicating processing of multiple distractors, and (3) the proportion of COG fixations was dynamically adapted to potential information gain of COG locations. A second experiment showed that COG fixations are diminished when stimulus positions unpredictably vary across trials. Together, the results support the multiple-item processing view, which has important implications for current theories of visual search.


2019
Author(s): Cherie Zhou, Monicque M. Lorist, Sebastiaan Mathôt

Abstract: During visual search, task-relevant representations in visual working memory (VWM), known as attentional templates, are assumed to guide attention. A current debate concerns whether only one (Single-Item-Template hypothesis, or SIT) or multiple (Multiple-Item-Template hypothesis, or MIT) items can serve as attentional templates simultaneously. The current study was designed to test these two hypotheses. Participants memorized two colors, prior to a visual-search task in which the target and the distractor could match or not match the colors held in VWM. Robust attentional guidance was observed when one of the memory colors was presented as the target (reduced response times [RTs] on target-match trials) or the distractor (increased RTs on distractor-match trials). We constructed two drift-diffusion models that implemented the MIT and SIT hypotheses, which are similar in their predictions about overall RTs, but differ in their predictions about RTs on individual trials. Critically, simulated RT distributions and error rates revealed a better match of the MIT hypothesis to the observed data than the SIT hypothesis. Taken together, our findings provide behavioral and computational evidence for the concurrent guidance of attention by multiple items in VWM.

Significance statement: Theories differ in how many items within visual working memory can guide attention at the same time. This question is difficult to address, because multiple- and single-item-template theories make very similar predictions about average response times. Here we use drift-diffusion modeling in addition to behavioral data, to model response times at an individual level. Crucially, we find that our model of the multiple-item-template theory predicts human behavior much better than our model of the single-item-template theory; that is, modeling of behavioral data provides compelling evidence for multiple attentional templates that are simultaneously active.
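To illustrate the modeling approach described in the abstract, the sketch below simulates a generic two-boundary drift-diffusion process. It is not the authors' actual model; the parameter values and the mapping of drift rate onto trial type (e.g. stronger guidance on target-match trials) are illustrative assumptions.

```python
import numpy as np

def simulate_ddm(drift, n_trials=500, boundary=1.0, noise=1.0,
                 dt=0.001, non_decision=0.3, seed=0):
    """Simulate first-passage times of a symmetric two-boundary
    drift-diffusion process.  Returns response times in seconds and a
    boolean array marking trials that hit the upper (correct) boundary."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        # Accumulate noisy evidence until either boundary is crossed.
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)   # add non-decision time (encoding, motor)
        correct.append(x >= boundary)
    return np.array(rts), np.array(correct)

# Illustrative assumption: stronger attentional guidance (e.g. a
# target-match trial) corresponds to a higher drift rate, predicting
# faster and more accurate responses than weaker guidance.
rt_fast, acc_fast = simulate_ddm(drift=2.0)
rt_slow, acc_slow = simulate_ddm(drift=0.8)
```

Because the two hypotheses predict different single-trial dynamics, comparing simulated RT distributions (not just means) against observed data is what allows the MIT and SIT accounts to be discriminated.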


2015, Vol 74 (1), pp. 55-60
Author(s): Alexandre Coutté, Gérard Olivier, Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target’s spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) the visual target whose location was congruent with the goal of the prepared response was found faster; (2) the visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects had a cumulative influence on visual search performance; (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the speed of response execution. These results are discussed within the general framework of structural coupling between perception and motor planning.


2008, Vol 67 (2), pp. 71-83
Author(s): Yolanda A. Métrailler, Ester Reijnen, Cornelia Kneser, Klaus Opwis

This study compared individuals with pairs in a scientific problem-solving task. Participants interacted with a virtual psychological laboratory called Virtue to reason about a visual search theory. To this end, they created hypotheses, designed experiments, and analyzed and interpreted the results of their experiments in order to discover which of five possible factors affected the visual search process. Before and after their interaction with Virtue, participants took a test measuring theoretical and methodological knowledge. In addition, process data reflecting participants’ experimental activities and verbal data were collected. The results showed a significant but equal increase in knowledge for both groups. We found differences between individuals and pairs in the evaluation of hypotheses in the process data, and in descriptive and explanatory statements in the verbal data. Interacting with Virtue helped all students improve their domain-specific and domain-general psychological knowledge.


Author(s): Angela A. Manginelli, Franziska Geringswald, Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


2000, Vol 15 (2), pp. 286-296
Author(s): Arthur F. Kramer, Paul Atchley

Author(s): Stanislav Dornic, Ragnar Hagdahl, Gote Hanson

1980
Author(s): Robert C. Carter
