Posterior alpha EEG dynamics dissociate visual search template from accessory memory items.

2016 ◽  
Vol 16 (12) ◽  
pp. 761 ◽  
Author(s):  
Ingmar de Vries ◽  
Joram van Driel ◽  
Christian Olivers

2021 ◽
pp. 095679762199666
Author(s):  
Oryah C. Lancry-Dayan ◽  
Matthias Gamer ◽  
Yoni Pertzov

Can you efficiently look for something even without knowing what it looks like? According to theories of visual search, the answer is no: A template of the search target must be maintained in an active state to guide search for potential locations of the target. Here, we tested the need for an active template by assessing a case in which this template is improbable: the search for a familiar face among unfamiliar ones when the identity of the target face is unknown. Because people are familiar with hundreds of faces, an active guiding template seems unlikely in this case. Nevertheless, participants (35 Israelis and 33 Germans) were able to guide their search as long as extrafoveal processing of the target features was possible. These results challenge current theories of visual search by showing that guidance can rely on long-term memory and extrafoveal processing rather than on an active search template.


2021 ◽  
Vol 12 ◽  
Author(s):  
Rebecca E. Rhodes ◽  
Hannah P. Cowley ◽  
Jay G. Huang ◽  
William Gray-Roncal ◽  
Brock A. Wester ◽  
...  

Aerial images are frequently used in geospatial analysis to inform responses to crises and disasters, but they can pose unique challenges for visual search because of their low resolution, degraded color information, and small object sizes. Aerial image analysis is often performed by humans, but machine learning approaches are being developed to complement manual analysis. To date, however, relatively little work has explored how humans perform visual search on these tasks, and understanding this could ultimately help enable human-machine teaming. We designed a set of studies to understand what features of an aerial image make visual search difficult for humans and what strategies humans use when performing these tasks. Across two experiments, we tested human performance on a counting task with a series of aerial images and examined the influence of features such as target size, location, color, clarity, and number of targets on accuracy and search strategies. Both experiments presented trials consisting of an aerial satellite image; participants were asked to find all instances of a search template in the image. Target size was consistently a significant predictor of performance, influencing not only the accuracy of selections but also the order in which participants selected target instances within a trial. Experiment 2 demonstrated that the clarity of the target instance and the match between the color of the search template and the color of the target instance also predicted accuracy; color additionally predicted the order in which instances were selected. These experiments not only establish a benchmark of typical human performance on visual search of aerial images but also identify several features that influence task difficulty for humans. These results have implications for understanding human visual search on real-world tasks and for determining when humans may benefit from automated approaches.


2013 ◽  
Vol 13 (9) ◽  
pp. 699-699
Author(s):  
R. Reeder ◽  
M. Peelen

2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. Participants carried out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or a distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target’s spatial position. The results mainly showed that preparing a manual response influenced visual search: (1) visual targets whose location was congruent with the goal of the prepared response were found faster; (2) visual targets whose location was congruent with the laterality of the response hand were found faster; (3) these effects had a cumulative influence on visual search performance; and (4) the magnitude of the influence of the response goal on visual search was marginally negatively correlated with the speed of response execution. These results are discussed within the general framework of structural coupling between perception and motor planning.


2008 ◽  
Vol 67 (2) ◽  
pp. 71-83 ◽  
Author(s):  
Yolanda A. Métrailler ◽  
Ester Reijnen ◽  
Cornelia Kneser ◽  
Klaus Opwis

This study compared individuals with pairs in a scientific problem-solving task. Participants interacted with a virtual psychological laboratory called Virtue to reason about a visual search theory. To this end, they created hypotheses, designed experiments, and analyzed and interpreted the results of their experiments in order to discover which of five possible factors affected the visual search process. Before and after their interaction with Virtue, participants took a test measuring theoretical and methodological knowledge. In addition, process data reflecting participants’ experimental activities and verbal data were collected. The results showed a significant but equal increase in knowledge for both groups. We found differences between individuals and pairs in the evaluation of hypotheses in the process data, and in descriptive and explanatory statements in the verbal data. Interacting with Virtue helped all students improve their domain-specific and domain-general psychological knowledge.


Author(s):  
Angela A. Manginelli ◽  
Franziska Geringswald ◽  
Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing relies only minimally, if at all, on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial load but not by the color working memory load. A control experiment ruled out the possibility that unspecific attentional demands of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.


2000 ◽  
Vol 15 (2) ◽  
pp. 286-296 ◽  
Author(s):  
Arthur F. Kramer ◽  
Paul Atchley
