Vision ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 28
Author(s):  
W. Joseph MacInnes ◽  
Ómar I. Jóhannesson ◽  
Andrey Chetverikov ◽  
Árni Kristjánsson

We move our eyes roughly three times every second while searching complex scenes, but covert attention helps to guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze-contingent display, with a second task where a similarly sized contingent window is controlled with a mouse, allowing a covert aperture to be controlled independently of overt gaze. Larger apertures improved performance for both the mouse- and gaze-contingent trials, suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse-controlled aperture somewhat independently of gaze position, suggesting that participants attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous results in a change blindness paradigm. Untethering covert and overt attention may therefore have costs or benefits depending on the task demands in each case.
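The contingent-window logic in this paradigm can be sketched in a few lines: display items are revealed only within an aperture centred on the controlling input, which tracks gaze in the gaze-contingent condition and the mouse cursor in the mouse-contingent condition. This is an illustrative sketch, not the authors' code; the radius, names, and update loop are assumptions.

```python
import math

def visible(item_xy, aperture_xy, radius):
    """Return True if a display item falls inside the contingent aperture.

    In a gaze-contingent display, aperture_xy tracks the current fixation;
    in the mouse-contingent condition it tracks the mouse cursor instead,
    letting the aperture move independently of overt gaze.
    """
    dx = item_xy[0] - aperture_xy[0]
    dy = item_xy[1] - aperture_xy[1]
    return math.hypot(dx, dy) <= radius

# Hypothetical frame update: larger radii reveal more items at once,
# consistent with the reported benefit of larger apertures.
items = [(100, 100), (300, 120), (305, 118)]
aperture_center = (300, 120)  # fixation point or mouse position
shown = [it for it in items if visible(it, aperture_center, radius=50)]
```

On each display refresh, the experiment software would recompute `shown` from the latest eye-tracker or mouse sample and mask everything else.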


Author(s):  
Ulrich Engelke ◽  
Andreas Duenser ◽  
Anthony Zeater

Selective attention is an important cognitive resource to account for when designing effective human-machine interaction and cognitive computing systems. Much of our knowledge about attention processing stems from search tasks that are usually framed around Treisman's feature integration theory and Wolfe's Guided Search. However, search performance in these tasks has mainly been investigated using an overt attention paradigm. Covert attention, on the other hand, has hardly been investigated in this context. To gain a more thorough understanding of human attentional processing and especially covert search performance, the authors have experimentally investigated the relationship between overt and covert visual search for targets under a variety of target/distractor combinations. The overt search results presented in this work agree well with the Guided Search studies by Wolfe et al. The authors show that the response times are considerably more influenced by the target/distractor combination than by the attentional search paradigm deployed. While response times are similar between the overt and covert search conditions, the authors found that error rates are considerably higher in covert search. They further show that response times across participants become more strongly correlated as search task complexity increases. The authors discuss their findings and put them into the context of earlier research on visual search.


2020 ◽  
Author(s):  
Han Zhang

Mind-wandering (MW) is ubiquitous and is associated with reduced performance across a wide range of tasks. Recent studies have shown that MW can be related to changes in gaze parameters. In this dissertation, I explored the link between eye movements and MW in three different contexts that involve complex cognitive processing: visual search, scene perception, and reading comprehension. Study 1 examined how MW affects visual search performance, particularly the ability to suppress salient but irrelevant distractors during visual search. Study 2 used a scene encoding task to study how MW affects how eye movements change over time and their relationship with scene content. Study 3 examined how MW affects readers’ ability to detect semantic incongruities in the text and make necessary revisions of their understanding as they read jokes. All three studies showed that MW was associated with decreased task performance at the behavioral level (e.g., response time, recognition, and recall). Eye-tracking further showed that these behavioral costs can be traced to deficits in specific cognitive processes. The final chapter of this dissertation explored whether there are context-independent eye movement features of MW. MW manifests itself in different ways depending on task characteristics. In tasks that require extensive sampling of the stimuli (e.g., reading and scene viewing), MW was related to a global reduction in visual processing. But this was not the case for the search task, which involved speeded, simple visual processing. MW was instead related to increased looking time on the target after it was already located. MW affects the coupling between cognitive efforts and task demands, but the nature of this decoupling depends on the specific features of particular tasks.


2021 ◽  
Vol 15 ◽  
Author(s):  
Jane W. Couperus ◽  
Kirsten O. Lydic ◽  
Juniper E. Hollis ◽  
Jessica L. Roy ◽  
Amy R. Lowe ◽  
...  

The lateralized ERP N2pc component has been shown to be an effective marker of attentional object selection when elicited in a visual search task, specifically reflecting the selection of a target item among distractors. Moreover, when targets are known in advance, the visual search process is guided by representations of target features held in working memory at the time of search, thus guiding attention to objects with target-matching features. Previous studies have shown that manipulating working memory availability via concurrent tasks or within-task manipulations influences visual search performance and the N2pc. Other studies have indicated that visual (non-spatial) vs. spatial working memory manipulations have differential contributions to visual search. To investigate this, the current study assesses participants' visual and spatial working memory ability independently of the visual search task to determine whether such individual differences in working memory affect task performance and the N2pc. Participants (n = 205) completed a visual search task to elicit the N2pc and separate visual working memory (VWM) and spatial working memory (SPWM) assessments. Greater SPWM, but not VWM, ability is correlated with and predicts higher visual search accuracy and greater N2pc amplitudes. Neither VWM nor SPWM was related to N2pc latency. These results provide additional support to prior behavioral and neural visual search findings that spatial WM availability, whether as an ability of the participant's processing system or based on task demands, plays an important role in efficient visual search.


2022 ◽  
Author(s):  
Qi Zhang ◽  
Zhibang Huang ◽  
Liang Li ◽  
Sheng Li

Visual search in a complex environment requires efficient discrimination between target and distractors. Training serves as an effective approach to improve visual search performance when the target does not automatically pop out from the distractors. In the present study, we trained subjects on a conjunction visual search task and examined the training effects in behavior and eye movements across Experiments 1 to 4. The results showed that training improved behavioral performance and reduced the number of saccades and overall scanning time. Training also increased the search initiation time before the first saccade and the proportion of trials in which the subjects correctly identified the target without any saccade, but these effects were modulated by the stimulus parameters. In Experiment 5, we replicated these training effects while eye movements and EEG signals were recorded simultaneously. The results revealed significant N2pc components after the stimulus onset (i.e., stimulus-locked) and before the first saccade (i.e., saccade-locked) when the search target was the trained one. These N2pc components can be considered as the neural signatures for the training-induced boost of covert attention to the trained target. The enhanced covert attention led to a beneficial tradeoff between search initiation time and the number of saccades, as a small increase in search initiation time could yield a larger reduction in scanning time. These findings suggest that enhanced covert attention to the target and optimized overt eye movements are coordinated to facilitate visual search after training.
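The tradeoff in the final sentences can be made concrete with invented numbers: if training adds a little to search initiation but removes several saccades, total search time still drops. All values below are hypothetical, chosen only to illustrate the arithmetic.

```python
SACCADE_COST_MS = 250  # assumed average cost (saccade + fixation) per extra saccade

def total_search_time(initiation_ms, n_saccades, saccade_cost_ms=SACCADE_COST_MS):
    """Total time = initiation (covert analysis before the first saccade)
    plus scanning time accumulated over the remaining saccades."""
    return initiation_ms + n_saccades * saccade_cost_ms

pre_training  = total_search_time(initiation_ms=200, n_saccades=6)  # 1700 ms
post_training = total_search_time(initiation_ms=300, n_saccades=3)  # 1050 ms
# Initiation rose by 100 ms, but scanning fell by 750 ms: a net benefit.
```

The point is only that the exchange rate matters: trading a small covert-processing cost for a larger reduction in overt scanning is exactly the beneficial tradeoff the abstract describes.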


2015 ◽  
Vol 74 (1) ◽  
pp. 55-60 ◽  
Author(s):  
Alexandre Coutté ◽  
Gérard Olivier ◽  
Sylvane Faure

Computer use generally requires manual interaction with human-computer interfaces. In this experiment, we studied the influence of manual response preparation on co-occurring shifts of attention to information on a computer screen. The participants were to carry out a visual search task on a computer screen while simultaneously preparing to reach for either a proximal or distal switch on a horizontal device, with either their right or left hand. The response properties were not predictive of the target’s spatial position. The results mainly showed that the preparation of a manual response influenced visual search: (1) The visual target whose location was congruent with the goal of the prepared response was found faster; (2) the visual target whose location was congruent with the laterality of the response hand was found faster; (3) these effects have a cumulative influence on visual search performance; (4) the magnitude of the influence of the response goal on visual search is marginally negatively correlated with the rapidity of response execution. These results are discussed in the general framework of structural coupling between perception and motor planning.


2001 ◽  
Author(s):  
Jason S. McCarley ◽  
Matthew S. Peterson ◽  
Arthur F. Kramer ◽  
Ranxiao Frances Wang ◽  
David E. Irwin

Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 228
Author(s):  
Sze-Ying Lam ◽  
Alexandre Zénon

Previous investigations concluded that the human brain’s information processing rate remains fundamentally constant, irrespective of task demands. However, their conclusion rested on analyses of simple discrete-choice tasks. The present contribution recasts the question of human information rate within the context of visuomotor tasks, which provides a more ecologically relevant arena, albeit a more complex one. We argue that, while predictable aspects of inputs can be encoded virtually free of charge, real-time information transfer should be identified with the processing of surprises. We formalise this intuition by deriving from first principles a decomposition of the total information shared by inputs and outputs into a feedforward, predictive component and a feedback, error-correcting component. We find that the information measured by the feedback component, a proxy for the brain’s information processing rate, scales with the difficulty of the task at hand, in agreement with cost-benefit models of cognitive effort.
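The decomposition described here can be written schematically with the chain rule of mutual information. The notation below is illustrative rather than the authors' exact formalism: writing $U$ for inputs, $Y$ for outputs, and $\hat{U}$ for the internally predicted part of the input (assumed to be a function of the input history, so that conditioning on it loses no shared information):

```latex
% Chain-rule decomposition (illustrative notation):
% total shared information = predictive (feedforward) part
%                          + error-correcting (feedback) part
I(U;Y) \;=\; \underbrace{I(\hat{U};Y)}_{\text{feedforward, predictive}}
\;+\; \underbrace{I(U;Y \mid \hat{U})}_{\text{feedback, error-correcting}}
```

Under this reading, the first term is the "virtually free" encoding of predictable structure, and the second term — the information carried by surprises given the prediction — is the quantity the authors find scales with task difficulty.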


Author(s):  
Gwendolyn Rehrig ◽  
Reese A. Cullimore ◽  
John M. Henderson ◽  
Fernanda Ferreira

According to the Gricean Maxim of Quantity, speakers provide the amount of information listeners require to correctly interpret an utterance, and no more (Grice in Logic and conversation, 1975). However, speakers do tend to violate the Maxim of Quantity often, especially when the redundant information improves reference precision (Degen et al. in Psychol Rev 127(4):591–621, 2020). Redundant (non-contrastive) information may facilitate real-world search if it narrows the spatial scope under consideration, or improves target template specificity. The current study investigated whether non-contrastive modifiers that improve reference precision facilitate visual search in real-world scenes. In two visual search experiments, we compared search performance when perceptually relevant, but non-contrastive modifiers were included in the search instruction. Participants (N = 48 in each of Experiments 1 and 2) searched for a unique target object following a search instruction that contained either no modifier, a location modifier (Experiment 1: on the top left, Experiment 2: on the shelf), or a color modifier (the black lamp). In Experiment 1 only, the target was located faster when the verbal instruction included either modifier, and there was an overall benefit of color modifiers in a combined analysis for scenes and conditions common to both experiments. The results suggest that violations of the Maxim of Quantity can facilitate search when the violations include task-relevant information that either augments the target template or constrains the search space, and when at least one modifier provides a highly reliable cue. Consistent with Degen et al. (2020), we conclude that listeners benefit from non-contrastive information that improves reference precision, and engage in rational reference comprehension.
Significance statement This study investigated whether providing more information than someone needs to find an object in a photograph helps them to find that object more easily, even though it means they need to interpret a more complicated sentence. Before searching a scene, participants were either given information about where the object would be located in the scene, what color the object was, or were only told what object to search for. The results showed that providing additional information helped participants locate an object in an image more easily only when at least one piece of information communicated what part of the scene the object was in, which suggests that more information can be beneficial as long as that information is specific and helps the recipient achieve a goal. We conclude that people will pay attention to redundant information when it supports their task. In practice, our results suggest that instructions in other contexts (e.g., real-world navigation, using a smartphone app, prescription instructions, etc.) can benefit from the inclusion of what appears to be redundant information.

