Greater discrimination difficulty during perceptual learning leads to stronger and more distinct representations

2019
Author(s): Vencislav Popov, Lynne Reder

Despite the conventional wisdom that it is more difficult to find a target among similar distractors, this study demonstrates that the advantage of searching for a target among dissimilar distractors is short-lived, and that high target-to-distractor (TD) similarity during visual search training can have beneficial effects for learning. Participants with no prior knowledge of Chinese performed twelve hour-long sessions over 4 weeks, in which they had to find a briefly presented target character among a set of distractors. At the beginning of the experiment, high TD similarity hurt performance, but the effect reversed during the first session and remained positive throughout the remaining sessions. This effect was due primarily to reduced false alarms on trials in which the target was absent from the search display. In addition, making an error on a trial with a specific character was associated with slower visual search RTs on the subsequent repetition of that character, suggesting that participants paid more attention to encoding the characters after false alarms. Finally, the benefit of high TD similarity during visual search training transferred to a subsequent n-back working memory task. These results suggest that greater discrimination difficulty likely induces stronger and more distinct representations of each character.

Author(s): David Soto, Glyn W. Humphreys

Recent research has shown that the contents of working memory (WM) can guide the early deployment of attention in visual search. Here, we assessed whether this guidance occurred for all attributes of items held in WM, or whether effects are based on just the attributes relevant for the memory task. We asked observers to hold in memory just the shape of a coloured object and to subsequently search for a target line amongst distractor lines, each embedded within a different object. On some trials, one of the objects in the search display could match the shape, the colour or both dimensions of the cue, but this object never contained the relevant target line. Relative to a neutral baseline, where there was no match between the memory and the search displays, search performance was impaired when a distractor object matched both the colour and the shape of the memory cue. The implications for the understanding of the interaction between WM and selection are discussed.


2019 · Vol 31 (7) · pp. 1079-1090
Author(s): Peter S. Whitehead, Mathilde M. Ooi, Tobias Egner, Marty G. Woldorff

The contents of working memory (WM) guide visual attention toward matching features, with visual search being faster when the target and a feature of an item held in WM spatially overlap (validly cued) than when they occur at different locations (invalidly cued). Recent behavioral studies have indicated that attentional capture by WM content can be modulated by cognitive control: When WM cues are reliably helpful to visual search (predictably valid), capture is enhanced, but when reliably detrimental (predictably invalid), capture is attenuated. The neural mechanisms underlying this effect are not well understood, however. Here, we leveraged the high temporal resolution of ERPs time-locked to the onset of the search display to determine how and at what processing stage cognitive control modulates the search process. We manipulated predictability by grouping trials into unpredictable (50% valid/invalid) and predictable (100% valid, 100% invalid) blocks. Behavioral results confirmed that predictability modulated WM-related capture. Comparison of ERPs to the search arrays showed that the N2pc, a posteriorly distributed signature of initial attentional orienting toward a lateralized target, was not impacted by target validity predictability. However, a longer latency, more anterior, lateralized effect—here, termed the “contralateral attention-related negativity”—was reduced under predictable conditions. This reduction interacted with validity, with substantially greater reduction for invalid than valid trials. These data suggest cognitive control over attentional capture by WM content does not affect the initial attentional-orienting process but can reduce the need to marshal later control mechanisms for processing relevant items in the visual world.


2020
Author(s): Anna Lena Biel, Tamas Minarik, Paul Sauseng

Visual perception is influenced by our expectancies about incoming sensory information. It is assumed that mental templates of expected sensory input are created and compared to actual input, which can be matching or not. When such mental templates are held in working memory, cross-frequency phase synchronization (CFS) between theta and gamma band activity has been proposed to serve matching processes between prediction and sensation. We investigated how this is affected by the number of activated templates that could be matched by comparing conditions where participants had to keep either one or multiple templates in mind for successful visual search. We found that memory matching appeared as transient CFS between EEG theta and gamma activity in an early time window around 150 ms after search display presentation, in right-hemispheric parietal cortex. Our results suggest that for single-template conditions, stronger transient theta-gamma CFS at posterior sites contralateral to target presentation can be observed than for multiple templates. This lends evidence to the idea of sequential attentional templates and is understood in line with previous theoretical accounts strongly arguing for transient synchronization between posterior theta and gamma phase as a neuronal correlate of matching incoming sensory information with contents from working memory.


PLoS ONE · 2022 · Vol 17 (1) · pp. e0261882
Author(s): Tamara S. Satmarean, Elizabeth Milne, Richard Rowe

Aggression and trait anger have been linked to attentional biases toward angry faces and attribution of hostile intent in ambiguous social situations. Memory and emotion play a crucial role in social-cognitive models of aggression, but their mechanisms of influence are not fully understood. Combining a memory task and a visual search task, this study investigated the guidance of attention allocation toward naturalistic face targets during visual search by visual working memory (WM) templates in 113 participants who self-reported having served a custodial sentence. Searches were faster when angry faces were held in working memory, regardless of the emotional valence of the visual search target. Higher aggression and trait anger predicted an increased working-memory-modulated attentional bias. These results are consistent with the Social Information Processing model, demonstrating that internal representations bias attention allocation to threat and that this bias is linked to aggression and trait anger.


2018 · Vol 71 (10) · pp. 2235-2248
Author(s): Alexandra Trani, Paul Verhaeghen

We investigated pupil dilation in 96 subjects during task preparation and during a post-trial interval in a visual search task and an auditory working memory task. Completely informative difficulty cues (easy, medium, or hard) were presented right before task preparation to examine whether pupil dilation indicated advance mobilisation of attentional resources; functional magnetic resonance imaging (fMRI) studies have argued for the existence of such task preparation, and the literature shows that pupil dilation tracks attentional effort during task performance. We found, however, little evidence for such task preparation. In the working memory task, pupil size was identical across cues, and although pupil dilation in the visual search task tracked the cue, pupil dilation predicted subsequent performance in neither task. Pupil dilation patterns in the post-trial interval were more consistent with an effect of emotional reactivity. Our findings suggest that the mobilisation of attentional resources in the service of the task does not occur during the preparatory interval, but is delayed until the task itself is initiated.


2018 · Vol 110 (2) · pp. 381-399
Author(s): Efsun Annac, Xuelian Zang, Hermann J. Müller, Thomas Geyer

2020 · Vol 10 (1)
Author(s): Thomas Geyer, Pardis Rostami, Lisa Sogerer, Bernhard Schlagbauer, Hermann J. Müller

Visual search is facilitated when observers encounter targets in repeated display arrangements. This ‘contextual-cueing’ (CC) effect is attributed to incidental learning of spatial distractor-target relations. Prior work has typically used only one recognition measure (administered after the search task) to establish whether CC is based on implicit or explicit memory of repeated displays, with the outcome depending on the diagnostic accuracy of the test. The present study compared two explicit memory tests to tackle this issue: yes/no recognition of a given search display as repeated versus generation of the quadrant in which the target (which was replaced by a distractor) had been located during the search task, thus closely matching the processes involved in performing the search. While repeated displays elicited a CC effect in the search task, both tests revealed above-chance knowledge of repeated displays, though explicit-memory accuracy and its correlation with contextual facilitation in the search task were more pronounced for the generation task. These findings argue in favor of a one-system, explicit-memory account of CC. Further, they demonstrate the superiority of the generation task for revealing the explicitness of CC, likely because both the search and the memory task involve overlapping processes (in line with ‘transfer-appropriate processing’).


Author(s): Svyatoslav Guznov, Gerald Matthews, Joel S. Warm, Marc Pfahler

Objective: The goal of this study was to evaluate several visual search training techniques in an unmanned aerial vehicle (UAV) simulated task environment. Background: Operators controlling remote unmanned vehicles often must perform complex visual search tasks (e.g., target search). These tasks may pose substantial demands on the operator due to various environmental factors. Visual search training may reduce errors and mitigate stress, but the most effective form of training has not been determined. Methods: Participants were assigned to one of four training conditions: target, cue, visual scanning, or control. After the training, the effectiveness of the training techniques was tested during a 30-minute simulated UAV flight. A secondary task manipulation was included to further simulate the demands of a realistic UAV control and target search task. Subjective stress and fatigue were also assessed. Results: Target training produced superior target search performance, with more hits and fewer false alarms (FAs), compared to the control condition. Visual scanning and cue training were moderately effective. Only target training performance was vulnerable to the secondary task load. The task was stressful, but training did not mitigate the stress response. Conclusion: Training participants on target and cue appearance, as well as on active scanning of the visual field, is promising for promoting effective target search in this simulated UAV environment. Application: These training techniques could be used in preparation for intelligence, surveillance, and reconnaissance (ISR) missions that involve target search, especially where a change in target appearance is likely.


2019 · Vol 12 (3) · pp. 119-134
Author(s): K.S. Kozlov, E.S. Gorbunova

Subsequent search misses (SSM) can occur during visual search for several targets: SSM is a decrease in accuracy at finding a second target after the successful detection of a first one. Two experiments investigated the effects on SSM of object working memory load, target stimuli similarity, and the similarity of stimuli between the visual search task and the working memory task. Target perceptual similarity proved significant, as did memory load in the case of similarity between working memory and visual search stimuli. In addition, we found a significant interaction between working memory load and the number of features shared between the two target stimuli, which may indicate a common mechanism underlying the roles of working memory load and perceptual similarity.


Author(s): Angela A. Manginelli, Franziska Geringswald, Stefan Pollmann

When distractor configurations are repeated over time, visual search becomes more efficient, even if participants are unaware of the repetition. This contextual cueing is a form of incidental, implicit learning. One might therefore expect that contextual cueing does not (or only minimally) rely on working memory resources. This, however, is debated in the literature. We investigated contextual cueing under either a visuospatial or a nonspatial (color) visual working memory load. We found that contextual cueing was disrupted by the concurrent visuospatial, but not by the color working memory load. A control experiment ruled out that unspecific attentional factors of the dual-task situation disrupted contextual cueing. Visuospatial working memory may be needed to match current display items with long-term memory traces of previously learned displays.

