Neural mechanisms underlying expectation-dependent inhibition of distracting information

eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Dirk van Moorselaar ◽  
Eline Lampers ◽  
Elisa Cordesius ◽  
Heleen A. Slagter

Predictions based on learned statistical regularities in the visual world have been shown to facilitate attention and goal-directed behavior by sharpening the sensory representation of goal-relevant stimuli in advance. Yet, how the brain learns to ignore predictable goal-irrelevant or distracting information is unclear. Here, we used EEG and a visual search task in which the predictability of a distractor’s location and/or spatial frequency was manipulated to determine how spatial and feature distractor expectations are neurally implemented and reduce distractor interference. We find that expected distractor features could not only be decoded pre-stimulus, but their representation differed from the representation of that same feature when part of the target. Spatial distractor expectations did not induce changes in preparatory neural activity, but a strongly reduced Pd, an ERP index of inhibition. These results demonstrate that neural effects of statistical learning critically depend on the task relevance and dimension (spatial, feature) of predictions.
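As a rough illustration of the pre-stimulus decoding reported here, the sketch below runs a time-resolved, cross-validated classifier on synthetic epoched EEG. All names, shapes, and parameters are hypothetical placeholders, not the authors' pipeline; with real data, above-chance accuracy at time points before stimulus onset would indicate a pre-stimulus representation of the expected distractor feature.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical epoched EEG: (n_trials, n_channels, n_timepoints),
# with one expected-distractor-feature label per trial
# (e.g., low vs. high spatial frequency).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((200, 64, 100))   # placeholder data
labels = rng.integers(0, 2, size=200)

# Decode the feature separately at each time point; in real data,
# above-chance accuracy before t = 0 implies a pre-stimulus representation.
accuracy = np.empty(eeg.shape[2])
for t in range(eeg.shape[2]):
    X = eeg[:, :, t]                        # channels as features
    accuracy[t] = cross_val_score(
        LinearDiscriminantAnalysis(), X, labels, cv=5
    ).mean()

print("peak decoding accuracy:", accuracy.max())
```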


Author(s):  
Changrun Huang ◽  
Ana Vilotijević ◽  
Jan Theeuwes ◽  
Mieke Donk

Irrelevant salient objects may capture our attention and interfere with visual search. Recently, it was shown that distraction by a salient object is reduced when it is presented more frequently at one location than at other locations. The present study investigates whether this reduced distractor interference is the result of proactive spatial suppression, implemented prior to display onset, or of reactive suppression, occurring after attention has been directed to that location. Participants were asked to search for a shape singleton in the presence of an irrelevant salient color singleton, which was presented more often at one location (the high-probability location) than at all other locations (the low-probability locations). On some trials, instead of the search task, participants performed a probe task in which they had to detect the offset of a probe dot. The results of the search task replicated previous findings, showing reduced distractor interference on trials in which the salient distractor was presented at the high-probability location as compared with the low-probability locations. The probe task showed that reaction times were longer for probes presented at the high-probability location than at the low-probability locations. These results indicate that, through statistical learning, the location likely to contain a distractor is suppressed proactively (i.e., prior to display onset). This suggests that statistical learning modulates the first feed-forward sweep of information processing by deprioritizing locations that are likely to contain a distractor in the spatial priority map.
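A minimal sketch can make the proposed mechanism concrete: a spatial priority map in which the learned high-probability distractor location carries a reduced weight before the display even appears. The number of locations, the weights, and the 0.6 suppression factor below are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

N_LOCATIONS = 8
HIGH_PROB_LOC = 3          # location that frequently contains the distractor

# Baseline priority map: equal weight everywhere, then a learned,
# proactive down-weighting of the high-probability distractor location
# (the 0.6 factor is an illustrative assumption).
priority = np.ones(N_LOCATIONS)
priority[HIGH_PROB_LOC] *= 0.6

def effective_priority(salience_by_location):
    """Combine bottom-up salience with the learned spatial weights."""
    return salience_by_location * priority

# A salient distractor (raw salience 2.0) among uniform non-targets (1.0):
display = np.ones(N_LOCATIONS)
display[HIGH_PROB_LOC] = 2.0
print(effective_priority(display))
# At the suppressed location the distractor's effective priority (1.2)
# is close to the non-targets' (1.0), so it competes less for attention;
# a probe shown there inherits the same down-weighting, consistent with
# the slower probe detection reported above.
```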


2017 ◽  
Author(s):  
Johannes J. Fahrenfort ◽  
Jonathan Van Leeuwen ◽  
Joshua J. Foster ◽  
Edward Awh ◽  
Christian N.L. Olivers

Working memory is the function by which we temporarily maintain information to achieve current task goals. Models of working memory typically debate where this information is stored, rather than how it is stored. Here we ask instead what neural mechanisms are involved in storage, and how these mechanisms change as a function of task goals. Participants either had to reproduce the orientation of a memorized bar (continuous recall task) or identify the memorized bar in a search array (visual search task). The sensory input and retention interval were identical in both tasks. Next, we used decoding and forward modeling on multivariate electroencephalogram (EEG) and time-frequency-decomposed EEG to investigate which neural signals carry more informational content during the retention interval. In the continuous recall task, working memory content was preferentially carried by induced oscillatory alpha-band power, while in the visual search task it was more strongly carried by the distribution of evoked (consistently elevated and non-oscillatory) EEG activity. To show the independence of these two signals, we were able to remove informational content from one signal without affecting informational content in the other. Finally, we show that the tuning characteristics of both signals change in opposite directions depending on the current task goal. We propose that these signals reflect oscillatory and elevated firing-rate mechanisms that respectively support location-based and object-based maintenance. Together, these data challenge current models of working memory that place storage in particular regions, and instead emphasize the importance of different distributed maintenance signals depending on task goals.

Significance statement: Without realizing it, we are constantly moving things in and out of our mind's eye, an ability also referred to as 'working memory'. Where did I put my screwdriver? Do we still have milk in the fridge? A central question in working memory research is how the brain maintains this information temporarily. Here we show that different neural mechanisms are involved in working memory depending on what the memory is used for. For example, remembering what a bottle of milk looks like invokes a different neural mechanism from remembering how much milk it contains: the first is primarily involved in being able to find the object, whereas the second involves spatial position, such as the milk level in the bottle.
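The induced-versus-evoked distinction the study relies on can be made concrete with a standard decomposition. Below is a minimal sketch on synthetic single-channel data; the 10 Hz simulation, the 8-12 Hz alpha band, the sampling rate, and all array shapes are assumptions for illustration, not the authors' analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250                                        # sampling rate (Hz), assumed
rng = np.random.default_rng(1)
t = np.arange(0, 1, 1 / FS)

# Synthetic epochs (n_trials, n_timepoints): a 10 Hz rhythm whose phase
# varies across trials (induced, non-phase-locked) plus white noise.
phase = rng.uniform(0, 2 * np.pi, size=(100, 1))
trials = np.sin(2 * np.pi * 10 * t + phase) + 0.5 * rng.standard_normal((100, t.size))

def alpha_power(x):
    """Band-pass 8-12 Hz, then instantaneous power via the Hilbert envelope."""
    b, a = butter(4, [8, 12], btype="band", fs=FS)
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)) ** 2

evoked = alpha_power(trials.mean(axis=0))       # power of the trial average:
                                                # only phase-locked activity survives
total = alpha_power(trials).mean(axis=0)        # mean single-trial power
induced = total - evoked                        # non-phase-locked (induced) part

print("mean evoked alpha power:", evoked.mean())    # ~0: random phases cancel
print("mean induced alpha power:", induced.mean())  # large: oscillation persists
```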


2017 ◽  
Vol 117 (1) ◽  
pp. 348-364 ◽  
Author(s):  
Sumitash Jana ◽  
Atul Gopal ◽  
Aditya Murthy

Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift-diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability, and not the means, of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures.

NEW & NOTEWORTHY: Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements.
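A small simulation can illustrate both the common-accumulator architecture favored in the search task and the diffusion-model prediction that RT variability scales with the mean. All parameter values below are illustrative assumptions, not the paper's fits.

```python
import numpy as np

rng = np.random.default_rng(2)

def common_accumulator_rts(n_trials, drift=2.0, noise=1.0, threshold=1.0,
                           dt=0.001, eye_delay=(0.05, 0.01),
                           hand_delay=(0.12, 0.02)):
    """A single stochastic accumulator rises to threshold (shared decision
    stage); each effector then adds its own noisy delay (mean, sd)."""
    eye_rt = np.empty(n_trials)
    hand_rt = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:                      # shared evidence accumulation
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        eye_rt[i] = t + rng.normal(*eye_delay)    # effector-dependent delays
        hand_rt[i] = t + rng.normal(*hand_delay)
    return eye_rt, hand_rt

eye, hand = common_accumulator_rts(500)
# The shared decision stage yields strongly correlated eye and hand RTs,
# and the skewed first-passage times give the mean-variability scaling
# that the study uses to distinguish the two architectures.
print("eye-hand RT correlation:", round(np.corrcoef(eye, hand)[0, 1], 2))
print("eye RT mean / SD:", round(eye.mean(), 3), "/", round(eye.std(), 3))
```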


2000 ◽  
Vol 84 (3) ◽  
pp. 1692-1696 ◽  
Author(s):  
Ryohei P. Hasegawa ◽  
Madoka Matsumoto ◽  
Akichika Mikami

To explore a visual scene, the brain must detect an object of interest and direct the eyes to it. To investigate the brain's mechanism of saccade target selection, we trained monkeys to perform a visual search task with a response delay and recorded neuronal activity in the prefrontal (PF) cortex. Even though the monkey was not allowed to express its choice until after the delay, the responses of a class of PF neurons differentiated between target and distractors from the very beginning of their response (135 ms). Strong responses were obtained only when the target was presented in the field. Neurons responded much less during a nonsearch task in which the saccade target was presented alone in the response field. These results suggest that the PF cortex may be involved in the decision-making process and in focal attention for saccade target selection.


2019 ◽  
Author(s):  
Michel Failing ◽  
Tobias Feldmann-Wüstefeld ◽  
Benchi Wang ◽  
Christian Nicolas Leon Olivers ◽  
Jan Theeuwes

We are constantly extracting regularities from the visual environment to optimize attentional orienting. Here we examine the phenomenon that recurrent presentation of distractors in a specific location leads to its attentional suppression. Specifically, we address the question whether suppression is specific to the spatial regularities of distractors or also extends to visual features bearing statistical regularities. To that end, we used a visual search task with two high-probability locations, each showing one of two distractor types more often than the other. At these high-probability locations, target processing was impaired and attentional capture by either distractor was reduced, consistent with feature-unspecific spatial suppression. However, suppression was stronger when a distractor was presented at the high-probability location that matched its features, suggesting feature-specific suppression. Interestingly, feature-unspecific spatial suppression only spread between locations when distractors varied within a feature dimension (e.g., red and green) but not when they varied across feature dimensions (e.g., red and square). Our findings thus demonstrate a joint influence of implicitly learned spatial and feature regularities on attention and reveal how the visual system can benefit from complex statistical regularities.


2001 ◽  
Vol 24 (4) ◽  
pp. 602-607 ◽  
Author(s):  
Horace Barlow

Statistical regularities of the environment are important for learning, memory, intelligence, inductive inference, and in fact, for any area of cognitive science where an information-processing brain promotes survival by exploiting them. This has been recognised by many of those interested in cognitive function, starting with Helmholtz, Mach, and Pearson, and continuing through Craik, Tolman, Attneave, and Brunswik. In the current era, many of us have begun to show how neural mechanisms exploit the regular statistical properties of natural images. Shepard proposed that the apparent trajectory of an object when seen successively at two positions results from internalising the rules of kinematic geometry, and although kinematic geometry is not statistical in nature, this is clearly a related idea. Here it is argued that Shepard's term, “internalisation,” is insufficient because it is also necessary to derive an advantage from the process. Having mechanisms selectively sensitive to the spatio-temporal patterns of excitation commonly experienced when viewing moving objects would facilitate the detection, interpolation, and extrapolation of such motions, and might explain the twisting motions that are experienced. Although Shepard's explanation in terms of Chasles' rule seems doubtful, his theory and experiments illustrate that local twisting motions are needed for the analysis of moving objects and provoke thoughts about how they might be detected.

