Localizing the Neural Substrate of Reflexive Covert Orienting

2012
Vol 6 (1)
Author(s):  
Valerie Higenell ◽  
Brian J. White ◽  
Joshua R. Hwang ◽  
Douglas P. Munoz

The capture of covert spatial attention by salient visual events influences subsequent gaze behavior. A task-irrelevant stimulus (cue) can reduce (attention capture, AC) or prolong (inhibition of return, IOR) saccade reaction time to a subsequent target stimulus, depending on the cue-target delay. Here we investigated the mechanisms that underlie the sensory-based account of AC/IOR by manipulating the visual processing stage at which the cue and target interact. In Experiment 1, liquid crystal shutter goggles were used to test whether AC/IOR occur at a monocular versus binocular processing stage (before versus after signals from both eyes converge). In Experiment 2, we tested whether orientation-selective visual mechanisms are critical for AC/IOR by using oriented "Gabor" stimuli. We found that the magnitude of AC and IOR did not differ between monocular and interocular viewing conditions, or between iso- and ortho-oriented cue-target interactions. The results suggest that the visual mechanisms that contribute to AC/IOR arise at an orientation-independent binocular processing stage.
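For readers unfamiliar with the convention, AC and IOR are typically quantified as the difference between saccade reaction times to uncued and cued targets at each cue-target delay. The following Python sketch illustrates that arithmetic; the condition names and millisecond values are illustrative assumptions, not data from the study.

```python
# Hypothetical mean saccadic reaction times (ms); the keys and values
# below are illustrative assumptions, not measurements from the study.
rt = {
    ("cued",   "short_ctoa"): 215.0,  # cue and target at same location, short delay
    ("uncued", "short_ctoa"): 235.0,
    ("cued",   "long_ctoa"):  250.0,  # same location, long delay
    ("uncued", "long_ctoa"):  230.0,
}

# Cueing effect = RT(uncued) - RT(cued) at a given cue-target onset asynchrony.
# Positive values indicate attention capture (AC); negative values indicate
# inhibition of return (IOR).
ac = rt[("uncued", "short_ctoa")] - rt[("cued", "short_ctoa")]
ior = rt[("uncued", "long_ctoa")] - rt[("cued", "long_ctoa")]

print(f"AC magnitude:  {ac:+.0f} ms")   # +20 ms -> capture at short delays
print(f"IOR magnitude: {ior:+.0f} ms")  # -20 ms -> inhibition at long delays
```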

1997
Vol 8 (2)
pp. 95-100
Author(s):  
Kimron Shapiro ◽  
Jon Driver ◽  
Robert Ward ◽  
Robyn E. Sorensen

When people must detect several targets in a very rapid stream of successive visual events at the same location, detection of an initial target induces misses for subsequent targets within a brief period. This attentional blink may serve to prevent interruption of ongoing target processing by temporarily suppressing vision for subsequent stimuli. We examined the level at which the attentional blink operates: specifically, whether it prevents early visual processing or instead prevents quite substantial processing from reaching awareness. Our data support the latter view. We observed priming from missed letter targets, which benefited detection of a subsequent target with the same identity but a different case. In a second study, we observed semantic priming from word targets that were missed during the blink. These results demonstrate that attentional gating within the blink operates only after substantial stimulus processing has already taken place. The results are discussed in terms of two forms of visual representation, namely types and tokens.


2021
pp. 174702182199003
Author(s):  
Andy J Kim ◽  
David S Lee ◽  
Brian A Anderson

Previously reward-associated stimuli have consistently been shown to involuntarily capture attention in the visual domain. Although previously reward-associated but currently task-irrelevant sounds have also been shown to interfere with visual processing, it remains unclear whether such stimuli can interfere with the processing of task-relevant auditory information. To address this question, we modified a dichotic listening task to measure interference from task-irrelevant but previously reward-associated sounds. In a training phase, participants were simultaneously presented with a spoken letter and number in different auditory streams and learned to associate the correct identification of each of three letters with high, low, and no monetary reward, respectively. In a subsequent test phase, participants were again presented with the same auditory stimuli but were instead instructed to report the number while ignoring spoken letters. In both the training and test phases, response time measures demonstrated that attention was biased in favour of the auditory stimulus associated with high value. Our findings demonstrate that attention can be biased towards learned reward cues in the auditory domain, interfering with goal-directed auditory processing.


2012
Vol 24 (10)
pp. 2043-2056
Author(s):  
Ayano Matsushima ◽  
Masaki Tanaka

Resistance to distraction is a key component of executive functions and is strongly linked to the prefrontal cortex. Recent evidence suggests that neural mechanisms exist for the selective suppression of task-irrelevant information. However, neuronal signals related to selective suppression have not yet been identified, whereas nonselective surround suppression, which results from attentional enhancement of relevant stimuli, has been well documented. This study examined single-neuron activity in the lateral PFC while monkeys covertly tracked one of several randomly moving objects. Although many neurons responded to the target, we also found a group of neurons that exhibited a selective response to a distractor that was visually identical to the target. Because most neurons were insensitive to an additional distractor that explicitly differed in color from the target, the brain seemed to monitor the distractor only when necessary to maintain internal object segregation. Our results suggest that the lateral PFC might provide at least two top-down signals during covert object tracking: one for enhancement of visual processing for the target and the other for selective suppression of visual processing for the distractor. These signals might work together to discriminate objects, thereby regulating both the sensitivity and specificity of target choice during covert object tracking.


2019
Vol 14 (7)
pp. 727-735
Author(s):  
Annett Schirmer ◽  
Maria Wijaya ◽  
Esther Wu ◽  
Trevor B Penney

This pre-registered event-related potential study explored how vocal emotions shape visual perception as a function of attention and listener sex. Visual task displays occurred in silence or with a neutral or an angry voice. Voices were task-irrelevant in a single-task block, but had to be categorized by speaker sex in a dual-task block. In the single task, angry voices increased the occipital N2 component relative to neutral voices in women, but not men. In the dual task, angry voices relative to neutral voices increased occipital N1 and N2 components, as well as accuracy, in women and marginally decreased accuracy in men. Thus, in women, vocal anger produced a strong, multifaceted visual enhancement comprising attention-dependent and attention-independent processes, whereas in men, it produced a small, behavior-focused visual processing impairment that was strictly attention-dependent. In sum, these data indicate that attention and listener sex critically modulate whether and how vocal emotions shape visual perception.


2011
Vol 11 (11)
pp. 204-204
Author(s):  
K. Wieczorek ◽  
C. Gaspar ◽  
C. Pernet ◽  
G. Rousselet

2019
Author(s):  
Buse M. Urgen ◽  
Huseyin Boyaci

Expectations and prior knowledge strongly affect, and can even shape, our visual perception. Specifically, valid expectations speed up perceptual decisions and determine what we see in a noisy stimulus. Bayesian models have been remarkably successful in capturing the behavioral effects of expectation. On the other hand, several more mechanistic neural models have also been put forward, which will be referred to as "predictive computation models" here. Both Bayesian and predictive computation models treat perception as a probabilistic inference process, and combine prior information with sensory input. Despite the well-established effects of expectation on recognition and decision-making, its effects on low-level visual processing, and the computational mechanisms underlying those effects, remain elusive. Here we investigate how expectations affect early visual processing at the threshold level. Specifically, we measured temporal thresholds (the shortest presentation duration needed to achieve a certain success level) for detecting the spatial location of an intact image, which could be either a house or a face image. Task-irrelevant cues provided prior information, thus forming an expectation, about the category of the upcoming intact image. The validity of the cue was set to 100%, 75%, and 50% in different experimental sessions. In a separate session the cue was neutral and provided no information about the category of the upcoming intact image. Our behavioral results showed that valid expectations do not reduce temporal thresholds; rather, violations of expectation increase the thresholds, specifically when the expectation validity is high. Next, we implemented a recursive Bayesian model in which the prior is first set using the validity of the specific experimental condition, but in subsequent iterations it is updated using the posterior of the previous iteration. Simulations using the model showed that the observed increase of the temporal thresholds in the unexpected trials is not due to a change in the internal parameters of the system (e.g., decision threshold or internal uncertainty). Rather, further processing is required for a successful detection when the expectation and the actual input disagree. These results reveal some surprising behavioral effects of expectation at the threshold level, and show that a simple, parsimonious computational model can successfully predict those effects.
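The recursive Bayesian scheme described above lends itself to a compact simulation. The sketch below is a minimal two-hypothesis illustration, assuming the posterior is recycled as the prior on each iteration until a decision threshold is reached; the parameter values and the stopping rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def iterations_to_threshold(prior_expected, likelihood_expected,
                            stimulus_is_expected, threshold=0.95,
                            max_iter=100):
    """Recursive Bayesian updating over two hypotheses (expected vs. other).

    On each iteration the posterior from the previous step becomes the new
    prior, and evidence for the actually presented category accumulates.
    Returns the number of iterations needed for the posterior on the true
    category to reach `threshold` (a stand-in for the temporal threshold).
    """
    prior = np.array([prior_expected, 1.0 - prior_expected])
    # Likelihood of each sensory sample under the two hypotheses; the
    # presented category is the better-supported one.
    if stimulus_is_expected:
        like, true_idx = np.array([likelihood_expected, 1.0 - likelihood_expected]), 0
    else:
        like, true_idx = np.array([1.0 - likelihood_expected, likelihood_expected]), 1
    for n in range(1, max_iter + 1):
        posterior = prior * like
        posterior /= posterior.sum()
        if posterior[true_idx] >= threshold:
            return n
        prior = posterior  # posterior recycled as the next prior
    return max_iter

# With a highly valid cue (prior 0.75 on the expected category, weak evidence
# per sample), an unexpected stimulus needs more iterations than an expected
# one -- mirroring the elevated temporal thresholds on invalid trials.
print(iterations_to_threshold(0.75, 0.6, stimulus_is_expected=True))   # 5
print(iterations_to_threshold(0.75, 0.6, stimulus_is_expected=False))  # 10
```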


2021
Author(s):
Joshua James Foster

The threat-capture hypothesis posits a threat-detection system that automatically directs visual attention to threat-related stimuli (e.g., angry facial expressions) in the environment. Importantly, this system is theorised to operate preattentively, processing all input across the visual field in parallel, prior to the operation of selective attention. The threat-capture hypothesis generates two predictions. First, because the threat-detection system directs attention to threat automatically, threat stimuli should capture attention when they are task-irrelevant and the observer has no intention to attend to them. Second, because the threat-detection system operates preattentively, threat stimuli should capture attention even when it is engaged elsewhere. This thesis tested these predictions using behavioural measures of attention capture in conjunction with the N2pc, an event-related potential (ERP) index of attentional selection.

Experiment 1 tested the first prediction of the threat-capture hypothesis: that threat stimuli capture attention when they are task-irrelevant. Participants performed a dot-probe task in which pairs of face cues, one angry and one neutral, preceded a lateral target. On some trials, the faces were Fourier phase-scrambled to control for low-level visual properties. Consistent with the threat-capture hypothesis, an N2pc was observed for angry faces, suggesting that they captured attention despite being completely task-irrelevant. Interestingly, this effect remained when faces were Fourier phase-scrambled, suggesting that it is low-level visual properties that drive attention capture by angry faces.

Experiments 2A and 2B tested the second prediction of the threat-capture hypothesis: that threat stimuli capture attention when it is engaged elsewhere. Participants performed a primary task in which they searched a column of letters at fixation for a target letter. The perceptual load of this task was manipulated, yielding high and low perceptual load conditions, to ensure that attentional resources were consumed by the task. Task-irrelevant angry faces interfered with task performance when the perceptual load of the task was high but not when it was low (Experiment 2A). Similarly, angry faces elicited an N2pc, indicating that they captured attention, but only when perceptual load was high and the faces were phase-scrambled (Experiment 2B). These experiments further suggest that low-level visual factors are important in attention capture by angry faces. These results appear to be inconsistent with the threat-capture hypothesis, and suggest that angry faces do not necessarily capture attention when it is engaged elsewhere.
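For context, the N2pc is conventionally computed as the difference between event-related potentials recorded contralateral versus ipsilateral to the stimulus of interest at posterior electrodes, roughly 200-300 ms after stimulus onset. The following Python sketch shows that computation on hypothetical data; the electrode names, sampling rate, trial counts, and random values are assumptions for illustration, not the thesis's recordings.

```python
import numpy as np

# Hypothetical ERP epochs: trials x timepoints, sampled at 500 Hz from
# cue onset. Electrode names and dimensions are illustrative assumptions.
fs = 500
rng = np.random.default_rng(0)
po7 = rng.normal(size=(200, 400))          # left posterior electrode
po8 = rng.normal(size=(200, 400))          # right posterior electrode
angry_side = rng.integers(0, 2, size=200)  # 0 = angry face on left, 1 = right

# For each trial, take the electrode contralateral / ipsilateral to the
# angry face, then average across trials and form the difference wave.
contra = np.where(angry_side[:, None] == 0, po8, po7)
ipsi = np.where(angry_side[:, None] == 0, po7, po8)
diff_wave = contra.mean(axis=0) - ipsi.mean(axis=0)

# N2pc amplitude: mean of the difference wave in the 200-300 ms window.
# A reliable negativity here is taken as evidence of attentional selection.
window = slice(int(0.200 * fs), int(0.300 * fs))
n2pc = diff_wave[window].mean()
print(f"N2pc amplitude: {n2pc:.3f} (arbitrary units)")
```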


2018 ◽  
Author(s):  
L. Caitlin Elmore ◽  
Ari Rosenberg ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

Creating three-dimensional (3D) representations of the world from two-dimensional retinal images is fundamental to many visually guided behaviors, including reaching and grasping. A critical component of this process is determining the 3D orientation of objects. Previous studies have shown that neurons in the caudal intraparietal area (CIP) of the macaque monkey represent 3D planar surface orientation (i.e., slant and tilt). Here we compare the responses of neurons in area V3A (which is implicated in 3D visual processing and precedes CIP in the visual hierarchy) and CIP to 3D-oriented planar surfaces. We then examine whether activity in these areas correlates with perception during a fine slant discrimination task in which monkeys report whether the top of a surface is slanted towards or away from them. Although we find that V3A and CIP neurons show similar sensitivity to planar surface orientation, significant choice-related activity during the slant discrimination task is rare in V3A but prominent in CIP. These results implicate both V3A and CIP in the representation of 3D surface orientation, and suggest a functional dissociation between the areas based on slant-related decision signals.

Significance Statement: Surface orientation perception is fundamental to visually guided behaviors such as reaching, grasping, and navigation. Previous studies implicate the caudal intraparietal area (CIP) in the representation of 3D surface orientation. Here we show that responses to 3D-oriented planar surfaces are similar in CIP and V3A, which precedes CIP in the cortical hierarchy. However, we also find a qualitative distinction between the two areas: only CIP neurons show robust choice-related activity during a fine visual orientation discrimination task.
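Choice-related activity of the kind reported for CIP is commonly quantified as a choice probability: the area under the ROC curve separating a neuron's firing-rate distributions conditioned on the animal's two choices. The abstract does not name the metric, so the Python sketch below should be read as one standard computation on hypothetical data, not as the authors' analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical single-neuron firing rates (spikes/s) grouped by the monkey's
# choice on a fixed stimulus; all values are illustrative assumptions.
rng = np.random.default_rng(1)
rates_toward = rng.normal(22.0, 5.0, size=60)  # trials with "slanted toward" choices
rates_away = rng.normal(18.0, 5.0, size=55)    # trials with "slanted away" choices

# Choice probability: ROC area separating the two choice-conditioned rate
# distributions. 0.5 means no choice-related signal; values above 0.5 mean
# higher rates predict "toward" choices.
rates = np.concatenate([rates_toward, rates_away])
labels = np.concatenate([np.ones_like(rates_toward), np.zeros_like(rates_away)])
cp = roc_auc_score(labels, rates)
print(f"Choice probability: {cp:.2f}")
```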

