Spatial Representations as an Emergent Feature of Perceptual Processing

2010 ◽  
Vol 24 (3) ◽  
pp. 161-172 ◽  
Author(s):  
Edmund Wascher ◽  
C. Beste

Spatial selection of relevant information has been proposed to reflect an emergent feature of stimulus processing within an integrated network of perceptual areas. Stimulus-based and intention-based sources of information might converge in a common stage when spatial maps are generated. This approach appears to be inconsistent with the assumption of distinct mechanisms for stimulus-driven and top-down controlled attention. In two experiments, the common ground of stimulus-driven and intention-based attention was tested by means of event-related potentials (ERPs) in the human EEG. In both experiments, the processing of a single transient was compared to the selection of a physically comparable stimulus among distractors. While single transients evoked a spatially sensitive N1, the extraction of relevant information out of a more complex display was reflected in an N2pc. The high similarity of the spatial portion of these two components (Experiment 1), and the replication of this finding for the vertical axis (Experiment 2) indicate that these two ERP components might both reflect the spatial representation of relevant information as derived from the organization of perceptual maps, just at different points in time.
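
For orientation, a lateralized component such as the N2pc is conventionally quantified as the contralateral-minus-ipsilateral difference wave at posterior electrodes relative to the side of the relevant stimulus. The sketch below illustrates that standard computation; electrode indices, array shapes, and the measurement window are illustrative assumptions, not details taken from this study.

```python
# Illustrative sketch (not from the study): contralateral-minus-ipsilateral
# difference wave, the conventional way to quantify lateralized components
# such as the N2pc at posterior electrode pairs (e.g., PO7/PO8).
import numpy as np

def contra_minus_ipsi(epochs_left, epochs_right, ch_left, ch_right):
    """Return the grand contra-minus-ipsi difference wave.

    epochs_left / epochs_right : arrays (n_trials, n_channels, n_times)
        Trials with the relevant stimulus in the left / right hemifield.
    ch_left / ch_right : int
        Indices of a left- and a right-hemisphere posterior channel.
    """
    # Contralateral = right-hemisphere channel for left-field stimuli, and vice versa.
    contra = np.concatenate([epochs_left[:, ch_right, :],
                             epochs_right[:, ch_left, :]])
    ipsi = np.concatenate([epochs_left[:, ch_left, :],
                           epochs_right[:, ch_right, :]])
    return contra.mean(axis=0) - ipsi.mean(axis=0)

# Usage (illustrative): average the difference wave in a post-stimulus window
# (e.g., 200-300 ms) to obtain an N2pc amplitude estimate.
```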

2006 ◽  
Vol 18 (6) ◽  
pp. 880-888 ◽  
Author(s):  
Markus Conci ◽  
Klaus Gramann ◽  
Hermann J. Müller ◽  
Mark A. Elliott

Illusory figure completion demonstrates the ability of the visual system to integrate information across gaps. Mechanisms that underlie figural emergence support the interpolation of contours and the filling-in of form information [Grossberg, S., & Mingolla, E. Neural dynamics of form perception: Boundary completion, illusory figures and neon colour spreading. Psychological Review, 92, 173–211, 1985]. Although both processes contribute to figure formation, visual search for an illusory target configuration has been shown to be susceptible to interfering form, but not contour, information [Conci, M., Müller, H. J., & Elliott, M. A. The contrasting impact of global and local object attributes on Kanizsa figure detection. Submitted]. Here, the physiological basis of form interference was investigated by recording event-related potentials elicited from contour- and surface-based distracter interactions with detection of a target Kanizsa figure. The results replicated the finding of form interference and revealed selection of the target and successful suppression of the irrelevant distracter to be reflected by amplitude differences in the N2pc component (240–340 msec). In conclusion, the observed component variations reflect processes of target selection on the basis of integrated form information resulting from figural completion processes.


2020 ◽  
Vol 11 ◽  
Author(s):  
Maria Richter ◽  
Mariella Paul ◽  
Barbara Höhle ◽  
Isabell Wartenburger

One of the most important social cognitive skills in humans is the ability to “put oneself in someone else’s shoes,” that is, to take another person’s perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker’s meaning) by the speaker. To successfully decode the speaker’s meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). Here, we further investigated competing accounts about when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts would predict that CG information is considered immediately and would hence not expect to find costs of CG integration. Late integration accounts would predict a rather late and effortful integration of CG information during the parsing process that might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet slower RTs and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials in which CG information had to be considered. Our data therefore support accounts that foresee an early anticipation of referents to be in CG but a rather late and effortful integration if conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing and discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.


2011 ◽  
Vol 106 (6) ◽  
pp. 3216-3229 ◽  
Author(s):  
L. Hu ◽  
M. Liang ◽  
A. Mouraux ◽  
R. G. Wise ◽  
Y. Hu ◽  
...  

Across-trial averaging is a widely used approach to enhance the signal-to-noise ratio (SNR) of event-related potentials (ERPs). However, across-trial variability of ERP latency and amplitude may contain physiologically relevant information that is lost by across-trial averaging. Hence, we aimed to develop a novel method that uses 1) wavelet filtering (WF) to enhance the SNR of ERPs and 2) a multiple linear regression with a dispersion term (MLRd) that takes into account shape distortions to estimate the single-trial latency and amplitude of ERP peaks. Using simulated ERP data sets containing different levels of noise, we provide evidence that, compared with other approaches, the proposed WF+MLRd method yields the most accurate estimate of single-trial ERP features. When applied to a real laser-evoked potential data set, the WF+MLRd approach provides reliable estimation of single-trial latency, amplitude, and morphology of ERPs and thereby allows meaningful correlations to be performed at the single-trial level. We obtained three main findings. First, WF significantly enhances the SNR of single-trial ERPs. Second, MLRd effectively captures and measures the variability in the morphology of single-trial ERPs, thus providing an accurate and unbiased estimate of their peak latency and amplitude. Third, the intensity of pain perception significantly correlates with the single-trial estimates of N2 and P2 amplitude. These results indicate that WF+MLRd can be used to explore the dynamics between different ERP features, behavioral variables, and other neuroimaging measures of brain activity, thus providing new insights into the functional significance of the different processes underlying brain responses to sensory stimuli.
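
As a rough illustration of the two-step logic described above, the sketch below denoises each trial with standard wavelet soft-thresholding and then regresses the denoised trial onto the average ERP and its temporal derivative to obtain single-trial amplitude and latency estimates. This is a simplified stand-in for the WF+MLRd method (the dispersion regressor is omitted), and every name, wavelet, and parameter is an assumption rather than a detail from the paper.

```python
# Simplified sketch inspired by the WF+MLRd idea: wavelet denoising per trial,
# then regression onto the average ERP and its derivative. Not the authors'
# implementation; parameters and the thresholding rule are illustrative.
import numpy as np
import pywt

def wavelet_filter(trial, wavelet="sym4", level=4):
    """Soft-threshold wavelet denoising of one trial (universal threshold)."""
    coeffs = pywt.wavedec(trial, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate (MAD)
    thresh = sigma * np.sqrt(2.0 * np.log(len(trial)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(trial)]

def single_trial_fit(trial, template, fs):
    """Regress one trial onto the ERP template and its temporal derivative.

    The derivative regressor absorbs small latency shifts (first-order Taylor
    expansion): trial ~ a*template(t - tau) ~ a*template - a*tau*template'.
    """
    d_template = np.gradient(template, 1.0 / fs)
    X = np.column_stack([template, d_template])
    beta, *_ = np.linalg.lstsq(X, trial, rcond=None)
    amplitude = beta[0]
    latency_shift = -beta[1] / beta[0] if beta[0] != 0 else np.nan
    return amplitude, latency_shift

# Usage (illustrative): trials has shape (n_trials, n_samples)
# template = trials.mean(axis=0)
# fits = [single_trial_fit(wavelet_filter(t), template, fs=500) for t in trials]
```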


Perception ◽  
10.1068/p5620 ◽  
2008 ◽  
Vol 37 (1) ◽  
pp. 96-105 ◽  
Author(s):  
M Sharhidd Taliep ◽  
A St Clair Gibson ◽  
J Gray ◽  
L van der Merwe ◽  
C L Vaughan ◽  
...  

2020 ◽  
Author(s):  
Xiangfei Hong ◽  
Ke Bo ◽  
Sreenivasan Meyyapan ◽  
Shanbao Tong ◽  
Mingzhou Ding

Event-related potentials (ERPs) are used extensively to investigate the neural mechanisms of attention control and selection. The commonly applied univariate ERP approach, however, has left important questions inadequately answered. Here, we addressed two questions by applying multivariate pattern classification to multichannel ERPs in two spatial-cueing experiments (N = 56 in total): (1) impact of cueing strategies (instructional vs. probabilistic) and (2) neural and behavioral effects of individual differences. Following the cue onset, the decoding accuracy (cue left vs. cue right) began to rise above chance level earlier and remained higher in instructional cueing (∼80 ms) than in probabilistic cueing (∼160 ms), suggesting that unilateral attention focus leads to earlier and more distinct formation of the attentional set. A similar temporal sequence was also found for target-related processing (cued targets vs. uncued targets), suggesting earlier and stronger attention selection under instructional cueing. Across the two experiments, individuals with higher decoding accuracy during ∼460–660 ms post-cue showed higher magnitude of attentional modulation of target-evoked N1 amplitude, suggesting that better formation of anticipatory attentional state leads to better target processing. During target processing, individual difference in decoding accuracy was positively associated with behavioral performance (reaction time), suggesting that stronger selection of task-relevant information leads to better behavioral performance. Taken together, multichannel ERPs combined with machine learning decoding yield new insights into attention control and selection that are not possible with the univariate ERP approach and, along with the univariate ERP approach, provide a more comprehensive methodology for the study of visual spatial attention.
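
To make the decoding procedure concrete, the sketch below trains a cross-validated linear classifier on the multichannel scalp pattern at every time sample and returns an accuracy time course (e.g., cue left vs. cue right), whose above-chance onset can be compared across conditions. Classifier choice, data shapes, and parameters are assumptions, not the authors' pipeline.

```python
# Illustrative sketch of time-resolved multivariate decoding of multichannel
# ERPs (e.g., cue left vs. cue right); not the authors' exact pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timepoint_decoding(epochs, labels, cv=5):
    """Cross-validated decoding accuracy at each time sample.

    epochs : array, shape (n_trials, n_channels, n_times)
    labels : array, shape (n_trials,), e.g. 0 = cue left, 1 = cue right
    """
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    n_times = epochs.shape[-1]
    accuracy = np.empty(n_times)
    for t in range(n_times):
        X = epochs[:, :, t]              # spatial pattern at this time point
        accuracy[t] = cross_val_score(clf, X, labels, cv=cv).mean()
    return accuracy

# Usage (illustrative): acc = timepoint_decoding(epochs, labels)
# The latency at which acc first exceeds chance (0.5) can be compared between
# instructional and probabilistic cueing conditions.
```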


2005 ◽  
Vol 17 (12) ◽  
pp. 1907-1922 ◽  
Author(s):  
Edward K. Vogel ◽  
Geoffrey F. Woodman ◽  
Steven J. Luck

Attention operates at an early stage in some experimental paradigms and at a late stage in others, which suggests that the locus of selection is flexible. The present study was designed to determine whether the locus of selection can vary flexibly within a single experimental paradigm as a function of relatively modest variations in stimulus and task parameters. In the first experiment, a new method for assessing the locus of selection was developed. Specifically, attention can influence perceptual encoding only if it is directed to the target before a perceptual representation of the target has been formed, whereas attention can influence postperceptual processes even if attention is cued after perception is complete. Event-related potentials were used to confirm the validity of this method. The subsequent experiments used cueing tasks in which subjects were required to perceive and remember a set of objects, and the difficulty of the perception and memory components of the task were varied. When the task overloaded perception but not working memory, attention influenced the formation of perceptual representations but not the storage of these representations in memory; when the task overloaded working memory but not perception, attention influenced the transfer of perceptual representations into memory but not the formation of the perceptual representations. Thus, attention operates to select relevant information at whatever stage or stages of processing are overloaded by a particular stimulus-task combination.


2011 ◽  
Vol 23 (1) ◽  
pp. 238-246 ◽  
Author(s):  
Søren K. Andersen ◽  
Sandra Fuchs ◽  
Matthias M. Müller

We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
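
The frequency-tagging logic above can be made concrete with a small sketch: the SSVEP driven by each RDK is read out as the spectral amplitude at its tagging frequency and compared between attention conditions. The sampling rate, tagging frequency, and windowing below are placeholder assumptions, not the values used in the study.

```python
# Illustrative sketch (placeholder parameters): SSVEP amplitude at a tagging
# frequency, for comparing attended vs. unattended conditions.
import numpy as np

def ssvep_amplitude(epoch, fs, tag_freq):
    """Spectral amplitude at tag_freq for one condition-averaged epoch.

    epoch : 1-D array (occipital channel, averaged over trials of one condition)
    fs    : sampling rate in Hz
    """
    n = len(epoch)
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Usage (illustrative): one amplitude per combination of location, color, and attention
# amp_att = ssvep_amplitude(avg_attended, fs=500, tag_freq=10.0)
# amp_unatt = ssvep_amplitude(avg_unattended, fs=500, tag_freq=10.0)
```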


2009 ◽  
Vol 21 (6) ◽  
pp. 1127-1134 ◽  
Author(s):  
Jennifer J. Heisz ◽  
Judith M. Shedden

Face processing changes when a face is learned with personally relevant information. In a five-day learning paradigm, faces were presented with rich semantic stories that conveyed personal information about the faces. Event-related potentials were recorded before and after learning during a passive viewing task. When faces were novel, we observed the expected N170 repetition effect—a reduction in amplitude following face repetition. However, when faces were learned with personal information, the N170 repetition effect was eliminated, suggesting that semantic information modulates the N170 repetition effect. To control for the possibility that a simple perceptual effect contributed to the change in the N170 repetition effect, another experiment was conducted using stories that were not related to the person (i.e., stories about rocks and volcanoes). Although viewers were exposed to the faces an equal amount of time, the typical N170 repetition effect was observed, indicating that personal semantic information associated with a face, and not simply perceptual exposure, produced the observed reduction in the N170 repetition effect. These results are the first to reveal a critical perceptual change in face processing as a result of learning person-related information. The results have important implications for researchers studying face processing, as well as learning and memory in general, as they demonstrate that perceptual information alone is not enough to establish familiarity akin to real-world person learning.


2021 ◽  
Author(s):  
Anna Eiserbeck ◽  
Alexander Enge ◽  
Milena Rabovsky ◽  
Rasha Abdel Rahman

Not all visual stimuli processed by the brain reach the level of conscious perception. Previous research has shown that the emotional value of a stimulus is one of the factors that can affect whether it is consciously perceived. Here, we investigated whether social-affective knowledge influences a face’s chance to reach visual consciousness. Furthermore, we took into account the impact of facial appearance. Faces differing in facial trustworthiness (i.e., being perceived as more or less trustworthy based on appearance) were associated with neutral or negative socially relevant information. Subsequently, an attentional blink task was administered to examine whether the manipulated factors affect the faces’ chance to reach visual consciousness under conditions of reduced attentional resources. Participants showed enhanced detection of faces associated with negative as compared to neutral social information. In event-related potentials (ERPs), this was accompanied by effects in the time range of the early posterior negativity (EPN) component. These findings indicate that social-affective person knowledge is processed already before or during attentional selection and can affect which faces are prioritized for access to visual consciousness. In contrast, no clear evidence for an impact of facial trustworthiness during the attentional blink was found. This study was pre-registered using the Open Science Framework (OSF).


2017 ◽  
Author(s):  
Joshua D. Cosman ◽  
Geoffrey F. Woodman ◽  
Jeffrey D. Schall

Avoiding distraction by salient irrelevant stimuli is critical to accomplishing daily tasks. Regions of prefrontal cortex control attention by enhancing the representation of task-relevant information in sensory cortex, which can be measured directly both in the modulation of single neurons and in averages of the scalp-recorded electroencephalogram [1,2]. However, when irrelevant information is particularly conspicuous, it may distract attention and interfere with the selection of behaviorally relevant information. Many studies have shown that distraction can be minimized via top-down control [3–5], but the cognitive and neural mechanisms giving rise to this control over distraction remain uncertain and vigorously debated [6–8]. Bridging neurophysiology to electrophysiology, we simultaneously recorded neurons in prefrontal cortex and event-related potentials (ERPs) over extrastriate visual cortex to track the processing of salient distractors during a visual search task. Critically, we observed robust suppression of salient distractor representations in both cortical areas, with suppression arising in prefrontal cortex before being manifest in the ERP signal over extrastriate cortex. Furthermore, only prefrontal neurons that participated in selecting the task-relevant target also showed suppression of the task-irrelevant distractor. This suggests a common prefrontal mechanism for target selection and distractor suppression, with input from prefrontal cortex being responsible for both selecting task-relevant and suppressing task-irrelevant information in sensory cortex. Taken together, our results resolve a long-standing debate over the mechanisms that prevent distraction and provide the first evidence directly linking suppressed neural firing in prefrontal cortex with surface ERP measures of distractor suppression.

