Hemispheric Asymmetry in Visual Processing: An ERP Study on Spatial Frequency Gratings


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 180
Author(s):  
Alice Mado Proverbio ◽  
Alberto Zani

A hemispheric asymmetry for the processing of global versus local visual information is known. In this study, we investigated the existence of a hemispheric asymmetry for the visual processing of low versus high spatial frequency gratings. Event-related potentials (ERPs) were recorded in a group of healthy right-handed volunteers from 30 scalp sites. Six types of stimuli (1.5, 3 and 6 c/deg gratings) were randomly flashed 180 times in the left and right upper hemifields. The stimulus duration was 80 ms, and the interstimulus interval (ISI) ranged between 850 and 1000 ms. Participants either paid attention and responded to targets based on their spatial frequency and location, or passively viewed the stimuli. The C1 and P1 visual responses, as well as a later selection negativity and a P300 component, were quantified and subjected to repeated-measures analyses of variance (ANOVAs). Overall, performance was faster for the right visual field (RVF), suggesting a left hemispheric advantage for the attentional selection of local elements. Similarly, the analysis of the mean area amplitude of the C1 (60–110 ms) sensory response showed a stronger attentional effect (F+L+ vs. F−L+) at left occipital areas, suggesting the sensory nature of this hemispheric asymmetry.
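As a side note on the stimuli described above, the sketch below shows one generic way sinusoidal luminance gratings of a given spatial frequency (in cycles per degree, as for the 1.5, 3 and 6 c/deg stimuli) could be generated with NumPy. The patch size, pixels-per-degree resolution, and contrast are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def make_grating(cycles_per_deg, size_deg=4.0, pix_per_deg=40, contrast=0.8, phase=0.0):
    """Generate a vertical sinusoidal luminance grating.

    cycles_per_deg : spatial frequency of the grating (c/deg)
    size_deg       : width/height of the patch in degrees of visual angle (assumed)
    pix_per_deg    : display resolution in pixels per degree (assumed)
    contrast       : Michelson contrast of the grating (0-1)
    """
    n_pix = int(size_deg * pix_per_deg)
    # x coordinate of each pixel, expressed in degrees of visual angle
    x_deg = np.arange(n_pix) / pix_per_deg
    # luminance modulation around a mid-grey background level of 0.5
    row = 0.5 + 0.5 * contrast * np.sin(2 * np.pi * cycles_per_deg * x_deg + phase)
    return np.tile(row, (n_pix, 1))

# The three spatial frequencies used in the study
gratings = {sf: make_grating(sf) for sf in (1.5, 3.0, 6.0)}
```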


2015 ◽  
Vol 45 (10) ◽  
pp. 2111-2122 ◽  
Author(s):  
W. Li ◽  
T. M. Lai ◽  
C. Bohon ◽  
S. K. Loo ◽  
D. McCurdy ◽  
...  

Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities – event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI) – to test for abnormal activity associated with early visual signaling. Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
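For readers unfamiliar with joint independent component analysis, the sketch below illustrates the general idea under simplifying assumptions: each subject's ERP feature vector and fMRI feature vector are concatenated and decomposed together, so every component couples a temporal (ERP) signature with a spatial (fMRI) signature. It uses scikit-learn's FastICA on placeholder data and is not the authors' pipeline; the shapes and component count are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder data: one row per subject, columns are z-scored features
n_subjects, n_erp_times, n_voxels = 45, 300, 5000
erp = np.random.randn(n_subjects, n_erp_times)     # ERP time courses (illustrative)
fmri = np.random.randn(n_subjects, n_voxels)       # fMRI contrast maps (illustrative)

# Joint ICA: concatenate the modalities feature-wise so each component
# has a linked ERP part and fMRI part
joint = np.hstack([erp, fmri])                     # (subjects, features)

ica = FastICA(n_components=10, random_state=0)
# Transpose so independence is estimated across features (timepoints/voxels)
components = ica.fit_transform(joint.T).T          # (n_components, features)
subject_loadings = ica.mixing_                     # (subjects, n_components)

erp_part = components[:, :n_erp_times]             # temporal signature of each component
fmri_part = components[:, n_erp_times:]            # spatial signature of each component
```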


Author(s):  
Shozo Tobimatsu

There are two major parallel visual pathways in humans: the parvocellular (P) and magnocellular (M) pathways. The former has excellent spatial resolution with color selectivity, while the latter shows excellent temporal resolution with high contrast sensitivity. Visual stimuli should be tailored to answer specific clinical and/or research questions. This chapter examines the neural mechanisms of face perception using event-related potentials (ERPs). Face stimuli of different spatial frequencies were used to investigate how low-spatial-frequency (LSF) and high-spatial-frequency (HSF) components of the face contribute to the identification and recognition of the face and facial expressions. The P100 component in the occipital area (Oz), the N170 in the posterior temporal region (T5/T6), and late components peaking at 270–390 ms (T5/T6) were analyzed. LSF enhanced the P100, while the N170 was augmented by HSF irrespective of facial expressions. This suggested that LSF is important for the global processing of facial expressions, whereas HSF handles featural processing. There were significant amplitude differences between positive and negative LSF facial expressions in the early time windows of 270–310 ms. Subsequently, the amplitudes among negative HSF facial expressions differed significantly in the later time windows of 330–390 ms. Thus, discrimination between positive and negative facial expressions precedes discrimination among different negative expressions in a sequential manner based on parallel visual channels. Interestingly, patients with schizophrenia showed decreased spatial frequency sensitivities for face processing. Taken together, spatially filtered face images are useful for exploring face perception and recognition.
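The LSF/HSF face stimuli described here are typically produced by Fourier-domain spatial frequency filtering; the sketch below shows a generic version of that step. The cutoff of 8 cycles per image and the random placeholder image are illustrative assumptions, not the chapter's actual parameters.

```python
import numpy as np

def sf_filter(image, cutoff_cycles, keep="low"):
    """Low- or high-pass a square greyscale image in the Fourier domain.

    cutoff_cycles : cutoff expressed in cycles per image
    keep          : "low" keeps frequencies below the cutoff (LSF image),
                    "high" keeps frequencies above it (HSF image)
    """
    n = image.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    radius = np.sqrt(fx**2 + fy**2) * n          # radial frequency in cycles/image
    mask = radius <= cutoff_cycles if keep == "low" else radius > cutoff_cycles
    spectrum = np.fft.fft2(image - image.mean())
    filtered = np.real(np.fft.ifft2(spectrum * mask))
    return filtered + image.mean()               # restore the mean luminance

# Illustrative use with a random "image"; a real face photograph would go here
face = np.random.rand(256, 256)
lsf_face = sf_filter(face, cutoff_cycles=8, keep="low")
hsf_face = sf_filter(face, cutoff_cycles=8, keep="high")
```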


2018 ◽  
Author(s):  
Tamar I. Regev ◽  
Jonathan Winawer ◽  
Edden M. Gerber ◽  
Robert T. Knight ◽  
Leon Y. Deouell

Much of what is known about the timing of visual processing in the brain is inferred from intracranial studies in monkeys, with human data limited mainly to non-invasive methods with lower spatial resolution. Here, we estimated visual onset latencies from electrocorticographic (ECoG) recordings in a patient who was implanted with 112 subdural electrodes, distributed across the posterior cortex of the right hemisphere, for pre-surgical evaluation of intractable epilepsy. Functional MRI prior to surgery was used to determine the boundaries of visual areas. The patient was presented with images of objects from several categories. Event-related potentials (ERPs) were calculated across all categories excluding targets, and statistically reliable onset latencies were determined using a bootstrapping procedure over the single-trial baseline activity in individual electrodes. The distribution of onset latencies broadly reflected the known hierarchy of visual areas, with the earliest cortical responses in primary visual cortex and higher areas showing later responses. A clear exception to this pattern was a robust, statistically reliable, and spatially localized very early response on the bank of the posterior intraparietal sulcus (IPS). The response in the IPS started nearly simultaneously with responses detected in peristriate visual areas, around 60 milliseconds post-stimulus onset. Our results support the notion of early visual processing in the posterior parietal lobe, not respecting traditional hierarchies, and give direct evidence for the upper limit of onset times of visual responses across the human cortex.
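The sketch below illustrates one generic way to derive a statistically reliable onset latency for an individual electrode by bootstrapping single-trial baseline activity, in the spirit of the procedure mentioned above but not a reproduction of it; the number of resamples, the alpha level, and the consecutive-sample criterion are illustrative assumptions.

```python
import numpy as np

def onset_latency(trials, times, n_boot=1000, alpha=0.001, min_consec=10, rng=None):
    """Estimate the response onset for one electrode.

    trials : array (n_trials, n_times) of single-trial voltages
    times  : array (n_times,) of time points in ms (0 = stimulus onset)
    Returns the first time at which the trial-averaged ERP exceeds a bootstrap
    threshold derived from baseline activity for `min_consec` consecutive samples.
    """
    rng = np.random.default_rng(rng)
    erp = trials.mean(axis=0)
    baseline = trials[:, times < 0]
    n_trials = trials.shape[0]

    # Bootstrap distribution of the maximum absolute baseline ERP value
    boot_max = np.empty(n_boot)
    for b in range(n_boot):
        resampled = baseline[rng.integers(0, n_trials, n_trials)]
        boot_max[b] = np.abs(resampled.mean(axis=0)).max()
    threshold = np.quantile(boot_max, 1 - alpha)

    # First run of min_consec consecutive post-stimulus samples above threshold
    post = times >= 0
    above = np.abs(erp[post]) > threshold
    run = 0
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        if run >= min_consec:
            return times[post][i - min_consec + 1]
    return None   # no reliable onset found
```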


2021 ◽  
Author(s):  
Wei Dou ◽  
Audrey Morrow ◽  
Luca Iemi ◽  
Jason Samaha

The neural generation of alpha-band (8–13 Hz) activity has been characterized across many different animal experiments. However, the functional role that alpha oscillations play in perception and behavior has largely been attributed to two contrasting hypotheses, with human evidence in favor of either (or both or neither) remaining sparse. On the one hand, alpha generators have been observed in relay sectors of the visual thalamus and are postulated to phasically inhibit afferent visual input in a feedforward manner [1-4]. On the other hand, evidence also suggests that the direction of influence of alpha activity propagates backwards along the visual hierarchy, reflecting a feedback influence upon the visual cortex [5-9]. The primary source of human evidence regarding the role of alpha phase in visual processing has been based on perceptual reports [10-16], which could be modulated either by feedforward or feedback alpha activity. Thus, although these two hypotheses are not mutually exclusive, human evidence clearly supporting either one is lacking. Here, we present human subjects with large, high-contrast visual stimuli that elicit robust C1 event-related potentials (ERPs), which peak between 70 and 80 milliseconds post-stimulus and are thought to reflect afferent primary visual cortex (V1) input [17-20]. We find that the phase of ongoing alpha oscillations modulates the global field power (GFP) of the EEG during this first volley of stimulus processing (the C1 time window). On the standard assumption [21-23] that this early activity reflects postsynaptic potentials being relayed to visual cortex from the thalamus, our results suggest that alpha phase gates visual responses during the first feedforward sweep of processing.
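A generic sketch of this kind of analysis is shown below: estimate the alpha phase of each trial at stimulus onset (here via a Hilbert transform of band-passed data), bin trials by that phase, and compute the global field power of each bin's ERP in the C1 window. The filter design, the 70-80 ms window, the number of bins, and the data shapes are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_phase_gfp(eeg, times, sfreq, n_bins=6):
    """eeg: (n_trials, n_channels, n_times); times in seconds (0 = stimulus onset)."""
    # Band-pass 8-13 Hz and take the analytic phase at stimulus onset
    b, a = butter(3, [8, 13], btype="bandpass", fs=sfreq)
    phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1))
    onset = np.argmin(np.abs(times))
    # Circular mean of phase across channels gives one phase estimate per trial
    trial_phase = np.angle(np.exp(1j * phase[:, :, onset]).mean(axis=1))

    # Sort trials into phase bins and compute GFP of each bin's ERP in the C1 window
    c1 = (times >= 0.070) & (times <= 0.080)
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.digitize(trial_phase, edges) - 1
    gfp_per_bin = []
    for k in range(n_bins):
        trials_k = eeg[bins == k]
        if trials_k.size == 0:
            gfp_per_bin.append(np.nan)
            continue
        erp = trials_k.mean(axis=0)                        # (n_channels, n_times)
        gfp_per_bin.append(erp[:, c1].std(axis=0).mean())  # spatial std, averaged over window
    return np.array(gfp_per_bin)
```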


2005 ◽  
Vol 17 (8) ◽  
pp. 1341-1352 ◽  
Author(s):  
Joseph B. Hopfinger ◽  
Anthony J. Ries

Recent studies have generated debate regarding whether reflexive attention mechanisms are triggered in a purely automatic stimulus-driven manner. Behavioral studies have found that a nonpredictive “cue” stimulus will speed manual responses to subsequent targets at the same location, but only if that cue is congruent with actively maintained top-down settings for target detection. When a cue is incongruent with top-down settings, response times are unaffected, and this has been taken as evidence that reflexive attention mechanisms were never engaged in those conditions. However, manual response times may mask effects on earlier stages of processing. Here, we used event-related potentials to investigate the interaction of bottom-up sensory-driven mechanisms and top-down control settings at multiple stages of processing in the brain. Our results dissociate sensory-driven mechanisms that automatically bias early stages of visual processing from later mechanisms that are contingent on top-down control. An early enhancement of target processing in the extrastriate visual cortex (i.e., the P1 component) was triggered by the appearance of a unique bright cue, regardless of top-down settings. The enhancement of visual processing was prolonged, however, when the cue was congruent with top-down settings. Later processing in posterior temporal-parietal regions (i.e., the ipsilateral invalid negativity) was triggered automatically when the cue consisted of the abrupt appearance of a single new object. However, in cases where more than a single object appeared during the cue display, this stage of processing was contingent on top-down control. These findings provide evidence that visual information processing is biased at multiple levels in the brain, and the results distinguish automatically triggered sensory-driven mechanisms from those that are contingent on top-down control settings.


2009 ◽  
Vol 109 (1) ◽  
pp. 140-158 ◽  
Author(s):  
Alberto Zani ◽  
Alice Mado Proverbio

Event-related potentials (ERPs) were recorded from occipital sites to investigate early selection mechanisms and to determine the time at which attention modifies the processing activity of the visual cortex in humans. Nineteen right-handed participants served as paid volunteers. The task consisted of paying selective attention to a combination of spatial frequency and location and then responding to target stimuli while ignoring other combinations of features. Sensory-evoked components were analyzed by measuring mean amplitude values within the latency ranges of 60–80, 80–100, 100–120, and 120–140 msec poststimulus. Stimuli relevant in frequency and/or location elicited larger evoked C1 responses than unattended stimuli as early as 60–80 msec poststimulus, a range that likely corresponds to sensory activity in the striate cortex, although due to the small number of recording sites, the activity could not be precisely localized.
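The quantification step described here (mean amplitude in successive post-stimulus windows, entered into repeated-measures ANOVAs) can be sketched generically as below, using statsmodels' AnovaRM on placeholder data; the single within-subject factor, the window edges, and the random data are illustrative assumptions, not the study's full design.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def window_means(erp, times, windows=((60, 80), (80, 100), (100, 120), (120, 140))):
    """Mean amplitude of an ERP (1-D array over `times`, in ms) in each latency window."""
    return {f"{lo}-{hi}": erp[(times >= lo) & (times < hi)].mean() for lo, hi in windows}

# Illustrative long-format table: one mean amplitude per subject x attention condition
rows = []
times = np.arange(-100, 300)                       # 1 ms resolution, in ms
for subject in range(1, 20):
    for condition in ("attended", "unattended"):
        erp = np.random.randn(times.size)          # placeholder for a real occipital ERP
        amp = window_means(erp, times)["60-80"]
        rows.append({"subject": subject, "condition": condition, "amplitude": amp})

df = pd.DataFrame(rows)
anova = AnovaRM(df, depvar="amplitude", subject="subject", within=["condition"]).fit()
print(anova.summary())
```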


2010 ◽  
Vol 104 (2) ◽  
pp. 972-983 ◽  
Author(s):  
M. van Elk ◽  
H. T. van Schie ◽  
S.F.W. Neggers ◽  
H. Bekkering

The present study investigated the selection-for-action hypothesis, according to which a subject's intention to perform a movement influences the way in which visual information is processed. Subjects were instructed in separate blocks either to grasp or to point to a three-dimensional target object, and event-related potentials were recorded relative to stimulus onset. It was found that grasping, compared with pointing, resulted in a stronger N1 component and a subsequent selection negativity, which were localized to the lateral occipital complex. These effects suggest that the intention to grasp influences the processing of action-relevant features in ventral stream areas already at an early stage (e.g., enhanced processing of object orientation for grasping). These findings provide new insight into the neural and temporal dynamics underlying perception–action coupling and provide neural evidence for a selection-for-action principle in early human visual processing.


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment, we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found. The multimodal-channel condition elicited the smallest amplitude in the P200 and N300 components, followed by an increased amplitude in each component for the bimodal-channel condition. The largest amplitude was observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


2015 ◽  
Vol 27 (3) ◽  
pp. 492-508 ◽  
Author(s):  
Nicholas E. Myers ◽  
Lena Walther ◽  
George Wallis ◽  
Mark G. Stokes ◽  
Anna C. Nobre

Working memory (WM) is strongly influenced by attention. In visual WM tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar frontoparietal control network, the two are likely to exhibit some processing differences, because precues invite anticipation of upcoming information whereas retrocues may guide prioritization, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual WM task designed to permit a direct comparison between cueing conditions. We found marked differences in ERP profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha-band (8–14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that, whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information.
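The alpha-band lateralization compared across cue types is commonly quantified as a contralateral-minus-ipsilateral power index over posterior channels; the sketch below shows a generic version of that computation. The channel selections, the 8-14 Hz filter design, and the normalization are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_lateralization(eeg, sfreq, left_chans, right_chans, cued_side):
    """Alpha-band lateralization index for a set of trials.

    eeg        : (n_trials, n_channels, n_times) array of cue-locked EEG
    left_chans, right_chans : index lists for posterior channels of each hemisphere
    cued_side  : array of "left"/"right" labels, one per trial
    Returns (contra - ipsi) / (contra + ipsi) power, averaged over trials and time.
    """
    # Band-pass 8-14 Hz and take instantaneous power from the analytic signal
    b, a = butter(3, [8, 14], btype="bandpass", fs=sfreq)
    power = np.abs(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1)) ** 2

    left_pow = power[:, left_chans].mean(axis=1)    # (n_trials, n_times)
    right_pow = power[:, right_chans].mean(axis=1)

    cued_left = np.asarray(cued_side) == "left"
    contra = np.where(cued_left[:, None], right_pow, left_pow)
    ipsi = np.where(cued_left[:, None], left_pow, right_pow)
    return ((contra - ipsi) / (contra + ipsi)).mean()
```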

