Hemispheric Asymmetry in Visual Processing: An ERP Study on Spatial Frequency Gratings

Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 180
Author(s):  
Alice Mado Proverbio ◽  
Alberto Zani

A hemispheric asymmetry for the processing of global versus local visual information is known. In this study, we investigated the existence of a hemispheric asymmetry for the visual processing of low versus high spatial frequency gratings. Event-related potentials were recorded in a group of healthy right-handed volunteers from 30 scalp sites. Six types of stimuli (1.5, 3 and 6 c/deg gratings) were randomly flashed 180 times in the left and right upper hemifields. The stimulus duration was 80 ms, and the interstimulus interval (ISI) ranged between 850 and 1000 ms. Participants either paid attention and responded to targets based on their spatial frequency and location, or passively viewed the stimuli. The C1 and P1 visual responses, as well as a later selection negativity and a P300 component of the event-related potentials (ERPs), were quantified and subjected to repeated-measures analyses of variance (ANOVAs). Overall, performance was faster for the right visual field (RVF), thus suggesting a left hemispheric advantage for the attentional selection of local elements. Similarly, the analysis of the mean area amplitude of the C1 (60–110 ms) sensory response showed a stronger attentional effect (F+L+ vs. F−L+) at the left occipital areas, thus suggesting the sensory nature of this hemispheric asymmetry.
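
As a rough sketch of the kind of analysis the abstract describes, the snippet below simulates per-subject C1 mean amplitudes (60–110 ms) for a simplified hemisphere x attention design and runs a repeated-measures ANOVA with statsmodels. The factor names, effect sizes, and two-factor layout are illustrative assumptions, not the study's actual design.

```python
# Minimal sketch: repeated-measures ANOVA on C1 mean amplitudes.
# Hypothetical data layout; the actual study includes more factors and subjects.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in [f"s{i:02d}" for i in range(1, 13)]:
    for hemi in ("left_occipital", "right_occipital"):
        for attn in ("F+L+", "F-L+"):
            # Simulated C1 mean amplitude (µV) in the 60-110 ms window
            amp = rng.normal(loc=-1.0 if attn == "F+L+" else -0.6, scale=0.3)
            rows.append({"subject": subj, "hemisphere": hemi,
                         "attention": attn, "c1_amp": amp})
df = pd.DataFrame(rows)

# 2 (hemisphere) x 2 (attention) within-subject ANOVA
res = AnovaRM(df, depvar="c1_amp", subject="subject",
              within=["hemisphere", "attention"]).fit()
print(res.anova_table)
```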


2015 ◽  
Vol 45 (10) ◽  
pp. 2111-2122 ◽  
Author(s):  
W. Li ◽  
T. M. Lai ◽  
C. Bohon ◽  
S. K. Loo ◽  
D. McCurdy ◽  
...  

Background: Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities, event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), to test for abnormal activity associated with early visual signaling. Method: We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. Results: AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Conclusions: Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
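
For readers unfamiliar with spatial-frequency manipulations, the sketch below shows one common way to derive low- and high-spatial-frequency versions of a grayscale image with a Gaussian filter; the cutoff (sigma) and the synthetic image are placeholders, not the filtering actually used by the authors.

```python
# Sketch: deriving low- and high-spatial-frequency versions of a stimulus image.
# The cutoff (sigma) is an arbitrary placeholder, not a value from the study.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image: np.ndarray, sigma: float = 4.0):
    """Return (low_sf, high_sf) components of a grayscale image."""
    low_sf = gaussian_filter(image.astype(float), sigma=sigma)  # blur keeps low SF
    high_sf = image.astype(float) - low_sf                      # residual keeps high SF
    return low_sf, high_sf

# Example with a synthetic 256 x 256 "image"
img = np.random.default_rng(1).random((256, 256))
lsf, hsf = split_spatial_frequencies(img, sigma=4.0)
print(lsf.shape, hsf.shape)
```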


Author(s):  
Shozo Tobimatsu

There are two major parallel visual pathways in humans: the parvocellular (P) and magnocellular (M) pathways. The former has excellent spatial resolution with color selectivity, while the latter shows excellent temporal resolution with high contrast sensitivity. Visual stimuli should be tailored to answer specific clinical and/or research questions. This chapter examines the neural mechanisms of face perception using event-related potentials (ERPs). Face stimuli of different spatial frequencies were used to investigate how low-spatial-frequency (LSF) and high-spatial-frequency (HSF) components of the face contribute to the identification and recognition of the face and facial expressions. The P100 component in the occipital area (Oz), the N170 in the posterior temporal region (T5/T6), and late components peaking at 270–390 ms (T5/T6) were analyzed. LSF enhanced the P100, while the N170 was augmented by HSF irrespective of facial expression. This suggested that LSF is important for global processing of facial expressions, whereas HSF handles featural processing. There were significant amplitude differences between positive and negative LSF facial expressions in the early time window of 270–310 ms. Subsequently, the amplitudes among negative HSF facial expressions differed significantly in the later time window of 330–390 ms. Thus, discrimination between positive and negative facial expressions precedes discrimination among different negative expressions, in a sequential manner based on parallel visual channels. Interestingly, patients with schizophrenia showed decreased spatial frequency sensitivities for face processing. Taken together, spatially filtered face images are useful for exploring face perception and recognition.
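
A minimal sketch of the kind of peak measurement described above (P100 at Oz, N170 at T5/T6), assuming an averaged ERP stored as a NumPy array; the channel indices, sampling rate, and search windows are illustrative, not the study's exact parameters.

```python
# Sketch: peak amplitude and latency of P100 (Oz) and N170 (T5/T6) from an
# averaged ERP of shape (n_channels, n_times). Channel indices, sampling rate,
# and windows are illustrative assumptions.
import numpy as np

def peak_in_window(erp, ch, t_axis, t_min, t_max, polarity):
    """Return (latency_s, amplitude) of the most positive/negative point."""
    mask = (t_axis >= t_min) & (t_axis <= t_max)
    segment = erp[ch, mask]
    idx = np.argmax(segment) if polarity == "pos" else np.argmin(segment)
    return t_axis[mask][idx], segment[idx]

sfreq = 500.0
t_axis = np.arange(-0.1, 0.5, 1 / sfreq)               # -100 to 500 ms
erp = np.random.default_rng(2).normal(size=(32, t_axis.size))
OZ, T5, T6 = 15, 20, 21                                 # hypothetical channel indices

p100 = peak_in_window(erp, OZ, t_axis, 0.08, 0.13, "pos")
n170 = peak_in_window(erp, T5, t_axis, 0.13, 0.20, "neg")
print("P100 (Oz):", p100, "N170 (T5):", n170)
```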


2018 ◽  
Author(s):  
Tamar I. Regev ◽  
Jonathan Winawer ◽  
Edden M. Gerber ◽  
Robert T. Knight ◽  
Leon Y. Deouell

Much of what is known about the timing of visual processing in the brain is inferred from intracranial studies in monkeys, with human data limited mainly to non-invasive methods with lower spatial resolution. Here, we estimated visual onset latencies from electrocorticographic (ECoG) recordings in a patient who was implanted with 112 subdural electrodes, distributed across the posterior cortex of the right hemisphere, for pre-surgical evaluation of intractable epilepsy. Functional MRI prior to surgery was used to determine the boundaries of visual areas. The patient was presented with images of objects from several categories. Event-related potentials (ERPs) were calculated across all categories excluding targets, and statistically reliable onset latencies were determined using a bootstrapping procedure over the single-trial baseline activity in individual electrodes. The distribution of onset latencies broadly reflected the known hierarchy of visual areas, with the earliest cortical responses in primary visual cortex and later responses in higher areas. A clear exception to this pattern was a robust, statistically reliable, and spatially localized very early response on the bank of the posterior intraparietal sulcus (IPS). The response in the IPS started nearly simultaneously with responses detected in peristriate visual areas, around 60 milliseconds post-stimulus onset. Our results support the notion of early visual processing in the posterior parietal lobe, not respecting traditional hierarchies, and provide direct evidence for the upper limit of onset times of visual responses across the human cortex.
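
The bootstrapping idea can be illustrated with a short sketch: resample single-trial baseline activity to build a null distribution of evoked amplitude, then take the first post-stimulus time at which the ERP stays above that threshold. The significance level and consecutive-samples criterion below are assumptions, not the authors' exact procedure.

```python
# Sketch: estimating response onset latency by bootstrapping baseline activity.
# Threshold level and the consecutive-samples rule are illustrative assumptions.
import numpy as np

def onset_latency(trials, t_axis, n_boot=1000, alpha=0.001, min_consec=10, seed=0):
    """trials: (n_trials, n_times) single-trial data for one electrode."""
    rng = np.random.default_rng(seed)
    baseline = trials[:, t_axis < 0]
    n_trials = trials.shape[0]

    # Bootstrap distribution of the maximal baseline evoked amplitude
    boot = np.empty(n_boot)
    for b in range(n_boot):
        resampled = baseline[rng.integers(0, n_trials, n_trials)]
        boot[b] = np.abs(resampled.mean(axis=0)).max()
    threshold = np.quantile(boot, 1 - alpha)

    erp = np.abs(trials.mean(axis=0))
    candidates = np.flatnonzero((t_axis >= 0) & (erp > threshold))
    # Require min_consec consecutive supra-threshold samples
    for i in candidates:
        if np.all(erp[i:i + min_consec] > threshold):
            return t_axis[i]
    return None

t_axis = np.arange(-0.2, 0.5, 1 / 1000)
trials = np.random.default_rng(3).normal(size=(200, t_axis.size))
trials[:, t_axis > 0.06] += 2.0                         # fake response after 60 ms
print(onset_latency(trials, t_axis))
```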


2021 ◽  
Author(s):  
Wei Dou ◽  
Audrey Morrow ◽  
Luca Iemi ◽  
Jason Samaha

The neural generation of alpha-band (8–13 Hz) activity has been characterized across many different animal experiments. However, the functional role that alpha oscillations play in perception and behavior has largely been attributed to two contrasting hypotheses, with human evidence in favor of either (or both, or neither) remaining sparse. On the one hand, alpha generators have been observed in relay sectors of the visual thalamus and are postulated to phasically inhibit afferent visual input in a feedforward manner [1-4]. On the other hand, evidence also suggests that the influence of alpha activity propagates backwards along the visual hierarchy, reflecting a feedback influence upon the visual cortex [5-9]. The primary source of human evidence regarding the role of alpha phase in visual processing has been perceptual reports [10-16], which could be modulated by either feedforward or feedback alpha activity. Thus, although these two hypotheses are not mutually exclusive, human evidence clearly supporting either one is lacking. Here, we present human subjects with large, high-contrast visual stimuli that elicit robust C1 event-related potentials (ERPs), which peak between 70 and 80 milliseconds post-stimulus and are thought to reflect afferent input to primary visual cortex (V1) [17-20]. We find that the phase of ongoing alpha oscillations modulates the global field power (GFP) of the EEG during this first volley of stimulus processing (the C1 time window). On the standard assumption [21-23] that this early activity reflects postsynaptic potentials being relayed to visual cortex from the thalamus, our results suggest that alpha phase gates visual responses during the first feedforward sweep of processing.
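
A hedged sketch of one way such an analysis could be set up, assuming a trials x channels x time EEG array: estimate alpha phase at stimulus onset with a band-pass filter and Hilbert transform, compute GFP as the across-channel standard deviation in an assumed C1 window, and compare GFP across phase bins. The channel selection, filter settings, windows, and bin count are placeholders, not the authors' pipeline.

```python
# Sketch: pre-stimulus alpha phase vs. global field power (GFP) in a C1 window.
# Filter settings, channel cluster, windows, and bin count are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

sfreq = 500.0
t_axis = np.arange(-0.5, 0.5, 1 / sfreq)
epochs = np.random.default_rng(4).normal(size=(300, 64, t_axis.size))  # trials x ch x time

# Alpha-band (8-13 Hz) phase at stimulus onset, circularly averaged over an
# assumed "posterior" cluster (here simply the first 20 channels).
b, a = butter(3, [8 / (sfreq / 2), 13 / (sfreq / 2)], btype="band")
alpha = filtfilt(b, a, epochs, axis=-1)
phase = np.angle(hilbert(alpha, axis=-1))
onset_idx = np.argmin(np.abs(t_axis))
phase_at_onset = np.angle(np.exp(1j * phase[:, :20, onset_idx]).mean(axis=1))

# GFP (std across channels) averaged over an assumed C1 window (60-90 ms)
c1 = (t_axis >= 0.06) & (t_axis <= 0.09)
gfp = epochs[:, :, c1].std(axis=1).mean(axis=-1)

# Bin trials by phase and compare mean GFP across bins
bins = np.digitize(phase_at_onset, np.linspace(-np.pi, np.pi, 7)) - 1
print([gfp[bins == k].mean() for k in range(6)])
```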


2009 ◽  
Vol 109 (1) ◽  
pp. 140-158 ◽  
Author(s):  
Alberto Zani ◽  
Alice Mado Proverbio

Event-related potentials (ERPs) were recorded from occipital sites to investigate early selection mechanisms and to determine the time at which attention modifies the processing activity of the visual cortex in humans. Nineteen right-handed participants served as paid volunteers. The task consisted of paying selective attention to a combination of spatial frequency and location and then responding to target stimuli while ignoring other combinations of features. Sensory-evoked components were analyzed by measuring mean amplitude values within the latency ranges of 60–80, 80–100, 100–120, and 120–140 msec poststimulus. Stimuli relevant in frequency and/or location elicited larger evoked C1 responses than unattended stimuli as early as 60–80 msec poststimulus, a range that likely corresponds to sensory activity in the striate cortex, although due to the small number of recording sites, the activity could not be precisely localized.
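
The windowed mean-amplitude measurement can be sketched as follows; a paired t-test per window stands in here for the full analysis of variance reported in the study, and all data, shapes, and condition labels are simulated.

```python
# Sketch: mean amplitudes in consecutive latency windows and an
# attended-vs-unattended paired comparison. Data and shapes are hypothetical.
import numpy as np
from scipy.stats import ttest_rel

sfreq = 1000.0
t_axis = np.arange(-0.1, 0.3, 1 / sfreq)
rng = np.random.default_rng(5)
# Per-subject occipital ERPs: (n_subjects, n_times) for each condition
attended = rng.normal(size=(19, t_axis.size))
unattended = rng.normal(size=(19, t_axis.size))

windows = [(0.060, 0.080), (0.080, 0.100), (0.100, 0.120), (0.120, 0.140)]
for t0, t1 in windows:
    mask = (t_axis >= t0) & (t_axis < t1)
    att = attended[:, mask].mean(axis=1)
    unatt = unattended[:, mask].mean(axis=1)
    t, p = ttest_rel(att, unatt)
    print(f"{int(t0 * 1000)}-{int(t1 * 1000)} ms: t={t:.2f}, p={p:.3f}")
```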


2012 ◽  
Vol 25 (0) ◽  
pp. 40
Author(s):  
Alexis Pérez-Bellido ◽  
Joan López-Moliner ◽  
Salvador Soto-Faraco

Prior knowledge about the spatial frequency (SF) of upcoming visual targets (Gabor patches) speeds up average reaction times and decreases their standard deviation. This has often been regarded as evidence for multichannel processing of SF in vision. Multisensory research, on the other hand, has often reported the existence of sensory interactions between auditory and visual signals. These interactions result in enhancements of visual processing, leading to lower sensory thresholds and/or more precise visual estimates. However, little is known about how multisensory interactions may affect the uncertainty regarding visual SF. We conducted a reaction time study in which we manipulated the uncertainty about the SF of visual targets (SF was blocked or interleaved across trials), and compared visual-only versus audio-visual presentations. Surprisingly, the analysis of the reaction times and their standard deviation revealed an impairment of the selective monitoring of the SF channel in the presence of a concurrent sound. Moreover, this impairment was especially pronounced when the relevant channels were high SFs at high visual contrasts. We propose that an accessory sound automatically favours visual processing of low SFs through the magnocellular channels, thereby detracting from the potential benefit of tuning into high-SF psychophysical channels.
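
A minimal sketch of how mean reaction time and its standard deviation per condition could be tabulated; the column names, factor levels, and simulated RTs are hypothetical, not the study's data.

```python
# Sketch: summarizing mean reaction time and its standard deviation per condition.
# Column names and factor levels are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "modality": rng.choice(["visual", "audiovisual"], n),
    "sf_uncertainty": rng.choice(["blocked", "interleaved"], n),
    "sf_band": rng.choice(["low", "high"], n),
    "rt_ms": rng.normal(450, 60, n),
})

summary = (df.groupby(["modality", "sf_uncertainty", "sf_band"])["rt_ms"]
             .agg(mean_rt="mean", sd_rt="std", n="count"))
print(summary)
```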


2020 ◽  
Vol 14 ◽  
Author(s):  
Luiza Kirasirova ◽  
Vladimir Bulanov ◽  
Alexei Ossadtchi ◽  
Alexander Kolsanov ◽  
Vasily Pyatin ◽  
...  

A P300 brain-computer interface (BCI) is a paradigm in which text characters are decoded from event-related potentials (ERPs). In a popular implementation, called the P300 speller, a subject looks at a display where characters are flashing and selects one character by attending to it. The selection is recognized as the item with the strongest ERP. The speller performs well when cortical responses to target and non-target stimuli are sufficiently different. Although many strategies have been proposed for improving BCI spelling, a relatively simple one has received insufficient attention in the literature: restricting the visual field to diminish the contribution from non-target stimuli. Previously, this idea was implemented in a single-stimulus switch that issued an urgent command, such as stopping a robot. To explore this approach further, we ran a pilot experiment in which ten subjects operated a traditional P300 speller either normally or while wearing a binocular aperture that confined their sight to the central visual field. As intended, visual field restriction resulted in a replacement of non-target ERPs with EEG rhythms asynchronous to the stimulus periodicity. Changes in target ERPs were found in half of the subjects and were individually variable. While classification accuracy was slightly better for the aperture condition (84.3 ± 2.9%, mean ± standard error) than for the no-aperture condition (81.0 ± 2.6%), this difference was not statistically significant for the entire sample of subjects (N = 10). For both the aperture and no-aperture conditions, classification accuracy improved over 4 days of training, more so for the aperture condition (from 72.0 ± 6.3% to 87.0 ± 3.9% and from 72.0 ± 5.6% to 97.0 ± 2.2% for the no-aperture and aperture conditions, respectively). Although BCI performance was not substantially altered in this study, we suggest that with further refinement this approach could speed up BCI operation and reduce user fatigue. Additionally, instead of wearing an aperture, non-targets could be removed algorithmically or with a hybrid interface that utilizes an eye tracker. We further discuss how a P300 speller could be improved by taking advantage of the different physiological properties of central and peripheral vision. Finally, we suggest that the proposed experimental approach could be used in basic research on the mechanisms of visual processing.
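
The "item with the strongest ERP" selection rule can be sketched with linear discriminant scoring of averaged flash epochs; the simulated data, feature dimensionality, and the shortcut of scoring the training data itself are illustrative only, not how a deployed speller is calibrated.

```python
# Sketch: P300 speller target selection as "item with the strongest ERP score".
# Data are simulated; real systems use richer features and separate calibration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
n_items, n_repeats, n_features = 6, 10, 40        # e.g. one row/column of a matrix
epochs = rng.normal(size=(n_items, n_repeats, n_features))
target_item = 2
epochs[target_item] += 0.5                         # target flashes carry a larger ERP

# Train on labeled flashes (flattened), then score each item by its averaged epoch
X = epochs.reshape(-1, n_features)
y = np.repeat(np.arange(n_items) == target_item, n_repeats).astype(int)
clf = LinearDiscriminantAnalysis().fit(X, y)

scores = clf.decision_function(epochs.mean(axis=1))  # one averaged epoch per item
print("decoded item:", int(np.argmax(scores)), "true item:", target_item)
```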


2020 ◽  
Vol 25 (5) ◽  
pp. 237-248
Author(s):  
Maojin Liang ◽  
Jiahao Liu ◽  
Yuexin Cai ◽  
Fei Zhao ◽  
Suijun Chen ◽  
...  

Objective: The present study investigated the characteristics of visual processing in the auditory-associated cortex of adults with hearing loss using event-related potentials. Methods: Ten subjects with bilateral postlingual hearing loss were recruited, along with ten age- and sex-matched normal-hearing subjects as controls. Visual evoked potentials to "sound" and "non-sound" photos were recorded. The P170 response in the occipital area as well as the N1 and N2 responses at FC3 and FC4 were analyzed. Results: Adults with hearing loss had higher P170 amplitudes, significantly higher N2 amplitudes, and shorter N2 latencies in response to "sound" and "non-sound" photo stimuli at both FC3 and FC4, with the exception of the N2 amplitude in response to "sound" photo stimuli at FC3. Topographic mapping analysis further revealed a large difference between responses to "sound" and "non-sound" photos over the right frontotemporal area from approximately 200 to 400 ms. Source localization placed this difference in the middle frontal gyrus (BA10) at around 266 ms. Conclusions: The significantly stronger responses to visual stimuli indicate enhanced visual processing in the auditory-associated cortex of adults with hearing loss, which may be attributed to cortical visual reorganization involving the right frontotemporal cortex.
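
A small sketch of the kind of difference-wave measurement described above: subtract the "non-sound" evoked response from the "sound" response at one frontotemporal channel and locate the largest difference within 200–400 ms. The data are simulated and the single-channel simplification is an assumption.

```python
# Sketch: "sound" minus "non-sound" difference wave and its peak latency in the
# 200-400 ms window at one frontotemporal channel. Data are simulated.
import numpy as np

sfreq = 500.0
t_axis = np.arange(-0.1, 0.6, 1 / sfreq)
rng = np.random.default_rng(8)
sound_evoked = rng.normal(size=t_axis.size)
nonsound_evoked = rng.normal(size=t_axis.size)

diff = sound_evoked - nonsound_evoked
window = (t_axis >= 0.2) & (t_axis <= 0.4)
peak_idx = np.argmax(np.abs(diff[window]))
print("peak difference at", round(t_axis[window][peak_idx] * 1000), "ms")
```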

