Neural sources of letter and Vernier acuity

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Elham Barzegaran ◽  
Anthony M. Norcia

Abstract: Visual acuity can be measured in many different ways, including with letters and Vernier offsets. Prior psychophysical work has suggested that the two acuities are strongly linked, given that they both depend strongly on retinal eccentricity and both are similarly affected in amblyopia. Here we used high-density EEG recordings to ask whether the underlying neural sources are common, as suggested by the psychophysics, or distinct. To measure visual acuity for letters, we recorded evoked potentials to 3 Hz alternations between intact and scrambled text composed of letters of varying size. To measure visual acuity for Vernier offsets, we recorded evoked potentials to 3 Hz alternations between bar gratings with and without a set of Vernier offsets. Both alternation types elicited robust activity at the 3 Hz stimulus frequency that scaled in amplitude with both letter and offset size, starting near threshold. Letter and Vernier offset responses differed in both their scalp topography and temporal dynamics. The earliest evoked responses to letters occurred over lateral occipital visual areas, predominantly over the left hemisphere. Later responses were measured at electrodes over early visual cortex, suggesting that letter structure is first extracted in second-tier extra-striate areas and that responses over early visual areas are due to feedback. Responses to Vernier offsets, by contrast, occurred first at medial occipital electrodes, with responses at later time points being more broadly distributed, consistent with feedforward pathway mediation. The previously observed commonalities between letter and Vernier acuity may be due to common bottlenecks in early visual cortex, but not because the two tasks are subserved by a common network of visual areas.
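The abstract does not include analysis code; as an illustration, the amplitude of an evoked response at the 3 Hz stimulus frequency can be read off the Fourier spectrum of an averaged epoch. The sketch below uses synthetic data and assumed parameters (sampling rate, epoch length), not the authors' pipeline:

```python
import numpy as np

def amplitude_at_frequency(signal, fs, target_hz):
    """Single-sided amplitude at a target frequency via the FFT.

    signal: 1-D time series (e.g., one electrode's condition average).
    fs: sampling rate in Hz; target_hz: the stimulus/tag frequency.
    """
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - target_hz))
    return 2.0 * np.abs(spectrum[idx])   # factor 2: single-sided, non-DC bin

# Synthetic check: a 3 Hz component of amplitude 1.5 buried in noise.
fs = 500
t = np.arange(0, 2.0, 1.0 / fs)          # 2 s epoch -> 0.5 Hz bin resolution
rng = np.random.default_rng(0)
eeg = 1.5 * np.sin(2 * np.pi * 3.0 * t) + 0.2 * rng.standard_normal(t.size)
amp3 = amplitude_at_frequency(eeg, fs, 3.0)
```

Choosing the epoch length as an integer number of stimulus cycles places 3 Hz exactly on an FFT bin, which avoids spectral leakage into neighboring bins.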

2014 ◽  
Vol 26 (10) ◽  
pp. 2370-2384 ◽  
Author(s):  
Ramakrishna Chakravarthi ◽  
Thomas A. Carlson ◽  
Julie Chaffin ◽  
Jeremy Turret ◽  
Rufin VanRullen

Objects occupy space. How does the brain represent the spatial location of objects? Retinotopic early visual cortex has precise location information but can only segment simple objects. On the other hand, higher visual areas can resolve complex objects but only have coarse location information. Thus, coarse location of complex objects might be represented by either (a) feedback from higher areas to early retinotopic areas or (b) coarse position encoding in higher areas. We tested these alternatives by presenting various kinds of first-order (edge-defined) and second-order (texture) objects. We applied multivariate classifiers to the pattern of EEG amplitudes across the scalp at a range of time points to trace the temporal dynamics of coarse location representation. For edge-defined objects, peak classification performance was high and early and thus attributable to the retinotopic layout of early visual cortex. For texture objects, it was low and late. Crucially, despite these differences in peak performance and timing, training a classifier on one object and testing it on others revealed that the topography at peak performance was the same for both first- and second-order objects. That is, the same location information, encoded by early visual areas, was available for both edge-defined and texture objects at different time points. These results indicate that locations of complex objects such as textures, although not represented in the bottom-up sweep, are encoded later by neural patterns resembling the bottom-up ones. We conclude that feedback mechanisms play an important role in coarse location representation of complex objects.
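The classifier details are not given in the abstract; a toy sketch of the general approach, time-resolved decoding of EEG scalp patterns with train-on-one-object-type, test-on-the-other generalization, might look like the following (synthetic data; LDA is an arbitrary classifier choice, and all dimensions and effect sizes are assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_times = 80, 32, 50
y = np.repeat([0, 1], n_trials // 2)          # stimulus location (left/right)

# Both object types share one location-specific topography, but it appears
# early for edge-defined objects and late (and weaker) for textures.
pattern = rng.standard_normal(n_electrodes)
X_edge = rng.standard_normal((n_trials, n_electrodes, n_times))
X_text = rng.standard_normal((n_trials, n_electrodes, n_times))
X_edge[y == 1, :, 10:] += pattern[:, None]
X_text[y == 1, :, 30:] += 0.5 * pattern[:, None]

def decode_timecourse(X, y):
    """Cross-validated location decoding accuracy at each time point."""
    return np.array([cross_val_score(LinearDiscriminantAnalysis(),
                                     X[:, :, t], y, cv=5).mean()
                     for t in range(X.shape[-1])])

def cross_train(X_train, X_test, y, t_train, t_test):
    """Train at one object type/time point, test at another (generalization)."""
    clf = LinearDiscriminantAnalysis().fit(X_train[:, :, t_train], y)
    return clf.score(X_test[:, :, t_test], y)

acc_edge = decode_timecourse(X_edge, y)
# If the topography is shared, a classifier trained on edge-defined objects
# transfers above chance to texture objects at their (later) peak.
transfer = cross_train(X_edge, X_text, y, t_train=20, t_test=40)
```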


2016 ◽  
Vol 28 (4) ◽  
pp. 643-655 ◽  
Author(s):  
Matthias M. Müller ◽  
Mireille Trautmann ◽  
Christian Keitel

Shifting attention from one color to another color, or from color to another feature dimension such as shape or orientation, is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention follow identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex, to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of the behavioral data. Overall, our results suggest that the neural dynamics of a shift of attention away from color or orientation differ with the shift's destination, namely whether attention moves toward color or toward orientation.


2019 ◽  
Vol 19 (6) ◽  
pp. 8 ◽  
Author(s):  
Jing Chen ◽  
Meaghan McManus ◽  
Matteo Valsecchi ◽  
Laurence R. Harris ◽  
Karl R. Gegenfurtner

2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Sae Kaneko ◽  
Ichiro Kuriki ◽  
Søren K Andersen

Abstract: Colors are represented in cone-opponent signals (L-M versus S cones) at least up to the level of the inputs to the primary visual cortex. We explored hue-selective responses in early cortical visual areas through recordings of steady-state visual evoked potentials (SSVEPs), elicited by a flickering checkerboard whose color smoothly swept around the hue circle defined in a cone-opponent color space. If cone opponency dominates hue representation in the source of the SSVEP signals, SSVEP amplitudes as a function of hue should form a profile that is line-symmetric about the cardinal axes of the cone-opponent color space. The observed SSVEP responses were clearly chromatic, with increased SSVEP amplitudes and reduced response latencies for higher contrast conditions. The overall elliptic amplitude profile was significantly tilted away from the cardinal axes, with the highest amplitudes in the “lime-magenta” direction, indicating that the hue representation in question is not dominated by cone opponency. The observed SSVEP amplitude hue profile was better described as a summation of a perceptual response and cone-opponent responses, with a larger weight on the former. These results indicate that hue representations in the early visual cortex, as measured by the SSVEP technique, are possibly related to perceptual color contrast.
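The component profiles below are hypothetical stand-ins, not the paper's measured responses; they only illustrate how an SSVEP amplitude-versus-hue profile could be decomposed into a weighted sum of perceptual and cone-opponent components by linear least squares:

```python
import numpy as np

hues = np.deg2rad(np.arange(0, 360, 15))   # hue angles in cone-opponent space

# Hypothetical component profiles (illustrative shapes only): cone-opponent
# responses peak on the cardinal axes; the "perceptual" profile is tilted
# toward the lime-magenta (135-degree) direction.
cone_LM = np.abs(np.cos(hues))
cone_S = np.abs(np.sin(hues))
perceptual = 1.0 + 0.3 * np.cos(2 * (hues - np.deg2rad(135)))

# Synthesize an amplitude profile dominated by the perceptual component.
true_w = np.array([1.2, 0.3, 0.2])          # weights: perceptual, L-M, S
amps = true_w @ np.vstack([perceptual, cone_LM, cone_S])

# Recover the weights by linear least squares.
design = np.column_stack([perceptual, cone_LM, cone_S])
w, *_ = np.linalg.lstsq(design, amps, rcond=None)
```

On real data, a larger recovered weight on the perceptual component than on either cone-opponent component would correspond to the pattern the abstract reports.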


2017 ◽  
Author(s):  
Peter J. Kohler ◽  
Benoit R. Cottereau ◽  
Anthony M. Norcia

The borders between objects and their backgrounds create discontinuities in image feature maps that can be used to recover object shape. Here we used functional magnetic resonance imaging (fMRI) to study the sensitivity of visual cortex to two of the most important image segmentation cues: relative motion and relative disparity. Relative motion and disparity cues were isolated using random-dot kinematograms and stereograms, respectively. For motion-defined boundaries, we found a strong retinotopically organized representation of a 2-degree radius motion-defined disk, starting in V1 and extending through V2 and V3. In the surrounding region, we observed phase-inverted activations indicative of suppression, extending out to at least 6 degrees of retinal eccentricity. For relative disparity, figure responses were only robust in V3, while suppression was observed in all early visual areas. When attention was captured at fixation, figure responses persisted while suppression did not, suggesting that suppression is generated by attentional feedback from higher-order visual areas. Outside of the early visual areas, several areas were sensitive to both types of cues, most notably hV4, LO1 and V3B, making them additional candidate areas for motion- and disparity-cue combination. The overall pattern of extra-striate activations is consistent with recent three-stream models of cortical organization.


2020 ◽  
Vol 123 (2) ◽  
pp. 773-785 ◽  
Author(s):  
Sara Aghajari ◽  
Louis N. Vinke ◽  
Sam Ling

Neurons within early visual cortex are selective for basic image statistics, including spatial frequency. However, these neurons are thought to act as band-pass filters, with the window of spatial frequency sensitivity varying across the visual field and across visual areas. Although a handful of previous functional (f)MRI studies have examined human spatial frequency sensitivity using conventional designs and analysis methods, these measurements are time consuming and fail to capture the precision of spatial frequency tuning (bandwidth). In this study, we introduce a model-driven approach to fMRI analyses that allows for fast and efficient estimation of population spatial frequency tuning (pSFT) for individual voxels. Blood oxygen level-dependent (BOLD) responses within early visual cortex were acquired while subjects viewed a series of full-field stimuli that swept through a large range of spatial frequency content. Each stimulus was generated by band-pass filtering white noise with a central frequency that changed periodically between a minimum of 0.5 cycles/degree (cpd) and a maximum of 12 cpd. To estimate the underlying frequency tuning of each voxel, we assumed a log-Gaussian pSFT and optimized the parameters of this function by comparing our model output against the measured BOLD time series. Consistent with previous studies, our results show that an increase in eccentricity within each visual area is accompanied by a drop in the peak spatial frequency of the pSFT. Moreover, we found that pSFT bandwidth depends on eccentricity and is correlated with the pSFT peak; populations with lower peaks possess broader bandwidths in logarithmic scale, whereas in linear scale this relationship is reversed.

NEW & NOTEWORTHY Spatial frequency selectivity is a hallmark property of early visuocortical neurons, and mapping these sensitivities gives us crucial insight into the hierarchical organization of information within visual areas. Due to technical obstacles, we lack a comprehensive picture of the properties of this sensitivity in humans. Here, we introduce a new method, coined population spatial frequency tuning mapping, which circumvents the limitations of the conventional neuroimaging methods, yielding a fuller visuocortical map of spatial frequency sensitivity.
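A minimal sketch of the core model fit, a log-Gaussian tuning function in spatial frequency fit to a (here synthetic) voxel response; it assumes `scipy.optimize.curve_fit` and leaves out the hemodynamic modeling that the full pSFT method would require:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian_psft(sf, peak_cpd, bw_log, gain):
    """Log-Gaussian spatial-frequency tuning: a Gaussian in log(SF)."""
    return gain * np.exp(-0.5 * ((np.log(sf) - np.log(peak_cpd)) / bw_log) ** 2)

# Stimuli sweep spatial frequency between 0.5 and 12 cpd, as in the study.
sfs = np.geomspace(0.5, 12, 40)

# Synthetic voxel: true peak at 2 cpd with logarithmic bandwidth 0.8.
rng = np.random.default_rng(2)
response = log_gaussian_psft(sfs, peak_cpd=2.0, bw_log=0.8, gain=1.0)
response += 0.02 * rng.standard_normal(sfs.size)

# Fit the three tuning parameters to the measured response.
popt, _ = curve_fit(log_gaussian_psft, sfs, response,
                    p0=[1.0, 1.0, 0.5],
                    bounds=([0.1, 0.05, 0.0], [20.0, 5.0, 10.0]))
peak_est, bw_est, gain_est = popt
```

Working in log(SF) means the bandwidth parameter is an octave-like measure, which is why a fixed logarithmic bandwidth corresponds to a broader linear bandwidth at higher peak frequencies.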


2018 ◽  
Vol 30 (9) ◽  
pp. 1281-1297 ◽  
Author(s):  
Alexa Tompary ◽  
Naseem Al-Aidroos ◽  
Nicholas B. Turk-Browne

Top-down attention prioritizes the processing of goal-relevant information throughout visual cortex based on where that information is found in space and what it looks like. Whereas attentional goals often have both spatial and featural components, most research on the neural basis of attention has examined these components separately. Here we investigated how these attentional components are integrated by examining the attentional modulation of functional connectivity between visual areas with different selectivity. Specifically, we used fMRI to measure temporal correlations between spatially selective regions of early visual cortex and category-selective regions in ventral temporal cortex while participants performed a task that benefitted from both spatial and categorical attention. We found that categorical attention modulated the connectivity of category-selective areas, but only with retinotopic areas that coded for the spatially attended location. Similarly, spatial attention modulated the connectivity of retinotopic areas only with the areas coding for the attended category. This pattern of results suggests that attentional modulation of connectivity is driven both by spatial selection and featural biases. Combined with exploratory analyses of frontoparietal areas that track these changes in connectivity among visual areas, this study begins to shed light on how different components of attention are integrated in support of more complex behavioral goals.
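As a toy illustration of the connectivity measure, the temporal correlation between two ROI time series can be computed directly; the ROI names and the effect below are hypothetical, chosen only to mirror the attended versus unattended contrast:

```python
import numpy as np

def roi_connectivity(ts_a, ts_b):
    """Temporal (Pearson) correlation between two ROI time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

rng = np.random.default_rng(3)
n_tr = 200                                  # number of fMRI time points (TRs)
shared = rng.standard_normal(n_tr)          # common fluctuation

# Hypothetical ROIs: the retinotopic region coding the attended location
# shares variance with the attended-category region; a control retinotopic
# region does not.
v1_attended = shared + 0.5 * rng.standard_normal(n_tr)
vtc_category = shared + 0.5 * rng.standard_normal(n_tr)
v1_control = rng.standard_normal(n_tr)

r_attended = roi_connectivity(v1_attended, vtc_category)
r_control = roi_connectivity(v1_control, vtc_category)
```

In practice such correlations are computed on residual (task-regressed) time series, so that they reflect background connectivity rather than shared stimulus-evoked responses.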


2020 ◽  
pp. 1-8
Author(s):  
Anqi Wang ◽  
Lihong Chen ◽  
Yi Jiang

Human early visual cortex has long been suggested to play a crucial role in context-dependent visual size perception through either lateral interaction or feedback projections from higher to lower visual areas. We investigated the causal contribution of early visual cortex to context-dependent visual size perception using the technique of transcranial direct current stimulation and two well-known size illusions (i.e., the Ebbinghaus and Ponzo illusions) and further elucidated the underlying mechanism that mediates the effect of transcranial direct current stimulation over early visual cortex. The results showed that the magnitudes of both size illusions were significantly increased by anodal stimulation relative to sham stimulation but left unaltered by cathodal stimulation. Moreover, the anodal effect persisted even when the central target and surrounding inducers of the Ebbinghaus configuration were presented to different eyes, with the effect lasting no more than 15 min. These findings provide compelling evidence that anodal occipital stimulation enhances the perceived visual size illusions, which is possibly mediated by weakening the suppressive function of the feedback connections from higher to lower visual areas. Moreover, the current study provides further support for the causal role of early visual cortex in the neural processing of context-dependent visual size perception.


2014 ◽  
Vol 112 (5) ◽  
pp. 1217-1227 ◽  
Author(s):  
Anna Byers ◽  
John T. Serences

Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars.
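Reconstructing orientation-selective response profiles from voxel activation patterns is commonly done with a forward (inverted) encoding model; the sketch below uses an assumed idealized channel basis and synthetic voxels, not the authors' exact model:

```python
import numpy as np

oris = np.arange(0, 180, 20)       # nine stimulus orientations (deg)
centers = np.arange(0, 180, 20)    # nine channel centers (deg)

def channel_responses(ori):
    """Idealized orientation channels: rectified cosine raised to a power
    (orientation is 180-degree periodic, hence the factor of 2)."""
    d = np.deg2rad(ori - centers)
    return np.clip(np.cos(2 * d), 0, None) ** 5

rng = np.random.default_rng(4)
n_voxels = 50
W = rng.standard_normal((n_voxels, len(centers)))   # true voxel-channel weights

def simulate_bold(ori_labels, noise=0.1):
    """Trials x voxels BOLD patterns generated from the channel model."""
    C = np.stack([channel_responses(o) for o in ori_labels])
    return C @ W.T + noise * rng.standard_normal((len(ori_labels), n_voxels))

# Training set: 10 repetitions of each orientation.
train_oris = np.tile(oris, 10)
B_train = simulate_bold(train_oris)
C_train = np.stack([channel_responses(o) for o in train_oris])

# Step 1: estimate voxel weights from the training data.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0].T   # voxels x channels
# Step 2: invert the weights to reconstruct the channel profile of a test trial.
B_test = simulate_bold([60])
C_hat = np.linalg.lstsq(W_hat, B_test.T, rcond=None)[0].ravel()
```

A training-related amplitude change, as reported in the abstract, would show up as a taller reconstructed profile around the presented orientation after learning.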


2020 ◽  
Author(s):  
Ke Bo ◽  
Siyang Yin ◽  
Yuelu Liu ◽  
Zhenhong Hu ◽  
Sreenivasan Meyyapan ◽  
...  

Abstract: The perception of opportunities and threats in complex scenes represents one of the main functions of the human visual system. In the laboratory, its neurophysiological basis is often studied by having observers view pictures varying in affective content. This body of work has consistently shown that viewing emotionally engaging, compared to neutral, pictures (1) heightens blood flow in limbic structures and frontoparietal cortex, as well as in anterior ventral and dorsal visual cortex, and (2) prompts an increase in the late positive event-related potential (LPP), a scalp-recorded and time-sensitive index of engagement within the network of aforementioned neural structures. The role of retinotopic visual cortex in this process has, however, been contentious, with competing theoretical notions predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. The present study used multimodal neuroimaging and machine learning to address this question by examining the large-scale neural representations of affective pictures. Recording EEG and fMRI simultaneously while observers viewed pleasant, unpleasant, and neutral affective pictures, and applying multivariate pattern analysis to single-trial BOLD activities in retinotopic visual cortex, we identified three robust findings. First, both unpleasant-versus-neutral and pleasant-versus-neutral decoding accuracies were well above chance level in all retinotopic visual areas, including primary visual cortex. Second, the decoding accuracy in ventral visual cortex, but not in early visual cortex or dorsal visual cortex, was significantly correlated with LPP amplitude. Third, effective connectivity from amygdala to ventral visual cortex predicted unpleasant-versus-neutral decoding accuracy, and effective connectivity from ventral frontal cortex to ventral visual cortex predicted pleasant-versus-neutral decoding accuracy.
These results suggest that affective pictures evoked valence-specific multivoxel neural representations in retinotopic visual cortex and that these multivoxel representations were influenced by reentry signals from limbic and frontal brain regions.
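A minimal sketch of the multivariate pattern analysis step, cross-validated decoding of valence from single-trial voxel patterns, on synthetic data (the classifier choice, trial counts, and effect size are all assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 120, 200
y = np.repeat([0, 1], n_trials // 2)        # neutral vs. unpleasant trials

# A weak, distributed valence-specific pattern added to half the trials.
signal = rng.standard_normal(n_voxels)
X = rng.standard_normal((n_trials, n_voxels))
X[y == 1] += 0.15 * signal

# 10-fold cross-validated decoding accuracy (chance = 0.5).
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=10).mean()
```

Above-chance accuracy here indicates only that some multivoxel pattern distinguishes the conditions; as the abstract notes, localizing its source (feedforward versus reentrant) requires the additional connectivity analyses.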

