Derivation of visual timing-tuned neural responses from early visual stimulus representations

Author(s):  
Evi Hendrikx ◽  
Jacob Paul ◽  
Martijn van Ackooij ◽  
Nathan van der Stoep ◽  
Ben Harvey

Abstract Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. Tuned neural responses to visual event timing have been found in areas of the association cortices implicated in these processes. Here we ask whether and where the human brain derives these timing-tuned responses from the responses of early visual cortex, which monotonically increase with event duration and frequency. Using 7T fMRI and neural model-based analyses, we find a gradual transition from monotonically increasing to timing-tuned neural responses beginning in area MT/V5. Therefore, successive stages of visual processing gradually derive timing-tuned response components from the inherent modulation of sensory responses by event timing. This additional timing-tuned response component was independent of retinotopic location. We propose that this hierarchical derivation of timing-tuned responses from sensory processing areas quantifies sensory event timing while abstracting temporal representations from the spatial properties of their inputs.
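The distinction between monotonically increasing and timing-tuned responses described above can be made concrete with a minimal sketch. The functional forms and parameter values here are illustrative assumptions, not the authors' fitted neural models:

```python
import numpy as np

def monotonic_response(duration, slope=2.0):
    """Early-visual-cortex-like response: grows monotonically
    with event duration (illustrative linear form)."""
    return slope * duration

def tuned_response(duration, preferred=0.4, width=0.15):
    """Association-cortex-like response: Gaussian tuning around
    a preferred event duration (illustrative parameters)."""
    return np.exp(-((duration - preferred) ** 2) / (2.0 * width ** 2))

durations = np.array([0.1, 0.4, 0.8])  # event durations in seconds
print(monotonic_response(durations))   # keeps increasing with duration
print(tuned_response(durations))       # peaks near the preferred 0.4 s
```

A tuned unit responds maximally near its preferred timing and falls off on either side, whereas the monotonic response never decreases; a model-based analysis of the kind described can ask which regime better accounts for each recording site.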

2018 ◽  
Author(s):  
Liyu Cao ◽  
Barbara Händel

Abstract Cognitive processes are almost exclusively investigated under highly controlled settings in which voluntary body movements are suppressed. However, recent animal work suggests differences in sensory processing between movement states by showing drastically changed neural responses in early visual areas between locomotion and stillness. Does locomotion also modulate visual cortical activity in humans, and what are its perceptual consequences? Here, we present converging neurophysiological and behavioural evidence that walking leads to an increased influence of peripheral stimuli on central visual input. This modulation of visual processing due to walking is accompanied by a change in alpha oscillations, suggestive of an attentional shift to the periphery during walking. Overall, our study shows that strategies of sensory information processing can differ between movement states. This finding further demonstrates that a comprehensive understanding of human perception and cognition critically depends on the consideration of natural behaviour.


2018 ◽  
Author(s):  
Michael-Paul Schallmo ◽  
Alex M. Kale ◽  
Scott O. Murray

Abstract What we see depends on the spatial context in which it appears. Previous work has linked the reduction of perceived stimulus contrast in the presence of surrounding stimuli to the suppression of neural responses in early visual cortex. It has also been suggested that this surround suppression depends on at least two separable neural mechanisms, one ‘low-level’ and one ‘higher-level,’ which can be differentiated by their response characteristics. In a recent study, we found evidence consistent with these two suppression mechanisms using psychophysical measurements of perceived contrast. Here, we used EEG to demonstrate for the first time that neural responses in the human occipital lobe also show evidence of two separable suppression mechanisms. Eighteen adults (10 female and 8 male) each participated in a total of 3 experimental sessions, in which they viewed visual stimuli through a mirror stereoscope. The first session was used to definitively identify the C1 component, while the second and third comprised the main experiment. ERPs were measured in response to center gratings either with no surround, or with surrounding gratings oriented parallel or orthogonal, and presented either in the same eye (monoptic) or opposite eye (dichoptic). We found that the earliest ERP component (C1; ∼60 ms) was suppressed in the presence of surrounding stimuli, but that this suppression did not depend on surround configuration, suggesting a low-level suppression mechanism which is not tuned for relative orientation. A later response component (N1; ∼160 ms) showed stronger surround suppression for parallel and monoptic stimulus configurations, consistent with our earlier psychophysical results and a higher-level, binocular, orientation-tuned suppression mechanism. We conclude that these two surround suppression mechanisms have distinct response time courses in the human visual system, which can be differentiated using electrophysiology.
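The two suppression signatures described above can be summarized with a simple suppression index. The amplitude values below are hypothetical, chosen only to mirror the qualitative pattern reported, not taken from the study:

```python
def suppression_index(center_only_amp, center_plus_surround_amp):
    """Fractional reduction of ERP amplitude caused by adding a
    surround: 0 = no suppression, 1 = complete suppression."""
    return 1.0 - center_plus_surround_amp / center_only_amp

# Hypothetical C1 amplitudes (~60 ms, microvolts): suppression is
# similar for parallel and orthogonal surrounds (untuned mechanism).
c1 = {"none": 2.0, "parallel": 1.5, "orthogonal": 1.5}
print(suppression_index(c1["none"], c1["parallel"]))    # 0.25
print(suppression_index(c1["none"], c1["orthogonal"]))  # 0.25

# Hypothetical N1 amplitudes (~160 ms): stronger suppression for
# parallel surrounds (orientation-tuned, higher-level mechanism).
n1 = {"none": 4.0, "parallel": 2.0, "orthogonal": 3.0}
print(suppression_index(n1["none"], n1["parallel"]))    # 0.5
print(suppression_index(n1["none"], n1["orthogonal"]))  # 0.25
```

Orientation tuning shows up as a difference between the parallel and orthogonal indices, which is present for the later (N1) component but absent for the earlier (C1) one in this sketch.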


2012 ◽  
Vol 24 (1) ◽  
pp. 28-38 ◽  
Author(s):  
Stephen J. Johnston ◽  
David E. J. Linden ◽  
Kimron L. Shapiro

If two centrally presented visual stimuli occur within approximately half a second of each other, the second target often fails to be reported correctly. This effect, called the attentional blink (AB; Raymond, J. E., Shapiro, K. L., & Arnell, K. M. Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849–860, 1992), has been attributed to a resource “bottleneck,” likely arising as a failure of attention during encoding into or retrieval from visual working memory (WM). Here we present participants with a hybrid WM–AB study while they undergo fMRI to provide insight into the neural underpinnings of this bottleneck. Consistent with a WM-based bottleneck account, fronto-parietal brain areas exhibited a WM load-dependent modulation of neural responses during the AB task. These results are consistent with the view that WM and attention share a capacity-limited resource and provide insight into the neural structures that underlie resource allocation in tasks requiring joint use of WM and attention.


2021 ◽  
pp. 1-12 ◽  
Author(s):  
Joonkoo Park ◽  
Sonia Godbole ◽  
Marty G. Woldorff ◽  
Elizabeth M. Brannon

Abstract Whether and how the brain encodes discrete numerical magnitude differently from continuous nonnumerical magnitude is hotly debated. In a previous set of studies, we orthogonally varied numerical (numerosity) and nonnumerical (size and spacing) dimensions of dot arrays and demonstrated a strong modulation of early visual evoked potentials (VEPs) by numerosity and not by nonnumerical dimensions. Although very little is known about the brain's response to systematic changes in continuous dimensions of a dot array, some authors intuit that the visual processing stream must be more sensitive to continuous magnitude information than to numerosity. To address this possibility, we measured VEPs of participants viewing dot arrays that changed exclusively in one nonnumerical magnitude dimension at a time (size or spacing) while holding numerosity constant and compared this to a condition where numerosity was changed while holding size and spacing constant. We found reliable but small neural sensitivity to exclusive changes in size and spacing; however, changing numerosity elicited a much more robust modulation of the VEPs. Together with previous work, these findings suggest that sensitivity to magnitude dimensions in early visual cortex is context dependent: The brain is moderately sensitive to changes in size and spacing when numerosity is held constant, but sensitivity to these continuous variables diminishes to a negligible level when numerosity is allowed to vary at the same time. Neurophysiological explanations for the encoding and context dependency of numerical and nonnumerical magnitudes are proposed within the framework of neuronal normalization.


2013 ◽  
Vol 25 (4) ◽  
pp. 547-557 ◽  
Author(s):  
Maital Neta ◽  
William M. Kelley ◽  
Paul J. Whalen

Extant research has examined the process of decision making under uncertainty, specifically in situations of ambiguity. However, much of this work has been conducted in the context of semantic and low-level visual processing. An open question is whether ambiguity in social signals (e.g., emotional facial expressions) is processed similarly or whether a unique set of processors comes online to resolve ambiguity in a social context. Our work has examined ambiguity using surprised facial expressions, as they have predicted both positive and negative outcomes in the past. Specifically, whereas some people tended to interpret surprise as negatively valenced, others tended toward a more positive interpretation. Here, we examined neural responses to social ambiguity using faces (surprise) and nonface emotional scenes (International Affective Picture System). Moreover, we examined whether these effects are specific to ambiguity resolution (i.e., judgments about the ambiguity) or whether similar effects would be demonstrated for incidental judgments (e.g., nonvalence judgments about ambiguously valenced stimuli). We found that a distinct task control (i.e., cingulo-opercular) network was more active when resolving ambiguity. We also found that activity in the ventral amygdala was greater for faces and scenes that were rated explicitly along the dimension of valence, consistent with findings that the ventral amygdala tracks valence. Taken together, there is a complex neural architecture that supports decision making in the presence of ambiguity: (a) a core set of cortical structures engaged for explicit ambiguity processing across stimulus boundaries and (b) other dedicated circuits for biologically relevant learning situations involving faces.


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
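The representational similarity logic of the study above can be sketched in a few lines. All arrays here are random stand-ins, and the category count, voxel count, and the synthetic link between neural dissimilarity and search time are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_categories, n_voxels = 8, 100

# Stand-in neural response patterns (category x voxel), as if
# measured for each object category presented in isolation.
patterns = rng.normal(size=(n_categories, n_voxels))

# Neural representational dissimilarity matrix (RDM):
# 1 - Pearson correlation between each pair of category patterns.
neural_rdm = 1.0 - np.corrcoef(patterns)

# Synthetic behavior: search is slower when target and distractor
# categories are neurally more similar (smaller dissimilarity).
pairs = np.triu_indices(n_categories, k=1)  # unique category pairs
search_rt = 1.2 - 0.5 * neural_rdm[pairs] + rng.normal(0.0, 0.02, pairs[0].size)

# Brain/behavior correlation across category pairs: by construction,
# larger neural dissimilarity goes with faster search here.
r = np.corrcoef(neural_rdm[pairs], search_rt)[0, 1]
print(f"brain/behavior correlation: r = {r:.2f}")
```

In the actual analysis the behavioral matrix comes from measured search times rather than being synthesized, and the correlation is computed separately for each visual region to ask which representational structures predict behavior.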


2018 ◽  
Author(s):  
Andreea Lazar ◽  
Chris Lewis ◽  
Pascal Fries ◽  
Wolf Singer ◽  
Danko Nikolić

Summary Sensory exposure alters the response properties of individual neurons in primary sensory cortices. However, it remains unclear how these changes affect stimulus encoding by populations of sensory cells. Here, recording from populations of neurons in cat primary visual cortex, we demonstrate that visual exposure enhances stimulus encoding and discrimination. We find that repeated presentation of brief, high-contrast shapes results in a stereotyped, biphasic population response consisting of a short-latency transient, followed by a late and extended period of reverberatory activity. Visual exposure selectively improves the stimulus specificity of the reverberatory activity, by increasing the magnitude and decreasing the trial-to-trial variability of the neuronal response. Critically, this improved stimulus encoding is distributed across the population and depends on precise temporal coordination. Our findings provide evidence for the existence of an exposure-driven optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
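The combination of larger response magnitude and lower trial-to-trial variability described above is often summarized with the Fano factor (spike-count variance divided by mean across trials). A minimal sketch, using made-up spike counts rather than the study's recordings:

```python
import numpy as np

def fano_factor(spike_counts):
    """Trial-to-trial variability of a neuron's spike count:
    variance divided by mean across trials (lower = more reliable)."""
    counts = np.asarray(spike_counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical spike counts in the late, reverberatory window for
# one neuron, before and after repeated stimulus exposure.
before = [4, 9, 2, 7, 3, 8, 1, 6]       # variable, lower mean
after  = [10, 12, 11, 13, 10, 12, 11, 11]  # larger, more reliable

print(fano_factor(before))  # high variability relative to mean
print(fano_factor(after))   # exposure lowers variability
```

A decrease in the Fano factor together with an increase in mean count is exactly the pattern that would sharpen stimulus discrimination by a downstream readout.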


2016 ◽  
Vol 16 (12) ◽  
pp. 23 ◽  
Author(s):  
Simona Monaco ◽  
Elisa Pellencin ◽  
Giulia Malfatti ◽  
Luca Turella

2020 ◽  
Author(s):  
Zixuan Wang ◽  
Yuki Murai ◽  
David Whitney

Abstract Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.

