Disentangling locus of perceptual learning in the visual hierarchy of motion processing

2018 ◽  
Author(s):  
Ruyuan Zhang ◽  
Duje Tadin

ABSTRACT
Visual perceptual learning (VPL) can lead to long-lasting perceptual improvements. While the efficacy of VPL is well established, there is still considerable debate about what mechanisms underlie its effects. Much of this debate concentrates on where along the visual processing hierarchy behaviorally relevant plasticity takes place. Here, we aimed to tackle this question in the context of motion processing, a domain where links between behavior and processing hierarchy are well established. Specifically, we took advantage of an established transition from component-dependent representations at the earliest level to pattern-dependent representations at the middle level of cortical motion processing. We trained two groups of participants on the same motion direction identification task using either grating or plaid stimuli. A set of pre- and post-training tests was used to determine the degree of learning specificity and generalizability. This approach allowed us to disentangle contributions from both low- and mid-level motion processing, as well as high-level cognitive changes. We observed a complete bidirectional transfer of learning between component and pattern stimuli as long as they shared the same apparent motion direction. This result indicates learning-induced plasticity at intermediate levels of motion processing. Moreover, we found that motion VPL is specific to the trained stimulus direction, speed, size, and contrast, highlighting the pivotal role of basic visual features in VPL and diminishing the possibility of non-sensory, decision-level enhancements. Taken together, our study psychophysically examined a variety of factors mediating motion VPL, and demonstrated that motion VPL most likely alters visual computation at the middle stage of motion processing.
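The component-versus-pattern distinction above rests on how plaid stimuli are constructed: a plaid is the superposition of two drifting gratings whose component directions straddle the perceived pattern direction. A minimal NumPy sketch (illustrative only; the spatial frequency, component half-angle, and speed here are arbitrary choices, not the study's parameters):

```python
import numpy as np

def plaid_frame(t, pattern_dir_deg=0.0, half_angle_deg=60.0,
                sf=0.02, speed=3.0, size=256):
    """One frame of a plaid: the sum of two drifting sine gratings whose
    component directions sit at +/- half_angle_deg around the intended
    pattern direction."""
    y, x = np.mgrid[0:size, 0:size]
    frame = np.zeros((size, size))
    for sign in (-1, 1):
        theta = np.deg2rad(pattern_dir_deg + sign * half_angle_deg)
        # Each component's normal speed is chosen so that, by the
        # intersection-of-constraints rule, the coherent pattern moves
        # at `speed` along `pattern_dir_deg`.
        comp_speed = speed * np.cos(np.deg2rad(half_angle_deg))
        phase = 2 * np.pi * sf * (x * np.cos(theta) + y * np.sin(theta)
                                  - comp_speed * t)
        frame += 0.5 * np.sin(phase)
    return frame
```

Setting `half_angle_deg=0` collapses the two components into a single grating, i.e., the component-stimulus condition.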

2017 ◽  
Vol 118 (3) ◽  
pp. 1542-1555 ◽  
Author(s):  
Bastian Schledde ◽  
F. Orlando Galashan ◽  
Magdalena Przybyla ◽  
Andreas K. Kreiter ◽  
Detlef Wegener

Nonspatially selective attention is based on the notion that specific features or objects in the visual environment are effectively prioritized in cortical visual processing. Feature-based attention (FBA), in particular, is a well-studied process that dynamically and selectively addresses neurons preferentially processing the attended feature attribute (e.g., leftward motion). In everyday life, however, behavior may require high sensitivity for an entire feature dimension (e.g., motion), but experimental evidence for a feature dimension-specific attentional modulation at the cellular level is lacking. Therefore, we investigated neuronal activity in the macaque motion-selective middle temporal area (MT) in an experimental setting requiring the monkeys to detect either a motion change or a color change. We hypothesized that neural activity in MT is enhanced when the task requires perceptual sensitivity to motion. In line with this, we found that mean firing rates were higher in the motion task and that response variability and latency were lower compared with values in the color task, despite identical visual stimulation. This task-specific, dimension-based modulation of motion processing emerged already in the absence of visual input, was independent of the relation between the attended and stimulating motion direction, and was accompanied by a spatially global reduction of neuronal variability. The results provide single-cell support for the hypothesis of a feature dimension-specific top-down signal emphasizing the processing of an entire feature class.
NEW & NOTEWORTHY Cortical processing serving visual perception prioritizes information according to current task requirements. We provide evidence in favor of a dimension-based attentional mechanism addressing all neurons that process visual information in the task-relevant feature domain. Behavioral tasks required monkeys to attend either color or motion, causing modulations of response strength, variability, latency, and baseline activity of neurons in motion-selective monkey area MT, irrespective of the attended motion direction but specific to the attended feature dimension.


2009 ◽  
Vol 5 (2) ◽  
pp. 270-273 ◽  
Author(s):  
Szonya Durant ◽  
Johannes M Zanker

Illusory position shifts induced by motion suggest that motion processing can interfere with perceived position. This may be because accurate position representation is lost during successive visual processing steps. We found that complex motion patterns, which can only be extracted at a global level by pooling and segmenting local motion signals and integrating over time, can influence perceived position. We used motion-defined Gabor patterns containing motion-defined boundaries, which themselves moved over time. This ‘motion-defined motion’ induced position biases of up to 0.5°, much larger than has been found with luminance-defined motion. The size of the shift correlated with how detectable the motion-defined motion direction was, suggesting that the amount of bias increased with the magnitude of this complex directional signal. However, positional shifts did occur even when participants were not aware of the direction of the motion-defined motion. The size of the perceptual position shift was greatly reduced when the position judgement was made relative to the location of a static luminance-defined square, but not eliminated. These results suggest that motion-induced position shifts are a result of general mechanisms matching dynamic object properties with spatial location.


2019 ◽  
Author(s):  
Michael B. Bone ◽  
Fahad Ahmad ◽  
Bradley R. Buchsbaum

Abstract
When recalling an experience of the past, many of the component features of the original episode may be, to a greater or lesser extent, reconstructed in the mind’s eye. There is strong evidence that the pattern of neural activity that occurred during an initial perceptual experience is recreated during episodic recall (neural reactivation), and that the degree of reactivation is correlated with the subjective vividness of the memory. However, while we know that reactivation occurs during episodic recall, we have lacked a way of precisely characterizing the contents of a reactivated memory in terms of its featural constituents. Here we present a novel approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations of image stimuli derived from a deep convolutional neural network to decode neural reactivation in fMRI data collected while participants performed an episodic recall task. We show that neural reactivation associated with low-level visual features (e.g., edges), high-level visual features (e.g., facial features), and semantic features (e.g., “terrier”) occurs throughout the dorsal and ventral visual streams and extends into the frontal cortex. Moreover, we show that reactivation of both low- and high-level visual features correlates with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the relative contributions of low- and high-level features to the vividness of visual memories, clarify the role of the frontal cortex during episodic recall, and challenge a strict interpretation of the posterior-to-anterior visual hierarchy.
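FSIC itself is the authors' method; the sketch below is only a toy illustration of the general idea behind feature-level reactivation mapping: fit an encoding model from CNN-layer features to perception-evoked voxel patterns, then ask how well its predictions match recall-evoked patterns. The ridge estimator and all names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def reactivation_score(feats, percept, recall, alpha=1.0):
    """Toy reactivation index for one feature level.

    feats:   (n_items, n_features) CNN-layer features per image
    percept: (n_items, n_voxels)   perception-evoked voxel patterns
    recall:  (n_items, n_voxels)   recall-evoked voxel patterns

    Fits a ridge encoding model (features -> voxels) on the perception
    data, then returns the mean per-item spatial correlation between the
    model's predicted patterns and the recall-evoked patterns.
    """
    # Ridge weights: (X'X + alpha*I)^-1 X'Y
    XtX = feats.T @ feats + alpha * np.eye(feats.shape[1])
    W = np.linalg.solve(XtX, feats.T @ percept)
    pred = feats @ W
    rs = [np.corrcoef(p, r)[0, 1] for p, r in zip(pred, recall)]
    return float(np.mean(rs))
```

Running this separately on low-level, high-level, and semantic feature spaces would give one reactivation index per feature level, which is the kind of quantity the abstract relates to vividness and recognition accuracy.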


2020 ◽  
Author(s):  
Zhiyan Wang ◽  
Masako Tamaki ◽  
Kazuhisa Shibata ◽  
Michael S. Worden ◽  
Takashi Yamada ◽  
...  

Abstract
While numerous studies have shown that visual perceptual learning (VPL) occurs as a result of exposure to a visual feature in a task-irrelevant manner, the underlying neural mechanism is poorly understood. In a previous psychophysical study, subjects were repeatedly exposed to a task-irrelevant global motion display that induced the perception not only of the local motions but also of a global motion moving in the direction of the spatiotemporal average of the local motion vectors. As a result, subjects enhanced their sensitivity only to the local motion directions, suggesting that early visual areas (V1/V2), which process local motions, are involved in task-irrelevant VPL. However, this hypothesis has never been tested by directly examining the involvement of early visual areas (V1/V2). Here, we employed a decoded neurofeedback technique (DecNef) using functional magnetic resonance imaging. During the DecNef training, subjects were trained to induce activity patterns in V1/V2 that were similar to those evoked by the actual presentation of the global motion display. The DecNef training was conducted with neither the actual presentation of the display nor the subjects’ awareness of the purpose of the experiment. As a result, subjects increased their sensitivity to the local motion directions but not specifically to the global motion direction. The training effect was strictly confined to V1/V2. Moreover, subjects reported that they neither perceived nor imagined any motion during the DecNef training. These results together suggest that V1/V2 are sufficient for exposure-based task-irrelevant VPL to occur unconsciously.
Significance Statement
While numerous studies have shown that visual perceptual learning (VPL) occurs as a result of exposure to a visual feature in a task-irrelevant manner, the underlying neural mechanism is poorly understood. Previous psychophysical experiments suggest that early visual areas (V1/V2) are involved in task-irrelevant VPL. However, this hypothesis has never been tested by directly examining their involvement. Here, using decoded fMRI neurofeedback, activity patterns similar to those evoked by the presentation of a complex motion display were repeatedly induced only in early visual areas. The training sensitized subjects only to the local motion directions and not to the global motion direction, suggesting that V1/V2 are involved in task-irrelevant VPL.


2019 ◽  
Author(s):  
Koen V. Haak ◽  
Christian F. Beckmann

Abstract
Whether and how the balance between plasticity and stability varies across the brain is an important open question. Within a processing hierarchy, plasticity is thought to increase at higher levels of cortical processing, but direct quantitative comparisons between low- and high-level plasticity have not been made so far. Here, we addressed this issue for the human cortical visual system. By quantifying plasticity as the complement of the heritability of functional connectivity, we demonstrate a non-monotonic relationship between plasticity and hierarchical level, such that plasticity decreases from early to mid-level cortex and then increases further up the visual hierarchy. This non-monotonic relationship argues against the recent theory that the balance between plasticity and stability is governed by the costs of the “coding-catastrophe”, and can be explained by a concurrent decline of short-term adaptation and rise of long-term plasticity up the visual processing hierarchy.
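The "complement of heritability" quantification can be made concrete with a classical twin-study estimator. The sketch below uses Falconer's formula, h2 = 2 * (r_MZ - r_DZ), as a stand-in; the estimator the paper actually applied to functional-connectivity measures may differ:

```python
def plasticity_index(r_mz, r_dz):
    """Plasticity as the complement of heritability (1 - h2),
    with h2 estimated from monozygotic (r_mz) and dizygotic (r_dz)
    twin correlations of some connectivity measure via Falconer's
    formula. h2 is clipped to [0, 1] before taking the complement.
    (Illustrative only; not the paper's exact estimator.)"""
    h2 = 2.0 * (r_mz - r_dz)
    return 1.0 - max(0.0, min(1.0, h2))
```

Computing such an index region by region along the visual hierarchy would yield the plasticity-versus-level profile whose non-monotonic shape the abstract describes.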


PLoS ONE ◽  
2020 ◽  
Vol 15 (8) ◽  
pp. e0237912
Author(s):  
Kieu Ngoc Nguyen ◽  
Takeo Watanabe ◽  
George John Andersen

2019 ◽  
Author(s):  
Nadine Dijkstra ◽  
Luca Ambrogioni ◽  
Marcel A.J. van Gerven

After the presentation of a visual stimulus, cortical visual processing cascades from low-level sensory features in primary visual areas to increasingly abstract representations in higher-level areas. It is often hypothesized that the reverse process underpins the human ability to generate mental images. Under this hypothesis, visual information feeds back from high-level areas, whose abstract representations are used to construct the sensory representation in primary visual cortices. Such reversals of information flow are also hypothesized to play a central role in later stages of perception. According to predictive processing theories, ambiguous sensory information is resolved using abstract representations coming from high-level areas through oscillatory rebounds between different levels of the visual hierarchy. However, despite the elegance of these theoretical models, to this day there is no direct experimental evidence of this reversal of visual information flow during mental imagery and perception. In the first part of this paper, we provide direct evidence in humans for a reverse order of activation of the visual hierarchy during imagery. Specifically, we show that machine-learning classifiers trained on brain data at different time points during the early feedforward phase of perception are reactivated in reverse order during mental imagery. In the second part of the paper, we report an 11 Hz oscillatory pattern of feedforward and reversed visual processing phases during perception. Together, these results are in line with the idea that during perception, the high-level cause of sensory input is inferred through recurrent hypothesis updating, whereas during imagery, this learned forward mapping is reversed to generate sensory signals given abstract representations.
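The reverse-order analysis in the first part can be illustrated with cross-temporal decoding: classifiers trained at each perception time point are evaluated at every imagery time point, and a reversed cascade appears as an anti-diagonal ridge in the resulting matrix. A self-contained sketch using a deliberately simple nearest-class-mean classifier (an assumption for brevity; the paper's classifiers may differ):

```python
import numpy as np

def cross_temporal_accuracy(train_data, train_y, test_data, test_y):
    """Cross-temporal decoding matrix.

    train_data, test_data: (n_trials, n_times, n_channels)
    train_y, test_y:       (n_trials,) class labels

    A nearest-class-mean classifier is fit at each training time point
    and evaluated at every test time point; entry [t1, t2] is the
    accuracy of the classifier trained at t1 and tested at t2."""
    n_tr_t, n_te_t = train_data.shape[1], test_data.shape[1]
    acc = np.zeros((n_tr_t, n_te_t))
    classes = np.unique(train_y)
    for t1 in range(n_tr_t):
        # class-mean patterns at training time t1
        means = np.stack([train_data[train_y == c, t1].mean(0)
                          for c in classes])
        for t2 in range(n_te_t):
            # squared Euclidean distance of each test trial to each mean
            d = ((test_data[:, t2, None, :] - means[None]) ** 2).sum(-1)
            pred = classes[d.argmin(1)]
            acc[t1, t2] = (pred == test_y).mean()
    return acc
```

Training and testing on perception data alone would yield the usual diagonal of time-specific decoding; training on perception and testing on imagery is where a reversed-order effect would show up off the diagonal.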


2018 ◽  
Author(s):  
Tal Golan ◽  
Shany Grossman ◽  
Leon Y Deouell ◽  
Rafael Malach

Abstract
Spontaneous eye blinks generate frequent potent interruptions to the retinal input and yet go unnoticed. As such, they provide an attractive approach to the study of the neural correlates of visual awareness. Here, we tested the potential role of predictability in generating blink-related effects using fMRI. While participants attentively watched still images of faces and houses, we monitored naturally occurring spontaneous blinks and introduced three kinds of matched visual interruptions: cued voluntary blinks, self-initiated (and hence, predictable) external darkenings, and physically similar but unpredictable external darkenings. These events’ impact was inspected using fMRI across the visual hierarchy. In early visual cortex, both spontaneous and voluntary blinks, as well as predictable and unpredictable external darkenings, led to largely similar positive responses in peripheral representations. In mid- and high-level visual cortex, all predictable conditions (spontaneous blinks, voluntary blinks, and self-initiated external darkenings) were associated with signal decreases. In contrast, unpredictable darkenings were associated with signal increases. These findings suggest that general-purpose prediction-related mechanisms are involved in producing a small but widespread suppression of mid- and high-order visual regions during blinks. Such suppression may down-regulate responses to predictable transients in the human visual hierarchy.

