Top-down predictions of visual features dynamically reverse their bottom-up processing in the occipito-ventral pathway to facilitate stimulus disambiguation and behavior

2021 ◽  
Author(s):  
Yuening Yan ◽  
Jiayu Zhan ◽  
Robin A. A. Ince ◽  
Philippe G. Schyns

The prevalent conception of vision-for-categorization suggests an interplay of two dynamic flows of information within the occipito-ventral pathway. The bottom-up flow progressively reduces the high-dimensional input into a lower-dimensional representation that is compared with memory to produce categorization behavior. The top-down flow predicts category information (i.e., features) from memory that propagates down the same hierarchy to facilitate input processing and behavior. However, the neural mechanisms that support such dynamic feature propagation up and down the visual hierarchy, and how they facilitate behavior, remain unclear. Here, we studied them using a prediction experiment that cued participants (N = 11) to the spatial location (left vs. right) and spatial frequency (SF; Low, LSF, vs. High, HSF) contents of an upcoming Gabor patch. Using concurrent MEG recordings of each participant's neural activity, we compared the top-down flow of representation of the predicted Gabor contents (i.e., left vs. right; LSF vs. HSF) to their bottom-up flow. We show (1) that top-down prediction improves the speed of categorization in all participants, (2) that the top-down flow of prediction reverses the bottom-up representation of the Gabor stimuli, going from deep right fusiform gyrus sources down to occipital cortex sources contra-lateral to the expected Gabor location, and (3) that predicted Gabors are better represented when the stimulus is eventually shown, leading to faster categorizations. Our results therefore trace the dynamic top-down flow of a predicted visual content that chronologically and hierarchically reverses bottom-up processing, further facilitating visual representations in early visual cortex and subsequent categorization behavior.
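The kind of representation analysis described here can be illustrated with a toy computation. The sketch below (a minimal Python illustration with assumed array shapes and a simple histogram-based estimator, not the authors' actual pipeline) computes mutual information between a binary stimulus feature (e.g., LSF vs. HSF) and the single-trial amplitude of each MEG source at each time point; tracking where and when this information peaks is one way to trace a feature's flow up or down the hierarchy.

```python
# Hypothetical sketch: trace where/when a binary stimulus feature is
# represented in source-space MEG via mutual information (MI). The
# histogram MI estimator and all shapes are illustrative assumptions.
import numpy as np

def binned_mi(x, y, n_bins=8):
    """MI (bits) between continuous x (trials,) and binary y (trials,)."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    x_idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    joint = np.zeros((n_bins, 2))
    for xb, yb in zip(x_idx, y):
        joint[xb, yb] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n_trials, n_sources, n_times = 200, 50, 120
labels = rng.integers(0, 2, n_trials)             # e.g., 0 = LSF, 1 = HSF
meg = rng.standard_normal((n_trials, n_sources, n_times))
meg[:, 10, 60:] += 0.8 * labels[:, None]          # toy feature-coding source

# MI map over sources x time: comparing when information peaks in each
# source before vs. after stimulus onset separates the top-down
# (prediction) flow from the bottom-up flow.
mi_map = np.array([[binned_mi(meg[:, s, t], labels)
                    for t in range(n_times)] for s in range(n_sources)])
print(mi_map.shape, round(mi_map.max(), 3))
```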

Author(s):  
Jochem van Kempen ◽  
Marc A. Gieselmann ◽  
Michael Boyd ◽  
Nicholas A. Steinmetz ◽  
Tirin Moore ◽  
...  

Abstract
Spontaneous fluctuations in cortical excitability influence sensory processing and behavior. These fluctuations, long known to reflect global changes in cortical state, were recently found to be modulated locally within a retinotopic map during spatially selective attention. We found that periods of vigorous (On) and faint (Off) spiking activity, the signature of cortical state fluctuations, were coordinated across brain areas along the visual hierarchy and tightly coupled to their retinotopic alignment. During top-down attention, this interareal coordination was enhanced and progressed along the reverse cortical hierarchy. The extent of local state coordination between areas was predictive of behavioral performance. Our results show that cortical state dynamics are shared across brain regions, modulated by cognitive demands and relevant for behavior.
One Sentence Summary
Interareal coordination of local cortical state is retinotopically precise and progresses in a reverse hierarchical manner during selective attention.
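To illustrate the kind of analysis involved, the sketch below labels On and Off periods from binned population spike counts and correlates the resulting state sequences between two simulated areas. The published analysis fit a two-state hidden Markov model to population spiking; the smoothed-rate threshold used here, and all shapes and parameters, are simplifying assumptions.

```python
# Simplified sketch: label On/Off cortical-state periods from population
# spiking and measure their coordination between two areas. A crude
# threshold stand-in for the HMM used in the study; parameters are toy.
import numpy as np

def on_off_states(spike_counts, smooth_bins=5):
    """spike_counts: (neurons, time_bins) -> boolean per bin (True = On)."""
    pop_rate = spike_counts.sum(axis=0).astype(float)
    kernel = np.ones(smooth_bins) / smooth_bins
    smoothed = np.convolve(pop_rate, kernel, mode="same")
    return smoothed > np.median(smoothed)

rng = np.random.default_rng(1)
t = 2000
# Toy slow state shared across areas, in 50-bin blocks.
shared_state = np.repeat(rng.random(t // 50) < 0.5, 50)
area_v1 = rng.poisson(2 + 4 * shared_state, size=(40, t))
area_v4 = rng.poisson(1 + 3 * shared_state, size=(30, t))

s1, s4 = on_off_states(area_v1), on_off_states(area_v4)
# Interareal coordination: correlation of the two On/Off sequences.
coordination = np.corrcoef(s1.astype(float), s4.astype(float))[0, 1]
print(f"On/Off coordination between areas: {coordination:.2f}")
```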


2004 ◽  
Vol 27 (6) ◽  
pp. 803-804
Author(s):  
Martin Sarter ◽  
Gary G. Berntson

Behrendt & Young's (B&Y's) theory offers a potentially important perspective on the neurobiology of schizophrenia, but it remains incomplete. In addition to bottom-up contributions, such as those associated with disturbances in sensory constraints on cognitive processes, a comprehensive model requires the integration of the consequences of abnormal top-down modulation of input processing for the evolution of “underconstrained” perceptions. Dysfunctional cholinergic modulation of input functions represents a necessary mechanism for the generation of false perceptions.


2017 ◽  
Vol 26 (4) ◽  
pp. 352-358 ◽  
Author(s):  
Jill Talley Shelton ◽  
Michael K. Scullin

Like many dual process theories in the psychological sciences, existing models of prospective memory (i.e., remembering to execute future intentions) emphasize the role of singular top-down or bottom-up processes that act in isolation. We argue that top-down and bottom-up processes are interconnected and dynamically interact to support prospective memory. We elaborate on this dynamic multiprocess framework by focusing on recent behavioral, neuroimaging, and eye-tracking research that demonstrated the dynamic nature of monitoring (top-down) and spontaneous retrieval (bottom-up) processes in relation to contextual factors, metacognition, and individual differences. We conclude that identifying how dual processes interact with environmental and individual difference factors is crucial for advancing understanding of cognition and behavior.


2016 ◽  
Author(s):  
Noam Gordon ◽  
Roger Koenig-Robert ◽  
Naotsugu Tsuchiya ◽  
Jeroen van Boxtel ◽  
Jakob Hohwy

Abstract
Understanding the integration of top-down and bottom-up signals is essential for the study of perception. Current accounts of predictive coding describe this in terms of interactions between state units encoding expectations or predictions, and error units encoding prediction error. However, direct neural evidence for such interactions has not been well established. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy: Steady State Visual Evoked Potential (SSVEP at 10 Hz, tracking bottom-up signals) and Semantic Wavelet-Induced Frequency Tagging (SWIFT at 1.3 Hz, tracking top-down signals). Importantly, we examined intermodulation components (IM, e.g., 11.3 Hz) as a measure of integration between these signals. To examine the influence of expectation and predictions on the nature of such integration, we constructed 50-second movie streams and modulated expectation levels for upcoming stimuli by varying the proportion of images presented across trials. We found SWIFT, SSVEP and IM signals to differ in important ways. SSVEP was strongest over occipital electrodes and was not modified by certainty. Conversely, SWIFT signals were evident over temporo- and parieto-occipital areas and decreased as a function of increasing certainty levels. Finally, IMs were evident over occipital electrodes and increased as a function of certainty. These results link SSVEP, SWIFT and IM signals to sensory evidence, predictions, prediction errors and hypothesis-testing: the core elements of predictive coding. These findings provide neural evidence for the integration of top-down and bottom-up information in perception, opening new avenues to studying such interactions in perception while constraining neuronal models of predictive coding.
Significance Statement
There is a growing understanding that both top-down and bottom-up signals underlie perception. But how do these signals interact? And how does this process depend on the signals' probabilistic properties? 'Predictive coding' theories of perception describe this in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories in particular. The novel Hierarchical Frequency Tagging method simultaneously tags top-down and bottom-up signals in EEG recordings, while obtaining a measure of the level of integration between these signals. Our results suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input.
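The logic of intermodulation as a marker of integration can be made concrete with a toy signal: a purely additive mixture of the two tagging frequencies carries no energy at their sum (11.3 Hz), whereas a multiplicative (nonlinear) interaction does. The sketch below uses an assumed sampling rate and mixing term, not the authors' analysis code.

```python
# Toy demonstration: IM components arise only from nonlinear interaction
# of the 10 Hz (SSVEP) and 1.3 Hz (SWIFT) tagging signals.
import numpy as np

fs, dur = 256, 50.0                       # 50-s stream, assumed fs
t = np.arange(0, dur, 1 / fs)
s1 = np.sin(2 * np.pi * 10.0 * t)         # bottom-up tag (SSVEP)
s2 = np.sin(2 * np.pi * 1.3 * t)          # top-down tag (SWIFT)

linear = s1 + s2                          # no integration
nonlinear = s1 + s2 + 0.5 * s1 * s2       # integration -> IM at f1 +/- f2

def amp_at(x, freq):
    """Amplitude spectrum value at a target frequency."""
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

for name, x in [("linear", linear), ("nonlinear", nonlinear)]:
    print(name, "IM amplitude at 11.3 Hz:", round(amp_at(x, 11.3), 3))
# Expected: ~0 for the additive mixture, clearly > 0 for the nonlinear one.
```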


2014 ◽  
Vol 26 (11) ◽  
pp. 2503-2513 ◽  
Author(s):  
André Klapper ◽  
Richard Ramsey ◽  
Daniël Wigboldus ◽  
Emily S. Cross

Humans automatically imitate other people's actions during social interactions, building rapport and social closeness in the process. Although the behavioral consequences and neural correlates of imitation have been studied extensively, little is known about the neural mechanisms that control imitative tendencies. For example, the degree to which an agent is perceived as human-like influences automatic imitation, but it is not known how perception of animacy influences brain circuits that control imitation. In the current fMRI study, we examined how the perception and belief of animacy influence the control of automatic imitation. Using an imitation-inhibition paradigm that involves suppressing the tendency to imitate an observed action, we manipulated both bottom-up (visual input) and top-down (belief) cues to animacy. Results show divergent patterns of behavioral and neural responses. Behavioral analyses show that automatic imitation is equivalent when one or both cues to animacy are present but is reduced when both are absent. By contrast, right TPJ showed sensitivity to the presence of both animacy cues. Thus, we demonstrate that right TPJ is biologically tuned to control imitative tendencies when the observed agent both looks like and is believed to be human. The results suggest that right TPJ may be involved in a specialized capacity to control automatic imitation of human agents, rather than a universal process of conflict management, which would be more consistent with generalist theories of imitative control. Evidence for specialized neural circuitry that "controls" imitation offers new insight into developmental disorders that involve atypical processing of social information, such as autism spectrum disorders.


2018 ◽  
Author(s):  
Noam Gordon ◽  
Naotsugu Tsuchiya ◽  
Roger Koenig-Robert ◽  
Jakob Hohwy

Abstract
Perception results from the integration of incoming sensory information with pre-existing information available in the brain. In this EEG (electroencephalography) study we utilised the Hierarchical Frequency Tagging method to examine how such integration is modulated by expectation and attention. Using intermodulation (IM) components as a measure of non-linear signal integration, we show in three different experiments that both expectation and attention enhance integration between top-down and bottom-up signals. Based on multispectral phase coherence, we present two direct physiological measures to demonstrate the distinct yet related mechanisms of expectation and attention. Specifically, our results link expectation to the modulation of prediction signals and the integration of top-down and bottom-up information at lower levels of the visual hierarchy. Meanwhile, they link attention to the propagation of ascending signals and the integration of information at higher levels of the visual hierarchy. These results are consistent with the predictive coding account of perception.
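A bicoherence-style estimator is one plausible form of the multispectral phase coherence measure described here. The sketch below (an assumption about the estimator's exact form, with toy parameters) tests, across epochs, how consistent the phase at the intermodulation frequency is with the summed phases of the two tagging frequencies.

```python
# Hypothetical bicoherence-style "multispectral phase coherence": across
# epochs, consistency of phi(f1) + phi(f2) - phi(f1+f2). The paper's exact
# estimator may differ; shapes and parameters are assumptions.
import numpy as np

def phase_at(epoch, freq, fs):
    spec = np.fft.rfft(epoch)
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    return np.angle(spec[np.argmin(np.abs(freqs - freq))])

def multispectral_coherence(epochs, f1, f2, fs):
    """epochs: (n_epochs, n_samples) -> value in [0, 1]; 1 means the phase
    relation phi(f1) + phi(f2) - phi(f1+f2) is identical across epochs."""
    rel = [phase_at(e, f1, fs) + phase_at(e, f2, fs) - phase_at(e, f1 + f2, fs)
           for e in epochs]
    return float(np.abs(np.mean(np.exp(1j * np.array(rel)))))

# Toy check: epochs with a genuine nonlinear interaction show high coherence.
fs, n_sec = 256, 10
t = np.arange(0, n_sec, 1 / fs)
rng = np.random.default_rng(2)
epochs = []
for _ in range(30):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)      # phases vary across epochs
    s1 = np.sin(2 * np.pi * 10.0 * t + p1)
    s2 = np.sin(2 * np.pi * 1.3 * t + p2)
    epochs.append(s1 + s2 + 0.5 * s1 * s2 + 0.1 * rng.standard_normal(len(t)))
print(round(multispectral_coherence(np.array(epochs), 10.0, 1.3, fs), 2))
```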


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Noam Gordon ◽  
Roger Koenig-Robert ◽  
Naotsugu Tsuchiya ◽  
Jeroen JA van Boxtel ◽  
Jakob Hohwy

There is a growing understanding that both top-down and bottom-up signals underlie perception. But it is not known how these signals integrate with each other and how this depends on the perceived stimuli’s predictability. ‘Predictive coding’ theories describe this integration in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy. Importantly, we examined intermodulation components as a measure of integration between these signals. Our results link the different signals to core aspects of predictive coding, and suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input, providing evidence for predictive coding and opening new avenues to studying such interactions in perception.


Author(s):  
Sunyoung Park ◽  
John T. Serences

Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared to the highly precise stimulus-driven responses observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used fMRI in human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. We then examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. While V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, mid-level areas such as V4 showed relatively smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual areas (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of the areas that generate and relay top-down feedback signals.
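As a rough illustration of what "precision" can mean in this setting, the sketch below fits a Gaussian to a spatial response profile across voxels ordered by preferred position and takes the inverse of the fitted width as a precision measure. The profile-fitting approach and all parameters are assumptions for illustration, not the study's actual model-based analysis.

```python
# Hypothetical sketch: quantify the precision of a spatial representation
# as 1/sigma of a Gaussian fit to a voxel-wise response profile. A broader
# attentional profile (larger sigma) means lower top-down precision.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    return base + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def profile_precision(positions, responses):
    """Fit a Gaussian to a spatial response profile; return 1/sigma."""
    p0 = [responses.max() - responses.min(),
          positions[np.argmax(responses)], 2.0, responses.min()]
    (amp, mu, sigma, base), _ = curve_fit(gaussian, positions, responses, p0=p0)
    return 1.0 / abs(sigma)

rng = np.random.default_rng(3)
pos = np.linspace(-10, 10, 80)            # voxels' preferred positions (deg)
stim_driven = gaussian(pos, 1.0, 0.0, 1.5, 0.1) + 0.05 * rng.standard_normal(80)
attn_only = gaussian(pos, 0.4, 0.0, 4.0, 0.1) + 0.05 * rng.standard_normal(80)

print("bottom-up precision:", round(profile_precision(pos, stim_driven), 2))
print("top-down precision: ", round(profile_precision(pos, attn_only), 2))
```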


NeuroImage ◽  
2001 ◽  
Vol 13 (6) ◽  
pp. 952
Author(s):  
Idai Uchida ◽  
Masashi Kameyama ◽  
Satoshi Takenaka ◽  
Seiki Konishi ◽  
Yasushi Miyashita
