Expectation and attention increase the integration of top-down and bottom-up signals in perception through different pathways

2018 ◽  
Author(s):  
Noam Gordon ◽  
Naotsugu Tsuchiya ◽  
Roger Koenig-Robert ◽  
Jakob Hohwy

Abstract: Perception results from the integration of incoming sensory information with pre-existing information available in the brain. In this EEG (electroencephalography) study we utilised the Hierarchical Frequency Tagging method to examine how such integration is modulated by expectation and attention. Using intermodulation (IM) components as a measure of non-linear signal integration, we show in three different experiments that both expectation and attention enhance integration between top-down and bottom-up signals. Based on multispectral phase coherence, we present two direct physiological measures to demonstrate the distinct yet related mechanisms of expectation and attention. Specifically, our results link expectation to the modulation of prediction signals and the integration of top-down and bottom-up information at lower levels of the visual hierarchy. Meanwhile, they link attention to the propagation of ascending signals and the integration of information at higher levels of the visual hierarchy. These results are consistent with the predictive coding account of perception.
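
A minimal simulation (not from the paper) can make the logic of intermodulation concrete: two frequency-tagged sinusoids at the study's tagging rates are passed through a multiplicative nonlinearity, and power appears at the IM frequencies n·f1 ± m·f2. The multiplication is a purely illustrative stand-in for nonlinear neural integration.

```python
# Minimal demonstration that nonlinear integration of two frequency-tagged
# signals produces intermodulation (IM) components. Frequencies match the
# Hierarchical Frequency Tagging setup (10 Hz SSVEP, 1.3 Hz SWIFT); the
# multiplicative term is an illustrative stand-in for neural integration.
import numpy as np

fs = 256.0                        # sampling rate (Hz)
t = np.arange(0, 50, 1 / fs)      # one 50-second "trial"
f1, f2 = 10.0, 1.3                # bottom-up (SSVEP) and top-down (SWIFT) tags

bottom_up = np.sin(2 * np.pi * f1 * t)
top_down = np.sin(2 * np.pi * f2 * t)

# A linear mixture has power only at f1 and f2; a nonlinear interaction
# (here, multiplication) adds power at IM frequencies such as f1 +/- f2.
signal = bottom_up + top_down + 0.5 * bottom_up * top_down

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amp = np.abs(np.fft.rfft(signal)) / t.size

for f in (f2, f1, f1 - f2, f1 + f2):      # 1.3, 10, 8.7, 11.3 Hz
    i = np.argmin(np.abs(freqs - f))
    print(f"{f:5.1f} Hz -> amplitude {amp[i]:.3f}")
```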

eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Noam Gordon ◽  
Roger Koenig-Robert ◽  
Naotsugu Tsuchiya ◽  
Jeroen JA van Boxtel ◽  
Jakob Hohwy

There is a growing understanding that both top-down and bottom-up signals underlie perception. But it is not known how these signals integrate with each other and how this depends on the perceived stimuli’s predictability. ‘Predictive coding’ theories describe this integration in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy. Importantly, we examined intermodulation components as a measure of integration between these signals. Our results link the different signals to core aspects of predictive coding, and suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input, providing evidence for predictive coding and opening new avenues to studying such interactions in perception.


Author(s):  
Mariana von Mohr ◽  
Aikaterini Fotopoulou

Pain and pleasant touch have recently been classified as interoceptive modalities. This reclassification lies at the heart of long-standing debates questioning whether these modalities should be defined as sensations on the basis of their neurophysiological specificity at the periphery, or as homeostatic emotions on the basis of top-down convergence and modulation at the spinal and brain levels. Here, we outline the literature on the peripheral and central neurophysiology of pain and pleasant touch. We next recast this literature within a recent Bayesian predictive coding framework, namely active inference. This recasting puts forward a unifying model of the bottom-up and top-down determinants of pain and pleasant touch, and of the role of social factors in modulating the salience of peripheral signals reaching the brain.
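
The active inference recasting lends itself to a toy calculation: perceived pain as a precision-weighted, Bayes-optimal fusion of a top-down expectation with a bottom-up peripheral signal, with social context acting on the precisions. A minimal sketch; the function name and all numbers are illustrative assumptions, not the authors' model.

```python
# Toy illustration of precision-weighted integration under active inference:
# the percept is a posterior combining a top-down expectation with a
# bottom-up peripheral signal. All names and numbers are illustrative.
def fuse(prior_mean, prior_precision, signal_mean, signal_precision):
    """Bayes-optimal fusion of two Gaussian sources of evidence."""
    total = prior_precision + signal_precision
    return (prior_precision * prior_mean + signal_precision * signal_mean) / total

# The same noxious signal is felt differently when a reassuring social
# context raises the precision of a low-pain expectation.
print(fuse(prior_mean=2.0, prior_precision=1.0, signal_mean=7.0, signal_precision=1.0))  # 4.5
print(fuse(prior_mean=2.0, prior_precision=4.0, signal_mean=7.0, signal_precision=1.0))  # 3.0
```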


2016 ◽  
Author(s):  
Noam Gordon ◽  
Roger Koenig-Robert ◽  
Naotsugu Tsuchiya ◽  
Jeroen van Boxtel ◽  
Jakob Hohwy

Abstract: Understanding the integration of top-down and bottom-up signals is essential for the study of perception. Current accounts of predictive coding describe this in terms of interactions between state units encoding expectations or predictions, and error units encoding prediction error. However, direct neural evidence for such interactions has not been well established. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy: Steady State Visual Evoked Potentials (SSVEP at 10 Hz, tracking bottom-up signals) and Semantic Wavelet-Induced Frequency Tagging (SWIFT at 1.3 Hz, tracking top-down signals). Importantly, we examined intermodulation components (IM, e.g. 11.3 Hz) as a measure of integration between these signals. To examine the influence of expectation and predictions on the nature of such integration, we constructed 50-second movie streams and modulated expectation levels for upcoming stimuli by varying the proportion of images presented across trials. We found SWIFT, SSVEP and IM signals to differ in important ways. SSVEP was strongest over occipital electrodes and was not modulated by certainty. Conversely, SWIFT signals were evident over temporo- and parieto-occipital areas and decreased as a function of increasing certainty levels. Finally, IMs were evident over occipital electrodes and increased as a function of certainty. These results link SSVEP, SWIFT and IM signals to sensory evidence, predictions, prediction errors and hypothesis-testing, the core elements of predictive coding. These findings provide neural evidence for the integration of top-down and bottom-up information in perception, opening new avenues to studying such interactions while constraining neuronal models of predictive coding.

Significance Statement: There is a growing understanding that both top-down and bottom-up signals underlie perception. But how do these signals interact? And how does this process depend on the signals' probabilistic properties? 'Predictive coding' theories of perception describe this in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and of predictive coding theories in particular. The novel Hierarchical Frequency Tagging method simultaneously tags top-down and bottom-up signals in EEG recordings while obtaining a measure of the level of integration between these signals. Our results suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input.
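
For readers new to frequency tagging, the tagged and IM responses are commonly quantified as the FFT amplitude at the target bin relative to neighbouring bins, a spectral signal-to-noise ratio. The sketch below assumes a single-channel 50-second epoch; the paper's actual pipeline may differ in its details.

```python
# Spectral SNR: amplitude at a tagged frequency bin relative to the mean
# amplitude of nearby bins. A sketch of a common frequency-tagging measure,
# assuming `epoch` is a 1-D EEG trace sampled at `fs`.
import numpy as np

def spectral_snr(epoch, fs, target_freq, n_neighbours=10, skip=1):
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    amp = np.abs(np.fft.rfft(epoch))
    i = int(np.argmin(np.abs(freqs - target_freq)))
    # average neighbouring bins on both sides, skipping bins right next
    # to the target so spectral leakage does not inflate the baseline
    neighbours = np.r_[amp[i - skip - n_neighbours:i - skip],
                       amp[i + skip + 1:i + skip + 1 + n_neighbours]]
    return amp[i] / neighbours.mean()

fs = 256.0
t = np.arange(0, 50, 1 / fs)
epoch = np.sin(2 * np.pi * 11.3 * t) + np.random.randn(t.size)  # IM + noise
print(spectral_snr(epoch, fs, target_freq=11.3))  # values >> 1 mark a tagged response
```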


2001 ◽  
Vol 39 (2-3) ◽  
pp. 137-150 ◽  
Author(s):  
S Karakaş ◽  
C Başar-Eroğlu ◽  
Ç Özesmi ◽  
H Kafadar ◽  
Ö.Ü Erzengin
Keyword(s):  
Top Down ◽  

2019 ◽  
Author(s):  
Yuru Song ◽  
Mingchen Yao ◽  
Helen Kemprecos ◽  
Áine Byrne ◽  
Zhengdong Xiao ◽  
...  

Abstract: Pain is a complex, multidimensional experience that involves dynamic interactions between sensory-discriminative and affective-emotional processes. Pain experiences are highly variable depending on their context and prior anticipation. Viewing pain perception as a perceptual inference problem, we use a predictive coding paradigm to characterize both evoked and spontaneous pain. We record the local field potentials (LFPs) from the primary somatosensory cortex (S1) and the anterior cingulate cortex (ACC) of freely behaving rats, two regions known to encode the sensory-discriminative and affective-emotional aspects of pain, respectively. We further propose a predictive coding framework to investigate the temporal coordination of oscillatory activity between the S1 and ACC. Specifically, we develop a high-level, empirical and phenomenological model to describe the macroscopic dynamics of bottom-up and top-down activity. Supported by recent experimental data, we also develop a mechanistic mean-field model to describe the mesoscopic population dynamics of the S1 and ACC neuronal populations, in both naive and chronic pain-treated animals. Our proposed predictive coding models not only replicate important experimental findings, but also provide new mechanistic insight into the uncertainty of expectation, placebo and nocebo effects, and chronic pain.

Author Summary: Pain perception in the mammalian brain is encoded through multiple brain circuits. The experience of pain is often associated with brain rhythms or neuronal oscillations at different frequencies. Understanding the temporal coordination of neural oscillatory activity from different brain regions is important for dissecting pain circuit mechanisms and revealing differences between distinct pain conditions. Predictive coding is a general computational framework for understanding perceptual inference by integrating bottom-up sensory information with top-down expectations. Supported by experimental data, we propose a predictive coding framework for pain perception and develop empirical, biologically constrained computational models to characterize the oscillatory dynamics of neuronal populations in two cortical circuits, one supporting the sensory-discriminative and the other the affective-emotional aspects of the experience, and we further characterize their temporal coordination under various pain conditions. Our computational study of a biologically constrained neuronal population model reveals important mechanistic insights into pain perception, placebo analgesia, and chronic pain.
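
The "explaining away" dynamic these models build on can be sketched with a deliberately minimal two-population rate model: S1 activity tracks the bottom-up prediction error (input minus the ACC's prediction), while the ACC integrates that error into its prediction. All parameters are illustrative; the authors' empirical and mean-field models are considerably richer.

```python
# Minimal two-region sketch of the predictive coding framing: S1 carries
# the bottom-up prediction error, ACC holds the top-down prediction.
# Parameters are illustrative, far simpler than the paper's models.
import numpy as np

dt, T = 1e-3, 2.0
steps = int(T / dt)
tau_s1, tau_acc = 0.01, 0.1            # S1 reacts fast, ACC integrates slowly
s1 = acc = 0.0

stimulus = np.zeros(steps)
stimulus[int(0.5 / dt):int(1.0 / dt)] = 1.0   # noxious input from 0.5 to 1.0 s

s1_trace = np.empty(steps)
for k in range(steps):
    error = stimulus[k] - acc          # bottom-up prediction error
    s1 += dt / tau_s1 * (-s1 + error)  # S1 activity tracks the error
    acc += dt / tau_acc * s1           # ACC prediction integrates the error
    s1_trace[k] = s1

# S1 responds transiently at stimulus onset and decays as the ACC
# prediction catches up: the error is "explained away".
print(s1_trace[int(0.55 / dt)], s1_trace[int(0.95 / dt)])
```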


2018 ◽  
Author(s):  
Christian D. Márton ◽  
Makoto Fukushima ◽  
Corrie R. Camalier ◽  
Simon R. Schultz ◽  
Bruno B. Averbeck

Abstract: Predictive coding is a theoretical framework that provides a functional interpretation of top-down and bottom-up interactions in sensory processing. The theory has suggested that specific frequency bands relay bottom-up and top-down information (e.g. "γ up, β down"). But it remains unclear whether this notion generalizes to cross-frequency interactions. Furthermore, most of the evidence so far comes from visual pathways. Here we examined cross-frequency coupling across four sectors of the auditory hierarchy in the macaque. We computed two measures of cross-frequency coupling, phase-amplitude coupling (PAC) and amplitude-amplitude coupling (AAC). Our findings revealed distinct patterns for bottom-up and top-down information processing among cross-frequency interactions. Both top-down and bottom-up processing made prominent use of low frequencies: low-to-low-frequency (θ, α, β) and low-frequency-to-high-γ couplings were predominantly top-down, while low-frequency-to-low-γ couplings were predominantly bottom-up. These patterns were largely preserved across coupling types (PAC and AAC) and across stimulus types (natural and synthetic auditory stimuli), suggesting they are a general feature of information processing in auditory cortex. Moreover, our findings showed that low-frequency PAC alternated between predominantly top-down and predominantly bottom-up over time. Altogether, this suggests that sensory information need not be propagated along separate frequencies upwards and downwards. Rather, information can be unmixed by having low frequencies couple to distinct frequency ranges in the target region, and by alternating top-down and bottom-up processing over time.

Significance: The brain consists of highly interconnected cortical areas, yet the patterns of directional cortical communication are not fully understood, in particular with regard to interactions between different signal components across frequencies. We employed a unified, computationally advantageous Granger-causal framework to examine bi-directional cross-frequency interactions across four sectors of the auditory cortical hierarchy in macaques. Our findings extend the view of cross-frequency interactions in auditory cortex, suggesting they also play a prominent role in top-down processing. Our findings also suggest that information need not be propagated along separate channels up and down the cortical hierarchy, with important implications for theories of information processing in the brain such as predictive coding.
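
As background, one standard PAC estimator is the mean vector length of Canolty et al. (2006): extract low-frequency phase from one signal and high-frequency amplitude from another, then measure how the amplitude is distributed over phase. The sketch below is for intuition only; the paper itself estimates coupling within a Granger-causal framework, and the band edges and filter order here are ordinary, illustrative choices.

```python
# Mean-vector-length estimate of phase-amplitude coupling (PAC): how strongly
# high-frequency amplitude is modulated by low-frequency phase.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=2):
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, x)

def pac_mvl(x_phase, x_amp, fs, phase_band=(4, 8), amp_band=(60, 100)):
    phase = np.angle(hilbert(bandpass(x_phase, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x_amp, fs, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))  # 0 = no coupling

# Synthetic check: gamma bursts locked to the theta rhythm yield clear PAC.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 80 * t)
print(pac_mvl(theta, gamma + 0.5 * np.random.randn(t.size), fs))
```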


Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least approximately, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are very well suited to integrating multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown, and the resulting perceptions are explained by means of generative, information-integrating perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available, bottom-up visual information.
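
The factorization idea can be made concrete with the smallest possible generative model: a prior over a latent cause plus a likelihood linking causes to observable features, combined by Bayes' rule. The causes and features below are invented purely for illustration.

```python
# Tiny discrete generative model: a top-down prior over a latent cause is
# combined with bottom-up evidence via Bayes' rule. All numbers and labels
# are illustrative assumptions.
prior = {"face": 0.7, "house": 0.3}            # learned top-down expectation
likelihood = {                                  # P(feature | cause)
    "curved_edges":   {"face": 0.8, "house": 0.3},
    "straight_edges": {"face": 0.2, "house": 0.7},
}

def posterior(feature):
    unnorm = {cause: likelihood[feature][cause] * prior[cause] for cause in prior}
    z = sum(unnorm.values())
    return {cause: p / z for cause, p in unnorm.items()}

print(posterior("curved_edges"))    # ambiguous input is pulled toward the prior
print(posterior("straight_edges"))  # strong evidence can override the prior
```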


2013 ◽  
Vol 09 (02) ◽  
pp. 1350010 ◽  
Author(s):  
MATTEO CACCIOLA ◽  
GIANLUIGI OCCHIUTO ◽  
FRANCESCO CARLO MORABITO

Many computer vision problems consist of producing a suitable description of image content, usually with the aim of extracting the relevant information. In the case of images representing paintings or artworks, the extracted information is rather subject-dependent, escaping any universal quantification. However, we have proposed a measure of the complexity of such works that is related to brain processing. This artistic complexity measures the brain's inability to categorize the complex, nonsense forms represented in modern art, in a dynamic process of acquisition that mostly involves top-down mechanisms. Here, we compare the quantitative results of our analysis on a wide set of paintings by various artists to the cues extracted by a standard bottom-up approach based on the concept of visual saliency. When inspecting a painting, the brain searches for the more informative areas at different scales, then connects them in an attempt to capture the full information content. Artistic complexity is able to quantify information that might be lost in the individual experience of a human observer, thus identifying the artistic hand. Visual saliency highlights the most salient areas of a painting, those standing out from their neighbours and grabbing our attention. Nevertheless, we will show that comparing the ways the two algorithms act reveals some interesting links, ultimately indicating an interplay between bottom-up and top-down modalities.
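
To give a flavour of such a comparison, the sketch below computes two crude patch-wise maps over a greyscale image: local Shannon entropy, a simple proxy for local information content, and the contrast of each patch against the image mean, a simplified stand-in for bottom-up saliency. Neither is the paper's artistic-complexity measure or its saliency model; both are reduced to a few lines for illustration.

```python
# Two crude patch-wise cues over a greyscale image in [0, 1]: local Shannon
# entropy (information-content proxy) and patch-versus-image contrast (a
# simplified bottom-up saliency cue). Illustrative stand-ins only.
import numpy as np

def patch_maps(img, patch=16):
    rows, cols = img.shape[0] // patch, img.shape[1] // patch
    ent = np.zeros((rows, cols))
    con = np.zeros((rows, cols))
    global_mean = img.mean()
    for i in range(rows):
        for j in range(cols):
            p = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            hist, _ = np.histogram(p, bins=32, range=(0.0, 1.0))
            q = hist / hist.sum()
            q = q[q > 0]
            ent[i, j] = -np.sum(q * np.log2(q))        # Shannon entropy (bits)
            con[i, j] = abs(p.mean() - global_mean)    # contrast against image
    return ent, con

img = np.random.rand(128, 128)      # stand-in for a digitised painting
ent, con = patch_maps(img)
print(np.corrcoef(ent.ravel(), con.ravel())[0, 1])  # how the two cues relate
```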


2016 ◽  
Author(s):  
Alla Brodski-Guerniero ◽  
Georg-Friedrich Paasch ◽  
Patricia Wollstadt ◽  
Ipek Özdemir ◽  
Joseph T. Lizier ◽  
...  

Abstract: Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre-)activated prior knowledge serving these predictions are still unknown. Based on the idea that such pre-activated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time-courses from magnetoencephalography (MEG) recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Pre-activation of prior knowledge for faces was reflected in alpha- and beta-band-related AIS increases in content-specific areas; these AIS increases were behaviourally relevant in the fusiform face area (FFA). Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Moreover, top-down transfer of predictions, estimated by transfer entropy, was associated with beta frequencies. Our results support accounts in which activated prior knowledge and the corresponding predictions are signalled in low-frequency activity (<30 Hz).

Significance Statement: Our perception is determined not only by the information our eyes, retina and other sensory organs receive from the outside world, but also depends strongly on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires making strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented and transferred in low-frequency brain activity.
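
AIS itself is simple to state: the mutual information between a signal's recent past and its next sample. Below is a plug-in estimator on binary toy sequences; the study applies far more sophisticated estimators to continuous source time-courses.

```python
# Plug-in estimate of active information storage, I(past_k ; next), for a
# discrete sequence. A toy version of the measure used in the study.
import numpy as np
from collections import Counter

def ais_plugin(x, k=2):
    pairs = [(tuple(x[i - k:i]), x[i]) for i in range(k, len(x))]
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    nxt = Counter(s for _, s in pairs)
    return sum(c / n * np.log2((c / n) / (past[p] / n * nxt[s] / n))
               for (p, s), c in joint.items())

rng = np.random.default_rng(0)
noise = (rng.random(10_000) > 0.5).astype(int)   # no structure: AIS near 0
alternating = np.arange(10_000) % 2              # fully predictable: AIS = 1 bit
print(ais_plugin(noise), ais_plugin(alternating))
```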


2019 ◽  
Author(s):  
Cooper A. Smout ◽  
Matthew F. Tang ◽  
Marta I. Garrido ◽  
Jason B. Mattingley

Abstract: The human brain is thought to optimise the encoding of incoming sensory information through two principal mechanisms: prediction uses stored information to guide the interpretation of forthcoming sensory events, while attention prioritises those events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction-error units. Since prediction errors code for the difference between predictions and sensory signals, this model suggests that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or 'representation') neurons, and would therefore suggest that attention and prediction synergistically modulate selectivity for feature information in the brain. Here we applied multivariate forward encoding techniques to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task, to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between the predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain. These findings advance our understanding of how attention and prediction modulate information processing in the brain, and support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.
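
The forward (often called "inverted") encoding approach can be sketched in a few lines: hypothesised orientation-tuned channels are regressed onto multi-sensor data, and the estimated weights are inverted to reconstruct channel responses on held-out trials. The dimensions, tuning function and noise level below are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch of a multivariate forward (inverted) encoding model for orientation.
# Channel count, tuning exponent and noise level are illustrative choices.
import numpy as np

n_ori, n_sensors = 6, 32
centres = np.arange(n_ori) * np.pi / n_ori      # channel centres over [0, pi)

def channel_responses(orientations):
    """Half-rectified cosine tuning raised to a power (a common choice)."""
    d = orientations[:, None] - centres[None, :]
    return np.maximum(np.cos(2 * d), 0.0) ** 5

rng = np.random.default_rng(1)
ori_train = rng.uniform(0, np.pi, 200)
ori_test = rng.uniform(0, np.pi, 50)
W_true = rng.standard_normal((n_ori, n_sensors))   # unknown channel-to-sensor mixing
eeg_train = channel_responses(ori_train) @ W_true + 0.1 * rng.standard_normal((200, n_sensors))
eeg_test = channel_responses(ori_test) @ W_true + 0.1 * rng.standard_normal((50, n_sensors))

# Train: least-squares weights mapping channels to sensors. Test: invert the
# mapping to recover channel responses, then read out the peak channel.
W_hat = np.linalg.lstsq(channel_responses(ori_train), eeg_train, rcond=None)[0]
channels_test = eeg_test @ np.linalg.pinv(W_hat)
decoded = centres[np.argmax(channels_test, axis=1)]
err = np.angle(np.exp(2j * (decoded - ori_test))) / 2   # circular orientation error
print(np.mean(np.abs(err)))                              # mean absolute error (radians)
```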

