Efficient Information Contents Flow Down from Memory to Predict the Identity of Faces

2017 ◽  
Author(s):  
Jiayu Zhan ◽  
Oliver G. B. Garrod ◽  
Nicola van Rijsbergen ◽  
Philippe G. Schyns

Abstract In the sciences of cognition, an influential idea is that the brain makes predictions about incoming sensory information to reduce inherent ambiguity. In the visual hierarchy, this implies that information content originating in memory (the identity of a face) propagates down to disambiguate incoming stimulus information. However, understanding this powerful prediction-for-recognition mechanism will remain elusive until we uncover the content of the information propagating down from memory. Here, we address this foundational limitation with a task ubiquitous to humans: familiar face identification. We developed a unique computer graphics platform that combines a generative model of random face identity information with the subjectivity of perception. In 14 individual participants, we reverse engineered the predicted information contents propagating down from memory to identify 4 familiar faces. In a follow-up validation, we used the predicted face information to synthesize the identity of new faces and confirmed the causal role of the predictions in face identification. We show these predictions comprise both local 3D surface patches, such as a particularly thin and pointy nose combined with a square chin and a prominent brow, and more global surface characteristics, such as a longer or broader face. Further analyses reveal that the predicted contents are efficient because they represent objective features that maximally distinguish each identity from a model norm. Our results reveal the contents that propagate down the visual hierarchy from memory, showing this coding scheme is efficient and compatible with norm-based coding, with implications for mechanistic accounts of brain and machine intelligence.
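The norm-based account summarised above can be illustrated with a small numerical sketch: if each face is a vector of shape components from a generative face model, the "efficient" contents of an identity are the components that deviate most from the population norm. The sketch below is illustrative only; the random face vectors, the 50-dimensional space and the top-k selection are assumptions, not the authors' pipeline.

```python
import numpy as np

# Minimal sketch of norm-based identity coding (illustrative, not the
# authors' pipeline). Each face is a vector of shape components, e.g.
# 3D surface coefficients from a generative face model.
rng = np.random.default_rng(0)
population = rng.normal(size=(500, 50))   # hypothetical face population
norm_face = population.mean(axis=0)       # the model "norm" (average face)

def identity_code(face, norm, k=5):
    """Return the k components that deviate most from the norm.

    Under norm-based coding, these maximally distinguishing deviations
    are the 'efficient' contents that could be predicted from memory.
    """
    deviation = face - norm
    top = np.argsort(np.abs(deviation))[-k:][::-1]
    return top, deviation[top]

familiar_face = population[0]
components, deviations = identity_code(familiar_face, norm_face)
print(components, np.round(deviations, 2))
```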

2020 ◽  
Author(s):  
E Zamboni ◽  
VG Kemper ◽  
NR Goncalves ◽  
K Jia ◽  
VM Karlaftis ◽  
...  

Abstract Adapting to the statistics of the environment by reducing brain responses to repetitive sensory information is key for efficient information processing. Yet, the fine-scale computations that support this adaptive processing in the human brain remain largely unknown. Here, we capitalize on the sub-millimetre resolution afforded by ultra-high field imaging to examine BOLD-fMRI signals across cortical depth and discern competing hypotheses about the brain mechanisms (feedforward vs. feedback) that mediate adaptive visual processing. We demonstrate suppressive recurrent processing within visual cortex, as indicated by stronger BOLD decrease in superficial than middle and deeper layers for gratings that were repeatedly presented at the same orientation. Further, we show dissociable connectivity mechanisms for adaptive processing: enhanced feedforward connectivity within visual cortex, alongside feedback occipito-parietal connectivity reflecting top-down influences on visual processing. Our findings provide evidence for a circuit of local recurrent and feedback interactions that mediate rapid brain plasticity for adaptive information processing.


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Elisa Zamboni ◽  
Valentin G Kemper ◽  
Nuno Reis Goncalves ◽  
Ke Jia ◽  
Vasilis M Karlaftis ◽  
...  

Adapting to the statistics of the environment by reducing brain responses to repetitive sensory information is key for efficient information processing. Yet, the fine-scale computations that support this adaptive processing in the human brain remain largely unknown. Here, we capitalise on the sub-millimetre resolution of ultra-high field imaging to examine functional magnetic resonance imaging signals across cortical depth and discern competing hypotheses about the brain mechanisms (feedforward vs. feedback) that mediate adaptive processing. We demonstrate layer-specific suppressive processing within visual cortex, as indicated by stronger BOLD decrease in superficial and middle than deeper layers for gratings that were repeatedly presented at the same orientation. Further, we show altered functional connectivity for adaptation: enhanced feedforward connectivity from V1 to higher visual areas, short-range feedback connectivity between V1 and V2, and long-range feedback occipito-parietal connectivity. Our findings provide evidence for a circuit of local recurrent and feedback interactions that mediate rapid brain plasticity for adaptive information processing.
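The layer-specific effect reported here amounts to comparing BOLD amplitudes for repeated versus non-repeated orientations within each cortical-depth compartment. The sketch below computes a simple fractional adaptation index per layer; the amplitude values and the index itself are illustrative assumptions chosen to mimic the reported pattern, not the study's data or analysis.

```python
import numpy as np

# Illustrative adaptation index per cortical depth, assuming BOLD
# amplitudes have already been extracted for each layer compartment.
# Values are hypothetical, not the study's data.
bold = {
    # layer: (non-repeated orientation, repeated orientation)
    "superficial": (1.00, 0.70),
    "middle":      (0.95, 0.72),
    "deep":        (0.90, 0.82),
}

def adaptation_index(non_repeated, repeated):
    """Fractional BOLD decrease for repeated relative to non-repeated stimuli."""
    return (non_repeated - repeated) / non_repeated

for layer, (nonrep, rep) in bold.items():
    print(f"{layer:12s} adaptation index = {adaptation_index(nonrep, rep):.2f}")
```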


2018 ◽  
Vol 23 (3) ◽  
pp. 201-211
Author(s):  
Dewi Agushinta R ◽  
Fiena Rindani ◽  
Antonius Angga Kurniawan ◽  
Elevanita Anggari ◽  
Rizky Akbar

The creation of machines with human intelligence is a primary and beneficial aim of artificial intelligence research. One interesting method for developing artificial intelligence is to combine a biological approach with machine intelligence. Cyborg intelligence is a new scientific model for the integration of biological and machine intelligence. The Brain Machine Interface (BMI) provides an opportunity to integrate both forms of intelligence at various levels. With a BMI, neural signals can be read out to control motor actuators, and machine-coded sensory information can be sent to specific areas of the brain. In fact, the Distributed Adaptive Control theory of mind and brain is the most advanced brain-based cognitive architecture, successfully applied in a wide range of robot tasks. It is expected that analyzing the development of cyborg intelligence will help enhance knowledge of the field.
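The read/write loop described above can be sketched schematically: a decoder maps recorded neural signals to actuator commands, and an encoder maps machine sensor readings to stimulation patterns for a target brain area. The linear maps, channel counts and function names below are placeholders for illustration; real BMI decoders are fit to recorded neural data (e.g. Kalman filters or neural networks).

```python
import numpy as np

# Schematic read/write loop of a brain-machine interface (BMI).
# Decoder and encoder are placeholder linear maps, not a real system.
rng = np.random.default_rng(1)
W_decode = rng.normal(size=(2, 64))   # 64 recording channels -> 2 actuator commands
W_encode = rng.normal(size=(16, 3))   # 3 machine sensor readings -> 16 stimulation channels

def decode_motor(neural_signals):
    """Read out neural activity as motor actuator commands."""
    return W_decode @ neural_signals

def encode_sensory(sensor_readings):
    """Translate machine sensor readings into a stimulation pattern."""
    return W_encode @ sensor_readings

neural = rng.normal(size=64)                      # one sample of recorded activity
command = decode_motor(neural)                    # would drive the robotic actuator
stim = encode_sensory(np.array([0.2, 0.5, 0.1]))  # would be sent back to the brain
print(command.shape, stim.shape)
```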


2018 ◽  
Author(s):  
Noam Gordon ◽  
Naotsugu Tsuchiya ◽  
Roger Koenig-Robert ◽  
Jakob Hohwy

Abstract Perception results from the integration of incoming sensory information with pre-existing information available in the brain. In this EEG (electroencephalography) study we utilised the Hierarchical Frequency Tagging method to examine how such integration is modulated by expectation and attention. Using intermodulation (IM) components as a measure of non-linear signal integration, we show in three different experiments that both expectation and attention enhance integration between top-down and bottom-up signals. Based on multispectral phase coherence, we present two direct physiological measures to demonstrate the distinct yet related mechanisms of expectation and attention. Specifically, our results link expectation to the modulation of prediction signals and the integration of top-down and bottom-up information at lower levels of the visual hierarchy. Meanwhile, they link attention to the propagation of ascending signals and the integration of information at higher levels of the visual hierarchy. These results are consistent with the predictive coding account of perception.
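The logic of intermodulation components can be demonstrated with a toy signal: when two inputs tagged at frequencies f1 and f2 are combined linearly, the output spectrum contains energy only at f1 and f2, whereas a non-linear combination also produces energy at sums and differences such as f1 + f2. The frequencies and the multiplicative non-linearity below are arbitrary choices for illustration, not the study's tagging parameters.

```python
import numpy as np

# Toy demonstration that intermodulation (IM) components appear only
# when two frequency-tagged signals are combined non-linearly.
fs, T = 500.0, 10.0
t = np.arange(0, T, 1 / fs)
f1, f2 = 8.0, 13.0                      # two example tagging frequencies
s1, s2 = np.sin(2 * np.pi * f1 * t), np.sin(2 * np.pi * f2 * t)

linear = s1 + s2                        # linear superposition
nonlinear = s1 * s2                     # multiplicative (non-linear) integration

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    spectrum = np.abs(np.fft.rfft(sig))
    im = spectrum[np.argmin(np.abs(freqs - (f1 + f2)))]   # IM component at f1 + f2
    print(f"{name:9s} power at f1+f2 = {im:.1f}")
```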


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Frédéric Crevecoeur ◽  
Konrad P Kording

Humans perform saccadic eye movements two to three times per second. When doing so, the nervous system strongly suppresses sensory feedback for extended periods of time in comparison to movement time. Why does the brain discard so much visual information? Here we suggest that perceptual suppression may arise from efficient sensorimotor computations, assuming that perception and control are fundamentally linked. More precisely, we show theoretically that a Bayesian estimator should reduce the weight of sensory information around the time of saccades, as a result of signal dependent noise and of sensorimotor delays. Such reduction parallels the behavioral suppression occurring prior to and during saccades, and the reduction in neural responses to visual stimuli observed across the visual hierarchy. We suggest that saccadic suppression originates from efficient sensorimotor processing, indicating that the brain shares neural resources for perception and control.
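The core argument can be illustrated with a scalar Kalman update: the weight given to a visual measurement is the ratio of prior variance to total variance, so if sensory noise grows with signal (for example, with eye velocity during a saccade), that weight shrinks toward zero. The noise model and the numbers below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy scalar Kalman update showing how signal-dependent sensory noise
# reduces the weight (gain) given to visual measurements, as during a
# saccade. Numbers are illustrative only.
def kalman_gain(prior_var, meas_var):
    """Weight given to the sensory measurement in a scalar Kalman update."""
    return prior_var / (prior_var + meas_var)

prior_var = 1.0
eye_velocity = np.array([10.0, 100.0, 400.0])   # deg/s: fixation -> saccade
meas_var = 0.5 + 0.01 * eye_velocity**2         # assumed signal-dependent noise

for v, r in zip(eye_velocity, meas_var):
    print(f"velocity {v:5.0f} deg/s -> sensory weight {kalman_gain(prior_var, r):.2f}")
```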


2017 ◽  
Author(s):  
F. Crevecoeur ◽  
K. P. Kording

Abstract Humans perform saccadic eye movements two to three times per second. When doing so, the nervous system strongly suppresses sensory feedback for extended periods of time in comparison with the movement time. Why does the brain discard so much visual information? Here we suggest that perceptual suppression may arise from efficient sensorimotor computations, assuming that perception and control are fundamentally linked. More precisely, we show that a Bayesian estimator should reduce the weight of sensory information around the time of saccades, as a result of signal dependent noise and of sensorimotor delays. Such reduction parallels the behavioral suppression occurring prior to and during saccades, and the reduction in neural responses to visual stimuli observed across the visual hierarchy. We suggest that saccadic suppression originates from efficient sensorimotor processing, indicating that the brain shares neural resources for perception and control.


2020 ◽  
Vol 375 (1799) ◽  
pp. 20190705 ◽  
Author(s):  
Philippe G. Schyns ◽  
Jiayu Zhan ◽  
Rachael E. Jack ◽  
Robin A. A. Ince

The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for the studies of perception and categorization: stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that generally apply to memory studies: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue 'Memory reactivation: replaying events past, present and future'.
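The "stimulus variations" logic of SIR can be sketched with a reverse-correlation-style toy example: randomly sample stimulus features across trials, record categorization responses, and relate each feature to behaviour to recover the diagnostic contents. The simulated observer and the use of a plain correlation below are assumptions for illustration; the SIR framework itself relies on information-theoretic measures.

```python
import numpy as np

# Reverse-correlation-style sketch of relating random stimulus variations
# to behaviour, in the spirit of stimulus information representation (SIR).
rng = np.random.default_rng(2)
n_trials, n_features = 2000, 20
features = rng.integers(0, 2, size=(n_trials, n_features))  # feature present/absent

diagnostic = [3, 7]                       # features the simulated observer relies on
p_yes = 0.1 + 0.8 * features[:, diagnostic].mean(axis=1)
responses = rng.random(n_trials) < p_yes  # "recognized" vs "not recognized"

# Correlate each feature with behaviour to recover the diagnostic contents.
scores = [np.corrcoef(features[:, j], responses)[0, 1] for j in range(n_features)]
print("recovered diagnostic features:", np.argsort(scores)[-2:][::-1])
```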


1999 ◽  
Vol 13 (2) ◽  
pp. 117-125 ◽  
Author(s):  
Laurence Casini ◽  
Françoise Macar ◽  
Marie-Hélène Giard

Abstract The experiment reported here was aimed at determining whether the level of brain activity can be related to performance in trained subjects. Two tasks were compared: a temporal and a linguistic task. An array of four letters appeared on a screen. In the temporal task, subjects had to decide whether the letters remained on the screen for a short or a long duration as learned in a practice phase. In the linguistic task, they had to determine whether the four letters could form a word or not (anagram task). These tasks allowed us to compare the level of brain activity obtained for correct and incorrect responses. The current density measures recorded over prefrontal areas showed a relationship between performance and the level of activity in the temporal task only. The level of activity obtained with correct responses was lower than that obtained with incorrect responses. This suggests that good temporal performance could be the result of an efficacious, but economic, information-processing mechanism in the brain. In addition, the absence of this relation in the anagram task raises the question of whether it is specific to the processing of sensory information.

