Neural differentiation of incorrectly predicted memories

2016 ◽  
Author(s):  
Ghootae Kim ◽  
Kenneth A. Norman ◽  
Nicholas B. Turk-Browne

Abstract When an item is predicted in a particular context but the prediction is violated, memory for that item is weakened (Kim et al., 2014). Here we explore what happens when such previously mispredicted items are later re-encountered. According to prior neural network simulations, this sequence of events - misprediction and subsequent restudy - should lead to differentiation of the item's neural representation from the previous context (on which the misprediction was based). Specifically, misprediction weakens connections in the representation to features shared with the previous context, and restudy allows new features to be incorporated into the representation that are not shared with the previous context. This cycle of misprediction and restudy should have the net effect of moving the item's neural representation away from the neural representation of the previous context. We tested this hypothesis using fMRI, by tracking changes in item-specific BOLD activity patterns in the hippocampus, a key structure for representing memories and generating predictions. In left CA2/3/DG, we found greater neural differentiation for items that were repeatedly mispredicted and restudied compared to items from a control condition that was identical except without misprediction. We also measured prediction strength in a trial-by-trial fashion and found that greater misprediction for an item led to more differentiation, further supporting our hypothesis. Thus, the consequences of prediction error go beyond memory weakening: if the mispredicted item is restudied, the brain adaptively differentiates its memory representation to improve the accuracy of subsequent predictions and to shield it from further weakening. Significance Competition between overlapping memories leads to weakening of non-target memories over time, making it easier to access target memories. However, a non-target memory in one context might become a target memory in another context. 
How do such memories get re-strengthened without increasing competition again? Computational models suggest that the brain handles this by reducing neural connections to the previous context and adding connections to new features that were not part of the previous context. The result is neural differentiation away from the previous context. Here we provide support for this theory, using fMRI to track neural representations of individual memories in the hippocampus and how they change with learning.
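As an editorial illustration only (this is not the authors' analysis pipeline, and the numbers below are hypothetical voxel patterns), neural differentiation of this kind is commonly quantified as a drop in pattern similarity between an item's activity pattern and that of its previous context across sessions:

```python
# Illustrative sketch with made-up data: differentiation = decrease in
# item-to-context pattern similarity after misprediction and restudy.

def pearson(x, y):
    """Pearson correlation between two equal-length voxel patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

context = [0.2, 0.8, 0.5, 0.1, 0.9, 0.4]      # previous-context pattern
item_pre = [0.3, 0.7, 0.6, 0.2, 0.8, 0.5]     # item pattern before restudy
item_post = [0.9, 0.1, 0.2, 0.8, 0.3, 0.7]    # item pattern after misprediction + restudy

# positive value = the item moved away from its previous context
differentiation = pearson(context, item_pre) - pearson(context, item_post)
```

With these toy patterns the item is initially highly similar to its context and anti-correlated with it afterwards, so the differentiation score is positive.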

2021 ◽  
Vol 376 (1821) ◽  
pp. 20190765 ◽  
Author(s):  
Giovanni Pezzulo ◽  
Joshua LaPalme ◽  
Fallon Durant ◽  
Michael Levin

Nervous systems’ computational abilities are an evolutionary innovation, specializing and speed-optimizing ancient biophysical dynamics. Bioelectric signalling originated in cells' communication with the outside world and with each other, enabling cooperation towards adaptive construction and repair of multicellular bodies. Here, we review the emerging field of developmental bioelectricity, which links the field of basal cognition to state-of-the-art questions in regenerative medicine, synthetic bioengineering and even artificial intelligence. One of the predictions of this view is that regeneration and regulative development can restore correct large-scale anatomies from diverse starting states because, like the brain, they exploit bioelectric encoding of distributed goal states—in this case, pattern memories. We propose a new interpretation of recent stochastic regenerative phenotypes in planaria, by appealing to computational models of memory representation and processing in the brain. Moreover, we discuss novel findings showing that bioelectric changes induced in planaria can be stored in tissue for over a week, thus revealing that somatic bioelectric circuits in vivo can implement a long-term, re-writable memory medium. A consideration of the mechanisms, evolution and functionality of basal cognition makes novel predictions and provides an integrative perspective on the evolution, physiology and biomedicine of information processing in vivo . This article is part of the theme issue ‘Basal cognition: multicellularity, neurons and the cognitive lens’.


2018 ◽  
Author(s):  
Kristjan Kalm ◽  
Dennis Norris

Abstract We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as are common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
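The capacity limit on overlapping sequences can be illustrated with a toy contrast (hypothetical items, not the authors' models): a pairwise associative learner stores item-to-successor links, which collide as soon as two sequences share an item, whereas a recoding learner that stores each sequence as its own chunk recalls both without interference.

```python
# Toy contrast: pairwise associations vs chunk recoding for overlapping sequences.

def train_associative(sequences):
    """Store item -> set-of-successors links across all sequences."""
    weights = {}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            weights.setdefault(a, set()).add(b)
    return weights

def recall_associative(weights, start, length):
    """Chain through successor links; fail (None) if any step is ambiguous."""
    out, cur = [start], start
    for _ in range(length - 1):
        successors = weights.get(cur, set())
        if len(successors) != 1:   # shared item -> two competing successors
            return None
        cur = next(iter(successors))
        out.append(cur)
    return out

sequences = [["A", "B", "C", "D"], ["X", "B", "D", "C"]]  # overlap at "B"
weights = train_associative(sequences)

# Recoding: each sequence is stored as a single chunk keyed by its cue.
chunks = {seq[0]: seq for seq in sequences}
```

Because "B" now points to both "C" and "D", associative recall of either sequence fails, while the chunked store returns each sequence intact from its cue.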


Author(s):  
Andrew J. Anderson ◽  
Douwe Kiela ◽  
Stephen Clark ◽  
Massimo Poesio

Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skipgram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based model for the most abstract nouns. More generally, this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
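As a schematic of the decoding logic only (the vectors below are invented three-dimensional stand-ins, not word2vec or fMRI data), one can decode a brain pattern by finding the concept whose model vector lies closest to it after the pattern has been mapped into the model's space:

```python
# Minimal nearest-neighbour decoding sketch with hypothetical vectors.

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Stand-ins for semantic model vectors (e.g. from a text-based model).
model_vectors = {
    "dog":     [0.9, 0.1, 0.3],
    "justice": [0.1, 0.8, 0.7],
}

def decode(mapped_brain_pattern):
    """Return the word whose model vector best matches the pattern."""
    return min(model_vectors,
               key=lambda w: euclidean(model_vectors[w], mapped_brain_pattern))
```

A pattern near the "dog" vector decodes as "dog"; one near the "justice" vector decodes as "justice". Real studies learn the brain-to-model mapping (e.g. by regression) and evaluate it with held-out pairwise tests.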


2021 ◽  
Vol 17 (5) ◽  
pp. e1008969
Author(s):  
Kristjan Kalm ◽  
Dennis Norris

We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal model. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as are common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.


2022 ◽  
Vol 12 ◽  
Author(s):  
Inês Hipólito

This paper proposes an account of neurocognitive activity that does not leverage the notion of neural representation. Neural representation is a concept that results from assuming that the properties of the models used in computational cognitive neuroscience (e.g., information, representation) must literally exist in the system being modelled (e.g., the brain). Computational models are important tools for testing a theory about how the collected data (e.g., behavioural or neuroimaging) have been generated. While the usefulness of computational models is unquestionable, it does not follow that neurocognitive activity must literally entail the properties construed in the model (e.g., information, representation). While this assumption is present in computationalist accounts, it is not held across the board in neuroscience. In the final section, the paper offers a dynamical account of neurocognitive activity using Dynamic Causal Modelling (DCM), which combines dynamical systems theory (DST) mathematical formalisms with the theoretical contextualisation provided by Embodied and Enactive Cognitive Science (EECS).


Author(s):  
Elizabeth Musz ◽  
Sharon L. Thompson-Schill

Semantic memory is composed of one’s accumulated world knowledge. This includes stored factual information about real-world objects and animals, which enables one to recognize and interact with the things in one’s environment. How is this semantic information organized, and where is it stored in the brain? Newly developed functional neuroimaging (fMRI) methods have provided exciting and innovative approaches to studying these questions. In particular, several recent fMRI investigations have examined the neural bases of semantic knowledge using similarity-based approaches. In similarity models, data from direct (i.e., neural) and indirect (i.e., subjective, psychological) measurements are interpreted as proximity data that provide information about the relationships among object concepts in an abstract, high-dimensional space. Concepts are encoded as points in this conceptual space, such that the semantic relatedness between two concepts is determined by their distance from one another. Using this approach, neuroimaging studies have offered compelling insights into several open-ended questions about how object concepts are represented in the brain. This chapter briefly describes how similarity spaces are computed from both behavioral data and spatially distributed fMRI activity patterns. It then reviews empirical reports that relate observed neural similarity spaces to various models of semantic similarity, and examines how these methods have both shaped and informed our current understanding of the neural representation of conceptual information about real-world objects.
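A minimal sketch of the similarity-space computation described above (with invented voxel patterns, not data from any study): build a neural dissimilarity matrix by taking 1 minus the Pearson correlation for each pair of condition patterns, so that semantically related concepts land closer together than unrelated ones.

```python
# Toy representational-dissimilarity computation from voxel patterns.

def pearson(x, y):
    """Pearson correlation between two equal-length voxel patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def neural_rdm(patterns):
    """Pairwise dissimilarity (1 - r) over all concept pairs."""
    names = sorted(patterns)
    return {(a, b): 1 - pearson(patterns[a], patterns[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

patterns = {                      # hypothetical voxel patterns per concept
    "cat":    [0.9, 0.2, 0.8, 0.1],
    "dog":    [0.8, 0.3, 0.7, 0.2],
    "hammer": [0.1, 0.9, 0.2, 0.8],
}
rdm = neural_rdm(patterns)
```

In this toy space "cat" and "dog" are far more similar to each other than either is to "hammer"; studies then correlate such neural distances with behavioral or model-derived semantic distances.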


2021 ◽  
pp. 1-15
Author(s):  
Konstantinos Bromis ◽  
Petar P. Raykov ◽  
Leah Wickens ◽  
Warrick Roseboom ◽  
Chris M. Bird

Abstract An episodic memory is specific to an event that occurred at a particular time and place. However, the elements that comprise the event—the location, the people present, and their actions and goals—might be shared with numerous other similar events. Does the brain preferentially represent certain elements of a remembered event? If so, which elements dominate its neural representation: those that are shared across similar events, or the novel elements that define a specific event? We addressed these questions by using a novel experimental paradigm combined with fMRI. Multiple events were created involving conversations between two individuals using the format of a television chat show. Chat show “hosts” occurred repeatedly across multiple events, whereas the “guests” were unique to only one event. Before learning the conversations, participants were scanned while viewing images or names of the (famous) individuals to be used in the study to obtain person-specific activity patterns. After learning all the conversations over a week, participants were scanned for a second time while they recalled each event multiple times. We found that during recall, person-specific activity patterns within the posterior midline network were reinstated for the hosts of the shows but not the guests, and that reinstatement of the hosts was significantly stronger than the reinstatement of the guests. These findings demonstrate that it is the more generic, familiar, and predictable elements of an event that dominate its neural representation compared with the more idiosyncratic, event-defining, elements.


Antioxidants ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 229
Author(s):  
JunHyuk Woo ◽  
Hyesun Cho ◽  
YunHee Seol ◽  
Soon Ho Kim ◽  
Chanhyeok Park ◽  
...  

The brain needs more energy than other organs in the body. Mitochondria are the generators of vital power in the living organism. Not only do mitochondria sense signals from outside the cell, but they also orchestrate the cascade of subcellular events by supplying adenosine-5′-triphosphate (ATP), the biochemical energy currency. Impaired mitochondrial function and oxidative stress are known to contribute to, or cause, neuronal damage and degeneration in the brain. This mini-review addresses how mitochondrial dysfunction and oxidative stress are associated with the pathogenesis of neurodegenerative disorders, including Alzheimer’s disease, amyotrophic lateral sclerosis, Huntington’s disease, and Parkinson’s disease. In addition, we discuss state-of-the-art computational models of mitochondrial function in relation to oxidative stress and neurodegeneration. Together, a better understanding of brain disease-specific mitochondrial dysfunction and oxidative stress can pave the way to developing antioxidant therapeutic strategies to support neuronal activity and prevent neurodegeneration.


Cells ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 883 ◽  
Author(s):  
Debajyoti Chowdhury ◽  
Chao Wang ◽  
Ai-Ping Lu ◽  
Hai-Long Zhu

Circadian rhythms have a deep impact on most aspects of physiology. In most organisms, especially mammals, biological rhythms are maintained by the indigenous circadian clockwork around geophysical time (~24 h). These rhythms originate inside cells, where several core components interconnected through transcriptional/translational feedback loops generate tightly controlled molecular oscillations. These oscillations exert temporal control over many fundamental physiological activities, helping to coordinate the body’s internal time with the external environment. The mammalian circadian clockwork is composed of a hierarchy of oscillators, which play roles at molecular, cellular, and higher levels. The master oscillator develops in the hypothalamic suprachiasmatic nucleus of the brain, where it acts as the core pacemaker and drives the transmission of oscillation signals, which are distributed across peripheral tissues through humoral and neural connections. Synchronization between the master oscillator and tissue-specific oscillators offers overall temporal stability to mammals. Recent technological advancements allow us to study circadian rhythms at a dynamic scale and at the systems level. Here, we outline the current understanding of the circadian clockwork in terms of molecular mechanisms and interdisciplinary concepts, and we focus on the importance of an integrative approach to decoding several crucial intricacies. This review indicates the emergence of such a comprehensive approach, which should accelerate circadian research with more innovative strategies, such as developing evidence-based chronotherapeutics to restore de-synchronized circadian rhythms.


2016 ◽  
Vol 371 (1705) ◽  
pp. 20160278 ◽  
Author(s):  
Nikolaus Kriegeskorte ◽  
Jörn Diedrichsen

High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in functional magnetic resonance imaging (fMRI) voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. To avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel particular implementation of this approach, called probabilistic representational similarity analysis (pRSA) with MMs, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognize the data-generating model in each case. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’.
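The core measurement-model point, that how voxels sample neuronal activity can reshape apparent dissimilarities, can be shown with a deliberately extreme toy case (invented patterns, not the pRSA implementation): two fine-grained patterns that are maximally different become indistinguishable once each simulated voxel locally averages adjacent neural units.

```python
# Toy measurement model: local averaging of neuronal units into voxels.

def local_average(pattern, width=2):
    """Each simulated voxel averages `width` adjacent neural units."""
    return [sum(pattern[i:i + width]) / width
            for i in range(0, len(pattern) - width + 1, width)]

def sq_distance(u, v):
    """Squared Euclidean distance between two patterns."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

neural_a = [1.0, 0.0, 1.0, 0.0]   # fine-grained pattern, condition A
neural_b = [0.0, 1.0, 0.0, 1.0]   # complementary pattern, condition B

true_dissim = sq_distance(neural_a, neural_b)
measured_dissim = sq_distance(local_average(neural_a), local_average(neural_b))
```

Here the true dissimilarity is large, but the voxel-level dissimilarity collapses to zero, which is why the authors compare models through a generative BCM-MM rather than treating measured dissimilarities as direct estimates of neuronal ones.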

