Here's the twist: How the brain updates naturalistic event memories as our understanding of the past changes

2021 ◽  
Author(s):  
Asieh Zadbood ◽  
Samuel A. Nastase ◽  
Janice Chen ◽  
Kenneth A. Norman ◽  
Uri Hasson

The brain actively reshapes past memories in light of new incoming information. In the current study, we ask how the brain supports this updating process during the encoding and recall of naturalistic stimuli. One group of participants watched a movie ("The Sixth Sense") with a cinematic "twist" at the end that dramatically changed the interpretation of previous events. Next, participants were asked to verbally recall the movie events, taking into account the new "twist" information. Most participants updated their recall to incorporate the twist. Two additional groups recalled the movie without having to update their memories during recall: one group never saw the twist; another group was exposed to the twist prior to the beginning of the movie, and thus the twist information was incorporated both during encoding and recall. We found that providing participants with information about the twist beforehand altered neural response patterns during movie-viewing in the default mode network (DMN). Moreover, presenting participants with the twist at the end of the movie changed the neural representation of the previously encoded information during recall in a subset of DMN regions. Further evidence for this transformation was obtained by comparing the neural activation patterns during encoding and recall and correlating them with behavioral signatures of memory updating. Our results demonstrate that neural representations of past events encoded in the DMN are dynamically integrated with new information that reshapes our memory in natural contexts.
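As an illustration of the encoding-recall pattern comparison described in this abstract, here is a minimal sketch of one way such a similarity measure could be computed; the region, arrays, and group labels are simulated placeholders, not the study's data or pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def pattern_similarity(encoding_pattern, recall_pattern):
    """Pearson correlation between two spatial activation patterns
    (one value per voxel), a common index of encoding-recall similarity."""
    r, _ = pearsonr(encoding_pattern, recall_pattern)
    return r

# Hypothetical voxel patterns for one DMN region and one movie scene,
# for a "twist-at-end" participant and a "no-twist" participant.
rng = np.random.default_rng(0)
n_voxels = 500
encoding = rng.standard_normal(n_voxels)
recall_updated = 0.3 * encoding + rng.standard_normal(n_voxels)    # weaker reinstatement
recall_unchanged = 0.7 * encoding + rng.standard_normal(n_voxels)  # stronger reinstatement

print("twist group   :", pattern_similarity(encoding, recall_updated))
print("no-twist group:", pattern_similarity(encoding, recall_unchanged))
```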

2021 ◽  
Author(s):  
Leonardo Fernandino ◽  
Lisa L. Conant ◽  
Colin J. Humphries ◽  
Jeffrey R. Binder

The nature of the neural code underlying conceptual knowledge remains a major unsolved problem in cognitive neuroscience. Three main types of information have been proposed as candidates for the neural representations of lexical concepts: taxonomic (i.e., information about category membership and inter-category relations), distributional (i.e., information about patterns of word co-occurrence in natural language use), and experiential (i.e., information about sensory-motor, affective, and other features of phenomenal experience engaged during concept acquisition). In two experiments, we investigated the extent to which these three types of information are encoded in the neural activation patterns associated with hundreds of English nouns from a wide variety of conceptual categories. Participants made familiarity judgments on the meaning of written nouns while undergoing functional MRI. A high-resolution, whole-brain activation map was generated for each noun in each participant's native space. These word-specific activation maps were used to evaluate different representational spaces corresponding to the three types of information described above. In both studies, we found a striking advantage for experience-based models in most brain areas previously associated with concept representation. Partial correlation analyses revealed that only experiential information successfully predicted concept similarity structure when inter-model correlations were taken into account. This pattern of results was found independently for object concepts and event concepts. Our findings indicate that the neural representation of conceptual knowledge primarily encodes information about features of experience, and that taxonomic and distributional information, to the extent that they are represented in the brain, may rely on such an experience-based code.
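A minimal sketch of the kind of partial correlation analysis mentioned above, assuming flattened representational dissimilarity matrices (RDMs); the model values and word pairs are simulated, and the study's exact procedure is not reproduced.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def residualize(y, X):
    """Residuals of y after ordinary least-squares regression on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_spearman(neural_rdm, model_rdm, other_rdms):
    """Rank-based partial correlation between the neural RDM and one model RDM,
    controlling for the remaining model RDMs (all passed as flattened vectors)."""
    covariates = np.column_stack([rankdata(m) for m in other_rdms])
    y = residualize(rankdata(neural_rdm), covariates)
    x = residualize(rankdata(model_rdm), covariates)
    r, _ = pearsonr(x, y)
    return r

# Hypothetical flattened RDMs (one dissimilarity value per word pair)
rng = np.random.default_rng(1)
n_pairs = 300
experiential = rng.random(n_pairs)
taxonomic = rng.random(n_pairs)
distributional = rng.random(n_pairs)
neural = 0.6 * experiential + 0.2 * rng.random(n_pairs)

print(partial_spearman(neural, experiential, [taxonomic, distributional]))
```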


2013 ◽  
Vol 347-350 ◽  
pp. 2516-2520
Author(s):  
Jian Hua Jiang ◽  
Xu Yu ◽  
Zhi Xing Huang

Over the last decade, functional magnetic resonance imaging (fMRI) has become a primary tool for predicting brain activity. In recent research, the focus has shifted from pictures to words, with relatively successful results. In this paper, several typical machine learning methods are introduced, most of which combine fMRI data with word features. The semantic features (properties or factors) that support the neural representation of words show a degree of commonality across people. These methods are applied for prediction or classification.
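As a rough illustration of the kind of feature-based prediction such methods perform, here is a minimal sketch in which word-level semantic features are used to predict voxel responses; the feature set, dimensions, and data are invented for illustration and do not correspond to any particular study's pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Hypothetical data: a semantic feature vector for each of 60 words and the
# fMRI activation pattern evoked by each word (both simulated).
rng = np.random.default_rng(2)
n_words, n_features, n_voxels = 60, 25, 200
word_features = rng.standard_normal((n_words, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
brain_patterns = word_features @ true_weights + rng.standard_normal((n_words, n_voxels))

# Predict each voxel's response from the word's semantic features,
# evaluated with cross-validation (R^2 averaged over folds).
model = RidgeCV(alphas=[0.1, 1.0, 10.0])
scores = [cross_val_score(model, word_features, brain_patterns[:, v], cv=5).mean()
          for v in range(5)]  # first few voxels only, for brevity
print(scores)
```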


2020 ◽  
Author(s):  
David Badre ◽  
Apoorva Bhandari ◽  
Haley Keglovits ◽  
Atsushi Kikumoto

Cognitive control allows us to think and behave flexibly based on our context and goals. At the heart of theories of cognitive control is a control representation that enables the same input to produce different outputs contingent on contextual factors. In this review, we focus on an important property of the control representation’s neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. We will discuss the implications of this trade-off for cognitive control. We will then briefly review current neuroscience findings regarding the dimensionality of control representations in the brain, particularly the prefrontal cortex. We conclude by highlighting open questions and crucial directions for future research.
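One way representational dimensionality is often quantified in this literature is the participation ratio of the pattern covariance eigenvalues; the sketch below illustrates that estimator on simulated condition patterns and is not necessarily the measure the authors advocate.

```python
import numpy as np

def participation_ratio(patterns):
    """A common dimensionality estimate: (sum(lambda))^2 / sum(lambda^2) over the
    covariance eigenvalues.  Higher values mean variance is spread across more
    dimensions (greater separability, less generalizability)."""
    centered = patterns - patterns.mean(axis=0)
    eigvals = np.clip(np.linalg.eigvalsh(np.cov(centered, rowvar=False)), 0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Hypothetical task-condition patterns (conditions x neurons/voxels), simulated
rng = np.random.default_rng(3)
low_dim = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 100))  # ~3 dimensions
high_dim = rng.standard_normal((40, 100))                               # near full rank
print(participation_ratio(low_dim), participation_ratio(high_dim))
```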


2017 ◽  
Author(s):  
Heini Saarimäki ◽  
Lara Farzaneh Ejtehadian ◽  
Enrico Glerean ◽  
Iiro P. Jääskeläinen ◽  
Patrik Vuilleumier ◽  
...  

The functional organization of human emotion systems as well as their neuroanatomical basis and segregation in the brain remains unresolved. Here we used pattern classification and hierarchical clustering to reveal and characterize the organization of discrete emotion categories in the human brain. We induced 14 emotions (6 "basic", such as fear and anger; and 8 "non-basic", such as shame and gratitude) and a neutral state in participants using guided mental imagery while their brain activity was measured with functional magnetic resonance imaging (fMRI). Twelve out of 14 emotions could be reliably classified from the fMRI signals. All emotions engaged a multitude of brain areas, primarily in midline cortices including anterior and posterior cingulate and precuneus, in subcortical regions, and in motor regions including cerebellum and premotor cortex. Similarity of subjective emotional experiences was associated with similarity of the corresponding neural activation patterns. We conclude that the emotions included in the study have discrete neural bases characterized by specific, distributed activation patterns in widespread cortical and subcortical circuits, and we highlight both overlaps and differences in the locations of these patterns across emotions. Locally differentiated engagement of these globally shared circuits defines the unique neural fingerprint (activation pattern) and the corresponding subjective feeling associated with each emotion.
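A minimal sketch of the pattern classification step described above, with simulated imagery-trial patterns standing in for the study's fMRI data; the voxel counts, trial counts, and classifier choice are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one voxel pattern per imagery trial, labeled with one of
# 15 states (14 emotions + neutral).  All values are simulated.
rng = np.random.default_rng(4)
n_states, trials_per_state, n_voxels = 15, 20, 300
labels = np.repeat(np.arange(n_states), trials_per_state)
centroids = rng.standard_normal((n_states, n_voxels))
patterns = centroids[labels] + 2.0 * rng.standard_normal((len(labels), n_voxels))

# Linear classifier with cross-validation; chance level is 1/15.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1/15:.2f})")
```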


2021 ◽  
Author(s):  
John Philippe Paulus ◽  
Carlo Vignali ◽  
Marc N Coutanche

Associative inference, the process of drawing novel links between existing knowledge to rapidly integrate associated information, is supported by the hippocampus and neocortex. Within the neocortex, the medial prefrontal cortex (mPFC) has been implicated in the rapid cortical learning of new information that is congruent with an existing framework of knowledge, or schema. How the brain integrates associations to form inferences, and specifically how inferences are represented, is not well understood. In this study, we investigate how the brain uses schemas to facilitate memory integration in an associative inference paradigm (A-B-C-D). We conducted two event-related fMRI experiments in which participants retrieved previously learned direct (AB, BC, CD) and inferred (AC, AD) associations between word pairs for items that were schema-congruent or incongruent. Additionally, we investigated how two factors known to affect memory, a delay with sleep and reward, modulate the neural integration of associations within and between schemas. Schema congruency was found to benefit the integration of associates, but only when retrieval immediately followed learning. Representational similarity analysis (RSA) revealed that neural patterns of inferred pairs (AC) in the parahippocampal cortex (PHc), mPFC, and posterior hippocampus (posHPC) were more similar to their constituents (AB and BC) when the items were schema-congruent, suggesting that schemas facilitate the assimilation of paired items into a single inferred unit containing all associated elements. Furthermore, a delay with sleep, but not reward, impacted the assimilation of inferred pairs. Our findings reveal that the neural representations of overlapping associations are integrated into novel representations with the support of memory schemas.
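As a sketch of how the similarity between an inferred pair and its constituent direct pairs might be computed in an RSA of this kind, here is a minimal example on simulated region-of-interest patterns; the triad, region, and effect sizes are invented for illustration.

```python
import numpy as np

def fisher_z(r):
    """Fisher z-transform, commonly applied before averaging correlations."""
    return np.arctanh(r)

def constituent_similarity(ac_pattern, ab_pattern, bc_pattern):
    """Mean (z-transformed) correlation of an inferred pair's pattern (AC)
    with the patterns of its constituent direct pairs (AB and BC)."""
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return np.mean([fisher_z(corr(ac_pattern, ab_pattern)),
                    fisher_z(corr(ac_pattern, bc_pattern))])

# Hypothetical ROI patterns for one triad, simulated so that the schema-congruent
# inferred pair resembles its constituents more closely than the incongruent one.
rng = np.random.default_rng(5)
n_voxels = 250
ab, bc = rng.standard_normal(n_voxels), rng.standard_normal(n_voxels)
ac_congruent = 0.5 * (ab + bc) + rng.standard_normal(n_voxels)
ac_incongruent = rng.standard_normal(n_voxels)

print("congruent  :", constituent_similarity(ac_congruent, ab, bc))
print("incongruent:", constituent_similarity(ac_incongruent, ab, bc))
```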


2021 ◽  
Author(s):  
Rohan Saha ◽  
Jennifer Campbell ◽  
Janet F. Werker ◽  
Alona Fyshe

Infants develop rudimentary language skills and can begin to understand simple words well before their first birthday. This development has been demonstrated primarily with Event-Related Potential (ERP) techniques, which provide evidence of word comprehension in the infant brain. While these studies validate the presence of semantic representations of words (word meaning) in infants, they do not tell us about the mental processes involved in the manifestation of these semantic representations or about the content of the representations. To this end, we use a decoding approach in which we apply machine learning techniques to Electroencephalography (EEG) data to predict the semantic representations of words found in the brain activity of infants. We perform multiple analyses to explore word semantic representations in two groups of infants (9-month-olds and 12-month-olds). Our analyses show significantly above-chance decodability of overall word semantics, word animacy, and word phonetics. As we analyze brain activity, we observe that participants in both age groups show signs of word comprehension immediately after word onset, marked by our model's significantly above-chance word prediction accuracy. We also observed strong neural representations of word phonetics in the brain data for both age groups, some likely correlated with word decoding accuracy and others not. Lastly, we find that the neural representations of word semantics are similar in both infant age groups. Our results on the decodability of word semantics, phonetics, and animacy give us insights into the evolution of the neural representation of word meaning in infants.
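A minimal sketch of the decoding approach described here, assuming time-averaged EEG features and word-embedding targets; the data, dimensions, and scoring choice are placeholders rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

# Hypothetical data: time-averaged EEG features per word presentation and a
# word-embedding vector (word meaning) for each presented word.  All simulated.
rng = np.random.default_rng(6)
n_trials, n_eeg_features, embedding_dim = 120, 60, 50
eeg = rng.standard_normal((n_trials, n_eeg_features))
mapping = rng.standard_normal((n_eeg_features, embedding_dim))
embeddings = eeg @ mapping + rng.standard_normal((n_trials, embedding_dim))

# Decode word semantics: map EEG -> embedding, and score with the correlation
# between predicted and true vectors on held-out trials (one simple criterion).
scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(eeg):
    model = RidgeCV(alphas=[1.0, 10.0, 100.0]).fit(eeg[train], embeddings[train])
    pred = model.predict(eeg[test])
    scores.append(np.mean([np.corrcoef(p, t)[0, 1]
                           for p, t in zip(pred, embeddings[test])]))
print("mean held-out correlation:", np.mean(scores))
```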


2010 ◽  
Vol 22 (7) ◽  
pp. 1570-1582 ◽  
Author(s):  
Vaidehi S. Natu ◽  
Fang Jiang ◽  
Abhijit Narvekar ◽  
Shaiyan Keshvari ◽  
Volker Blanz ◽  
...  

We examined the neural response patterns for facial identity independent of viewpoint and for viewpoint independent of identity. Neural activation patterns for identity and viewpoint were collected in an fMRI experiment. Faces appeared in identity-constant blocks, with variable viewpoint, and in viewpoint-constant blocks, with variable identity. Pattern-based classifiers were used to discriminate neural response patterns for all possible pairs of identities and viewpoints. To increase the likelihood of detecting distinct neural activation patterns for identity, we tested maximally dissimilar "face"–"antiface" pairs and normal face pairs. Neural response patterns for four of six identity pairs, including the "face"–"antiface" pairs, were discriminated at levels above chance. A behavioral experiment showed accord between perceptual and neural discrimination, indicating that the classifier tapped a high-level visual identity code. Neural activity patterns across a broad span of ventral temporal (VT) cortex, including fusiform gyrus and lateral occipital areas (LOC), were required for identity discrimination. For viewpoint, five of six viewpoint pairs were discriminated neurally. Viewpoint discrimination was most accurate with a broad span of VT cortex, but the neural and perceptual discrimination patterns differed. Less accurate discrimination of viewpoint, more consistent with human perception, was found in right posterior superior temporal sulcus, suggesting redundant viewpoint codes optimized for different functions. This study provides the first evidence that neural activation patterns for identity and viewpoint can be dissociated.
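The pairwise pattern-based classification described above can be sketched roughly as follows, with simulated ventral-temporal block patterns standing in for the fMRI data; identity count, block count, and classifier are placeholders.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical ventral-temporal patterns: several blocks per facial identity,
# each block represented by one voxel pattern.  All values are simulated.
rng = np.random.default_rng(7)
n_identities, blocks_per_identity, n_voxels = 4, 10, 400
centroids = rng.standard_normal((n_identities, n_voxels))
patterns = {i: centroids[i] + 1.5 * rng.standard_normal((blocks_per_identity, n_voxels))
            for i in range(n_identities)}

# Pairwise discrimination: train/test a linear classifier for every identity pair;
# chance accuracy is 0.5.
for a, b in combinations(range(n_identities), 2):
    X = np.vstack([patterns[a], patterns[b]])
    y = np.array([0] * blocks_per_identity + [1] * blocks_per_identity)
    acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
    print(f"identity {a} vs {b}: accuracy = {acc:.2f}")
```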


2020 ◽  
Author(s):  
Tomoyasu Horikawa ◽  
Yukiyasu Kamitani

Visual image reconstruction from brain activity produces images whose features are consistent with the neural representations in the visual cortex given arbitrary visual instances [1–3], presumably reflecting the person's visual experience. Previous reconstruction studies have been concerned either with how stimulus images are faithfully reconstructed or with whether mentally imagined contents can be reconstructed in the absence of external stimuli. However, many lines of vision research have demonstrated that even stimulus perception is shaped both by stimulus-induced processes and top-down processes. In particular, attention (or the lack of it) is known to profoundly affect visual experience [4–8] and brain activity [9–21]. Here, to investigate how top-down attention impacts the neural representation of visual images and the reconstructions, we use a state-of-the-art method (deep image reconstruction [3]) to reconstruct visual images from fMRI activity measured while subjects attend to one of two images superimposed with equally weighted contrasts. Deep image reconstruction exploits the hierarchical correspondence between the brain and a deep neural network (DNN) to translate (decode) brain activity into DNN features of multiple layers, and then create images that are consistent with the decoded DNN features [3, 22, 23]. Using the deep image reconstruction model trained on fMRI responses to single natural images, we decode brain activity during the attention trials. Behavioral evaluations show that the reconstructions resemble the attended rather than the unattended images. The reconstructions can be modeled by superimposed images with contrasts biased to the attended one, which are comparable to the appearance of the stimuli under attention measured in a separate session. Attentional modulations are found in a broad range of hierarchical visual representations and mirror the brain–DNN correspondence. Our results demonstrate that top-down attention counters stimulus-induced responses and modulates neural representations to render reconstructions in accordance with subjective appearance. The reconstructions appear to reflect the content of visual experience and volitional control, opening a new possibility of brain-based communication and creation.
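A minimal sketch of the feature-decoding step that deep image reconstruction relies on (translating brain activity into DNN features); the image-generation step is omitted, and all data, dimensions, and the decoder choice here are simulated assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data for the feature-decoding step only: fMRI patterns for single
# natural images and the corresponding DNN-layer features of those images.
rng = np.random.default_rng(8)
n_images, n_voxels, n_dnn_units = 200, 500, 100
fmri = rng.standard_normal((n_images, n_voxels))
weights = rng.standard_normal((n_voxels, n_dnn_units))
dnn_features = fmri @ weights + rng.standard_normal((n_images, n_dnn_units))

# Train voxel-to-feature decoders on single-image trials, then apply them to new
# activity (e.g., attention trials); reconstruction would then optimize an image
# to match the decoded features, which is not shown here.
X_train, X_test, y_train, y_test = train_test_split(fmri, dnn_features, random_state=0)
decoder = Ridge(alpha=10.0).fit(X_train, y_train)
decoded = decoder.predict(X_test)
print("mean feature-decoding correlation:",
      np.mean([np.corrcoef(decoded[:, u], y_test[:, u])[0, 1]
               for u in range(n_dnn_units)]))
```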


2021 ◽  
Author(s):  
Molly Simmonite ◽  
Thad A Polk

According to the neural dedifferentiation hypothesis, age-related reductions in the distinctiveness of neural representations contribute to sensory, cognitive, and motor declines associated with aging: neural activity associated with different stimulus categories becomes more confusable with age and behavioral performance suffers as a result. Initial studies investigated age-related dedifferentiation in the visual cortex, but subsequent research has revealed declines in other brain regions, suggesting that dedifferentiation may be a general feature of the aging brain. In the present study, we used functional magnetic resonance imaging to investigate age-related dedifferentiation in the visual, auditory, and motor cortices. Participants were 58 young adults and 79 older adults. The similarity of activation patterns across different blocks of the same condition was calculated (within-condition correlation, a measure of reliability), as was the similarity of activation patterns elicited by different conditions (between-condition correlation, a measure of confusability). Neural distinctiveness was defined as the difference between the mean within- and between-condition similarity. We found age-related reductions in neural distinctiveness in the visual, auditory, and motor cortices, which were driven by both decreases in within-condition similarity and increases in between-condition similarity. Neural distinctiveness was also significantly positively correlated across regions. These cross-region correlations were driven by within-condition similarities, a finding that indicates that declines in the reliability of neural activity appear to occur in tandem across the brain. These findings suggest that the changes in neural distinctiveness that occur in healthy aging result from changes in both the reliability and confusability of patterns of neural activity.
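The distinctiveness measure defined above (mean within-condition similarity minus mean between-condition similarity) can be sketched as follows; the two conditions, block counts, and patterns are simulated placeholders.

```python
import numpy as np

def neural_distinctiveness(blocks_a, blocks_b):
    """Mean within-condition correlation minus mean between-condition correlation,
    computed over blocks of voxel patterns for two conditions."""
    corr = lambda x, y: np.corrcoef(x, y)[0, 1]
    within = [corr(x, y) for blocks in (blocks_a, blocks_b)
              for i, x in enumerate(blocks) for y in blocks[i + 1:]]
    between = [corr(x, y) for x in blocks_a for y in blocks_b]
    return np.mean(within) - np.mean(between)

# Hypothetical visual-cortex patterns for two stimulus conditions (e.g., faces
# vs. houses), several blocks each; values are simulated.
rng = np.random.default_rng(9)
n_blocks, n_voxels = 6, 300
face_template, house_template = rng.standard_normal((2, n_voxels))
faces = [face_template + rng.standard_normal(n_voxels) for _ in range(n_blocks)]
houses = [house_template + rng.standard_normal(n_voxels) for _ in range(n_blocks)]
print("distinctiveness:", neural_distinctiveness(faces, houses))
```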


2017 ◽  
Author(s):  
J. Brendan Ritchie ◽  
David Michael Kaplan ◽  
Colin Klein

Since its introduction, multivariate pattern analysis (MVPA), or "neural decoding", has transformed the field of cognitive neuroscience. Underlying its influence is a crucial inference, which we call the Decoder's Dictum: if information can be decoded from patterns of neural activity, then this provides strong evidence about what information those patterns represent. Although the Dictum is a widely held and well-motivated principle in decoding research, it has received scant philosophical attention. We critically evaluate the Dictum, arguing that it is false: decodability is a poor guide for revealing the content of neural representations. However, we also suggest how the Dictum can be improved on, in order to better justify inferences about neural representation using MVPA.

