Shared experience, shared memory: a common structure for brain activity during naturalistic recall

2016 ◽  
Author(s):  
Janice Chen ◽  
Yuan Chang Leong ◽  
Kenneth A Norman ◽  
Uri Hasson

Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
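The core measurement here, comparing event-specific spatial patterns across movie viewing and recall, reduces to correlating voxel-pattern vectors. Below is a toy sketch with synthetic data, not the authors' pipeline; the array sizes and noise level are arbitrary illustrations.

```python
import numpy as np

def pattern_correlation(a, b):
    """Pearson correlation between two spatial activity patterns (1-D voxel vectors)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
movie_pattern = rng.standard_normal(500)                          # event pattern during movie viewing
recall_pattern = movie_pattern + 0.5 * rng.standard_normal(500)   # noisy reinstatement at recall
other_event = rng.standard_normal(500)                            # pattern from a different event

same = pattern_correlation(movie_pattern, recall_pattern)  # movie-vs.-recall, same event
diff = pattern_correlation(movie_pattern, other_event)     # movie-vs.-recall, different event
# Reinstatement shows up as same-event correlation exceeding different-event correlation.
```

The same logic applied between two people's recall patterns gives the recall-vs.-recall similarity described in the abstract.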

2022 ◽  
pp. 1-16
Author(s):  
Jamal A. Williams ◽  
Elizabeth H. Margulis ◽  
Samuel A. Nastase ◽  
Janice Chen ◽  
Uri Hasson ◽  
...  

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
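The comparison between model boundaries and human annotations can be sketched as a simple tolerance-based match score. This is an illustrative simplification, not the authors' statistical procedure; the boundary positions and tolerance below are invented.

```python
def boundary_match_score(model_bounds, human_bounds, tol=3):
    """Fraction of human-annotated boundaries falling within `tol`
    time points of some model-identified boundary."""
    hits = sum(
        any(abs(h - m) <= tol for m in model_bounds) for h in human_bounds
    )
    return hits / len(human_bounds)

human = [10, 25, 40, 55]        # hypothetical annotated event boundaries
model_good = [11, 24, 41, 56]   # HMM boundaries near every annotation
model_poor = [5, 30, 48, 70]    # HMM boundaries that mostly miss

good = boundary_match_score(model_good, human)
poor = boundary_match_score(model_poor, human)
```

In practice such a score would be compared against a null distribution of randomly placed boundaries to assess significance.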


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Meir Meshulam ◽  
Liat Hasenfratz ◽  
Hanna Hillman ◽  
Yun-Fei Liu ◽  
Mai Nguyen ◽  
...  

Abstract Despite major advances in measuring human brain activity during and after educational experiences, it is unclear how learners internalize new content, especially in real-life and online settings. In this work, we introduce a neural approach to predicting and assessing learning outcomes in a real-life setting. Our approach hinges on the idea that successful learning involves forming the right set of neural representations, which are captured in canonical activity patterns shared across individuals. Specifically, we hypothesized that learning is mirrored in neural alignment: the degree to which an individual learner’s neural representations match those of experts, as well as those of other learners. We tested this hypothesis in a longitudinal functional MRI study that regularly scanned college students enrolled in an introduction to computer science course. We additionally scanned graduate student experts in computer science. We show that alignment among students successfully predicts overall performance in a final exam. Furthermore, within individual students, we find better learning outcomes for concepts that evoke better alignment with experts and with other students, revealing neural patterns associated with specific learned concepts in individuals.
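The neural-alignment idea can be sketched as correlating a learner's concept-evoked pattern with an expert's. The sketch below uses synthetic patterns; the voxel count, noise levels, and "strong/weak learner" labels are illustrative assumptions, not values from the study.

```python
import numpy as np

def alignment(student, expert):
    """Student-to-expert neural alignment: Pearson correlation of concept patterns."""
    return float(np.corrcoef(student, expert)[0, 1])

rng = np.random.default_rng(1)
n_voxels = 300
expert_pattern = rng.standard_normal(n_voxels)  # canonical expert representation of one concept

# Hypothetical students: one whose pattern tracks the expert's, one who diverges.
strong_learner = expert_pattern + 0.6 * rng.standard_normal(n_voxels)
weak_learner = 0.2 * expert_pattern + rng.standard_normal(n_voxels)

strong = alignment(strong_learner, expert_pattern)
weak = alignment(weak_learner, expert_pattern)
# Higher alignment would, on the paper's hypothesis, predict better exam performance.
```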


Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Katherine Storrs ◽  
Adrian Lloyd Williams ◽  
Carolyn McGettigan ◽  
...  

Abstract Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (deep neural network, trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
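Representational similarity analysis compares the pairwise dissimilarity structure of neural patterns with that of a candidate model. A minimal numpy sketch follows; the synthetic "identity codes" and sizes are stand-ins, not the study's stimuli or models.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between
    each pair of condition patterns (rows = conditions)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Similarity of two RDMs: correlation of their lower triangles."""
    idx = np.tril_indices_from(rdm_a, k=-1)
    return float(np.corrcoef(rdm_a[idx], rdm_b[idx])[0, 1])

rng = np.random.default_rng(2)
n_ids, n_voxels = 6, 200
identity_codes = rng.standard_normal((n_ids, n_voxels))            # hypothetical identity model
brain_patterns = identity_codes + 0.3 * rng.standard_normal((n_ids, n_voxels))

model_rdm = rdm(identity_codes)    # model of the differences between identities
neural_rdm = rdm(brain_patterns)   # measured region's representational geometry
fit = compare_rdms(neural_rdm, model_rdm)
```

Published RSA work typically uses rank (Spearman) correlation between RDMs; plain Pearson is used here only to keep the sketch dependency-free.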


2018 ◽  
Author(s):  
Juan Linde-Domingo ◽  
Matthias S. Treder ◽  
Casper Kerren ◽  
Maria Wimber

Abstract Remembering is a reconstructive process. Surprisingly little is known about how the reconstruction of a memory unfolds in time in the human brain. We used reaction times and EEG time-series decoding to test the hypothesis that the information flow is reversed when an event is reconstructed from memory, compared to when the same event is initially being perceived. Across three experiments, we found highly consistent evidence supporting such a reversed stream. When seeing an object, low-level perceptual features were discriminated faster behaviourally, and could be decoded from brain activity earlier, than high-level conceptual features. This pattern reversed during associative memory recall, with reaction times and brain activity patterns now indicating that conceptual information was reconstructed more rapidly than perceptual details. Our findings support a neurobiologically plausible model of human memory, suggesting that memory retrieval is a hierarchical, multi-layered process that prioritizes semantically meaningful information over perceptual detail.
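The latency comparison at the heart of this design, when does each feature type first become decodable, can be sketched with synthetic time-resolved accuracy curves. Everything below (curve shapes, onsets, thresholds) is invented to illustrate the analysis logic, not to reproduce the EEG results.

```python
import numpy as np

def first_decodable_time(acc, thresh=0.75):
    """Earliest time point at which decoding accuracy exceeds threshold, or None."""
    above = np.flatnonzero(acc > thresh)
    return int(above[0]) if above.size else None

# Hypothetical accuracy curves (one classifier per EEG time point), modeled as
# sigmoids with different onsets. During perception, low-level features become
# decodable before conceptual ones; during recall the order reverses.
t = np.arange(100)
perception_lowlevel = 0.5 + 0.4 / (1 + np.exp(-(t - 20) / 3))
perception_concept  = 0.5 + 0.4 / (1 + np.exp(-(t - 50) / 3))
recall_concept      = 0.5 + 0.4 / (1 + np.exp(-(t - 30) / 3))
recall_lowlevel     = 0.5 + 0.4 / (1 + np.exp(-(t - 60) / 3))

p_low = first_decodable_time(perception_lowlevel)
p_con = first_decodable_time(perception_concept)
r_con = first_decodable_time(recall_concept)
r_low = first_decodable_time(recall_lowlevel)
# Reversed stream: p_low < p_con during perception, but r_con < r_low during recall.
```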


2018 ◽  
Author(s):  
Christopher Baldassano ◽  
Uri Hasson ◽  
Kenneth A. Norman

Abstract Understanding movies and stories requires maintaining a high-level situation model that abstracts away from perceptual details to describe the location, characters, actions, and causal relationships of the currently unfolding event. These models are built not only from information present in the current narrative, but also from prior knowledge about schematic event scripts, which describe typical event sequences encountered throughout a lifetime. We analyzed fMRI data from 44 human subjects presented with sixteen three-minute stories, consisting of four schematic events drawn from two different scripts (eating at a restaurant or going through the airport). Aside from this shared script structure, the stories varied widely in terms of their characters and storylines, and were presented in two highly dissimilar formats (audiovisual clips or spoken narration). One group was presented with the stories in an intact temporal sequence, while a separate control group was presented with the same events in scrambled order. Regions including the posterior medial cortex, medial prefrontal cortex (mPFC), and superior frontal gyrus exhibited schematic event patterns that generalized across stories, subjects, and modalities. Patterns in mPFC were also sensitive to overall script structure, with temporally scrambled events evoking weaker schematic representations. Using a Hidden Markov Model, patterns in these regions can predict the script (restaurant vs. airport) of unlabeled data with high accuracy, and can be used to temporally align multiple stories with a shared script. These results extend work on the perception of controlled, artificial schemas in human and animal experiments to naturalistic perception of complex narrative stimuli.

Significance Statement: In almost all situations we encounter in our daily lives, we are able to draw on our schematic knowledge about what typically happens in the world to better perceive and mentally represent our ongoing experiences. 
In contrast to previous studies that investigated schematic cognition using simple, artificial associations, we measured brain activity from subjects watching movies and listening to stories depicting restaurant or airport experiences. Our results reveal a network of brain regions that is sensitive to the shared temporal structure of these naturalistic situations. These regions abstract away from the particular details of each story, activating a representation of the general type of situation being perceived.
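The script-classification result can be illustrated as template matching: score an unlabeled story's sequence of event patterns against each script's event templates and pick the better fit. This is a correlation-based simplification of the paper's HMM approach, with entirely synthetic patterns.

```python
import numpy as np

def classify_script(story_patterns, templates):
    """Assign an unlabeled story to the script whose event templates it best
    matches, scoring by the mean correlation of corresponding events."""
    scores = {
        name: np.mean([np.corrcoef(story_patterns[i], tmpl[i])[0, 1]
                       for i in range(len(tmpl))])
        for name, tmpl in templates.items()
    }
    return max(scores, key=scores.get)

rng = np.random.default_rng(3)
n_events, n_voxels = 4, 150
restaurant_schema = rng.standard_normal((n_events, n_voxels))  # one pattern per schematic event
airport_schema = rng.standard_normal((n_events, n_voxels))
templates = {"restaurant": restaurant_schema, "airport": airport_schema}

# A new, noisy story that follows the restaurant script.
new_story = restaurant_schema + 0.8 * rng.standard_normal((n_events, n_voxels))
label = classify_script(new_story, templates)
```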


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lucy L. W. Owen ◽  
Thomas H. Chang ◽  
Jeremy R. Manning

Abstract Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain’s functional connectome that display homologous lower-level dynamic correlations. Here we test the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We develop an approach to estimating high-order dynamic correlations in timeseries data, and we apply the approach to neuroimaging data collected as human participants either listen to a ten-minute story or listen to a temporally scrambled version of the story. We train across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We find that classifiers trained to decode from high-order dynamic correlations yield the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yielded the best performance when they were trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.
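The idea of "correlations of correlations" can be sketched directly: compute sliding-window (first-order) dynamic correlations between pairs of regional timeseries, then correlate those coupling timeseries with each other (second-order). The authors' actual method uses a more sophisticated kernel-based estimator; the sketch below, on random data, only shows the construction.

```python
import numpy as np

def sliding_corr(x, y, w):
    """First-order dynamic correlation: Pearson r of x and y in each sliding window."""
    return np.array([np.corrcoef(x[t:t + w], y[t:t + w])[0, 1]
                     for t in range(len(x) - w + 1)])

rng = np.random.default_rng(4)
T, w = 300, 30
a, b, c, d = rng.standard_normal((4, T))  # four regional activity timeseries

# First-order: how the a-b and c-d couplings fluctuate over time.
r_ab = sliding_corr(a, b, w)
r_cd = sliding_corr(c, d, w)

# Second-order: do the a-b and c-d subgraphs strengthen and weaken together?
second_order = float(np.corrcoef(r_ab, r_cd)[0, 1])
```

Iterating this construction yields the higher orders the paper feeds into its timepoint classifiers.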


2018 ◽  
Author(s):  
Mathieu Bourguignon ◽  
Martijn Baart ◽  
Efthymia C. Kapnoula ◽  
Nicola Molinaro

Abstract Lip-reading is crucial to understand speech in challenging conditions. Neuroimaging investigations have revealed that lip-reading activates auditory cortices in individuals covertly repeating absent—but known—speech. However, in real life, one usually has no detailed information about the content of upcoming speech. Here we show that during silent lip-reading of unknown speech, activity in auditory cortices entrains more to absent speech than to seen lip movements at frequencies below 1 Hz. This entrainment to absent speech was characterized by a speech-to-brain delay of 50–100 ms, as when actually listening to speech. We also observed entrainment to lip movements at the same low frequency in the right angular gyrus, an area involved in processing biological motion. These findings demonstrate that the brain can synthesize high-level features of absent unknown speech sounds from lip-reading that can facilitate the processing of the auditory input. Such a synthesis process may help explain well-documented bottom-up perceptual effects.
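The speech-to-brain delay estimate amounts to finding the lag at which brain activity best tracks the speech envelope. A lagged-correlation sketch on synthetic signals follows; the sampling rate, delay, and noise level are illustrative assumptions (the actual study used coherence-based spectral methods).

```python
import numpy as np

def best_lag(brain, speech, max_lag):
    """Lag (in samples) at which brain activity best correlates with the
    speech envelope; positive means the brain follows the speech."""
    corrs = [np.corrcoef(brain[lag:], speech[:len(speech) - lag])[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))

rng = np.random.default_rng(5)
fs = 100                                      # 100 Hz sampling -> 1 sample = 10 ms
envelope = rng.standard_normal(2000)          # stand-in for a slow speech envelope
delay = 8                                     # simulate an 80 ms speech-to-brain delay
brain = np.roll(envelope, delay) + 0.5 * rng.standard_normal(2000)

est = best_lag(brain, envelope, max_lag=20)   # estimated delay in samples
```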


2016 ◽  
Vol 18 (1) ◽  
pp. 55-63 ◽  

Decision making has been extensively studied in the context of economics and from a group perspective, but comparatively little is known about individual decision making. Here we discuss the different cognitive processes involved in decision making and their associated neural substrates. The putative conductors in decision making appear to be the prefrontal cortex and the striatum. Impaired decision-making skills in various clinical populations have been associated with activity in the prefrontal cortex and in the striatum. We highlight the importance of strengthening the degree of integration of both cognitive and neural substrates in order to further our understanding of decision-making skills. In terms of cognitive paradigms, there is a need to improve the ecological value of experimental tasks that assess decision making in various contexts and with rewards; this would help translate laboratory findings into real-life benefits. In terms of neural substrates, the use of neuroimaging techniques helps characterize the neural networks associated with decision making; more recently, ways to modulate brain activity, such as in the prefrontal cortex and connected regions (eg, striatum), with noninvasive brain stimulation have also shed light on the neural and cognitive substrates of decision making. Together, these cognitive and neural approaches might be useful for patients with impaired decision-making skills. The drive behind this line of work is that decision-making abilities underlie important aspects of wellness, health, security, and financial and social choices in our daily lives.


2019 ◽  
Author(s):  
Alina Leminen ◽  
Maxime Verwoert ◽  
Mona Moisala ◽  
Viljami Salmela ◽  
Patrik Wikman ◽  
...  

Abstract In real-life noisy situations, we can selectively attend to conversations in the presence of irrelevant voices, but neurocognitive mechanisms in such natural listening situations remain largely unexplored. Previous research has shown distributed activity in the mid superior temporal gyrus (STG) and sulcus (STS) while listening to speech and human voices, in the posterior STS and fusiform gyrus when combining auditory, visual and linguistic information, as well as in left-hemisphere temporal and frontal cortical areas during comprehension. In the present functional magnetic resonance imaging (fMRI) study, we investigated how selective attention modulates neural responses to naturalistic audiovisual dialogues. Our healthy adult participants (N = 15) selectively attended to video-taped dialogues between a man and a woman in the presence of irrelevant continuous speech in the background. We modulated the auditory quality of dialogues with noise vocoding and their visual quality by masking speech-related facial movements. Both increased auditory quality and increased visual quality were associated with bilateral activity enhancements in the STG/STS. In addition, decreased audiovisual stimulus quality elicited enhanced fronto-parietal activity, presumably reflecting increased attentional demands. Finally, attention to the dialogues, in relation to a control task where a fixation cross was attended and the dialogue ignored, yielded enhanced activity in the left planum polare, angular gyrus, the right temporal pole, as well as in the orbitofrontal/ventromedial prefrontal cortex and posterior cingulate gyrus. Our findings suggest that naturalistic conversations effectively engage participants and reveal brain networks related to social perception in addition to speech and semantic processing networks.
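Noise vocoding, the auditory-quality manipulation used here, replaces the fine spectral detail of speech with noise while preserving each frequency band's amplitude envelope. A crude FFT-based sketch follows; real vocoders use proper band-pass filterbanks and Hilbert envelopes, and the signal and band count below are arbitrary.

```python
import numpy as np

def noise_vocode(signal, n_bands):
    """Crude noise vocoding sketch: split the spectrum into n_bands,
    extract each band's rough amplitude envelope, and use it to
    modulate band-limited noise."""
    rng = np.random.default_rng(0)
    spectrum = np.fft.rfft(signal)
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spec = np.zeros_like(spectrum)
        band_spec[lo:hi] = spectrum[lo:hi]
        band = np.fft.irfft(band_spec, n=len(signal))
        envelope = np.abs(band)                       # rough amplitude envelope
        noise_spec = np.zeros_like(spectrum)
        noise_spec[lo:hi] = np.fft.rfft(rng.standard_normal(len(signal)))[lo:hi]
        noise = np.fft.irfft(noise_spec, n=len(signal))
        out += envelope * noise                       # envelope-modulated band noise
    return out

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)   # stand-in for a speech signal
vocoded = noise_vocode(tone, n_bands=4)
```

Fewer bands yield a coarser envelope code and hence less intelligible speech, which is how the study graded auditory quality.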



