Evidence for a reversal of the neural information flow between object perception and object reconstruction from memory

2018 ◽  
Author(s):  
Juan Linde-Domingo ◽  
Matthias S. Treder ◽  
Casper Kerren ◽  
Maria Wimber

Abstract Remembering is a reconstructive process. Surprisingly little is known about how the reconstruction of a memory unfolds in time in the human brain. We used reaction times and EEG time-series decoding to test the hypothesis that the information flow is reversed when an event is reconstructed from memory, compared to when the same event is initially being perceived. Across three experiments, we found highly consistent evidence supporting such a reversed stream. When seeing an object, low-level perceptual features were discriminated faster behaviourally, and could be decoded from brain activity earlier, than high-level conceptual features. This pattern reversed during associative memory recall, with reaction times and brain activity patterns now indicating that conceptual information was reconstructed more rapidly than perceptual details. Our findings support a neurobiologically plausible model of human memory, suggesting that memory retrieval is a hierarchical, multi-layered process that prioritizes semantically meaningful information over perceptual detail.
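The time-series decoding logic described in this abstract can be sketched as follows: train a classifier at every time point and ask when a stimulus feature first becomes decodable. Everything below is a synthetic placeholder, not the study's EEG data; the array shapes, the effect onset, and the `decoding_timecourse` helper are illustrative assumptions only.

```python
# Sketch of time-resolved decoding on synthetic "EEG" data: a label-dependent
# signal is injected from time point 20 onward, and a classifier trained at
# each time point recovers roughly when the feature becomes decodable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 32, 50
labels = rng.integers(0, 2, n_trials)            # e.g. a binary object feature

data = rng.normal(size=(n_trials, n_channels, n_times))
data[:, :, 20:] += labels[:, None, None] * 1.5   # signal appears at t = 20

def decoding_timecourse(data, labels):
    """Cross-validated decoding accuracy at every time point."""
    accs = []
    for t in range(data.shape[2]):
        clf = LogisticRegression(max_iter=1000)
        accs.append(cross_val_score(clf, data[:, :, t], labels, cv=5).mean())
    return np.array(accs)

accs = decoding_timecourse(data, labels)
onset = int(np.argmax(accs > 0.7))   # first time point reliably above chance
print(onset)
```

Comparing such decoding onsets between perception and recall conditions is, in spirit, how a reversed information flow would show up: perceptual features cross threshold first during perception, conceptual features first during retrieval.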

2016 ◽  
Author(s):  
Janice Chen ◽  
Yuan Chang Leong ◽  
Kenneth A Norman ◽  
Uri Hasson

Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
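The movie-vs.-recall correlation measure can be illustrated with a minimal sketch: correlate the spatial pattern of each movie event with the pattern of the same event during recall, and compare matching (same-event) to mismatching (different-event) correlations. The voxel patterns below are synthetic, and `reinstatement_matrix` is an illustrative helper, not the study's code.

```python
# Sketch of event-level pattern reinstatement: recall patterns are modeled as
# noisy copies of movie-event patterns, so same-event correlations should
# exceed different-event correlations.
import numpy as np

rng = np.random.default_rng(1)
n_events, n_voxels = 10, 200

movie = rng.normal(size=(n_events, n_voxels))             # pattern per event
recall = movie + rng.normal(scale=1.0, size=movie.shape)  # noisy reinstatement

def reinstatement_matrix(movie, recall):
    """Pearson correlation between every movie event and every recall event."""
    m = (movie - movie.mean(1, keepdims=True)) / movie.std(1, keepdims=True)
    r = (recall - recall.mean(1, keepdims=True)) / recall.std(1, keepdims=True)
    return m @ r.T / movie.shape[1]

corr = reinstatement_matrix(movie, recall)
matching = np.diag(corr).mean()                        # same-event pairs
mismatching = corr[~np.eye(n_events, dtype=bool)].mean()
print(matching > mismatching)  # True: events are reinstated specifically
```

The recall-vs.-recall similarity analysis in the abstract follows the same logic, but correlates one participant's recall patterns with another's.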


Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Katherine Storrs ◽  
Adrian Lloyd Williams ◽  
Carolyn McGettigan ◽  
...  

Abstract Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (deep neural network, trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lucy L. W. Owen ◽  
Thomas H. Chang ◽  
Jeremy R. Manning

Abstract Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain’s functional connectome that display homologous lower-level dynamic correlations. Here we test the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We develop an approach to estimating high-order dynamic correlations in timeseries data, and we apply the approach to neuroimaging data collected as human participants either listen to a ten-minute story or listen to a temporally scrambled version of the story. We train across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We find that classifiers trained to decode from high-order dynamic correlations yield the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yielded the best performance when they were trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.
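The recursion behind "high-order dynamic correlations" can be conveyed with a simple moving-window sketch: compute windowed correlations between features (first order), then treat those correlation timeseries as new features and correlate them again (second order). The authors' actual approach uses kernel-weighted estimates and dimensionality reduction between levels; this simplification, with made-up window sizes and data, only illustrates the idea.

```python
# Sketch of first- and second-order dynamic correlations via a sliding window.
# Each level turns a timeseries of features into a timeseries of pairwise
# correlations among those features.
import numpy as np

rng = np.random.default_rng(3)
n_times, n_features = 200, 5
data = rng.normal(size=(n_times, n_features))

def dynamic_correlations(ts, width=30):
    """Upper-triangle window correlations -> a new (shorter, wider) timeseries."""
    iu = np.triu_indices(ts.shape[1], k=1)
    out = []
    for start in range(ts.shape[0] - width + 1):
        c = np.corrcoef(ts[start:start + width].T)
        out.append(c[iu])
    return np.array(out)

first_order = dynamic_correlations(data)          # 171 windows x 10 feature pairs
second_order = dynamic_correlations(first_order)  # correlations of correlations
print(first_order.shape, second_order.shape)
```

Note how the feature dimension grows combinatorially with order (5 features yield 10 first-order pairs and 45 second-order pairs), which is why the published method reduces dimensionality at each level.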


2020 ◽  
pp. 003329411990034 ◽  
Author(s):  
Jacek Bielas ◽  
Łukasz Michalczyk

One of the well-documented behavioral changes that occur with advancing age is a decline in executive functioning, for example in attentional control. Age-related executive deficits are thought to be associated with deterioration of the frontal lobes. Neurofeedback is a training method that aims at acquiring self-control over certain brain activity patterns. It is considered an effective approach to improving attentional and self-management capabilities. However, studies evaluating the efficacy of neurofeedback training for boosting executive functioning in elderly populations are still relatively rare, and their findings remain controversial. The aim of our study was to contribute to the assessment of the efficacy of neurofeedback as a method for enhancing executive functioning in the elderly. We provided a group of seniors with beta up-training (12–22 Hz), consisting of 20 sessions (30 minutes each), on the Cz site and tested its possible beneficial influence on attentional control, assessed by means of the Stroop and Simon tasks. The analysis of the subjects’ mean reaction times during consecutive tasks in the test and the retest, after neurofeedback training, showed a significant improvement. In contrast, the difference in reaction times between the test and the retest in the control group, which had not undergone neurofeedback training, was not significant.
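The test/retest comparison described above amounts to a paired analysis of per-subject mean reaction times. A minimal sketch, with invented subject counts and reaction-time values rather than the study's data:

```python
# Sketch of a paired test/retest comparison of mean reaction times (ms).
# The effect size and sample size are placeholders, not the study's numbers.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n_subjects = 20
rt_test = rng.normal(650, 60, n_subjects)             # pre-training Stroop RTs
rt_retest = rt_test - rng.normal(40, 20, n_subjects)  # faster after training

t, p = ttest_rel(rt_test, rt_retest)
print(p < 0.05)  # the improvement is statistically reliable in this sketch
```

The study's control comparison corresponds to running the same test on a group without neurofeedback training, where the test/retest difference should not reach significance.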


2018 ◽  
Author(s):  
Mark Allen Thornton ◽  
Miriam E. Weaverdyck ◽  
Diana Tamir

Social life requires us to treat each person according to their unique disposition: habitually enthusiastic friends need occasional grounding, whereas pessimistic colleagues require cheering up. To tailor our behavior to specific individuals, we must represent their idiosyncrasies. Here we advance a hypothesis about how the brain achieves this goal: our representations of other people reflect the mental states we perceive those people to habitually experience. That is, rather than representing other people via traits, our brains represent people as the sums of their states. For example, if a perceiver observes that another person is frequently cheerful, sometimes thoughtful, and rarely grumpy, the perceiver’s representation of that person will be composed of their representations of the mental states cheerfulness, thoughtfulness, and grumpiness, combined in a corresponding ratio. We tested this hypothesis by measuring whether neural representations of people could be accurately reconstructed by summing state representations. Separate participants underwent functional neuroimaging while considering famous individuals and individual mental states. Online participants rated how often each famous person experiences each state. Results supported the summed state hypothesis: frequency-weighted sums of state-specific brain activity patterns accurately reconstructed person-specific patterns. Moreover, the summed state account outperformed the established alternative (that people represent others using trait dimensions) in explaining interpersonal similarity, as measured through neural patterns, explicit ratings, binary choices, reaction times, and the semantics of biographical text. Together these findings demonstrate that the brain represents other people as the sums of the mental states they are perceived to experience.
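The frequency-weighted summation at the heart of the hypothesis can be sketched directly: approximate each person-specific pattern as the sum of state-specific patterns weighted by that person's rated state frequencies, then score how well the reconstruction matches. All patterns and ratings below are synthetic placeholders, and the ground truth is built to satisfy the hypothesis by construction.

```python
# Sketch of summed-state reconstruction: person patterns are generated as
# frequency-weighted sums of state patterns plus noise, and the model
# recovers them by re-applying the weighted sum.
import numpy as np

rng = np.random.default_rng(5)
n_states, n_people, n_voxels = 8, 6, 150

state_patterns = rng.normal(size=(n_states, n_voxels))
freq = rng.random(size=(n_people, n_states))      # rated state frequencies
freq /= freq.sum(1, keepdims=True)                # normalize to proportions

# Ground truth: person patterns really are summed states (plus noise)
person_patterns = freq @ state_patterns + 0.3 * rng.normal(size=(n_people, n_voxels))

reconstruction = freq @ state_patterns            # the summed-state model
fits = [np.corrcoef(reconstruction[i], person_patterns[i])[0, 1]
        for i in range(n_people)]
print(min(fits) > 0.5)  # every person's pattern is recovered well
```

In the actual study, of course, the person-specific patterns come from independent fMRI scans rather than from the generative model, which is what makes the successful reconstruction informative.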


2022 ◽  
pp. 1-16
Author(s):  
Jamal A. Williams ◽  
Elizabeth H. Margulis ◽  
Samuel A. Nastase ◽  
Janice Chen ◽  
Uri Hasson ◽  
...  

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
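The comparison between model boundaries and human annotations can be sketched as a tolerance-window match. The boundary times below are invented; in the study, the model boundaries come from a hidden Markov model fit to the neural data, and significance is assessed against permuted boundaries rather than by this raw match rate.

```python
# Sketch of boundary matching: count model-derived event boundaries that fall
# within a tolerance window of a human-annotated boundary.
import numpy as np

model_bounds = np.array([12, 45, 80, 118, 150])   # seconds, hypothetical
human_bounds = np.array([11, 47, 79, 130, 151])

def match_rate(model, human, tolerance=3):
    """Fraction of model boundaries within `tolerance` s of a human boundary."""
    hits = [np.min(np.abs(human - b)) <= tolerance for b in model]
    return np.mean(hits)

print(match_rate(model_bounds, human_bounds))  # 0.8: 4 of 5 boundaries match
```

Regions whose match rate reliably exceeds the permutation-based chance level are the ones reported as carrying the high-level event structure of the music.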


2019 ◽  
Author(s):  
Lucy L. W. Owen ◽  
Thomas H. Chang ◽  
Jeremy R. Manning

Abstract Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain’s functional connectome that display homologous lower-level dynamic correlations. We tested the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We developed an approach to estimating high-order dynamic correlations in timeseries data, and we applied the approach to neuroimaging data collected as human participants either listened to a ten-minute story or listened to a temporally scrambled version of the story. We trained across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We found that classifiers trained to decode from high-order dynamic correlations yielded the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yielded the best performance when they were trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.


2018 ◽  
Author(s):  
Markus Ostarek ◽  
Jeroen van Paridon ◽  
Falk Huettig

Abstract Processing words with referents that are typically observed up or down in space (up/down words) influences the subsequent identification of visual targets in congruent locations. Eye-tracking studies have shown that up/down word comprehension shortens launch times of subsequent saccades to congruent locations and modulates concurrent saccade trajectories. This can be explained by a task-dependent interaction of semantic processing and oculomotor programs or by a direct recruitment of direction-specific processes in oculomotor and spatial systems as part of semantic processing. To test the latter possibility, we conducted a functional magnetic resonance imaging experiment and used multi-voxel pattern analysis to assess 1) whether the typical location of word referents can be decoded from the fronto-parietal spatial network and 2) whether activity patterns are shared between up/down words and up/down saccadic eye movements. In line with these hypotheses, significant decoding of up vs. down words and cross-decoding between up/down saccades and up/down words were observed in the frontal eye field region in the superior frontal sulcus and the inferior parietal lobule. Beyond these spatial attention areas, the typical location of word referents could be decoded from a set of occipital, temporal, and frontal areas, indicating that interactions between high-level regions typically implicated in lexical-semantic processing and spatial/oculomotor regions constitute the neural basis for access to spatial aspects of word meanings.
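The cross-decoding analysis can be sketched as follows: train a classifier to separate up from down saccade trials, then test it unchanged on word trials; above-chance transfer implies the two tasks share activity patterns. All data below are synthetic, and `make_trials` is a hypothetical generator that plants a shared coding axis in both conditions by construction.

```python
# Sketch of cross-decoding between conditions: a classifier trained on one
# condition (saccades) transfers to another (words) because both encode
# "up" vs. "down" along the same synthetic voxel axis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n_trials, n_voxels = 60, 40
shared_axis = rng.normal(size=n_voxels)     # direction coding "up" vs "down"

def make_trials(signal=1.0):
    labels = rng.integers(0, 2, n_trials)           # 0 = down, 1 = up
    data = rng.normal(size=(n_trials, n_voxels)) \
           + signal * (2 * labels - 1)[:, None] * shared_axis
    return data, labels

saccade_data, saccade_labels = make_trials()
word_data, word_labels = make_trials(signal=0.5)    # weaker but shared code

clf = LogisticRegression(max_iter=1000).fit(saccade_data, saccade_labels)
transfer_acc = clf.score(word_data, word_labels)
print(transfer_acc > 0.5)  # above-chance cross-decoding
```

If the two conditions used unrelated coding axes, transfer accuracy would hover at chance even when each condition was separately decodable, which is what makes cross-decoding evidence for shared representations.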

