High-level cognition during story listening is reflected in high-order dynamic correlations in neural activity patterns

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lucy L. W. Owen ◽  
Thomas H. Chang ◽  
Jeremy R. Manning

Abstract: Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain’s functional connectome that display homologous lower-level dynamic correlations. Here we test the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We develop an approach to estimating high-order dynamic correlations in timeseries data, and we apply the approach to neuroimaging data collected as human participants either listen to a ten-minute story or listen to a temporally scrambled version of the story. We train across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We find that classifiers trained to decode from high-order dynamic correlations yield the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yield the best performance when they are trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.
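As a rough illustration of the analysis pipeline described above (not the authors' own implementation), the sketch below computes sliding-window dynamic correlations from a toy timeseries and trains a cross-validated classifier to decode when each sample occurred; the window width, feature count, and labels are all illustrative assumptions.

```python
# Hedged sketch: sliding-window dynamic correlations followed by a timepoint
# classifier. Illustrative only; the paper's approach differs in detail
# (e.g. higher-order correlations, across-participant training).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
T, K = 300, 10                      # timepoints, features (e.g. network components)
data = rng.standard_normal((T, K))  # placeholder for one participant's timeseries

def dynamic_correlations(ts, width=20):
    """Upper-triangle correlations in a sliding window around each timepoint."""
    T, K = ts.shape
    iu = np.triu_indices(K, k=1)
    out = np.zeros((T, len(iu[0])))
    for t in range(T):
        lo, hi = max(0, t - width // 2), min(T, t + width // 2 + 1)
        out[t] = np.corrcoef(ts[lo:hi].T)[iu]
    return out

features = dynamic_correlations(data)   # first-order dynamic correlations
labels = np.arange(T) // 30             # coarse "when in the story" bins (toy labels)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, features, labels, cv=5).mean())
```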

2019 ◽  
Author(s):  
Lucy L. W. Owen ◽  
Thomas H. Chang ◽  
Jeremy R. Manning

Abstract: Our thoughts arise from coordinated patterns of interactions between brain structures that change with our ongoing experiences. High-order dynamic correlations in neural activity patterns reflect different subgraphs of the brain’s functional connectome that display homologous lower-level dynamic correlations. We tested the hypothesis that high-level cognition is reflected in high-order dynamic correlations in brain activity patterns. We developed an approach to estimating high-order dynamic correlations in timeseries data, and we applied the approach to neuroimaging data collected as human participants either listened to a ten-minute story or listened to a temporally scrambled version of the story. We trained across-participant pattern classifiers to decode (in held-out data) when in the session each neural activity snapshot was collected. We found that classifiers trained to decode from high-order dynamic correlations yielded the best performance on data collected as participants listened to the (unscrambled) story. By contrast, classifiers trained to decode data from scrambled versions of the story yielded the best performance when they were trained using first-order dynamic correlations or non-correlational activity patterns. We suggest that as our thoughts become more complex, they are reflected in higher-order patterns of dynamic network interactions throughout the brain.


2017 ◽  
Vol 24 (3) ◽  
pp. 277-293 ◽  
Author(s):  
Selen Atasoy ◽  
Gustavo Deco ◽  
Morten L. Kringelbach ◽  
Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillations covering a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas, forming structured correlation patterns known as the resting state networks, although the brain is never truly at “rest.” Here, we introduce the concept of harmonic brain modes—fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity; that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on and constrained by the particular structure of the brain. Hence, this definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics to the brain's underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, the psychedelic state). Notably, when decoded as activation of connectome harmonics, the spatial and temporal characteristics of neural activity naturally emerge from the interplay between excitation and inhibition, and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the introduced framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics), but also enables a new dimension of tools for understanding fundamental principles underlying brain dynamics in different states of consciousness.
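A minimal sketch of the core construction, assuming a toy connectivity matrix in place of tractography-derived structural connectivity: connectome harmonics are taken as eigenmodes of the graph Laplacian, and a snapshot of activity is decomposed into their activations.

```python
# Hedged sketch: "connectome harmonics" as eigenmodes of the graph Laplacian
# of a structural connectivity matrix, with activity projected onto them.
# A random symmetric matrix stands in for real structural connectivity.
import numpy as np

rng = np.random.default_rng(1)
N = 200                                   # number of cortical nodes (toy)
A = rng.random((N, N))
A = (A + A.T) / 2                         # symmetric "structural connectivity"
np.fill_diagonal(A, 0.0)

D = np.diag(A.sum(axis=1))
L = D - A                                 # (unnormalized) graph Laplacian
eigvals, harmonics = np.linalg.eigh(L)    # columns = connectome harmonics,
                                          # ordered from coarse to fine spatial scale

activity = rng.standard_normal(N)         # a snapshot of neural activity
coefficients = harmonics.T @ activity     # activation of each harmonic
reconstruction = harmonics @ coefficients # exact, since the harmonics form a basis
assert np.allclose(reconstruction, activity)
```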


2019 ◽  
Author(s):  
Fabio Boi ◽  
Nikolas Perentos ◽  
Aziliz Lecomte ◽  
Gerrit Schwesig ◽  
Stefano Zordan ◽  
...  

Abstract: The advent of implantable active dense CMOS neural probes has opened a new era for electrophysiology in neuroscience. These single-shank electrode arrays, together with emerging tailored analysis tools, give neuroscientists for the first time the means to spatiotemporally resolve the activity of hundreds of single neurons across multiple vertically aligned brain structures. However, while these unprecedented experimental capabilities for studying columnar brain properties are a big leap forward in neuroscience, electrodes also need to be distributed horizontally. Closely spacing multiple isolated single-shank probes in a consistent, well-defined geometrical arrangement is methodologically and economically impractical. Here, we present the first high-density multi-shank CMOS neural probe integrating thousands of closely spaced, simultaneously recording microelectrodes to map neural activity across a 2D lattice. Taking advantage of the high modularity of our electrode-pixel-based SiNAPS technology, we realized a four-shank active dense probe with 256 electrode-pixels per shank at a pitch of 28 µm, for a total of 1024 simultaneously recording channels. The achieved performance allows full-band, whole-array read-outs at 25 kHz per channel, with a measured input-referred noise in the action-potential band (300-7000 Hz) of 6.5 ± 2.1 µV RMS and a power consumption of <6 µW per electrode-pixel. Preliminary recordings in awake behaving mice demonstrated the capability of multi-shank SiNAPS probes to simultaneously record neural activity (both LFPs and spikes) from a brain area >6 mm2, spanning cortical, hippocampal, and thalamic regions. The high-density 2D array makes it possible to combine large-population unit recordings across distributed networks with precise intra- and inter-laminar/nuclear mapping of oscillatory dynamics. These results pave the way for a new generation of high-density, extremely compact multi-shank CMOS probes with tunable layouts for electrophysiological mapping of brain activity at single-neuron resolution.
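For scale, a hedged back-of-the-envelope calculation from the stated specifications (1024 channels read out at 25 kHz each); the 12-bit sample width is an assumption for illustration only, not a figure from the abstract.

```python
# Hedged back-of-the-envelope from the stated specs: 4 shanks x 256
# electrode-pixels read out at 25 kHz per channel. The 12-bit sample
# width is an assumption; the abstract does not state it.
channels = 4 * 256                      # 1024 simultaneously recording channels
sample_rate_hz = 25_000                 # full-band read-out per channel
bits_per_sample = 12                    # assumed, not from the abstract

samples_per_second = channels * sample_rate_hz
data_rate_mbit_s = samples_per_second * bits_per_sample / 1e6
print(f"{samples_per_second:,} samples/s, ~{data_rate_mbit_s:.0f} Mbit/s")
# -> 25,600,000 samples/s, ~307 Mbit/s
```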


2016 ◽  
Author(s):  
Janice Chen ◽  
Yuan Chang Leong ◽  
Kenneth A Norman ◽  
Uri Hasson

Our daily lives revolve around sharing experiences and memories with others. When different people recount the same events, how similar are their underlying neural representations? In this study, participants viewed a fifty-minute audio-visual movie, then verbally described the events while undergoing functional MRI. These descriptions were completely unguided and highly detailed, lasting for up to forty minutes. As each person spoke, event-specific spatial patterns were reinstated (movie-vs.-recall correlation) in default network, medial temporal, and high-level visual areas; moreover, individual event patterns were highly discriminable and similar between people during recollection (recall-vs.-recall similarity), suggesting the existence of spatially organized memory representations. In posterior medial cortex, medial prefrontal cortex, and angular gyrus, activity patterns during recall were more similar between people than to patterns elicited by the movie, indicating systematic reshaping of percept into memory across individuals. These results reveal striking similarity in how neural activity underlying real-life memories is organized and transformed in the brains of different people as they speak spontaneously about past events.
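A minimal sketch of the two similarity measures named above (movie-vs.-recall and recall-vs.-recall pattern correlations), using random toy arrays in place of event-wise fMRI patterns.

```python
# Hedged sketch of the pattern-similarity logic: correlate event-specific
# spatial patterns between movie viewing and spoken recall, and between the
# recalls of different people. Toy arrays stand in for real fMRI patterns.
import numpy as np

rng = np.random.default_rng(2)
n_events, n_voxels = 50, 500
movie = rng.standard_normal((n_events, n_voxels))              # patterns during viewing
recall_a = movie + rng.standard_normal((n_events, n_voxels))   # person A's recall (toy)
recall_b = movie + rng.standard_normal((n_events, n_voxels))   # person B's recall (toy)

def pattern_corr(x, y):
    """Pearson correlation between two spatial patterns."""
    return np.corrcoef(x, y)[0, 1]

movie_vs_recall = np.mean([pattern_corr(movie[e], recall_a[e]) for e in range(n_events)])
recall_vs_recall = np.mean([pattern_corr(recall_a[e], recall_b[e]) for e in range(n_events)])
print(movie_vs_recall, recall_vs_recall)
```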


Author(s):  
Maria Tsantani ◽  
Nikolaus Kriegeskorte ◽  
Katherine Storrs ◽  
Adrian Lloyd Williams ◽  
Carolyn McGettigan ◽  
...  

Abstract: Faces of different people elicit distinct functional MRI (fMRI) patterns in several face-selective brain regions. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: the fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). We used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities and compared their representational distances in each region with models of the differences between identities. Models included low-level to high-level image-computable properties and complex human-rated properties. We found that the FFA representation reflected perceived face similarity, social traits, and gender, and was well accounted for by the OpenFace model (a deep neural network trained to cluster faces by identity). The OFA encoded low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
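A minimal sketch of the representational similarity analysis logic, with toy data: a neural representational dissimilarity matrix (RDM) over identities is correlated with candidate model RDMs. The model names here are placeholders for illustration, not the study's actual models.

```python
# Hedged sketch of RSA: build a neural RDM over face identities and correlate
# it with model RDMs. Random toy data replaces real fMRI patterns and models.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_identities, n_voxels = 12, 300
patterns = rng.standard_normal((n_identities, n_voxels))    # one pattern per identity

neural_rdm = pdist(patterns, metric="correlation")          # condensed neural RDM
model_rdms = {
    "pixel_dissimilarity": rng.random(len(neural_rdm)),     # stand-ins for real models
    "perceived_similarity": rng.random(len(neural_rdm)),
}
for name, model in model_rdms.items():
    rho, _ = spearmanr(neural_rdm, model)
    print(f"{name}: Spearman rho = {rho:.2f}")
```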


2018 ◽  
Author(s):  
Juan Linde-Domingo ◽  
Matthias S. Treder ◽  
Casper Kerren ◽  
Maria Wimber

Abstract: Remembering is a reconstructive process. Surprisingly little is known about how the reconstruction of a memory unfolds in time in the human brain. We used reaction times and EEG time-series decoding to test the hypothesis that the information flow is reversed when an event is reconstructed from memory, compared to when the same event is initially being perceived. Across three experiments, we found highly consistent evidence supporting such a reversed stream. When seeing an object, low-level perceptual features were discriminated faster behaviourally, and could be decoded from brain activity earlier, than high-level conceptual features. This pattern reversed during associative memory recall, with reaction times and brain activity patterns now indicating that conceptual information was reconstructed more rapidly than perceptual details. Our findings support a neurobiologically plausible model of human memory, suggesting that memory retrieval is a hierarchical, multi-layered process that prioritizes semantically meaningful information over perceptual detail.
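A hedged sketch of time-resolved decoding in the spirit described above: a classifier is fit at every timepoint across trials, and the onset of above-chance accuracy is compared between a perceptual and a conceptual label. All data, labels, and thresholds are illustrative assumptions.

```python
# Hedged sketch of time-resolved EEG decoding with toy data: per-timepoint
# cross-validated classification of a perceptual vs a conceptual feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 200, 64, 60
eeg = rng.standard_normal((n_trials, n_channels, n_times))
perceptual = rng.integers(0, 2, n_trials)    # e.g. photograph vs line drawing (toy)
conceptual = rng.integers(0, 2, n_trials)    # e.g. animate vs inanimate (toy)

def decoding_timecourse(labels):
    clf = LogisticRegression(max_iter=1000)
    return np.array([cross_val_score(clf, eeg[:, :, t], labels, cv=5).mean()
                     for t in range(n_times)])

acc_perceptual = decoding_timecourse(perceptual)
acc_conceptual = decoding_timecourse(conceptual)
# Onset = first timepoint exceeding an (arbitrary) accuracy threshold.
print(np.argmax(acc_perceptual > 0.55), np.argmax(acc_conceptual > 0.55))
```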


2008 ◽  
Vol 20 (3) ◽  
pp. 389-399 ◽  
Author(s):  
Philipp Sterzer ◽  
Geraint Rees

When the same visual input has conflicting interpretations, conscious perception can alternate spontaneously between the competing percepts. Surprisingly, such bistable perception can be stabilized by intermittent stimulus removal, suggesting the existence of perceptual “memory” across interruptions in stimulation. The neural basis of such a process remains unknown. Here, we studied binocular rivalry, one type of bistable perception, in two linked experiments in human participants. First, we showed, in a behavioral experiment using binocular rivalry between face and grating stimuli, that the stabilizing effect of stimulus removal was specific to perceptual alternations evoked by rivalry and did not occur following physical alternations in the absence of rivalry. We then used functional magnetic resonance imaging to measure brain activity during a variable delay period of stimulus removal. Activity in the fusiform face area during the delay period following removal of rivalrous stimuli was greater following face than grating perception, whereas such a difference was absent during removal of non-rivalrous stimuli. Moreover, activity in fronto-parietal regions during the delay period correlated with the degree to which individual participants tended to experience percept stabilization. Our findings suggest that percept-related activity in specialized extrastriate visual areas helps to stabilize perception during perceptual conflict, and that high-level mechanisms may determine the influence of such signals on conscious perception.


2021 ◽  
Author(s):  
Ravi D. Mill ◽  
Julia L. Hamilton ◽  
Emily C. Winfield ◽  
Nicole Lalta ◽  
Richard H. Chen ◽  
...  

Abstract: How cognitive task information emerges from brain activity is a central question in neuroscience. We identified the spatiotemporal emergence of task information in the human brain using individualized source electroencephalography and dynamic multivariate pattern analysis. We then substantially extended recently developed brain activity flow models to predict the future emergence of task information dynamics. The model simulated the flow of task-evoked activity over causally interpretable resting-state functional connections (dynamic, lagged, direct and directional) to accurately predict response information dynamics underlying cognitive task behavior. Predicting event-related spatiotemporal activity patterns and fine-grained representational geometry confirmed the model’s faithfulness to how the brain veridically represents response information. Simulated network “lesioning” revealed cognitive control networks (CCNs) as the dominant causal drivers of response information flow. These results demonstrate the efficacy of dynamic activity flow models in predicting the emergence of task information, thereby revealing a mechanistic role for CCNs in producing behavior.
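A minimal sketch of the static form of activity flow mapping that this line of work builds on (the paper's models add dynamic, lagged, and directional connectivity): each region's task activity is predicted from the other regions' activity weighted by resting-state connectivity.

```python
# Hedged sketch of basic activity flow mapping: predicted activity of each
# region = sum over other regions of (their activity x resting-state FC).
# With random toy data the prediction-actual correlation is near zero; the
# point is the computation, not the value.
import numpy as np

rng = np.random.default_rng(5)
n_regions = 100
rest_fc = rng.standard_normal((n_regions, n_regions))
rest_fc = (rest_fc + rest_fc.T) / 2
np.fill_diagonal(rest_fc, 0.0)                        # exclude self-connections
task_activity = rng.standard_normal(n_regions)

predicted = rest_fc @ task_activity                   # activity flow prediction
accuracy = np.corrcoef(predicted, task_activity)[0, 1]
print(f"prediction-actual correlation: {accuracy:.2f}")
```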


2021 ◽  
Author(s):  
Isaac David ◽  
Fernando A Barrios

It is now common to approach questions about information representation in the brain using multivariate statistics and machine learning methods. What is less recognized is that, in the process, the capacity for data-driven discovery and functional localization has diminished, because multivariate pattern analysis (MVPA) studies tend to restrict themselves to regions of interest and severely filtered data, and sound parameter-mapping inference is lacking. Here, reproducible evidence is presented that a high-dimensional, brain-wide multivariate linear method can better detect and characterize the occurrence of visual and socio-affective states in a task-oriented functional magnetic resonance imaging (fMRI) experiment than the classical localizationist correlation analysis. Classification models for a group of human participants and existing rigorous cluster-inference methods are used to construct group anatomical-statistical parametric maps, which correspond to the most likely neural correlates of each psychological state. This led to the discovery of a multidimensional pattern of brain activity that reliably encodes the perception of happiness in the visual cortex, cerebellum, and some limbic areas. We failed to find similar evidence for sadness and anger. The anatomical consistency of discriminating features across subjects and contrasts, despite the high number of dimensions, together with agreement with the wider literature, suggests that MVPA is a viable tool for full-brain functional neuroanatomical mapping and not just for prediction of psychological states. The present work paves the way for future functional brain imaging studies to provide a complementary picture of brain functions (such as emotion) according to their macroscale dynamics.
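A hedged sketch of the brain-wide multivariate approach, with toy data standing in for whole-brain voxel patterns; the permutation testing and cluster-level inference that the study relies on are omitted here.

```python
# Hedged sketch: fit a linear classifier on whole-brain voxel patterns and
# inspect the voxel-wise weight map as a candidate localization map.
# Toy random data; real analyses add rigorous cluster-level inference.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(6)
n_trials, n_voxels = 120, 5000                  # toy stand-in for whole-brain data
X = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)                # e.g. happy vs neutral blocks (toy)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
weight_map = clf.coef_.ravel()                  # one weight per voxel
top_voxels = np.argsort(np.abs(weight_map))[-10:]
print("most discriminative voxels:", top_voxels)
```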


2021 ◽  
Author(s):  
Yu Zhang ◽  
Nicolas Farrugia ◽  
Alain Dagher ◽  
Pierre Bellec

Brain decoding aims to infer human cognition from recordings of neural activity using modern neuroimaging techniques. Studies so far have often concentrated on a limited number of cognitive states and aimed to classify patterns of brain activity within a local area. This procedure has demonstrated great success in classifying motor and sensory processes but limited power for higher cognitive functions. In this work, we investigate a high-order graph convolution model, named ChebNet, to model the segregation and integration organizational principles of neural dynamics and to decode brain activity across a large number of cognitive domains. By leveraging prior knowledge of brain organization through a graph-based model, ChebNet graph convolution learns a new representation of task-evoked neural activity that is highly predictive of cognitive states and task performance. Our results reveal that between-network integration significantly boosts the decoding of high-order cognition such as visual working-memory tasks, while the segregation of localized brain activity is sufficient to classify motor and sensory processes. Using twin and family data from the Human Connectome Project (n = 1,070), we provide evidence that individual variability in the graph representations of working-memory tasks is under genetic control and strongly associated with participants' in-scanner behaviors. These findings uncover the essential role of functional integration in brain decoding, especially when decoding high-order cognition beyond sensory and motor functions.
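A minimal numpy sketch of a ChebNet-style graph convolution (Chebyshev polynomial filters of the scaled graph Laplacian), using a random toy graph and untrained weights; it illustrates the operation the abstract names, not the authors' trained model.

```python
# Hedged sketch of a ChebNet-style graph convolution: Chebyshev polynomials
# of the scaled graph Laplacian filter node signals at increasing
# neighbourhood orders. Toy graph, random weights, no training loop.
import numpy as np

rng = np.random.default_rng(7)
n_nodes, in_feats, out_feats, K = 50, 8, 16, 3   # K = polynomial order
A = rng.random((n_nodes, n_nodes)); A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

D = np.diag(A.sum(axis=1))
L = D - A
lam_max = np.linalg.eigvalsh(L).max()
L_scaled = 2.0 * L / lam_max - np.eye(n_nodes)    # rescale spectrum to [-1, 1]

X = rng.standard_normal((n_nodes, in_feats))      # node signals (e.g. parcel activity)
theta = rng.standard_normal((K, in_feats, out_feats))

# Chebyshev recursion: T_0 = X, T_1 = L~ X, T_k = 2 L~ T_{k-1} - T_{k-2}
Tx = [X, L_scaled @ X]
for k in range(2, K):
    Tx.append(2.0 * L_scaled @ Tx[-1] - Tx[-2])
out = sum(Tx[k] @ theta[k] for k in range(K))     # (n_nodes, out_feats)
print(out.shape)
```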

