Scale-free behavioral dynamics directly linked with scale-free cortical dynamics

2021
Author(s):  
Sabrina A Jones ◽  
Jacob H Barfield ◽  
Woodrow L Shew

Naturally occurring body movements and collective neural activity both exhibit complex dynamics, often with scale-free, fractal spatiotemporal structure thought to confer functional benefits to the organism. Despite their similarities, scale-free brain activity and scale-free behavior have been studied separately, without a unified explanation. Here we show that the scale-free dynamics of behavior and of certain subsets of cortical neurons are related one-to-one. Surprisingly, the scale-free neural subsets exhibit stochastic winner-take-all competition with other neural subsets, inconsistent with the prevailing theory of scale-free neural systems. We develop a computational model that accounts for known cell-type-specific circuit structure and explains our findings. Our results establish neural underpinnings of scale-free behavior and a clear behavioral relevance of scale-free neural activity, which was previously thought to represent background noise in the cerebral cortex.
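
For readers unfamiliar with the term, "scale-free" here refers to power-law-distributed quantities, such as the sizes of neural or behavioral events. A minimal illustrative sketch of estimating a power-law exponent from event sizes, using the standard continuous maximum-likelihood estimator rather than the authors' actual analysis pipeline, is shown below; the data and cutoff are synthetic assumptions.

```python
# Minimal sketch: maximum-likelihood estimate of a power-law exponent for
# event sizes (e.g., neural or behavioral "avalanche" sizes), using the
# standard continuous MLE (Clauset et al., 2009). Illustrative only;
# not the authors' analysis pipeline.
import numpy as np

def powerlaw_mle_alpha(sizes, s_min=1.0):
    """Estimate the exponent alpha of p(s) ~ s**(-alpha) for sizes >= s_min."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    n = s.size
    alpha = 1.0 + n / np.sum(np.log(s / s_min))
    se = (alpha - 1.0) / np.sqrt(n)          # approximate standard error
    return alpha, se

# Example with synthetic power-law data (true alpha = 1.5):
rng = np.random.default_rng(0)
sizes = (1.0 - rng.random(10_000)) ** (-1.0 / 0.5)   # inverse-CDF sampling
print(powerlaw_mle_alpha(sizes))
```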

2011
Vol 23 (7)
pp. 1697-1709
Author(s):  
Todd M. Gureckis ◽  
Thomas W. James ◽  
Robert M. Nosofsky

Recent fMRI studies have found that distinct neural systems may mediate perceptual category learning under implicit and explicit learning conditions. In these previous studies, however, different stimulus-encoding processes may have been associated with implicit versus explicit learning. The present design was aimed at decoupling the influence of these factors on the recruitment of alternate neural systems. Consistent with previous reports, following incidental learning in a dot-pattern classification task, participants showed decreased neural activity in occipital visual cortex (extrastriate region V3, BA 19) in response to novel exemplars of a studied category compared with members of a foil category; this decrease was absent following explicit learning. Crucially, however, our results show that this pattern was primarily modulated by aspects of the stimulus-encoding instructions provided at the time of study. In particular, when participants in an implicit learning condition were encouraged to evaluate the overall shape and configuration of the stimuli during study, we failed to find the pattern of brain activity that has been taken to be a signature of implicit learning. This suggests that activity in this area does not uniquely reflect implicit memory for perceptual categories, but may instead reflect aspects of processing or perceptual encoding strategies engaged at study.
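
The dot-pattern classification task mentioned above is typically a prototype-distortion paradigm: category exemplars are created by jittering the dots of a random prototype. The sketch below is a generic, hypothetical version of such stimulus generation; the dot counts and jitter levels are assumptions, not the study's parameters.

```python
# Hedged sketch of a prototype-distortion dot-pattern paradigm of the kind
# referenced above. Parameters (9 dots, jitter scales) are illustrative
# assumptions, not taken from the study.
import numpy as np

rng = np.random.default_rng(1)

def make_prototype(n_dots=9, extent=50.0):
    """Random prototype: dot coordinates on a square canvas."""
    return rng.uniform(-extent, extent, size=(n_dots, 2))

def distort(prototype, level=5.0):
    """Category exemplar: Gaussian jitter applied to each prototype dot."""
    return prototype + rng.normal(0.0, level, size=prototype.shape)

prototype_a = make_prototype()
study_items = [distort(prototype_a, level=5.0) for _ in range(40)]       # studied category
novel_items = [distort(prototype_a, level=7.5) for _ in range(20)]       # novel exemplars
foil_items = [distort(make_prototype(), level=5.0) for _ in range(20)]   # foil category
```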


Author(s):  
Shihui Han

Chapter 3 presents a theoretical framework for understanding the relationship between sociocultural experience and cognition, and for explaining the differences in cognition and behavior between East Asian and Western cultures. It then reviews cultural neuroscience findings that uncover common and distinct neural underpinnings of cognitive processes in individuals from Western and East Asian cultures. Cross-cultural brain imaging studies have shown differences between East Asian and Western cultures in brain activity involved in perception, attention, memory, causality judgment, mathematical operations, semantic relationships, and decision making. These findings reveal neural bases, distributed across multiple neural systems, for cultural preferences for context-independent or context-dependent cognitive strategies.


2007
Vol 19 (11)
pp. 1776-1789
Author(s):  
Leun J. Otten ◽  
Josefin Sveen ◽  
Angela H. Quayle

Research into the neural underpinnings of memory formation has focused on the encoding of familiar verbal information. Here, we address how the brain supports the encoding of novel information that does not have meaning. Electrical brain activity was recorded from the scalps of healthy young adults while they performed an incidental encoding task (syllable judgments) on separate series of words and “nonwords” (nonsense letter strings that are orthographically legal and pronounceable). Memory for the items was then probed with a recognition memory test. For words as well as nonwords, event-related potentials differed depending on whether an item would subsequently be remembered or forgotten. However, the polarity and timing of the effect varied across item type. For words, subsequently remembered items showed the usually observed positive-going, frontally distributed modulation from around 600 msec after word onset. For nonwords, by contrast, a negative-going, spatially widespread modulation predicted encoding success from 1000 msec onward. Nonwords also showed a modulation shortly after item onset. These findings imply that the brain supports the encoding of familiar and unfamiliar letter strings in qualitatively different ways, including the engagement of distinct neural activity at different points in time. The processing of semantic attributes plays an important role in the encoding of words and the associated positive frontal modulation.
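
The subsequent-memory (Dm) contrast described here amounts to averaging study-phase epochs according to later memory outcome and taking a difference wave. A minimal sketch, with assumed array shapes and variable names (not the authors' analysis code), follows.

```python
# Minimal sketch of a subsequent-memory ("Dm") contrast: average study-phase
# epochs by whether the item was later remembered, then take the difference
# wave. Array shapes and names are assumptions.
import numpy as np

def subsequent_memory_effect(epochs, remembered):
    """
    epochs:      array (n_trials, n_channels, n_times), baseline-corrected ERPs
    remembered:  boolean array (n_trials,), True if later recognized
    Returns the remembered-minus-forgotten difference wave (n_channels, n_times).
    """
    epochs = np.asarray(epochs, dtype=float)
    remembered = np.asarray(remembered, dtype=bool)
    erp_rem = epochs[remembered].mean(axis=0)
    erp_forg = epochs[~remembered].mean(axis=0)
    return erp_rem - erp_forg

# Hypothetical usage, one contrast per item type:
# dm_words = subsequent_memory_effect(word_epochs, word_remembered)
# dm_nonwords = subsequent_memory_effect(nonword_epochs, nonword_remembered)
```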


2020
Vol 117 (36)
pp. 22494-22505
Author(s):  
David J. Heeger ◽  
Klavdia O. Zemlianova

The normalization model has been applied to explain neural activity in diverse neural systems, including primary visual cortex (V1). The model’s defining characteristic is that the response of each neuron is divided by a factor that includes a weighted sum of activity of a pool of neurons. Despite the success of the normalization model, three issues remain unresolved. 1) Experimental evidence supports the hypothesis that normalization in V1 operates via recurrent amplification, i.e., amplifying weak inputs more than strong inputs. It is unknown how normalization arises from recurrent amplification. 2) Experiments have demonstrated that normalization is weighted such that each weight specifies how one neuron contributes to another’s normalization pool. It is unknown how weighted normalization arises from a recurrent circuit. 3) Neural activity in V1 exhibits complex dynamics, including gamma oscillations, linked to normalization. It is unknown how these dynamics emerge from normalization. Here, a family of recurrent circuit models is reported, each of which comprises coupled neural integrators that implement normalization via recurrent amplification with arbitrary normalization weights; some of these models can recapitulate key experimental observations of the dynamics of neural activity in V1.
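
For reference, the steady-state computation the abstract describes is the standard divisive normalization equation with a weighted pool. The sketch below is a generic illustration with assumed parameter values, not the paper's specific model.

```python
# Hedged sketch of weighted divisive normalization: each neuron's drive is
# divided by a semisaturation constant plus a weighted sum of pooled activity.
# Parameter names and values are generic, not the paper's notation.
import numpy as np

def normalization(drive, weights, sigma=1.0, n=2.0):
    """
    drive:   array (n_neurons,), feedforward input drive to each neuron
    weights: array (n_neurons, n_neurons); weights[i, j] is the contribution
             of neuron j to neuron i's normalization pool
    Returns R_i = drive_i**n / (sigma**n + sum_j w_ij * drive_j**n).
    """
    drive_n = np.asarray(drive, dtype=float) ** n
    pool = weights @ drive_n
    return drive_n / (sigma ** n + pool)

stim = np.array([10.0, 5.0, 1.0])
w = np.ones((3, 3)) / 3.0          # uniform normalization pool
print(normalization(stim, w))
```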


Author(s):  
David J. Heeger ◽  
Klavdia O. Zemlianova

The normalization model has been applied to explain neural activity in diverse neural systems, including primary visual cortex (V1). The model’s defining characteristic is that the response of each neuron is divided by a factor that includes a weighted sum of activity of a pool of neurons. Despite the success of the normalization model, three issues remain unresolved. 1) Experimental evidence supports the hypothesis that normalization in V1 operates via recurrent amplification, i.e., amplifying weak inputs more than strong inputs. It is unknown how normalization arises from recurrent amplification. 2) Experiments have demonstrated that normalization is weighted such that each weight specifies how one neuron contributes to another’s normalization pool. It is unknown how weighted normalization arises from a recurrent circuit. 3) Neural activity in V1 exhibits complex dynamics, including gamma oscillations, linked to normalization. It is unknown how these dynamics emerge from normalization. Here, a new family of recurrent circuit models is reported, each of which comprises coupled neural integrators that implement normalization via recurrent amplification with arbitrary normalization weights; some of these models can recapitulate key experimental observations of the dynamics of neural activity in V1.

Significance Statement: A family of recurrent circuit models is proposed to explain the dynamics of neural activity in primary visual cortex (V1). Each of the models in this family exhibits steady-state output responses that are already known to fit a wide range of experimental data from diverse neural systems. These models can recapitulate the complex dynamics of V1 activity, including oscillations (so-called gamma oscillations, ∼30-80 Hz). This theoretical framework may also be used to explain key aspects of working memory and motor control. Consequently, the same circuit architecture is applicable to a variety of neural systems, and V1 can be used as a model system for understanding the neural computations in many brain areas.
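
To illustrate how a normalized steady state can emerge from recurrent dynamics (issue 1 above), the toy simulation below integrates leaky units whose input is divided by a weighted pool of the population activity; at the fixed point the responses satisfy a simplified (exponent-free) version of the normalization equation shown earlier. This is an illustrative construction, not the authors' circuit of coupled neural integrators.

```python
# Toy illustration (not the authors' circuit) of normalization emerging from
# recurrent dynamics: leaky units whose drive is divisively controlled by a
# weighted pool of the population activity converge to a normalized fixed point.
import numpy as np

def simulate_recurrent_normalization(drive, weights, sigma=1.0,
                                     tau=0.01, dt=0.001, n_steps=2000):
    a = np.zeros_like(drive, dtype=float)
    for _ in range(n_steps):
        pool = weights @ a                       # weighted normalization pool
        da = (-a + drive / (sigma + pool)) * (dt / tau)
        a = a + da                               # Euler integration
    return a                                     # approximate fixed point

drive = np.array([10.0, 5.0, 1.0])
w = np.ones((3, 3)) / 3.0
print(simulate_recurrent_normalization(drive, w))
```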


2021
Vol 4 (1)
Author(s):  
Soren Wainio-Theberge ◽  
Annemarie Wolff ◽  
Georg Northoff

Spontaneous neural activity fluctuations have been shown to influence trial-by-trial variation in perceptual, cognitive, and behavioral outcomes. However, the complex electrophysiological mechanisms by which these fluctuations shape stimulus-evoked neural activity remain largely unexplored. Employing a large-scale magnetoencephalographic dataset and an electroencephalographic replication dataset, we investigate the relationship between spontaneous and evoked neural activity across a range of electrophysiological variables. We observe that for high-frequency activity, high pre-stimulus amplitudes lead to greater evoked desynchronization, while for low frequencies, high pre-stimulus amplitudes induce larger degrees of event-related synchronization. We further decompose electrophysiological power into oscillatory and scale-free components, demonstrating different patterns of spontaneous-evoked correlation for each component. Finally, we find correlations between spontaneous and evoked time-domain electrophysiological signals. Overall, we demonstrate that multiple electrophysiological variables exhibit distinct relationships between their spontaneous and evoked activity, a result which carries implications for experimental design and analysis in non-invasive electrophysiology.
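
The oscillatory versus scale-free decomposition referred to above can be approximated by fitting the aperiodic (1/f-like) trend of the power spectrum in log-log space and treating the residual peaks as the oscillatory part. The sketch below illustrates this idea with synthetic data; it is not the authors' exact method.

```python
# Hedged sketch: split a power spectrum into a scale-free (1/f-like) component
# and an oscillatory residual via a linear fit in log-log space.
import numpy as np

def split_scale_free(freqs, power):
    """
    freqs: array (n_freqs,) in Hz (must be > 0)
    power: array (n_freqs,) spectral power at those frequencies
    Returns (aperiodic_fit, oscillatory_residual) in the original power units.
    """
    log_f = np.log10(np.asarray(freqs, dtype=float))
    log_p = np.log10(np.asarray(power, dtype=float))
    slope, intercept = np.polyfit(log_f, log_p, 1)     # log-log linear fit
    aperiodic = 10.0 ** (intercept + slope * log_f)    # scale-free component
    oscillatory = power - aperiodic                    # peaks above the 1/f trend
    return aperiodic, oscillatory

freqs = np.arange(1.0, 81.0)
power = 50.0 / freqs + 5.0 * np.exp(-0.5 * ((freqs - 10.0) / 1.5) ** 2)  # 1/f + alpha peak
print(split_scale_free(freqs, power)[1].max())
```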


2004
Vol 16 (9)
pp. 1669-1679
Author(s):  
Emily D. Grossman ◽  
Randolph Blake ◽  
Chai-Youn Kim

Individuals improve with practice on a variety of perceptual tasks, presumably reflecting plasticity in underlying neural mechanisms. We trained observers to discriminate biological motion from scrambled (nonbiological) motion and examined whether the resulting improvement in perceptual performance was accompanied by changes in activation within the posterior superior temporal sulcus and the fusiform “face area,” brain areas involved in perception of biological events. With daily practice, initially naive observers became more proficient at discriminating biological from scrambled animations embedded in an array of dynamic “noise” dots, with the extent of improvement varying among observers. Learning generalized to animations never seen before, indicating that observers had not simply memorized specific exemplars. In the same observers, neural activity prior to and following training was measured using functional magnetic resonance imaging. Neural activity within the posterior superior temporal sulcus and the fusiform “face area” reflected the participants' learning: BOLD signals were significantly larger after training in response both to animations experienced during training and to novel animations. The degree of learning was positively correlated with the amplitude changes in BOLD signals.


2017
Vol 24 (3)
pp. 277-293
Author(s):  
Selen Atasoy ◽  
Gustavo Deco ◽  
Morten L. Kringelbach ◽  
Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillations covering a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas, forming structured correlation patterns known as the resting-state networks, although the brain is never truly at “rest.” Here, we introduce the concept of harmonic brain modes—fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity; that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on and constrained by the particular structure of the brain. Hence, this particular definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics and its underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, and psychedelic states). Notably, when decoded as activation of connectome harmonics, the spatial and temporal characteristics of neural activity naturally emerge from the interplay between excitation and inhibition, and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the introduced framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics) but also provides a new set of tools for understanding the fundamental principles underlying brain dynamics in different states of consciousness.
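
Connectome harmonics are the eigenmodes of the graph Laplacian of the structural connectivity matrix; low-eigenvalue modes correspond to coarse, slowly varying spatial patterns. A minimal sketch of computing such modes, using a random symmetric matrix purely as a stand-in for real connectivity data, follows.

```python
# Minimal sketch of "connectome harmonics" as eigenmodes of the graph Laplacian
# of a structural connectivity matrix. The connectivity matrix here is random
# and symmetric purely for illustration.
import numpy as np

def connectome_harmonics(connectivity):
    """
    connectivity: symmetric (n_nodes, n_nodes) structural connectivity matrix.
    Returns (eigenvalues, eigenvectors); columns of eigenvectors, ordered by
    increasing eigenvalue, are the harmonic modes.
    """
    A = np.asarray(connectivity, dtype=float)
    A = (A + A.T) / 2.0                      # enforce symmetry
    D = np.diag(A.sum(axis=1))               # degree matrix
    L = D - A                                # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)     # eigh sorts eigenvalues ascending
    return eigvals, eigvecs

rng = np.random.default_rng(2)
A = rng.random((100, 100))                   # stand-in for structural connectivity
eigvals, modes = connectome_harmonics(A)
print(modes[:, :5].shape)                    # first five harmonic modes
```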


2019
Author(s):  
Lin Wang ◽  
Edward Wlotko ◽  
Edward Alexander ◽  
Lotte Schoot ◽  
Minjae Kim ◽  
...  

It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with representational similarity analysis (RSA), to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity was greater following animate-constraining verbs than following inanimate-constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Significance Statement: Language inputs unfold very quickly during real-time communication. By predicting ahead, we can give our brains a “head start,” so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context “they cautioned the…”, we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
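
The spatial RSA logic described above reduces to correlating sensor-space activity patterns across trials within a condition and comparing the average pairwise similarity between conditions. The sketch below assumes generic array shapes and names; it is not the authors' code.

```python
# Hedged sketch of spatial representational similarity: within a post-verb
# time window, correlate sensor-wise activity patterns across trials and
# compare the mean pairwise similarity between conditions.
import numpy as np

def mean_spatial_similarity(patterns):
    """
    patterns: array (n_trials, n_sensors), one spatial pattern per trial
              (e.g., averaged over a post-verb time window).
    Returns the mean pairwise Pearson correlation across trials.
    """
    r = np.corrcoef(np.asarray(patterns, dtype=float))   # trials x trials
    iu = np.triu_indices_from(r, k=1)                     # unique pairs only
    return r[iu].mean()

# Synthetic demo: trials sharing a common spatial component show similarity > 0.
rng = np.random.default_rng(3)
demo = rng.normal(size=(40, 60)) + rng.normal(size=(1, 60))
print(mean_spatial_similarity(demo))

# Hypothetical usage, per the abstract's predicted direction of the effect:
# effect = mean_spatial_similarity(animate_verb_patterns) - \
#          mean_spatial_similarity(inanimate_verb_patterns)   # expected > 0
```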

