Microsleep is associated with brain activity patterns unperturbed by auditory inputs

2019 · Vol 122 (6) · pp. 2568-2575
Author(s): Zixin Yong, Joo Huang Tan, Po-Jang Hsieh

Microsleeps are brief episodes of decreased arousal manifested through behavioral signs. Brain activity during microsleep in the presence of external stimuli remains poorly understood. In this study, we sought to understand neural responses to auditory stimulation during microsleep. We gave participants the simple task of listening to audio clips of different pitches and amplitude-modulation frequencies during early-afternoon functional MRI scans. We found the following: 1) microsleep was associated with cortical activations in broad motor and sensory regions and deactivations in the thalamus, irrespective of auditory stimulation; 2) high- and low-pitch audio clips elicited different activity patterns in the auditory cortex during the awake but not the microsleep state; and 3) during microsleep, spatial activity patterns in broad brain regions were similar regardless of the presence or type of auditory stimulus (i.e., stimulus invariant). These findings show that the brain is highly active during microsleep but that activity patterns across broad regions are unperturbed by auditory inputs. NEW & NOTEWORTHY During deep drowsy states, auditory inputs can induce activations in the auditory cortex, but the activation patterns lose their differentiation between high- and low-pitch stimuli. Rather than being random, activity patterns across the brain during microsleep appear to be structured and may reflect underlying neurophysiological processes that remain unclear.
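
As a rough illustration of the stimulus-invariance logic in this abstract (not the authors' pipeline), the sketch below correlates condition-wise spatial voxel patterns: invariance would show as uniformly high cross-condition similarity during microsleep but condition-specific patterns while awake. The condition names and synthetic data are assumptions.

```python
# Hedged sketch: comparing spatial activity patterns across auditory
# conditions via Pearson correlation. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500

# Hypothetical mean voxel patterns per auditory condition within one
# arousal state (names are placeholders).
patterns = {
    "high_pitch": rng.normal(size=n_voxels),
    "low_pitch": rng.normal(size=n_voxels),
    "no_stimulus": rng.normal(size=n_voxels),
}

def pattern_similarity(a, b):
    """Pearson correlation between two spatial activity patterns."""
    return np.corrcoef(a, b)[0, 1]

# Stimulus invariance would appear as uniformly high cross-condition
# similarity during microsleep, unlike in the awake state.
conds = list(patterns)
for i, c1 in enumerate(conds):
    for c2 in conds[i + 1:]:
        r = pattern_similarity(patterns[c1], patterns[c2])
        print(f"{c1} vs {c2}: r = {r:.3f}")
```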

2017
Author(s): Jeremy I Skipper, Jason D Zevin

How is speech understood despite the lack of a deterministic relationship between the sounds reaching auditory cortex and what we perceive? One possibility is that unheard words, unconsciously activated in association with the listening context, are used to constrain interpretation. We hypothesized that a mechanism for doing so involves reusing the brain's ability to predict the sensory effects of speaking associated words. Predictions are then compared to signals arriving in auditory cortex, resulting in reduced processing demands when they are accurate. Indeed, we show that sensorimotor brain regions are more active prior to words that are predictable from the listening context. This activity resembles lexical and speech-production-related processes and, specifically, reflects subsequent but still unpresented words. When those words occur, auditory cortex activity is reduced through feedback connectivity. In less predictive contexts, activity patterns and connectivity for the same words are markedly different. These results suggest that the brain reorganizes to actively use knowledge about context to construct the speech we hear, enabling rapid and accurate comprehension despite acoustic variability.
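
A toy illustration of the prediction-comparison idea in this passage: when a top-down prediction matches the incoming auditory signal, the residual (error) signal left to process is small. The numbers below are purely conceptual assumptions, not the authors' model.

```python
# Minimal sketch: residual error is small when a prediction is
# accurate, large in a less predictive context. Synthetic values only.
import numpy as np

rng = np.random.default_rng(8)
incoming = rng.normal(size=100)                  # arriving auditory signal

accurate_prediction = incoming + rng.normal(scale=0.1, size=100)
poor_prediction = rng.normal(size=100)

for name, pred in [("predictive context", accurate_prediction),
                   ("less predictive context", poor_prediction)]:
    error = np.linalg.norm(incoming - pred)      # unexplained signal
    print(f"{name}: residual error = {error:.1f}")
```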


2019
Author(s): Amirouche Sadoun, Tushar Chauhan, Samir Mameri, Yifan Zhang, Pascal Barone, ...

Abstract Modern neuroimaging represents three-dimensional brain activity, which varies across brain regions. It remains unknown whether activity within brain regions is organized in spatial configurations that reflect perceptual and cognitive processes. We developed a rotational cross-correlation method that allows straightforward analysis of spatial activity patterns and precise detection of spatially correlated distributions of brain activity. Using several statistical approaches, we found that seed patterns in the fusiform face area were robustly correlated with patterns in brain regions involved in face-specific representations. These regions differed from the non-specific visual network, meaning that the local structure of activity in the brain is preserved in stimulation-specific regions. Our findings indicate spatially correlated perceptual representations in cerebral activity and suggest that the 3D coding of processed information is organized in locally preserved activity patterns. More generally, our results provide the first demonstration that information is represented and transmitted as local spatial configurations of brain activity.
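
The sketch below gives a minimal 2D version of the rotational cross-correlation idea (the paper works in 3D): correlate a seed activity patch with a target patch over a set of rotations and keep the best-matching angle. All data and parameters here are illustrative assumptions, not the authors' implementation.

```python
# 2D sketch of rotational cross-correlation: scan rotation angles and
# score each with normalized cross-correlation. Synthetic patterns.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(1)
seed = rng.normal(size=(21, 21))           # seed spatial pattern
target = rotate(seed, 40, reshape=False)   # target = rotated copy of seed

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patterns."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

angles = np.arange(0, 360, 5)
scores = [ncc(rotate(seed, ang, reshape=False), target) for ang in angles]
best = angles[int(np.argmax(scores))]
print(f"best rotation: {best} deg, correlation: {max(scores):.2f}")
```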


2021
Author(s): Anqi Wu, Samuel A. Nastase, Christopher A Baldassano, Nicholas B Turk-Browne, Kenneth A. Norman, ...

A key problem in functional magnetic resonance imaging (fMRI) is to estimate spatial activity patterns from noisy high-dimensional signals. Spatial smoothing provides one approach to regularizing such estimates. However, standard smoothing methods ignore the fact that correlations in neural activity may fall off at different rates in different brain areas, or exhibit discontinuities across anatomical or functional boundaries. Moreover, such methods do not exploit the fact that widely separated brain regions may exhibit strong correlations due to bilateral symmetry or the network organization of brain regions. To capture this non-stationary spatial correlation structure, we introduce the brain kernel, a continuous covariance function for whole-brain activity patterns. We define the brain kernel in terms of a continuous nonlinear mapping from 3D brain coordinates to a latent embedding space, parametrized with a Gaussian process (GP). The brain kernel specifies the prior covariance between voxels as a function of the distance between their locations in embedding space. The GP mapping warps the brain nonlinearly so that highly correlated voxels are close together in latent space, and uncorrelated voxels are far apart. We estimate the brain kernel using resting-state fMRI data, and we develop an exact, scalable inference method based on block coordinate descent to overcome the challenges of high dimensionality (10-100K voxels). Finally, we illustrate the brain kernel's usefulness with applications to brain decoding and factor analysis with multiple task-based fMRI datasets.
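
As a rough illustration of the covariance construction described above, the sketch below builds an RBF kernel on distances in a latent embedding rather than in anatomical 3D space. The toy embedding function stands in for the Gaussian-process mapping the authors estimate from resting-state data; all names and values are assumptions.

```python
# Hedged sketch of the "brain kernel" idea: voxel-voxel prior
# covariance as an RBF on distance in a latent embedding space.
import numpy as np

def latent_embedding(coords):
    """Toy stand-in for the learned GP map from 3D brain coordinates
    to latent space (the real map is estimated from data)."""
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]
    return np.stack([np.tanh(x + z), np.sin(y), 0.1 * x * y], axis=1)

def brain_kernel(coords, length_scale=1.0, variance=1.0):
    """Prior covariance between voxels as an RBF on embedded distance."""
    emb = latent_embedding(coords)
    d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-d2 / (2 * length_scale ** 2))

rng = np.random.default_rng(2)
voxels = rng.uniform(-1, 1, size=(50, 3))   # hypothetical 3D coordinates
K = brain_kernel(voxels)
print(K.shape, np.allclose(K, K.T))          # symmetric covariance matrix
```

Because the kernel depends only on embedded distance, voxels that are anatomically far apart (e.g., bilateral homologues) can still receive high prior covariance if the mapping places them close together in latent space.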


2022 · pp. 1-16
Author(s): Jamal A. Williams, Elizabeth H. Margulis, Samuel A. Nastase, Janice Chen, Uri Hasson, ...

Abstract Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
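
A hedged sketch of the boundary-matching step described above: given event boundaries from a model fit to brain activity and boundaries annotated by human listeners, count model boundaries that land within a tolerance window of an annotation. The timings, tolerance, and matching criterion below are illustrative assumptions (the paper's statistics build on permutation tests over counts like this).

```python
# Minimal sketch: fraction of model boundaries near a human annotation.
import numpy as np

def match_boundaries(model_bounds, human_bounds, tolerance=3.0):
    """Fraction of model boundaries within `tolerance` seconds of any
    human annotation (one simple matching criterion)."""
    model_bounds = np.asarray(model_bounds, dtype=float)
    human_bounds = np.asarray(human_bounds, dtype=float)
    hits = sum(np.min(np.abs(human_bounds - b)) <= tolerance
               for b in model_bounds)
    return hits / len(model_bounds)

# Hypothetical boundary times (seconds) for one musical excerpt.
hmm_boundaries = [12.0, 31.5, 58.2, 90.1]   # from the fitted HMM
annotations = [11.0, 30.0, 55.0, 88.5, 120.0]
print(f"match rate: {match_boundaries(hmm_boundaries, annotations):.2f}")
```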


2018
Author(s): Maria Tsantani, Nikolaus Kriegeskorte, Carolyn McGettigan, Lúcia Garrido

Abstract Face-selective and voice-selective brain regions have been shown to represent face identity and voice identity, respectively. Here we investigated whether there are modality-general person-identity representations in the brain that can be driven by either a face or a voice, and that invariantly represent naturalistically varying face and voice tokens of the same identity. According to two distinct models, such representations could exist either in multimodal brain regions (Campanella and Belin, 2007) or in face-selective brain regions via direct coupling between face- and voice-selective regions (von Kriegstein et al., 2005). To test the predictions of these two models, we used fMRI to measure brain activity patterns elicited by the faces and voices of familiar people in multimodal, face-selective, and voice-selective brain regions. We used representational similarity analysis (RSA) to compare the representational geometries of face- and voice-elicited person identities, and to investigate the degree to which pattern discriminants for pairs of identities generalise from one modality to the other. We found no matching geometries for faces and voices in any brain region. However, we showed crossmodal generalisation of the pattern discriminants in the multimodal right posterior superior temporal sulcus (rpSTS), suggesting a modality-general person-identity representation in this region. Importantly, the rpSTS showed invariant representations of face and voice identities, in that discriminants were trained and tested on independent face videos (different viewpoint, lighting, background) and voice recordings (different vocalizations). Our findings support the Multimodal Processing Model, which proposes that face and voice information is integrated in multimodal brain regions. Significance statement: It is possible to identify a familiar person either by looking at their face or by listening to their voice. Using fMRI and representational similarity analysis (RSA), we show that the right posterior superior temporal sulcus (rpSTS), a multimodal brain region that responds to both faces and voices, contains representations that can distinguish between familiar people independently of whether we are looking at their face or listening to their voice. Crucially, these representations generalised across different face videos and voice recordings. Our findings suggest that identity information from the visual and auditory processing systems is combined and integrated in the multimodal rpSTS region.
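
The sketch below illustrates the crossmodal-generalisation logic (not the authors' pipeline): train a linear classifier to separate two identities from face-elicited voxel patterns, then test it on voice-elicited patterns from the same identities. Above-chance transfer indicates a modality-general identity code. The data-generating assumptions (a shared identity axis plus modality-specific noise) are invented for illustration.

```python
# Hedged sketch of crossmodal discriminant generalisation with
# synthetic patterns sharing an identity signal across modalities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_voxels = 200
identity_axis = rng.normal(size=n_voxels)   # assumed shared identity code

def patterns(identity, modality_noise, n=20):
    """Synthetic voxel patterns for one identity in one modality."""
    signal = identity * identity_axis
    return signal + rng.normal(scale=modality_noise, size=(n, n_voxels))

X_face = np.vstack([patterns(+1, 1.0), patterns(-1, 1.0)])
X_voice = np.vstack([patterns(+1, 1.5), patterns(-1, 1.5)])
y = np.array([0] * 20 + [1] * 20)           # identity labels

# Train on face-elicited patterns, test on voice-elicited patterns.
clf = LogisticRegression(max_iter=1000).fit(X_face, y)
print(f"crossmodal accuracy: {clf.score(X_voice, y):.2f}")
```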


2019
Author(s): Alexandra Woolgar, Nadene Dermody, Soheil Afshar, Mark A. Williams, Anina N. Rich

Summary Great excitement has surrounded our ability to decode task information from human brain activity patterns, reinforcing the dominant view of the brain as an information processor. We tested a fundamental but overlooked assumption: that such decodable information is actually used by the brain to generate cognition and behaviour. Participants performed a challenging stimulus-response task during fMRI. Our novel analyses trained a pattern classifier on data from correct trials and used it to examine stimulus and rule coding on error trials. There was a striking interaction in which frontoparietal cortex systematically represented incorrect rule but correct stimulus information when participants used the wrong rule, and incorrect stimulus but correct rule information on other types of errors. Visual cortex, by contrast, did not code correct or incorrect information on error trials. Thus behaviour was tightly linked to coding in frontoparietal cortex and only weakly linked to coding in visual cortex. Human behaviour may indeed result from information-like patterns of activity in the brain, but this relationship is stronger in some brain regions than in others. Testing for information coding on error trials can help establish which patterns constitute behaviourally meaningful information.
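
A minimal sketch of the error-trial analysis logic: a classifier trained on correct trials is applied to error trials to ask which information a region encodes when behaviour goes wrong. Here synthetic "frontoparietal" patterns are generated so that, on rule errors, they reflect the rule the participant used rather than the cued one; all data and names are placeholders, not the authors' dataset.

```python
# Hedged sketch: train on correct trials, test on error trials.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_voxels = 100
rule_axis = rng.normal(size=n_voxels)       # assumed rule-coding axis

def trials(rule, n=30):
    """Simulated patterns coding the rule in use (+1 or -1)."""
    return rule * rule_axis + rng.normal(size=(n, n_voxels))

# Correct trials: patterns reflect the cued rule.
X_train = np.vstack([trials(+1), trials(-1)])
y_train = np.array([1] * 30 + [0] * 30)

# Rule-error trials: rule 1 was cued, but patterns reflect rule 0,
# mimicking a region representing the incorrect-but-used rule.
X_error = trials(-1, n=20)

clf = LinearSVC().fit(X_train, y_train)
pred = clf.predict(X_error)
print(f"fraction decoded as the used (incorrect) rule: {(pred == 0).mean():.2f}")
```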


2017 · Vol 24 (3) · pp. 277-293
Author(s): Selen Atasoy, Gustavo Deco, Morten L. Kringelbach, Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillations covering a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas, forming structured correlation patterns known as the resting-state networks, although the brain is never truly at "rest." Here, we introduce the concept of harmonic brain modes: fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity, that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on, and constrained by, the particular structure of the brain. Hence, this definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics and its underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, psychedelic states). Notably, when decoded as activation of connectome harmonics, the spatial and temporal characteristics of neural activity naturally emerge from the interplay between excitation and inhibition, and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the introduced framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics) but also enables a new dimension of tools for understanding the fundamental principles underlying brain dynamics in different states of consciousness.
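
Following the definition above, connectome harmonics are standardly computed as eigenmodes of the graph Laplacian of a structural connectivity matrix. The sketch below shows that computation on a random symmetric matrix standing in for a real tractography-derived connectome; the connectome itself and any preprocessing are assumptions here.

```python
# Sketch: harmonic brain modes as eigenmodes of the graph Laplacian
# of a (here synthetic) structural connectivity matrix.
import numpy as np

rng = np.random.default_rng(5)
n_nodes = 100
# Hypothetical symmetric structural connectivity (e.g., tractography).
A = rng.random((n_nodes, n_nodes))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

D = np.diag(A.sum(axis=1))          # degree matrix
L = D - A                           # (unnormalized) graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)

# Each column of `eigvecs` is one harmonic mode; low eigenvalues
# correspond to spatially smooth, large-scale patterns.
harmonics = eigvecs[:, :10]
print(harmonics.shape, eigvals[:3])
```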


2021
Author(s): Takashi Nakano, Masahiro Takamura, Haruki Nishimura, Maro Machizawa, Naho Ichikawa, ...

Abstract Neurofeedback (NF) aptitude, which refers to an individual's ability to change their own brain activity through NF training, has been reported to vary significantly from person to person. Predicting individual NF aptitude is critical in clinical NF applications. In the present study, we extracted resting-state functional connectivity (FC) markers of NF aptitude independent of the NF-targeted brain regions. We combined data from fMRI-NF studies targeting four different brain regions at two independent sites (59 healthy adults and six patients with major depressive disorder), collecting resting-state fMRI data together with aptitude scores from subsequent fMRI-NF training. We then trained regression models to predict individual NF aptitude scores from the resting-state fMRI data using a discovery dataset from one site and identified six resting-state FCs that predicted NF aptitude. Next, we validated the prediction model using independent test data from the other site. The results showed that the posterior cingulate cortex was the functional hub among these brain regions and formed the predictive resting-state FCs, suggesting that NF aptitude may reflect characteristics of an attentional mode-orientation modulation system expressed in task-free resting-state brain activity.
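
A hedged sketch of the prediction-and-validation step: regress aptitude scores on a small set of resting-state FC values, training on a discovery site and scoring on a held-out site. The feature count of six mirrors the text, but the test-site size, weights, and data are invented for illustration.

```python
# Sketch: cross-site validation of an FC-based aptitude regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_fc = 6                                   # six predictive FCs, per the text

def site_data(n_subjects, noise=1.0):
    """Synthetic FC features and aptitude scores for one site."""
    X = rng.normal(size=(n_subjects, n_fc))
    w = np.linspace(0.5, 1.0, n_fc)        # assumed true weights
    y = X @ w + rng.normal(scale=noise, size=n_subjects)
    return X, y

X_discovery, y_discovery = site_data(59)   # discovery-site sample size
X_test, y_test = site_data(20)             # hypothetical held-out site

model = LinearRegression().fit(X_discovery, y_discovery)
print(f"held-out site R^2: {model.score(X_test, y_test):.2f}")
```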


2020 · Vol 10 (12) · pp. 936
Author(s): Yujia Wu, Jingwen Ma, Lei Cai, Zengjian Wang, Miao Fan, ...

It is unclear whether brain activity during phonological processing of second languages (L2) is similar to that of the first language (L1) in trilingual individuals, especially when the L1 is logographic and the L2s are logographic and alphabetic, respectively. To explore this issue, this study examined brain activity during visual and auditory word-rhyming tasks in Cantonese–Mandarin–English trilinguals. Thirty Chinese college students whose L1 was Cantonese and whose L2s were Mandarin and English were recruited. Functional magnetic resonance imaging (fMRI) was conducted while subjects performed visual and auditory word-rhyming tasks in the three languages (Cantonese, Mandarin, and English). The results revealed that when the orthography of the L2 is the same as that of the L1 (i.e., Mandarin and Cantonese, which share the same set of Chinese characters), the brain regions for phonological processing of the L2 differ from those of the L1; when the orthography of the L2 is quite different from that of the L1 (i.e., English and Cantonese, which belong to different writing systems), the brain regions for phonological processing of the L2 are similar to those of the L1. A significant interaction effect between language and modality was observed in the bilateral lingual gyri. Region-of-interest (ROI) analysis of the lingual gyri revealed greater activation of this region for English than for Cantonese and Mandarin in visual tasks.


Author(s): Ole Adrian Heggli, Ivana Konvalinka, Joana Cabral, Elvira Brattico, Morten L Kringelbach, ...

Abstract Interpersonal coordination is a core part of human interaction, and its underlying mechanisms have been extensively studied using social paradigms such as joint finger-tapping. Here, individual and dyadic differences have been found to yield a range of dyadic synchronization strategies, such as mutual adaptation, leading–leading, and leading–following behaviour, but the brain mechanisms that underlie these strategies remain poorly understood. To identify the individual brain mechanisms underlying the emergence of these minimal social interaction strategies, we contrasted EEG-recorded brain activity in two groups of musicians exhibiting the mutual adaptation and leading–leading strategies. We found that individuals coordinating via mutual adaptation exhibited more frequent phase-locked activity within a transient action–perception-related brain network in the alpha range, as compared to the leading–leading group. Furthermore, we identified parietal and temporal brain regions that changed significantly in the directionality of their within-network information flow. Our results suggest that the stronger weight on extrinsic coupling observed in computational models of mutual adaptation, as compared to leading–leading, might be facilitated by a higher degree of action–perception network coupling in the brain.
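
One quantity behind alpha-band phase-locking results like these is the phase-locking value (PLV) between two channels, computed from the Hilbert-transform phase of band-pass-filtered signals. The sketch below is a minimal version with synthetic signals; the filter settings and sampling rate are assumptions, not the authors' parameters.

```python
# Hedged sketch: alpha-band (8-12 Hz) phase-locking value between two
# synthetic EEG-like signals sharing a 10 Hz component.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                    # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(7)
shared = np.sin(2 * np.pi * 10 * t)         # shared alpha component
x = shared + rng.normal(scale=1.0, size=t.size)
y = shared + rng.normal(scale=1.0, size=t.size)

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
phase_x = np.angle(hilbert(filtfilt(b, a, x)))
phase_y = np.angle(hilbert(filtfilt(b, a, y)))

# PLV: magnitude of the mean phase-difference vector (0 = none, 1 = full).
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"alpha-band PLV: {plv:.2f}")
```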

