Neurophysiological network dynamics of pitch change detection

2020
Author(s):
Soheila Samiee
Dominique Vuvan
Esther Florin
Philippe Albouy
Isabelle Peretz
...

Abstract
The detection of pitch changes is crucial to sound localization, music appreciation and speech comprehension, yet the brain network oscillatory dynamics involved remain unclear. We used time-resolved cortical imaging in a pitch change detection task. Tone sequences were presented to both typical listeners and participants affected with congenital amusia, as a model of altered pitch change perception.
Our data show that tone sequences entrained slow (2-4 Hz) oscillations in the auditory cortex and inferior frontal gyrus, at the pace of tone presentations. Inter-regional signaling at this slow pace was directed from the auditory cortex towards the inferior frontal gyrus and motor cortex. Bursts of faster (15-35 Hz) oscillations were also generated in these regions, with directed influence from the motor cortex. These faster components occurred precisely at the expected latencies of each tone in a sequence, yielding a form of local phase-amplitude coupling with slower concurrent activity. The intensity of this coupling peaked dynamically at the moment of anticipated pitch changes.
We clarify the mechanistic relevance of these observations in relation to behavior as, by task design, typical listeners outperformed amusic participants. Compared to typical listeners, inter-regional slow signaling toward motor and inferior frontal cortices was depressed in amusia. The auditory cortex of amusic participants also over-expressed tonic, fast-slow phase-amplitude coupling, pointing to a possible misalignment between stimulus encoding and internal predictive signaling. Our study provides novel insight into the functional architecture of polyrhythmic brain activity in auditory perception and emphasizes active, network processes involving the motor system in sensory integration.
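The fast-slow phase-amplitude coupling described above can be sketched numerically. This is a minimal illustration on synthetic signals, using a mean-vector-length modulation index (a Canolty-style estimator); the sampling rate, band edges, and signal construction are all illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

# Hypothetical signal: a 2 Hz "slow" rhythm whose phase modulates the
# amplitude of a 25 Hz "fast" burst, mimicking the reported coupling.
fs = 500                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
slow = np.sin(2 * np.pi * 2 * t)
fast = (1 + slow) * np.sin(2 * np.pi * 25 * t)   # amplitude follows slow phase
signal = slow + 0.5 * fast + 0.1 * rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass (second-order sections for stability)
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Phase of the slow band, amplitude envelope of the fast band
phase = np.angle(hilbert(bandpass(signal, 1, 4, fs)))
amp = np.abs(hilbert(bandpass(signal, 15, 35, fs)))

# Mean-vector-length modulation index: length of the amplitude-weighted
# mean phase vector, normalized by mean amplitude (0 = no coupling)
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
```

On this constructed signal the index is well above zero, because the fast envelope is locked to the slow phase by design.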

2020
Vol 11
Author(s):
Wanghuan Dun
Tongtong Fan
Qiming Wang
Ke Wang
Jing Yang
...

Empathy refers to the ability to understand someone else's emotions and fluctuates with the current state in healthy individuals. However, little is known about the neural network of empathy in clinical populations at different pain states. The current study aimed to examine the effects of long-term pain on empathy-related networks, and whether empathy varied across pain states, by studying primary dysmenorrhea (PDM) patients. Multivariate partial least squares was employed in 46 PDM women and 46 healthy controls (HC) during the periovulatory, luteal, and menstruation phases. We identified neural networks associated with different aspects of empathy in both groups. Part of the obtained empathy-related network in PDM exhibited activity similar to that of HC, including the right anterior insula and other regions, whereas other regions, including the inferior frontal gyrus and right inferior parietal lobule, showed the opposite pattern of activity in PDM. These results indicated abnormal regulation of empathy in PDM. Furthermore, there was no difference in empathy association patterns in PDM between the pain and pain-free states. This study suggested that long-term pain experience may lead to abnormal function of the brain network for empathy processing that did not vary with the pain or pain-free state across the menstrual cycle.
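The multivariate partial least squares step can be illustrated with a toy computation. In behavioral PLS as commonly used in neuroimaging, the brain-behavior cross-correlation matrix is decomposed by singular value decomposition into latent variables linking activity patterns to behavioral (here, empathy) scores. Everything below, including array sizes and random data, is a hypothetical sketch, not the study's data or exact implementation:

```python
import numpy as np

# Illustrative sizes: 46 subjects, 200 brain features, 3 empathy scores
rng = np.random.default_rng(0)
n_subj, n_voxels, n_scores = 46, 200, 3
brain = rng.standard_normal((n_subj, n_voxels))
behav = rng.standard_normal((n_subj, n_scores))

def zscore(x):
    # Standardize each column across subjects
    return (x - x.mean(0)) / x.std(0)

# Cross-correlation matrix between z-scored behavior and brain data
R = zscore(behav).T @ zscore(brain) / n_subj        # shape: (scores, voxels)

# SVD yields latent variables: behavioral saliences (U), brain
# saliences (Vt), and singular values (s)
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Fraction of cross-block covariance explained by each latent variable
explained = s**2 / np.sum(s**2)
```

In practice the significance of each latent variable is assessed by permutation, and the stability of the saliences by bootstrap resampling; neither step is shown here.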


2017
Author(s):
Christian Brodbeck
Alessandro Presacco
Jonathan Z. Simon

Abstract
Human experience often involves continuous sensory information that unfolds over time. This is true in particular for speech comprehension, where continuous acoustic signals are processed over seconds or even minutes. We show that brain responses to such continuous stimuli can be investigated in detail for magnetoencephalography (MEG) data by combining linear kernel estimation with minimum norm source localization. Previous research has shown that the requirement to average data over many trials can be overcome by modeling the brain response as a linear convolution of the stimulus and a kernel, or response function, and estimating a kernel that predicts the response from the stimulus. However, such analysis has typically been restricted to sensor space. Here we demonstrate that this analysis can also be performed in neural source space. We first computed distributed minimum norm current source estimates for continuous MEG recordings, and then computed response functions for the current estimate at each source element, using the boosting algorithm with cross-validation. Permutation tests can then assess the significance of individual predictor variables as well as features of the corresponding spatio-temporal response functions. We demonstrate the viability of this technique by computing spatio-temporal response functions for speech stimuli, using predictor variables reflecting acoustic, lexical and semantic processing.
Results indicate that processes related to comprehension of continuous speech can be differentiated anatomically as well as temporally: acoustic information engaged auditory cortex at short latencies, followed by responses over the central sulcus and inferior frontal gyrus, possibly related to somatosensory/motor cortex involvement in speech perception; lexical frequency was associated with a left-lateralized response in auditory cortex and subsequent bilateral frontal activity; and semantic composition was associated with bilateral temporal and frontal brain activity. We conclude that this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG. This suggests new avenues for analyzing neural processing of naturalistic stimuli, without the necessity of averaging over artificially short or truncated stimuli.
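The kernel-estimation idea, predicting the response as a convolution of the stimulus with a response function, can be sketched as follows. The study estimated kernels with a boosting algorithm; this sketch substitutes ridge-regularized lagged regression for brevity, and all signals, rates, and lag counts are illustrative assumptions:

```python
import numpy as np

# Synthetic continuous stimulus (e.g. an acoustic envelope) and a
# response generated by convolving it with a known "true" kernel
rng = np.random.default_rng(1)
fs = 100                                   # assumed sampling rate (Hz)
n = 2000
stim = rng.standard_normal(n)
true_trf = np.exp(-np.arange(30) / 10) * np.sin(np.arange(30) / 3)
resp = np.convolve(stim, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Design matrix of lagged copies of the stimulus (lags 0 .. n_lags-1)
n_lags = 30
X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
X[:n_lags] = 0                             # discard wrap-around samples

# Ridge-regularized least squares recovers the response function
lam = 1.0                                  # ridge penalty, illustrative
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)
```

With enough data relative to noise, the estimated kernel closely tracks the true one; in the source-space analysis described above, one such kernel is estimated per source element and predictor.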


1993
Vol 102 (10)
pp. 797-801
Author(s):
Juichi Ito
Yasushi Iwasaki
Junji Sakakibara
Yoshiharu Yonekura

The present study investigated the function of the auditory cortices in severely hearing-impaired or deaf patients and cochlear implant patients before and after auditory stimulation. Positron emission computed tomography (PET), which can detect brain activity by providing quantitative measurements of the metabolic rates of oxygen and glucose, was used. In patients with residual hearing, the activity of the auditory cortex measured by PET was almost normal. Among the totally deaf patients, the longer the duration of deafness, the lower the brain activity in the auditory cortex measured by PET. Patients who had been deaf for a long period showed remarkably reduced metabolic rates in the auditory cortices. However, following implantation of the cochlear device, the metabolic activity returned to near-normal levels. These findings suggest that activation of the speech comprehension mechanism of the higher brain system can be initiated by sound signals from the implant devices.


2020
Vol 32 (5)
pp. 877-888
Author(s):
Maxime Niesen
Marc Vander Ghinst
Mathieu Bourguignon
Vincent Wens
Julie Bertels
...

Discrimination of words from nonspeech sounds is essential in communication. Still, how selective attention can influence this early step of speech processing remains elusive. To answer that question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus. Sources at 2.17 Hz were reconstructed in supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in the verbal attention condition than in the nonverbal attention condition. This study demonstrates speech-sensitive responses at primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG. This area therefore appears to play a crucial role in selective verbal attentional processes for this early step of speech processing.


2020
Author(s):
Justin Riddle
Sangtae Ahn
Trevor McPherson
Susan Girdler
Flavio Frohlich

Abstract
The neuroactive metabolites of the steroid hormones progesterone (P4) and testosterone (T) are GABAergic modulators that influence cognitive control, yet the specific effects of P4 and T on brain network activity remain poorly understood. Here, we investigated whether a fundamental oscillatory network activity pattern related to cognitive control, frontal midline theta (FMT) oscillations, is modulated by the steroid hormones P4 and T. We measured the concentrations of P4 and T using salivary enzyme immunoassay, and FMT oscillations using high-density electroencephalography (EEG) during the eyes-open resting state, in fifty-five healthy female and male participants. Electrical brain activity was analyzed using Morlet wavelet convolution, beamformer source localization, background noise spectral fitting, and phase-amplitude coupling analysis. Steroid hormone concentrations and biological sex were used as predictors for scalp and source-estimated theta oscillations and for top-down theta-gamma phase-amplitude coupling. Elevated concentrations of P4 predicted increased FMT oscillatory amplitude across both sexes, and no relationship was found with T. The positive correlation with P4 was specific to the frontal-midline electrodes and survived correction for the background noise of the brain. Using source localization, FMT oscillations were localized to the frontal-parietal network. Additionally, theta amplitude within the frontal-parietal network, but not the default mode network, positively correlated with P4 concentration. Finally, P4 concentration correlated with increased coupling between FMT phase and posterior gamma amplitude.
Our results suggest that P4 concentration modulates brain activity via upregulation of theta oscillations in the frontal-parietal network and increased top-down control over posterior cortical sites.
Significance Statement
The neuroactive metabolites of the steroid hormones progesterone (P4) and testosterone (T) are GABAergic modulators that influence cognitive control, yet the specific effects of P4 and T on brain network activity remain poorly understood. Here, we investigated whether a fundamental oscillatory network activity pattern related to cognitive control, frontal midline theta (FMT) oscillations, is modulated by the steroid hormones P4 and T. Our results suggest that P4 concentration modulates brain activity via upregulation of theta oscillations in the frontal-parietal network and increased top-down control over posterior cortical sites.
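Morlet wavelet convolution, the first analysis step named above, can be sketched on a synthetic trace. Convolving a signal with a complex Morlet wavelet yields the amplitude envelope at the wavelet's center frequency; the sampling rate, cycle count, and test signal here are illustrative assumptions:

```python
import numpy as np

# Synthetic EEG-like trace with a 6 Hz (theta) rhythm in noise
fs = 250                                  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

def morlet(f0, fs, n_cycles=7):
    # Complex Morlet wavelet: a complex exponential under a Gaussian
    # envelope spanning roughly n_cycles cycles of f0
    sd = n_cycles / (2 * np.pi * f0)
    tw = np.arange(-4 * sd, 4 * sd, 1 / fs)
    w = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * sd**2))
    # Scale so an on-frequency unit sinusoid yields amplitude ~1
    return w / (np.abs(w).sum() / 2)

# Amplitude envelopes at 6 Hz (rhythm present) and 20 Hz (absent)
amp6 = np.abs(np.convolve(eeg, morlet(6.0, fs), mode="same"))
amp20 = np.abs(np.convolve(eeg, morlet(20.0, fs), mode="same"))
```

The theta-band envelope is much larger than the off-frequency one, which is how band-specific amplitude (and, via the complex output's angle, phase for coupling analyses) is extracted.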


2020
Author(s):
Danielle L. Kurtin
Ines R. Violante
Karl Zimmerman
Robert Leech
Adam Hampshire
...

Abstract
Background: Transcranial direct current stimulation (tDCS) is a form of noninvasive brain stimulation whose potential as a cognitive therapy is hindered by our limited understanding of how participant and experimental factors influence its effects. Using functional MRI to study brain networks, we have previously shown in healthy controls that the physiological effects of tDCS are strongly influenced by brain state. We have additionally shown, in both healthy and traumatic brain injury (TBI) populations, that the behavioral effects of tDCS are positively correlated with white matter (WM) structure.
Objectives: In this study we investigate how these two factors, WM structure and brain state, interact to shape the effect of tDCS on brain network activity.
Methods: We applied anodal, cathodal and sham tDCS to the right inferior frontal gyrus (rIFG) of healthy (n=22) and TBI participants (n=34). We used Choice Reaction Task (CRT) performance to manipulate brain state during tDCS. We acquired simultaneous fMRI to assess activity of cognitive brain networks and used fractional anisotropy (FA) as a measure of WM structure.
Results: We find that the effects of tDCS on brain network activity in TBI participants are highly dependent on brain state, replicating findings from our previous healthy control study in a separate, patient cohort. We then show that WM structure further modulates the brain-state-dependent effects of tDCS on brain network activity. These effects are not unidirectional: in the absence of a task, with anodal and cathodal tDCS, FA is positively correlated with brain activity in several regions of the default mode network. Conversely, with cathodal tDCS during CRT performance, FA is negatively correlated with brain activity in a salience network region.
Conclusions: Our results show that experimental and participant factors interact to have unexpected effects on brain network activity, and that these effects are not fully predictable by studying the factors in isolation.


2019
Author(s):
J.M. Rimmele
Y. Sun
G. Michalareas
O. Ghitza
D. Poeppel

Abstract
Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level processing for speech segmentation. Most linguistic approaches, however, focus on mapping from acoustic-phonemic representations to the lexical level. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. Here we investigate the effects of lexical processing, and the interactions with (acoustic) syllable processing, by examining MEG data recorded in two experiments using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/sec. Two conjectures were evaluated: (i) lexical processing of words activates a network that interacts with syllable processing; and (ii) syllable transitions contribute to word-level processing. We show that lexical content activated a left-lateralized frontal and superior and middle temporal network and increased the interaction between left middle temporal areas and auditory cortex (phase-phase coupling). Mere syllable-transition information, in contrast, activated a bilateral superior, middle temporal and inferior frontal network and increased the interaction between those areas. Word and syllable processing interacted in superior and middle temporal areas (cross-frequency coupling), whereas syllable tracking (cerebro-acoustic coherence) decreased when word-level information was present.
The data provide a new perspective on speech comprehension by demonstrating the contribution of an acoustic-syllabic route to lexical processing.
Significance Statement
The comprehension of speech requires integrating information at multiple time scales, including phonemic, syllabic, and word scales. Typically, we think of decoding speech in the service of recognizing words as a process that maps from phonemic units to words. Recent neurophysiological evidence, however, has highlighted the relevance of syllable-sized chunks for segmenting speech. Is there more to recognizing spoken language? We provide neural evidence for brain network dynamics that support an interaction of lexical with syllable-level processing. We identify cortical networks that differ depending on whether lexical-semantic information versus low-level syllable-transition information is processed. Word- and syllable-level processing interact within MTG and STG. The data enrich our understanding of comprehension by implicating a mapping from syllabic to lexical representations.
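Cerebro-acoustic coherence, the syllable-tracking measure named above, quantifies frequency-specific coupling between the speech envelope and neural activity. A minimal sketch with synthetic signals, where the sampling rate, syllable rate, and noise levels are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import coherence

# Synthetic speech envelope with a 4 Hz syllable rhythm, and a neural
# trace that tracks it with a phase lag plus independent noise
fs = 100                                  # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
envelope = np.sin(2 * np.pi * 4 * t) + 0.2 * rng.standard_normal(t.size)
neural = 0.8 * np.sin(2 * np.pi * 4 * t + 0.6) + rng.standard_normal(t.size)

# Magnitude-squared coherence (Welch's method, 4 s segments)
f, coh = coherence(envelope, neural, fs=fs, nperseg=fs * 4)
syllable_coh = coh[np.argmin(np.abs(f - 4.0))]   # coherence at 4 Hz
```

Coherence is bounded between 0 and 1 and, here, is high only at the shared syllable rate; the reported decrease under word-level information would correspond to a drop of this value at the syllabic frequency.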


2020
Vol 1 (1)
pp. 21-28
Author(s):
Brigitta Tóth
Ádám Boncz
Bálint File
István Winkler
Márk Molnár

Overview. The application of network science to neuroscience has produced breakthrough results in understanding the relationship between human cognition and neural systems. The present paper introduces several areas of neural network research through results from studies conducted in our laboratory. We describe techniques for measuring brain activity and for modeling the communication networks among brain areas. We then highlight two research areas: 1) the study of age-related changes in brain networks, which addresses how the human brain ages; and 2) the study of network models spanning multiple human brains, which seeks to uncover the neural mechanisms of effective human communication. We also discuss the prospect of developing artificial intelligence capable of human communication. Finally, we address the security policy implications of brain network research. Summary. The human brain consists of 100 billion neurons connected by about 100 trillion synapses, which are hierarchically organized across scales in anatomical space and time. Thus, it sounds reasonable to assume that the brain is the most complex network known to man. Network science applications in neuroscience aim to understand how human feeling, thought and behavior could emerge from this biological system of the brain. The present review focuses on recent results and the future of network neuroscience. The following topics will be discussed. Modeling the network of communication among brain areas: neural activity can be recorded with high temporal precision using electroencephalography (EEG). Communication strength between brain regions might then be estimated by calculating mathematical synchronization indices between source-localized EEG time series. Finally, graph theoretical models can describe the relationships between system elements (e.g., the efficiency of communication or the centrality of an element). How does the brain age?
While for a newborn the high plasticity of the brain provides the foundation of cognitive development, cognition declines with advanced age due to neural mechanisms that remain largely unknown. In one of our studies, we demonstrated a correlation between the anatomical development of the brain (at prenatal age) and its network topology: the more developed the baby's brain, the more functionally specialized/modular it was. In another study we found that in older adults, compared to young adults, connectivity within modules of the brain network is decreased, with an associated decline in short-term memory capacity. Moreover, Mild Cognitive Impairment patients (an early stage of Alzheimer's disease) were characterized by a significantly lower level of connectivity between their brain modules than healthy elderly participants. Human communication via shared networks of brain activity: in another study we recorded the brain activity of a speaker and multiple listeners, and investigated brain network similarity across listeners and between the speaker and listeners. We found that brain activity was significantly correlated among listeners, providing evidence that the same content is processed via similar neural computations within different brains. The data also suggested that the more brain activity synchronized across individuals, the more their mental states overlapped. We also found significantly synchronized brain activity between speaker and listeners. Specifically: 1) listeners' brain activity within the speech processing cortices was synchronized to the speaker's brain activity with a time lag, indicating that listeners' speech comprehension processes replicated the speaker's speech production processes; and 2) listeners' frontal cortical activity was synchronized to the speaker's later brain activity, that is, listeners preceded the speaker, indicating that listeners predict speech content based on context. Future challenges.
Future research could target the development of artificial intelligence capable of human-like communication. To achieve this, simultaneous recording of brain activity from listener and speaker is needed, together with a measure of the efficiency of the communication. These data could then be modelled via AI to detect biomarkers of communication efficiency. More generally, neurotechnology has been developing rapidly within and beyond research and clinical fields, so it is time to re-conceptualize the corresponding human rights law in order to avoid unwanted consequences of technological applications.
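The graph-theoretical modeling step described above (synchronization indices between regions, then graph metrics such as communication efficiency) can be sketched in a few lines. The connectivity values, threshold, and network size below are hypothetical; the block computes global efficiency, one standard index of how efficiently a network supports communication:

```python
import numpy as np

# Stand-in for an EEG synchronization matrix between 20 brain regions
rng = np.random.default_rng(5)
n = 20
sync = rng.random((n, n))
sync = (sync + sync.T) / 2               # symmetric: undirected connectivity
np.fill_diagonal(sync, 0)

# Keep only the strongest connections (illustrative threshold)
adj = (sync > 0.6).astype(float)

# All-pairs shortest path lengths via Floyd-Warshall on the binary graph
dist = np.where(adj > 0, 1.0, np.inf)
np.fill_diagonal(dist, 0.0)
for k in range(n):
    dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])

# Global efficiency: mean inverse shortest path length over node pairs
# (unreachable pairs contribute 0, since 1/inf == 0)
off_diag = ~np.eye(n, dtype=bool)
global_eff = np.mean(1.0 / dist[off_diag])
```

Module detection and between-module connectivity, as used in the aging and MCI comparisons, would be computed on the same kind of thresholded matrix with community-detection algorithms.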

