Speech-related auditory salience detection in the posterior superior temporal region

2021
Author(s): Erik C. Brown, Brittany Stedelin, Ahmed M. Raslan, Nathan R. Selden

Abstract Processing human speech requires both detection (early and transient) and analysis (sustained). We analyzed high gamma (70–110 Hz) activity of intracranial electroencephalography waveforms acquired during an auditory task that paired forward speech, reverse speech, and signal-correlated noise. We identified widespread superior temporal sites with sustained activity that responded only to forward and reverse speech, regardless of paired order. More localized superior temporal auditory onset sites responded to all stimulus types when presented first in a pair and, in select conditions, responded in recurrent fashion to the second paired stimulus even in the absence of interstimulus silence, a novel finding. Auditory onset activity to a second paired sound recurred according to relative salience, with evidence of partial suppression during linguistic processing. We propose that temporal lobe auditory onset sites serve a salience detector function with a hysteresis of 200 ms and are influenced by cortico-cortical feedback loops involving linguistic processing and articulation.
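The analysis above turns on extracting a high gamma (70–110 Hz) amplitude envelope from each intracranial channel. Below is a minimal sketch of one standard approach (zero-phase band-pass filtering followed by a Hilbert transform), assuming an illustrative sampling rate and synthetic data rather than the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(x, fs, band=(70.0, 110.0), order=4):
    """Band-pass filter one channel and return its analytic amplitude."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, x)      # zero-phase band-pass (no group delay)
    return np.abs(hilbert(filtered))  # instantaneous amplitude envelope

# Illustrative use: 2 s of synthetic data standing in for an iEEG trace
fs = 1000.0
x = np.random.randn(int(2 * fs))
env = high_gamma_envelope(x, fs)      # envelope to average across trials
```

Sustained versus onset responses can then be characterized from trial-averaged envelopes, e.g., activity persisting through the stimulus versus a transient peak near stimulus onset.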

2021
Author(s): Masaki Sonoda, Brian Silverstein, Jeong-won Jeong, Ayaka Sugiura, Yasuo Nakai, ...

Abstract During a verbal conversation, as individuals listen and respond, the human brain moves through a series of complex linguistic processing stages: decoding of speech sounds, semantic comprehension, retrieval of semantically coherent words, and finally, overt production of speech outputs. Each process is thought to be supported by a cortical network consisting of local and long-range connections bridging major cortical areas. Both temporal and extratemporal lobe regions are suggested to have functional compartments responsible for distinct language domains, including the perception and production of phonological and semantic components. This study provides quantitative evidence of how directly connected, inter-lobar neocortical networks support distinct stages of linguistic processing across brain development. A novel six-dimensional tractography animation technique was used to intuitively visualize the strength and temporal dynamics of direct inter-lobar effective connectivity between cortical areas activated during distinct linguistic processing stages. This study analyzed 3,401 non-epileptic intracranial electrode sites from 37 children with focal epilepsy (age: 5–20 years) who underwent extraoperative electrocorticography recording. We used a principal component analysis of high-gamma modulations during an auditory naming task to determine the relative involvement of each cortical area during each linguistic processing stage. To quantify direct effective connectivity, we delivered single-pulse electrical stimulation to 488 temporal lobe sites and 1,581 extratemporal lobe sites and measured the early cortico-cortical spectral responses at distant electrodes. Mixed model analyses determined the effects of naming-related high-gamma co-augmentation between connecting regions, age, and cerebral hemisphere on the strength of effective connectivity, independent of epilepsy-related factors. Direct effective connectivity was strongest between temporal and extratemporal lobe site pairs that were simultaneously activated during the period between sentence offset and verbal response onset (i.e., the semantic retrieval period); this connectivity was approximately twice as robust as that with temporal lobe sites activated during stimulus listening or overt response. Conversely, extratemporal lobe sites activated during overt response were equally connected with temporal lobe language sites. Older age was associated with increased strength of inter-lobar effective connectivity between sites activated during semantic retrieval. The arcuate fasciculus supported approximately two-thirds of the direct effective connectivity pathways from temporal to extratemporal auditory language-related areas, but only up to half of those in the opposite direction. The uncinate fasciculus accounted for less than 2% of those in the temporal-to-extratemporal direction and up to 6% of those in the opposite direction. Our multimodal study quantified and animated the direct inter-lobar networks toward and from temporal lobe regions supporting distinct stages of auditory language processing. Additionally, age-dependent strengthening of connectivity after age five may preferentially occur between language areas supporting semantic retrieval.
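As a rough sketch of the stage-assignment step described above (principal component analysis across trial-averaged high-gamma time courses, one row per electrode site), with array shapes and the number of components as illustrative assumptions rather than the study's actual parameters:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for trial-averaged high-gamma time courses: sites x time samples
n_sites, n_timepoints = 3401, 400
hg = np.random.randn(n_sites, n_timepoints)

pca = PCA(n_components=4)
scores = pca.fit_transform(hg)        # per-site loadings on each component
print(pca.explained_variance_ratio_)  # variance captured by each component
```

Each component's time course can then be inspected to see which task stage (listening, semantic retrieval, overt response) it peaks in, and each site's loadings indicate its relative involvement in that stage.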


Author(s): Daniel S Weisholtz, Gabriel Kreiman, David A Silbersweig, Emily Stern, Brannon Cha, ...

Abstract The ability to distinguish between negative, positive, and neutral valence is a key part of emotion perception. Emotional valence has conceptual meaning that supersedes any particular type of stimulus, although it is typically captured experimentally in association with particular tasks. We sought to identify neural encoding for task-invariant emotional valence. We evaluated whether high gamma responses (HGRs) to visually displayed words conveying emotions could be used to decode emotional valence from HGRs to facial expressions. Intracranial electroencephalography (iEEG) was recorded from fourteen individuals while they participated in two tasks: one involved reading words with positive, negative, and neutral valence, and the other involved viewing faces with positive, negative, and neutral facial expressions. Quadratic discriminant analysis was used to identify information in the HGR that differentiates the three emotion conditions. A classifier was trained on the emotional valence labels from one task and was cross-validated on data from the same task (within-task classifier) as well as the other task (between-task classifier). Emotional valence could be decoded in the left medial orbitofrontal cortex and middle temporal gyrus using both within-task and between-task classifiers. These observations suggest the presence of task-independent emotional valence information in the signals from these regions.
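The between-task decoding logic maps directly onto a train-on-one-task, test-on-the-other scheme. A minimal sketch, assuming placeholder HGR feature matrices and labels (0 = negative, 1 = neutral, 2 = positive):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
X_words = rng.normal(size=(120, 10))    # trials x HGR features, word task
y_words = rng.integers(0, 3, size=120)  # valence labels for word trials
X_faces = rng.normal(size=(120, 10))    # trials x HGR features, face task
y_faces = rng.integers(0, 3, size=120)

clf = QuadraticDiscriminantAnalysis()
clf.fit(X_words, y_words)               # train on one task...
acc = clf.score(X_faces, y_faces)       # ...evaluate on the other task
print(f"between-task accuracy: {acc:.2f}")
```

Above-chance between-task accuracy at an electrode is the signature of task-invariant valence information; the within-task classifier instead uses standard cross-validation on a single task's trials.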


2018
Author(s): Björn Herrmann, Ingrid S. Johnsrude

Abstract The ability to detect regularities in sound (i.e., recurring structure) is critical for effective perception, enabling, for example, change detection and prediction. Two seemingly unconnected lines of research concern the neural operations involved in processing regularities: one investigates how neural activity synchronizes with temporal regularities (e.g., frequency modulation; FM) in sounds, whereas the other focuses on increases in sustained activity during stimulation with repeating tone-frequency patterns. In three electroencephalography studies with male and female human participants, we investigated whether neural synchronization and sustained neural activity are dissociable, or whether they are functionally interdependent. Experiment I demonstrated that neural activity synchronizes with temporal regularity (FM) in sounds and that sustained activity increases concomitantly. In Experiment II, phase coherence of FM in sounds was parametrically varied. Although neural synchronization was more sensitive to changes in FM coherence, such changes led to a systematic modulation of both neural synchronization and sustained activity, with magnitude increasing as coherence increased. In Experiment III, participants either performed a duration categorization task on the sounds or a visual object tracking task that distracted attention. Neural synchronization was observed irrespective of task, whereas the sustained response was observed only when attention was on the auditory task, not under (visual) distraction. The results suggest that neural synchronization and sustained activity levels are functionally linked: both are sensitive to regularities in sounds. However, neural synchronization might reflect a more sensory-driven response to regularity, compared with sustained activity, which may be influenced by attentional, contextual, or other experiential factors.

Significance statement Optimal perception requires that the auditory system detects regularities in sounds. Synchronized neural activity and increases in sustained neural activity both appear to index the detection of a regularity, but the functional interrelation of these two neural signatures is unknown. In three electroencephalography experiments, we measured both signatures concomitantly while listeners were presented with sounds containing frequency modulations that differed in their regularity. We observed that both neural signatures are sensitive to temporal regularity in sounds, although they functionally decouple when a listener is distracted by a demanding visual task. Our data suggest that neural synchronization reflects a more automatic response to regularity, compared with sustained activity, which may be influenced by attentional, contextual, or other experiential factors.
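One common way to quantify the neural synchronization measured in studies like this is inter-trial phase coherence (ITPC) at the FM rate. A minimal sketch, assuming an illustrative FM rate, sampling rate, and synthetic trials rather than the authors' data:

```python
import numpy as np

def itpc_at_rate(trials, fs, rate):
    """trials: (n_trials, n_samples); ITPC at `rate` Hz (1 = perfect locking)."""
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - rate))  # FFT bin nearest the FM rate
    spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]
    phases = spectra / np.abs(spectra)         # unit-length phase vectors
    return np.abs(phases.mean())               # resultant vector length

fs, fm_rate = 500.0, 4.0                   # e.g., a 4 Hz frequency modulation
trials = np.random.randn(60, int(2 * fs))  # 60 synthetic 2 s trials
print(itpc_at_rate(trials, fs, fm_rate))
```

Sustained activity, by contrast, is typically read out as a slow shift in the trial-averaged time-domain response, which is why the two signatures can be measured concomitantly from the same recordings.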


2018
Author(s): Nikolas A. Francis, Diego Elgueda, Bernhard Englitz, Jonathan B. Fritz, Shihab A. Shamma

Abstract Rapid task-related plasticity is a neural correlate of selective attention in primary auditory cortex (A1). Top-down feedback from higher-order cortex may drive task-related plasticity in A1, characterized by enhanced neural representation of behaviorally meaningful sounds during auditory task performance. Since intracortical connectivity is greater within A1 layers 2/3 (L2/3) than in layers 4–6 (L4–6), we hypothesized that enhanced representation of behaviorally meaningful sounds might be greater in A1 L2/3 than in L4–6. To test this hypothesis and study the laminar profile of task-related plasticity, we trained two ferrets to detect pure tones while we recorded laminar activity across a 1.8 mm depth in A1. In each experiment, we analyzed current-source densities (CSDs), high-gamma local field potentials (LFPs), and multi-unit spiking in response to identical acoustic stimuli during both passive listening and active task performance. We found that neural responses to auditory targets were enhanced during task performance and that target enhancement was greater in L2/3 than in L4–6. Spectrotemporal receptive fields (STRFs) computed from CSDs, high-gamma LFPs, and multi-unit spiking showed similar increases in auditory target selectivity, again greatest in L2/3. Our results suggest that activity within intracortical networks plays a key role in shaping the neural mechanisms underlying selective attention.
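The CSD analysis mentioned above is conventionally estimated as the negative second spatial derivative of the LFP across equally spaced laminar contacts. A minimal sketch, with contact count, spacing, and data as illustrative assumptions:

```python
import numpy as np

def csd(lfp, spacing_mm=0.1):
    """lfp: (n_contacts, n_samples); CSD at interior contacts only."""
    # CSD ~ -(V[i-1] - 2*V[i] + V[i+1]) / spacing^2
    second_diff = lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]
    return -second_diff / spacing_mm**2

lfp = np.random.randn(16, 1000)  # contacts spanning the recorded depth
print(csd(lfp).shape)            # (14, 1000): endpoint contacts are lost
```

Current sinks and sources localized this way help assign responses to cortical layers, which is what allows the L2/3 versus L4–6 comparison of target enhancement.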


2019
Author(s): Fiorenzo Artoni, Piergiorgio d’Orio, Eleonora Catricalà, Francesca Conca, Franco Bottoni, ...

Syntax is traditionally defined as a specifically human way to pair sound with meaning: words are assembled in a recursive way, generating a potentially infinite set of sentences [1,2]. Different phrasal structures arise depending on the types of words involved, for example, “noun phrases” (NPs), combining an article and a noun, vs. “verb phrases” (VPs), combining a verb and a complement. Although it is known that combining an increasing number of words into sequences correlates with increasing electrophysiological activity [3,4], the specific electrophysiological correlates of the syntactic operation generating NPs vs. VPs remain unknown. A major confounding factor is that syntactic information is inevitably intertwined with the acoustic information contained in words, even during inner speech [5]. Here, we addressed this issue in a novel way by designing a paradigm to factor out acoustic information and isolate the syntactic component. In particular, we constructed phrases that have exactly the same acoustic content but are interpreted as NPs or VPs depending on their syntactic context (homophonous phrases). By performing stereo-electro-encephalographic (SEEG) recordings in epileptic patients [6], we show that VPs are associated with higher activity in the high gamma band (150–300 Hz), an index of cortical activity associated with linguistic processing, relative to NPs in multiple cortical areas in both hemispheres, including language areas and their homologues in the non-dominant hemisphere. Our findings pave the way to a deeper understanding of the electrophysiological mechanisms underlying syntax and contribute to the ultimate, far-reaching goal of a complete neural decoding of linguistic structures from the brain [2].
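The core contrast reported here is a per-electrode comparison of high gamma band power between VP and NP trials built from the same acoustic material. A toy sketch of that comparison, assuming placeholder power values and a simple two-sample t-test rather than the study's actual statistics:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
power_vp = rng.normal(1.1, 0.3, size=(80, 64))  # trials x electrodes, VP
power_np = rng.normal(1.0, 0.3, size=(80, 64))  # trials x electrodes, NP

t, p = ttest_ind(power_vp, power_np, axis=0)    # one test per electrode
sig = np.where(p < 0.05 / 64)[0]                # Bonferroni across electrodes
print("electrodes with a VP/NP difference:", sig)
```

Because the VP and NP trials are acoustically identical, any electrode showing a reliable difference reflects the syntactic context rather than the sound itself.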


2020, Vol 32 (5), pp. 762–782
Author(s): Orly Rubinsten, Nachshon Korem, Naama Levin, Tamar Furman

Recent evidence suggests that during numerical calculation, symbolic and nonsymbolic processing are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the parietal cortex) and occur at roughly the same time (about 250 msec post-stimulus onset). We tested the hypothesis that symbolic and nonsymbolic processing are segregated by means of functionally relevant networks in different frequency ranges: high gamma (above 50 Hz) for symbolic processing and lower beta (12–17 Hz) for nonsymbolic processing. EEG signals were quantified as participants compared either symbolic numbers or nonsymbolic quantities. Larger EEG gamma-band power was observed for more difficult symbolic comparisons (a ratio of 0.8 between the two numbers) than for easier comparisons (a ratio of 0.2) over frontocentral regions. Similarly, beta-band power was larger for more difficult nonsymbolic comparisons than for easier ones over parietal areas. These results confirm a functional dissociation in EEG oscillatory dynamics during numerical processing that is compatible with the notion of distinct linguistic processing of symbolic numbers and approximation of nonsymbolic numerical information.
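The band-power quantification described here can be sketched with Welch's method: estimate the power spectral density, then integrate it over the band of interest. The sampling rate, band edges, and data below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=int(fs))     # ~1 Hz resolution
    mask = (freqs >= lo) & (freqs <= hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])  # integrate over band

fs = 512.0
eeg = np.random.randn(int(4 * fs))   # 4 s single-channel stand-in
print(band_power(eeg, fs, 50, 100))  # gamma band (symbolic contrast)
print(band_power(eeg, fs, 12, 17))   # lower beta band (nonsymbolic contrast)
```

Comparing these band powers between hard (0.8 ratio) and easy (0.2 ratio) trials, separately over frontocentral and parietal electrodes, reproduces the logic of the dissociation tested here.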


2014, Vol 14 (3), pp. 287–295
Author(s): Milena Korostenskaja, Po-Ching Chen, Christine M. Salinas, Michael Westerveld, Peter Brunner, ...

Accurate language localization expands surgical treatment options for epilepsy patients and reduces the risk of postsurgical language deficits. Electrical cortical stimulation mapping (ESM) is considered the clinical gold standard for language localization. While ESM affords clinically valuable results, it can be poorly tolerated by children, requires active participation and compliance, carries a risk of inducing seizures, and is highly time-consuming and labor-intensive. Given these limitations, alternative and/or complementary functional localization methods, such as real-time analysis of electrocorticographic (ECoG) activity in the high gamma frequency band, are needed to precisely identify eloquent cortex in children. In this case report, the authors examined 1) the use of real-time functional mapping (RTFM) of high gamma activity derived from ECoG to guide surgery in a pediatric epilepsy patient and 2) the relationship of RTFM mapping results to postsurgical language outcomes. The authors found that RTFM demonstrated relatively high sensitivity (75%) and high specificity (90%) when compared with ESM in a “next-neighbor” analysis. While overlapping with ESM in the superior temporal region, RTFM showed a few other areas of activation related to expressive language function, areas that were eventually resected during the surgery. The authors speculate that this resection may be associated with the observed postsurgical expressive language deficits. With additional validation in more subjects, this finding suggests that surgical planning and the associated assessment of the risk/benefit ratio would benefit from information provided by RTFM mapping.
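The reported sensitivity and specificity follow from treating ESM-positive sites as ground truth and RTFM-positive sites as predictions. A toy sketch with fabricated per-site labels chosen only to reproduce the arithmetic:

```python
import numpy as np

esm  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # ESM labels
rtfm = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # RTFM labels

tp = np.sum((esm == 1) & (rtfm == 1))  # sites both methods call positive
fn = np.sum((esm == 1) & (rtfm == 0))  # ESM-positive sites RTFM misses
tn = np.sum((esm == 0) & (rtfm == 0))  # sites both methods call negative
fp = np.sum((esm == 0) & (rtfm == 1))  # RTFM-positive sites ESM rejects

print(f"sensitivity = {tp / (tp + fn):.2f}")  # 0.75 with these toy labels
print(f"specificity = {tn / (tn + fp):.2f}")  # 0.90 with these toy labels
```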

