All in thirty milliseconds: EEG evidence of hierarchical and asymmetric phonological encoding of vowels

2018 ◽  
Author(s):  
Anna Dora Manca ◽  
Francesco Di Russo ◽  
Francesco Sigona ◽  
Mirko Grimaldi

How the brain encodes the acoustic speech signal into phonological representations (distinctive features) is a fundamental question for the neurobiology of language. Whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Magnetoencephalographic and ECoG studies have previously failed to show hierarchical and asymmetric signatures of speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetric indexes structuring vowel representations. We identified these with two N1 subcomponents: the typical N1 (N1a), peaking at 125-135 ms and localized bilaterally in the primary auditory cortex with a tangential distribution, and a late phase of the N1 (N1b), peaking at 145-155 ms and localized in the left superior temporal gyrus with a radial distribution. Notably, we showed that the processing of distinctive feature representations begins early in the primary auditory cortex and continues in the superior temporal gyrus along lateral-medial, anterior-posterior, and inferior-superior gradients. It is the dynamical interface of both auditory cortices and the interaction effects between different distinctive features that generate the categorical representations of vowels.
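
As a concrete illustration of the peak analysis behind the two subcomponents, the sketch below averages epoched EEG into an ERP and picks the most negative deflection inside each reported latency window. It is a minimal NumPy sketch on simulated single-channel data; the array shapes, sampling rate, and window bounds are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: locate N1a/N1b-like peaks in a trial-averaged ERP.
# All data here are simulated; only the latency windows come from the abstract.
import numpy as np

def n1_peak(erp, times, t_min, t_max):
    """Return latency (s) and amplitude of the most negative point in a window."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.flatnonzero(mask)[np.argmin(erp[mask])]
    return times[idx], erp[idx]

sfreq = 1000.0                                  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.4, 1.0 / sfreq)       # epoch window: -100 to 400 ms
rng = np.random.default_rng(0)
# Simulated single-channel epochs: noise plus two negative Gaussian bumps
epochs = rng.normal(0.0, 1e-6, (200, times.size))
epochs += -4e-6 * np.exp(-((times - 0.130) / 0.010) ** 2)  # "N1a"-like, ~130 ms
epochs += -3e-6 * np.exp(-((times - 0.150) / 0.010) ** 2)  # "N1b"-like, ~150 ms

erp = epochs.mean(axis=0)                       # trial-averaged ERP
for name, lo, hi in [("N1a", 0.125, 0.135), ("N1b", 0.145, 0.155)]:
    lat, amp = n1_peak(erp, times, lo, hi)
    print(f"{name}: {lat * 1e3:.0f} ms, {amp * 1e6:.2f} uV")
```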

2002 ◽  
Vol 88 (1) ◽  
pp. 540-543 ◽  
Author(s):  
John J. Foxe ◽  
Glenn R. Wylie ◽  
Antigona Martinez ◽  
Charles E. Schroeder ◽  
Daniel C. Javitt ◽  
...  

Using high-field (3 Tesla) functional magnetic resonance imaging (fMRI), we demonstrate that auditory and somatosensory inputs converge in a subregion of human auditory cortex along the superior temporal gyrus. Further, simultaneous stimulation in both sensory modalities resulted in activity exceeding that predicted by summing the responses to the unisensory inputs, thereby showing multisensory integration in this convergence region. Recently, intracranial recordings in macaque monkeys have shown similar auditory-somatosensory convergence in a subregion of auditory cortex directly caudomedial to primary auditory cortex (area CM). The multisensory region identified in the present investigation may be the human homologue of CM. Our finding of auditory-somatosensory convergence in early auditory cortices contributes to mounting evidence for multisensory integration early in the cortical processing hierarchy, in brain regions that were previously assumed to be unisensory.
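
The superadditivity criterion used here, a bimodal response exceeding the sum of the unimodal responses, can be written down compactly. Below is a hedged Python sketch on simulated per-voxel beta estimates; the subject count, threshold, and variable names are assumptions for illustration, not the study's actual analysis.

```python
# Sketch of the superadditivity test: flag voxels where AS > A + S.
# Betas are simulated; the statistical details are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_vox = 12, 1000
A = rng.normal(1.0, 0.3, (n_subj, n_vox))            # auditory-only betas
S = rng.normal(0.8, 0.3, (n_subj, n_vox))            # somatosensory-only betas
AS = A + S + rng.normal(0.2, 0.3, (n_subj, n_vox))   # bimodal betas

# One-sided paired test of the bimodal response against the additive prediction
diff = AS - (A + S)
t, p = stats.ttest_1samp(diff, 0.0, axis=0)
superadditive = (t > 0) & (p / 2 < 0.001)            # uncorrected, illustrative
print(f"{superadditive.sum()} of {n_vox} voxels exceed the additive model")
```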


2021 ◽  
Vol 15 ◽  
Author(s):  
Agnès Trébuchon ◽  
F.-Xavier Alario ◽  
Catherine Liégeois-Chauvel

The posterior part of the superior temporal gyrus (STG) has long been known to be a crucial hub for auditory and language processing, at the crossroad of the functionally defined ventral and dorsal pathways. Anatomical studies have shown that this “auditory cortex” is composed of several cytoarchitectonic areas whose limits do not consistently match macro-anatomical landmarks like gyral and sulcal borders. The only method to record and accurately distinguish neuronal activity from the different auditory subfields of the primary auditory cortex, located at the tip of Heschl’s gyrus and buried deep in the Sylvian fissure, is to use stereotaxically implanted depth electrodes (stereo-EEG) during the pre-surgical evaluation of patients with epilepsy. In this perspective, we focus on how anatomo-functional delineation in Heschl’s gyrus (HG), the Planum Temporale (PT), the posterior part of the STG anterior to HG, the posterior superior temporal sulcus (STS), and the region at the parietal-temporal boundary commonly labeled “SPT” can be achieved using data from electrical cortical stimulation combined with electrophysiological recordings during listening to pure tones and syllables. We show the differences in functional roles between the primary and non-primary auditory areas in the left and the right hemispheres. We discuss how these findings help in understanding the auditory semiology of certain epileptic seizures and, more generally, the neural substrate of hemispheric specialization for language.


2008 ◽  
Vol 20 (3) ◽  
pp. 541-552 ◽  
Author(s):  
Eveline Geiser ◽  
Tino Zaehle ◽  
Lutz Jancke ◽  
Martin Meyer

The present study investigates the neural correlates of rhythm processing in speech perception. German pseudosentences spoken with an exaggerated (isochronous) or a conversational (nonisochronous) rhythm were compared in an auditory functional magnetic resonance imaging experiment. The subjects had to perform either a rhythm task (explicit rhythm processing) or a prosody task (implicit rhythm processing). The study revealed bilateral activation in the supplementary motor area (SMA), extending into the cingulate gyrus, and in the insulae, extending into the right basal ganglia (neostriatum), as well as activity in the right inferior frontal gyrus (IFG), related to the performance of the rhythm task. A direct contrast between isochronous and nonisochronous sentences revealed differences in the lateralization of activation for isochronous processing as a function of the explicit and implicit tasks. Explicit processing revealed activation in the right posterior superior temporal gyrus (pSTG), the right supramarginal gyrus, and the right parietal operculum. Implicit processing showed activation in the left supramarginal gyrus, the left pSTG, and the left parietal operculum. The present results indicate a function of the SMA and the insula beyond motor timing and argue for a role of these brain areas in the perception of acoustic temporal intervals. Second, the data point to a specific task-related function of the right IFG in the processing of accent patterns. Finally, the data support the assumption that the right secondary auditory cortex is involved in the explicit perception of auditory suprasegmental cues and, moreover, that its activity can be modulated by top-down processing mechanisms.
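
The lateralization shift between explicit and implicit tasks is the kind of effect often summarized with a laterality index, LI = (L - R) / (L + R), computed over homologous regions. The sketch below uses made-up activation values purely to show the convention; it is not the authors' analysis, and the ROI values are hypothetical.

```python
# Sketch of a laterality index over homologous ROIs (illustrative values).
def laterality_index(left, right, eps=1e-12):
    """Positive LI -> left-lateralized; negative -> right-lateralized."""
    return (left - right) / (left + right + eps)

# Hypothetical mean contrast values in left/right pSTG for the two tasks
explicit = laterality_index(left=0.21, right=0.38)   # rhythm task
implicit = laterality_index(left=0.35, right=0.19)   # prosody task
print(f"explicit task LI = {explicit:+.2f}, implicit task LI = {implicit:+.2f}")
```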


2001 ◽  
Vol 86 (5) ◽  
pp. 2616-2620 ◽  
Author(s):  
Xiaoqin Wang ◽  
Siddhartha C. Kadia

A number of studies in various species have demonstrated that natural vocalizations generally produce stronger neural responses than do their time-reversed versions. The majority of neurons in the primary auditory cortex (A1) of marmoset monkeys respond more strongly to natural marmoset vocalizations than to time-reversed vocalizations. However, it was unclear whether such differences in neural responses were simply due to the difference between the acoustic structures of natural and time-reversed vocalizations or whether they also resulted from the difference in behavioral relevance between the two types of stimuli. To address this issue, we compared neural responses to natural and time-reversed marmoset twitter calls in the A1 of cats with those obtained from the A1 of marmosets using identical stimuli. We found that the preference for natural marmoset twitter calls demonstrated in marmoset A1 was absent in cat A1. While both cortices responded approximately equally to time-reversed twitter calls, marmoset A1 responded much more strongly to natural twitter calls than did cat A1. This differential representation of marmoset vocalizations in the two cortices suggests that experience-dependent and possibly species-specific mechanisms are involved in the cortical processing of communication sounds.
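
The natural-versus-reversed comparison lends itself to a simple per-neuron preference index, PI = (Rn - Rr) / (Rn + Rr), where Rn and Rr are mean responses to the natural and time-reversed call. The sketch below simulates spike counts; this index definition is a common convention assumed here, not necessarily the exact measure the authors used.

```python
# Sketch: time-reverse a stimulus and compute a natural-call preference index.
# Waveform and spike counts are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(2)
call = rng.normal(size=44100)          # stand-in for a twitter-call waveform
reversed_call = call[::-1]             # time-reversed version of the same stimulus

# Simulated trial spike counts for one "marmoset-like" neuron
rate_natural = rng.poisson(12, size=20)   # spikes per trial, natural call
rate_reversed = rng.poisson(7, size=20)   # spikes per trial, reversed call

rn, rr = rate_natural.mean(), rate_reversed.mean()
pi = (rn - rr) / (rn + rr)
print(f"preference index = {pi:+.2f} (positive favours the natural call)")
```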


2020 ◽  
Vol 32 (5) ◽  
pp. 877-888
Author(s):  
Maxime Niesen ◽  
Marc Vander Ghinst ◽  
Mathieu Bourguignon ◽  
Vincent Wens ◽  
Julie Bertels ◽  
...  

Discrimination of words from nonspeech sounds is essential in communication. Still, how selective attention influences this early step of speech processing remains elusive. To answer that question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at the sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in the supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus. Sources at 2.17 Hz were reconstructed in the supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in the verbal attention condition than in the nonverbal attention condition. This study demonstrates speech-sensitive responses in primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG. This area therefore appears to play a crucial role in selective verbal attentional processes at this early step of speech processing.
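
The frequency-tagging logic is worth spelling out: with stimuli at 2.17 Hz and one word in every four stimuli, words are tagged at 2.17 / 4 ≈ 0.54 Hz, so word-sensitive steady-state responses appear as a spectral peak at 0.54 Hz on top of the 2.17 Hz base-rate peak. Below is a minimal sketch on a simulated sensor signal; the sampling rate, duration, and z-scoring against neighboring bins are assumptions, though the latter is a common steady-state-response statistic.

```python
# Sketch: recover tagging-frequency peaks from a simulated "sensor" signal.
import numpy as np

sfreq, dur = 250.0, 120.0                       # Hz, seconds (assumed)
t = np.arange(0, dur, 1.0 / sfreq)
rng = np.random.default_rng(3)
signal = (np.sin(2 * np.pi * 2.17 * t)          # base stimulation rate
          + 0.5 * np.sin(2 * np.pi * 0.54 * t)  # word-tagging rate
          + rng.normal(0, 2.0, t.size))         # sensor noise

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / sfreq)
for f0 in (0.54, 2.17):
    k = np.argmin(np.abs(freqs - f0))
    # z-score the peak bin against its neighbours (excluding adjacent bins)
    neigh = np.r_[spectrum[k - 11:k - 1], spectrum[k + 2:k + 12]]
    z = (spectrum[k] - neigh.mean()) / neigh.std()
    print(f"{f0} Hz: amplitude z = {z:.1f}")
```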


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Esti Blanco-Elorrieta ◽  
Laura Gwilliams ◽  
Alec Marantz ◽  
Liina Pylkkänen

Speech is a complex and ambiguous acoustic signal that varies significantly within and across speakers. Despite the processing challenge that such variability poses, humans adapt to systematic variations in pronunciation rapidly. The goal of this study is to uncover the neurobiological bases of the attunement process that enables such fluent comprehension. Twenty-four native English participants listened to words spoken by a “canonical” American speaker and two non-canonical speakers, and performed a word-picture matching task, while magnetoencephalography was recorded. Non-canonical speech was created by including systematic phonological substitutions within the word (e.g. [s] → [sh]). Activity in the auditory cortex (superior temporal gyrus) was greater in response to substituted phonemes, and, critically, this was not attenuated by exposure. By contrast, prefrontal regions showed an interaction between the presence of a substitution and the amount of exposure: activity decreased for canonical speech over time, whereas responses to non-canonical speech remained consistently elevated. Granger causality analyses further revealed that prefrontal responses serve to modulate activity in auditory regions, suggesting the recruitment of top-down processing to decode non-canonical pronunciations. In sum, our results suggest that the behavioural deficit in processing mispronounced phonemes may be due to a disruption of the typical exchange of information between the prefrontal and auditory cortices as observed for canonical speech.
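
The directed-connectivity claim rests on a test of whether past prefrontal activity improves prediction of auditory-cortex activity beyond that region's own past. Below is a hedged sketch using the bivariate Granger test from statsmodels on simulated time series; it stands in for, and is much simpler than, whatever source-space implementation the authors used.

```python
# Sketch: bivariate Granger causality on simulated "prefrontal" and
# "auditory" time series, where the auditory signal lags the prefrontal one.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 2000
prefrontal = rng.normal(size=n)
auditory = np.zeros(n)
for i in range(2, n):   # auditory depends on prefrontal two samples back
    auditory[i] = 0.4 * auditory[i - 1] + 0.5 * prefrontal[i - 2] + rng.normal()

# Column order: [effect, cause]; tests whether column 2 Granger-causes column 1.
# grangercausalitytests prints a summary for each lag up to maxlag.
data = np.column_stack([auditory, prefrontal])
res = grangercausalitytests(data, maxlag=4)
p = res[4][0]["ssr_ftest"][1]   # F-test p-value at lag 4
print(f"prefrontal -> auditory, p = {p:.3g}")
```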


2020 ◽  
Author(s):  
L Feigin ◽  
G Tasaka ◽  
I Maor ◽  
A Mizrahi

The mouse auditory cortex comprises several auditory fields spanning the dorso-ventral axis of the temporal lobe. The most ventral auditory field is the temporal association cortex (TeA), which remains largely unstudied. Using Neuropixels probes, we simultaneously recorded from the primary auditory cortex (AUDp), the secondary auditory cortex (AUDv), and TeA, characterizing neuronal responses to pure tones and frequency-modulated (FM) sweeps in awake, head-restrained mice. Compared to the primary and secondary auditory cortices, single-unit responses to pure tones in TeA were sparser, delayed, and prolonged. Responses to FMs were also sparser. Population analysis showed that the sparser responses in TeA render it less sensitive to pure tones yet more sensitive to FMs. When responses to pure tones were characterized under anesthesia, the distinct signature of TeA changed considerably relative to awake mice, implying that responses in TeA are strongly modulated by non-feedforward connections. Together with the known connectivity profile of TeA, these findings suggest that the sparse representation of sounds in TeA supports selectivity to higher-order features of sounds and more complex auditory computations.
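
"Sparser" responses can be made precise; one widely used measure is the Vinje-Gallant sparseness index over a unit's responses across the tone set, where values near 1 mean that only a few tones drive the neuron. The sketch below applies it to simulated firing rates; this particular index is an assumption about how sparseness might be quantified, not necessarily the authors' measure.

```python
# Sketch: lifetime sparseness for a broadly vs. narrowly tuned "neuron".
import numpy as np

def sparseness(rates):
    """Vinje & Gallant (2000) lifetime sparseness, in [0, 1]."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1 - (r.mean() ** 2) / np.mean(r ** 2)) / (1 - 1 / n)

rng = np.random.default_rng(5)
freqs = np.logspace(np.log10(2e3), np.log10(64e3), 30)   # 30 pure tones (Hz)
broad = rng.poisson(10, freqs.size)                      # AUDp-like: most tones drive it
narrow = np.where(rng.random(freqs.size) < 0.1,          # TeA-like: few tones drive it
                  rng.poisson(15, freqs.size), rng.poisson(1, freqs.size))

print(f"broad  tuning sparseness = {sparseness(broad):.2f}")
print(f"narrow tuning sparseness = {sparseness(narrow):.2f}")
```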


Author(s):  
Liberty S. Hamilton ◽  
Yulia Oganian ◽  
Edward F. Chang

Speech perception involves the extraction of acoustic and phonological features from the speech signal. How those features map across the human auditory cortex is unknown. Complementary to noninvasive imaging, the high spatial and temporal resolution of intracranial recordings has greatly contributed to recent advances in our understanding. However, these approaches are typically limited by piecemeal sampling of the expansive human temporal-lobe auditory cortex. Here, we present a functional characterization of local cortical encoding throughout all major regions of the primary and non-primary human auditory cortex. We overcame previous limitations by using rare direct recordings from the surface of the temporal plane after surgical microdissection of the deep Sylvian fissure between the frontal and temporal lobes. We recorded neural responses using simultaneous high-density direct recordings over the left temporal plane and the lateral superior temporal gyrus (STG), while participants listened to natural speech sentences and pure-tone stimuli. We found an anatomical separation of simple spectral feature tuning, including tuning for pure tones and absolute pitch, on the superior surface of the temporal plane, and complex tuning for phonological features, relative pitch, and speech amplitude modulations on the lateral STG. Broadband onset responses are unique to the posterior STG and are not found elsewhere in the auditory cortices. This onset region is functionally distinct from the rest of the STG, with latencies similar to primary auditory areas. These findings reveal a new, detailed functional organization of response selectivity to acoustic and phonological features in speech throughout the human auditory cortex.
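
As an illustration of the simple spectral tuning reported on the temporal plane, the sketch below estimates each electrode's best frequency as the pure tone evoking the largest mean response. Electrode responses are simulated with Gaussian tuning on log frequency; the tone set and the tuning model are assumptions, not the authors' method.

```python
# Sketch: best-frequency estimation from simulated pure-tone responses.
import numpy as np

rng = np.random.default_rng(6)
tone_freqs = np.logspace(np.log10(100), np.log10(8000), 40)  # tone set (Hz)
n_elec = 5
# Simulated mean response per electrode per tone: Gaussian tuning in log frequency
best = rng.uniform(np.log10(100), np.log10(8000), n_elec)
resp = np.exp(-((np.log10(tone_freqs)[None, :] - best[:, None]) / 0.2) ** 2)
resp += rng.normal(0, 0.05, resp.shape)                      # measurement noise

best_freq = tone_freqs[resp.argmax(axis=1)]   # best frequency per electrode
for e, bf in enumerate(best_freq):
    print(f"electrode {e}: best frequency ~ {bf:.0f} Hz")
```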


2017 ◽  
Author(s):  
Mirko Grimaldi

In this work, I address the connection of phonetic structure with phonological representations. This classical issue is discussed in the light of recent neurophysiological data which – thanks to direct measurements of temporal and spatial brain activation – provide new avenues for investigating the biological substrate of human language. After describing the principal techniques and methods, I critically discuss magnetoencephalographic and electroencephalographic findings on speech processing based on event-related potentials and event-related oscillatory rhythms. The available data do not permit us to clearly disambiguate between neural evidence suggesting pure acoustic patterns and evidence indicating abstract phonological features. Starting from this evidence, which only on the surface represents a limit, I develop a preliminary proposal in which discretization and phonological abstraction are the result of a continuous process that converts spectro-temporal (acoustic) states into neurophysiological states, such that some properties of the former undergo changes interacting with the latter until a new equilibrium is reached. I assume that – at the end of the process – phonological segments (and the related categorical processes) take the form of continuous neural states represented by nested cortical oscillatory rhythms spatially distributed in the auditory cortex. Within this perspective, distinctive features (i.e., the relevant representational linguistic primitives) are represented by both spatially local and distributed neural selectivity. I suggest that this hypothesis can explain the hierarchical layout of an auditory cortex highly specialized in analyzing different aspects of the speech signal, as well as learning and memory processes during the acquisition of phonological systems.
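
The "nested cortical oscillatory rhythms" invoked here are usually operationalized as phase-amplitude coupling, e.g. a slow rhythm's phase modulating a faster rhythm's amplitude. Below is a hedged sketch computing the mean-vector-length coupling measure (one common choice, after Canolty et al., 2006) on a simulated theta-gamma coupled signal; the filter bands and parameters are illustrative assumptions.

```python
# Sketch: phase-amplitude coupling (theta phase x gamma amplitude) via
# the mean vector length, on a simulated coupled signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs, dur = 500.0, 60.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(7)
theta = np.sin(2 * np.pi * 6 * t)                        # 6 Hz carrier
gamma = 0.5 * (1 + theta) * np.sin(2 * np.pi * 60 * t)   # gamma amplitude rides theta
signal = theta + gamma + rng.normal(0, 0.5, t.size)

phase = np.angle(hilbert(bandpass(signal, 4, 8, fs)))    # theta phase
amp = np.abs(hilbert(bandpass(signal, 50, 70, fs)))      # gamma amplitude envelope
mvl = np.abs(np.mean(amp * np.exp(1j * phase)))          # mean vector length
print(f"phase-amplitude coupling (MVL) = {mvl:.3f}")
```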


2011 ◽  
Vol 204-210 ◽  
pp. 5-10
Author(s):  
Qiang Li ◽  
Suang Xia ◽  
Fei Zhao

We used functional magnetic resonance imaging (fMRI) to observe changes in the functional cortex of prelingual deaf signers (PDS) using Chinese Sign Language (CSL). Results: While observing and imitating CSL, the activated areas in all groups included the bilateral middle frontal gyrus, middle temporal gyrus, superior parietal lobule, cuneate lobe, fusiform gyrus, and lingual gyrus. Activation of the bilateral inferior frontal gyrus was found in groups I, III, and IV, but not in group II. Activation of the bilateral superior temporal gyrus and inferior parietal lobule was found in groups I and III only. The activated volumes of the bilateral inferior frontal gyrus in group I were greater than those in groups III and IV, and the activated volumes of the bilateral superior temporal gyrus in group I were greater than those in group III. Conclusion: The cortex of PDS undergoes reorganization after hearing loss and the learning of CSL. Activation of the linguistic cortex can be found while PDS observe and imitate CSL. In the absence of auditory input, the secondary auditory cortex and association areas take part in processing visual language, whereas the primary auditory cortex does not participate in this reorganization. Additionally, the visual cortex of PDS is more sensitive than that of normal-hearing individuals.

