The Phonetics-Phonology Relationship in the Neurobiology of Language

2017 ◽  
Author(s):  
Mirko Grimaldi

Abstract
In this work, I address the connection of phonetic structure with phonological representations. This classical issue is discussed in the light of recent neurophysiological data which – thanks to direct measurements of temporal and spatial brain activation – provide new avenues for investigating the biological substrate of human language. After describing the principal techniques and methods, I critically discuss magnetoencephalographic and electroencephalographic findings on speech processing based on event-related potentials and event-related oscillatory rhythms. The available data do not permit us to clearly disambiguate between neural evidence suggesting pure acoustic patterns and evidence indicating abstract phonological features. Starting from this evidence, which only on the surface represents a limitation, I develop a preliminary proposal in which discretization and phonological abstraction are the result of a continuous process that converts spectro-temporal (acoustic) states into neurophysiological states, such that some properties of the former undergo changes through interaction with the latter until a new equilibrium is reached. I assume that – at the end of the process – phonological segments (and the related categorical processes) take the form of continuous neural states represented by nested cortical oscillatory rhythms spatially distributed in the auditory cortex. Within this perspective, distinctive features (i.e., the relevant representational linguistic primitives) are represented by both spatially local and distributed neural selectivity. I suggest that this hypothesis can explain the hierarchical layout of an auditory cortex highly specialized in analyzing different aspects of the speech signal, as well as learning and memory processes during the acquisition of phonological systems.


Author(s):  
Vesa Putkinen ◽  
Mari Tervaniemi

Studies conducted during the last three decades have identified numerous differences between musicians and non-musicians in the neural correlates of sensory, motor, and higher-order cognitive functions. Research employing event-related potentials/fields has been particularly important in this framework. This chapter reviews the evidence that has emerged from these studies, with emphasis on longitudinal studies comparing functional brain development in children taking music lessons and those engaged in non-musical activities. The literature provides empirical and theoretical grounds for concluding that musical training enhances sound-encoding skills that are relevant for both music and speech processing. The question of whether the benefits of musical training transfer to more distantly related cognitive functions remains controversial, however. Finally, it appears likely that training-induced plasticity alone does not account for the differences in brain function between musicians and non-musicians, and that predisposing factors also play a role.



2004 ◽  
Vol 100 (3) ◽  
pp. 617-625 ◽  
Author(s):  
Wolfgang Heinke ◽  
Ramona Kenntner ◽  
Thomas C. Gunter ◽  
Daniela Sammler ◽  
Derk Olthoff ◽  
...  

Background: It is an open question whether cognitive processes of auditory perception that are mediated by functionally different cortices exhibit the same sensitivity to sedation. The auditory event-related potentials P1, mismatch negativity (MMN), and early right anterior negativity (ERAN) originate from different cortical areas and reflect different stages of auditory processing. The P1 originates mainly from the primary auditory cortex. The MMN is generated in or in the close vicinity of the primary auditory cortex but is also dependent on frontal sources. The ERAN mainly originates from frontal generators. The purpose of this study was to investigate the effects of increasing propofol sedation on different stages of auditory processing as reflected in the P1, MMN, and ERAN.
Methods: The P1, MMN, and ERAN were recorded preoperatively in 18 patients during four levels of anesthesia adjusted with target-controlled infusion: the awake state (target propofol concentration 0.0 µg/ml), light sedation (0.5 µg/ml), deep sedation (1.5 µg/ml), and unconsciousness (2.5-3.0 µg/ml). Simultaneously, depth of propofol anesthesia was assessed using the Bispectral Index.
Results: Propofol sedation resulted in a progressive decrease in amplitudes and an increase in latencies, with a similar pattern for the MMN and ERAN. The MMN and ERAN were elicited during sedation but were abolished during unconsciousness. In contrast, the amplitude of the P1 was unchanged by sedation but markedly decreased during unconsciousness.
Conclusion: The results indicate differential effects of propofol sedation on cognitive functions that involve mainly the auditory cortices and those that involve the frontal cortices.



2018 ◽  
Author(s):  
Anna Dora Manca ◽  
Francesco Di Russo ◽  
Francesco Sigona ◽  
Mirko Grimaldi

How the brain encodes the speech acoustic signal into phonological representations (distinctive features) is a fundamental question for the neurobiology of language. Whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Previous magnetoencephalographic and ECoG studies have failed to show hierarchical and asymmetric markers of speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetric indexes structuring vowel representation. We identified these with two N1 subcomponents: the typical N1 (N1a), peaking at 125-135 ms and localized bilaterally in the primary auditory cortex with a tangential distribution, and a late phase of the N1 (N1b), peaking at 145-155 ms and localized in the left superior temporal gyrus with a radial distribution. Notably, we showed that the processing of distinctive feature representations begins early in the primary auditory cortex and continues in the superior temporal gyrus along lateral-medial, anterior-posterior and inferior-superior gradients. It is the dynamical interface of both auditory cortices and the interaction effects between different distinctive features that generate the categorical representations of vowels.



2019 ◽  
Author(s):  
NC Higgins ◽  
DF Little ◽  
BD Yerkes ◽  
KM Nave ◽  
A Kuruvilla-Mathew ◽  
...  

Abstract
Understanding the neural underpinnings of conscious perception remains one of the primary challenges of cognitive neuroscience. Theories based mostly on studies of the visual system differ according to whether the neural activity giving rise to conscious perception occurs in modality-specific sensory cortex or in associative areas, such as the frontal and parietal cortices. Here, we search for modality-specific conscious processing in the auditory cortex using a bistable stream segregation paradigm that presents a constant stimulus without the confounding influence of physical changes to sound properties. ABA_ triplets (i.e., alternating low, A, and high, B, tones, and a _ gap) with a 700 ms silent response period after every third triplet were presented repeatedly, and human participants reported nearly equivalent proportions of 1-stream and 2-stream percepts. The pattern of behavioral responses was consistent with previous studies of visual and auditory bistable perception. The intermittent response paradigm has the benefit of evoking spontaneous perceptual switches that can be attributed to a well-defined stimulus event, enabling precise identification of the timing of perception-related neural events with event-related potentials (ERPs). Significantly more negative ERPs were observed for 2-stream than for 1-stream percepts, and for switches compared to non-switches, during the sustained potential (500-1000 ms post-stimulus onset). Further analyses revealed that the negativity associated with switching was independent of switch direction, suggesting that spontaneous changes in perception have a unique neural signature separate from the observation that 2-stream percepts evoke more negative ERPs than 1-stream percepts. Source analysis of the sustained potential showed activity associated with these differences originating in the anterior superior temporal gyrus, indicating involvement of the ventral auditory pathway that is important for processing auditory objects.

Significance Statement
When presented with ambiguous stimuli, the auditory system takes the available information and attempts to construct a useful percept. When multiple percepts are possible from the same stimuli, however, perception fluctuates back and forth between alternatives in a bistable manner. Here, we examine spontaneous switches in perception using a bistable auditory streaming paradigm with a novel intermittent stimulus presentation, and measure sustained electrical activity in anterior portions of auditory cortex using event-related potentials. Analyses revealed enhanced sustained cortical activity when perceiving 2 streams compared to 1 stream, and when a switch occurred, regardless of switch direction. These results indicate that neural responses in auditory cortex reflect both the content of perception and neural dynamics related to switches in perception.



2019 ◽  
Author(s):  
Stefania Ferraro ◽  
Markus J. Van Ackeren ◽  
Roberto Mai ◽  
Laura Tassi ◽  
Francesco Cardinale ◽  
...  

Abstract
Unequivocally demonstrating the presence of multisensory signals at the earliest stages of cortical processing remains challenging in humans. In our study, we relied on the unique spatio-temporal resolution provided by intracranial stereotactic electroencephalographic (SEEG) recordings in patients with drug-resistant epilepsy to characterize the signal extracted from early visual (calcarine and pericalcarine) and auditory (Heschl’s gyrus and planum temporale) regions during a simple audio-visual oddball task. We provide evidence that both cross-modal responses (visual responses in auditory cortex, or the reverse) and multisensory processing (alteration of the unimodal responses during bimodal stimulation) can be observed in intracranial event-related potentials (iERPs) and in power modulations of oscillatory activity at different temporal scales within the first 150 ms after stimulus onset. The temporal profiles of the iERPs are compatible with the hypothesis that multisensory integration (MSI) occurs by means of direct pathways linking early visual and auditory regions. Our data indicate, moreover, that MSI mainly relies on modulations of the low-frequency bands (foremost the theta band in the auditory cortex and the alpha band in the visual cortex), suggesting the involvement of feedback pathways between the two sensory regions. Remarkably, we also observed high-gamma power modulations by sounds in the early visual cortex, suggesting the presence of neuronal populations involved in auditory processing in the calcarine and pericalcarine region in humans.



1990 ◽  
Vol 2 (4) ◽  
pp. 344-357 ◽  
Author(s):  
Mikko Sams ◽  
Reijo Aulanko ◽  
Olli Aaltonen ◽  
Risto Näätänen

Event-related potentials (ERPs) to synthetic consonant–vowel syllables were recorded. Infrequent changes in such a syllable elicited a "mismatch negativity" as well as an enhanced N100 component of the ERP, even when subjects did not pay attention to the stimuli. Both components are probably generated in the supratemporal auditory cortex, suggesting that these areas contain neural networks that are automatically activated by speech-specific auditory stimulus features, such as formant transitions.



2013 ◽  
Vol 109 (8) ◽  
pp. 2086-2096 ◽  
Author(s):  
Björn Herrmann ◽  
Molly J. Henry ◽  
Jonas Obleser

In auditory cortex, activation and subsequent adaptation are strongest in regions responding best to a stimulated tone frequency and weaker in regions responding best to other frequencies. Previous attempts to characterize the spread of neural adaptation in humans investigated the auditory cortex N1 component of the event-related potential. Importantly, however, more recent studies in animals show that neural response properties are not independent of the stimulation context. To link these findings in animals to human scalp potentials, we investigated whether contextual factors of the acoustic stimulation, namely spectral variance, affect the spread of neural adaptation. Electroencephalograms were recorded while human participants listened to random tone sequences varying in spectral variance (narrow vs. wide). Spread of adaptation was investigated by modeling single-trial neural adaptation and subsequent recovery based on the spectro-temporal stimulation history. Frequency-specific neural responses were largest on the N1 component, and the modeled neural adaptation indices were strongly predictive of trial-by-trial amplitude variations. Yet the spread of adaptation varied depending on the spectral variance of the stimulation, such that adaptation spread was broadened for tone sequences with wide spectral variance. Thus the present findings reveal context-dependent auditory cortex adaptation and point toward a flexibly adjusting auditory system that changes its response properties with the spectral requirements of the acoustic environment.
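The single-trial adaptation modeling described above can be illustrated with a toy version: each tone's adaptation index sums contributions from all preceding tones, weighted by spectral proximity (a Gaussian in log-frequency distance) and temporal recency (exponential recovery). The time constant, spectral spread, and tone sequences below are assumptions for illustration, not the parameters fitted in the study.

```python
import numpy as np

def adaptation_indices(times, freqs, tau=1.0, sigma=0.5):
    """For each tone, compute an adaptation index as the sum of contributions
    from preceding tones, weighted by a Gaussian in log-frequency distance
    (spread sigma, in octaves) and exponential recovery over time (constant
    tau, in seconds). A toy sketch of the kind of spectro-temporal history
    model the abstract describes."""
    log_f = np.log2(freqs)
    idx = np.zeros(len(times))
    for i in range(1, len(times)):
        dt = times[i] - times[:i]        # time elapsed since each earlier tone
        df = log_f[i] - log_f[:i]        # spectral distance in octaves
        idx[i] = np.sum(np.exp(-dt / tau) * np.exp(-df ** 2 / (2 * sigma ** 2)))
    return idx

# Narrow sequences (frequencies clustered near 440 Hz) should accumulate more
# adaptation than wide sequences spanning several octaves.
rng = np.random.default_rng(0)
t = np.arange(20) * 0.5                                  # one tone every 500 ms
narrow = 440 * 2 ** rng.uniform(-0.25, 0.25, 20)         # half-octave range
wide = 440 * 2 ** rng.uniform(-2.0, 2.0, 20)             # four-octave range
```

Under such a model, a broader effective sigma for wide-variance sequences would reproduce the broadened adaptation spread the study reports.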



2021 ◽  
Author(s):  
Kelsey Mankel ◽  
Utsav Shrestha ◽  
Aaryani Tipirneni-Sajja ◽  
Gavin Bidelman

Categorizing sounds into meaningful groups helps listeners more efficiently process the auditory scene and is a foundational skill for speech perception and language development. Yet how auditory categories develop in the brain through learning, particularly for nonspeech sounds, is not well understood. Here, we asked musically naïve listeners to complete a brief (~20 min) training session in which they learned to identify sounds from a nonspeech continuum (minor-major 3rd musical intervals). We used multichannel EEG to track behaviorally relevant neuroplastic changes in the auditory event-related potentials (ERPs) from pre- to post-training. To rule out mere exposure-induced changes, neural effects were evaluated against a control group of 14 nonmusicians who did not undergo training. We also compared individual categorization performance with structural volumetrics of bilateral primary auditory cortex (PAC) from MRI to evaluate neuroanatomical substrates of learning. Behavioral performance revealed steeper (i.e., more categorical) identification functions in the posttest that correlated with better training accuracy. At the neural level, improvement in learners' behavioral identification was characterized by smaller P2 amplitudes at posttest, particularly over the right hemisphere. Critically, learning-related changes in the ERPs were not observed in control listeners, ruling out mere exposure effects. Learners also showed smaller and thinner PAC bilaterally, indicating that superior categorization was associated with structural differences in primary auditory brain regions. Collectively, our data suggest that successful auditory categorical learning of nonspeech sounds is characterized by short-term functional changes (i.e., greater post-training efficiency) in sensory coding processes superimposed on preexisting structural differences in bilateral auditory cortex.
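The "steeper identification function" measure can be made concrete with a small sketch: fit a two-parameter logistic to the proportion of one category label reported at each continuum step, and compare the fitted slope before and after training. The continuum steps, slope values, and grid-search fit below are illustrative assumptions, not the paper's analysis pipeline.

```python
import numpy as np

def logistic(x, x0, k):
    """Psychometric function: boundary location x0, slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def fit_slope(levels, p_resp):
    """Fit the logistic by a coarse grid search over boundary and slope,
    minimizing squared error, and return the slope k. Steeper k means a
    more categorical (step-like) identification function."""
    x0s = np.linspace(levels.min(), levels.max(), 41)
    ks = np.linspace(0.1, 20.0, 200)
    best_k, best_err = ks[0], np.inf
    for x0 in x0s:
        for k in ks:
            err = np.sum((logistic(levels, x0, k) - p_resp) ** 2)
            if err < best_err:
                best_k, best_err = k, err
    return best_k

levels = np.linspace(0, 1, 7)        # 7-step continuum (e.g., minor -> major 3rd)
pre = logistic(levels, 0.5, 3.0)     # shallow, graded labeling before training
post = logistic(levels, 0.5, 10.0)   # steeper, more categorical after training
```

In practice a maximum-likelihood fit (e.g., with a dedicated psychometrics toolbox) would replace the grid search, but the slope comparison is the same.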



2000 ◽  
Vol 12 (4) ◽  
pp. 635-647 ◽  
Author(s):  
G. Dehaene-Lambertz ◽  
E. Dupoux ◽  
A. Gout

It is well known that speech perception is deeply affected by the phoneme categories of the native language. Recent studies have found that phonotactics, i.e., constraints on the co-occurrence of phonemes within words, also have a considerable impact on speech perception routines. For example, Japanese does not allow (non-nasal) coda consonants. When presented with stimuli that violate this constraint, as in /ebzo/, Japanese adults report that they hear a /u/ between the consonants, i.e., /ebuzo/. We examine this phenomenon using event-related potentials (ERPs) in French and Japanese participants in order to study how and when the phonotactic properties of the native language affect speech perception routines. Trials with four similar precursor stimuli were presented, followed by a test stimulus that was either identical or different depending on the presence or absence of an epenthetic vowel /u/ between two consonants (e.g., “ebuzo ebuzo ebuzo—ebzo”). Behavioral results confirm that Japanese participants, unlike French participants, are unable to discriminate between identical and deviant trials. In the ERPs, three mismatch responses were recorded in French participants. These responses were either absent or significantly weaker in Japanese participants. In particular, a component similar in latency and topography to the mismatch negativity (MMN) was recorded for French, but not for Japanese, participants. Our results suggest that the impact of phonotactics takes place early in speech processing and support models of speech perception which postulate that the input signal is directly parsed into the native-language phonological format. We speculate that such a fast computation of a phonological representation should facilitate lexical access, especially in degraded conditions.



2021 ◽  
Vol 12 ◽  
Author(s):  
Yubin Zhang ◽  
Chotiga Pattamadilok ◽  
Dustin Kai-Yan Lau ◽  
Mehdi Bakhtiar ◽  
Long-Ying Yim ◽  
...  

The acquisition of an alphabetic orthography transforms speech processing in the human brain. Behavioral evidence shows that phonological awareness, as assessed by meta-phonological tasks such as phoneme judgment, is enhanced by alphabetic literacy acquisition. The current study investigates the time course of the neuro-cognitive operations underlying this enhancement as revealed by event-related potentials (ERPs). Chinese readers with and without proficiency in Jyutping, a Romanization system for Cantonese, were recruited for an auditory onset phoneme judgment task; their behavioral responses and the elicited ERPs were examined. Proficient readers of Jyutping achieved higher response accuracy and exhibited more negative-going ERPs in three early time windows corresponding to the P1, N1, and P2 components. The phonological mismatch negativity component exhibited sensitivity to both onset and rhyme mismatch in the speech stimuli, but it was not modulated by alphabetic literacy skills. The sustained negativity in the P1-N1-P2 time windows is interpreted as reflecting enhanced phonetic/phonological processing, or attentional/awareness modulation, associated with alphabetic literacy and phonological awareness skills.


