Second Language Speech Perception and the Brain

Author(s): Yue Wang
2013 ◽ Vol 18 (2) ◽ pp. 130-144
Author(s): Kees de Bot ◽ Carol Jaensch

While research on third language (L3) acquisition and multilingualism has recently shown remarkable growth, the fundamental question of what makes trilingualism special compared to bilingualism, and indeed monolingualism, continues to be evaded. In this contribution we consider whether there is such a thing as a true monolingual, and whether there is a difference between dialects, styles, registers and languages. While linguistic and psycholinguistic studies suggest differences in the processing of a third language compared to a first or second language, neurolinguistic research has shown that generally the same areas of the brain are activated during language use in proficient multilinguals. It is concluded that while traditional linguistic and psycholinguistic perspectives provide grounds to differentiate monolingual, bilingual and multilingual processing, a more dynamic perspective on language processing, in which development over time is the core issue, leads to a questioning of the notion of languages as separate entities in the brain.


2019 ◽ pp. 105-112
Author(s): Risto Näätänen ◽ Teija Kujala ◽ Gregory Light

This chapter shows that the mismatch negativity (MMN) and its magnetoencephalographic (MEG) equivalent, the MMNm, are sensitive indices of aging-related perceptual and cognitive decline. Importantly, these age-related neural changes are associated with a decrease in general brain plasticity, i.e. in the brain's ability to form and maintain sensory-memory traces, a necessary basis for veridical perception and appropriate cognitive brain function. The MMN/MMNm response to changes in stimulus duration is particularly affected by aging, suggesting that temporal processing is especially vulnerable to brain aging and accounting, for instance, for a large part of the speech-perception difficulties of older adults beyond age-related peripheral hearing loss.


2020 ◽ Vol 6 (30) ◽ pp. eaba7830
Author(s): Laurianne Cabrera ◽ Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
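The stimulus manipulation described above hinges on separating a speech signal's amplitude modulation (AM) from its faster components and retaining only slow modulations below about 8 Hz. The sketch below is a minimal illustration of that kind of envelope extraction, not the authors' actual vocoder pipeline; the function name `slow_amplitude_envelope`, the 16 kHz sampling rate, and the toy modulated carrier are all illustrative assumptions.

```python
# Minimal sketch: extracting a slow (<8 Hz) amplitude envelope from a signal.
# Illustrative only; not the stimulus-processing chain used in the study.
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt


def slow_amplitude_envelope(signal, fs, cutoff_hz=8.0, order=4):
    """Return the amplitude envelope of `signal`, low-pass filtered at `cutoff_hz`."""
    envelope = np.abs(hilbert(signal))                    # broadband AM envelope
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, envelope)                     # zero-phase low-pass filter


if __name__ == "__main__":
    fs = 16000                                            # assumed sampling rate
    t = np.arange(0, 1.0, 1.0 / fs)
    # Toy "syllable": 500 Hz carrier with a slow (4 Hz) and a fast (30 Hz) AM component
    carrier = np.sin(2 * np.pi * 500 * t)
    am = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * (1 + 0.3 * np.sin(2 * np.pi * 30 * t))
    x = am * carrier
    slow_am = slow_amplitude_envelope(x, fs)              # keeps only the 4 Hz modulation
    print(slow_am.shape)
```

In a vocoder-style manipulation of the sort the abstract alludes to, an envelope like this would typically be extracted per frequency band and used to modulate a noise or tone carrier, discarding the original fine structure (FM); the cutoff frequency then determines whether fast AM cues survive.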

