The interaction between acoustic salience and language experience in developmental speech perception: evidence from nasal place discrimination

2009 ◽  
Vol 13 (3) ◽  
pp. 407-420 ◽  
Author(s):  
Chandan R. Narayan ◽  
Janet F. Werker ◽  
Patrice Speeter Beddor

2002 ◽  
Vol 33 (4) ◽  
pp. 237-252 ◽  
Author(s):  
Susan Nittrouer

Phoneme-sized phonetic segments are often defined as the most basic unit of language organization. Two common inferences made from this description are that there are clear correlates to phonetic segments in the acoustic speech stream, and that humans have access to these segments from birth. In fact, well-replicated studies have shown that the acoustic signal of speech lacks invariant physical correlates to phonetic segments, and that the ability to recognize segmental structure is not present from the start of language learning. Instead, the young child must learn how to process the complex, generally continuous acoustic speech signal so that phonetic structure can be derived. This paper describes and reviews experiments that have revealed developmental changes in speech perception that accompany improvements in access to phonetic structure. In addition, this paper explains how these perceptual changes appear to be related to other aspects of language development, such as syntactic abilities and reading. Finally, evidence is provided that these critical developmental changes result from adequate language experience in naturalistic contexts; accordingly, it is suggested that intervention strategies for children with language-learning problems should focus on enhancing language experience in natural contexts.


2020 ◽  
Vol 6 (30) ◽  
pp. eaba7830 ◽  
Author(s):  
Laurianne Cabrera ◽  
Judit Gervain

Speech perception is constrained by auditory processing. Although at birth infants have an immature auditory system and limited language experience, they show remarkable speech perception skills. To assess neonates’ ability to process the complex acoustic cues of speech, we combined near-infrared spectroscopy (NIRS) and electroencephalography (EEG) to measure brain responses to syllables differing in consonants. The syllables were presented in three conditions preserving (i) original temporal modulations of speech [both amplitude modulation (AM) and frequency modulation (FM)], (ii) both fast and slow AM, but not FM, or (iii) only the slowest AM (<8 Hz). EEG responses indicate that neonates can encode consonants in all conditions, even without the fast temporal modulations, similarly to adults. Yet, the fast and slow AM activate different neural areas, as shown by NIRS. Thus, the immature human brain is already able to decompose the acoustic components of speech, laying the foundations of language learning.
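The slowest-AM condition described above amounts to discarding all but the slow temporal envelope of the signal. A minimal sketch of that kind of manipulation, assuming the slow-AM component is obtained by low-pass filtering the Hilbert envelope below 8 Hz (the function name, filter order, and demo signal are illustrative, not taken from the study's materials):

```python
# Illustrative sketch (not the study's actual stimulus code): extract the
# slow amplitude-modulation (AM) envelope of a sound by low-pass filtering
# its Hilbert envelope below 8 Hz.
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def slow_am_envelope(signal, fs, cutoff_hz=8.0):
    """Amplitude envelope of `signal`, keeping only modulations
    slower than `cutoff_hz` (default 8 Hz, as in condition iii)."""
    envelope = np.abs(hilbert(signal))  # instantaneous amplitude
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, envelope)   # zero-phase low-pass filtering

# Demo: a 1 kHz carrier modulated at 4 Hz (slow AM) and 40 Hz (fast AM)
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
am = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * (1 + 0.8 * np.sin(2 * np.pi * 40 * t))
x = am * np.sin(2 * np.pi * 1000 * t)

env = slow_am_envelope(x, fs)  # retains the 4 Hz modulation, attenuates 40 Hz
```

In a full vocoder pipeline this envelope would then be reapplied to a noise or tone carrier within each frequency band, so that listeners hear the slow AM cues without FM or fast AM.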


2009 ◽  
Vol 20 (9) ◽  
pp. 1064-1069 ◽  
Author(s):  
Jeffrey S. Bowers ◽  
Sven L. Mattys ◽  
Suzanne H. Gage

Previous research suggests that a language learned during early childhood is completely forgotten once contact with that language is severed. In contrast with these findings, we report residual traces of early language exposure that persist into adulthood, despite a complete absence of explicit memory for the language. Specifically, native English individuals under age 40 selectively relearned subtle Hindi or Zulu sound contrasts that they once knew. However, individuals over 40 failed to show any relearning, and young control participants with no previous exposure to Hindi or Zulu showed no learning. This research highlights the lasting impact of early language experience in shaping speech perception, and the value of exposing children to foreign languages even if such exposure does not continue into adulthood.


2019 ◽  
Author(s):  
Matthew H. Davis ◽  
Samuel Evans ◽  
Kathleen McCarthy ◽  
Lindsey Evans ◽  
Anastasia Giannakopoulou ◽  
...  

The role of neurobiologically constrained critical periods for language learning remains controversial. We provide new evidence for critical periods by examining speech sound processing across the lifespan. We tested perceptual acuity for minimal word-word (e.g. bear-pear) and word-pseudoword (e.g. bag-pag) pairs using trial-unique audio-morphed speech tokens. Participants (N=1537) performed a 3-interval, 2-alternative forced-choice perceptual task, indicating which of two cartoon characters said a referent word correctly. We adaptively reduced the contrastive acoustic cues in speech tokens to measure the Proportion of Acoustic Difference Required for Identification (PADRI) at 79.4% correct. Results showed effects of age, lexical context, and language experience on perceptual acuity. However, for native listeners responding to word-word trials, age-related improvements stopped at 16.7 years. This finding suggests a role for continued lexical experience in shaping perceptual acuity for spoken words until late adolescence, consistent with interactive models of speech perception and critical periods.
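The 79.4%-correct target mentioned above is the convergence point of a 3-down-1-up transformed staircase: the tracked level settles where the probability of three consecutive correct responses is 0.5, i.e. p = 0.5^(1/3) ≈ 0.794. A hedged sketch of such a procedure with a simulated listener standing in for real participants (the abstract does not specify the study's exact adaptive rule; the listener model, step size, and trial count here are hypothetical):

```python
# Illustrative 3-down-1-up staircase converging near 79.4% correct.
# All parameters (step size, trial count, listener model) are invented
# for demonstration, not drawn from the study.
import math
import random

def run_staircase(p_correct_at, start=1.0, step=0.05, n_trials=400, seed=0):
    """Track an acoustic-difference level: step down after 3 consecutive
    correct responses, step up after any error; estimate the threshold
    as the mean of the last few reversal levels."""
    rng = random.Random(seed)
    level, run, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        if rng.random() < p_correct_at(level):   # simulated response
            run += 1
            if run == 3:                         # 3 correct in a row: harder
                run = 0
                level = max(0.0, level - step)
                if last_dir == +1:
                    reversals.append(level)      # direction changed: reversal
                last_dir = -1
        else:                                    # any error: easier
            run = 0
            level = min(1.0, level + step)
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
    tail = reversals[-8:]                        # average late reversals
    return sum(tail) / len(tail)

def listener(level, thresh=0.3, slope=10.0):
    """Hypothetical 2-alternative listener: 50% guessing floor, accuracy
    rising sigmoidally with the proportion of acoustic difference."""
    return 0.5 + 0.5 / (1 + math.exp(-slope * (level - thresh)))

estimate = run_staircase(listener)  # settles near the listener's 79.4% point
```

The same logic generalizes to other target percentages by changing the down-rule: a 2-down-1-up staircase, for instance, converges at 0.5^(1/2) ≈ 70.7% correct.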

