Developmental Changes in Speech Discrimination in Infants

1977 ◽  
Vol 20 (4) ◽  
pp. 766-780 ◽  
Author(s):  
Rebecca E. Eilers ◽  
Wesley R. Wilson ◽  
John M. Moore

A visually reinforced infant speech discrimination (VRISD) paradigm is described and evaluated. Infants at two ages were tested with the new paradigm on the following speech contrasts: [sa] vs [va], [sa] vs [∫a], [sa] vs [za], [as] vs [a:z], [a:s] vs [a:z], [at] vs [a:d], [a:t] vs [a:d], [at] vs [a:t], [fa] vs [θa], and [fi] vs [θi]. The data reported are compared with data on the same speech contrasts obtained from three-month-olds in a high-amplitude sucking paradigm. Evidence suggesting developmental changes in speech-sound discriminatory ability is reported. Results are interpreted in light of the salience of available acoustic cues and in terms of new methodological advances.
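The VRISD paradigm infers discrimination from conditioned head turns toward a visual reinforcer when a repeating background sound changes. The sketch below shows one way such a session might be scored; the change/control trial structure, the trial counts, and the binomial criterion are illustrative assumptions, not the procedure or analysis reported in the abstract.

```python
# A minimal sketch of scoring a head-turn (VRISD-style) session. Trial
# structure, counts, and the binomial criterion are illustrative assumptions.
from math import comb

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more head turns."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def score_session(trials):
    """trials: (trial_type, head_turn) pairs, trial_type in {'change', 'control'}."""
    hits = sum(turn for kind, turn in trials if kind == "change")
    n_change = sum(1 for kind, _ in trials if kind == "change")
    false_alarms = sum(turn for kind, turn in trials if kind == "control")
    n_control = sum(1 for kind, _ in trials if kind == "control")
    return hits, n_change, false_alarms, n_control

# Hypothetical session: head turns on 15 of 20 change trials, 2 of 10 controls.
session = ([("change", True)] * 15 + [("change", False)] * 5
           + [("control", True)] * 2 + [("control", False)] * 8)
hits, n_change, fas, n_control = score_session(session)

# Simple criterion: are change-trial head turns unlikely under the
# false-alarm rate estimated from the control trials?
p_value = binomial_tail(hits, n_change, fas / n_control)
print(f"hits {hits}/{n_change}, false alarms {fas}/{n_control}, p = {p_value:.2g}")
```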

2021 ◽  
Author(s):  
Jessie S. Nixon ◽  
Fabian Tomaschek

In the last two decades, statistical clustering models have emerged as a dominant account of how infants learn the sounds of their language. However, recent empirical and computational evidence suggests that purely statistical clustering methods may not be sufficient to explain speech sound acquisition. To model early development of speech perception, the present study used a two-layer network trained with the Rescorla-Wagner learning equations, an implementation of discriminative, error-driven learning. The model contained no a priori linguistic units, such as phonemes or phonetic features. Instead, expectations about the upcoming acoustic speech signal were learned from the surrounding speech signal, with spectral components extracted from an audio recording of child-directed speech serving as both inputs and outputs of the model. To evaluate model performance, we simulated infant responses in the high-amplitude sucking paradigm using vowel and fricative pairs and continua. The simulations were able to discriminate vowel and consonant pairs and predicted the infant speech perception data. The model also showed the greatest amount of discrimination in the expected spectral frequencies. These results suggest that discriminative, error-driven learning may provide a viable approach to modelling early infant speech sound acquisition.
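The core of the model described here is the Rescorla-Wagner update: associations between present cues and outcomes are adjusted in proportion to the prediction error. The sketch below shows that rule with spectral components as both cues and outcomes, as in the abstract; the dimensions, learning rate, and random event stream are made-up stand-ins, not the authors' implementation.

```python
# A minimal sketch of Rescorla-Wagner (error-driven) learning over spectral
# components. Network size, learning rate, and the toy event stream are
# illustrative assumptions.
import numpy as np

def rescorla_wagner_update(weights, cue_idx, outcome_vec, learning_rate=0.01):
    """One learning event.

    weights:     (n_cues, n_outcomes) association matrix
    cue_idx:     indices of cues present on this event
    outcome_vec: observed outcome activations (here: spectral components
                 of the upcoming stretch of speech)
    """
    prediction = weights[cue_idx].sum(axis=0)   # summed activation of present cues
    error = outcome_vec - prediction            # prediction error (lambda - sum V)
    weights[cue_idx] += learning_rate * error   # same correction for every present cue
    return weights

# Hypothetical toy setup: 40 spectral input components, 40 spectral outcomes.
rng = np.random.default_rng(0)
W = np.zeros((40, 40))
for _ in range(1000):                              # stream of learning events
    cues = rng.choice(40, size=5, replace=False)   # spectral components present now
    outcomes = rng.random(40)                      # spectral components that follow
    W = rescorla_wagner_update(W, cues, outcomes)

# Discrimination can then be probed by comparing the network's activations
# for two test stimuli, e.g. the endpoints of a vowel continuum.
```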


1991 ◽  
Vol 34 (3) ◽  
pp. 643-650 ◽  
Author(s):  
Robert J. Nozza ◽  
Sandra L. Miller ◽  
Reva N. F. Rossman ◽  
Linda C. Bond

Infants were tested on a speech-sound discrimination-in-noise task using the visual reinforcement infant speech discrimination (VRISD) procedure with an adaptive (up-down) threshold protocol. An adult control group was tested using the same stimuli and apparatus. The speech sounds were synthetic /ba/ and /ga/. The masker was band-passed noise presented continuously at 48 dB SPL. Test-retest reliability was good for both groups, although test-retest differences were smaller for adults. For infants, the mean of the absolute values of the differences between tests was only 5.2 dB, and there was less than a 10-dB difference between the two tests for 14 (87.5%) of the 16 infants completing the study. The infant-adult difference in discrimination threshold in noise was 6.9 dB, which agrees well with detection-in-noise thresholds from earlier studies and with discrimination-in-noise thresholds obtained on a subset of subjects in our earlier work. Advantages of the adaptive threshold procedure and its possible applications in both research and clinical settings are discussed.
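An adaptive up-down protocol raises or lowers the signal level trial by trial depending on the listener's last response and estimates threshold from the reversal points. The sketch below simulates one simple one-up, one-down version; the step size, reversal criterion, and the logistic "listener" are illustrative assumptions rather than the protocol used in the study.

```python
# A minimal sketch of a one-up, one-down adaptive staircase with a simulated
# listener. Step size, stopping rule, and psychometric parameters are made up.
import math
import random

def simulated_listener(snr_db, true_threshold_db=-4.0, slope=1.0):
    """Chance of a correct discrimination rises smoothly with SNR
    (logistic psychometric function with illustrative parameters)."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_threshold_db)))
    return random.random() < p_correct

def run_staircase(start_snr=10.0, step_db=2.0, max_reversals=8):
    """Lower the SNR after a correct response, raise it after an incorrect one,
    and average the SNR at the reversal points as the threshold estimate."""
    snr, prev_direction, reversals = start_snr, None, []
    while len(reversals) < max_reversals:
        direction = -1 if simulated_listener(snr) else +1
        if prev_direction is not None and direction != prev_direction:
            reversals.append(snr)                  # record the SNR at each reversal
        prev_direction = direction
        snr += direction * step_db
    return sum(reversals) / len(reversals)

random.seed(1)
print(f"Estimated discrimination threshold: {run_staircase():.1f} dB SNR")
```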


1975 ◽  
Vol 18 (1) ◽  
pp. 158-167 ◽  
Author(s):  
Rebecca E. Eilers ◽  
Fred D. Minifie

In three separate experiments using controlled natural stimuli and a high-amplitude sucking paradigm, infants' ability to detect differences between /s/ and /v/, /s/ and /∫/, and /s/ and /z/, respectively, was investigated. Evidence for discrimination was obtained for /s/ versus /v/ and /s/ versus /∫/ but not for /s/ versus /z/. Implications for a theory of infant speech perception are discussed.
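In the high-amplitude sucking paradigm, discrimination is inferred when sucking rate recovers after infants have habituated to a repeating sound and the sound then changes. The sketch below codes one such criterion; the 20% habituation drop, the two-minute windows, and the recovery test are illustrative assumptions, not the authors' procedure.

```python
# A minimal sketch of habituation/dishabituation logic for the high-amplitude
# sucking (HAS) paradigm. Criteria and windows are illustrative assumptions.
def habituated(sucks_per_minute, drop=0.20):
    """True once the rate falls 20% below the running peak for two consecutive minutes."""
    peak, below = 0.0, 0
    for rate in sucks_per_minute:
        peak = max(peak, rate)
        below = below + 1 if rate < (1 - drop) * peak else 0
        if below >= 2:
            return True
    return False

def recovers(pre_change, post_change):
    """Recovery of sucking after the stimulus change suggests discrimination."""
    baseline = sum(pre_change[-2:]) / 2   # mean of the last two pre-change minutes
    recovery = sum(post_change[:2]) / 2   # mean of the first two post-change minutes
    return recovery > baseline            # crude criterion; real analyses compare groups

pre = [45, 50, 48, 40, 36, 33]            # sucks per minute before the change
post = [44, 47, 41]                       # sucks per minute after the change
print(habituated(pre), recovers(pre, post))
```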


1982 ◽  
Vol 9 (2) ◽  
pp. 289-302 ◽  
Author(s):  
Rebecca E. Eilers ◽  
William J. Gavin ◽  
D. Kimbrough Oller

The possibility that early linguistic experience affects infant speech perception was investigated in a cross-linguistic study with naturally produced speech stimuli. Using the Visually Reinforced Infant Speech Discrimination paradigm, three contrasts were presented to Spanish- and English-learning infants 6–8 months of age. Both groups of infants showed statistically significant discrimination of two contrasts, English and Czech. Only the Spanish-learning infants provided evidence of discrimination of the Spanish contrast. The groups discriminated the English contrast at similarly high levels, but the Spanish-learning infants showed significantly higher performance than the English-learning infants on both the Spanish and the Czech contrasts. The results indicate that early experience does affect early discrimination and, further (since the stimuli were natural), that the effect may be of practical consequence in language learning.


2013 ◽  
Vol 60 (1) ◽  
Author(s):  
Aseel Almeqbel

Objective: Cortical auditory-evoked potentials (CAEPs), an objective measure of human speech encoding in individuals with normal or impaired auditory systems, can be used to assess the outcomes of hearing aids and cochlear implants in infants, or in young children who cannot co-operate for behavioural speech discrimination testing. The current study aimed to determine whether naturally produced speech stimuli /m/, /g/ and /t/ evoke distinct CAEP response patterns that can be reliably recorded and differentiated on the basis of their spectral information, and whether the CAEP could serve as an electrophysiological measure for differentiating these speech sounds.

Method: CAEPs were recorded from 18 school-aged children with normal hearing, tested in two groups: younger (5–7 years) and older children (8–12 years). Cortical responses differed in their P1 and N2 latencies and amplitudes in response to /m/, /g/ and /t/ sounds (from low-, mid- and high-frequency regions, respectively). The largest amplitudes of the P1 and N2 components were for /g/ and the smallest were for /t/. The P1 latency in both age groups did not show any significant difference between these speech sounds. The N2 latency showed a significant change in the younger group but not in the older group. The N2 latency for the speech sound /g/ was consistently earlier in both groups.

Conclusion: This study demonstrates that spectrally different speech sounds are encoded differentially at the cortical level and evoke distinct CAEP response patterns. CAEP latencies and amplitudes may provide an objective indication that spectrally different speech sounds are encoded differently at the cortical level.
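Quantifying a CAEP in the way this abstract describes comes down to picking the P1 and N2 peaks out of an averaged waveform and reading off their latency and amplitude. The sketch below shows one way to do that on a synthetic waveform; the search windows, sampling rate, and the simulated response are illustrative assumptions, not the analysis parameters used in the study.

```python
# A minimal sketch of reading P1 and N2 latency/amplitude off an averaged CAEP
# waveform. Search windows and the synthetic response are illustrative.
import numpy as np

def peak_in_window(waveform_uv, times_ms, lo_ms, hi_ms, polarity):
    """Return (latency_ms, amplitude_uV) of the largest positive or negative
    deflection inside the window [lo_ms, hi_ms]."""
    mask = (times_ms >= lo_ms) & (times_ms <= hi_ms)
    segment = waveform_uv[mask]
    idx = np.argmax(segment) if polarity == "positive" else np.argmin(segment)
    return times_ms[mask][idx], segment[idx]

# Hypothetical averaged response: 600 ms epoch sampled at 1 kHz.
times = np.arange(0, 600)
avg = np.random.default_rng(1).normal(0, 0.5, times.size)    # stand-in for real data
avg += 3.0 * np.exp(-((times - 100) ** 2) / (2 * 15 ** 2))   # synthetic P1-like bump
avg -= 4.0 * np.exp(-((times - 250) ** 2) / (2 * 25 ** 2))   # synthetic N2-like trough

p1_lat, p1_amp = peak_in_window(avg, times, 80, 150, "positive")
n2_lat, n2_amp = peak_in_window(avg, times, 180, 320, "negative")
print(f"P1: {p1_lat:.0f} ms, {p1_amp:.1f} uV   N2: {n2_lat:.0f} ms, {n2_amp:.1f} uV")
```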


2020 ◽  
Vol 27 (6) ◽  
pp. 1104-1125
Author(s):  
Anne Marie Crinnion ◽  
Beth Malmskog ◽  
Joseph C. Toscano
