Speech Perception in Noise by Children for Whom English Is a Second Language

1996 ◽  
Vol 5 (3) ◽  
pp. 47-51 ◽  
Author(s):  
Carl C. Crandell ◽  
Joseph J. Smaldino

Appropriate classroom acoustics and the academic achievement of children are known to be correlated. To date, however, there remains a lack of research concerning the importance of classroom acoustics for children for whom English is a second language (ESL). This investigation examined the speech perception of 20 children whose native language is English and 20 ESL children under commonly reported classroom signal-to-noise ratios (SNRs). Sentence perception was assessed with the Bamford-Kowal-Bench Standard Sentence Test. Multitalker babble was used as the noise competition. Results indicated that the ESL children's performance was significantly poorer across most listening conditions. In addition, perceptual differences between the two groups increased as the SNR became less favorable. These data are discussed with respect to the educational management of ESL children.

2017 ◽  
Vol 56 (8) ◽  
pp. 568-579 ◽  
Author(s):  
Christi W. Miller ◽  
Ruth A. Bentler ◽  
Yu-Hsiang Wu ◽  
James Lewis ◽  
Kelly Tremblay

2016 ◽  
Vol 32 (3) ◽  
pp. 367-395 ◽  
Author(s):  
Shannon Barrios ◽  
Nan Jiang ◽  
William J Idsardi

Adult second language (L2) learners often experience difficulty producing and perceiving nonnative phonological contrasts. Even relatively advanced learners, who have been exposed to an L2 for long periods of time, struggle with difficult contrasts, such as /ɹ/–/l/ for Japanese learners of English. To account for the relative ease or difficulty with which L2 learners perceive and acquire nonnative contrasts, theories of L2 speech perception and phonology often appeal to notions of ‘similarity’, but how is ‘similarity’ best captured? In this article, we review two prominent approaches to similarity in L2 speech perception and phonology and present the findings from two experiments that investigated the role of phonological features in the perception and lexical representation of two vowel contrasts that exist in English, but not in Spanish. In particular, we explored whether L1 phonological features can be reused to represent nonnative contrasts in the second language (Brown, 1998, 2000), as well as to what extent new phonological structure might be acquired by advanced late-learners. We show that second language acquisition of phonology is not constrained by the phonological features made available by the learner’s native language grammar, nor is the use of particular phonological features in the native language grammar sufficient to trigger redeployment. These findings suggest that feature availability is neither a necessary, nor a sufficient, condition to predict the observed learning outcomes. These results are discussed in the context of current theories of nonnative and L2 speech perception and phonological development.


2019 ◽  
Vol 40 (2) ◽  
pp. 585-611
Author(s):  
ALEXANDER J. KILPATRICK ◽  
RIKKE L. BUNDGAARD-NIELSEN ◽  
BRETT J. BAKER

Most current models of nonnative speech perception (e.g., extended perceptual assimilation model, PAM-L2, Best & Tyler, 2007; speech learning model, Flege, 1995; native language magnet model, Kuhl, 1993) base their predictions on the native/nonnative status of individual phonetic/phonological segments. This paper demonstrates that the phonotactic properties of Japanese influence the perception of natively contrasting consonants and suggests that phonotactic influence must be formally incorporated in these models. We first propose that by extending the perceptual categories outlined in PAM-L2 to incorporate sequences of sounds, we can account for the effects of differences in native and nonnative phonotactics on nonnative and cross-language segmental perception. In addition, we test predictions based on such an extension in two perceptual experiments. In Experiment 1, Japanese listeners categorized and rated vowel–consonant–vowel strings in combinations that either obeyed or violated Japanese phonotactics. The participants categorized phonotactically illegal strings to the perceptually nearest (legal) categories. In Experiment 2, participants discriminated the same strings in AXB discrimination tests. Our results show that Japanese listeners are more accurate and have faster response times when discriminating between legal strings than between legal and illegal strings. These findings expose serious shortcomings in currently accepted nonnative perception models, which offer no framework for the influence of native language phonotactics.


2006 ◽  
Vol 17 (08) ◽  
pp. 605-616 ◽  
Author(s):  
Samantha M. Lewis ◽  
Michele Hutter ◽  
David J. Lilly ◽  
Dennis Bourdette ◽  
Julie Saunders ◽  
...  

Almost half of the population with multiple sclerosis (MS) complains of difficulty hearing, despite having essentially normal pure-tone thresholds. The purpose of the present investigation was to evaluate the effects of frequency-modulation (FM) technology use on speech perception in noise for adults with and without MS. Sentence material was presented at a constant level of 65 dBA Leq from a loudspeaker located at 0° azimuth. The microphone of the FM transmitter was placed 7.5 cm from this loudspeaker. Multitalker babble was presented from four loudspeakers positioned at 45°, 135°, 225°, and 315° azimuths. The starting presentation level for the babble was 55 dBA Leq. The level of the noise was increased systematically in 1 dB steps until the subject obtained 0% key words correct on the IEEE (Institute of Electrical and Electronics Engineers) sentences. Test results revealed significant differences between the unaided and aided conditions at several signal-to-noise ratios.


2021 ◽  
pp. 002383092098336
Author(s):  
Max R. Freeman ◽  
Henrike K. Blumenfeld ◽  
Matthew T. Carlson ◽  
Viorica Marian

While listening to non-native speech, second language users filter the auditory input through their native language. We examined how bilinguals perceived second language (L2 English) sound sequences that conflicted with native-language (L1 Spanish) constraints across three experiments with different task demands. We used the L1 Spanish phonotactic constraint (i.e., rule for combining speech sounds) that vowels must precede s+consonant clusters (e.g., Spanish: estricto, “strict”). This L1 Spanish constraint may influence Spanish-English bilinguals’ processing of L2 English words such as strict, which lack the initial vowel the L1 requires (as in estrict). We found that the extent to which bilinguals were influenced by the L1 during L2 processing depended on task demands. When metalinguistic awareness demands were low, as in the AX word discrimination task (Experiment 1), cross-linguistic effects were not observed. When metalinguistic awareness demands were high, as in the vowel detection (Experiment 2) and lexical decision (Experiment 3) tasks, response times demonstrated that bilinguals were influenced by the L1 constraint when processing L2 words beginning with an s+consonant. We conclude that bilinguals are cross-linguistically influenced by L1 phonotactic constraints during L2 processing when metalinguistic demands are higher, suggesting that L2 input may be mapped onto L1 sub-lexical representations during perception. These results extend previous research on language co-activation and speech perception by providing a more fine-grained understanding of task demands and elucidating when and where cross-linguistic phonotactic access is present during bilingual comprehension.


Author(s):  
Yones Lotfi ◽  
Jamileh Chupani ◽  
Mohanna Javanbakht ◽  
Enayatollah Bakhshi

Background and Aim: In most everyday settings, speech is heard in the presence of competing sounds, and speech perception in noise is affected by various factors, including cognitive factors. In this regard, bilingualism is a phenomenon that changes cognitive and behavioral processes as well as the nervous system. This study aimed to evaluate speech perception in noise in Kurd-Persian bilinguals compared with Persian monolinguals. Methods: This descriptive-analytic study was performed on 92 students with normal hearing: 46 bilingual Kurd-Persian speakers with a mean (SD) age of 22.73 (1.92) years and 46 Persian monolinguals with a mean (SD) age of 22.71 (2.28) years. They were examined with the consonant-vowel in noise (CV in noise) test and the quick speech in noise (Q-SIN) test. The obtained data were analyzed with SPSS 21. Results: The comparison of the results showed differences between the bilingual and monolingual subjects on both tests. In both groups, reduction of the signal-to-noise ratio led to lower scores, but the decrease on the CV in noise test was smaller in bilinguals than in monolinguals (p < 0.001), whereas on the Q-SIN test the drop in the bilinguals’ scores was greater than in the monolinguals (p = 0.002). Conclusion: Kurd-Persian bilinguals performed better than Persian monolinguals on the CV in noise test but worse on the Q-SIN test.


Author(s):  
Ocke-Schwen Bohn

The study of second language phonetics is concerned with three broad and overlapping research areas: the characteristics of second language speech production and perception, the consequences of perceiving and producing nonnative speech sounds with a foreign accent, and the causes and factors that shape second language phonetics. Second language learners and bilinguals typically produce and perceive the sounds of a nonnative language in ways that are different from native speakers. These deviations from native norms can be attributed largely, but not exclusively, to the phonetic system of the native language. Non-nativelike speech perception and production may have both social consequences (e.g., stereotyping) and linguistic–communicative consequences (e.g., reduced intelligibility). Research on second language phonetics over the past ca. 30 years has resulted in a fairly good understanding of causes of nonnative speech production and perception, and these insights have to a large extent been driven by tests of the predictions of models of second language speech learning and of cross-language speech perception. It is generally accepted that the characteristics of second language speech are predominantly due to how second language learners map the sounds of the nonnative to the native language. This mapping cannot be entirely predicted from theoretical or acoustic comparisons of the sound systems of the languages involved, but has to be determined empirically through tests of perceptual assimilation. The most influential learner factors which shape how a second language is perceived and produced are the age of learning and the amount and quality of exposure to the second language. 
A very important and far-reaching finding from research on second language phonetics is that age effects are not due to neurological maturation, which could result in the attrition of phonetic learning ability, but rather to the way phonetic categories develop as a function of experience with surrounding sound systems.


1988 ◽  
Vol 31 (1) ◽  
pp. 108-114 ◽  
Author(s):  
H. Donell Lewis ◽  
Vernon A. Benignus ◽  
Keith E. Muller ◽  
Carolin M. Malott ◽  
Curtis N. Barton

"Perceptual" masking of speech by multitalker speech (babble) has been widely reported but poorly quantified. Furthermore, the validity of the construct of perceptual masking is questionable. This report describes an experiment using a newly standardized test of speech perception in noise (SPIN) with both babble and spectrally matched random-noise maskers. Classical psychophysical ogive curves were used to model speech recognition as a function of signal-to-noise ratio (S/N). The two maskers yielded speech recognition functions of the same steepness but different locations on the S/N axis. The high-context items of the SPIN test yielded speech recognition curves with a steeper slope and different locations on the S/N axis than the low-context items. These data are used to argue that perceptual masking was not documented (under certain assumptions) and that the superior masking of babble may be explained in purely acoustical terms. Speculations are offered about the possible acoustical differences that could be responsible for the differences in masking effect.
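The ogive modeling described above is commonly implemented as a logistic function of S/N, where two maskers with the same steepness but different locations differ only in their midpoints. The sketch below illustrates this; the midpoint and slope values are assumptions for illustration, not the study's fitted parameters:

```python
import math

def psychometric(snr, midpoint, slope):
    """Logistic ogive: proportion of words recognized at a given S/N (dB).
    midpoint = S/N at 50% recognition; slope controls steepness."""
    return 1.0 / (1.0 + math.exp(-slope * (snr - midpoint)))

# Hypothetical parameters: equal steepness, different locations on the
# S/N axis, mirroring the babble vs. random-noise pattern reported above.
babble_midpoint, noise_midpoint, slope = -2.0, -5.0, 0.5

for snr in (-10, -5, 0, 5):
    p_babble = psychometric(snr, babble_midpoint, slope)
    p_noise = psychometric(snr, noise_midpoint, slope)
    print(f"S/N {snr:+3d} dB: babble {p_babble:.2f}, random noise {p_noise:.2f}")
```

At any fixed S/N the function with the higher midpoint (babble) predicts lower recognition, which is one way to express "superior masking" as a pure shift along the S/N axis.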


2012 ◽  
Vol 23 (08) ◽  
pp. 590-605 ◽  
Author(s):  
Richard H. Wilson ◽  
Rachel McArdle ◽  
Kelly L. Watts ◽  
Sherri L. Smith

Background: The Revised Speech Perception in Noise Test (R-SPIN; Bilger, 1984b) is composed of 200 target words distributed as the last words in 200 low-predictability (LP) and 200 high-predictability (HP) sentences. Four list pairs, each consisting of two 50-sentence lists, were constructed with each target word appearing in an LP and an HP sentence. Traditionally the R-SPIN is presented at a signal-to-noise ratio (SNR, S/N) of 8 dB, with the listener's task being to repeat the last word in the sentence. Purpose: The purpose was to determine the practicality of altering the R-SPIN format from a single-SNR paradigm into a multiple-SNR paradigm from which the 50% points for the HP and LP sentences can be calculated. Research Design: Three repeated-measures experiments were conducted. Study Sample: Forty listeners with normal hearing and 184 older listeners with pure-tone hearing loss participated in the sequence of experiments. Data Collection and Analysis: The R-SPIN sentences were edited digitally (1) to maintain the temporal relation between the sentences and babble, (2) to establish the SNRs, and (3) to mix the speech and noise signals to obtain SNRs between –1 and 23 dB. All materials were recorded on CD and were presented through an earphone, with the responses recorded and analyzed at the token level. For reference purposes the Words-in-Noise Test (WIN) was included in the first experiment. Results: In Experiment 1, recognition performances by listeners with normal hearing were better than performances by listeners with hearing loss. For both groups, performances on the HP materials were better than performances on the LP materials. Performances on the LP materials and on the WIN were similar. Performances at 8 dB S/N were the same with the traditional fixed-level presentation paradigm and the descending presentation-level paradigm. 
The results from Experiment 2 demonstrated that the four list pairs of R-SPIN materials produced good first-approximation psychometric functions over the –4 to 23 dB S/N range, but there were irregularities. The data from Experiment 2 were used in Experiment 3 to guide the selection of the words to be used at the various SNRs that would provide homogeneous performances at each SNR and would produce systematic psychometric functions. In Experiment 3, the 50% points were in good agreement for the LP and HP conditions within both groups of listeners. The psychometric functions for List Pairs 1 and 2, 3 and 4, and 5 and 6 had similar characteristics and maintained reasonable separations between the HP and LP functions, whereas the HP and LP functions for List Pairs 7 and 8 bisected one another at the lower SNRs. Conclusions: This study indicates that the R-SPIN can be configured into a multiple-SNR paradigm. A more in-depth study with the R-SPIN materials is needed to develop lists that are systematic and reasonably equivalent for use with listeners with hearing loss. The approach should be based on the psychometric characteristics of the 200 HP and 200 LP sentences, with the current R-SPIN lists discarded. Of importance is maintaining the synchrony between the sentences and their accompanying babble.
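A 50% point of the kind calculated in the multiple-SNR paradigm can be estimated from percent-correct scores at several SNRs by linear interpolation between the two SNRs that bracket 50%. A minimal sketch, using made-up HP and LP proportions for illustration (not the study's data):

```python
def fifty_percent_point(snrs, proportions):
    """Interpolate the S/N (dB) at which recognition crosses 50%.
    Assumes snrs are ascending and proportions rise roughly monotonically."""
    for i in range(len(snrs) - 1):
        s0, s1 = snrs[i], snrs[i + 1]
        p0, p1 = proportions[i], proportions[i + 1]
        if p0 < 0.5 <= p1:
            # Linear interpolation between the bracketing points.
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("50% point not bracketed by the data")

# Illustrative proportions correct at each SNR (hypothetical values).
snrs = [-4, 0, 4, 8, 12, 16, 20]
hp = [0.05, 0.20, 0.55, 0.80, 0.92, 0.97, 0.99]  # high-predictability
lp = [0.02, 0.08, 0.25, 0.50, 0.75, 0.90, 0.96]  # low-predictability

print(fifty_percent_point(snrs, hp))  # HP crosses 50% at a lower S/N than LP
print(fifty_percent_point(snrs, lp))
```

The gap between the HP and LP 50% points is one way to quantify the context benefit that separates the two sentence types on the S/N axis.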

