First-language influence on second language speech perception depends on task demands

2021, pp. 002383092098336
Author(s): Max R. Freeman, Henrike K. Blumenfeld, Matthew T. Carlson, Viorica Marian

While listening to non-native speech, second language users filter the auditory input through their native language. We examined how bilinguals perceived second language (L2 English) sound sequences that conflicted with native-language (L1 Spanish) constraints across three experiments with different task demands. We used the L1 Spanish phonotactic constraint (i.e., rule for combining speech sounds) that vowels must precede s+consonant clusters (e.g., Spanish: estricto, “strict”). This L1 Spanish constraint may influence Spanish-English bilinguals’ processing of L2 English words such as strict because of a missing initial vowel, as in estrict. We found that the extent to which bilinguals were influenced by the L1 during L2 processing depended on task demands. When metalinguistic awareness demands were low, as in the AX word discrimination task (Experiment 1), cross-linguistic effects were not observed. When metalinguistic awareness demands were high, as in the vowel detection (Experiment 2) and lexical decision (Experiment 3) tasks, response times demonstrated that bilinguals were influenced by the L1 constraint when processing L2 words beginning with an s+consonant. We conclude that bilinguals are cross-linguistically influenced by L1 phonotactic constraints during L2 processing when metalinguistic demands are higher, suggesting that L2 input may be mapped onto L1 sub-lexical representations during perception. These results extend previous research on language co-activation and speech perception by providing a more fine-grained understanding of task demands and elucidating when and where cross-linguistic phonotactic access is present during bilingual comprehension.

2019, Vol 40 (2), pp. 585-611
Author(s): Alexander J. Kilpatrick, Rikke L. Bundgaard-Nielsen, Brett J. Baker

Most current models of nonnative speech perception (e.g., extended perceptual assimilation model, PAM-L2, Best & Tyler, 2007; speech learning model, Flege, 1995; native language magnet model, Kuhl, 1993) base their predictions on the native/nonnative status of individual phonetic/phonological segments. This paper demonstrates that the phonotactic properties of Japanese influence the perception of natively contrasting consonants and suggests that phonotactic influence must be formally incorporated in these models. We first propose that by extending the perceptual categories outlined in PAM-L2 to incorporate sequences of sounds, we can account for the effects of differences in native and nonnative phonotactics on nonnative and cross-language segmental perception. In addition, we test predictions based on such an extension in two perceptual experiments. In Experiment 1, Japanese listeners categorized and rated vowel–consonant–vowel strings in combinations that either obeyed or violated Japanese phonotactics. The participants categorized phonotactically illegal strings to the perceptually nearest (legal) categories. In Experiment 2, participants discriminated the same strings in AXB discrimination tests. Our results show that Japanese listeners are more accurate and have faster response times when discriminating between legal strings than between legal and illegal strings. These findings expose serious shortcomings in currently accepted nonnative perception models, which offer no framework for the influence of native language phonotactics.


2020, pp. 136216882091402
Author(s): James F. Lee, Paul A. Malovrh, Stephen Doherty, Alecia Nichols

Recent research on the effects of processing instruction (PI) has incorporated online research methods in order to demonstrate that PI has effects on cognitive processing behaviors as well as on accuracy (e.g. Lee & Doherty, 2019a). The present study uses self-paced reading and a moving-windows technique to examine the effects of PI on second language (L2) learners’ processing of Spanish active and passive sentences, exploring the effects of PI on instructed second language acquisition. One group received PI; the Control group did not. Between-group comparisons on passive sentences showed changes in performance for the PI group but not the Control group, with the PI group gaining in accuracy and processing speed, specifically faster response times to select the correct picture and faster reading times on passive verb forms. Within-group analyses showed changes in the PI group’s performance on all dependent variables at the immediate posttest and a subsequent decline in performance at the delayed posttest (eight weeks later). We discuss the implications of our results and treatment format for classroom and hybridized instruction.


2021, pp. 026765832110306
Author(s): Félix Desmeules-Trudel, Tania S. Zamuner

Spoken word recognition depends on variations in fine-grained phonetics as listeners decode speech. However, many models of second language (L2) speech perception focus on units such as isolated syllables rather than words. In two eye-tracking experiments, we investigated how fine-grained phonetic detail (i.e. duration of nasalization on contrastive and coarticulatory nasalized vowels in Canadian French) influenced spoken word recognition in an L2, as compared to a group of native (L1) listeners. L2 listeners (native English speakers) were able to distinguish minimal word pairs (differentiated by the presence of phonological vowel nasalization in French) and used nasalization duration variability in a way approximating L1-French listeners, providing evidence that lexical representations can be highly specified in an L2. Furthermore, the robustness of the French “nasal vowel” category in L2 listeners depended on age of exposure. Early bilinguals displayed greater sensitivity to ambiguity in the stimuli than late bilinguals, suggesting that early bilinguals had greater sensitivity to small variations in the signal and thus better knowledge of the phonetic cue associated with phonological vowel nasalization in French, similarly to L1 listeners.


Author(s): Paola E. Dussias, Jorge R. Valdés Kroff, Michael Johns, Álvaro Villegas

In this chapter, we survey recent contributions to the research on bilingual language processing that demonstrate how exposure to a second language, even for a brief period of time, can impact processing in the native language. We focus our discussion primarily on syntactic and morpho-syntactic processing. In light of this evidence, we argue that claims of language attrition may not be as clear-cut as one may think when online language processing is taken into account. A second goal of our chapter is to show that eye-tracking is a premier behavioural method by which we can come to understand fine-grained changes in online language processing. In doing so, we hope to illustrate how the study of online language processing via eye-tracking can help to clarify issues in language attrition.


1996, Vol 5 (3), pp. 47-51
Author(s): Carl C. Crandell, Joseph J. Smaldino

Appropriate classroom acoustics and the academic achievement of children are known to be correlated. To date, however, there remains a lack of research concerning the importance of classroom acoustics for children for whom English is a second language (ESL). This investigation examined the speech perception of 20 children whose native language is English and 20 ESL children under commonly reported classroom signal-to-noise ratios (SNRs). Sentence perception was assessed with the Bamford-Kowal-Bench Standard Sentence Test, using multitalker babble as the noise competition. Results indicated that the ESL children's performance was significantly poorer across most listening conditions. In addition, perceptual differences between the two groups increased as the SNR became less favorable. These data are discussed with respect to the educational management of ESL children.
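For readers unfamiliar with the measure, the signal-to-noise ratio used in such studies is conventionally expressed in decibels as the power of the target speech relative to the competing noise (here, multitalker babble):

```latex
% Conventional definition of signal-to-noise ratio in decibels.
% P_signal: acoustic power of the target speech (the teacher's voice);
% P_noise: acoustic power of the competing noise (classroom babble).
\mathrm{SNR_{dB}} = 10 \log_{10}\!\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\right)
```

A "less favorable" SNR thus means the babble power approaches or exceeds the speech power, i.e. the ratio in decibels approaches zero or becomes negative.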


2016, Vol 32 (3), pp. 367-395
Author(s): Shannon Barrios, Nan Jiang, William J Idsardi

Adult second language (L2) learners often experience difficulty producing and perceiving nonnative phonological contrasts. Even relatively advanced learners, who have been exposed to an L2 for long periods of time, struggle with difficult contrasts, such as /ɹ/–/l/ for Japanese learners of English. To account for the relative ease or difficulty with which L2 learners perceive and acquire nonnative contrasts, theories of L2 speech perception and phonology often appeal to notions of ‘similarity’, but how is ‘similarity’ best captured? In this article, we review two prominent approaches to similarity in L2 speech perception and phonology and present the findings from two experiments that investigated the role of phonological features in the perception and lexical representation of two vowel contrasts that exist in English, but not in Spanish. In particular, we explored whether L1 phonological features can be reused to represent nonnative contrasts in the second language (Brown, 1998, 2000), as well as to what extent new phonological structure might be acquired by advanced late-learners. We show that second language acquisition of phonology is not constrained by the phonological features made available by the learner’s native language grammar, nor is the use of particular phonological features in the native language grammar sufficient to trigger redeployment. These findings suggest that feature availability is neither a necessary, nor a sufficient, condition to predict the observed learning outcomes. These results are discussed in the context of current theories of nonnative and L2 speech perception and phonological development.


Author(s): Ocke-Schwen Bohn

The study of second language phonetics is concerned with three broad and overlapping research areas: the characteristics of second language speech production and perception, the consequences of perceiving and producing nonnative speech sounds with a foreign accent, and the causes and factors that shape second language phonetics. Second language learners and bilinguals typically produce and perceive the sounds of a nonnative language in ways that are different from native speakers. These deviations from native norms can be attributed largely, but not exclusively, to the phonetic system of the native language. Non-nativelike speech perception and production may have both social consequences (e.g., stereotyping) and linguistic–communicative consequences (e.g., reduced intelligibility). Research on second language phonetics over the past ca. 30 years has resulted in a fairly good understanding of causes of nonnative speech production and perception, and these insights have to a large extent been driven by tests of the predictions of models of second language speech learning and of cross-language speech perception. It is generally accepted that the characteristics of second language speech are predominantly due to how second language learners map the sounds of the nonnative to the native language. This mapping cannot be entirely predicted from theoretical or acoustic comparisons of the sound systems of the languages involved, but has to be determined empirically through tests of perceptual assimilation. The most influential learner factors which shape how a second language is perceived and produced are the age of learning and the amount and quality of exposure to the second language. A very important and far-reaching finding from research on second language phonetics is that age effects are not due to neurological maturation which could result in the attrition of phonetic learning ability, but to the way phonetic categories develop as a function of experience with surrounding sound systems.


2021, Vol 12
Author(s): Miquel Llompart

Establishing phonologically robust lexical representations in a second language (L2) is challenging, and even more so for words containing phones in phonological contrasts that are not part of the native language. This study presents a series of additional analyses of lexical decision data assessing the phonolexical encoding of English /ε/ and /æ/ by German learners of English (/æ/ does not exist in German) in order to examine the influence of lexical frequency, phonological neighborhood density and the acoustics of the particular vowels on learners’ ability to reject nonwords differing from real words in the confusable L2 phones only (e.g., *l[æ]mon, *dr[ε]gon). Results showed that both the lexical properties of the target items and the acoustics of the critical vowels affected nonword rejection, albeit differently for items with /æ/ → [ε] and /ε/ → [æ] mispronunciations: For the former, lower lexical frequencies and higher neighborhood densities led to more accurate performance. For the latter, it was only the acoustics of the vowel (i.e., how distinctly [æ]-like the mispronunciation was) that had a significant impact on learners’ accuracy. This suggests that the encoding of /ε/ and /æ/ may not only be asymmetric in that /ε/ is generally more robustly represented in the lexicon than /æ/, as previously reported, but also in the way in which this encoding takes place. Mainly, the encoding of /æ/ appears to be more dependent on the characteristics of the L2 vocabulary and on one’s experience with the L2 than that of its more dominant counterpart (/ε/).


2015, Vol 36 (1), pp. 115-128
Author(s): Anne Cutler

Orthographies encode phonological information only at the level of words (chiefly, the information encoded concerns phonetic segments; in some cases, tonal information or default stress may be encoded). Of primary interest to second language (L2) learners is whether orthography can assist in clarifying L2 phonological distinctions that are particularly difficult to perceive (e.g., where one native-language phonemic category captures two L2 categories). A review of spoken-word recognition evidence suggests that orthographic information can install knowledge of such a distinction in lexical representations but that this does not affect learners’ ability to perceive the phonemic distinction in speech. Words containing the difficult phonemes become even harder for L2 listeners to recognize, because perception maps less accurately to lexical content.

