The modality switching costs of Chinese–English bilinguals in the processing of L1 and L2

2019 · Vol 73 (3) · pp. 396-412
Author(s): Tianyang Zhao, Yanli Huang, Donggui Chen, Lu Jiao, Fernando Marmolejo-Ramos, ...

Modality switching costs refer to the finding that people's performance worsens when they judge sequential pieces of information related to different sensory modalities rather than to the same modality. In this study, we conducted three experiments with proficient and non-proficient bilinguals to investigate modality switching costs in L1 and L2 processing separately. In Experiment 1, the materials were L1 and L2 words conceptually related either to the visual modality (e.g., light) or to the auditory modality (e.g., song). Modality switching costs were examined in a lexical decision task in both L1 and L2. Experiment 2 further explored these costs while weakening the activation level of the perceptual modality by adding a set of fillers. Experiment 3 used a word-naming task to explore the modality switching effect in language production in L1 and L2. The results showed that modality switching costs appeared in both language comprehension and production, in both the L1 and L2 conditions. The magnitude of the costs was conditionally modulated by L2 proficiency level, namely in the L2 condition in Experiment 1 and in both the L1 and L2 conditions in Experiment 3. These results suggest that sensorimotor simulation is involved not only in language comprehension but also in language production, and that sensorimotor simulation acquired in L1 can be transferred to L2.

2021 · Vol 11 (1)
Author(s): Candice Frances, Eugenia Navarra-Barindelli, Clara D. Martin

Abstract: Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality, as well as the interplay between type of similarity and modality, remains largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. The results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into current processing models.



2021
Author(s): Alba Casado, Jakub M. Szewczyk, Agata Wolna, Zofia Wodniecka

After naming pictures in their second language (L2), bilinguals experience difficulty in naming pictures in their native language (L1). This "L2 after-effect" is a lingering consequence of inhibition applied to L1 to facilitate L2 production. We proposed that the amount of L1 inhibition depends on the relative balance between the current activation of L1 and L2. In two experiments, bilinguals performed a blocked picture-naming task that provided a measure of the relative balance between the two languages and indexed whole-language inhibition via the magnitude of the L2 after-effect. The higher the activation level of L1 and the lower the activation level of L2, the bigger the L2 after-effect. The results also revealed an enduring down-regulation of the L1 activation level in more language-balanced speakers. The outcomes support the main tenets of the inhibitory account of bilingual language production and indicate a high level of dynamics in the language system.


2015 · Vol 19 (3) · pp. 533-549
Author(s): Rita Pureza, Ana Paula Soares, Montserrat Comesaña

This study explores the role of cognate status, syllable position, and word length in tip-of-the-tongue (TOT) state induction and resolution for European Portuguese (EP; L1)–English (L2) bilinguals (with EP monolinguals as controls). TOTs were induced using a picture-naming task in L1 and L2, followed by a lexical decision task in which the first or the last syllable of the target word (or neither, for the control condition) was embedded in pseudowords (syllabic pseudohomophones) to test its effect on TOT resolution. Bilinguals presented more TOTs in L2 than in L1, especially for noncognate words. Longer words showed more TOTs than shorter words, though only in L1. TOT resolution was higher for cognates in L2 and higher when primed by the first syllable than by the last. Finally, longer cognates showed more TOT resolution than shorter cognates, irrespective of language. The results are discussed in light of the main hypotheses about TOT states.


2017 · Vol 20 (4) · pp. 712-721
Author(s): Ian Cunnings

The primary aim of my target article was to demonstrate how careful consideration of the working memory operations that underlie successful language comprehension is crucial to our understanding of the similarities and differences between native (L1) and non-native (L2) sentence processing. My central claims were that highly proficient L2 speakers construct similarly specified syntactic parses as L1 speakers, and that differences between L1 and L2 processing can be characterised in terms of L2 speakers being more prone to interference during memory retrieval operations. In explaining L1/L2 differences in this way, I argued that a primary source of differences between L1 and L2 processing lies in how different populations of speakers weight the cues that guide memory retrieval.


Author(s): Aaron Crowson, Zachary H. Pugh, Michael Wilkinson, Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has resulted in an increasing need to represent the physical world while immersed in the virtual. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect the noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, this study investigates how an orientation-type notification aids perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2017 · Vol 28 (03) · pp. 222-231
Author(s): Riki Taitelbaum-Swead, Michal Icht, Yaniv Mama

Abstract: In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks.

The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 years, and in their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests.

Twelve young adults, long-term CI users, implanted between the ages of 1.7 and 4.5 years and scoring ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group.

For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Paired-sample t tests were then used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each learning condition.

With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance (and a similar PE) comparable to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The results support the construct that young adults with CIs benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can largely improve auditory word memory, especially for the CI group.


2014 · Vol 18 (3) · pp. 490-501
Author(s): Roberto Filippi, John Morris, Fiona M. Richardson, Peter Bright, Michael S.C. Thomas, ...

Studies measuring inhibitory control in the visual modality have shown a bilingual advantage in both children and adults. However, there is a lack of developmental research on inhibitory control in the auditory modality. This study compared the comprehension of active and passive English sentences in 7- to 10-year-old bilingual and monolingual children. The task was to identify the agent of a sentence in the presence of verbal interference. The target sentence was cued by the gender of the speaker. Children were instructed to focus on the sentence in the target voice and ignore the distractor sentence. Results indicate that bilinguals are more accurate than monolinguals in comprehending syntactically complex sentences in the presence of linguistic noise. This supports previous findings with adult participants (Filippi, Leech, Thomas, Green & Dick, 2012). We therefore conclude that the bilingual advantage in interference control begins early in life and is maintained throughout development.


1981 · Vol 24 (3) · pp. 351-357
Author(s): Paula Tallal, Rachel Stark, Clayton Kallman, David Mellits

A battery of nonverbal perceptual and memory tests was given to 35 language-impaired (LI) and 38 control subjects. Tests were given in three modalities: auditory, visual, and cross-modal (auditory and visual). The purpose was to reexamine some nonverbal perceptual and memory abilities of LI children as a function of age and modality of stimulation. Results failed to replicate previous findings of a temporal processing deficit specific to the auditory modality in LI children. The LI group made significantly more errors than controls regardless of modality of stimulation when 2-item sequences were presented rapidly, or when more than two stimuli were presented in series. However, further analyses resolved this apparent conflict between the present and earlier studies by demonstrating that age is an important variable underlying the modality specificity of perceptual performance in LI children. Whereas younger LI children were equally impaired when responding to stimuli presented rapidly in the auditory and visual modalities, older LI subjects made nearly twice as many errors responding to rapidly presented auditory stimuli than to visual stimuli. This developmental difference did not occur for the control group.


2019
Author(s): Merel Muylle, Eva Van Assche, Robert Hartsuiker

Cognates – words that share form and meaning between languages – are processed faster than control words. However, it is unclear whether this effect is merely lexical (i.e., central) in nature, or whether it cascades to phonological/orthographic (i.e., peripheral) processes. This study compared the cognate effect in spoken and typewritten production, which share central, but not peripheral, processes. We asked whether the effect is present in typewriting, and if so, whether its magnitude is similar to that in spoken production. Dutch–English bilinguals performed either a spoken or a written picture-naming task in English; picture names were either Dutch–English cognates or control words. Cognates were named faster than controls, and there was no cognate-by-modality interaction. Additionally, there was a similar error pattern in both modalities. These results suggest that common underlying processes are responsible for the cognate effect in spoken and written language production, and thus a central locus of the cognate effect.

