Bilingual children show an advantage in controlling verbal interference during spoken language comprehension

2014, Vol. 18 (3), pp. 490–501
Author(s): Roberto Filippi, John Morris, Fiona M. Richardson, Peter Bright, Michael S. C. Thomas, ...

Studies measuring inhibitory control in the visual modality have shown a bilingual advantage in both children and adults. However, there is a lack of developmental research on inhibitory control in the auditory modality. This study compared the comprehension of active and passive English sentences in 7- to 10-year-old bilingual and monolingual children. The task was to identify the agent of a sentence in the presence of verbal interference. The target sentence was cued by the gender of the speaker: children were instructed to focus on the sentence in the target voice and ignore the distractor sentence. Results indicate that bilinguals are more accurate than monolinguals in comprehending syntactically complex sentences in the presence of linguistic noise. This supports previous findings with adult participants (Filippi, Leech, Thomas, Green & Dick, 2012). We therefore conclude that the bilingual advantage in interference control begins early in life and is maintained throughout development.

2008, Vol. 11 (1), pp. 81–93
Author(s): Michelle M. Martin-Rhee, Ellen Bialystok

Previous research has shown that bilingual children excel in tasks requiring inhibitory control to ignore a misleading perceptual cue. The present series of studies extends this finding by identifying the degree and type of inhibitory control for which bilingual children demonstrate this advantage. Study 1 replicated the earlier research by showing that bilingual children perform the Simon task more rapidly than monolinguals, but only on conditions in which the demands for inhibitory control were high. The next two studies compared performance on tasks that required inhibition of attention to a specific cue, like the Simon task, and inhibition of a habitual response, like the day–night Stroop task. In both studies, bilingual children maintained their advantage on tasks that require control of attention but showed no advantage on tasks that required inhibition of response. These results confine the bilingual advantage found previously to complex tasks requiring control over attention to competing cues (interference suppression) and not to tasks requiring control over competing responses (response inhibition).


2012, Vol. 15 (4), pp. 858–872
Author(s): Roberto Filippi, Robert Leech, Michael S. C. Thomas, David W. Green, Frederic Dick

This study compared the comprehension of syntactically simple and more complex sentences in Italian–English adult bilinguals and monolingual controls, in the presence or absence of sentence-level interference. The task was to identify the agent of the sentence, and we primarily examined response accuracy. The target sentence was signalled by the gender of the speaker, which varied over trials: when the target was spoken in a male voice the distractor was spoken in a female voice, and vice versa. In contrast to other work showing a bilingual disadvantage in sentence comprehension under conditions of noise, we show that in this task, where voice permits selection of the target, adult bilingual speakers are in fact better able than their monolingual Italian peers to resist sentence-level interference when comprehension demands are high. Within bilingual speakers, we also found that degree of proficiency in English correlated with the ability to resist interference for complex sentences, both when the target and distractor were in Italian and when the target was in English and the distractor in Italian.


2021, Vol. 11 (1)
Author(s): Candice Frances, Eugenia Navarra-Barindelli, Clara D. Martin

Language perception studies on bilinguals often show that words that share form and meaning across languages (cognates) are easier to process than words that share only meaning. This facilitatory phenomenon is known as the cognate effect. Most previous studies have shown this effect visually, whereas the auditory modality, as well as the interplay between type of similarity and modality, remains largely unexplored. In this study, highly proficient late Spanish–English bilinguals carried out a lexical decision task in their second language, both visually and auditorily. Words had high or low phonological and orthographic similarity, fully crossed. We also included orthographically identical words (perfect cognates). Our results suggest that similarity in the same modality (i.e., orthographic similarity in the visual modality and phonological similarity in the auditory modality) leads to improved signal detection, whereas similarity across modalities hinders it. We provide support for the idea that perfect cognates are a special category within cognates. Results suggest a need for a conceptual and practical separation between types of similarity in cognate studies. The theoretical implication is that the representations of items are active in both modalities of the non-target language during language processing, which needs to be incorporated into our current processing models.
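The abstract frames the modality-by-similarity interaction in terms of signal detection but does not spell out the measure here. As an illustration only (not the authors' analysis), the sketch below computes the standard sensitivity index d′ for one lexical decision condition; the function and all counts are hypothetical.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (+0.5 to each count) keeps rates of exactly
    0 or 1 from producing infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant in one condition
# (e.g., auditory presentation, high phonological similarity):
print(round(d_prime(hits=52, misses=8, false_alarms=12, correct_rejections=48), 2))
```

Comparing such a sensitivity index across the crossed similarity conditions and the two presentation modalities is one way to express the pattern the abstract reports, with within-modality similarity improving detection and cross-modal similarity hindering it.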


Author(s): Aaron Crowson, Zachary H. Pugh, Michael Wilkinson, Christopher B. Mayhorn

The development of head-mounted display virtual reality systems (e.g., Oculus Rift, HTC Vive) has resulted in an increasing need to represent the physical world while immersed in the virtual one. Current research has focused on representing static objects in the physical room, but there has been little research into notifying VR users of changes in the environment. This study investigates how different sensory modalities affect noticeability and comprehension of notifications designed to alert head-mounted display users when a person enters their area of use. In addition, this study investigates how an orientation-type notification aids perception of alerts that manifest outside a virtual reality user's visual field. Results of a survey indicated that participants perceived the auditory modality as more effective regardless of notification type. An experiment corroborated these findings for the person notifications; however, the visual modality was in practice more effective for orientation notifications.


2017, Vol. 28 (03), pp. 222–231
Author(s): Riki Taitelbaum-Swead, Michal Icht, Yaniv Mama

In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers in cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and their NH peers. A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice, once with the implant ON and once with it OFF. All conditions were followed by free recall tests. Twelve young adults, long-term CI users, implanted between ages 1.7 and 4.5 yr and scoring ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, we calculated the proportion of study words recalled. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Following this, paired-sample t tests were used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each of the learning conditions. With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed comparable memory performance (and a similar PE) to NH peers. However, with auditory presentation, young adults with CIs showed poorer memory for non-produced words (hence a larger PE) relative to their NH peers. The results support the construct that young adults with CIs will benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can largely improve auditory word memory, especially for the CI group.
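The abstract describes the analysis verbally: per-condition recall proportions, the PE size as the aloud-minus-silent difference, and paired-sample t tests. A minimal sketch of that kind of comparison, using invented recall proportions for a handful of hypothetical participants, might look as follows; it is not the authors' code.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant recall proportions (invented numbers)
# for one learning condition: aloud vs. silently studied words.
aloud = np.array([0.55, 0.48, 0.62, 0.50, 0.58, 0.47, 0.53, 0.60])
silent = np.array([0.41, 0.39, 0.50, 0.37, 0.44, 0.40, 0.42, 0.49])

pe_size = aloud - silent            # production effect per participant
overall = (aloud + silent) / 2      # overall recall ratio per participant

t_stat, p_value = ttest_rel(aloud, silent)   # paired-samples t test
print(f"mean PE = {pe_size.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean overall recall = {overall.mean():.3f}")
```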


1981, Vol. 24 (3), pp. 351–357
Author(s): Paula Tallal, Rachel Stark, Clayton Kallman, David Mellits

A battery of nonverbal perceptual and memory tests was given to 35 language-impaired (LI) and 38 control subjects. Tests were given in three modalities: auditory, visual, and cross-modal (auditory and visual). The purpose was to reexamine some nonverbal perceptual and memory abilities of LI children as a function of age and modality of stimulation. Results failed to replicate previous findings of a temporal processing deficit that is specific to the auditory modality in LI children. The LI group made significantly more errors than did controls, regardless of modality of stimulation, when 2-item sequences were presented rapidly or when more than two stimuli were presented in series. However, further analyses resolved this apparent conflict between the present and earlier studies by demonstrating that age is an important variable underlying the modality specificity of perceptual performance in LI children. Whereas younger LI children were equally impaired when responding to rapidly presented auditory and visual stimuli, older LI subjects made nearly twice as many errors responding to rapidly presented auditory rather than visual stimuli. This developmental difference did not occur for the control group.


2017, Vol. 21 (3), pp. 523–536
Author(s): Susannah V. Levi

A bilingual advantage has been found in both cognitive and social tasks. In the current study, we examine whether there is a bilingual advantage in how children process information about who is talking (talker-voice information). Younger and older groups of monolingual and bilingual children completed the following talker-voice tasks with bilingual speakers: a discrimination task in English and German (an unfamiliar language), and a talker-voice learning task in which they learned to identify the voices of three unfamiliar speakers in English. Results revealed effects of age and bilingual status. Across the tasks, older children performed better than younger children and bilingual children performed better than monolingual children. Improved talker-voice processing by the bilingual children suggests that a bilingual advantage exists in a social aspect of speech perception, where the focus is not on processing the linguistic information in the signal, but instead on processing information about who is talking.


2018, Vol. 38 (4), pp. 382–398
Author(s): Vanessa Diaz, M. Jeffrey Farrar

Bilingual children often show advanced executive functioning (EF) and false belief (FB) understanding compared to monolinguals. The latter has been attributed to their enhanced inhibitory control (a component of EF), although this has only been examined in a single study, which did not confirm the hypothesis. The current study examined the relation of EF and language proficiency to FB reasoning in bilingual and monolingual preschoolers, to answer two questions: (1) Are there differences in bilinguals’ and monolinguals’ FB, language proficiency, and EF? If so, (2) is there a differential role for language proficiency and EF in predicting FB reasoning in these two groups? Thirty-two Spanish–English bilinguals and 33 English monolinguals (three to five years old) were compared. While monolinguals outperformed bilinguals on language proficiency, after controlling for this, bilinguals outperformed monolinguals on FB reasoning, and marginally on EF. General language ability was related to FB performance in both groups, while short-term memory and inhibitory control predicted FB only for monolinguals.


2018, Vol. 49 (3), pp. 356–378
Author(s): Genesis D. Arizmendi, Mary Alt, Shelley Gray, Tiffany P. Hogan, Samuel Green, ...

Purpose: The purpose of this study was to examine differences in performance between monolingual and Spanish–English bilingual second graders (aged 7–9 years) on executive function tasks assessing inhibition, shifting, and updating, to contribute more evidence to the ongoing debate about a potential bilingual executive function advantage. Method: One hundred sixty-seven monolingual English-speaking children and 80 Spanish–English bilingual children were administered 7 tasks on a touchscreen computer in the context of a pirate game. Bayesian statistics were used to determine whether there were differences between the monolingual and bilingual groups. Additional analyses involving covariates of maternal level of education and nonverbal intelligence, and matching on these same variables, were also completed. Results: Scaled-information Bayes factor scores more strongly favored the null hypothesis that there were no differences between the bilingual and monolingual groups on any of the executive function tasks. For 2 of the tasks, we found an advantage in favor of the monolingual group. Conclusions: If there is a bilingual advantage in school-aged children, it is not robust across circumstances. We discuss potential factors that might counteract an actual advantage, including task reliability and environmental influences.
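The abstract reports Bayes factors favoring the null hypothesis of no group difference. As a rough illustration only of how a null-versus-group-difference comparison can be quantified with a Bayes factor, the sketch below uses the BIC approximation (which corresponds to a unit-information prior, not the scaled-information prior the authors report); the scores are simulated and only the group sizes come from the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated scores on a single executive-function task; the real study used
# seven touchscreen tasks with 167 monolingual and 80 bilingual children.
df = pd.DataFrame({
    "score": np.concatenate([rng.normal(10, 3, 167), rng.normal(10, 3, 80)]),
    "group": ["monolingual"] * 167 + ["bilingual"] * 80,
})

null_model = smf.ols("score ~ 1", data=df).fit()      # no group difference
alt_model = smf.ols("score ~ group", data=df).fit()   # allows a group difference

# BIC approximation to the Bayes factor in favour of the null (BF01);
# values above 1 indicate the data favour "no difference".
bf01 = np.exp((alt_model.bic - null_model.bic) / 2)
print(f"BF01 ≈ {bf01:.2f}")
```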

