Phasic pupillary responses reveal differential engagement of attentional control in bilingual spoken language processing

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Anne L. Beatty-Martínez ◽  
Rosa E. Guzzardo Tamargo ◽  
Paola E. Dussias

Abstract
Language processing is cognitively demanding, requiring attentional resources to efficiently select and extract linguistic information as utterances unfold. Previous research has associated changes in pupil size with increased attentional effort. However, it is unknown whether the behavioral ecology of speakers may differentially affect engagement of the attentional resources involved in conversation. For bilinguals, such an act potentially involves competing signals in more than one language, and how this competition arises may differ across communicative contexts. We examined changes in pupil size during the comprehension of unilingual and codeswitched speech in a richly characterized bilingual sample. In a visual-world task, participants saw pairs of objects as they heard instructions to select a target image. Instructions were either unilingual or codeswitched from one language to the other. We found that only bilinguals who use each of their languages in separate communicative contexts, and who have high attention ability, show differential attention to unilingual and codeswitched speech. Bilinguals for whom codeswitching is common practice process unilingual and codeswitched speech similarly, regardless of attentional skill. Taken together, these results suggest that bilinguals recruit different language control strategies for distinct communicative purposes. The interactional context of language use critically determines attentional control engagement during language processing.
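Analyses of task-evoked pupillary responses like those described above typically begin with subtractive baseline correction: each trial's pre-stimulus pupil size is subtracted from the post-stimulus trace before dilation is averaged over an analysis window. The sketch below illustrates that generic preprocessing step; all sample values and window sizes are hypothetical and not taken from the study itself.

```python
# Minimal sketch of baseline-corrected pupil dilation (hypothetical data;
# not the authors' analysis code).

def baseline_correct(trace, baseline_samples=10):
    """Subtract the mean pupil size over the pre-stimulus window
    from every sample in the trial."""
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    return [sample - baseline for sample in trace]

def mean_dilation(trace, window):
    """Mean baseline-corrected pupil size over an analysis window."""
    start, end = window
    segment = trace[start:end]
    return sum(segment) / len(segment)

# Hypothetical trial: 10 baseline samples followed by a dilation response.
trial = [3.00] * 10 + [3.05, 3.12, 3.20, 3.25, 3.22, 3.15]
corrected = baseline_correct(trial)
dilation = mean_dilation(corrected, (10, 16))
```

Condition effects (e.g., unilingual vs. codeswitched instructions) are then tested on these per-trial dilation values.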

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Isabell Hubert Lyall ◽  
Juhani Järvikivi

Abstract
Research suggests that listeners' comprehension of spoken language is concurrently affected by linguistic and non-linguistic factors, including individual difference factors. However, there is no systematic research on whether general personality traits affect language processing. We correlated 88 native English-speaking participants' Big-5 traits with their pupillary responses to spoken sentences that included grammatical errors ("He frequently have burgers for dinner"), semantic anomalies ("Dogs sometimes chase teas"), and statements incongruent with gender-stereotyped expectations ("I sometimes buy my bras at Hudson's Bay", spoken by a male speaker). Generalized additive mixed models showed that the listener's Openness, Extraversion, Agreeableness, and Neuroticism traits modulated resource allocation to the three different types of unexpected stimuli. No personality trait affected changes in pupil size across the board: less open participants showed greater pupil dilation when processing sentences with grammatical errors, and more introverted listeners showed greater pupil dilation in response to both semantic anomalies and socio-cultural clashes. Our study is the first to demonstrate that personality traits systematically modulate listeners' online language processing. Our results suggest that individuals with different personality profiles exhibit different patterns of cognitive resource allocation during real-time language comprehension.
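The study's full analysis used generalized additive mixed models over pupil time courses; as a much simpler illustration of the individual-differences logic, one can correlate a trait score with each participant's mean dilation to a stimulus type. The data below are invented purely to show the computation, with a direction mirroring the reported Openness effect.

```python
# Illustrative only (not the study's GAMM analysis): Pearson correlation
# between a hypothetical trait score and per-participant mean dilation.

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Hypothetical: lower Openness pairing with larger dilation to
# grammatical errors, as in the direction of the reported effect.
openness = [2.0, 3.0, 4.0, 5.0, 6.0]
dilation = [0.30, 0.26, 0.21, 0.17, 0.12]
r = pearson_r(openness, dilation)  # negative: less open, more dilation
```

A GAMM extends this idea by letting the trait modulate a smooth function of time rather than a single mean.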


2010 ◽  
Vol 5 (2) ◽  
pp. 255-276 ◽  
Author(s):  
Yury Shtyrov

A long-standing debate in the science of language is whether our capacity to process language draws on attentional resources, or whether some stages or types of this processing may be automatic. I review a series of experiments in which this issue was addressed by modulating the level of attention to the auditory input while recording event-related brain activity elicited by spoken linguistic stimuli. The overall results of these studies show that the language function does possess a certain degree of automaticity, which seems to apply to different types of information. It can be explained, at least in part, by the robustness of strongly connected linguistic memory circuits in the brain that can activate fully even when attentional resources are low. At the same time, this automaticity is limited to the very first stages of linguistic processing (<200 ms from the point in time when the relevant information is available in the auditory input). Later processing steps are, in turn, more affected by attention modulation. These later steps, which possibly reflect a more in-depth, secondary processing or re-analysis and repair of incoming speech, therefore appear dependent on the amount of resources allocated to language. Full processing of spoken language may thus not be possible without allocating attentional resources to it; this allocation in itself may be triggered by the early automatic stages in the first place.


Interpreting ◽  
2017 ◽  
Vol 19 (1) ◽  
pp. 1-20 ◽  
Author(s):  
Ena Hodzik ◽  
John N. Williams

We report a study on prediction in shadowing and simultaneous interpreting (SI), both considered as forms of real-time, ‘online’ spoken language processing. The study comprised two experiments, focusing on: (i) shadowing of German head-final sentences by 20 advanced students of German, all native speakers of English; (ii) SI of the same sentences into English head-initial sentences by 22 advanced students of German, again native English speakers, and also by 11 trainee and practising interpreters. Latency times for input and production of the target verbs were measured. Drawing on studies of prediction in English-language reading production, we examined two cues to prediction in both experiments: contextual constraints (semantic cues in the context) and transitional probability (the statistical likelihood of words occurring together in the language concerned). While context affected prediction during both shadowing and SI, transitional probability appeared to favour prediction during shadowing but not during SI. This suggests that the two cues operate on different levels of language processing in SI.
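Transitional probability, one of the two prediction cues examined above, is the conditional probability P(w2 | w1) of one word following another, estimated from corpus counts. The sketch below shows the standard bigram estimate; the tiny corpus is invented purely for illustration.

```python
# Sketch of transitional probability as a bigram statistic:
# P(w2 | w1) = count(w1 w2) / count(w1 followed by anything).
# The corpus here is a toy example, not the study's materials.

from collections import Counter

def transitional_probability(corpus, w1, w2):
    """Estimate P(w2 | w1) from a tokenized corpus."""
    bigrams = Counter(zip(corpus, corpus[1:]))
    w1_total = sum(c for (a, _), c in bigrams.items() if a == w1)
    return bigrams[(w1, w2)] / w1_total if w1_total else 0.0

corpus = "the dog barked the dog slept the cat slept".split()
p = transitional_probability(corpus, "the", "dog")  # 2 of 3 continuations
```

High-transitional-probability verb contexts are those where the upcoming verb is statistically likely given the preceding words.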


Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose: Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal.
Method: Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH.
Results: Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability.
Conclusions: Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs.
Supplemental Material: https://doi.org/10.23641/asha.14417627
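Eye-gaze measures of the kind described above are commonly summarized as the proportion of gaze samples directed to each picture within an analysis window. The sketch below shows that generic computation; the gaze labels and sampling rate are hypothetical, not the study's data.

```python
# Illustrative fixation-proportion computation for a visual-world trial
# (invented samples; not the authors' analysis code).

def fixation_proportions(samples, regions=("target", "competitor")):
    """Fraction of gaze samples falling on each region of interest."""
    n = len(samples)
    return {r: sum(s == r for s in samples) / n for r in regions}

# Hypothetical gaze samples (e.g., one per 50 ms) after verb onset:
# early competitor looks resolving toward the target.
window = ["competitor", "competitor", "target", "target",
          "target", "target", "target", "other"]
props = fixation_proportions(window)
```

Faster target access shows up as a higher target proportion earlier in the window; weaker suppression of phonological competition shows up as an elevated competitor proportion.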


2021 ◽  
Vol 118 (46) ◽  
pp. e2104779118 ◽
Author(s):  
T. Hannagan ◽  
A. Agrawal ◽  
L. Cohen ◽  
S. Dehaene

The visual word form area (VWFA) is a region of human inferotemporal cortex that emerges at a fixed location in the occipitotemporal cortex during reading acquisition and systematically responds to written words in literate individuals. According to the neuronal recycling hypothesis, this region arises through the repurposing, for letter recognition, of a subpart of the ventral visual pathway initially involved in face and object recognition. Furthermore, according to the biased connectivity hypothesis, its reproducible localization is due to preexisting connections from this subregion to areas involved in spoken-language processing. Here, we evaluate those hypotheses in an explicit computational model. We trained a deep convolutional neural network of the ventral visual pathway, first to categorize pictures and then to recognize written words invariantly for case, font, and size. We show that the model can account for many properties of the VWFA, particularly when a subset of units possesses a biased connectivity to word output units. The network develops a sparse, invariant representation of written words, based on a restricted set of reading-selective units. Their activation mimics several properties of the VWFA, and their lesioning causes a reading-specific deficit. The model predicts that, in literate brains, written words are encoded by a compositional neural code, with neurons tuned either to individual letters at an ordinal position relative to the word's start or end, or to pairs of letters (bigrams).
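The compositional code the model predicts can be made concrete as a set of units: one per letter-at-position counted from the word's start, one per letter-at-position counted from its end, and one per adjacent letter pair (bigram). The encoding below is an illustrative rendering of that idea; the unit naming scheme is invented, not taken from the model.

```python
# Illustrative sketch of a compositional letter-position + bigram code
# (unit labels are hypothetical; the scheme follows the abstract's
# description, not the model's actual implementation).

def letter_position_code(word):
    """Return the set of units activated by a written word."""
    units = set()
    n = len(word)
    for i, ch in enumerate(word):
        units.add(f"{ch}@start+{i}")       # position from word start
        units.add(f"{ch}@end-{n - 1 - i}")  # position from word end
    for a, b in zip(word, word[1:]):
        units.add(f"bigram:{a}{b}")         # adjacent letter pairs
    return units

code = letter_position_code("word")
```

Such a code is sparse and position-invariant in the sense the abstract describes: anagrams share bigram and letter units only where letter order locally agrees.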


2021 ◽  
Vol 12 ◽  
Author(s):  
Jorge Oliveira ◽  
Marta Fernandes ◽  
Pedro J. Rosa ◽  
Pedro Gamito

Research on pupillometry provides increasing evidence for associations between pupil activity and memory processing. The most consistent finding is an increase in pupil size for old items compared with novel items, suggesting that pupil activity is associated with the strength of the memory signal. However, the time course of these changes is not completely known, particularly when items are presented in a running recognition task that maximizes interference by requiring recognition of the most recent items in a sequence of old/new items. The sample comprised 42 healthy participants who performed a visual word recognition task under varying conditions of retention interval. Recognition responses were evaluated using behavioral variables for discrimination accuracy, reaction time, and confidence in recognition decisions. Pupil activity was recorded continuously during the entire experiment. The results suggest a decrease in recognition performance with increasing study-test retention interval. Pupil size decreased across retention intervals, while pupil old/new effects were found only for words recognized at the shortest retention interval. Pupillary responses consisted of a pronounced early pupil constriction at retrieval under longer study-test lags, corresponding to weaker memory signals. However, pupil size was also sensitive to the subjective feeling of familiarity, as shown by pupil dilation to false alarms (new items judged as old). These results suggest that pupil size is related not only to the strength of the memory signal but also to subjective familiarity decisions in a continuous recognition memory paradigm.
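Discrimination accuracy in old/new recognition tasks like this one is conventionally summarized as d', the difference between the z-transformed hit rate and false-alarm rate. The sketch below uses the standard signal-detection formula with invented counts; it is not the study's analysis code.

```python
# Standard d' computation for an old/new recognition task
# (hypothetical counts; not the study's data).

from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

dp = d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

In this framework, the false alarms that drove pupil dilation in the study are "new" items that nonetheless produced an "old" response, i.e., a subjective familiarity signal without a matching memory trace.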


Author(s):  
Michael K. Tanenhaus

Recently, eye movements have become a widely used response measure for studying spoken language processing in both adults and children, in situations where participants comprehend and generate utterances about a circumscribed “Visual World” while fixation is monitored, typically using a free-view eye-tracker. Psycholinguists now use the Visual World eye-movement method to study both language production and language comprehension, in studies that run the gamut of current topics in language processing. Eye movements are a response measure of choice for addressing many classic questions about spoken language processing in psycholinguistics. This article reviews the burgeoning Visual World literature on language comprehension, highlighting some of the seminal studies and examining how the Visual World approach has contributed new insights to our understanding of spoken word recognition, parsing, reference resolution, and interactive conversation. It considers some of the methodological issues that come to the fore when psycholinguists use eye movements to examine spoken language comprehension.

