Agreement errors are predicted by rational inference in sentence processing

2021
Author(s):
Rachel Anna Ryskin
Leon Bergen
Edward Gibson

People are able to understand language in challenging settings which often require them to correct for speaker errors, environmental noise, and perceptual unreliability. To account for these abilities, it has recently been proposed that people are adapted to correct for noise during language comprehension, via rational Bayesian inference. In the present research, we demonstrate that a rational noisy-channel framework for sentence comprehension can account for a well-known phenomenon—subject-verb agreement errors (e.g. The key to the cabinets are…). A series of experiments provides evidence that: a) agreement errors are associated with misrepresentations of the sentence consistent with noisy-channel inferences and b) agreement errors are rationally sensitive to environmental statistics and properties of the noise. These findings support the hypothesis that agreement errors in production result in part from a sentence comprehension mechanism that is adapted to understanding language in noisy environments.
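
To make the inference concrete, here is a minimal sketch of the noisy-channel posterior the framework assumes; the notation and the example reading are ours for illustration, not reproduced from the paper.

```latex
% Minimal noisy-channel sketch (illustrative notation, not the authors' model):
% s_i = candidate intended sentence, s_p = perceived string,
% P(s_p | s_i) = a noise model over insertions and deletions.
\[
P(s_i \mid s_p) \;\propto\; P(s_i)\, P(s_p \mid s_i)
\]
% For the perceived "The key to the cabinets are ...", the intended reading
% "The keys to the cabinets are ..." can win if its prior plus the probability
% of losing a plural -s outweigh the literal (ungrammatical) analysis.
```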

2006
Vol 96 (6)
pp. 2830-2839
Author(s):
Arthur Wingfield
Murray Grossman

Human aging brings with it declines in sensory function, both in vision and in hearing, as well as a general slowing in a variety of perceptual and cognitive operations. Yet in spite of these declines, language comprehension typically remains well preserved in normal aging. We review data from functional magnetic resonance imaging (fMRI) to describe a two-component model of sentence comprehension: a core sentence-processing area located in the perisylvian region of the left cerebral hemisphere and an associated network of brain regions that support the working memory and other resources needed for comprehension of long or syntactically complex sentences. We use this two-component model to describe the nature of compensatory recruitment of novel brain regions observed when healthy older adults show the same success at comprehending sentences as their younger adult counterparts. We suggest that this plasticity in neural recruitment contributes to the stability of language comprehension in the aging brain.


2021
Author(s):
Samuel El Bouzaïdi Tiali
Anna Borne
Lucile Meunier
Monica Bolocan
Monica Baciu
...

Purpose: Canonical sentence structures are the most frequently used in a given language. Less frequent or non-canonical sentences tend to be more challenging to process and to induce a higher cognitive load. To deal with this complexity, several authors have suggested that not only linguistic but also non-linguistic (domain-general) mechanisms are involved. In this study, we evaluated the relationship between non-canonical oral sentence comprehension and individual cognitive control abilities. Method: Participants were instructed to perform a sentence-picture verification task with canonical and non-canonical sentences. Sentence structures (i.e., active or passive) and sentence types (i.e., affirmative or negative) were manipulated. Furthermore, each participant performed four cognitive control tasks measuring inhibitory processes, updating in working memory, flexibility and sustained attention. We hypothesized that more complex sentence structures would induce a cognitive cost reflecting the involvement of additional processes, and that this additional cost would be related to cognitive control performance. Results: Results showed better performance for canonical sentences than for non-canonical ones, supporting previous work on passive and negative sentence processing. Correlation results suggest a close relationship between cognitive control mechanisms and non-canonical sentence processing. Conclusion: This study adds evidence for the hypothesis that domain-general mechanisms are involved in oral language comprehension and highlights the importance of taking task demands into consideration when exploring language comprehension mechanisms.


2017
Author(s):
Brian Dillon
Caroline Andrews
Caren M. Rotello
Matthew Wagers

One perennially important question for theories of sentence comprehension is whether the human sentence processing mechanism is parallel (i.e. it simultaneously represents multiple syntactic analyses of linguistic input) or serial (i.e. it constructs only a single analysis at a time). Despite its centrality, this question has proven difficult to address for both theoretical and methodological reasons (Gibson & Pearlmutter, 2000; Lewis, 2000). In the present study, we reassess this question from a novel perspective. We investigated the well-known ambiguity advantage effect (Traxler, Pickering & Clifton, 1998) in a speeded acceptability judgment task. We adopted a Signal Detection Theoretic approach to these data, with the goal of determining whether speeded judgment responses were conditioned on one or multiple syntactic analyses. To link these results to incremental parsing models, we developed formal models to quantitatively evaluate how serial and parallel parsing models should impact perceived sentence acceptability in our task. Our results suggest that speeded acceptability judgments are jointly conditioned on multiple parses of the input, a finding that is overall more consistent with parallel parsing models than serial models. Our study thus provides a new, psychophysical argument for co-active parses during language comprehension.
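
For readers less familiar with the Signal Detection Theoretic approach, the sketch below shows the standard sensitivity and bias calculation such analyses start from; the response proportions and the equal-variance assumption are illustrative only and are not the authors' formal linking models.

```python
# Standard equal-variance SDT measures for a yes/no acceptability task.
# Hit = "acceptable" response to an acceptable item; false alarm = "acceptable"
# response to an unacceptable item. All numbers below are hypothetical.
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return (d_prime, criterion) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # probit (inverse normal CDF) transform
    d_prime = z(hit_rate) - z(false_alarm_rate)              # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))   # response bias
    return d_prime, criterion

# Hypothetical comparison of ambiguous vs. disambiguated sentences:
print(sdt_measures(0.90, 0.30))   # ambiguous items endorsed more often
print(sdt_measures(0.84, 0.30))   # disambiguated counterparts
```

In this framing, an ambiguity advantage surfaces as a difference in endorsement rates, and hence in the estimated parameters, between ambiguous and disambiguated items; the paper's formal models go further by deriving how serial versus parallel parsers should shape those response distributions.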


2019
Vol 47 (3)
pp. 695-708
Author(s):
Benjamin Davies
Nan Xu Rattanasone
Katherine Demuth

Subject–verb (SV) agreement helps listeners interpret the number condition of ambiguous nouns (The sheep is/are fat), yet it remains unclear whether young children use agreement to comprehend newly encountered nouns. Preschoolers and adults completed a forced choice task where sentences contained singular vs. plural copulas (Where is/are the [novel noun(s)]?). Novel nouns were either morphologically unambiguous (tup/tups) or ambiguous (/geks/ = singular: gex / plural: gecks). Preschoolers (and some adults) ignored the singular copula, interpreting /ks/-final words as plural, raising questions about the role of SV agreement in learners’ sentence comprehension and the status of is in Australian English.


2021
Author(s):
Nina Zdorova
Svetlana Malyutina
Anna Laurinavichyute
Anastasiia Kaprielova
Anastasia Kromina
...

Noise, as part of the real-life communication flow, degrades the quality of linguistic input and affects language processing. According to the predictions of the noisy-channel model, noise makes comprehenders rely more on word-level semantics and good-enough processing instead of the actual syntactic relations. However, empirical evidence of such a qualitative effect of noise on sentence processing is still lacking. For the first time, we investigated the qualitative effect of both auditory noise (three-talker babble) and visual noise (short idioms appearing next to the target sentence on the screen) on sentence reading within one study, in two eye-tracking experiments. In both experiments, we used the same stimuli (unambiguous grammatical Russian sentences) and manipulated their semantic plausibility. Our findings suggest that although readers relied on good-enough processing in Russian, neither auditory nor visual noise qualitatively increased reliance on semantics in sentence comprehension. The only effect of noise was found in reading speed: semantically implausible sentences were read more slowly than semantically plausible ones only in the absence of noise, as measured by both early and late eye-movement measures. These results do not support the predictions of the noisy-channel model. With regard to quantitative effects, we found a detrimental effect of auditory noise on overall comprehension accuracy, and an accelerating effect of visual noise on sentence processing without a decrease in accuracy.


Author(s):
Margreet Vogelzang
Christiane M. Thiel
Stephanie Rosemann
Jochem W. Rieger
Esther Ruigendijk

Purpose: Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that the difficulties listeners with hearing loss have in comprehending and processing syntactically complex sentences are due to the processing of degraded input interfering with the successful processing of complex sentences. Method: We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject–object order and verb–arguments order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested. Results: The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data. Conclusions: The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increased cognitive demands, by means of a secondary visual shape processing task, influence neural sentence processing, but no evidence was found that they do so differently for listeners with hearing loss and normal-hearing listeners.


Author(s):
Hiroki Fujita
Ian Cunnings

We report two offline and two eye-movement experiments examining non-native (L2) sentence processing during and after reanalysis of temporarily ambiguous sentences like “While Mary dressed the baby laughed happily”. Such sentences cause reanalysis at the main clause verb (“laughed”), as the temporarily ambiguous noun phrase (“the baby”) may initially be misanalysed as the direct object of the subordinate clause verb (“dressed”). The offline experiments revealed that L2ers have difficulty reanalysing temporarily ambiguous sentences, with greater persistence of the initially assigned misinterpretation than native (L1) speakers. In the eye-movement experiments, we found that L2ers complete reanalysis similarly to L1ers but fail to fully erase the memory trace of the initially assigned interpretation. Our results suggest that the source of L2 reanalysis difficulty is a failure to erase the initially assigned misinterpretation from memory rather than a failure to conduct syntactic reanalysis.


2021
pp. 026765832199790
Author(s):
Anna Chrabaszcz
Elena Onischik
Olga Dragoy

This study examines the role of cross-linguistic transfer versus general processing strategy in two groups of heritage speakers (n = 28 per group) with the same heritage language – Russian – and typologically different dominant languages: English and Estonian. A group of homeland Russian speakers (n = 36) is tested to provide a baseline comparison. Within the framework of the Competition model (MacWhinney, 2012), cross-linguistic transfer is defined as reliance on the processing cue prevalent in the heritage speaker’s dominant language (e.g. word order in English) for comprehension of the heritage language. In accordance with the Isomorphic Mapping Hypothesis (O’Grady and Lee, 2005), the general processing strategy is defined in terms of isomorphism as a linear alignment between the order of the sentence constituents and the temporal sequence of events. Participants were asked to match pictures on the computer screen with auditorily presented sentences. Sentences included locative or instrumental constructions, in which two cues – word order (basic vs. inverted) and isomorphism mapping (isomorphic vs. nonisomorphic) – were fully crossed. The results revealed that (1) Russian native speakers are sensitive to isomorphism in sentence processing; (2) English-dominant heritage speakers experience dominant-language transfer, as evidenced by their reliance primarily on the word order cue; (3) Estonian-dominant heritage speakers do not show significant effects of isomorphism or word order but experience significant processing costs in all conditions.
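
As a concrete gloss on "fully crossed", the two cues jointly define four stimulus conditions; the sketch below simply enumerates them (the condition labels are ours, and no actual Russian materials are reproduced).

```python
# Enumerate the 2 x 2 design: word order crossed with isomorphism of the mapping.
from itertools import product

word_order = ("basic", "inverted")
mapping = ("isomorphic", "nonisomorphic")

for order, iso in product(word_order, mapping):
    print(f"word order: {order:9s} | mapping: {iso}")
# Each of the four cells appears with both locative and instrumental constructions.
```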


2017
Vol 20 (4)
pp. 712-721
Author(s):
Ian Cunnings

The primary aim of my target article was to demonstrate how careful consideration of the working memory operations that underlie successful language comprehension is crucial to our understanding of the similarities and differences between native (L1) and non-native (L2) sentence processing. My central claims were that highly proficient L2 speakers construct similarly specified syntactic parses to those of L1 speakers, and that differences between L1 and L2 processing can be characterised in terms of L2 speakers being more prone to interference during memory retrieval operations. In explaining L1/L2 differences in this way, I argued that a primary source of differences between L1 and L2 processing lies in how different populations of speakers weight the cues that guide memory retrieval.


2010
Vol 31 (3)
pp. 551-569
Author(s):
Yuki Yoshimura
Brian MacWhinney

Case marking is the major cue to sentence interpretation in Japanese, whereas animacy and word order are much weaker cues. However, when subjects and their case markers are omitted, Japanese honorific and humble verbs can provide information that compensates for the missing case role markers. This study examined the use of honorific and humble verbs as cues to case role assignment by Japanese native speakers and second-language learners of Japanese. The results for native speakers replicated earlier findings regarding the predominant strength of case marking. However, when case marking was missing, native speakers relied more on honorific marking than on word order. In these sentences, processing that relied on the honorific cue was delayed by about 100 ms in comparison to processing that relied on the case-marking cue. Learners made extensive use of the honorific agreement cue, but their use of the cue was much less accurate than that of native speakers. In particular, they failed to systematically invoke the agreement cue when case marking was missing. Overall, the findings support the predictions of the Competition Model and extend its coverage to a new type of culturally determined cue.
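
To unpack the cue-competition logic these findings bear on, here is a toy sketch in which each available cue supports one candidate role assignment with some strength; the cue weights and items are invented for illustration and are not the paper's estimates or the model's fitted parameters.

```python
# Toy cue-competition sketch (illustrative weights, not fitted values).
CUE_STRENGTHS = {"case_marking": 0.9, "honorific_agreement": 0.6, "word_order": 0.3}

def choose_agent(candidates, cue_votes):
    """cue_votes maps an available cue to the candidate it supports;
    the candidate with the largest summed cue strength wins."""
    support = {c: 0.0 for c in candidates}
    for cue, candidate in cue_votes.items():
        support[candidate] += CUE_STRENGTHS[cue]
    return max(support, key=support.get)

# With case marking omitted, the honorific cue outweighs word order:
print(choose_agent(["teacher", "student"],
                   {"honorific_agreement": "teacher", "word_order": "student"}))
# -> teacher
```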

