Bayesian Surprise
Recently Published Documents


TOTAL DOCUMENTS: 26 (five years: 6)

H-INDEX: 6 (five years: 1)

2021 ◽ Vol 12 ◽ Author(s): Thomas Hörberg, T. Florian Jaeger

A central component of sentence understanding is verb-argument interpretation, determining how the referents in the sentence are related to the events or states expressed by the verb. Previous work has found that comprehenders change their argument interpretations incrementally as the sentence unfolds, based on morphosyntactic (e.g., case, agreement), lexico-semantic (e.g., animacy, verb-argument fit), and discourse cues (e.g., givenness). However, it is still unknown whether these cues have a privileged role in language processing, or whether their effects on argument interpretation originate in implicit expectations based on the joint distribution of these cues with argument assignments experienced in previous language input. We compare the former, linguistic account against the latter, expectation-based account, using data from the production and comprehension of transitive clauses in Swedish. Based on a large corpus of Swedish, we develop a rational (Bayesian) model of incremental argument interpretation. This model predicts the processing difficulty experienced at different points in the sentence as a function of the Bayesian surprise associated with changes in expectations over possible argument interpretations. We then test the model against reading times from a self-paced reading experiment on Swedish. We find Bayesian surprise to be a significant predictor of reading times, complementing effects of word surprisal. Bayesian surprise also captures the qualitative effects of morphosyntactic and lexico-semantic cues. Additional model comparisons find that Bayesian surprise, with a single degree of freedom, captures much if not all of the effects associated with these cues. This suggests that the effects of form- and meaning-based cues to argument interpretation are mediated through expectation-based processing.
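As a minimal illustration of the quantity this kind of model predicts reading times from, the sketch below computes Bayesian surprise as the KL divergence between beliefs over argument interpretations before and after a cue is encountered. The two interpretation labels and the probabilities are hypothetical placeholders for illustration, not values from the paper's corpus-based model.

```python
import numpy as np

def kl_bits(p_new, p_old):
    """Bayesian surprise: KL divergence (in bits) from the old belief to the new one."""
    p_new, p_old = np.asarray(p_new, float), np.asarray(p_old, float)
    mask = p_new > 0
    return float(np.sum(p_new[mask] * np.log2(p_new[mask] / p_old[mask])))

# Hypothetical beliefs over two argument interpretations of a transitive clause:
# P(agent-first), P(patient-first).
belief_before_cue = np.array([0.85, 0.15])  # prior favouring agent-first order
belief_after_cue  = np.array([0.30, 0.70])  # e.g. after a case-marked second argument

# Larger belief shifts yield larger Bayesian surprise, the model's difficulty predictor.
print(kl_bits(belief_after_cue, belief_before_cue))
```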


2019 ◽ Author(s): Claire Chambers, Nidhi Seethapathi, Rachit Saluja, Helen Loeb, Samuel Pierce, ...

Abstract: An infant's risk of developing neuromotor impairment is primarily assessed through visual examination by specialized clinicians. Because these examinations depend on specialized clinicians, many infants at risk for impairment go undetected, particularly in under-resourced environments. There is thus a need to develop automated clinical assessments based on quantitative measures from widely available sources, such as video cameras. Here, we automatically extract body poses and movement kinematics from videos of at-risk infants (N = 19). For each infant, we calculate how much they deviate from a group of healthy infants (N = 85 online videos) using Naïve Gaussian Bayesian Surprise. After pre-registering our Bayesian Surprise calculations, we find that infants at higher risk for impairments deviate considerably from the healthy group. Our simple method, provided as an open-source toolkit, thus shows promise as the basis for an automated and low-cost assessment of risk based on video recordings.
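One plausible reading of a per-feature Gaussian surprise score is sketched below: fit independent ("naïve") Gaussians per kinematic feature to the pooled healthy-group data, fit them to the frames of a single infant's video, and sum the per-feature KL divergences. The array layout and this particular formulation are assumptions made for illustration; the exact pre-registered computation is specified in the paper and its open-source toolkit.

```python
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Elementwise KL( N(mu_q, var_q) || N(mu_p, var_p) ) for univariate Gaussians."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def naive_gaussian_surprise(healthy, infant):
    """Surprise of one infant's kinematic features relative to a healthy reference group.

    healthy: (n_healthy_frames, n_features) pooled reference features
    infant:  (n_frames, n_features) features extracted from one infant's video
    Features are treated as independent Gaussians (the 'naive' assumption).
    """
    mu_p, var_p = healthy.mean(axis=0), healthy.var(axis=0) + 1e-8
    mu_q, var_q = infant.mean(axis=0), infant.var(axis=0) + 1e-8
    # Sum the per-feature divergences into a single deviation score.
    return float(np.sum(gaussian_kl(mu_q, var_q, mu_p, var_p)))
```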


2019 ◽ Author(s): Lea Musiolek, Felix Blankenburg, Dirk Ostwald, Milena Rabovsky

2017 ◽ Author(s): Shaorong Yan, Gina R. Kuperberg, T. Florian Jaeger

Abstract: The extent to which language processing involves prediction of upcoming inputs remains a question of ongoing debate. One important data point comes from DeLong et al. (2005), who reported that an N400-like event-related potential correlated with a probabilistic index of upcoming input. This result is often cited as evidence for gradient probabilistic prediction of form and/or semantics before the bottom-up input becomes available. However, a recent multi-lab study reports a failure to find these effects (Nieuwland et al., 2017). We review the evidence from both studies, including differences in their design and analysis approaches. Building on over a decade of research on prediction since DeLong et al.'s (2005) original study, we also begin to spell out the computational nature of the predictive processes that one might expect to correlate with ERPs evoked by a functional element whose form depends on an upcoming predicted word. For paradigms with this type of design, we propose an index of anticipatory processing, Bayesian surprise, and apply it to the updating of semantic predictions. We motivate this index both theoretically and empirically. We show that, for studies of the type discussed here, Bayesian surprise can be closely approximated by another, more easily estimated information-theoretic index: the surprisal (or Shannon information) of the input. We re-analyze the data from Nieuwland and colleagues using surprisal rather than raw probabilities as an index of prediction. We find that surprisal is gradiently correlated with the amplitude of the N400, even in the data shared by Nieuwland and colleagues. Taken together, our review suggests that the evidence from both studies is compatible with anticipatory semantic processing. We do, however, emphasize the need for future studies to further clarify the nature and degree of form prediction, as well as its neural signatures, during language comprehension.
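To see why Bayesian surprise tracks surprisal in this kind of paradigm, consider the toy calculation below: observing the article "an" simply conditions a cloze-style distribution over predicted nouns on vowel-initial continuations, and under pure conditioning the KL divergence between posterior and prior equals the surprisal of the observed input. The nouns and probabilities are made up for illustration and are not taken from either study.

```python
import numpy as np

# Hypothetical cloze probabilities over predicted nouns before the article appears.
prior = {"kite": 0.70, "airplane": 0.20, "umbrella": 0.07, "balloon": 0.03}

# Observing the article "an" is compatible only with vowel-initial nouns.
compatible = {w for w in prior if w[0] in "aeiou"}

# Posterior: condition the prior on compatibility with "an".
p_input = sum(prior[w] for w in compatible)  # total probability of the observed input
posterior = {w: (prior[w] / p_input if w in compatible else 0.0) for w in prior}

# Bayesian surprise = KL(posterior || prior), summed over nonzero posterior mass.
bayes_surprise = sum(posterior[w] * np.log2(posterior[w] / prior[w])
                     for w in prior if posterior[w] > 0)

# Surprisal of the article = -log2 P(any compatible continuation).
surprisal = -np.log2(p_input)

print(bayes_surprise, surprisal)  # identical when the update is pure conditioning
```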

