Incremental Interpretation
Recently Published Documents

TOTAL DOCUMENTS: 30 (Five Years: 5)
H-INDEX: 10 (Five Years: 1)

2021 ◽  
Author(s):  
Bingjiang Lyu ◽  
Lorraine K. Tyler ◽  
Yuxing Fang ◽  
William D. Marslen-Wilson

The emergence of AI systems that emulate the remarkable human capacity for language has raised fundamental questions about complex cognition in humans and machines. This lively debate has largely taken place, however, in the absence of specific empirical evidence about how the internal operations of artificial neural networks (ANNs) relate to processes in the human brain as human listeners hear and understand language. To directly evaluate these parallels, we extracted multi-level measures of word-by-word sentence interpretation from ANNs, and used Representational Similarity Analysis (RSA) to test these against the representational geometries of real-time brain activity for the same sentences heard by human listeners. These uniquely spatiotemporally specific comparisons reveal deep commonalities in the use of multi-dimensional probabilistic constraints to drive incremental interpretation processes in both humans and machines. At the same time, however, they demonstrate profound differences in the underlying functional architectures that implement this shared algorithmic alignment.
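The RSA comparison described above can be sketched in a few lines: compute a representational dissimilarity matrix (RDM) for each system, then rank-correlate the two geometries. This is a generic illustration, not the authors' pipeline; the pattern matrices, dimensions, and random seed below are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix: pairwise
    correlation distances between condition patterns (rows)."""
    return pdist(patterns, metric="correlation")

def rsa_score(patterns_a, patterns_b):
    """Second-order similarity: Spearman correlation of the two RDMs."""
    rho, _ = spearmanr(rdm(patterns_a), rdm(patterns_b))
    return rho

rng = np.random.default_rng(0)
ann_patterns = rng.normal(size=(10, 50))    # 10 sentences x 50 ANN units
mixing = rng.normal(size=(50, 30))          # arbitrary linear "read-out"
brain_patterns = ann_patterns @ mixing      # toy "brain" data, 30 sources

# A linear remapping largely preserves representational geometry,
# so the second-order (RDM-to-RDM) correlation comes out positive.
print(round(rsa_score(ann_patterns, brain_patterns), 3))
```

The point of the second-order comparison is that the two systems need not share dimensionality or units; only the *geometry* of pairwise distances is compared.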


2020 ◽  
Author(s):  
Yuxing Fang ◽  
Bingjiang Lyu ◽  
Benedict Vassileiou ◽  
Kamen Tsvetanov ◽  
Lorraine Tyler ◽  
...  

2019 ◽  
Vol 116 (42) ◽  
pp. 21318-21327 ◽  
Author(s):  
Bingjiang Lyu ◽  
Hun S. Choi ◽  
William D. Marslen-Wilson ◽  
Alex Clarke ◽  
Billi Randall ◽  
...  

Human speech comprehension is remarkable for its immediacy and rapidity. The listener interprets an incrementally delivered auditory input, millisecond by millisecond as it is heard, in terms of complex multilevel representations of relevant linguistic and nonlinguistic knowledge. Central to this process are the neural computations involved in semantic combination, whereby the meanings of words are combined into more complex representations, as in the combination of a verb and its following direct object (DO) noun (e.g., “eat the apple”). These combinatorial processes form the backbone for incremental interpretation, enabling listeners to integrate the meaning of each word as it is heard into their dynamic interpretation of the current utterance. Focusing on the verb-DO noun relationship in simple spoken sentences, we applied multivariate pattern analysis and computational semantic modeling to source-localized electro/magnetoencephalographic data to map out the specific representational constraints that are constructed as each word is heard, and to determine how these constraints guide the interpretation of subsequent words in the utterance. Comparing context-independent semantic models of the DO noun with contextually constrained noun models reflecting the semantic properties of the preceding verb, we found that only the contextually constrained model showed a significant fit to the brain data. Pattern-based measures of directed connectivity across the left hemisphere language network revealed a continuous information flow among temporal, inferior frontal, and inferior parietal regions, underpinning the verb’s modification of the DO noun’s activated semantics. These results provide a plausible neural substrate for seamless real-time incremental interpretation on the observed millisecond time scales.
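The idea of a contextually constrained noun model can be illustrated with a toy feature-vector sketch (not the authors' computational semantic model): the verb reweights the semantic features of its direct-object noun, so the same noun activates different semantics after different verbs. All features and weights here are invented.

```python
import numpy as np

features = ["is_food", "is_round", "grows_on_trees", "is_thrown"]
apple = np.array([0.9, 0.8, 0.9, 0.3])           # context-independent noun vector
eat_relevance = np.array([1.0, 0.2, 0.3, 0.1])   # features "eat" selects for
throw_relevance = np.array([0.1, 0.3, 0.1, 1.0]) # features "throw" selects for

def constrain(noun_vec, verb_relevance):
    """Scale noun features by verb relevance, then renormalise."""
    v = noun_vec * verb_relevance
    return v / np.linalg.norm(v)

eat_apple = constrain(apple, eat_relevance)
throw_apple = constrain(apple, throw_relevance)

# The contextually constrained vectors differ although the noun is the same:
print(features[int(np.argmax(eat_apple))])    # dominant feature after "eat"
print(features[int(np.argmax(throw_apple))])  # dominant feature after "throw"
```

The contrast mirrors the paper's finding: a context-independent `apple` vector fits neither context, while the verb-constrained vector captures what the listener has actually computed by the time the noun is heard.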


2019 ◽  
Vol 9 (17) ◽  
pp. 3522
Author(s):  
Refuoe Mokhosi ◽  
ZhiGuang Qin ◽  
Qiao Liu ◽  
Casper Shikali

Aspect-level sentiment analysis has drawn growing attention in recent years, with higher performance achieved through the attention mechanism. Despite this, previous research does not consider some human psychological evidence relating to language interpretation. As a result, attention is paid to less significant words, especially when the aspect word is far from the relevant context word or when an important context word appears at the end of a long sentence. We design a novel model that uses word significance to direct attention towards the most significant words, with novelty decay and incremental interpretation factors working together as an alternative to position-based models. The interpretation factor maximizes the degree to which each newly encountered word contributes to the sentiment polarity, while a counterbalancing stretched-exponential novelty decay factor models decaying human reaction as a sentence gets longer. Our findings support the hypothesis that the attention mechanism should be applied to the most significant words for sentiment interpretation, and that novelty decay is applicable in aspect-level sentiment analysis with a decay factor β = 0.7.
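A minimal sketch of the stretched-exponential novelty decay described above, assuming the standard form exp(-(t/τ)^β) with the paper's β = 0.7; the time constant τ and the toy attention scores are assumptions for illustration, not values from the paper.

```python
import math

def novelty_decay(position, beta=0.7, tau=5.0):
    """Stretched-exponential decay for a word at 1-based sentence position."""
    return math.exp(-((position / tau) ** beta))

def decayed_attention(scores):
    """Apply decay to raw attention scores, then renormalise to sum to 1."""
    weighted = [s * novelty_decay(i + 1) for i, s in enumerate(scores)]
    total = sum(weighted)
    return [w / total for w in weighted]

raw = [0.2, 0.5, 0.1, 0.9, 0.3]   # toy per-word attention scores
attn = decayed_attention(raw)
print([round(a, 3) for a in attn])
```

With β < 1 the decay falls off more slowly than a plain exponential, so late-sentence words are damped but not erased, which matches the motivation of tempering, rather than replacing, position effects.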


2017 ◽  
Vol 27 ◽  
pp. 680
Author(s):  
Stavroula Alexandropoulou ◽  
Jakub Dotlačil ◽  
Rick Nouwen

We investigate the incremental interpretation of comparative and superlative numeral modifiers by manipulating the speaker’s epistemic state in an eye-tracking reading experiment. The results reveal a different processing profile for two types of numeral modifiers. We take this difference to point to a difference in the source and nature of the attested effects (e.g., Quantity- vs. Manner-based pragmatic reasoning). Our findings inform the existing theoretical landscape, invalidating a number of accounts of speaker ignorance effects with numeral modifiers and giving support to Quantity-based accounts of such effects with superlative modifiers.


Author(s):  
Y. Dehbi ◽  
C. Staat ◽  
L. Mandtler ◽  
L. Plümer

Data acquisition using unmanned aerial vehicles (UAVs) has received increasing attention in recent years. Especially in the field of building reconstruction, the incremental interpretation of such data is a demanding task. In this context, formal grammars play an important role in the top-down identification and reconstruction of building objects. Up to now, available approaches have expected offline data in order to parse an a priori known grammar. For mapping on demand, an on-the-fly reconstruction based on UAV data is required; an incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for automatic 3D building reconstruction. The parser enables model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can handle and adapt parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated iteratively using transformation rules, and a diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated, given by probability densities as well as architectural patterns. Since normal distributions cannot always be assumed, the derivation of location and shape parameters of building objects is based on kernel density estimation (KDE). While the level of detail is continuously improved, geometrical, semantic and topological consistency is ensured.
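The KDE step for deriving location and shape parameters can be sketched as follows; the window-width observations and the use of SciPy's `gaussian_kde` are illustrative assumptions, not the authors' exact estimator. The point is that a density estimate over noisy measurements yields a robust mode even when the data are not normally distributed.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical widths (m) of one façade window measured across several
# UAV passes; one gross outlier (1.80) simulates a bad observation.
widths = np.array([1.18, 1.22, 1.20, 1.25, 1.19, 1.80, 1.21])

kde = gaussian_kde(widths)                 # Scott's rule bandwidth by default
grid = np.linspace(0.5, 2.5, 401)
density = kde(grid)
mode = float(grid[int(np.argmax(density))])  # most probable width

print(round(mode, 2))   # close to the 1.2 m cluster, robust to the outlier
```

A sample mean would be pulled toward 1.29 m by the outlier; the KDE mode stays near the consistent cluster, which is why a non-parametric estimate is attractive when normality cannot be assumed.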


2013 ◽  
Vol 1 ◽  
pp. 111-124
Author(s):  
Federico Sangati ◽  
Frank Keller

In this paper, we present the first incremental parser for Tree Substitution Grammar (TSG). A TSG allows arbitrarily large syntactic fragments to be combined into complete trees; we show how constraints (including lexicalization) can be imposed on the shape of the TSG fragments to enable incremental processing. We propose an efficient Earley-based algorithm for incremental TSG parsing and report an F-score competitive with other incremental parsers. In addition to whole-sentence F-score, we also evaluate the partial trees that the parser constructs for sentence prefixes; partial trees play an important role in incremental interpretation, language modeling, and psycholinguistics. Unlike existing parsers, our incremental TSG parser can generate partial trees that include predictions about the upcoming words in a sentence. We show that it outperforms an n-gram model in predicting more than one upcoming word.
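The Earley-style incremental strategy can be illustrated with a compact recognizer over a plain CFG (a TSG additionally combines arbitrarily large tree fragments, which this sketch omits). The toy grammar and sentences are invented; the predict/scan/complete structure is the standard Earley algorithm.

```python
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["boy"], ["apple"]],
    "V":   [["eats"]],
}

def earley_recognize(words):
    """Return True iff `words` is in the language, scanning one word at a time.
    A state is (lhs, rhs tuple, dot position, origin chart index)."""
    chart = [set() for _ in range(len(words) + 1)]
    chart[0].add(("GAMMA", ("S",), 0, 0))
    for i in range(len(words) + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in GRAMMAR:                          # predict
                    for prod in GRAMMAR[sym]:
                        st = (sym, tuple(prod), 0, i)
                        if st not in chart[i]:
                            chart[i].add(st)
                            agenda.append(st)
                elif i < len(words) and words[i] == sym:    # scan
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:                                           # complete
                for l2, r2, d2, o2 in list(chart[origin]):
                    if d2 < len(r2) and r2[d2] == lhs:
                        st = (l2, r2, d2 + 1, o2)
                        if st not in chart[i]:
                            chart[i].add(st)
                            agenda.append(st)
    return ("GAMMA", ("S",), 1, 0) in chart[len(words)]

print(earley_recognize("the boy eats the apple".split()))
```

Because each chart column is built as soon as its word arrives, the intermediate columns are exactly the "partial trees for sentence prefixes" the paper evaluates: the dotted states at column *i* encode every prediction about how the sentence can continue.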

