Human Language Comprehension in Aspect Phrase Extraction with Importance Weighting

Author(s):  
Joschka Kersting ◽  
Michaela Geierhos
2004 ◽  
Vol 8 (5) ◽  
pp. 231-237 ◽  
Author(s):  
Fernanda Ferreira ◽  
Karl G.D. Bailey

2015 ◽  
Vol 1 (1) ◽  
Author(s):  
Phillip M. Alday ◽  
Matthias Schlesewsky ◽  
Ina Bornkessel-Schlesewsky

It has been suggested that, during real-time language comprehension, the human language processing system attempts to identify the argument primarily responsible for the state of affairs (the "actor") as quickly and unambiguously as possible. However, previous work on a prominence-based heuristic for actor identification (using cues such as animacy, definiteness, and case marking) has suffered from underspecification of the relationship between different cue hierarchies. Qualitative work has yielded a partial ordering of many features.
Supplementary material: OpenSesame experiment and Python support scripts, sample stimuli, R scripts for analysis.
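A minimal sketch of what such a prominence-based actor heuristic could look like, assuming a simple weighted sum over binary cues; the cue names and weights below are illustrative assumptions, not values reported in the study.

```python
# Illustrative sketch (not the authors' model): score two arguments on
# prominence cues and pick the more "actor-like" one. Weights are hypothetical.

CUE_WEIGHTS = {
    "animate": 1.0,         # animacy favours actorhood
    "definite": 0.5,        # definiteness is a weaker cue
    "nominative": 1.5,      # unambiguous case marking is a strong cue
    "first_position": 0.5,  # linear order as a weak positional cue
}

def actor_score(argument_features):
    """Sum the weights of the prominence cues an argument carries."""
    return sum(CUE_WEIGHTS[f] for f in argument_features if f in CUE_WEIGHTS)

def identify_actor(arg_a, arg_b):
    """Return whichever argument scores higher; ties stay unresolved (ambiguous)."""
    score_a, score_b = actor_score(arg_a), actor_score(arg_b)
    if score_a == score_b:
        return None
    return "A" if score_a > score_b else "B"

# Example: an animate, nominative, sentence-initial argument vs. a definite one.
print(identify_actor({"animate", "nominative", "first_position"}, {"definite"}))  # -> "A"
```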


2021 ◽  
Author(s):  
Refael Tikochinski ◽  
Ariel Goldstein ◽  
Yaara Yeshurun ◽  
Uri Hasson ◽  
Roi Reichart

Computational Deep Language Models (DLMs) have been shown to be effective in predicting neural responses during natural language processing. This study introduces a novel computational framework, based on the concept of fine-tuning (Hinton, 2007), for modeling differences in the interpretation of narratives that depend on the listener's perspective (i.e. prior knowledge, thoughts, and beliefs). We draw on an fMRI experiment conducted by Yeshurun et al. (2017), in which two groups of listeners heard the same narrative but with two different perspectives (cheating versus paranoia). We collected a dedicated dataset of ~3000 stories and used it to create two modified (fine-tuned) versions of a pre-trained DLM, each representing the perspective of one group of listeners. Information extracted from each fine-tuned model better fit the neural responses of the corresponding group of listeners. Furthermore, we show that the degree of difference between the listeners' interpretations of the story, as measured both neurally and behaviorally, can be approximated from the distances between the representations of the story extracted from the two fine-tuned models. These model-brain associations were expressed in many language-related brain areas, as well as in several higher-order areas related to the default-mode and mentalizing networks, implying that computational fine-tuning reliably captures relevant aspects of human language comprehension across different levels of cognitive processing.
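A minimal sketch of the compare-two-perspective-models idea described above, assuming the Hugging Face transformers API. In the study's setting, the two checkpoints would be copies of one pre-trained DLM, each fine-tuned on perspective-specific stories; here "gpt2" is only a runnable stand-in, and the segment texts are invented examples.

```python
# Sketch: extract per-segment representations from two perspective models and
# measure how far apart they are. Checkpoint names and segments are stand-ins.
import torch
from transformers import AutoTokenizer, AutoModel

def segment_embeddings(model_name, segments):
    """Mean-pooled last-layer hidden states for each narrative segment."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()
    embeddings = []
    with torch.no_grad():
        for text in segments:
            inputs = tokenizer(text, return_tensors="pt")
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
            embeddings.append(hidden.mean(dim=1).squeeze(0))
    return torch.stack(embeddings)

segments = [
    "He said he would be home late again.",
    "She checked his phone while he slept.",
]

# Stand-ins for the two fine-tuned perspective models ("cheating" vs. "paranoia").
emb_a = segment_embeddings("gpt2", segments)
emb_b = segment_embeddings("gpt2", segments)

# Per-segment representational distance between the two perspective models;
# in the study, such distances are related to neural/behavioral divergence
# between the listener groups.
distance = 1 - torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=1)
print(distance)
```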


2016 ◽  
Vol 39 ◽  
Author(s):  
Mary C. Potter

Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In perception of scenes, as in language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.

