Context-dependent Semantic Processing in the Human Brain: Evidence from Idiom Comprehension

2013 · Vol 25 (5) · pp. 762-776
Author(s):  
Joost Rommers ◽  
Ton Dijkstra ◽  
Marcel Bastiaansen

Language comprehension involves activating word meanings and integrating them with the sentence context. This study examined whether these routines are carried out even when they are theoretically unnecessary, namely, in the case of opaque idiomatic expressions, for which the literal word meanings are unrelated to the overall meaning of the expression. Predictable words in sentences were replaced by a semantically related or unrelated word. In literal sentences, this yielded previously established behavioral and electrophysiological signatures of semantic processing: semantic facilitation in lexical decision, a reduced N400 for semantically related relative to unrelated words, and a power increase in the gamma frequency band that was disrupted by semantic violations. However, the same manipulations in idioms yielded none of these effects. Instead, semantic violations elicited a late positivity in idioms. Moreover, gamma band power was lower in correct idioms than in correct literal sentences. It is argued that the brain's semantic expectancy and literal word meaning integration operations can, to some extent, be “switched off” when the context renders them unnecessary. Furthermore, the results lend support to models of idiom comprehension that involve unitary idiom representations.

2018
Author(s):  
M. Gareth Gaskell ◽  
Scott Cairney ◽  
Jennifer M. Rodd

Evidence is growing for the involvement of consolidation processes in the learning and retention of language, largely based on the learning of new linguistic components (e.g., new words). Here, we assessed whether consolidation effects extend to the semantic processing of highly familiar words. The experiments were based on the word-meaning priming paradigm, in which a homophone is encountered in a context that biases interpretation towards the subordinate meaning. The homophone is subsequently used in a word-association test to determine whether the priming encounter facilitates the retrieval of the primed meaning. In Experiment 1 (N = 74), we tested the resilience of priming over periods of 2 and 12 hours that were spent awake or asleep, and found that sleep periods were associated with stronger subsequent priming effects. In Experiment 2 (N = 55), we tested whether the sleep benefit could be explained in terms of a lack of retroactive interference by testing participants 24 hours after priming. Participants who had the priming encounter in the evening showed stronger priming effects after 24 hours than participants primed in the morning, suggesting that sleep makes priming resistant to interference during the following day spent awake. The results suggest that consolidation effects can be found even for highly familiar linguistic materials. We interpret these findings in terms of a contextual binding account in which all language perception provides a learning opportunity, with sleep and consolidation contributing to the updating of our expectations, ready for the next day.


2011 · Vol 23 (9) · pp. 2400-2414
Author(s):  
Dorothee J. Chwilla ◽  
Daniele Virgillito ◽  
Constance Th. W. M. Vissers

According to embodied theories, the symbols used by language are meaningful because they are grounded in perception, action, and emotion. In contrast, according to abstract symbol theories, meaning arises from the syntactic combination of abstract, amodal symbols. If language is grounded in internal bodily states, then one would predict that emotion affects language. Consistent with this, advocates of embodied theories propose a strong link between emotion and language [Havas, D., Glenberg, A. M., & Rinck, M. Emotion simulation during language comprehension. Psychonomic Bulletin & Review, 14, 436–441, 2007; Niedenthal, P. M. Embodying emotion. Science, 316, 1002–1005, 2007]. The goal of this study was to test abstract symbol vs. embodied views of language by investigating whether mood affects semantic processing. To this end, we induced different emotional states (happy vs. sad) by presenting film clips that displayed fragments from a happy movie or a sad movie. The clips were presented before and during blocks of sentences in which the cloze probability of mid-sentence critical words varied (high vs. low). Participants read sentences while ERPs were recorded. The mood induction procedure was successful: Participants watching the happy film clips scored higher on a mood scale than those watching the sad clips. For the N400, mood by cloze probability interactions were obtained. The N400 cloze effect was strongly reduced in the sad mood compared with the happy mood condition. Furthermore, a difference in late positivity was only present for the sad mood condition. The mood by semantic processing interaction observed for the N400 supports embodied theories of meaning and challenges abstract symbol theories that assume that processing of word meaning reflects a modular process.


2006 · Vol 18 (7) · pp. 1181-1197
Author(s):  
Marieke van Herten ◽  
Dorothee J. Chwilla ◽  
Herman H. J. Kolk

Monitoring refers to a process of quality control designed to optimize behavioral outcome. Monitoring for action errors manifests itself in an error-related negativity in event-related potential (ERP) studies and in an increase in activity of the anterior cingulate in functional magnetic resonance imaging studies. Here we report evidence for a monitoring process in perception, in particular, language perception, manifesting itself in a late positivity in the ERP. This late positivity, the P600, appears to be triggered by a conflict between two interpretations, one delivered by the standard syntactic algorithm and one by a plausibility heuristic which combines individual word meanings in the most plausible way. To resolve this conflict, we propose that the brain reanalyzes the memory trace of the perceptual input to check for the possibility of a processing error. Thus, as in Experiment 1, when the reader is presented with semantically anomalous sentences such as, “The fox that shot the poacher…,” full syntactic analysis indicates a semantic anomaly, whereas the word-based heuristic leads to a plausible interpretation, that of a poacher shooting a fox. That readers actually pursue such a word-based analysis is indicated by the fact that the usual ERP index of semantic anomaly, the so-called N400 effect, was absent in this case. A P600 effect appeared instead. In Experiment 2, we found that even when the word-based heuristic indicated that only part of the sentence was plausible (e.g., “…that the elephants pruned the trees”), a P600 effect was observed and the N400 effect of semantic anomaly was absent. It thus seems that the plausibility of part of the sentence (e.g., that of pruning trees) was sufficient to create a conflict with the implausible meaning of the sentence as a whole, giving rise to a monitoring response.


2019
Author(s):  
Jennifer M. Rodd

This chapter focuses on the process by which stored knowledge about a word’s form (orthographic or phonological) maps onto stored knowledge about its meaning. This mapping is made challenging by the ambiguity that is ubiquitous in natural language: most familiar words can refer to multiple different concepts. This one-to-many mapping from form to meaning within the lexicon is a core feature of word-meaning access. Fluent, accurate word-meaning access requires that comprehenders integrate multiple cues in order to determine which of a word’s possible semantic features are relevant in the current context. Specifically, word-meaning access is guided by (i) distributional information about the a priori relative likelihoods of different word meanings and (ii) a wide range of contextual cues that indicate which meanings are most likely in the current context.


2019
Author(s):  
Lin Wang ◽  
Edward Wlotko ◽  
Edward Alexander ◽  
Lotte Schoot ◽  
Minjae Kim ◽  
...  

It has been proposed that people can generate probabilistic predictions at multiple levels of representation during language comprehension. We used magnetoencephalography (MEG) and electroencephalography (EEG), in combination with Representational Similarity Analysis (RSA), to seek neural evidence for the prediction of animacy features. In two studies, MEG and EEG activity was measured as human participants (both sexes) read three-sentence scenarios. Verbs in the final sentences constrained for either animate or inanimate semantic features of upcoming nouns, and the broader discourse context constrained for either a specific noun or for multiple nouns belonging to the same animacy category. We quantified the similarity between spatial patterns of brain activity following the verbs until just before the presentation of the nouns. The MEG and EEG datasets revealed converging evidence that the similarity between spatial patterns of neural activity following animate constraining verbs was greater than following inanimate constraining verbs. This effect could not be explained by lexical-semantic processing of the verbs themselves. We therefore suggest that it reflected the inherent difference in the semantic similarity structure of the predicted animate and inanimate nouns. Moreover, the effect was present regardless of whether a specific word could be predicted, providing strong evidence for the prediction of coarse-grained semantic features that goes beyond the prediction of individual words.

Significance statement: Language inputs unfold very quickly during real-time communication. By predicting ahead we can give our brains a “head-start”, so that language comprehension is faster and more efficient. While most contexts do not constrain strongly for a specific word, they do allow us to predict some upcoming information. For example, following the context, “they cautioned the…”, we can predict that the next word will be animate rather than inanimate (we can caution a person, but not an object). Here we used EEG and MEG techniques to show that the brain is able to use these contextual constraints to predict the animacy of upcoming words during sentence comprehension, and that these predictions are associated with specific spatial patterns of neural activity.
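The core RSA quantity in this design is the average similarity between trial-wise spatial patterns of sensor activity, which should be higher when the predicted nouns share semantic (animacy) features. The sketch below illustrates only that logic on synthetic data, not the authors' actual pipeline; the array shapes and the `mean_pattern_similarity` helper are assumptions for illustration.

```python
import numpy as np

def mean_pattern_similarity(patterns):
    """Average pairwise Pearson correlation between trials' spatial patterns.

    patterns: (n_trials, n_sensors) array, one spatial pattern per trial,
    e.g. MEG/EEG sensor amplitudes at one time point.
    """
    r = np.corrcoef(patterns)             # trial-by-trial correlation matrix
    iu = np.triu_indices_from(r, k=1)     # upper triangle, excluding diagonal
    return r[iu].mean()

rng = np.random.default_rng(0)
shared = rng.normal(size=32)                        # shared component -> similar patterns
animate = shared + 0.3 * rng.normal(size=(20, 32))  # trials sharing predicted features
inanimate = rng.normal(size=(20, 32))               # independent trials, no shared component

print(mean_pattern_similarity(animate) > mean_pattern_similarity(inanimate))  # True
```

The shared component plays the role of overlapping predicted semantic features: the more feature overlap across trials, the higher the mean pairwise pattern correlation.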


2009 · Vol 20 (5) · pp. 578-585
Author(s):  
Michael C. Frank ◽  
Noah D. Goodman ◽  
Joshua B. Tenenbaum

Word learning is a “chicken and egg” problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference.
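Frank et al.'s model performs joint probabilistic inference over speakers' intentions and the lexicon; the snippet below implements only the much simpler cross-situational co-occurrence baseline that such models are compared against, so the toy corpus and the `cross_situational` helper are invented for illustration.

```python
from collections import Counter

def cross_situational(corpus):
    """Map each word to the object it most reliably co-occurs with.

    corpus: list of (words, objects) pairs, one per observed scene.
    Scores candidate referents by relative co-occurrence frequency.
    """
    pair_counts = Counter()
    word_counts = Counter()
    for words, objects in corpus:
        for w in words:
            word_counts[w] += 1
            for o in objects:
                pair_counts[(w, o)] += 1
    # For each word, pick the object with the highest P(object | word) estimate
    return {w: max((pair_counts[(w, o)] / word_counts[w], o)
                   for (w2, o) in pair_counts if w2 == w)[1]
            for w in word_counts}

corpus = [({"look", "dog"}, {"DOG"}),
          ({"the", "dog", "ball"}, {"DOG", "BALL"}),
          ({"ball"}, {"BALL"})]
lexicon = cross_situational(corpus)
print(lexicon["dog"], lexicon["ball"])  # DOG BALL
```

Unlike this baseline, the intentional model in the article additionally infers which objects the speaker is talking about in each scene, which is what yields phenomena such as mutual exclusivity and one-trial learning.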


Author(s):  
Robert R. Peterson ◽  
Curt Burgess ◽  
Gary S. Dell ◽  
Kathleen M. Eberhard

2015 · Vol 27 (11) · pp. 2095-2107
Author(s):  
Marcel Bastiaansen ◽  
Peter Hagoort

During sentence level language comprehension, semantic and syntactic unification are functionally distinct operations. Nevertheless, both recruit roughly the same brain areas (spatially overlapping networks in the left frontotemporal cortex) and happen at the same time (in the first few hundred milliseconds after word onset). We tested the hypothesis that semantic and syntactic unification are segregated by means of neuronal synchronization of the functionally relevant networks in different frequency ranges: gamma (40 Hz and up) for semantic unification and lower beta (10–20 Hz) for syntactic unification. EEG power changes were quantified as participants read either correct sentences, syntactically correct though meaningless sentences (syntactic prose), or sentences that did not contain any syntactic structure (random word lists). Other sentences contained either a semantic anomaly or a syntactic violation at a critical word in the sentence. Larger EEG gamma-band power was observed for semantically coherent than for semantically anomalous sentences. Similarly, beta-band power was larger for syntactically correct sentences than for incorrect ones. These results confirm the existence of a functional dissociation in EEG oscillatory dynamics during sentence level language comprehension that is compatible with the notion of a frequency-based segregation of syntactic and semantic unification.
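As a minimal sketch of the band-limited power measure this kind of study relies on (not the authors' actual time-frequency pipeline, which typically uses wavelet or multitaper estimates), the snippet below sums FFT power within a frequency band for a synthetic signal; the `band_power` helper and all parameters are assumptions for illustration.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Total spectral power of `signal` within [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2   # unnormalized power spectrum
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

fs = 500                          # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)       # 2 s of samples
eeg = np.sin(2 * np.pi * 45 * t)  # synthetic 45 Hz "gamma" oscillation

gamma = band_power(eeg, fs, 40, 60)  # band containing the oscillation
beta = band_power(eeg, fs, 13, 20)   # band without it
print(gamma > beta)  # True
```

Comparing such band-power estimates between conditions (coherent vs. anomalous sentences, per band) is the logic behind the gamma/beta dissociation reported above.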


2009 · Vol 1 (1) · pp. 45-58
Author(s):  
Lawrence J. Taylor ◽  
Rolf A. Zwaan

Empirical research has shown that the processing of words and sentences is accompanied by activation of the brain's motor system in language users. The degree of precision observed in this activation seems to be contingent upon (1) the meaning of a linguistic construction and (2) the depth with which readers process that construction. In addition, neurological evidence shows a correspondence between a disruption in the neural correlates of overt action and the disruption of semantic processing of language about action. These converging lines of evidence can be taken to support the hypotheses that motor processes (1) are recruited to understand language that focuses on actions and (2) contribute a unique element to conceptual representation. This article explores the role of this motor recruitment in language comprehension. It concludes that extant findings are consistent with the theorized existence of multimodal, embodied representations of the referents of words and the meaning carried by language. Further, an integrative conceptualization of “fault tolerant comprehension” is proposed.

