The role of word meaning in syntax

Author(s): Stephen Wechsler
Corpora, 2021, Vol 16 (3), pp. 379-416
Author(s): Tatyana Karpenko-Seccombe

This paper considers the role of historical context in initiating shifts in word meaning. The study focusses on two words – the translation equivalents separatist and separatism – in the discourses of Russian and Ukrainian parliamentary debates before and during the Russian–Ukrainian conflict that emerged at the beginning of 2014. The paper employs cross-linguistic corpus-assisted discourse analysis to investigate how the wider socio-political context affects word usage and meaning. To allow a comparison of discourses around separatism between the two parliaments, four corpora were compiled covering the debates in both parliaments before and during the conflict. Keywords, collocations and n-grams were studied and compared, followed by qualitative analysis of concordance lines, co-text and the larger context in which these words occurred. The results show how the originally close meanings of the translation equivalents began to diverge at a time of conflict, manifesting noticeable changes in their connotative, affective and, to an extent, denotative meanings, in line with the dominant ideologies of the parliaments as well as the political affiliations of individuals.
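The keyword step mentioned above is a standard corpus-linguistic computation. As a rough illustration of one common approach (not necessarily the author's exact procedure), the sketch below ranks words by Dunning's log-likelihood keyness between a study corpus and a reference corpus; the toy token lists are illustrative assumptions.

```python
import math
from collections import Counter

def log_likelihood(freq_study, size_study, freq_ref, size_ref):
    """Dunning's log-likelihood keyness statistic for one word."""
    expected_study = size_study * (freq_study + freq_ref) / (size_study + size_ref)
    expected_ref = size_ref * (freq_study + freq_ref) / (size_study + size_ref)
    ll = 0.0
    if freq_study > 0:
        ll += freq_study * math.log(freq_study / expected_study)
    if freq_ref > 0:
        ll += freq_ref * math.log(freq_ref / expected_ref)
    return 2 * ll

def keywords(study_tokens, ref_tokens, top_n=20):
    """Rank words that are unusually frequent in the study corpus."""
    study, ref = Counter(study_tokens), Counter(ref_tokens)
    n_study, n_ref = len(study_tokens), len(ref_tokens)
    scores = {
        w: log_likelihood(study[w], n_study, ref.get(w, 0), n_ref)
        for w in study
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy usage: pre-conflict vs. conflict-period debate fragments.
pre = "the budget committee discussed the budget and taxes".split()
post = "the separatists and separatism dominated the debates".split()
print(keywords(post, pre, top_n=5))
```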


2009, Vol 20 (5), pp. 578-585
Author(s): Michael C. Frank, Noah D. Goodman, Joshua B. Tenenbaum

Word learning is a “chicken and egg” problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference.
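As a point of reference for the comparison models mentioned above, the sketch below implements a much-simplified cross-situational associative learner: it spreads one unit of evidence across all objects visible when a word is heard and picks the highest-scoring pairing. This is an illustrative baseline, not the authors' joint intention-and-lexicon model; the class and variable names are assumptions.

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Simplified cross-situational learner: accumulates word-object
    co-occurrence evidence and normalises it into association scores.
    (The full model additionally infers speaker intentions.)"""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))

    def observe(self, words, objects):
        # Each word could refer to any visible object, so spread
        # one unit of evidence across all candidates in the scene.
        for w in words:
            for o in objects:
                self.counts[w][o] += 1.0 / len(objects)

    def meaning(self, word):
        """Best-guess referent plus the normalised association scores."""
        candidates = self.counts[word]
        total = sum(candidates.values())
        scores = {o: c / total for o, c in candidates.items()}
        return max(candidates, key=candidates.get), scores

learner = CrossSituationalLearner()
# Ambiguous scenes: 'dog' co-occurs with a dog every time, a ball only once.
learner.observe(["look", "dog"], ["DOG", "BALL"])
learner.observe(["the", "dog"], ["DOG", "CUP"])
print(learner.meaning("dog"))  # DOG wins once evidence is aggregated
```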


1994, Vol 9 (1), pp. 45-75
Author(s): Mutsumi Imai, Dedre Gentner, Nobuko Uchida

2010, Vol 5 (2), pp. 231-254
Author(s): Véronique Boulenger, Tatjana A. Nazir

Theories of embodied cognition consider language understanding to be intimately linked to sensory and motor processes. Here we review evidence from kinematic and electrophysiological studies for the idea that processing words referring to bodily actions, even when they are presented subliminally, recruits the same motor regions that are involved in motor control. We further discuss the functional role of the motor system in action-word retrieval in light of neuropsychological data showing that masked priming effects for action verbs in Parkinson's patients are modulated as a function of dopaminergic treatment. Finally, we present a neuroimaging study revealing semantic somatotopy in the motor cortex during reading of idioms that include action words. Altogether, these findings provide strong arguments that semantic mechanisms are grounded in the action–perception systems of the brain. They support the existence of common brain signatures for motor actions and for action words, even when the words are embedded in idiomatic sentences. They further suggest that motor schemata reflecting word meaning contribute to the lexico-semantic retrieval of action words.


PLoS ONE, 2021, Vol 16 (3), e0248388
Author(s): Les Sikos, Noortje J. Venhuizen, Heiner Drenhaus, Matthew W. Crocker

The results of a highly influential study that tested the predictions of the Rational Speech Act (RSA) model suggest (a) that listeners use pragmatic reasoning in one-shot web-based referential communication games, despite the artificial, highly constrained, and minimally interactive nature of the task, and (b) that RSA accurately captures this behavior. In this work, we reevaluate the contribution of the pragmatic reasoning formalized by RSA in explaining listener behavior by comparing RSA to a baseline literal-listener model that is driven only by literal word meaning and the prior probability of referring to an object. Across three experiments we observe only modest evidence of pragmatic behavior in one-shot web-based language games, and only under very limited circumstances. We find that although RSA provides a strong fit to listener responses, it does not perform better than the baseline literal-listener model. Our results suggest that while participants playing the role of the Speaker are informative in these one-shot web-based reference games, participants playing the role of the Listener only rarely take this Speaker behavior into account to reason about the intended referent. In addition, we show that RSA's fit is primarily due to a combination of non-pragmatic factors, perhaps the most surprising of which is that in the majority of conditions that are amenable to pragmatic reasoning, RSA (accurately) predicts that listeners will behave non-pragmatically. This leads us to conclude that RSA's strong overall correlation with human behavior in one-shot web-based language games does not reflect listeners' pragmatic reasoning about informative speakers.
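The two models being compared have compact standard formulations. The sketch below shows a minimal version of both: the baseline literal listener L0, driven only by literal word meaning and the referent prior, and RSA's pragmatic listener L1, which reasons about an informative speaker S1. The toy two-object, two-word lexicon and uniform prior are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

# Toy reference game. Objects: o1 = face with glasses,
# o2 = face with glasses and hat. "glasses" is literally true
# of both objects; "hat" is true only of o2.
lexicon = np.array([[1.0, 1.0],    # "glasses" applies to o1, o2
                    [0.0, 1.0]])   # "hat" applies to o2 only
prior = np.array([0.5, 0.5])       # prior over referents

def normalize(rows):
    return rows / rows.sum(axis=1, keepdims=True)

def literal_listener(lex, prior):
    # L0(object | word) is proportional to [[word]](object) * P(object)
    return normalize(lex * prior)

def pragmatic_speaker(lex, prior, alpha=1.0):
    # S1(word | object) is proportional to exp(alpha * log L0(object | word))
    return normalize(literal_listener(lex, prior).T ** alpha)

def pragmatic_listener(lex, prior, alpha=1.0):
    # L1(object | word) is proportional to S1(word | object) * P(object)
    return normalize(pragmatic_speaker(lex, prior, alpha).T * prior)

print(literal_listener(lexicon, prior))    # "glasses" -> 50/50
print(pragmatic_listener(lexicon, prior))  # "glasses" -> favours o1
```

With this lexicon, L0 splits "glasses" evenly between the two referents, while L1 shifts toward o1, because a speaker intending o2 would have said "hat"; the study's question is whether human listeners in one-shot games show such shifts at all.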


1984, Vol 11 (3), pp. 645-664
Author(s): Elaine S. Andersen, Anne Dunlea, Linda S. Kekelis

Although the role of visual perception is central to many theories of language development, researchers have disagreed sharply on the effects of blindness on the acquisition process: some claim major differences between blind and sighted children; others find great similarities. With audio- and video-recorded longitudinal data from six children (with varying degrees of vision) aged 0;9 to 3;4, we show that there ARE basic differences in early language, which appear to reflect differences in cognitive development. We focus here on early lexical acquisition and on verbal role-play, demonstrating how previous analyses have failed to observe aspects of the blind child's language system because language was considered out of the context of use. While a comparison of early vocabularies does suggest surface similarities, we found that at the stage when sighted peers are actively forming hypotheses about word meanings, totally blind children are acquiring largely unanalysed 'labels': they are slow to extend words and rarely overextend them. Similarly, although verbal role-play appears early, attempts to incorporate this kind of language into conversations with others reveal clear problems with reversibility – specifically, with the ability to understand the role of shifting perspectives in determining word meaning. Examination of language in context suggests that blind children have difficulties in just those areas of language acquisition where visual information can provide input about the world and be a stimulus for forming hypotheses about pertinent aspects of the linguistic system.


2020, Vol 10 (11), 810
Author(s): Stanley Shen, Jess R. Kerlin, Heather Bortfeld, Antoine J. Shahin

The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of the auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or degraded (vocoded) but highly intelligible words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked a larger negativity between 280 and 527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than for blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than for blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the high reliability of the vocoded signal.
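For readers unfamiliar with the ERP measures used here, the sketch below shows the generic computation behind a statement like "larger negativity between 280 and 527 ms": baseline-correct the epochs, average them into an ERP, and take the mean amplitude in the window. The sampling rate, epoch layout, and simulated data are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

fs = 500                          # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs)  # epoch time axis: -200 to 800 ms

def mean_amplitude(epochs, t, t_start, t_end):
    """Mean amplitude of the trial-averaged ERP in [t_start, t_end] s.
    epochs: array of shape (n_trials, n_samples), one channel."""
    # Baseline-correct each trial on the pre-stimulus interval.
    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)  # average across trials
    window = (t >= t_start) & (t <= t_end)
    return erp[window].mean()

rng = np.random.default_rng(0)
window = (t >= 0.280) & (t <= 0.527)
intact = rng.normal(0.0, 5.0, size=(100, t.size))
intact[:, window] -= 2.0          # simulate a semantic negativity
vocoded = rng.normal(0.0, 5.0, size=(100, t.size))

print(mean_amplitude(intact, t, 0.280, 0.527))   # clearly negative
print(mean_amplitude(vocoded, t, 0.280, 0.527))  # near zero
```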


2006, Vol 22 (2), pp. 219-237
Author(s): Istvan Kecskes

This article discusses three claims of the Graded Salience Hypothesis presented in Rachel Giora's book On Our Mind. It is argued that these claims may give second language researchers the chance to revise the way they think about word meaning, the literal–figurative meaning dichotomy, and the role of context in language processing. Giora's arguments are related to recent second language research, and their relevance is explained through examples. Several suggestions for further research are also made.

