Visual and Affective Grounding in Language and Mind

Author(s):  
Simon De Deyne ◽  
Danielle Navarro ◽  
Guillem Collell ◽  
Amy Perfors

One of the main limitations of natural language-based approaches to meaning is that they are not grounded. In this study, we evaluate how well different kinds of models account for people’s representations of both concrete and abstract concepts. The models are both unimodal (language-based only) models and multimodal distributional semantic models (which additionally incorporate perceptual and/or affective information). The language-based models include both external language (based on text corpora) and internal language (derived from word associations). We present two new studies and a re-analysis of a series of previous studies demonstrating that unimodal performance is substantially higher for internal models, especially when comparisons at the basic level are considered. For multimodal models, our findings suggest that additional visual and affective features lead to only slightly more accurate mental representations of word meaning than what is already encoded in internal language models; however, for abstract concepts, visual and affective features improve the predictions of external text-based models. Our work presents new evidence that the grounding problem includes abstract words as well and is therefore more widespread than previously suggested. Implications for both embodied and distributional views are discussed.
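
As a concrete illustration of this kind of evaluation, the sketch below concatenates text-based, visual, and affective vectors into a multimodal representation and scores each model against human relatedness judgements via Spearman correlation. Every word, vector, and rating here is an invented stand-in, and concatenation is just one simple fusion scheme, not necessarily the one used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def evaluate(embeddings, pairs, human_ratings):
    """Spearman correlation between model similarities and human ratings."""
    scores = [cosine(embeddings[a], embeddings[b]) for a, b in pairs]
    return spearmanr(scores, human_ratings).correlation

rng = np.random.default_rng(0)
words = ["justice", "banana", "fear", "table"]

# Stand-ins for the three unimodal feature spaces.
text_vecs = {w: rng.normal(size=300) for w in words}  # language-based
vis_vecs = {w: rng.normal(size=128) for w in words}   # visual
aff_vecs = {w: rng.normal(size=3) for w in words}     # affective (e.g. VAD)

# Multimodal model: concatenate the unimodal vectors per word.
multi_vecs = {
    w: np.concatenate([text_vecs[w], vis_vecs[w], aff_vecs[w]]) for w in words
}

pairs = [("justice", "fear"), ("banana", "table"), ("fear", "table")]
human = [0.45, 0.30, 0.10]  # invented mean relatedness ratings
print("unimodal:", evaluate(text_vecs, pairs, human))
print("multimodal:", evaluate(multi_vecs, pairs, human))
```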

2020 ◽  
Author(s):  
Armand Stefan Rotaru ◽  
Gabriella Vigliocco

A number of recent models of semantics combine linguistic information, derived from text corpora, with visual information, derived from image collections, demonstrating that the resulting multimodal models are better than either of their unimodal counterparts at accounting for behavioural data. Empirical work on semantic processing has shown that emotion also plays an important role, especially for abstract concepts; however, models integrating emotion along with linguistic and visual information are lacking. Here, we first improve on visual and affective representations derived from state-of-the-art existing models by choosing the models that best fit available human semantic data and extending the number of concepts they cover. Crucially, we then assess whether adding affective representations (obtained from a neural network model designed to predict emojis from co-occurring text) improves the ability of purely linguistic and linguistic-visual models to fit semantic similarity/relatedness judgements. We find that, given specific weights assigned to the models, adding both visual and affective representations improves performance, with visual representations providing an improvement especially for more concrete words, and affective representations improving the fit especially for more abstract words.
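
The phrase "given specific weights assigned to the models" suggests a weighted combination of per-modality similarity scores. Below is a minimal sketch of that idea, assuming a linear combination with weights chosen by grid search; the similarity scores and human ratings are invented, and this is one plausible reading of the approach rather than the authors' exact fitting procedure.

```python
import itertools
import numpy as np
from scipy.stats import spearmanr

# Per-modality similarity scores for five word pairs (invented numbers).
ling = np.array([0.62, 0.10, 0.55, 0.33, 0.90])   # linguistic model
vis = np.array([0.40, 0.05, 0.70, 0.20, 0.75])    # visual model
aff = np.array([0.80, 0.15, 0.30, 0.60, 0.85])    # affective model
human = np.array([0.70, 0.10, 0.60, 0.45, 0.95])  # human judgements

# Grid search over modality weights for the best fit to human data.
best_corr, best_w = max(
    (spearmanr(wl * ling + wv * vis + wa * aff, human).correlation, (wl, wv, wa))
    for wl, wv, wa in itertools.product(np.linspace(0.0, 1.0, 11), repeat=3)
    if wl + wv + wa > 0
)
print(f"best Spearman rho {best_corr:.3f} at weights {best_w}")
```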


2018 ◽  
Vol 373 (1752) ◽  
pp. 20170134 ◽  
Author(s):  
Anna M. Borghi ◽  
Laura Barca ◽  
Ferdinand Binkofski ◽  
Luca Tummolini

The problem of representation of abstract concepts, such as ‘freedom’ and ‘justice’, has become particularly crucial in recent years, owing to the increased success of embodied and grounded views of cognition. We will present a novel view on abstract concepts and abstract words. Since abstract concepts do not have single objects as referents, children and adults might rely more on input from others to learn them; we, therefore, suggest that linguistic and social experience play an important role for abstract concepts. We will discuss evidence obtained in our and other laboratories showing that processing of abstract concepts evokes linguistic interaction and social experiences, leading to the activation of the mouth motor system. We will discuss the possible mechanisms that underlie this activation. Mouth motor system activation can be due to re-enactment of the experience of conceptual acquisition, which occurred through the mediation of language. Alternatively, it could be due to the re-explanation of the word meaning, possibly through inner speech. Finally, it can be due to a metacognitive process revealing low confidence in the meaning of our concepts. This process induces in us the need to rely on others to ask/negotiate conceptual meaning. We conclude that with abstract concepts language works as a social tool: it extends our thinking abilities and pushes us to rely on others to integrate our knowledge. This article is part of the theme issue ‘Varieties of abstract concepts: development, use, and representation in the brain’.


Author(s):  
Fritz Günther ◽  
Marco Alessandro Petilli ◽  
Alessandra Vergallito ◽  
Marco Marelli

Theories of grounded cognition assume that conceptual representations are grounded in sensorimotor experience. However, abstract concepts such as jealousy or childhood have no directly associated referents with which such sensorimotor experience can be made; therefore, the grounding of abstract concepts has long been a topic of debate. Here, we propose (a) that systematic relations exist between semantic representations learned from language on the one hand and perceptual experience on the other hand, (b) that these relations can be learned in a bottom-up fashion, and (c) that it is possible to extrapolate from this learning experience to predict expected perceptual representations for words even where direct experience is missing. To test this, we implement a data-driven computational model that is trained to map language-based representations (obtained from text corpora, representing language experience) onto vision-based representations (obtained from an image database, representing perceptual experience), and apply its mapping function onto language-based representations for abstract and concrete words outside the training set. In three experiments, we present participants with these words, accompanied by two images: the image predicted by the model and a random control image. Results show that participants’ judgements were in line with model predictions even for the most abstract words. This preference was stronger for more concrete items and decreased for the more abstract ones. Taken together, our findings have substantial implications in support of the grounding of abstract words, suggesting that we can tap into our previous experience to create possible visual representations we don’t have.
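
A minimal sketch of such a text-to-vision mapping, assuming ridge regression as the learned mapping function (the abstract does not commit to a particular learner) and random stand-in vectors throughout:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, d_text, d_vis = 500, 300, 128

# Training pairs: language-based vectors and the corresponding
# vision-based vectors (random stand-ins with a shared linear signal).
X_text = rng.normal(size=(n_train, d_text))
true_map = rng.normal(size=(d_text, d_vis))
Y_vis = X_text @ true_map * 0.1 + rng.normal(size=(n_train, d_vis))

# Learn the text-to-vision mapping from the training words.
mapping = Ridge(alpha=1.0).fit(X_text, Y_vis)

# Zero-shot step: predict a visual representation for a word outside
# the training set (e.g. an abstract word), then pick the nearest image
# in a database as the model's proposed image.
abstract_vec = rng.normal(size=(1, d_text))
predicted_vis = mapping.predict(abstract_vec)[0]

database = rng.normal(size=(50, d_vis))  # stand-in image vectors
sims = database @ predicted_vis / (
    np.linalg.norm(database, axis=1) * np.linalg.norm(predicted_vis)
)
print("model's proposed image:", sims.argmax())
```

In the experiments, that proposed image would be shown alongside a random control image, and participants' preference between the two is what tests the model's predictions.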


2021 ◽  
Vol 11 (17) ◽  
pp. 8241
Author(s):  
Erhan Sezerer ◽  
Selma Tekir

Over the last few years, there has been an increase in studies that consider experiential (visual) information by building multi-modal language models and representations. Several studies have shown that language acquisition in humans starts with learning concrete concepts through images and then continues with learning abstract ideas through text. In this work, the curriculum learning method is used to teach the model concrete/abstract concepts through images and their corresponding captions, in order to accomplish multi-modal language modeling/representation. We use the BERT and ResNet-152 models on each modality and combine them using attentive pooling to perform pre-training on the newly constructed dataset, which is collected from Wikimedia Commons based on concrete/abstract words. To show the performance of the proposed model, downstream tasks and ablation studies are performed. The contribution of this work is two-fold: a new dataset is constructed from Wikimedia Commons based on concrete/abstract words, and a new multi-modal pre-training approach based on curriculum learning is proposed. The results show that the proposed multi-modal pre-training approach contributes to the success of the model.
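
The attentive-pooling step can be sketched as a softmax-weighted sum over the two modality vectors. The version below is a deliberately tiny stand-in: random features replace the BERT and ResNet-152 outputs, and the scoring vector that a trained model would learn is likewise random.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 256  # shared dimension after projecting both modalities

text_feat = rng.normal(size=d)   # stand-in for pooled BERT output
image_feat = rng.normal(size=d)  # stand-in for ResNet-152 features

# A scoring vector (learned in a real model, random here) assigns one
# attention score per modality.
w_att = rng.normal(size=d)
scores = np.array([w_att @ text_feat, w_att @ image_feat])

# Softmax over the two modalities.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# Pooled multimodal representation: attention-weighted sum.
fused = weights[0] * text_feat + weights[1] * image_feat
print("attention weights:", weights, "| fused shape:", fused.shape)
```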


Author(s):  
Simon De Deyne ◽  
Amy Perfors ◽  
Daniel J. Navarro

To represent the meaning of a word, most models use external language resources, such as text corpora, to derive the distributional properties of word usage. In this study, we propose that internal language models, which are more closely aligned to the mental representations of words, can be used to derive new theoretical questions regarding the structure of the mental lexicon. A comparison with internal models also puts into perspective a number of assumptions underlying recently proposed distributional text-based models and could provide important insights for cognitive science, including linguistics and artificial intelligence. We focus on word-embedding models, which have been proposed to learn aspects of word meaning in a manner similar to humans, and contrast them with internal language models derived from a new extensive data set of word associations. An evaluation using relatedness judgments shows that internal language models consistently outperform current state-of-the-art text-based external language models. This suggests alternative approaches to representing word meaning using properties that are not encoded in text.
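
One common way to build an association-based ("internal") model is to weight a cue-by-response count matrix with positive pointwise mutual information (PPMI) and compare rows by cosine. The sketch below uses that construction with invented counts; it is illustrative and not necessarily the paper's exact pipeline.

```python
import numpy as np

cues = ["dog", "cat", "justice"]
responses = ["animal", "pet", "law", "fair"]
# counts[i, j]: how often response j was given to cue i (invented numbers).
counts = np.array([[30., 25., 0., 1.],
                   [28., 30., 0., 0.],
                   [0., 0., 40., 22.]])

# Positive pointwise mutual information weighting.
total = counts.sum()
p_ij = counts / total
p_i = counts.sum(axis=1, keepdims=True) / total
p_j = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log(p_ij / (p_i * p_j))
ppmi = np.maximum(pmi, 0.0)  # log(0) = -inf is clipped to 0

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("dog-cat:", cosine(ppmi[0], ppmi[1]))
print("dog-justice:", cosine(ppmi[0], ppmi[2]))
```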


2014 ◽  
Author(s):  
Masoud Rouhizadeh ◽  
Emily Prud'hommeaux ◽  
Jan van Santen ◽  
Richard Sproat

2009 ◽  
Vol 21 (11) ◽  
pp. 2154-2171 ◽  
Author(s):  
Anna Mestres-Missé ◽  
Thomas F. Münte ◽  
Antoni Rodriguez-Fornells

The meaning of a novel word can be acquired by extracting it from linguistic context. Here we simulated the learning of new words associated with concrete and abstract concepts in a variant of the human simulation paradigm that provided linguistic context information, in order to characterize the brain systems involved. Native speakers of Spanish read pairs of sentences in order to derive the meaning of a new word that appeared in the terminal position of the sentences. fMRI revealed that learning the meaning associated with concrete and abstract new words was qualitatively different and recruited similar brain regions as the processing of real concrete and abstract words. In particular, learning of new concrete words selectively boosted the activation of the ventral anterior fusiform gyrus, a region driven by imageability, which has previously been implicated in the processing of concrete words.

