Spazi di parole: metafore e rappresentazioni semantiche [Word spaces: metaphors and semantic representations]

PARADIGMI ◽  
2009 ◽  
pp. 83-100
Author(s):  
Alessandro Lenci

The aim of this paper is to analyse the analogy of the lexicon with a space defined by words, which is common to a number of computational models of meaning in cognitive science. This can be regarded as a case of constitutive scientific metaphor in the sense of Boyd (1979) and is grounded in the so-called Distributional Hypothesis, stating that the semantic similarity between two words is a function of the similarity of the linguistic contexts in which they typically co-occur. The meaning of words is represented in terms of their topological relations in a high-dimensional space, defined by their combinatorial behaviour in texts. A key consequence of adopting the metaphor of word spaces is that semantic representations are modelled as highly context-sensitive entities. Moreover, word space models promise to open interesting perspectives for the study of metaphorical uses in language, as well as of lexical dynamics in general.

Keywords: Cognitive sciences, Computational linguistics, Distributional models of the lexicon, Metaphor, Semantics, Word spaces.
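
The Distributional Hypothesis lends itself to a compact illustration: word vectors can be built from raw co-occurrence counts and compared by cosine similarity. The following sketch is a minimal, hypothetical example; the toy corpus, window size, and count-based vectors are illustrative assumptions, not the specific models discussed in the paper.

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(sentences, window=2):
    """Build word vectors from co-occurrence counts within a context window."""
    vectors = defaultdict(Counter)
    for tokens in sentences:
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dims = set(u) | set(v)
    dot = sum(u[d] * v[d] for d in dims)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

corpus = [["the", "dog", "barks"], ["the", "cat", "meows"],
          ["a", "dog", "runs"], ["a", "cat", "sleeps"]]
vecs = cooccurrence_vectors(corpus)
# "dog" and "cat" share the contexts "the" and "a", so similarity is nonzero.
print(cosine(vecs["dog"], vecs["cat"]))  # 0.5
```

Words that occur in similar contexts end up close together in this space, which is exactly the topological notion of similarity the paper discusses.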

2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Will Bridewell ◽  
Alistair M C Isaac

This study introduces a novel methodology for consciousness science. Consciousness as we understand it pretheoretically is inherently subjective, yet the data available to science are irreducibly intersubjective. This poses a unique challenge for attempts to investigate consciousness empirically. We meet this challenge by combining two insights. First, we emphasize the role that computational models play in integrating results relevant to consciousness from across the cognitive sciences. This move echoes Allen Newell’s call that the language and concepts of computer science serve as a lingua franca for integrative cognitive science. Second, our central contribution is a new method for validating computational models that treats them as providing negative data on consciousness: data about what consciousness is not. This method is designed to support a quantitative science of consciousness while avoiding metaphysical commitments. We discuss how this methodology applies to current and future research and address questions that others have raised.


Author(s):  
Liqi Liu ◽  
Qinglin Wang ◽  
Yuan Li

In this paper, an improved long short-term memory (LSTM)-based deep neural network structure is proposed for learning variable-length Chinese sentence semantic similarities. Siamese LSTM, a sequence-insensitive deep neural network model, has a limited ability to capture the semantics of natural language because it has difficulty representing semantic differences that arise from differences in syntactic structure or word order within a sentence. Therefore, the proposed model integrates the syntactic component features of the words in the sentence into a word vector representation layer to express the syntactic structure of the sentence and the interdependence between words. Moreover, a relative position embedding layer is introduced into the model, and the relative positions of the words in the sentence are mapped to a high-dimensional space to capture their local position information. With this model, a parallel structure is used to map two sentences into the same high-dimensional space to obtain fixed-length sentence vector representations. After aggregation, the sentence similarity is computed in the output layer. Experiments with Chinese sentences show that the model achieves good results in computing semantic similarity.
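
The parallel (Siamese) encoding scheme the paper builds on can be sketched as follows: two sentences pass through one shared LSTM encoder, and similarity is computed between the resulting fixed-length vectors. This is a minimal illustration in PyTorch; it deliberately omits the paper's syntactic-feature and relative-position embedding layers, and all dimensions and names are assumptions.

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Minimal Siamese LSTM: both sentences share one encoder; similarity
    is the cosine between the resulting fixed-length sentence vectors."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, ids):
        _, (h, _) = self.lstm(self.embed(ids))  # h: (1, batch, hidden_dim)
        return h.squeeze(0)                     # fixed-length sentence vector

    def forward(self, ids_a, ids_b):
        va, vb = self.encode(ids_a), self.encode(ids_b)
        return torch.cosine_similarity(va, vb, dim=1)  # one score per pair

model = SiameseLSTM(vocab_size=5000)
a = torch.randint(1, 5000, (4, 12))  # batch of 4 token-id sequences
b = torch.randint(1, 5000, (4, 12))
print(model(a, b).shape)  # torch.Size([4])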


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing, and eye-tracking data have been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word prediction, past studies usually relied on a single computational model, which often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a large, natural and coherent discourse as the stimulus for collecting reading-time data. The study trains two state-of-the-art computational models (surprisal, and semantic (dis)similarity from word vectors obtained by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and the paradigmatic structure of language, and develops a `dynamic approach' to computing semantic (dis)similarity. This is the first time these two computational models have been merged. The models are evaluated using advanced statistical methods. Meanwhile, to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vector data is compared with our `dynamic' approach. The two computational models and the fixed-effects statistical models can be used to cross-verify the findings, thus ensuring that the results are reliable. All results support the conclusion that surprisal and semantic similarity make opposing predictions of word reading time, although both predict it well. Additionally, our `dynamic' approach performs better than the popular cosine method. The findings of this study are therefore of significance for a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
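
The two measures at the core of the study can be illustrated with toy values: surprisal is the negative log probability of a word given its context, and semantic (dis)similarity is one minus the cosine of two word vectors. The sketch below uses made-up numbers and does not reproduce the LDL-derived vectors or the `dynamic' windowing of the study.

```python
import numpy as np

def surprisal(prob):
    """Surprisal of a word given its context: -log2 P(word | context)."""
    return -np.log2(prob)

def cosine_dissimilarity(u, v):
    """1 - cosine similarity between two word vectors."""
    sim = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return 1.0 - sim

# Toy values: in practice a language model supplies the probability
# and an embedding model supplies the word vectors.
p_next = 0.05                       # P(w_t | w_1..w_{t-1}) from some LM
w_prev = np.array([0.2, 0.7, 0.1])  # vector of the preceding word
w_curr = np.array([0.3, 0.6, 0.2])  # vector of the current word

print(surprisal(p_next))                     # ~4.32 bits
print(cosine_dissimilarity(w_prev, w_curr))  # small: vectors nearly aligned
```

Surprisal tracks syntagmatic predictability (how expected the word is in sequence), while vector (dis)similarity tracks paradigmatic relatedness, which is why the study treats them as complementary predictors of reading time.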


Impact ◽  
2020 ◽  
Vol 2020 (7) ◽  
pp. 9-11
Author(s):  
Junya Morita

Dr Junya Morita is based at the Applied Cognitive Modelling Laboratory (ACML) within the Department of Behavior Informatics at Shizuoka University in Japan. His team is conducting investigations that use computational models in an effort to improve our understanding of human minds and their inner workings. There are currently two directions of study underway at ACML. The first is concerned with theoretical studies of cognitive modelling, where the team tries to construct models that explain human minds at the computational and algorithmic levels. The second direction of study is the application of computational cognitive models. Morita and his team believe that there are fundamental values within the basic endeavours of cognitive science and are working to show that these values exist and are valid. Current topics of application include education, driving, entertainment, graphic design, language development, web navigation and mental illness.


2021 ◽  
pp. 1-12
Author(s):  
Jian Zheng ◽  
Jianfeng Wang ◽  
Yanping Chen ◽  
Shuping Chen ◽  
Jingjin Chen ◽  
...  

Neural networks can approximate data because they comprise many compact non-linear layers. In high-dimensional space, however, the curse of dimensionality makes data distributions sparse, so the data cannot provide sufficient information, and approximating data with neural networks becomes even harder. To address this issue, two deviations are derived from the Lipschitz condition: the deviation of neural networks trained using high-dimensional functions, and the deviation of high-dimensional functions approximating data. The purpose of this is to improve the ability of neural networks to approximate data in high-dimensional space. Experimental results show that neural networks trained using high-dimensional functions outperform networks trained directly on data when approximating data in high-dimensional space. We find that networks trained using high-dimensional functions are more suitable for high-dimensional space than networks trained on data, so there is no need to retain large amounts of data for network training. Our findings also suggest that in high-dimensional space, tuning the hidden layers of a neural network has little positive effect on the precision of data approximation.
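
As a rough illustration of the setting, the sketch below fits a small network to samples of a smooth (Lipschitz) function on R^50, where random samples cover the space only sparsely. The target function, architecture, and sample sizes are toy assumptions, not the experimental setup of the paper.

```python
import torch
import torch.nn as nn

# Toy setup: approximate a smooth (Lipschitz) function on R^50 from samples.
d = 50
f = lambda x: torch.sin(x.sum(dim=1, keepdim=True) / d**0.5)  # target function

net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_train = torch.randn(2000, d)  # 2000 points cover a 50-dim space sparsely
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_train), f(x_train))
    loss.backward()
    opt.step()

# Generalization error on fresh points from the same distribution.
x_test = torch.randn(500, d)
print(nn.functional.mse_loss(net(x_test), f(x_test)).item())
```

Because the function itself can be sampled anywhere, training against it sidesteps the sparsity of a fixed data set, which is the intuition behind the paper's comparison.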


Author(s):  
Marek Jakubiec

Although much ink has been spilled on different aspects of legal concepts, the approach based on the developments of cognitive science remains a neglected area of study. The "mental" and cognitive aspects of these concepts, i.e. their features as mental constructs and cognitive tools, especially in the light of developments in the cognitive sciences, are discussed quite rarely. The argument made by this paper is that legal concepts are best understood as mental representations. The piece explains what mental representations are and why this view matters. The explanation of legal concepts, understood as mental representations, is one of (at least) three levels of explanation within legal philosophy, but, as will be argued, it is the most fundamental level. This paper analyzes the consequences of such an understanding of concepts for the field of legal philosophy. Special emphasis is put on the current debate on the analogical or amodal nature of concepts.


2001 ◽  
Vol 24 (3) ◽  
pp. 305-320 ◽  
Author(s):  
Benoit Lemaire ◽  
Philippe Dessus

This paper presents Apex, a system that can automatically assess a student essay based on its content. It relies on Latent Semantic Analysis (LSA), a tool used to represent the meaning of words as vectors in a high-dimensional space. By comparing an essay with the text of a given course on a semantic basis, our system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, the outline and the coherence of the essay. Our experiments yield promising results.
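
A minimal sketch of the LSA pipeline such a system relies on: build a term-document matrix, reduce it with a truncated SVD, and compare the essay with the course text by cosine similarity in the reduced space. The texts and the number of dimensions here are toy assumptions; real LSA spaces are trained on large corpora with a few hundred dimensions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: paragraphs of the course text plus one student essay.
course_paragraphs = [
    "Latent Semantic Analysis represents word meaning as vectors.",
    "Vectors are derived from word-document co-occurrence statistics.",
    "Similar texts lie close together in the reduced semantic space.",
]
essay = "The student essay discusses word vectors and semantic spaces."

docs = course_paragraphs + [essay]
tfidf = TfidfVectorizer().fit_transform(docs)       # term-document matrix
lsa = TruncatedSVD(n_components=2).fit_transform(tfidf)  # low-rank space

# Cosine similarity between the essay and each course paragraph:
# higher scores suggest the essay covers that part of the course.
print(cosine_similarity(lsa[-1:], lsa[:-1]))
```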

