Computational Semantics: How to solve the suspense of supersense

Author(s):  
Aishwarya Asesh
2019
Vol 57 (2)
pp. 233
Author(s):  
Nguyen Thu Anh
Tran Thai Son

The real-world-semantics interpretability concept for fuzzy systems introduced in [1] is new in both methodology and application. It responds to the need for a mathematical basis on which to construct the computational semantics of linguistic words, so that a method that manipulates the computational semantics of linguistic terms, simulating a human method that works directly with words, can produce outputs similar to those produced by the human method. Because the real world of each application problem has its own structure, described by certain linguistic expressions, this requirement can be ensured by imposing constraints on the interpretation that assigns computational objects in an appropriate computational structure to the words, so that the relationships between the computational semantics in that structure mirror the relationships between the real-world objects described by the word expressions. This study discusses the concept of real-world-semantics interpretability in more detail and points out that this requirement poses a challenge for the study of the interpretability of fuzzy systems, especially for approaches within the fuzzy set framework. The methodological challenge is that both the computational expression representing a given linguistic fuzzy rule base and the approximate reasoning method working on that expression must preserve the real-world semantics of the application problem. Fortunately, the hedge algebra (HA) based approach shows that the graphical representation of the rules of a fuzzy system and the interpolative reasoning method applied to it are able to preserve the real-world semantics of the real-world counterpart of the given application problem.
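
As a rough illustration of the interpolative reasoning idea mentioned above (and not the construction given in the paper), the following sketch assumes that each linguistic term in a rule has already been assigned a numeric semantic value, as a semantically quantifying mapping of a hedge algebra would, and then reasons over the quantified rule base by piecewise-linear interpolation. The term values and the rule base are hypothetical.

```python
import numpy as np

# Hypothetical numeric semantics of hedged linguistic terms, standing in for
# the values a semantically quantifying mapping of a hedge algebra would assign.
# These particular numbers are illustrative assumptions, not from the paper.
TERM_VALUE = {
    "Very Small": 0.1, "Small": 0.25, "Medium": 0.5,
    "Large": 0.75, "Very Large": 0.9,
}

# A toy linguistic rule base of the form "IF x is A THEN y is B".
rules = [
    ("Very Small", "Very Large"),
    ("Small", "Large"),
    ("Medium", "Medium"),
    ("Large", "Small"),
    ("Very Large", "Very Small"),
]

# Quantify the rule base: each rule becomes a point on a curve in [0, 1]^2.
xs = np.array([TERM_VALUE[a] for a, _ in rules])
ys = np.array([TERM_VALUE[b] for _, b in rules])
order = np.argsort(xs)
xs, ys = xs[order], ys[order]

def infer(x):
    """Interpolative reasoning on the quantified rule base for a crisp input x in [0, 1]."""
    return float(np.interp(x, xs, ys))

print(infer(0.3))  # 0.70: between the semantics of "Large" and "Medium"
```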


Author(s):  
José-Manuel Lopez-Cobo
Sinuhé Arroyo
Miguel-Angel Sicilia
Salvador Sanchez

The evolution of learning technology standards has resulted in a degree of interoperability across systems that enables the interchange of learning contents and activities. Nonetheless, learning resource metadata does not provide formal computational semantics, which hampers the development of technology that automates tasks such as learning object selection and negotiation. In this paper, the provision of computational semantics for metadata is addressed through the concept of the Semantic Web service. An architecture based on the specifications of the WSMO project is described, including the definition of an ontology for learning object metadata and issues of mediation, all from the perspective of the learning object repository as the central entity in learning object reuse scenarios. The resulting framework serves as a foundation for advanced implementations that use formal metadata semantics as a mechanism for automating tasks related to the interchange of learning objects.
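
A minimal sketch of the kind of automation that formal metadata semantics enables, assuming a toy subject ontology and invented metadata fields rather than the WSMO/WSML constructs used in the paper: a requester's goal is matched against learning objects in a repository by checking ontological subsumption of their subject concepts.

```python
from dataclasses import dataclass

# Toy subject ontology: child concept -> parent concept (an illustrative assumption).
SUBSUMES = {
    "LinearAlgebra": "Mathematics",
    "Calculus": "Mathematics",
    "Mathematics": "Science",
}

def is_a(concept, ancestor):
    """True if `concept` equals or is subsumed by `ancestor` in the toy ontology."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBSUMES.get(concept)
    return False

@dataclass
class LearningObject:   # capability advertised by the repository
    identifier: str
    subject: str
    language: str

@dataclass
class Goal:             # requester's goal
    subject: str
    language: str

def select(goal, repository):
    """Return learning objects whose metadata semantically satisfies the goal."""
    return [lo for lo in repository
            if is_a(lo.subject, goal.subject) and lo.language == goal.language]

repo = [LearningObject("lo-1", "LinearAlgebra", "en"),
        LearningObject("lo-2", "Calculus", "es")]
print(select(Goal(subject="Mathematics", language="en"), repo))  # selects only lo-1
```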


Author(s):  
Katrin Erk

Computational semantics performs automatic meaning analysis of natural language. Research in computational semantics designs meaning representations and develops mechanisms for automatically assigning those representations and reasoning over them. Computational semantics is not a single monolithic task but consists of many subtasks, including word sense disambiguation, multi-word expression analysis, semantic role labeling, the construction of sentence semantic structure, coreference resolution, and the automatic induction of semantic information from data. The development of manually constructed resources has been vastly important in driving the field forward. Examples include WordNet, PropBank, FrameNet, VerbNet, and TimeBank. These resources specify the linguistic structures to be targeted in automatic analysis, and they provide high-quality human-generated data that can be used to train machine learning systems. Supervised machine learning based on manually constructed resources is a widely used technique. A second core strand has been the induction of lexical knowledge from text data. For example, words can be represented through the contexts in which they appear (called distributional vectors or embeddings), such that semantically similar words have similar representations. Or semantic relations between words can be inferred from patterns of words that link them. Wide-coverage semantic analysis always needs more data, both lexical knowledge and world knowledge, and automatic induction at least alleviates the problem. Compositionality is a third core theme: the systematic construction of structural meaning representations of larger expressions from the meaning representations of their parts. The representations typically use logics of varying expressivity, which makes them well suited to performing automatic inferences with theorem provers. Manual specification and automatic acquisition of knowledge are closely intertwined. Manually created resources are automatically extended or merged. The automatic induction of semantic information is guided and constrained by manually specified information, which is much more reliable. And for restricted domains, the construction of logical representations is learned from data. It is at the intersection of manual specification and machine learning that some of the current larger questions of computational semantics are located. For instance, should we build general-purpose semantic representations, or is lexical knowledge simply too domain-specific, and would we be better off learning task-specific representations every time? When performing inference, is it more beneficial to have the solid ground of a human-generated ontology, or is it better to reason directly with text snippets for more fine-grained and gradual inference? Do we obtain a better and deeper semantic analysis as we use better and deeper manually specified linguistic knowledge, or is the future in powerful learning paradigms that learn to carry out an entire task from natural language input and output alone, without pre-specified linguistic knowledge?
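
As a minimal illustration of the distributional strand described above, the sketch below builds count-based context vectors from a toy corpus and compares them with cosine similarity. The corpus, window size, and similarity measure are assumptions chosen for the example, not a particular system from the literature.

```python
from collections import Counter, defaultdict
import math

# Toy corpus; in practice this would be a large text collection.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

WINDOW = 2
cooc = defaultdict(Counter)  # word -> Counter of co-occurring context words

for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[w][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Words appearing in similar contexts end up with similar vectors.
print(cosine(cooc["cat"], cooc["dog"]))     # relatively high
print(cosine(cooc["cat"], cooc["cheese"]))  # lower
```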


English Today
2019
Vol 36 (4)
pp. 33-39
Author(s):  
Yaqian Shi
Lei Lei

Semantic shifts have been explored via a range of methods (Allan & Robinson, 2012). Traditionally, they were noted or described through methods such as literature review or dictionary checking (e.g. Blank & Koch, 1999; Stockwell & Minkova, 2001; Williams, 1976), which are labour-intensive and time-consuming. Other, more recently developed methods involve sociolinguistic interviews (Robinson, 2012; Sandow & Robinson, 2018). However, with the development of large corpora and computational semantics, diachronic semantic shifts have started to be captured in a data-driven way (Kutuzov et al., 2018). Recently, the word embeddings technique (Mikolov et al., 2013) has proven to be a promising tool for tracking semantic shifts (e.g. Hamilton, Leskovec & Jurafsky, 2016a, 2016b; Kulkarni et al., 2015; Kutuzov et al., 2017). For example, Hamilton et al. (2016b) exemplified how to use the technique to capture the subjectification process of the word ‘actually’ during the 20th century.
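
A minimal sketch of how embeddings from two time periods can be compared to detect such shifts, in the spirit of Hamilton, Leskovec & Jurafsky (2016): embeddings trained separately on each period are aligned with orthogonal Procrustes, and per-word cosine distances between the aligned spaces are computed. The vocabulary and the random matrices below are placeholders for real trained vectors, not actual results.

```python
import numpy as np

# Stand-ins for embedding matrices trained separately on two time slices of a corpus
# (rows = words of the shared vocabulary, columns = embedding dimensions).
# Random data here; real use would load vectors trained on the historical corpora.
rng = np.random.default_rng(0)
vocab = ["actually", "gay", "broadcast", "cell"]
E_old = rng.normal(size=(len(vocab), 50))
E_new = rng.normal(size=(len(vocab), 50))

# Orthogonal Procrustes: rotate the old space onto the new one so that the two
# embedding spaces become directly comparable.
U, _, Vt = np.linalg.svd(E_old.T @ E_new)
R = U @ Vt
E_old_aligned = E_old @ R

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Words with the largest distance between periods are candidates for semantic shift.
shift = {w: cosine_distance(E_old_aligned[i], E_new[i]) for i, w in enumerate(vocab)}
for word, d in sorted(shift.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {d:.3f}")
```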


2005
Vol 44 (12)
pp. 2219-2230
Author(s):  
M. L. Dalla Chiara
R. Giuntini
S. Gudder
R. Leporini
