Constructional meaning representation within a knowledge engineering framework

2015 ◽  
Vol 13 (1) ◽  
pp. 1-27
Author(s):  
Ricardo Mairal-Usón

FunGramKB is a multipurpose lexico-conceptual knowledge base for natural language processing systems, and more particularly, for natural language understanding. The linguistic layer of this knowledge-engineering project is grounded in compatible aspects of two linguistic accounts, namely, Role and Reference Grammar (RRG) and the Lexical Constructional Model (LCM). RRG, although originally a lexicalist approach, has recently incorporated constructional configurations into its descriptive and explanatory apparatus. The LCM has sought from its inception to understand the factors that constrain lexical-constructional integration. Within this theoretical context, this paper discusses the format of lexical entries, largely inspired by RRG proposals, and of constructional schemata, which are organized according to the descriptive levels supplied by the LCM. Both lexical and constructional structures are represented by means of Attribute Value Matrices (AVMs). Thus, the lexical and grammatical levels of FunGramKB are the focus of our attention here. Additionally, the need for a conceptualist approach to meaning construction is highlighted throughout our discussion.

2020 ◽  
Vol 15 (1) ◽  
pp. 15
Author(s):  
Ángel M. Felices Lago ◽  
Pedro Ureña Gómez-Moreno

This article describes some phases in the process of constructing a term-based Satellite Ontology within the architecture of the Core Ontology integrated in FunGramKB (a lexico-conceptual knowledge base for the computational processing of natural language). The semantic decomposition of complex terminology is implemented following the COHERENT methodology (a stepwise method for formalizing specialized concepts). For that purpose, we have selected the superordinate concept +DRUG_00 as well as other subordinate concepts in the domain of drugs such as $METHAMPHETAMINE_00, $CANNABIS_00, and $COCAINE_00. The definitions of the concepts selected for the study are based on COREL, an interface metalanguage inspired by some general principles of Role and Reference Grammar (RRG). As a result of the modeling, subsumption, and hierarchization process, the top conceptual path is represented in the Satellite Ontology as follows: #ENTITY > #PHYSICAL > #OBJECT > SELF_CONNECTED_OBJECT > +ARTIFICIAL_OBJECT_00 > +SUBSTANCE_00 > +SOLID_00 > +DRUG_00.
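The subsumption path reported in the abstract above can be pictured as a simple parent-linked hierarchy. The concept labels come from the abstract itself; the data structure below is an illustrative sketch, not FunGramKB's actual internal representation.

```python
# Illustrative sketch: the Satellite Ontology's top conceptual path for
# +DRUG_00 as a parent-linked hierarchy. Concept labels are taken from the
# abstract; the Concept class itself is an assumption for illustration.
from typing import Optional


class Concept:
    def __init__(self, name: str, parent: Optional["Concept"] = None):
        self.name = name
        self.parent = parent

    def path_to_root(self) -> list:
        """Return the subsumption path from this concept up to the root."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return path


# Build the path #ENTITY > ... > +DRUG_00 reported in the abstract.
node = Concept("#ENTITY")
for name in ["#PHYSICAL", "#OBJECT", "SELF_CONNECTED_OBJECT",
             "+ARTIFICIAL_OBJECT_00", "+SUBSTANCE_00", "+SOLID_00",
             "+DRUG_00"]:
    node = Concept(name, parent=node)

drug = node
print(" > ".join(reversed(drug.path_to_root())))
# → #ENTITY > #PHYSICAL > #OBJECT > SELF_CONNECTED_OBJECT >
#   +ARTIFICIAL_OBJECT_00 > +SUBSTANCE_00 > +SOLID_00 > +DRUG_00
```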


2016 ◽  
Vol 4 ◽  
pp. 51-60
Author(s):  
Rocío Jiménez-Briones

This paper looks at how illocutionary meaning could be accommodated in FunGramKB, a Natural Language Processing environment designed as a multipurpose lexico-conceptual knowledge base for natural language understanding applications. To this end, this study concentrates on the Grammaticon, which is the module that stores constructional schemata, or machine-tractable representations of linguistic constructions. In particular, the aim of this paper is to discuss how illocutionary constructions such as Can You Forgive Me (XPREP)? have been translated into the metalanguage employed in FunGramKB, namely Conceptual Representation Language (COREL). The formalization of illocutionary constructions presented here builds on previous constructionist approaches, especially on those developed within the usage-based constructionist model known as the Lexical Constructional Model (Ruiz de Mendoza 2013). To illustrate our analysis, we shall focus on the speech act of CONDOLING, which is computationally handled through two related constructional domains, each of which subsumes several illocutionary configurations under one COREL schema.


2019 ◽  
Vol 17 ◽  
pp. 149
Author(s):  
María del Carmen Fumero-Pérez ◽  
Ana Díaz-Galán

ARTEMIS (Automatically Representing Text Meaning via an Interlingua-based System) is a natural language processing device whose ultimate aim is to understand natural language fragments and arrive at their syntactic and semantic representation. Linguistically, this parser is founded on two solid linguistic theories: the Lexical Constructional Model and Role and Reference Grammar. Although the rich semantic representations and the multilingual character of Role and Reference Grammar make it suitable for natural language understanding tasks, some changes to the model have proved necessary in order to adapt it to the functioning of the ARTEMIS parser. This paper deals with one of the major modifications that Role and Reference Grammar had to undergo in this process of adaptation, namely, the replacement of the operator projection with feature-based structures, and with how this influences the description of function words in ARTEMIS, since these words are largely responsible for encoding the grammatical information which in Role and Reference Grammar is included in the operators. Currently, ARTEMIS is being implemented for the controlled natural language ASD-STE100, the Aerospace and Defence Industries Association of Europe Simplified Technical English, an international specification for the preparation of technical documentation in a controlled language. This controlled language is used in the belief that its simplified nature makes it a good corpus for a preliminary test of the parser's adequacy. Along these lines, the aim of this work is to create a catalogue of function words in ARTEMIS for ASD-STE100, and to design the lexical rules necessary to parse the simple sentence and the referential phrase in this controlled language.
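The feature-based structures that replace RRG's operator projection can be pictured as attribute-value pairs that combine by unification. The feature names, values, and the naive unification routine below are assumptions for illustration only, not ARTEMIS's actual formalism.

```python
# Illustrative sketch: a function word's grammatical information encoded as a
# feature-based structure (attribute-value pairs) rather than RRG operators.
# Feature names and values are illustrative assumptions, not ARTEMIS's own.
from typing import Optional

determiner_the = {
    "form": "the",
    "category": "DET",
    "features": {"definiteness": "definite", "number": "unspecified"},
}


def unify(features_a: dict, features_b: dict) -> Optional[dict]:
    """Naive unification: merge two feature sets, failing on a value clash."""
    merged = dict(features_a)
    for key, value in features_b.items():
        if key in merged and merged[key] != value:
            return None  # clash: unification fails
        merged[key] = value
    return merged


# A phrase requiring a definite determiner unifies with 'the':
print(unify(determiner_the["features"], {"definiteness": "definite"}))
# → {'definiteness': 'definite', 'number': 'unspecified'}
```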


2021 ◽  
Vol 11 (7) ◽  
pp. 3095
Author(s):  
Suhyune Son ◽  
Seonjeong Hwang ◽  
Sohyeun Bae ◽  
Soo Jun Park ◽  
Jang-Hwan Choi

Multi-task learning (MTL) approaches are actively used for various natural language processing (NLP) tasks. The Multi-Task Deep Neural Network (MT-DNN) has contributed significantly to improving the performance of natural language understanding (NLU) tasks. However, one drawback is that confusion among the language representations of the various tasks arises during the training of the MT-DNN model. Inspired by the internal-transfer weighting of MTL in medical imaging, we introduce a Sequential and Intensive Weighted Language Modeling (SIWLM) scheme. SIWLM consists of two stages: (1) Sequential weighted learning (SWL), which trains a model to learn all tasks sequentially and concentrically, and (2) Intensive weighted learning (IWL), which enables the model to focus on the central task. We apply this scheme to the MT-DNN model and call the resulting model MTDNN-SIWLM. Our model achieves higher performance than the existing reference algorithms on six of the eight GLUE benchmark tasks. Moreover, our model outperforms MT-DNN by 0.77 points on average across all tasks. Finally, we conduct a thorough empirical investigation to determine the optimal weight for each GLUE task.
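The idea of up-weighting a central task while still training on the others can be illustrated with a toy weighted combination of per-task losses. The weighting rule below is an illustrative assumption in the spirit of the scheme described above, not the paper's exact SIWLM formulation.

```python
# Toy sketch of a weighted multi-task loss, loosely in the spirit of SIWLM:
# the "central" task receives a larger weight than the remaining tasks.
# The specific weighting rule is an illustrative assumption, not the paper's.

def combined_loss(task_losses: dict,
                  central_task: str,
                  central_weight: float = 2.0) -> float:
    """Weighted average of per-task losses; the central task is up-weighted."""
    weights = {t: (central_weight if t == central_task else 1.0)
               for t in task_losses}
    total_w = sum(weights.values())
    return sum(weights[t] * loss for t, loss in task_losses.items()) / total_w


# Hypothetical per-task losses for three GLUE tasks, with MNLI as the
# central task (weights 2, 1, 1; total 4):
losses = {"MNLI": 0.9, "SST-2": 0.4, "QNLI": 0.6}
print(combined_loss(losses, central_task="MNLI"))
# → 0.7  (= (2*0.9 + 0.4 + 0.6) / 4)
```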


Author(s):  
TIAN-SHUN YAO

Based on a word-based theory of natural language processing, a word-based Chinese language understanding system has been developed. In light of psycholinguistic analysis and the features of the Chinese language, this theory of natural language processing is presented together with a description of the computer programs built on it. At the heart of the system are a Total Information Dictionary and the World Knowledge Source used by the system. The purpose of this research is to develop a system that can understand not only individual Chinese sentences but also whole texts.


2017 ◽  
Vol 1 (1) ◽  
pp. 61
Author(s):  
Ricardo Mairal-Usón ◽  
Francisco Cortés-Rodríguez

Within the framework of FUNK Lab – a virtual laboratory for natural language processing inspired by a functionally-oriented linguistic theory, Role and Reference Grammar – a number of computational resources have been built that deal with different aspects of language and have applications in different scientific domains, i.e. terminology, lexicography, sentiment analysis, document classification, text analysis, data mining, etc. One of these resources is ARTEMIS (Automatically Representing TExt Meaning via an Interlingua-Based System), which takes as its point of departure the pioneering work of Periñán-Pascual (2013) and Periñán-Pascual & Arcas (2014). This computational tool is a proof-of-concept prototype which allows the automatic generation of a conceptual logical structure (CLS) (cf. Mairal-Usón, Periñán-Pascual and Pérez 2012; Van Valin and Mairal-Usón 2014), that is, a fully specified semantic representation of an input text, on the basis of a reduced sample of sentences. The primary aim of this paper is to develop the syntactic rules that form part of the computational grammar for the representation of simple clauses in English. More specifically, this work focuses on the format of those syntactic rules that account for the upper levels of the RRG Layered Structure of the Clause (LSC), that is, the core (and the level-1 construction associated with it), the clause and the sentence (Van Valin 2005). In essence, this analysis, together with that in Cortés-Rodríguez and Mairal-Usón (2016), offers an almost complete description of the computational grammar behind the LSC for simple clauses.


Author(s):  
Andrew M. Olney ◽  
Natalie K. Person ◽  
Arthur C. Graesser

The authors discuss Guru, a conversational expert intelligent tutoring system (ITS). Guru is designed to mimic expert human tutors using advanced applied natural language processing techniques, including natural language understanding, knowledge representation, and natural language generation.


Author(s):  
Subhro Roy ◽  
Tim Vieira ◽  
Dan Roth

Little work from the Natural Language Processing community has targeted the role of quantities in Natural Language Understanding. This paper takes some key steps towards facilitating reasoning about quantities expressed in natural language. We investigate two different tasks of numerical reasoning. First, we consider Quantity Entailment, a new task formulated to understand the role of quantities in general textual inference tasks. Second, we consider the problem of automatically understanding and solving elementary school math word problems. In order to address these quantitative reasoning problems we first develop a computational approach which we show to successfully recognize and normalize textual expressions of quantities. We then use these capabilities to further develop algorithms to assist reasoning in the context of the aforementioned tasks.
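The first capability the paper builds, recognizing and normalizing textual expressions of quantities, can be sketched with a minimal pattern-based extractor. The patterns and the word-number table below are illustrative assumptions, far simpler than the authors' actual system.

```python
# Minimal sketch of recognizing and normalizing simple "<number> <unit>"
# quantity mentions. The regex and word-number table are illustrative
# assumptions, not the authors' method.
import re

WORD_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
                "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

# Match a digit string (optionally decimal) or a spelled-out number,
# followed by a word taken to be the unit/noun.
PATTERN = re.compile(r"\b(\d+(?:\.\d+)?|" + "|".join(WORD_NUMBERS)
                     + r")\s+([a-zA-Z]+)")


def normalize_quantities(text: str) -> list:
    """Return (value, unit) pairs for simple quantity mentions in text."""
    results = []
    for num, unit in PATTERN.findall(text):
        value = WORD_NUMBERS.get(num.lower())
        if value is None:
            value = float(num)
        results.append((float(value), unit))
    return results


print(normalize_quantities("She bought three apples and 2.5 kg of flour"))
# → [(3.0, 'apples'), (2.5, 'kg')]
```

A real system would also need to resolve units, ranges, and scope, which is exactly the kind of reasoning the paper's Quantity Entailment task targets.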

