Specifying and Verbalising Answer Set Programs in Controlled Natural Language

2018 ◽  
Vol 18 (3-4) ◽  
pp. 691-705 ◽  
Author(s):  
ROLF SCHWITTER

Abstract: We show how a bi-directional grammar can be used to specify and verbalise answer set programs in controlled natural language. We start from a program specification in controlled natural language and translate this specification automatically into an executable answer set program. The resulting answer set program can be modified following certain naming conventions, and the revised version of the program can then be verbalised in the same subset of natural language that was used as specification language. The bi-directional grammar is parametrised for processing and generation, deals with referring expressions, and exploits symmetries in the data structure of the grammar rules whenever these grammar rules need to be duplicated. We demonstrate that verbalisation requires sentence planning in order to aggregate similar structures with the aim of improving the readability of the generated specification. Without modifications, the generated specification is always semantically equivalent to the original one; our bi-directional grammar is the first one that allows for semantic round-tripping in the context of controlled natural language processing.
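The bi-directional idea described in this abstract can be sketched in a few lines: a single rule table drives both parsing and verbalisation, so the two directions stay in sync by construction. This is a toy illustration, not Schwitter's actual grammar; the CNL templates and ASP forms below are invented for the sketch.

```python
import re

# One shared rule table; {X} and {Y} are slots common to both sides.
RULES = [
    # (CNL template, ASP template)
    ("Every {X} is a {Y}.", "{Y}(A) :- {X}(A)."),
    ("{X} is a {Y}.", "{Y}({X})."),
]

def _to_regex(template):
    """Turn a template into a regex with named groups for its slots."""
    pattern = re.escape(template)
    return pattern.replace(r"\{X\}", r"(?P<X>\w+)").replace(r"\{Y\}", r"(?P<Y>\w+)")

def translate(text, source, target):
    """Map text from one side of the rule table to the other."""
    for rule in RULES:
        sides = {"cnl": rule[0], "asp": rule[1]}
        m = re.fullmatch(_to_regex(sides[source]), text)
        if m:
            return sides[target].format(**m.groupdict())
    raise ValueError("input not covered by the grammar")

def cnl_to_asp(sentence):
    return translate(sentence, "cnl", "asp")

def asp_to_cnl(rule):
    return translate(rule, "asp", "cnl")
```

Because both directions read from the same table, verbalising an unmodified translation reproduces the original sentence verbatim, which is the toy analogue of the semantic round-tripping property claimed in the abstract.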

2013 ◽  
Vol 13 (4-5) ◽  
pp. 487-501 ◽  
Author(s):  
ROLF SCHWITTER

Abstract: In this paper we take on Stuart C. Shapiro's challenge of solving the Jobs Puzzle automatically and do this via controlled natural language processing. Instead of encoding the puzzle in a formal language that might be difficult to use and understand, we employ a controlled natural language as a high-level specification language that adheres closely to the original notation of the puzzle and allows us to reconstruct the puzzle in a machine-processable way and add missing and implicit information to the problem description. We show how the resulting specification can be translated into an answer set program and be processed by a state-of-the-art answer set solver to find the solutions to the puzzle.
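The generate-and-test search an answer set solver performs on such a puzzle can be mimicked on a tiny scale in plain Python. The clue set below is a simplified invention in the style of the Jobs Puzzle, not Shapiro's original formulation.

```python
from itertools import permutations

# Who holds which job? Clues invented for this miniature example.
people = ["roberta", "thelma", "steve"]
jobs = ["nurse", "actor", "teacher"]

def solve():
    """Enumerate candidate assignments and keep those passing every clue,
    mimicking a solver's stable-model search on a tiny scale."""
    solutions = []
    for assignment in permutations(jobs):
        holds = dict(zip(people, assignment))
        if holds["steve"] == "nurse":      # clue: Steve is not the nurse
            continue
        if holds["steve"] == "actor":      # clue: Steve is not the actor
            continue
        if holds["thelma"] == "actor":     # clue: Thelma is not the actor
            continue
        if holds["roberta"] == "teacher":  # clue: Roberta is not the teacher
            continue
        solutions.append(holds)
    return solutions
```

With these four clues exactly one assignment survives; in the real pipeline, each clue would arrive as a controlled-natural-language sentence compiled into an ASP constraint rather than a hand-written `if`.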


Author(s):  
Goran Klepac ◽  
Marko Velić

This chapter covers natural language processing techniques and their application in predictive model development. Two case studies are presented. The first case describes a project in which textual descriptions of various situations in the call centre of a telecommunications company were processed in order to predict churn. The second case describes sentiment analysis of business news and discusses practical and testing issues in text mining projects. The two case studies depict different approaches and are implemented in different tools. The language of the texts processed in these projects is Croatian, which belongs to the Slavic group of languages and has more complex morphology and grammar rules than English. The chapter concludes with several points on possible future research in this domain.
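The kind of text classifier such churn and sentiment projects rely on can be sketched as a bag-of-words Naive Bayes model. The tiny "corpus" and labels below are invented, and real Croatian text would additionally need the morphological normalisation the chapter alludes to.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns per-label word counts and label priors."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in docs:
        labels[label] += 1
        counts[label].update(text.lower().split())
    return counts, labels

def classify(text, counts, labels):
    """Pick the label maximising log prior + log likelihood of the words."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label in labels:
        total = sum(counts[label].values())
        score = math.log(labels[label] / sum(labels.values()))
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero the probability.
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Invented training snippets standing in for call-centre transcripts.
docs = [
    ("very unhappy wants to cancel contract", "churn"),
    ("complaint about billing wants to leave", "churn"),
    ("happy with new tariff thanks", "stay"),
    ("satisfied service works fine", "stay"),
]
counts, labels = train(docs)
```

In practice a lemmatiser or stemmer for Croatian would replace the bare `.split()`, since rich inflection otherwise fragments the vocabulary.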


2020 ◽  
Vol 27 (1) ◽  
Author(s):  
MK Aregbesola ◽  
RA Ganiyu ◽  
SO Olabiyisi ◽  
EO Omidiora

The concept of automated grammar evaluation of natural language texts is one that has attracted significant interest in the natural language processing community. It is the examination of natural language text for grammatical accuracy using computer software. The current work is a comparative study of different deep and shallow parsing techniques that have been applied to lexical analysis and grammaticality evaluation of natural language texts. The comparative analysis was based on data gathered from numerous related works. Shallow parsing using induced grammars was first examined along with its two main sub-categories, the probabilistic statistical parsers and the connectionist approach using neural networks. Deep parsing using handcrafted grammar was subsequently examined along with several of its sub-categories, including Transformational Grammars, Feature Based Grammars, Lexical Functional Grammar (LFG), Definite Clause Grammar (DCG), Property Grammar (PG), Categorial Grammar (CG), Generalized Phrase Structure Grammar (GPSG), and Head-driven Phrase Structure Grammar (HPSG). Based on facts gathered from the literature on the aforementioned formalisms, a comparative analysis of the deep and shallow parsing techniques was performed. The comparative analysis showed, among other things, that while the shallow parsing approach was usually domain dependent, influenced by sentence length and lexical frequency, and employed machine learning to induce grammar rules, the deep parsing approach was not domain dependent, was influenced by neither sentence length nor lexical frequency, and made use of a well-specified set of precise linguistic rules. The deep parsing techniques proved to be a more labour-intensive approach, while the induced grammar rules were usually faster to obtain, and their reliability increased with the size, accuracy and coverage of the training data.
The shallow parsing approach has gained immense popularity owing to the availability of large corpora for different languages, and has therefore become the most accepted and adopted approach in recent times.
Keywords: Grammaticality, Natural language processing, Deep parsing, Shallow parsing, Handcrafted grammar, Precision grammar, Induced grammar, Automated scoring, Computational linguistics, Comparative study.
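The contrast drawn above can be made concrete on toy data: the "deep" side checks a sentence against a handcrafted rule, while the "shallow" side accepts whatever statistics induced from a small corpus have covered. Both the miniature lexicon and the corpus are invented for illustration.

```python
from collections import Counter

# Deep / handcrafted: a single precise rule says a sentence is Det Noun Verb.
LEXICON = {"the": "Det", "a": "Det", "dog": "Noun", "cat": "Noun",
           "barks": "Verb", "sleeps": "Verb"}

def deep_grammatical(sentence):
    """Accept exactly the sentences licensed by the handcrafted rule."""
    tags = [LEXICON.get(w) for w in sentence.split()]
    return tags == ["Det", "Noun", "Verb"]

# Shallow / induced: learn word bigrams from a corpus and accept a sentence
# only if every one of its bigrams was observed in training.
def induce_bigrams(corpus):
    bigrams = Counter()
    for sentence in corpus:
        words = sentence.split()
        bigrams.update(zip(words, words[1:]))
    return bigrams

def shallow_grammatical(sentence, bigrams):
    words = sentence.split()
    return all(b in bigrams for b in zip(words, words[1:]))

corpus = ["the dog barks", "a cat sleeps", "the cat sleeps"]
```

The sketch reproduces the trade-off in miniature: "a dog barks" satisfies the handcrafted rule but is rejected by the induced model because the bigram "a dog" never occurred in training, exactly the domain and data dependence the comparison attributes to shallow parsing.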


Author(s):  
Sa Wang ◽  
Hui Xu

In view of the lack of intelligent guidance in online teaching of English composition, this paper proposes an intelligent support system for English writing based on the B/S (browser/server) mode. On the basis of vocabulary, grammar rules and other corpus resources, the system uses Natural Language Processing technology that combines rule matching with probability statistics to evaluate compositions and improve the efficiency of composition assessment. The empirical results show that the system can effectively guide the direction of teaching according to the results of its intelligent quantitative analysis.
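The hybrid "rule matching plus probability statistics" design can be sketched as two passes: hard-coded rules flag known error patterns, and bigram statistics from a reference corpus flag improbable word sequences. The paper does not detail its rule set, so the patterns and corpus below are illustrative only.

```python
import re
from collections import Counter

# Hypothetical error rules, invented for the sketch.
ERROR_RULES = [
    (r"\ba [aeiou]\w*", "use 'an' before a vowel sound"),
    (r"\b(\w+) \1\b", "repeated word"),
]

def rule_check(text):
    """Rule-matching pass: return a message for every matched error pattern."""
    return [msg for pattern, msg in ERROR_RULES if re.search(pattern, text.lower())]

def induce_bigrams(corpus):
    """Probability-statistics pass: count word bigrams in a reference corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        counts.update(zip(words, words[1:]))
    return counts

def unlikely_sequences(text, counts):
    """Flag word pairs never seen in the reference corpus."""
    words = text.lower().split()
    return [(a, b) for a, b in zip(words, words[1:]) if counts[(a, b)] == 0]
```

A production system would back the statistical pass with a large corpus and smoothed probabilities rather than raw zero counts, but the division of labour between the two passes is the same.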


2015 ◽  
Vol 22 (2) ◽  
pp. 257-290 ◽  
Author(s):  
RAQUEL HERVÁS ◽  
JAVIER ARROYO ◽  
VIRGINIA FRANCISCO ◽  
FEDERICO PEINADO ◽  
PABLO GERVÁS

Abstract: Variability is inherent in human language as different people make different choices when facing the same communicative act. In Natural Language Processing, variability is a challenge. It hinders some tasks such as evaluation of generated expressions, while it constitutes an interesting resource to achieve naturalness and to avoid repetitiveness. In this work, we present a methodological approach to study the influence of lexical variability. We apply this approach to TUNA, a corpus of referring expression lexicalizations, in order to study the use of different lexical choices. First, we reannotate the TUNA corpus with new information about lexicalization, and then we analyze this reannotation to study how people lexicalize referring expressions. The results show that people tend to be consistent when generating referring expressions. But at the same time, different people also share certain preferences.
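The two findings, within-annotator consistency and shared preferences across annotators, can be quantified with simple frequency measures. The mini-corpus below is invented (it is not TUNA data): each annotator lexicalises the same referent attribute several times.

```python
from collections import Counter

# Invented lexicalisations of one attribute by three annotators.
choices = {
    "ann1": ["sofa", "sofa", "sofa", "couch"],
    "ann2": ["couch", "couch", "couch", "couch"],
    "ann3": ["settee", "sofa", "settee", "settee"],
}

def consistency(lexicalisations):
    """Share of uses taken by the annotator's most frequent choice:
    1.0 means the annotator always lexicalises the attribute the same way."""
    counts = Counter(lexicalisations)
    return counts.most_common(1)[0][1] / len(lexicalisations)

def shared_preference(all_choices):
    """Most frequent lexicalisation pooled across all annotators."""
    total = Counter()
    for lex in all_choices.values():
        total.update(lex)
    return total.most_common(1)[0][0]
```

On real reannotated data one would report these statistics per attribute and per annotator; here they merely make the "consistent individually, overlapping collectively" pattern computable.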


2011 ◽  
Vol 20 (05) ◽  
pp. 955-967 ◽  
Author(s):  
F. JAVIER ORTEGA ◽  
JOSÉ A. TROYANO ◽  
FRANCISCO J. GALÁN ◽  
CARLOS G. VALLEJO ◽  
FERMÍN CRUZ

This paper presents the ideas, experiments and specifications related to the Supervised TextRank (STR) technique, a word tagging method based on the TextRank algorithm. The main innovation of the STR technique is the use of a graph-based ranking algorithm similar to PageRank in a supervised fashion, gathering the information needed to build the graph representations of the text from a tagged corpus. We also propose a flexible graph specification language that allows us to experiment easily with multiple configurations for the topology of the graph and for the information associated with the nodes and the edges. We have carried out experiments on the Part-Of-Speech tagging task, a common tagging problem in Natural Language Processing. In our best result we achieved a precision of 96.16%, on a par with the best tagging tools.
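The graph-ranking core of such an approach is a PageRank-style power iteration. The word graph below is invented and assumes every node has at least one outgoing edge; in STR the edges would instead be derived from a tagged corpus.

```python
def pagerank(edges, damping=0.85, iters=50):
    """Power iteration over a directed graph given as (source, target) pairs.
    Assumes every node has at least one outgoing edge (no dangling nodes)."""
    nodes = {n for e in edges for n in e}
    score = {n: 1 / len(nodes) for n in nodes}
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            # Each in-neighbour m passes on an equal share of its score.
            incoming = sum(score[m] / len(out[m]) for m in nodes if n in out[m])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        score = new
    return score
```

In the supervised setting the interesting design space, which the paper's graph specification language exposes, is what the nodes and edges encode (word-tag pairs, context windows, feature links), not the iteration itself.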


2022 ◽  
Vol 29 (1) ◽  
pp. 28-41
Author(s):  
Carolinne Roque e Faria ◽  
Cinthyan S. C. Barbosa

Technologies in the agronomic field aim to propose the best solutions to the challenges found in agriculture, especially to the problems that affect cultivars. One of the obstacles is allowing users in Brazilian agribusiness to interact with applications in their own language. This work therefore uses Natural Language Processing techniques to develop an automatic and effective computer system that interacts with the user and assists in the identification of pests and diseases in the soybean crop, with the data stored in a non-relational database repository to provide accurate diagnostics and to simplify the work of farmers and other agricultural stakeholders who deal with large amounts of information. In order to build dialogues and provide rich consultations, a data structure covering 108 pests and diseases of the soybean cultivar, with their associated information, was assembled from agriculture manuals; using the spaCy tool, it was possible to pre-process the texts, recognise the entities and support the requirements for the development of the conversational system.
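The paper uses spaCy for entity recognition; the dependency-free sketch below mimics that step with a simple gazetteer lookup over known pest and disease names. The entries and advice strings are invented placeholders, not the paper's 108-item data structure.

```python
# Invented gazetteer: entity name -> (type, advice note).
GAZETTEER = {
    "asian rust": ("disease", "Fungal disease; monitor humidity and apply fungicide early."),
    "velvetbean caterpillar": ("pest", "Defoliating caterpillar; scout leaves weekly."),
}

def recognise(question):
    """Return (entity, type, note) for the first known entity in the
    user's question, or None when nothing in the gazetteer matches."""
    q = question.lower()
    for name, (etype, note) in GAZETTEER.items():
        if name in q:
            return name, etype, note
    return None
```

A spaCy-based pipeline would add tokenisation, lemmatisation and statistical entity recognition on top of this lookup, so that inflected or partial mentions still resolve to the right catalogue entry.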

