A novel framework for synthesizing nested queries in SQL from business requirements language

Author(s):  
Mathew George, et al.

Different methods and systems have been proposed in the past for translating Natural Language (NL) statements into Structured Query Language (SQL) queries. Translating statements that result in 'nested' queries has always been a challenge and has not been effectively handled. This work proposes a framework for translating requirement statements that result in the construction of nested queries. When translating nested scenarios, there is often a need to create sub-queries that execute in pipeline, in parallel, or both operating together. Lambda calculus is found to be effective in representing the intermediate expressions and helps in performing the transformations needed to translate specific predicates into SQL, but its inflexibility in combining parallel computations is a constraint. To represent clauses that are in pipeline or in parallel, and to perform the required transformations on the intermediate expressions involving them, more advanced programming constructs are needed. This work recommends the use of advanced language constructs and adopts functional programming techniques for performing the required transformations at the intermediate language level.
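The distinction between pipelined and parallel sub-queries can be illustrated with a small functional-composition sketch. This is not the paper's framework or its lambda-calculus intermediate form; the combinators, schema, and SQL fragments below are hypothetical, chosen only to show how pipelining naturally produces a nested sub-query while parallel clauses combine as independent conjoined predicates.

```python
# Hypothetical sketch (not the paper's implementation): sub-queries as
# composable thunks. Pipelining nests one query inside another; parallel
# composition conjoins independent sub-queries.

def pipeline(outer, inner):
    """Nest `inner` as a sub-query feeding `outer` (pipelined execution)."""
    return lambda: outer(f"({inner()})")

def parallel(*branches):
    """Combine independent sub-queries as conjoined EXISTS predicates."""
    return lambda: " AND ".join(f"EXISTS ({b()})" for b in branches)

# Leaf sub-queries over an illustrative schema.
emp_in_dept = lambda: "SELECT emp_id FROM employees WHERE dept = 'Sales'"
high_salary = lambda: "SELECT emp_id FROM salaries WHERE amount > 50000"

# Pipeline: the nested result restricts the outer query.
query = pipeline(
    lambda sub: f"SELECT name FROM employees WHERE emp_id IN {sub}",
    high_salary,
)
print(query())

# Parallel: both conditions must hold independently.
both = parallel(emp_in_dept, high_salary)
print(both())
```

The point of the sketch is the asymmetry the abstract describes: the pipelined case is expressible as ordinary function composition (one lambda applied to another's result), whereas the parallel case needs a variadic combinator that plain lambda-calculus-style chaining does not provide directly.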

2021 ◽  
Vol 69 (08) ◽  
pp. 32-37
Author(s):  
Turan Şahin qızı Kərimbəyli

Today our main goal is the use of authentic texts, including the study of the peculiarities of intercultural communication, in the communicative teaching of foreign languages. An authentic text reflects natural language use. It should be noted that the use of authentic texts in foreign language teaching should be determined by the language level of the students. The selection criteria for authentic texts in German differ depending on the students' language level. Key words: authentic texts, intercultural communication, communicative learning


Author(s):  
Steven Noel ◽  
Stephen Purdy ◽  
Annie O’Rourke ◽  
Edward Overly ◽  
Brianna Chen ◽  
...  

This paper describes the Cyber Situational Understanding (Cyber SU) Proof of Concept (CySUP) software system for exploring advanced Cyber SU capabilities. CySUP distills complex interrelationships among cyberspace entities to provide the “so what” of cyber events for tactical operations. It combines a variety of software components to build an end-to-end pipeline for live data ingest that populates a graph knowledge base, with query-driven exploratory analysis and interactive visualizations. CySUP integrates with the core infrastructure environment supporting command posts to provide a cyber overlay onto a common operating picture oriented to tactical commanders. It also supports detailed analysis of cyberspace entities and relationships driven by ad hoc graph queries, including the conversion of natural language inquiries to formal query language. To help assess its Cyber SU capabilities, CySUP leverages automated cyber adversary emulation to carry out controlled cyberattack campaigns that impact elements of tactical missions.


Author(s):  
Xiaohan Guan ◽  
Jianhui Han ◽  
Zhi Liu ◽  
Mengmeng Zhang

Many tasks in natural language processing, such as information retrieval, intelligent question answering, and machine translation, require the calculation of sentence similarity. Traditional calculation methods could not solve semantic understanding problems well. First, model structures based on Siamese networks lack interaction between the two sentences; second, matching-based models suffer from missing position information and use only partial matching factors. In this paper, a combination of words and word dependencies is proposed to calculate sentence similarity. This combination can extract both word features and word-dependency features. To extract more matching features, a bi-directional multi-interaction matching sequence model is proposed using word2vec and dependency2vec. This model obtains matching features by convolving and pooling the word-granularity (word vector, dependency vector) interaction sequences in two directions, and then aggregates the bi-directional matching features. The paper evaluates the model on two tasks: paraphrase identification and natural language inference. The experimental results show that the combination of words and word dependencies enhances the ability to extract matching features between two sentences, and that the model with dependency information achieves higher accuracy than models without it.
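The core idea of augmenting word vectors with dependency vectors before comparing sentences can be sketched minimally. This is an illustrative toy, not the paper's bi-directional matching model: the vectors are hand-picked values standing in for pretrained word2vec and dependency2vec embeddings, and plain averaging plus cosine similarity replaces the convolution-and-pooling architecture.

```python
# Illustrative sketch only (not the paper's model): concatenating word
# embeddings with dependency-relation embeddings, then comparing the
# averaged sentence vectors by cosine similarity. Toy 2-d vectors stand
# in for pretrained word2vec / dependency2vec outputs.
import math

def combine(word_vec, dep_vec):
    """Concatenate a word vector with its dependency-relation vector."""
    return word_vec + dep_vec

def sentence_vector(token_vectors):
    """Average token-level combined vectors into one sentence vector."""
    dim = len(token_vectors[0])
    return [sum(v[i] for v in token_vectors) / len(token_vectors)
            for i in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two near-paraphrase "sentences" of two tokens each.
s1 = [combine([1.0, 0.0], [0.5, 0.5]), combine([0.0, 1.0], [0.5, 0.5])]
s2 = [combine([1.0, 0.1], [0.5, 0.4]), combine([0.1, 1.0], [0.4, 0.5])]

sim = cosine(sentence_vector(s1), sentence_vector(s2))
print(round(sim, 3))
```

The concatenation step is the part that mirrors the abstract's claim: the dependency channel gives the similarity function structural evidence that surface word vectors alone do not carry.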


1995 ◽  
Vol 107-108 ◽  
pp. 89-111
Author(s):  
Jan Daugaard ◽  
Sabine Kirchmeier-Andersen ◽  
Lene Schøsler

Abstract The above research team has for the past 4 years been working on a database of valency schemes for 4,000 Danish verbs. First we present the underlying theoretical assumptions for the creation of valency schemes. Then the tools to perform automatic extraction of valency information from corpora are described. Finally, the results are presented. Keywords: natural language parsing, Danish, lexical valency, the Pronominal Approach, corpus analysis.


Author(s):  
Linda Huber ◽  
Rebecca Williamson

Machine-learning (ML) enabled products and features allow for an expanded set of user inputs and system outputs. In the case of consumer-facing ML-enabled products, examples include the ability to take natural language as an input, or to generate personalized feeds or recommendations (system output) based on a user's behavior. Based on our experience conducting UX research on ML-enabled products, we propose that this expanded set of inputs and outputs requires that we modify our usual methodologies in a few ways, depending on the product and research objectives. Specifically, we propose that researching ML-enabled products may require (1) more time for the user to explore or experiment with the product, (2) talking to more user types, and/or (3) more comprehensive prototypes or presentation of the product concept. To flesh this framework out, we present three examples of ML-enabled products we tested in the past few years and the methodological modifications required.


2019 ◽  
Vol 34 (1) ◽  
pp. 1-37
Author(s):  
Grant Armstrong

Abstract In many languages a set of adjectives are characterized by their “past/passive” participial morphology. Lexicalist and syntactic approaches to word formation converge on the claim that such adjectives can be derived from verbal inputs with no external argument but never from verbal inputs with an external argument. That is, there are “adjectival passives” but no “adjectival antipassives” marked with the same morphology. I argue that a sub-class of adjectives marked with the “past/passive” participial morpheme –do in Spanish, labeled participios activos in descriptive grammars, should be treated as adjectival antipassives in precisely this sense. I propose that Spanish has an Asp head that (i) is spelled out with “past/passive” participial morphology and (ii) selects an unergative verbal input creating a state/property whose argument corresponds to the external argument of that verbal source. If on the right track, the proposal supports the existence of a typology of adjectivizing heads that are spelled out uniformly with “past/passive” participial morphology but must be distinguished in terms of selectional and semantic properties (Bruening 2014, Word formation is syntactic: Adjectival passives in English. Natural Language and Linguistic Theory 32. 363–422; Embick 2004, On the structure of resultative participles in English. Linguistic Inquiry 35. 355–392). It differs from previous approaches in claiming that such a typology must include root-derived adjectives, as well as ‘active (=unergative)’ and ‘passive’ deverbal adjectives.


2018 ◽  
Vol 61 ◽  
pp. 65-170 ◽  
Author(s):  
Albert Gatt ◽  
Emiel Krahmer

This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past two decades, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of NLP, with an emphasis on different evaluation methods and the relationships between them.


Author(s):  
Marco Eilers ◽  
Severin Meier ◽  
Peter Müller

Abstract Most existing program verifiers check trace properties such as functional correctness, but do not support the verification of hyperproperties, in particular, information flow security. In principle, product programs allow one to reduce the verification of hyperproperties to trace properties and, thus, apply standard verifiers to check them; in practice, product constructions are usually defined only for simple programming languages without features like dynamic method binding or concurrency and, consequently, cannot be directly applied to verify information flow security in a full-fledged language. However, many existing verifiers encode programs from source languages into simple intermediate verification languages, which opens up the possibility of constructing a product program on the intermediate language level, reusing the existing encoding and drastically reducing the effort required to develop new verification tools for information flow security. In this paper, we explore the potential of this approach along three dimensions: (1) Soundness: We show that the combination of an encoding and a product construction that are individually sound can still be unsound, and identify a novel condition on the encoding that ensures overall soundness. (2) Concurrency: We show how sequential product programs on the intermediate language level can be used to verify information flow security of concurrent source programs. (3) Performance: We implement a product construction in Nagini, a Python verifier built upon the Viper intermediate language, and evaluate it on a number of challenging examples. We show that the resulting tool offers acceptable performance, while matching or surpassing existing tools in its combination of language feature support and expressiveness.
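The basic product-program idea, reducing an information flow hyperproperty to a trace property, can be shown with a tiny self-composition example. This is my own toy illustration, not Nagini or Viper: two copies of a program run on the same public (low) input but different secrets (high), and non-interference reduces to the ordinary assertion that the observable outputs coincide.

```python
# Minimal self-composition sketch (toy example, not Nagini/Viper):
# non-interference says the low output must not depend on high inputs.
# Running two copies that share the low input turns this 2-trace
# hyperproperty into a single-trace equality assertion.

def program(low, high):
    """Toy program whose result should depend only on `low`."""
    result = low * 2
    if high > 0:   # a secure program may branch on the secret...
        pass       # ...but must not let it affect the observable result
    return result

def secure_by_self_composition(low, high1, high2):
    """Product of two runs sharing `low`: low outputs must coincide."""
    return program(low, high1) == program(low, high2)

# Any pair of secrets yields the same public output -> no observed leak.
assert secure_by_self_composition(low=3, high1=0, high2=99)
```

A verifier working on an intermediate language sees only the composed program and the final equality, which is exactly why, as the abstract argues, an existing trace-property verifier can be reused once the product is constructed at that level.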


Semantic Web ◽  
2021 ◽  
pp. 1-17
Author(s):  
Lucia Siciliani ◽  
Pierpaolo Basile ◽  
Pasquale Lops ◽  
Giovanni Semeraro

Question Answering (QA) over Knowledge Graphs (KG) aims to develop a system that is capable of answering users' questions using the information coming from one or multiple Knowledge Graphs, like DBpedia, Wikidata, and so on. Question Answering systems need to translate the user's question, written in natural language, into a query formulated in a specific data query language that is compliant with the underlying KG. This translation process is already non-trivial when trying to answer simple questions that involve a single triple pattern. It becomes even more troublesome when trying to cope with questions that require modifiers in the final query, i.e., aggregate functions, query forms, and so on. Attention to this last aspect is growing but has never been thoroughly addressed by the existing literature. Starting from the latest advances in this field, we take a further step in this direction. This work aims to provide a publicly available dataset designed for evaluating the performance of a QA system in translating articulated questions into a specific data query language. This dataset has also been used to evaluate three state-of-the-art QA systems.
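An example makes the "modifier" difficulty concrete. The hand-written mapping below is illustrative only, not the output of any system evaluated in the paper: an aggregate question over DBpedia cannot be answered by a plain triple-pattern lookup and instead requires a COUNT modifier in the SPARQL query.

```python
# Illustrative NL-to-SPARQL pair (hand-written, targeting DBpedia).
# A plain triple pattern would return the films themselves; the
# question asks "how many", so the query needs an aggregate modifier.

question = "How many movies did Stanley Kubrick direct?"

sparql = """
SELECT (COUNT(?film) AS ?count) WHERE {
  ?film dbo:director dbr:Stanley_Kubrick .
}
""".strip()

print(sparql)
```

Generating the inner triple pattern is the "simple" part of the translation; recognizing that the question's intent demands wrapping it in an aggregate is precisely the capability the proposed dataset is designed to evaluate.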

