Representing and Learning Grammars in Answer Set Programming

Author(s):  
Mark Law ◽  
Alessandra Russo ◽  
Elisa Bertino ◽  
Krysia Broda ◽  
Jorge Lobo

In this paper we introduce an extension of context-free grammars called answer set grammars (ASGs). These grammars allow annotations on production rules, written in the language of Answer Set Programming (ASP), which can express context-sensitive constraints. We investigate the complexity of various classes of ASG with respect to two decision problems: deciding whether a given string belongs to the language of an ASG and deciding whether the language of an ASG is non-empty. Specifically, we show that the complexity of these decision problems can be lowered by restricting the subset of the ASP language used in the annotations. To aid the applicability of these grammars to computational problems that require context-sensitive parsers for partially known languages, we propose a learning task for inducing the annotations of an ASG. We characterise the complexity of this task and present an algorithm for solving it. An evaluation of a (prototype) implementation is also discussed.
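The central idea above, a context-free backbone whose parses are filtered by context-sensitive constraints expressed in ASP, can be illustrated with a toy sketch. This is not the paper's ASG syntax: the grammar, the target language a^n b^n c^n, and all function names below are illustrative assumptions.

```python
# Illustrative sketch only (not the ASG notation from the paper): a
# context-free backbone generates a*b*c*, and an annotation-style
# context-sensitive constraint filters it down to the non-context-free
# language a^n b^n c^n.
import re

def cf_backbone_accepts(s: str) -> bool:
    """Membership in the context-free backbone language a*b*c*."""
    return re.fullmatch(r"a*b*c*", s) is not None

def constraint_holds(s: str) -> bool:
    """Annotation-style constraint: the three blocks have equal length."""
    return s.count("a") == s.count("b") == s.count("c")

def asg_style_accepts(s: str) -> bool:
    """Accept iff the backbone parses AND the constraint is satisfied."""
    return cf_backbone_accepts(s) and constraint_holds(s)

print(asg_style_accepts("aabbcc"))  # True
print(asg_style_accepts("aabbc"))   # False
```

The point of the sketch is the division of labour: the backbone check is the decidable context-free part, while the counting constraint stands in for the ASP annotations that make the overall language context-sensitive.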

2007 ◽  
Vol 18 (06) ◽  
pp. 1323-1332
Author(s):  
CARLOS MARTIN-VIDE ◽  
VICTOR MITRANA

This work continues the investigation started in [12], where a new-old type of control on context-free grammars was considered. This type of control is extracted and abstracted from a paper ([2]) with very solid linguistic motivations. The goal of this paper is to complete the picture of path-controlled grammars begun in [12] with some mathematical properties missing from the aforementioned work: closure and decidability properties, including a polynomial-time recognition algorithm.


Symmetry ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 1209
Author(s):  
Sherzod Turaev ◽  
Rawad Abdulghafor ◽  
Ali Amer Alwan ◽  
Ali Abd Almisreb ◽  
Yonis Gulzar

A binary grammar is a relational grammar with two nonterminal alphabets, two terminal alphabets, a set of pairs of productions and the pair of the initial nonterminals that generates the binary relation, i.e., the set of pairs of strings over the terminal alphabets. This paper investigates the binary context-free grammars as mutually controlled grammars: two context-free grammars generate strings imposing restrictions on selecting production rules to be applied in derivations. The paper shows that binary context-free grammars can generate matrix languages whereas binary regular and linear grammars have the same power as Chomskyan regular and linear grammars.
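The "mutually controlled" derivation idea can be sketched as two grammars that only ever apply productions in pairs, so that each derivation step in one grammar restricts the step taken in the other. The toy relation {(a^n, b^n) : n ≥ 1}, the rule pairing, and all names below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: two right-linear grammars derive in lockstep, applying
# only paired productions, and so generate a binary relation (a set of
# pairs of terminal strings).
PAIRED_RULES = [
    (("S", "aS"), ("T", "bT")),   # both components grow together
    (("S", "a"),  ("T", "b")),    # both components terminate together
]

def derive_pairs(max_steps: int):
    """Enumerate terminal string pairs derivable within max_steps paired steps."""
    results = set()
    frontier = {("S", "T")}
    for _ in range(max_steps):
        nxt = set()
        for u, v in frontier:
            for (lhs1, rhs1), (lhs2, rhs2) in PAIRED_RULES:
                if lhs1 in u and lhs2 in v:
                    nu = u.replace(lhs1, rhs1, 1)
                    nv = v.replace(lhs2, rhs2, 1)
                    if nu.islower() and nv.islower():
                        results.add((nu, nv))   # both fully terminal
                    else:
                        nxt.add((nu, nv))
        frontier = nxt
    return results

print(sorted(derive_pairs(4)))  # [('a', 'b'), ('aa', 'bb'), ...]
```

Because a rule in one component can fire only together with its partner in the other, each grammar restricts the derivations of the other, which is the control mechanism the abstract describes.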


2015 ◽  
Vol 15 (4-5) ◽  
pp. 511-525 ◽  
Author(s):  
MARK LAW ◽  
ALESSANDRA RUSSO ◽  
KRYSIA BRODA

This paper contributes to the area of inductive logic programming by presenting a new learning framework that allows the learning of weak constraints in Answer Set Programming (ASP). The framework, called Learning from Ordered Answer Sets, generalises our previous work on learning ASP programs without weak constraints by considering a new notion of examples as ordered pairs of partial answer sets that exemplify which answer sets of a learned hypothesis (together with a given background knowledge) are preferred to others. In this new learning task, inductive solutions are searched for within a hypothesis space of normal rules, choice rules, and hard and weak constraints. We propose a new algorithm, ILASP2, which is sound and complete with respect to our new learning framework. We investigate its applicability to learning preferences in an interview scheduling problem and also demonstrate that, when restricted to the task of learning ASP programs without weak constraints, ILASP2 can be much more efficient than our previously proposed system.


2014 ◽  
Vol 50 ◽  
pp. 31-70 ◽  
Author(s):  
Y. Wang ◽  
Y. Zhang ◽  
Y. Zhou ◽  
M. Zhang

The ability to discard or hide irrelevant information has been recognized as an important feature of knowledge-based systems, including answer set programming. The notion of strong equivalence in answer set programming plays an important role in several problems, as it gives rise to a substitution principle and amounts to knowledge equivalence of logic programs. In this paper, we uniformly propose a semantic notion of knowledge forgetting, called HT- and FLP-forgetting, for logic programs under the stable model and FLP-stable model semantics, respectively. Our proposed knowledge forgetting discards exactly the knowledge of a logic program that is relevant to the forgotten variables. Thus it preserves strong equivalence, in the sense that strongly equivalent logic programs remain strongly equivalent after forgetting the same variables. We show that this semantic forgetting result is always expressible, and we prove a representation theorem stating that HT- and FLP-forgetting can be precisely characterized by Zhang-Zhou's four forgetting postulates under the HT- and FLP-model semantics, respectively. We also reveal underlying connections between the proposed forgetting and forgetting in propositional logic, and provide complexity results for decision problems related to forgetting. An application of the proposed forgetting in a conflict-solving scenario is also considered.


Triangle ◽  
2018 ◽  
pp. 101
Author(s):  
Benedek Nagy

In this paper we discuss parallel derivations for context-free, context-sensitive and phrase-structure grammars. For regular and linear grammars only sequential derivation can be applied, but a kind of parallelism is present in linear grammars. We show that finite languages can be generated by a recursion-free rule-set. It is well known that in context-free grammars the derivation can proceed in a maximal (independent) parallel way. We show that in the cases of context-sensitive and recursively enumerable languages the parallel branches of the derivation have some synchronization points. In the case of context-sensitive grammars this synchronization can only be local, but in a derivation of an arbitrary grammar we cannot make this restriction. We present a framework to show how the concept of parallelism can be fitted to derivations in formal language theory using tokens.
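The maximal (independent) parallel derivation mode for context-free grammars can be sketched as a step that rewrites every nonterminal in the sentential form simultaneously, in the style of an L-system. The grammar below (S -> aSb | ab, generating a^n b^n) and the names are illustrative assumptions, not the paper's formalism.

```python
# Sketch of one maximal-parallel derivation step: all occurrences of every
# nonterminal are rewritten at once. Since the rewritten symbols are
# pairwise independent in a context-free grammar, no synchronization between
# the parallel branches is needed.
RULES = {"S": ["aSb", "ab"]}   # S -> aSb | ab

def parallel_step(form: str, choice: int) -> str:
    """Rewrite every nonterminal of `form` in one parallel step."""
    out = []
    for sym in form:
        out.append(RULES[sym][choice] if sym in RULES else sym)
    return "".join(out)

form = "S"
form = parallel_step(form, 0)   # "aSb"
form = parallel_step(form, 0)   # "aaSbb"
form = parallel_step(form, 1)   # all branches terminate together
print(form)  # aaabbb
```

For context-sensitive grammars this independence breaks down, which is exactly where the synchronization points discussed in the abstract come in.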


2019 ◽  
Vol 30 (01) ◽  
pp. 73-92
Author(s):  
Zsolt Gazdag ◽  
Krisztián Tichler ◽  
Erzsébet Csuhaj-Varjú

Permitting semi-conditional grammars (pSCGs) are extensions of context-free grammars where each rule is associated with a word w, and such a rule can be applied to a sentential form u only if w is a subword of u. We consider permitting generalized SCGs (pgSCGs), where each rule r is associated with a set of words P(r) and r is applicable only if every word in P(r) occurs in u. We investigate the generative power of pgSCGs with no erasing rules and prove a pumping lemma for their languages. Using this lemma we show that pgSCGs are strictly weaker than context-sensitive grammars. This solves a long-standing open problem concerning the generative power of pSCGs. Moreover, we give a comparison of the generative power of pgSCGs and that of forbidding random context grammars with no erasing rules.
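The pgSCG applicability condition is simple enough to sketch directly: a rule carries a set of permitting words and may rewrite the sentential form only when every one of them occurs as a subword. The rule, the permitting sets, and the names below are illustrative assumptions.

```python
# Minimal sketch of the pgSCG applicability test: rule r with permitting
# set P(r) may be applied to sentential form u only if every word of P(r)
# is a subword of u.
def applicable(permitting_words, sentential_form):
    return all(w in sentential_form for w in permitting_words)

def apply_rule(lhs, rhs, permitting_words, form):
    """Apply lhs -> rhs once, but only when the permitting context is present."""
    if lhs in form and applicable(permitting_words, form):
        return form.replace(lhs, rhs, 1)
    return form  # rule blocked: some permitting word is missing

print(apply_rule("A", "aA", {"bB"}, "AbB"))   # aAbB  ('bB' is present)
print(apply_rule("A", "aA", {"cC"}, "AbB"))   # AbB   (blocked)
```

With a singleton permitting set this degenerates to the pSCG case of a single associated word, matching the generalization described in the abstract.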


2010 ◽  
Vol 21 (01) ◽  
pp. 1-25
Author(s):  
ETSURO MORIYA ◽  
FRIEDRICH OTTO

The concepts of alternation and of state alternation are extended from context-free grammars to context-sensitive and arbitrary phrase-structure grammars. For the resulting classes of alternating grammars the expressive power is investigated with respect to the leftmost derivation mode and with respect to the unrestricted derivation mode. In particular new grammatical characterizations for the class of languages that are accepted by alternating pushdown automata are obtained in this way.


2021 ◽  
Vol 179 (1) ◽  
pp. 1-33
Author(s):  
Toufik Benouhiba

Probabilistic models play an important role in many fields, such as distributed systems and simulations. Like non-probabilistic systems, they can be synthesized using classical refinement-based techniques, but they also require identifying the probability distributions to be used and their parameters. Since a fully automated and blind refinement is generally undecidable, many works have tried to synthesize such models by searching for the parameters of the distributions. Syntax-guided synthesis approaches are more powerful: they try to synthesize models structurally by using context-free grammars. However, several problems arise, such as a huge search space, the complexity of the generated models, and the inability of context-free grammars to define constraints over the structure. In this paper, we propose a multi-step refinement approach, based on meta-models, offering several abstraction levels to reduce the size of the search space. More specifically, each refinement step is divided into two stages: in the first, the desired shape of the models is described by context-sensitive constraints; in the second, model templates are instantiated by using global optimization techniques. We apply our approach to synthesize a set of optimal probabilistic models and show that context-sensitive constraints, coupled with the multi-level abilities of the approach, make the synthesis task more effective.
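The two-stage shape-then-instantiate idea can be sketched as a toy pipeline: stage one keeps only grammar-generated model templates that satisfy a context-sensitive structural constraint, and stage two fills in each surviving template's parameter by a crude global search. Everything here, the template strings, the constraint, and the objective, is an assumption for illustration, not the paper's method.

```python
# Illustrative only: a toy version of the two-stage refinement idea.
TEMPLATES = ["exp(p)", "exp(p)+exp(p)", "uniform(p)"]  # from a toy grammar

def shape_ok(template):
    # Stage 1: context-sensitive structural constraint -- here, at most
    # one exponential component is allowed in a model.
    return template.count("exp") <= 1

def fit(template, target):
    # Stage 2: instantiate the template's parameter by brute-force global
    # search over a grid, minimizing a toy objective |p - target|.
    best_p = min((p / 10 for p in range(1, 51)), key=lambda p: abs(p - target))
    return template, best_p

candidates = [fit(t, target=2.5) for t in TEMPLATES if shape_ok(t)]
print(candidates)  # [('exp(p)', 2.5), ('uniform(p)', 2.5)]
```

The structural filter prunes the search space before any optimization runs, which is the cost-reduction role the abstract assigns to the context-sensitive constraints.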


1975 ◽  
Vol 4 (43) ◽  
Author(s):  
Grzegorz Rozenberg ◽  
Arto Salomaa

It is shown that every context-sensitive language can be generated by a context-free grammar with graph control over sets of productions. This can be done in two different ways, corresponding to unconditional transfer programmed grammars and programmed grammars with empty failure fields. Also some results concerning ordinary programmed grammars are established.
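The control mechanism behind programmed grammars can be sketched concretely: each labelled context-free rule names the set of labels allowed next (its success field), and this control alone lets a context-free core generate a context-sensitive language. The concrete grammar below, for a^n b^n c^n, and its labels are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a programmed grammar: (lhs, rhs, success field).
# The control graph forces one 'a', one 'b' and one 'c' to be produced
# per cycle, yielding the context-sensitive language a^n b^n c^n.
RULES = {
    "p1": ("A", "aA", ["p2"]),        # grow a-block, then must grow b-block
    "p2": ("B", "bB", ["p3"]),
    "p3": ("C", "cC", ["p1", "p4"]),  # loop once more, or start erasing
    "p4": ("A", "",   ["p5"]),        # erase the remaining nonterminals
    "p5": ("B", "",   ["p6"]),
    "p6": ("C", "",   []),
}

def run(labels):
    """Follow a label sequence; fail on illegal control flow or rewriting."""
    form, allowed = "ABC", {"p1", "p4"}
    for lab in labels:
        lhs, rhs, nxt = RULES[lab]
        assert lab in allowed and lhs in form, "illegal derivation"
        form, allowed = form.replace(lhs, rhs, 1), set(nxt)
    return form

print(run(["p1", "p2", "p3", "p1", "p2", "p3", "p4", "p5", "p6"]))  # aabbcc
```

Replacing the success fields with an explicit control graph over sets of productions gives the graph-controlled variant the abstract refers to.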


2006 ◽  
Vol 9 (2) ◽  
Author(s):  
Rafael García

Pseudoknots are a common RNA structure that plays essential roles in a variety of the cell's biocatalytic functions. One of the most challenging problems in bioinformatics is the prediction of this secondary structure from the base sequence that dictates it. Previously, a model adapted from computational linguistics, Stochastic Context-Free Grammars (SCFGs), has been used to predict RNA secondary structure. However, to date the SCFG approach imposes a prohibitive complexity cost, O(n^4), when applied to the prediction of pseudoknots, mainly because a context-sensitive grammar is formally required to analyze them. Other hybrid approaches (energy maximization) give O(n^3) complexity in the best case, besides placing several restrictions on the maximum length of the sequence for practical analysis. Here we introduce a novel algorithm, based on pattern-matching techniques, that uses a sequential approximation strategy to solve the original problem. This algorithm not only reduces the complexity to O(n^2 log n), but also widens the maximum sequence length that can be handled, as well as the capacity to analyze several pseudoknots simultaneously.
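For context, the classical pseudoknot-free baseline that these complexity figures are measured against is a Nussinov-style dynamic program, which maximizes base pairs over nested structures in O(n^3). The sketch below is that standard textbook algorithm, not the paper's pattern-matching method, and the pairing rules (Watson-Crick plus G-U wobble) are the usual assumptions.

```python
# Nussinov-style base-pair maximization for *nested* (pseudoknot-free)
# structures: dp[i][j] = max base pairs within seq[i..j], O(n^3) overall.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq):
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):                 # interval length - 1
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]              # option: leave seq[i] unpaired
            for k in range(i + 1, j + 1):    # option: pair seq[i] with seq[k]
                if (seq[i], seq[k]) in PAIRS:
                    left = dp[i + 1][k - 1] if k > i + 1 else 0
                    right = dp[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(max_base_pairs("GGGAAACCC"))  # 3 (three nested G-C pairs)
```

Pseudoknots are precisely the crossing pairings this recurrence cannot represent, which is why handling them pushes grammar-based methods to O(n^4) and motivates the cheaper approximation the abstract proposes.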

