Verification of Imperative Programs through Transformation of Constraint Logic Programs

10.29007/163x ◽  
2018 ◽  
Author(s):  
Emanuele De Angelis ◽  
Fabio Fioravanti ◽  
Alberto Pettorossi ◽  
Maurizio Proietti

We present a method for verifying partial correctness properties of imperative programs by using techniques based on the transformation of constraint logic programs (CLP). We consider: (i) imperative programs that manipulate integers and arrays, and (ii) first-order logic properties that define configurations of program executions. We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say Prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of Prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case Prog is incorrect), or by (ii) the empty set of clauses (and in this case Prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
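A minimal sketch may help fix ideas about the unfolding step that drives such transformations. The Python fragment below is illustrative only: the clause representation, the function names, and the toy program are assumptions of this example, not taken from the paper. It represents CLP-like clauses as data and unfolds the clause defining 'incorrect'; when unfolding yields the empty set of clauses, the encoded program is correct in the sense described above.

    from dataclasses import dataclass

    @dataclass
    class Clause:
        head: str    # predicate symbol of the head atom
        body: list   # predicate symbols of the body atoms

    # Toy program T:  incorrect :- errorReachable.
    # There is no clause for errorReachable, i.e. the error state is unreachable.
    program = [Clause("incorrect", ["errorReachable"])]

    def unfold(clause, program):
        """Replace the first body atom by the bodies of the clauses defining it."""
        if not clause.body:
            return [clause]
        first, rest = clause.body[0], clause.body[1:]
        return [Clause(clause.head, c.body + rest)
                for c in program if c.head == first]

    # Unfolding 'incorrect' produces no clauses at all (the empty definition),
    # which in the terminology above means the encoded program is correct.
    print(unfold(program[0], program))   # -> []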

10.29007/dkxs ◽  
2018 ◽  
Author(s):  
Emanuele De Angelis ◽  
Fabio Fioravanti ◽  
Alberto Pettorossi ◽  
Maurizio Proietti

The transformation of constraint logic programs (CLP programs) has been shown to be an effective methodology for verifying properties of imperative programs. By following this methodology, we encode the negation of a partial correctness property of an imperative program prog as a predicate incorrect defined by a CLP program P, and we show that prog is correct by transforming P into the empty program through the application of semantics-preserving transformation rules. Some of these rules perform replacements of constraints that encode properties of the data structures manipulated by the program prog. In this paper we show that Constraint Handling Rules (CHR) are a suitable formalism for representing and applying constraint replacements during the transformation of CLP programs. In particular, we consider programs that manipulate integer arrays and we present a CHR encoding of a constraint replacement strategy based on the theory of arrays. We also propose a novel generalization strategy for constraints on integer arrays that combines the CHR constraint replacement strategy with various generalization operators for linear constraints, such as widening and convex hull. Generalization is controlled by additional constraints that relate the variable identifiers in the imperative program and the CLP representation of their values. The method presented in this paper has been implemented and we have demonstrated its effectiveness on a set of benchmark programs taken from the literature.
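As an illustration of the kind of constraint replacement the CHR rules encode, the sketch below applies the standard read-over-write axiom of the theory of arrays in plain Python; the term representation and function names are assumptions for this example, not the paper's CHR code. The rule: read(write(A,I,V), J) is replaced by V when I = J is entailed, and by read(A, J) when I != J is entailed.

    from dataclasses import dataclass

    @dataclass
    class Read:    # read(array, index)
        array: object
        index: str

    @dataclass
    class Write:   # write(array, index, value)
        array: object
        index: str
        value: str

    def replace_read_over_write(term, equal, disequal):
        """Apply the read-over-write axiom once, given entailed (dis)equalities
        represented as sets of index-name pairs."""
        if isinstance(term, Read) and isinstance(term.array, Write):
            w = term.array
            if (w.index, term.index) in equal:
                return w.value                     # read(write(A,I,V), I) -> V
            if (w.index, term.index) in disequal:
                return Read(w.array, term.index)   # indices differ: skip the write
        return term                                # no replacement applies

    # read(write(a, i, x), j) with i != j entailed simplifies to read(a, j)
    t = Read(Write("a", "i", "x"), "j")
    print(replace_read_over_write(t, equal=set(), disequal={("i", "j")}))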


Author(s):  
Sebastijan Dumancic ◽  
Tias Guns ◽  
Wannes Meert ◽  
Hendrik Blockeel

Deep learning methods capable of handling relational data have proliferated over the past few years. In contrast to traditional relational learning methods that leverage first-order logic for representing such data, these methods aim at re-representing symbolic relational data in Euclidean space. They offer better scalability, but can only approximate rich relational structures and are less flexible in terms of reasoning. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. This framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and the latent representation is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (as the representation is exact, rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks.
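A toy sketch of the underlying idea, under assumptions of my own (tiny hand-written facts, a deliberately crude decoder, and brute-force search in place of a constraint solver): candidate logic clauses defining a latent predicate are scored by how many of the original relational facts an encode/decode round trip recovers.

    from itertools import product

    facts = {("parent", "ann", "bob"), ("parent", "bob", "carl")}
    people = {"ann", "bob", "carl"}

    # Two candidate clause definitions for a unary latent predicate latent/1,
    # each paired with the latent atoms it derives from the facts above.
    candidates = {
        "latent(X) :- parent(X, _)": {x for (p, x, y) in facts if p == "parent"},
        "latent(X) :- parent(_, X)": {y for (p, x, y) in facts if p == "parent"},
    }

    def reconstruction_score(latent_atoms):
        # Crude decoder: pair every latent individual with every constant and
        # count how many of the original facts are recovered.
        decoded = {("parent", x, y) for x, y in product(latent_atoms, people)}
        return len(decoded & facts)

    best = max(candidates, key=lambda c: reconstruction_score(candidates[c]))
    print(best, reconstruction_score(candidates[best]))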


2012 ◽  
Vol 177-179 ◽  
pp. 1-24 ◽  
Author(s):  
Vernon Asuncion ◽  
Fangzhen Lin ◽  
Yan Zhang ◽  
Yi Zhou

2009 ◽  
Vol 19 (12) ◽  
pp. 3091-3099 ◽  
Author(s):  
Gui-Hong XU ◽  
Jian ZHANG

Author(s):  
Tim Button ◽  
Sean Walsh

Chapters 6-12 are driven by questions about the ability to pin down mathematical entities and to articulate mathematical concepts. This chapter is driven by similar questions about the ability to pin down the semantic frameworks of language. It transpires that there are not just non-standard models, but non-standard ways of doing model theory itself. In more detail: whilst we normally outline a two-valued semantics which makes sentences True or False in a model, the inference rules for first-order logic are compatible with a four-valued semantics; or a semantics with countably many values; or what-have-you. The appropriate level of generality here is that of a Boolean-valued model, which we introduce. And the plurality of possible semantic values gives rise to perhaps the ‘deepest’ level of indeterminacy questions: How can humans pin down the semantic framework for their languages? We consider three different ways for inferentialists to respond to this question.
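For readers unfamiliar with the notion, the standard clauses for a Boolean-valued model can be stated briefly (a textbook formulation, not quoted from the chapter): truth values are drawn from a complete Boolean algebra B rather than from {True, False}, and the connectives and quantifiers are interpreted by the algebra's operations.

    \llbracket \neg\varphi \rrbracket = \llbracket \varphi \rrbracket^{*}, \qquad
    \llbracket \varphi \wedge \psi \rrbracket = \llbracket \varphi \rrbracket \wedge \llbracket \psi \rrbracket, \qquad
    \llbracket \exists x\, \varphi(x) \rrbracket = \bigvee_{a \in M} \llbracket \varphi(a) \rrbracket

Here * is Boolean complement, the right-hand ∧ is meet, and ⋁ is the complete join in B; the familiar two-valued semantics is the special case B = {0, 1}.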

