computation trees
Recently Published Documents


TOTAL DOCUMENTS: 30 (FIVE YEARS: 3)

H-INDEX: 6 (FIVE YEARS: 1)

10.29007/37lf ◽  
2020 ◽  
Author(s):  
Erika Abraham ◽  
Ezio Bartocci ◽  
Borzoo Bonakdarpour ◽  
Oyendrila Dobe

In this paper, we study the parameter synthesis problem for probabilistic hyperproperties. A probabilistic hyperproperty stipulates quantitative dependencies among a set of executions. In particular, we solve the following problem: given a probabilistic hyperproperty ψ and a discrete-time Markov chain D with parametric transition probabilities, compute the regions of parameter configurations that instantiate D to satisfy ψ, and the regions that lead to violation. We address this problem for a fragment of the temporal logic HyperPCTL that allows expressing quantitative reachability relations among a set of computation trees. We illustrate the application of our technique in the areas of differential privacy, probabilistic noninterference, and probabilistic conformance.
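The region-partitioning idea in the abstract can be sketched on a toy parametric chain. The chain, the property, and the grid sweep below are all hypothetical illustrations, not the authors' HyperPCTL synthesis algorithm: a single parameter p governs a three-state chain, and we classify each sampled p by whether the reachability probability meets a threshold.

```python
# Hypothetical parametric DTMC: from s0, reach the target with prob p,
# fail with prob (1-p)/2, and loop back to s0 with prob (1-p)/2.
# The reachability probability is the least fixed point of
#   x = p + ((1 - p) / 2) * x.

def reach_prob(p, tol=1e-12):
    """Iterate the fixed-point equation to convergence."""
    x = 0.0
    while True:
        nxt = p + (1.0 - p) / 2.0 * x
        if abs(nxt - x) < tol:
            return nxt
        x = nxt

def synthesize_regions(threshold, grid=101):
    """Sample the parameter space [0, 1] and classify each point as
    satisfying or violating the reachability threshold."""
    sat, unsat = [], []
    for i in range(grid):
        p = i / (grid - 1)
        (sat if reach_prob(p) >= threshold else unsat).append(p)
    return sat, unsat
```

For this chain the fixed point has the closed form 2p/(1+p), so a threshold of 0.5 is met exactly when p ≥ 1/3; the sweep recovers that boundary numerically. Real synthesis tools compute symbolic regions rather than sampling a grid.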


10.29007/wscr ◽  
2020 ◽  
Author(s):  
Shuang Xia ◽  
Krysia Broda ◽  
Alessandra Russo

Various sub-symbolic approaches for reasoning and learning have been proposed in the literature. Among these approaches, the neural theorem prover (NTP) approach uses a backward-chaining reasoning mechanism to guide a machine learning architecture to learn vector embedding representations of predicates and to induce first-order clauses from a given knowledge base. NTP, however, is known not to scale, as the computation trees generated by the backward-chaining process can grow exponentially with the size of the given knowledge base. In this paper we address this limitation by extending the NTP approach with a topic-based method for controlling the induction of first-order clauses. Our proposed approach, called TNTP for Topical NTP, identifies topic-based clusters over a large knowledge base and uses these clusters to control the soft unification of predicates during the learning process, with the effect of reducing the size of the computation tree needed to induce first-order clauses. Our TNTP framework is capable of learning a diverse set of induced rules with improved predictive accuracy, while reducing the computational time by several orders of magnitude. We demonstrate this by evaluating our approach on three different datasets (UMLS, Kinship and Nations) and comparing our results with those of the NTP method, chosen here as our baseline.
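The core pruning idea can be illustrated with a toy sketch. The embeddings, similarity threshold, and greedy clustering below are all made up for illustration (TNTP learns embeddings and clusters a large knowledge base); the point is only that restricting soft unification to a predicate's own topic cluster shrinks the candidate set, and hence the computation tree.

```python
import math

# Toy predicate embeddings (hypothetical; real NTP learns these vectors).
EMB = {
    "parent":    [1.00, 0.10],
    "father":    [0.90, 0.20],
    "mother":    [0.95, 0.15],
    "locatedIn": [0.10, 1.00],
    "capitalOf": [0.20, 0.90],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def topic_clusters(emb, threshold=0.95):
    """Greedy clustering: join a predicate to the first cluster whose
    representative (its first member) is similar enough, else start a
    new cluster."""
    clusters = []
    for pred, vec in emb.items():
        for cluster in clusters:
            if cosine(emb[cluster[0]], vec) >= threshold:
                cluster.append(pred)
                break
        else:
            clusters.append([pred])
    return clusters

def unification_candidates(pred, clusters):
    """Soft unification is only attempted within pred's own topic cluster,
    instead of against every predicate in the knowledge base."""
    for cluster in clusters:
        if pred in cluster:
            return cluster
    return []
```

Here the family predicates and the geography predicates fall into separate clusters, so a query about `father` never attempts soft unification with `locatedIn`.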


Author(s):  
DANIEL HILLERSTRÖM ◽  
SAM LINDLEY ◽  
ROBERT ATKEY

Plotkin and Pretnar’s effect handlers offer a versatile abstraction for modular programming with user-defined effects. This paper focuses on foundations for implementing effect handlers, for the three different kinds of effect handlers that have been proposed in the literature: deep, shallow, and parameterised. Traditional deep handlers are defined by folds over computation trees and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. We formulate the extensions both directly and via encodings in terms of deep handlers and illustrate how the direct implementations avoid the generation of unnecessary closures. We give two distinct foundational implementations of all the kinds of handlers we consider: a continuation-passing style (CPS) transformation and a CEK-style abstract machine. In both cases, the key ingredient is a generalisation of the notion of continuation to accommodate stacks of effect handlers. We obtain our CPS translation through a series of refinements as follows. We begin with a first-order CPS translation into untyped lambda calculus which manages a stack of continuations and handlers as a curried sequence of arguments. We then refine the initial CPS translation by uncurrying it to yield a properly tail-recursive translation and then moving towards more and more intensional representations of continuations in order to support different kinds of effect handlers. Finally, we make the translation higher order in order to contract administrative redexes at translation time. Our abstract machine design then uses the same generalised continuation representation as the CPS translation. We have implemented both the abstract machine and the CPS transformation (plus extensions) as backends for the Links web programming language.
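The fold-versus-case-split distinction between deep and shallow handlers can be sketched concretely. The Python encoding below is a minimal illustration (not the paper's calculi or its CPS/abstract-machine implementations): computations are trees whose nodes request effect operations, a deep handler re-applies itself to every resumed subtree, and a shallow handler interprets only the topmost operation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Return:
    value: Any

@dataclass
class Op:
    name: str                    # the effect operation requested
    cont: Callable[[Any], Any]   # continuation: op result -> rest of the tree

def deep_handle(tree):
    """Deep handler for a binary 'choose' effect: a fold over the whole
    computation tree, collecting the results of all branches."""
    if isinstance(tree, Return):
        return [tree.value]
    assert tree.name == "choose"
    return deep_handle(tree.cont(True)) + deep_handle(tree.cont(False))

def shallow_handle(tree):
    """Shallow handler: a single case split. Only the topmost operation
    is interpreted; the resumed computations come back un-handled."""
    if isinstance(tree, Return):
        return [tree]
    assert tree.name == "choose"
    return [tree.cont(True), tree.cont(False)]

# Two coin flips, returning the pair of outcomes.
prog = Op("choose", lambda a: Op("choose", lambda b: Return((a, b))))
```

Running the deep handler on `prog` yields all four outcome pairs, while the shallow handler returns two still-unhandled subtrees, one per branch of the first flip. A parameterised handler would additionally thread a state value through the recursive calls of `deep_handle`.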


2017 ◽  
Vol 28 (03) ◽  
pp. 195-210 ◽  
Author(s):  
Alexandros Palioudakis ◽  
Kai Salomaa ◽  
Selim G. Akl

Many nondeterminism measures for finite automata have been studied in the literature. The tree width of an NFA (nondeterministic finite automaton) counts the number of leaves of computation trees as a function of input length. The trace of an NFA is defined in terms of the largest product of the degrees of nondeterministic choices in computations on inputs of given length. Branching is the corresponding best-case measure, based on the product of nondeterministic choices in the computation that minimizes this value. We establish upper and lower bounds for the trace of an NFA in terms of its tree width. We give a tight bound for the size blow-up of determinizing an NFA with finite trace. We also show that the trace of any NFA either is bounded by a constant or grows exponentially.
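The two measures compared in the abstract can be computed directly on a small example. The sketch below uses the standard definitions on a hypothetical NFA (it is not from the paper): tree width counts the leaves of the computation tree on an input, and trace takes the largest product of nondeterministic-choice degrees along any single computation.

```python
def tree_width_and_trace(delta, start, word):
    """delta: dict mapping (state, symbol) -> set of successor states.
    Returns (leaves of the computation tree on word,
             max product of choice degrees along one computation)."""
    def walk(state, i):
        if i == len(word):
            return 1, 1            # a completed computation: one leaf
        succs = delta.get((state, word[i]), set())
        if not succs:
            return 1, 1            # a dead computation also ends in a leaf
        leaves, best = 0, 0
        for q in succs:
            l, t = walk(q, i + 1)
            leaves += l            # tree width sums over all branches
            best = max(best, t)    # trace maximises over branches
        return leaves, len(succs) * best
    return walk(start, 0)

# Hypothetical NFA over {a}: delta(0, a) = {0, 1}, delta(1, a) = {1}.
DELTA = {(0, "a"): {0, 1}, (1, "a"): {1}}
```

On input "aa" the computation tree from state 0 has three leaves, while the computation that stays in state 0 makes a binary choice at each step, giving trace 2 · 2 = 4, consistent with the trace being bounded below by measures of local branching.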


2010 ◽  
Vol 2010 ◽  
pp. 1-17 ◽  
Author(s):  
Eric Psota ◽  
Lance C. Pérez

The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets, that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.
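As background to the structures the abstract surveys, the standard (a, b) trapping-set profile is easy to state in code. The sketch below only checks the definition on a toy parity-check matrix (it is not the paper's iterative search algorithm): for a candidate set of variable nodes, a is its size and b is the number of check nodes it touches an odd number of times, i.e. the checks left unsatisfied when exactly those variables are in error.

```python
def trapping_set_profile(H, var_set):
    """H: parity-check matrix as a list of 0/1 rows (one row per check).
    Returns the (a, b) profile of the given set of variable-node indices."""
    a = len(var_set)
    # A check is unsatisfied (odd) if it touches an odd number of the
    # variables in var_set.
    b = sum(1 for row in H if sum(row[v] for v in var_set) % 2 == 1)
    return a, b

# Toy 3-check x 4-variable parity-check matrix (hypothetical).
H = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
]
```

Sets with small b are the dangerous ones for iterative decoding: few unsatisfied checks means the decoder receives little evidence that those variables are wrong, which is what connects trapping sets to low-weight deviations on computation trees.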


2009 ◽  
Vol 19 (1) ◽  
pp. 133-157 ◽  
Author(s):  
LORENZO TRALDI

The interlace polynomials introduced by Arratia, Bollobás and Sorkin extend to invariants of graphs with vertex weights, and these weighted interlace polynomials have several novel properties. One novel property is a version of the fundamental three-term formula that lacks the last term. It follows that interlace polynomial computations can be represented by binary trees rather than mixed binary–ternary trees. Binary computation trees provide a description of q(G) that is analogous to the activities description of the Tutte polynomial. If G is a tree or forest then these ‘algorithmic activities’ are associated with a certain kind of independent set in G. Three other novel properties are weighted pendant-twin reductions, which involve removing certain kinds of vertices from a graph and adjusting the weights of the remaining vertices in such a way that the interlace polynomials are unchanged. These reductions allow for smaller computation trees as they eliminate some branches. If a graph can be completely analysed using pendant-twin reductions, then its interlace polynomial can be calculated in polynomial time. An intuitively pleasing property is that graphs which can be constructed through graph substitutions have vertex-weighted interlace polynomials which can be obtained through algebraic substitutions.

