Lecture Notes in Computer Science - Foundations of Software Science and Computation Structures
Latest Publications


TOTAL DOCUMENTS: 32 (FIVE YEARS: 32)
H-INDEX: 1 (FIVE YEARS: 1)

Published by Springer International Publishing
ISBN: 9783030452308, 9783030452315

Author(s): Jiří Adámek, Stefan Milius, Lawrence S. Moss

Abstract: This paper studies fundamental questions concerning category-theoretic models of induction and recursion. We are concerned with the relationship between well-founded and recursive coalgebras for an endofunctor. For monomorphism-preserving endofunctors on complete and well-powered categories, every coalgebra has a well-founded part, and we provide a new, shorter proof that this is the coreflection in the category of all well-founded coalgebras. We present a new, more general proof of Taylor’s General Recursion Theorem that every well-founded coalgebra is recursive, and we study conditions under which the converse holds. In addition, we present a new equivalent characterization of well-foundedness: a coalgebra is well-founded iff it admits a coalgebra-to-algebra morphism to the initial algebra.
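
In code: a coalgebra for a functor f is a map c :: x -> f x, and it is recursive when every algebra alg :: f y -> y admits a unique morphism h with h = alg . fmap h . c. A minimal Haskell sketch, with names and example ours rather than the paper's:

    -- Recursive coalgebras: 'solve alg c' is the unique h with
    --   h = alg . fmap h . c
    -- whenever the coalgebra c is recursive (e.g. well-founded).
    solve :: Functor f => (f y -> y) -> (x -> f x) -> x -> y
    solve alg c = h where h = alg . fmap h . c

    -- The predecessor coalgebra on the naturals is well-founded...
    predC :: Int -> Maybe Int
    predC 0 = Nothing
    predC n = Just (n - 1)

    -- ...so solving against any Maybe-algebra terminates. The carrier
    -- [()] is isomorphic to the initial Maybe-algebra, so 'toPeano' is
    -- exactly a coalgebra-to-algebra morphism into the initial algebra,
    -- as in the characterization of well-foundedness above.
    toPeano :: Int -> [()]
    toPeano = solve alg predC
      where alg Nothing   = []
            alg (Just us) = () : us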


Author(s): Usama Mehmood, Shouvik Roy, Radu Grosu, Scott A. Smolka, Scott D. Stoller, ...

Abstract: We show how a symmetric and fully distributed flocking controller can be synthesized from a centralized flocking controller using Deep Learning. Our approach is based on Supervised Learning, with the centralized controller providing the training data in the form of trajectories of state-action pairs. We use Model Predictive Control (MPC) for the centralized controller, an approach that we have successfully demonstrated on flocking problems. MPC-based flocking controllers are high-performing but also computationally expensive. By learning a symmetric and distributed neural flocking controller from a centralized MPC-based one, we achieve the best of both worlds: the neural controllers have high performance (on par with the MPC controllers) and high efficiency. Our experimental results demonstrate the sophisticated nature of the distributed controllers we learn. In particular, the neural controllers are capable of achieving myriad flocking-oriented control objectives, including flocking formation, collision avoidance, obstacle avoidance, predator avoidance, and target seeking. Moreover, they generalize the behavior seen in the training data, achieving these objectives in a significantly broader range of scenarios. To verify our neural flocking controller, we use a form of statistical model checking to compute confidence intervals for its convergence rate and time to convergence.
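
The pipeline described here is a form of policy distillation: run the centralized MPC controller, log its state-action trajectories, and fit a single shared network to each agent's local view. The Haskell sketch below shows only the data-preparation step, under assumed types; none of the names come from the paper, and the actual system then trains a neural network on these pairs:

    -- A rough sketch of the distillation data set (all names assumed,
    -- not from the paper). The centralized controller produces one
    -- action per agent per global state; the learned controller must
    -- act on a single agent's local observation.
    type AgentId     = Int
    type GlobalState = [(AgentId, (Double, Double))]  -- agent positions
    type Action      = (Double, Double)               -- e.g. acceleration

    -- Local view: an agent sees the other agents' relative positions.
    localObs :: GlobalState -> AgentId -> [(Double, Double)]
    localObs s i = [ (x - xi, y - yi) | (j, (x, y)) <- s, j /= i ]
      where Just (xi, yi) = lookup i s

    -- Turn one centralized trajectory into per-agent supervised
    -- examples. Training the same network on every agent's
    -- (observation, action) pair is what makes the result symmetric.
    trainingPairs :: [(GlobalState, [(AgentId, Action)])]
                  -> [([(Double, Double)], Action)]
    trainingPairs traj =
      [ (localObs s i, a) | (s, acts) <- traj, (i, a) <- acts ]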


Author(s): Mario Alvarez-Picallo, Jean-Simon Pacaud Lemay

Abstract: Cartesian differential categories are categories equipped with a differential combinator which axiomatizes the directional derivative. Important models of Cartesian differential categories include classical differential calculus of smooth functions and categorical models of the differential $\lambda$-calculus. However, Cartesian differential categories cannot account for other interesting notions of differentiation, such as the calculus of finite differences or the Boolean differential calculus. On the other hand, change action models have been shown to capture these examples as well as more “exotic” examples of differentiation. However, change action models are very general and do not share the nice properties of a Cartesian differential category. In this paper, we introduce Cartesian difference categories as a bridge between Cartesian differential categories and change action models. We show that every Cartesian differential category is a Cartesian difference category, and that certain well-behaved change action models are also Cartesian difference categories. In particular, Cartesian difference categories model both the differential calculus of smooth functions and the calculus of finite differences. Furthermore, every Cartesian difference category comes equipped with a tangent bundle monad whose Kleisli category is again a Cartesian difference category.
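
The calculus of finite differences mentioned above is easy to make concrete. Where a differential combinator axiomatizes the directional derivative, a difference combinator captures the discrete change delta f x dx = f (x + dx) - f x, which satisfies f (x + dx) = f x + delta f x dx exactly rather than only in the limit. A small sketch (our code, not the paper's):

    -- Finite-difference combinator on Int -> Int (our sketch).
    -- Unlike the smooth derivative, the equation
    --   f (x + dx) == f x + delta f x dx
    -- holds on the nose.
    delta :: (Int -> Int) -> Int -> Int -> Int
    delta f x dx = f (x + dx) - f x

    -- Example: delta square x dx = 2*x*dx + dx^2. The extra dx^2 term
    -- is not linear in dx, which is why Cartesian differential
    -- categories (whose derivative must be linear in the change)
    -- cannot model this, while Cartesian difference categories can.
    square :: Int -> Int
    square n = n * n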


Author(s): Mai Gehrke, Tomáš Jakl, Luca Reggio

Abstract: A systematic theory of structural limits for finite models has been developed by Nešetřil and Ossona de Mendez. It is based on the insight that the collection of finite structures can be embedded, via a map they call the Stone pairing, in a space of measures, where the desired limits can be computed. We show that a closely related but finer-grained space of measures arises, via Stone-Priestley duality and the notion of types from model theory, by enriching the expressive power of first-order logic with certain “probabilistic operators”. We provide a sound and complete calculus for this extended logic and expose the functorial nature of this construction. The consequences are two-fold. On the one hand, we identify the logical gist of the theory of structural limits. On the other hand, our construction shows that the duality-theoretic variant of the Stone pairing captures the addition of a layer of quantifiers, thus making a strong link to recent work on semiring quantifiers in logic on words. In the process, we identify the model-theoretic notion of types as the unifying concept behind this link. These results contribute to bridging the strands of logic in computer science which focus on semantics and on more algorithmic and complexity-related areas, respectively.
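
For orientation, the Stone pairing of Nešetřil and Ossona de Mendez evaluates a first-order formula on a finite structure as the probability that a uniformly random tuple satisfies it; structural limits are then limits of finite structures in the resulting space of measures. In the standard notation (our recap, not reproduced from this paper):

    % Stone pairing of a first-order formula \phi(x_1, \dots, x_n)
    % with a finite structure A (standard definition):
    \langle \phi, A \rangle
      = \frac{\bigl|\{\, \bar{a} \in A^n : A \models \phi(\bar{a}) \,\}\bigr|}{|A|^n}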


Author(s): Ugo Dal Lago, Giulio Guerrieri, Willem Heijltjes

Abstract: A notion of probabilistic lambda-calculus usually comes with a prescribed reduction strategy, typically call-by-name or call-by-value, since the calculus is non-confluent and these strategies yield different results. This breaks with one of the main advantages of the lambda-calculus: confluence, which means that results are independent of the choice of strategy. We present a probabilistic lambda-calculus where the probabilistic operator is decomposed into two syntactic constructs: a generator, which represents a probabilistic event, and a consumer, which acts on the term depending on a given event. The resulting calculus, the Probabilistic Event Lambda-Calculus, is confluent, and interprets the call-by-name and call-by-value strategies through different translations of the probabilistic operator into our generator and consumer constructs. We present two notions of reduction: one via fine-grained local rewrite steps, and one by generation and consumption of probabilistic events. Simple types for the calculus are essentially standard and guarantee strong normalization. We demonstrate how to encode call-by-name and call-by-value probabilistic evaluation.
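
The generator/consumer split can be mimicked in a toy deep embedding: a generator samples a named boolean event once, and every consumer of that name branches on the same outcome, so duplicating a consumer cannot change the result. A sketch in Haskell (our encoding; the paper's calculus is a lambda-calculus, not this first-order fragment):

    import qualified Data.Map as Map

    -- 'Gen a t' samples a boolean event named a, then runs t;
    -- 'Choose a t u' consumes event a: t if it came up True, else u.
    data Term
      = Lit Int
      | Gen String Term
      | Choose String Term Term

    -- Evaluate against a stream of coin flips. Every consumer of the
    -- same event name sees the same outcome, which is what restores
    -- confluence when terms are duplicated.
    eval :: [Bool] -> Term -> Int
    eval flips = go flips Map.empty
      where
        go _      _   (Lit n)        = n
        go bs     env (Choose a u v) =
          case Map.lookup a env of
            Just True  -> go bs env u
            Just False -> go bs env v
            Nothing    -> error ("unsampled event " ++ a)
        go (b:bs) env (Gen a u)      = go bs (Map.insert a b env) u
        go []     _   (Gen _ _)      = error "ran out of coin flips"

For instance, eval [True] (Gen "a" (Choose "a" (Lit 1) (Lit 0))) evaluates to 1, and any duplicated copy of Choose "a" ... agrees with it, because the event "a" was generated only once.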


Author(s): David M. Kahn, Jan Hoffmann

Abstract: Automatic amortized resource analysis (AARA) is a type-based technique for inferring concrete (non-asymptotic) bounds on a program’s resource usage. Existing work on AARA has focused on bounds that are polynomial in the sizes of the inputs. This paper presents an extension of AARA to exponential bounds that preserves the benefits of the technique, such as compositionality and efficient type inference based on linear constraint solving. A key idea is the use of the Stirling numbers of the second kind as the basis of potential functions, which play the same role as the binomial coefficients in polynomial AARA. To formalize the similarities with the existing analyses, the paper presents a general methodology for AARA that is instantiated to the polynomial version, the exponential version, and a combined system with potential functions formed by products of Stirling numbers and binomial coefficients. The soundness of exponential AARA is proved with respect to an operational cost semantics, and the analysis of representative example programs demonstrates the effectiveness of the new analysis.
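
The Stirling numbers of the second kind S(n, k) count the partitions of an n-element set into k non-empty blocks and grow exponentially (for instance, S(n, 2) = 2^(n-1) - 1), which is what makes them a plausible basis for exponential potential functions, in the same way binomial coefficients are for polynomial ones. A direct implementation of the standard recurrence (our code, not the paper's inference machinery):

    -- Stirling numbers of the second kind via the standard recurrence
    --   S(n, k) = k * S(n-1, k) + S(n-1, k-1)
    -- with S(0, 0) = 1 and S(n, 0) = S(0, k) = 0 otherwise.
    stirling2 :: Integer -> Integer -> Integer
    stirling2 0 0 = 1
    stirling2 _ 0 = 0
    stirling2 0 _ = 0
    stirling2 n k = k * stirling2 (n - 1) k + stirling2 (n - 1) (k - 1)

    -- Sanity check of the exponential growth: S(10, 2) = 2^9 - 1,
    -- i.e. stirling2 10 2 == 511.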


Author(s): S. Akshay, Blaise Genest, Loïc Hélouët, Sharvik Mital

Abstract: Negotiations were introduced in [6] as a model for concurrent systems with multiparty decisions. What makes negotiations appealing is that they are one of the very few non-trivial concurrent models in which several interesting problems, such as soundness (i.e. absence of deadlocks), can be solved in PTIME [3]. In this paper, we introduce the model of timed negotiations and consider the problem of computing the minimum and the maximum execution times of a negotiation. The latter can be solved using the algorithm of [10] for computing costs in negotiations, but, surprisingly, the minimum execution time cannot. This paper proposes new algorithms to compute both minimum and maximum execution times that work on much more general classes of negotiations than [10], which only considered sound and deterministic negotiations. Further, we uncover the precise complexities of these questions, ranging from PTIME to $\varDelta_2^P$-complete. In particular, we show that computing the minimum execution time is more complex than computing the maximum execution time in most classes of negotiations we consider.
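
For reference, $\varDelta_2^P$ is the second "Delta" level of the polynomial hierarchy; the following standard characterization is our addition, not spelled out in the abstract:

    % Problems decidable in deterministic polynomial time with an NP oracle:
    \varDelta_2^P = \mathrm{P}^{\mathrm{NP}} \supseteq \mathrm{NP} \cup \mathrm{coNP}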


Author(s): Alessio Mansutti

Abstract: We describe a set of simple features that suffice to make the satisfiability problem of logics interpreted on trees Tower-hard. We exhibit these features through an Auxiliary Logic on Trees (ALT), a modal logic that essentially deals with reachability of a fixed node inside a forest and features modalities from sabotage modal logic to reason on submodels. After showing that ALT admits a Tower-complete satisfiability problem, we prove that this logic is captured by four other logics that were independently found to be Tower-complete: two-variable separation logic, quantified computation tree logic, modal logic of heaps and modal separation logic. As a by-product of establishing these connections, we discover strict fragments of these logics that are still non-elementary.


Author(s): Patricia Johann, Andrew Polonsky

Abstract: This paper introduces deep induction and shows that it is the notion of induction most appropriate to nested types and other data types defined over, or mutually recursively with, (other) such types. Standard induction rules induct over only the top-level structure of data, leaving any data internal to the top-level structure untouched. By contrast, deep induction rules induct over all of the structured data present. We give a grammar generating a robust class of nested types (and thus ADTs), and develop a fundamental theory of deep induction for them using their recently defined semantics as fixed points of accessible functors on locally presentable categories. We then use our theory to derive deep induction rules for some common ADTs and nested types, and show how these rules specialize to give the standard structural induction rules for these types. We also show how deep induction specializes to solve the long-standing problem of deriving principled and practically useful structural induction rules for bushes and other truly nested types. Overall, deep induction opens the way to making induction principles appropriate to richly structured data types available in programming languages and proof assistants. Agda implementations of our development and examples, including two extended case studies, are available.
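
The gap between top-level and deep recursion is already visible in Haskell. For a truly nested type such as perfect trees, the element type changes at every level, so even a usable fold must be polymorphic in it; deep induction is the proof-level analogue of that polymorphism. A sketch with our own names (the paper's development is category-theoretic and mechanized in Agda, with bushes as its flagship nested type):

    {-# LANGUAGE RankNTypes #-}

    -- Perfect trees: a truly nested type. The recursive occurrence is
    -- at element type (a, a), so the element type changes as we descend.
    data PTree a = PLeaf a | PNode (PTree (a, a))

    -- A "deep" fold must be polymorphic in the element type to follow
    -- the nesting; a fold fixed at one element type cannot be typed.
    deepFold :: (forall b. b -> r b)           -- leaf case
             -> (forall b. r (b, b) -> r b)    -- node case
             -> PTree a -> r a
    deepFold leaf node (PLeaf x) = leaf x
    deepFold leaf node (PNode t) = node (deepFold leaf node t)

    -- Example: counting elements; the count doubles at each PNode.
    newtype Count b = Count Int
    size :: PTree a -> Int
    size t = n
      where Count n = deepFold (\_ -> Count 1)
                               (\(Count m) -> Count (2 * m)) t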


Author(s): Romain Péchoux, Simon Perdrix, Mathys Rennela, Vladimir Zamdzhiev

Abstract: Inductive datatypes in programming languages allow users to define useful data structures such as natural numbers, lists, trees, and others. In this paper, we show how inductive datatypes may be added to the quantum programming language QPL. We construct a sound categorical model for the language and, by doing so, provide the first detailed semantic treatment of user-defined inductive datatypes in quantum programming. We also show that our denotational interpretation is invariant under big-step reduction, thereby establishing another novel result for quantum programming. Compared to classical programming, this property is considerably more difficult to prove, and we demonstrate its usefulness by showing how it immediately implies computational adequacy at all types. To further cement our results, our semantics is entirely based on a physically natural model of von Neumann algebras, which are mathematical structures used by physicists to study quantum mechanics.
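
Classically, inductive datatypes arise as least fixed points of endofunctors, and that is the structure the paper interprets over von Neumann algebras. As a reference point, the classical picture in Haskell (standard folklore, our code, nothing QPL-specific):

    -- Inductive datatypes as least fixed points of functors.
    newtype Fix f = In (f (Fix f))

    data NatF x = Zero | Succ x    -- Nat = Fix NatF
    -- similarly, List a = Fix (ListF a) for data ListF a x = Nil | Cons a x

    instance Functor NatF where
      fmap _ Zero     = Zero
      fmap g (Succ x) = Succ (g x)

    -- The fold (catamorphism) out of the initial algebra.
    cata :: Functor f => (f y -> y) -> Fix f -> y
    cata alg (In t) = alg (fmap (cata alg) t)

    -- Example: interpret Fix NatF as an Int.
    toInt :: Fix NatF -> Int
    toInt = cata alg
      where alg Zero     = 0
            alg (Succ n) = n + 1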

