concurrent program
Recently Published Documents


TOTAL DOCUMENTS: 99 (FIVE YEARS: 13)
H-INDEX: 14 (FIVE YEARS: 1)

2022 ◽ Vol 6 (POPL) ◽ pp. 1-31
Author(s): Lennard Gäher ◽ Michael Sammler ◽ Simon Spies ◽ Ralf Jung ◽ Hoang-Hai Dang ◽ ...

Today’s compilers employ a variety of non-trivial optimizations to achieve good performance. One key trick compilers use to justify transformations of concurrent programs is to assume that the source program has no data races: if it does, the races cause the program to have undefined behavior (UB) and give the compiler free rein. However, verifying correctness of optimizations that exploit this assumption is a non-trivial problem. In particular, prior work either has not proven that such optimizations preserve program termination (particularly non-obvious when considering optimizations that move instructions out of loop bodies), or has treated all synchronization operations as external functions (losing the ability to reorder instructions around them). In this work we present Simuliris, the first simulation technique to establish termination preservation (under a fair scheduler) for a range of concurrent program transformations that exploit UB in the source language. Simuliris is based on the idea of using ownership to reason modularly about the assumptions the compiler makes about programs with well-defined behavior. This brings the benefits of concurrent separation logics to the space of verifying program transformations: we can combine powerful reasoning techniques such as framing and coinduction to perform thread-local proofs of non-trivial concurrent program optimizations. Simuliris is built on a (non-step-indexed) variant of the Coq-based Iris framework, and is thus not tied to a particular language. In addition to demonstrating the effectiveness of Simuliris on standard compiler optimizations involving data race UB, we also instantiate it with Jung et al.’s Stacked Borrows semantics for Rust and generalize their proofs of interesting type-based aliasing optimizations to account for concurrency.
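To make the closing remark about Stacked Borrows concrete, here is a small Rust sketch (illustrative only, not an example taken from the paper) of the kind of type-based aliasing optimization such semantics justify:

```rust
// Hypothetical illustration (not from the paper): `x: &mut i32` is assumed
// unique under Stacked Borrows, so the unknown call `f()` can neither read
// nor write `*x`, and the reload after the call may be replaced by the
// constant just stored.

fn source(x: &mut i32, f: impl FnOnce()) -> i32 {
    *x = 42;
    f();   // arbitrary unknown code, but it cannot alias `x`
    *x     // reads back 42
}

fn target(x: &mut i32, f: impl FnOnce()) -> i32 {
    *x = 42;
    f();
    42     // the reload has been optimized away
}

fn main() {
    let mut a = 0;
    assert_eq!(source(&mut a, || ()), target(&mut a, || ()));
}
```

Per the abstract, the point of Simuliris is a simulation relation strong enough to show that transformations of this kind also preserve termination under fair scheduling in a concurrent setting.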


2021 ◽ Author(s): Allan Tabilog

This thesis explores two kinds of program logics that have become important for modern program verification: separation logic, for reasoning about programs that use pointers to build mutable data structures, and rely-guarantee reasoning, for reasoning about shared-variable concurrent programs. We look more closely into the motivations for merging these two kinds of logics into a single formalism that exploits the benefits of both approaches: local, modular, and explicit reasoning about interference between threads in a shared-memory concurrent program. We discuss in detail two such formalisms, RGSep and Local Rely-Guarantee (LRG). In particular, we analyse how each formalism models program state and treats the distinction between global state (shared by all threads) and local state (private to a given thread), how each logic models actions performed by threads on shared state, and the proof rules specifically for reasoning about atomic blocks of code. We present full examples of proofs in each logic and discuss their differences. This thesis also illustrates how a weakest-precondition semantics for separation logic can be used to carry out calculational proofs. We also note how, in essence, these proofs are data abstraction proofs showing that a data structure implements some abstract data type, and relate this idea to a classic data abstraction technique by Hoare. Finally, as part of the thesis we also present a survey of tools that are currently available for doing manual or semi-automated proofs as well as program analyses with separation logic and rely-guarantee reasoning.
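For orientation, a standard rely-guarantee example (textbook material, not taken from the thesis): a thread that atomically increments a shared counter x can be verified thread-locally against an environment that may only increase x, using the rely R and guarantee G

$$R \;=\; G \;=\; \{(x, x') \mid x' \ge x\}, \qquad R,\, G \;\vdash\; \{\, x \ge 0 \,\}\ \langle x := x + 1 \rangle\ \{\, x \ge 1 \,\}$$

Both the pre- and postcondition are stable under R, which is what lets this thread-local judgement compose with any parallel environment whose steps satisfy R. RGSep and LRG layer separation-logic assertions on top of this scheme so that rely and guarantee relations need to be stated only for the genuinely shared state.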


2021 ◽ Author(s): Morgan Atkins

In this thesis, we investigate some of the options programmers have when writing a concurrent program. We explore the use of manually created threads, thread pools, actors, and Software Transactional Memory. We use these techniques to implement case studies of various kinds: a video game, a physical simulation, an image-processing application, and a concurrent data structure. Throughout these case studies, we notice a common thread: concurrency, applied correctly, can improve the performance of a program, but the correct application may not be readily apparent. Concurrency is an important tool in the toolbox of the modern programmer, especially with the rise of multi-core architectures and the increasing prevalence of distributed systems. And like any tool, it is important to understand how and when to use it.
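As a rough illustration of two of the approaches surveyed, here is a sketch using Rust's standard library (the thesis's own code and implementation language are not reproduced here): one manually created thread, and one actor-style worker that owns its state and communicates only via a channel.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // 1. A manually created thread: spawn, do the work, join for the result.
    let worker = thread::spawn(|| (0..1_000u64).sum::<u64>());
    println!("manual thread: {}", worker.join().unwrap());

    // 2. An actor-style worker: it owns its running total and reacts to
    //    messages received over a channel, so no shared-memory locking is needed.
    let (tx, rx) = mpsc::channel::<u64>();
    let actor = thread::spawn(move || {
        let mut total = 0;
        for msg in rx {
            total += msg; // the receive loop ends when every sender is dropped
        }
        total
    });
    for i in 0..10 {
        tx.send(i).unwrap();
    }
    drop(tx); // close the channel so the actor's loop terminates
    println!("actor: {}", actor.join().unwrap());
}
```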


2021 ◽ Vol 33 (2) ◽ pp. 185-205 ◽ Author(s): Wim H. Hesselink

UNITY is a model for concurrent specifications with a complete logic for proving progress properties of the form “P leads to Q”. UNITY is generalized to U-specifications by giving more freedom to specify the steps that are to be taken infinitely often. In particular, these steps can correspond to non-total relations. The generalization keeps the logic sound and complete. The paper exploits the generalization in two ways. Firstly, the logic remains sound when the specification is extended with hypotheses of the form “F leads to G”. As the paper shows, this can make the logic incomplete. The generalization is used to show that the logic remains complete if the added hypotheses “F leads to G” satisfy “F unless G”. The main result extends the applicability and completeness of UNITY logic to proofs that a given concurrent program satisfies any given formula of LTL (linear temporal logic) without the next-operator, which is omitted because it is sensitive to stuttering. For this purpose, the program, written as a UNITY program, is extended with a number of boolean variables. The proof method relies on implementing the LTL formula, i.e., restricting the specification in such a way that only those runs remain that satisfy the formula. This result is a variation of the classical construction of a Büchi automaton for a given LTL formula that accepts precisely those runs that satisfy the formula.
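As background, “P leads to Q” in UNITY is the smallest relation on state predicates closed under an ensures base case, transitivity, and (possibly infinite) disjunction; these are the standard rules of the logic, not a contribution of this paper:

$$P \ \textsf{ensures}\ Q \;\Rightarrow\; P \mapsto Q, \qquad (P \mapsto Q) \wedge (Q \mapsto R) \;\Rightarrow\; P \mapsto R, \qquad (\forall i.\ P_i \mapsto Q) \;\Rightarrow\; (\exists i.\ P_i) \mapsto Q$$

Completeness of the logic means that every leads-to property holding of all fair executions of a specification is derivable with these rules; the paper's contribution is to retain this completeness for U-specifications and for added leads-to hypotheses (under the unless side condition described above).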


Author(s): Michael Blondin ◽ Christoph Haase ◽ Philip Offtermatt

Numerous tasks in program analysis and synthesis reduce to deciding reachability in possibly infinite graphs such as those induced by Petri nets. However, the Petri net reachability problem has recently been shown to require non-elementary time, which raises questions about the practical applicability of Petri nets as target models. In this paper, we introduce a novel approach for efficiently semi-deciding the reachability problem for Petri nets in practice. Our key insight is that computationally lightweight over-approximations of Petri nets can be used as distance oracles in classical graph exploration algorithms such as $$\mathsf{A}^{*}$$ and greedy best-first search. We provide and evaluate a prototype implementation of our approach that outperforms existing state-of-the-art tools, sometimes by orders of magnitude, and which is also competitive with domain-specific tools on benchmarks coming from program synthesis and concurrent program analysis.
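A minimal sketch of the directed-search idea, in Rust, assuming a small explicit graph and a placeholder heuristic standing in for the over-approximation-based distance oracle (both are illustrative, not the paper's implementation):

```rust
// Classical A*: a lower-bound "distance oracle" h guides exploration.
// In the paper's setting, h would be computed from an over-approximation
// of the Petri net; here it is just a trivial placeholder.
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

fn astar(
    start: u32,
    goal: u32,
    edges: &HashMap<u32, Vec<(u32, u64)>>, // node -> (successor, edge cost)
    h: impl Fn(u32) -> u64,                // admissible lower bound to goal
) -> Option<u64> {
    let mut best: HashMap<u32, u64> = HashMap::new();
    let mut frontier = BinaryHeap::new();  // min-heap via Reverse
    best.insert(start, 0);
    frontier.push(Reverse((h(start), 0u64, start)));
    while let Some(Reverse((_, dist, node))) = frontier.pop() {
        if node == goal {
            return Some(dist);
        }
        if dist > *best.get(&node).unwrap_or(&u64::MAX) {
            continue; // stale queue entry
        }
        for &(next, cost) in edges.get(&node).into_iter().flatten() {
            let nd = dist + cost;
            if nd < *best.get(&next).unwrap_or(&u64::MAX) {
                best.insert(next, nd);
                frontier.push(Reverse((nd + h(next), nd, next)));
            }
        }
    }
    None // goal unreachable
}

fn main() {
    let edges = HashMap::from([
        (0, vec![(1, 1), (2, 4)]),
        (1, vec![(2, 1)]),
        (2, vec![]),
    ]);
    let h = |_: u32| 0u64; // trivial (still admissible) oracle
    assert_eq!(astar(0, 2, &edges, h), Some(2));
    println!("shortest distance: {:?}", astar(0, 2, &edges, h));
}
```

With h identically zero the search degrades to Dijkstra's algorithm; the benefit reported in the paper comes from supplying a cheap but informative oracle derived from a relaxation of the model being explored.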


2021 ◽ Vol 31 ◽ Author(s): Robert Sison ◽ Toby Murray

Proving only over source code that programs do not leak sensitive data leaves a gap between reasoning and reality that can only be filled by accounting for the behaviour of the compiler. Furthermore, software does not always have the luxury of limiting itself to single-threaded computation with resources statically dedicated to each user to ensure the confidentiality of their data. This results in mixed-sensitivity concurrent programs, which might reuse memory shared between their threads to hold data of different sensitivity levels at different times; for such programs, a compiler must preserve the value-dependent coordination of such mixed-sensitivity reuse despite the impact of concurrency. Here we demonstrate, using Isabelle/HOL, that it is feasible to verify that a compiler preserves noninterference, the strictest kind of confidentiality property, for mixed-sensitivity concurrent programs. First, we present notions of refinement that preserve a concurrent value-dependent notion of noninterference that we have designed to support such programs. As proving noninterference-preserving refinement can be considerably more complex than the standard refinements typically used to verify semantics-preserving compilation, our notions include a decomposition principle that separates semantics-preservation from security-preservation concerns. Second, we demonstrate that these refinement notions are applicable to verified secure compilation, by exercising them on a single-pass compiler for mixed-sensitivity concurrent programs that synchronise using mutex locks, from a generic imperative language to a generic RISC-style assembly language. Finally, we execute our compiler on a non-trivial mixed-sensitivity concurrent program modelling a real-world use case, thus preserving its source-level noninterference properties down to an assembly-level model automatically. All results are formalised and proved in the Isabelle/HOL interactive proof assistant. Our work paves the way for more fully featured compilers to offer verified secure compilation support to developers of multithreaded software that must handle data of multiple sensitivity levels.
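To picture what “mixed-sensitivity reuse” means operationally, here is a small Rust sketch (hypothetical; the paper works with a generic imperative language formalised in Isabelle/HOL): one lock-protected buffer is reused for public and secret data, and a label stored next to it records the sensitivity of the current contents.

```rust
// Illustrative only: a shared buffer whose classification is value-dependent,
// i.e. determined at runtime by the `level` field stored alongside it.
use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Clone, Copy, PartialEq)]
enum Level { Low, High }

struct Shared {
    level: Level, // current classification of `buf`
    buf: u32,     // memory reused for both sensitivity levels
}

fn main() {
    let shared = Arc::new(Mutex::new(Shared { level: Level::Low, buf: 0 }));

    // Producer: alternately stores public and secret values, updating the
    // label together with the buffer while holding the lock.
    let producer = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            for i in 0..4u32 {
                let mut s = shared.lock().unwrap();
                s.level = if i % 2 == 0 { Level::Low } else { Level::High };
                s.buf = i;
            }
        })
    };

    // Consumer: releases the buffer to a public sink only when the label says
    // the current contents are Low; this value-dependent check is what a
    // noninterference proof (and a compiler preserving it) must keep intact.
    let consumer = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            for _ in 0..4 {
                let s = shared.lock().unwrap();
                if s.level == Level::Low {
                    println!("public output: {}", s.buf);
                }
            }
        })
    };

    producer.join().unwrap();
    consumer.join().unwrap();
}
```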

