A Fresh Look at Zones and Octagons

2021 ◽  
Vol 43 (3) ◽  
pp. 1-51
Author(s):  
Graeme Gange ◽  
Zequn Ma ◽  
Jorge A. Navas ◽  
Peter Schachte ◽  
Harald Søndergaard ◽  
...  

Zones and Octagons are popular abstract domains for static program analysis. They enable the automated discovery of simple numerical relations that hold between pairs of program variables. Both domains are well understood mathematically but the detailed implementation of static analyses based on these domains poses many interesting algorithmic challenges. In this article, we study the two abstract domains, their implementation and use. Utilizing improved data structures and algorithms for the manipulation of graphs that represent difference-bound constraints, we present fast implementations of both abstract domains, built around a common infrastructure. We compare the performance of these implementations against alternative approaches offering the same precision. We quantify the differences in performance by measuring their speed and precision on standard benchmarks. We also assess, in the context of software verification, the extent to which the improved precision translates to better verification outcomes. Experiments demonstrate that our new implementations improve the state of the art for both Zones and Octagons significantly.
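The difference-bound constraints mentioned here are commonly represented as a weighted graph, or equivalently a difference-bound matrix (DBM), with tightening done by an all-pairs shortest-path closure. The Python sketch below illustrates that standard representation; the names (DBM, add, close) are ours, and this is a textbook-style illustration, not the article's implementation.

```python
import itertools

INF = float("inf")

class DBM:
    """Difference-bound matrix over variables v0..v_{n-1}:
    entry m[i][j] = c encodes the constraint v_i - v_j <= c."""

    def __init__(self, n):
        self.n = n
        self.m = [[INF] * n for _ in range(n)]
        for i in range(n):
            self.m[i][i] = 0

    def add(self, i, j, c):
        """Add constraint v_i - v_j <= c (keeping the tighter bound)."""
        self.m[i][j] = min(self.m[i][j], c)

    def close(self):
        """Floyd-Warshall shortest-path closure: tightens every bound and
        detects infeasibility (a negative cycle). Returns satisfiability."""
        m, n = self.m, self.n
        for k, i, j in itertools.product(range(n), repeat=3):
            if m[i][k] + m[k][j] < m[i][j]:
                m[i][j] = m[i][k] + m[k][j]
        return all(m[i][i] >= 0 for i in range(n))

# Example: x - y <= 2, y - z <= -1, z - x <= -2 forms a negative cycle.
d = DBM(3)
d.add(0, 1, 2)
d.add(1, 2, -1)
d.add(2, 0, -2)
print(d.close())  # False: the constraint system is infeasible
```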

Author(s):  
Freark I. van der Berg

Abstract Multi-threaded unit tests for high-performance thread-safe data structures typically do not test all behaviour, because each invocation of the tests witnesses only a single scheduling of threads. Model checking such unit tests makes it possible to verify all interleavings of threads. These tests can be written in, or compiled to, LLVM IR. Existing LLVM IR model checkers such as divine and Nidhugg use an LLVM IR interpreter to determine the next state. This paper introduces llmc, a multi-core explicit-state model checker for multi-threaded LLVM IR that translates the input LLVM IR into LLVM IR that is executed rather than interpreted. A test suite of 24 tests that stress data structures shows that, on average, llmc clearly outperforms the state-of-the-art tools divine and Nidhugg.
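To see why a single test invocation is insufficient, consider two threads that each increment a shared counter with a non-atomic load/store pair: only some interleavings lose an update. The toy Python sketch below exhaustively enumerates every interleaving of the four steps; it illustrates the general idea of explicit-state exploration only and bears no relation to llmc's translation-based approach.

```python
from itertools import permutations

# Each thread increments a shared counter in two non-atomic steps:
# a load of the current value followed by a store of value + 1.
def run(schedule):
    counter = 0
    local = {0: None, 1: None}   # per-thread register holding the loaded value
    step = {0: 0, 1: 0}          # next step (0 = load, 1 = store) per thread
    for t in schedule:
        if step[t] == 0:
            local[t] = counter          # load
        else:
            counter = local[t] + 1      # store
        step[t] += 1
    return counter

# All interleavings of the four steps (two per thread).
schedules = set(permutations([0, 0, 1, 1]))
outcomes = {run(s) for s in schedules}
print(outcomes)  # {1, 2}: some interleavings lose an update
```

A single run of the unit test witnesses only one of these schedules; a model checker explores them all.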


2014 ◽  
Vol 22 (2) ◽  
pp. 351-359 ◽  
Author(s):  
Tianjun Liao ◽  
Daniel Molina ◽  
Marco A. Montes de Oca ◽  
Thomas Stützle

The benchmark functions, and some of the algorithms, proposed for the special session on real-parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC’05) have played, and still play, an important role in assessing the state of the art in continuous optimization. In this article, we show that, if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions in the IEEE CEC’05 benchmark suite. This occurs even though the optima of the CEC’05 functions lie within the specified bounds. The phenomenon has important implications for algorithm comparisons, and therefore for algorithm design. The goal of this article is to draw the community's attention to the fact that some authors may have drawn wrong conclusions from experiments on the CEC’05 problems.
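One simple way to guarantee that reported solutions remain feasible is to clamp each candidate to the search box before it is recorded as a best solution. The sketch below shows such a repair step; it is a generic illustration, not part of any particular CEC’05 entry.

```python
def clamp_to_bounds(x, lower, upper):
    """Project a candidate solution back into the box [lower, upper]
    component-wise, so reported solutions are always feasible."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

# Example: a candidate that drifted outside a CEC'05-style box [-100, 100]^3.
candidate = [150.0, -42.5, -230.0]
print(clamp_to_bounds(candidate, [-100.0] * 3, [100.0] * 3))
# [100.0, -42.5, -100.0]
```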


Author(s):  
Dirk Beyer ◽  
Heike Wehrheim

Abstract The goal of cooperative verification is to combine verification approaches in such a way that they work together to verify a system model. In particular, cooperative verifiers provide exchangeable information (verification artifacts) to other verifiers, or consume such information from other verifiers, with the goal of increasing the overall effectiveness and efficiency of the verification process. This paper first gives an overview of approaches for leveraging the strengths of different techniques, algorithms, and tools in order to increase the power and abilities of the state of the art in software verification. To limit the scope, we restrict the overview to tools and approaches for automatic program analysis. Second, we specifically outline cooperative verification approaches and discuss the verification artifacts they employ. Third, we formalize all artifacts in a uniform way, thereby fixing their semantics and providing verifiers with a precise meaning for the exchanged information.


2006 ◽  
Vol 15 (01) ◽  
pp. 21-52 ◽  
Author(s):  
Peter Baumgartner ◽  
Alexander Fuchs ◽  
Cesare Tinelli

Darwin is the first implementation of the Model Evolution Calculus by Baumgartner and Tinelli. The Model Evolution Calculus lifts the DPLL procedure to first-order logic. Darwin is meant to be a fast and clean implementation of the calculus, showing its effectiveness and providing a basis for further improvements and extensions. After a brief summary of the Model Evolution Calculus, the main part of the paper describes Darwin's proof procedure, data structures, and algorithms, discussing the main design decisions and features that influence Darwin's performance. We also report on practical experiments carried out with problems from the CASC-J2 system competition and parts of the TPTP Problem Library, and compare the results with those of other state-of-the-art theorem provers.
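As background, the propositional DPLL procedure that Model Evolution lifts to first-order logic amounts to unit propagation plus case splitting. The minimal Python sketch below is a textbook rendition for orientation only; it is unrelated to Darwin's data structures and algorithms.

```python
def dpll(clauses, assignment=()):
    """Minimal DPLL: clauses are frozensets of non-zero integer literals
    (negative = negated). Returns a satisfying assignment or None."""
    # Unit propagation.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses, assignment = assign(clauses, unit, assignment)
        if clauses is None:
            return None
    if not clauses:
        return assignment
    # Split on some literal occurring in the formula.
    lit = next(iter(next(iter(clauses))))
    for choice in (lit, -lit):
        new_clauses, new_assignment = assign(clauses, choice, assignment)
        if new_clauses is not None:
            result = dpll(new_clauses, new_assignment)
            if result is not None:
                return result
    return None

def assign(clauses, lit, assignment):
    """Simplify the clause set under literal lit; None signals a conflict."""
    new = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied
        reduced = c - {-lit}
        if not reduced:
            return None, assignment       # empty clause: conflict
        new.append(reduced)
    return new, assignment + (lit,)

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [frozenset(c) for c in ({1, 2}, {-1, 3}, {-2, -3})]
print(dpll(formula))  # a satisfying assignment, e.g. (1, 3, -2)
```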


2002 ◽  
Vol 2 (2) ◽  
pp. 233-261 ◽  
Author(s):  
Enea Zaffanella ◽  
Patricia M. Hill ◽  
Roberto Bagnara

Complementation, the inverse of the reduced-product operation, is a technique for systematically finding minimal decompositions of abstract domains. Filé and Ranzato advanced the state of the art by introducing a simple method for computing a complement. As an application, they considered the extraction by complementation of the pair-sharing domain PS from Jacobs and Langen's set-sharing domain SH. However, since the result of this operation was still SH, they concluded that PS was too abstract for this purpose. Here, we show that the source of this result lies not with PS but with SH and, more precisely, with the redundant information SH contains with respect to ground-dependencies and pair-sharing. In fact, a proper decomposition is obtained if the non-redundant version of SH, called PSD, is substituted for SH. To establish the results for PSD, we define a general schema for subdomains of SH that includes PSD and Def as special cases. This sheds new light on the structure of PSD and exposes a natural, though unexpected, connection between Def and PSD. Moreover, we substantiate the claim that complementation alone is not sufficient to obtain truly minimal decompositions of domains. The right solution to this problem is to first remove redundancies by computing the quotient of the domain with respect to the observable behavior, and only then decompose it by complementation.
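The kind of redundancy at issue can be illustrated with a small example of our own (hypothetical, not taken from the paper): two distinct set-sharing descriptions that convey the same pair-sharing and groundness information.

```latex
\[
  sh_1 = \{\{x,y\},\ \{x,z\},\ \{y,z\}\}, \qquad
  sh_2 = sh_1 \cup \{\{x,y,z\}\}
\]
\[
  \mathit{pairs}(sh_1) = \mathit{pairs}(sh_2) = \{\{x,y\},\{x,z\},\{y,z\}\}
\]
```

Both elements describe the same possibly-sharing pairs and the same groundness information (no variable is definitely ground), so SH distinguishes descriptions that are equivalent for these observables.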


10.29007/jhd7 ◽  
2018 ◽  
Author(s):  
Armin Biere

One of the design principles of the state-of-the-art SAT solver Lingeling is to use data structures that are as compact as possible. These reduce memory usage and increase cache efficiency, and thus improve run-time, particularly when multiple solver instances run on multi-core machines, as in our parallel portfolio solver Plingeling and our cube-and-conquer solver Treengeling. The scheduler of a dozen inprocessing algorithms is an important aspect of Lingeling as well. In this talk we explain these design and implementation aspects of Lingeling and discuss new directions in solver design.
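The compact-storage principle can be illustrated by keeping all clause literals in one flat integer array and referring to clauses by offsets instead of pointers. The Python sketch below conveys the idea only; the names and layout are ours and do not reflect Lingeling's actual memory organization.

```python
from array import array

class ClauseArena:
    """All literals live in one flat 32-bit integer array; a clause is
    identified by its offset. Layout per clause: [size, lit_1, ..., lit_size]."""

    def __init__(self):
        self.lits = array('i')         # contiguous storage for every clause

    def add_clause(self, literals):
        offset = len(self.lits)
        self.lits.append(len(literals))
        self.lits.extend(literals)
        return offset                  # small integer reference, not a pointer

    def clause(self, offset):
        size = self.lits[offset]
        return self.lits[offset + 1 : offset + 1 + size]

arena = ClauseArena()
c1 = arena.add_clause([1, -2, 3])
c2 = arena.add_clause([-1, 2])
print(list(arena.clause(c1)), list(arena.clause(c2)))  # [1, -2, 3] [-1, 2]
```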

