Dartagnan: Leveraging Compiler Optimizations and the Price of Precision (Competition Contribution)

Author(s):  
Hernán Ponce-de-León ◽  
Thomas Haas ◽  
Roland Meyer

Abstract We describe the new features of the bounded model checker Dartagnan for SV-COMP ’21. We participate, for the first time, in the ReachSafety category on the verification of sequential programs. In some of these verification tasks, bugs only show up after many loop iterations, which is a challenge for bounded model checking. We address the challenge by simplifying the structure of the input program while preserving its semantics. For simplification, we leverage common compiler optimizations, which we get for free by using LLVM. Yet, there is a price to pay. Compiler optimizations may introduce bitwise operations, which require bit-precise reasoning. We evaluated an SMT encoding based on the theory of integers + bit conversions against one based on the theory of bit-vectors and found that the latter yields better performance. Compared to the unoptimized version of Dartagnan, the combination of compiler optimizations and bit-vectors yields a speed-up of an order of magnitude on average.
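To make the trade-off concrete, here is a minimal sketch of the two encodings the abstract compares, written in Python with the z3-solver bindings; it illustrates the general idea only and is not Dartagnan's actual encoder. A bitwise constraint of the kind a compiler optimization might introduce forces bit-level reasoning: the integer encoding must convert between sorts, while the bit-vector encoding states it natively.

```python
# Minimal sketch (not Dartagnan's encoder) of the two SMT encodings compared
# above, using the z3-solver Python bindings.
from z3 import Int, BitVec, Int2BV, Solver

# A bitwise fact an optimization might introduce: "x & 3 == 3 implies x odd".

# Encoding 1: theory of integers plus explicit bit conversions.
x = Int('x')
s1 = Solver()
s1.add(Int2BV(x, 32) & 3 == 3)   # the bitwise part needs a conversion to BV
s1.add(x % 2 == 0)               # claim x is even anyway
print(s1.check())                # expect: unsat

# Encoding 2: theory of bit-vectors throughout, no conversions.
y = BitVec('y', 32)
s2 = Solver()
s2.add(y & 3 == 3)
s2.add(y & 1 == 0)               # evenness expressed bitwise
print(s2.check())                # expect: unsat
```

The second solver discharges the query within the bit-vector theory alone, with no cross-sort conversions, which is the kind of behaviour behind the reported speed-up.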

Author(s):  
Gianpiero Cabodi ◽  
Paolo Camurati ◽  
Marco Palena ◽  
Paolo Pasini ◽  
Danilo Vendraminetto

2013 ◽  
Vol 380-384 ◽  
pp. 1239-1242
Author(s):  
Rui Wang ◽  
Xian Jin Fu

Bounded Model Checking is an efficient method of finding bugs in system designs. LTL is one of the most frequently used specification languages in model checking. In this paper, we present a linearization encoding for LTL bounded model checking. We use incremental SAT technology to solve the BMC problem, and we implement the new encoding in the NuSMV model checker.
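As a rough illustration of how incremental solving serves BMC, the following Python sketch (using z3 rather than the paper's NuSMV/SAT setup, and a toy counter system rather than an LTL encoding) unrolls the transition relation one step at a time and keeps the solver alive across bounds, pushing and popping only the bad-state query.

```python
# Incremental-BMC sketch: the unrolled transition constraints stay asserted
# across bounds; only the temporary bad-state query is pushed and popped.
from z3 import Int, Solver, sat

def bmc_incremental(max_bound):
    s = Solver()
    x = [Int(f'x_{i}') for i in range(max_bound + 1)]
    s.add(x[0] == 0)                    # initial state: counter at 0
    for k in range(max_bound):
        s.add(x[k + 1] == x[k] + 1)     # unroll one transition step
        s.push()                        # temporary scope for the query
        s.add(x[k + 1] == 5)            # bad state: counter reaches 5
        if s.check() == sat:            # solver reuses work from prior bounds
            print(f'counterexample at bound {k + 1}')
            s.pop()
            return
        s.pop()
    print('no counterexample up to bound', max_bound)

bmc_incremental(8)   # expect: counterexample at bound 5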


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 1057
Author(s):  
Gianpiero Cabodi ◽  
Paolo Camurati ◽  
Fabrizio Finocchiaro ◽  
Danilo Vendraminetto

Spectre and Meltdown attacks in modern microprocessors represent a new class of attacks that have been difficult to deal with. They underline vulnerabilities in hardware design that have gone unnoticed for years. This shows the weakness of state-of-the-art verification processes and design practices. These attacks are OS-independent and do not exploit any software vulnerabilities. Moreover, they violate all security assumptions ensured by standard security procedures (e.g., address space isolation) and, as a result, every security mechanism built upon these guarantees. These vulnerabilities allow the attacker to retrieve leaked data without accessing the secret directly. Indeed, they make use of covert channels: mechanisms of hidden communication that convey sensitive information without any visible information flow between the malicious party and the victim. The root cause of this type of side-channel attack lies in the speculative and out-of-order execution of modern high-performance microarchitectures. Since modern processors are hard to verify with standard formal verification techniques, we present a methodology that shows how to transform a realistic model of a speculative and out-of-order processor into an abstract one. Following related formal verification approaches, we simplify the model under consideration by abstraction and refinement steps. We also present an approach to formally verify the abstract model using a standard model checker. The theoretical flow, reliant on established formal verification results, is introduced, and a sketch of proof is provided for soundness and correctness. Finally, we demonstrate the feasibility of our approach by applying it to a pipelined DLX RISC-inspired processor architecture. We show preliminary experimental results to support our claim, performing bounded model checking with a state-of-the-art model checker.
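To give a flavour of the property at stake, the toy Python model below (a deliberately abstract two-step machine, not the paper's DLX pipeline) shows how a bounded exploration can expose a speculative leak: a mispredicted branch transiently loads secret-dependent data into an attacker-observable cache.

```python
# Toy illustration of a speculative covert channel, found by bounded
# exploration of all speculation choices (not the paper's processor model).
SECRET = 1  # the bit the attacker must not learn

def step(state, speculate):
    pc, cache = state
    if pc == 0:
        # A branch guards the secret access; misprediction runs it anyway.
        if speculate:
            cache = cache | {f'line_{SECRET}'}  # transient secret-dependent load
        return (1, cache)
    return state

def reachable(bound):
    frontier, seen = {(0, frozenset())}, set()
    for _ in range(bound):
        frontier = {step(s, g) for s in frontier for g in (False, True)} - seen
        seen |= frontier
    return seen

# Safety: the shared cache must not reveal which line the secret selected.
leaks = [s for s in reachable(2) if f'line_{SECRET}' in s[1]]
print('leak found' if leaks else 'no leak')   # expect: leak found
```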


2013 ◽  
Vol 659 ◽  
pp. 181-185
Author(s):  
Wei Gong ◽  
Jun Wei Jia

Model checking is a method for verification: a model is checked until its specification is proved or disproved. With the rising complexity of large models, some cases become uncheckable; in such cases the problem can be restricted, for example, bounded model checking analyzes the model only up to a defined time or depth. The verification happens automatically; the programs performing it are called model checking tools, or model checkers. Model checking is used in both software and hardware verification. It is an inherent part of hardware verification, whereas it is less used in software verification.
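A minimal Python sketch of that idea: explore an explicit transition system only up to a fixed depth, returning a counterexample if one is found within the bound and an inconclusive answer otherwise.

```python
# Minimal sketch of "check the model only up to a fixed depth":
# depth-bounded search over an explicit transition system.
def bounded_check(init, next_states, bad, depth):
    """Return a path to a bad state within `depth` steps, or None if no bad
    state is reachable up to that bound (the result is then inconclusive)."""
    frontier = [(init, [init])]
    for _ in range(depth):
        new = []
        for state, path in frontier:
            for succ in next_states(state):
                if bad(succ):
                    return path + [succ]
                new.append((succ, path + [succ]))
        frontier = new
    return None

# Toy model: a counter that wraps at 4; the bad state is the value 3.
trace = bounded_check(0, lambda s: [(s + 1) % 4], lambda s: s == 3, depth=10)
print(trace)   # expect: [0, 1, 2, 3]
```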


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Rui Wang ◽  
Wanwei Liu ◽  
Tun Li ◽  
Xiaoguang Mao ◽  
Ji Wang

As a complementary technique of the BDD-based approach, bounded model checking (BMC) has been successfully applied to LTL symbolic model checking. However, the expressiveness of LTL is rather limited, and some important properties cannot be captured by such logic. In this paper, we present a semantic BMC encoding approach to deal with the mixture of ETLf and ETLl. Since this kind of temporal logic involves both finite and looping automata as connectives, all regular properties can be succinctly specified with it. The presented algorithm is integrated into the model checker ENuSMV, and the approach is evaluated via conducting a series of empirical experiments.
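The following Python/z3 sketch conveys the automata-as-connectives idea in miniature (it is not the paper's ETLf/ETLl encoding): an accepting run of a small finite automaton for "eventually p" is encoded as constraints over a length-k path.

```python
# Sketch of encoding an accepting automaton run over a bounded path as SMT
# constraints (illustrative only; not the paper's semantic encoding).
from z3 import Int, Bool, Solver, Or, And, Not

# Automaton for "eventually p": state 0 (waiting), state 1 (accepting sink).
k = 4
s = Solver()
p = [Bool(f'p_{i}') for i in range(k)]        # the path's labelling
q = [Int(f'q_{i}') for i in range(k + 1)]     # automaton state at each step

s.add(q[0] == 0)
for i in range(k):
    s.add(Or(And(q[i] == 0, p[i],      q[i + 1] == 1),
             And(q[i] == 0, Not(p[i]), q[i + 1] == 0),
             And(q[i] == 1,            q[i + 1] == 1)))
s.add(q[k] == 1)                               # finite acceptance at bound k

print(s.check())   # sat: some path within the bound satisfies "eventually p"
```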


Author(s):  
Hernán Ponce-de-León ◽  
Florian Furbach ◽  
Keijo Heljanko ◽  
Roland Meyer

Abstract Dartagnan is a bounded model checker for concurrent programs under weak memory models. What makes it different from other tools is that the memory model is not hard-coded inside Dartagnan but taken as part of the input. For SV-COMP ’20, we take as input sequential consistency (i.e. the standard interleaving memory model) extended by support for atomic blocks. Our point is to demonstrate that a universal tool can be competitive and perform well in SV-COMP. Being a bounded model checker, Dartagnan’s focus is on disproving safety properties by finding counterexample executions. For programs with bounded loops, Dartagnan performs an iterative unwinding that results in a complete analysis. The SV-COMP ’20 version of Dartagnan works on Boogie code. The C programs of the competition are translated internally to Boogie using SMACK.
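The iterative unwinding can be pictured with the following Python sketch (a schematic, not Dartagnan's implementation): the bound grows until either a violation is found or the loop provably terminates within the bound, at which point the analysis is complete.

```python
# Schematic of iterative unwinding: deepen the bound until a bug is found
# or the loop demonstrably exits within the bound (analysis complete).
def run_unrolled(bound):
    """Execute the loop body at most `bound` times; report what happened."""
    i = 0
    while i < 7:                 # program loop with a bounded trip count
        if i == 5:
            return 'violation'   # assertion failure inside the loop
        i += 1
        bound -= 1
        if bound == 0:
            return 'bound hit'   # analysis incomplete at this bound
    return 'complete'            # loop exited: the unwinding was sufficient

k = 1
while True:
    result = run_unrolled(k)
    print(f'bound {k}: {result}')
    if result != 'bound hit':    # either a bug or a complete analysis
        break
    k *= 2                       # iterative deepening of the bound
```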


2015 ◽  
Author(s):  
Phillipe Pereira ◽  
Higo Albuquerque ◽  
Hendrio Marques ◽  
Isabela Silva ◽  
Vanessa Santos ◽  
...  

This article presents an extension of the Efficient SMT-Based Context-Bounded Model Checker (ESBMC) to verify programs that run on graphics processing units (GPUs), called ESBMC-GPU. In particular, ESBMC-GPU is a bounded model checker, based on satisfiability modulo theories, for programs developed for the Compute Unified Device Architecture (CUDA). ESBMC-GPU builds on an operational model, an abstract representation of the standard CUDA libraries that conservatively approximates their semantics. With ESBMC-GPU, it is possible to verify more real-world CUDA programs than with other existing approaches.
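As a toy illustration of context-bounded exploration over thread schedules (not ESBMC-GPU's CUDA operational model), the Python sketch below enumerates schedules of a two-thread "kernel" with an unsynchronized write; the race shows up as a final state that depends on the interleaving.

```python
# Toy sketch of schedule enumeration (not ESBMC-GPU's CUDA model): a data
# race manifests as more than one possible final value of shared memory.
from itertools import permutations

def kernel_step(tid, mem):
    return dict(mem, cell=tid + 1)   # unsynchronized write: last writer wins

def explore(num_threads, steps_per_thread):
    outcomes = set()
    schedule = [t for t in range(num_threads) for _ in range(steps_per_thread)]
    for order in set(permutations(schedule)):   # all interleavings
        mem = {'cell': 0}
        for tid in order:
            mem = kernel_step(tid, mem)
        outcomes.add(mem['cell'])
    return outcomes

print(explore(2, 1))   # expect: {1, 2} -- the result depends on the schedule
```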


2012 ◽  
Vol 9 (4) ◽  
pp. 1431-1451 ◽  
Author(s):  
Lanfang Tan ◽  
Qingping Tan ◽  
Jianjun Xu ◽  
Huiping Zhou

In recent decades, reliability in the presence of transient faults has been a significant problem. To mitigate the effects of transient faults, fault-tolerant techniques have been proposed. However, validating the effectiveness of fault-tolerant techniques is another problem. In this paper, we present an original approach to evaluate the effectiveness of signature-monitoring mechanisms. The approach is based on model-checking principles. First, the fault-tolerant model is specified using step-operational semantics. Second, the fault model is refined into a state transition system that is translated into the input program of the symbolic model checker NuSMV. Using NuSMV, two representative signature-monitoring algorithms are verified. The approach avoids the state-space explosion problem, and the verification completed in practical time. The verification results reveal some undetected errors that have not been previously observed.
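The flavour of such a verification can be sketched in a few lines of Python (a toy, not the paper's NuSMV model): inject every possible single control-flow fault and ask whether the signature comparison catches it; blocks that share a signature let a fault slip through undetected.

```python
# Toy exhaustive fault injection against a signature-monitoring scheme
# (illustrative only; the paper's models are NuSMV transition systems).
BLOCKS = ['A', 'B', 'C', 'D']
SIG = {'A': 3, 'B': 5, 'C': 3, 'D': 9}   # note: A and C share a signature

def detected(i, wrong):
    """A fault redirects the jump after block i to `wrong`. The monitor
    compares the landed block's signature with the expected successor's."""
    expected = BLOCKS[i + 1]
    return SIG[wrong] != SIG[expected]

undetected = [(BLOCKS[i], w)
              for i in range(len(BLOCKS) - 1)
              for w in BLOCKS
              if w != BLOCKS[i + 1] and not detected(i, w)]
print(undetected)   # expect: [('B', 'A')] -- jumping to A instead of C slips by
```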


2021 ◽  
Vol 33 (1) ◽  
Author(s):  
Eckard Helmers ◽  
Chia Chien Chang ◽  
Justin Dauwels

Abstract Background Universities, as innovation drivers in science and technology worldwide, should be leading the Great Transformation towards a carbon-neutral society, and many have indeed picked up the challenge. However, only a small number of universities worldwide are collecting and publishing their carbon footprints, and some of them have defined zero emission targets. Unfortunately, there is limited consistency between the reported carbon footprints (CFs) because of different analysis methods, different impact measures, and different target definitions by the respective universities. Results Comprehensive CF data of 20 universities from around the globe were collected and analysed. Essential factors contributing to the university CF were identified. For the first time, CF data from universities were not only compared. The CF data were also evaluated, partly corrected, and augmented by missing contributions, to improve the consistency and comparability. The CF performance of each university in the respective year is thus homogenized, and measured by means of two metrics: CO2e emissions per capita and per m2 of constructed area. Both metrics vary by one order of magnitude across the different universities in this study. However, we identified ten universities reaching a per capita carbon footprint of lower than or close to 1.0 t (metric tons) CO2e per person and year (normalized by the number of people associated with the university), independent of the university’s size. In addition to the aforementioned two metrics, we suggested a new metric expressing the economic efficiency in terms of the CF per $ expenditure and year. We next aggregated the results for all three impact measures, arriving at an overall carbon performance for the respective universities, which we found to be independent of geographical latitude. Instead, the per capita measure correlates with the national per capita CFs, and it reaches on average 23% of the national impacts per capita. The three top performing universities are located in Switzerland, Chile, and Germany. Conclusion The usual reporting of CO2 emissions is categorized into Scopes 1–3 following the GHG Protocol Corporate Accounting Standard, which makes comparison across universities challenging. In this study, we attempted to standardize the CF metrics, allowing us to objectively compare the CF at several universities. From this study, we observed that, almost 30 years after the Earth Summit in Rio de Janeiro (1992), the results are still limited. Only one zero emission university was identified, and hence, the transformation should speed up globally.
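The three normalizations are simple ratios; the Python sketch below computes them for made-up inputs (the numbers are illustrative only and are not data from the study).

```python
# The three CF normalizations described above, applied to invented numbers.
def cf_metrics(total_co2e_t, people, area_m2, spend_usd):
    return {
        't CO2e per capita':  total_co2e_t / people,
        'kg CO2e per m^2':    1000 * total_co2e_t / area_m2,   # 1 t = 1000 kg
        'g CO2e per $ spent': 1e6 * total_co2e_t / spend_usd,  # 1 t = 1e6 g
    }

print(cf_metrics(total_co2e_t=25_000, people=30_000,
                 area_m2=400_000, spend_usd=900_000_000))
```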


Author(s):  
Pablo Ponzio ◽  
Ariel Godio ◽  
Nicolás Rosner ◽  
Marcelo Arroyo ◽  
Nazareno Aguirre ◽  
...  

Abstract Software model checkers are able to exhaustively explore different bounded program executions arising from various sources of non-determinism. These tools provide statements to produce non-deterministic values for certain variables, thus forcing the corresponding model checker to consider all possible values for these during verification. While these statements offer an effective way of verifying programs handling basic data types and simple structured types, they are inappropriate as a mechanism for non-deterministic generation of pointers, favoring the use of insertion routines to produce dynamic data structures when verifying, via model checking, programs handling such data types. We present a technique to improve model checking of programs handling heap-allocated data types, by taming the explosion of candidate structures that can be built when non-deterministically initializing heap object fields. The technique exploits precomputed relational bounds that disregard values deemed invalid by the structure’s type invariant, thus reducing the state space to be explored by the model checker. Precomputing the relational bounds is a challenging and costly task in itself, for which we also present an efficient algorithm based on incremental SAT solving. We implement our approach on top of a bounded model checker and show that, for a number of data structure implementations, we can handle significantly larger input structures and detect faults that the underlying model checker alone is unable to detect.
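The pruning effect of precomputed field bounds can be illustrated with a small Python sketch (not the paper's algorithm): enumerating candidate next fields for a 4-node list shrinks from hundreds of candidates to a handful once a tight, symmetry-broken bound is applied.

```python
# Sketch of field-bound pruning: candidate `next` fields for a 4-node list,
# with and without a precomputed tight bound (illustrative only).
from itertools import product

N = 4
NULL = None
nodes = list(range(N))

# Unbounded non-determinism: each node's `next` may be any node or null.
unbounded = list(product(nodes + [NULL], repeat=N))

# A precomputed tight bound for acyclic lists rooted at node 0: node i may
# only point to node i+1 or null (a symmetry-broken canonical form).
bounded = [nxt for nxt in unbounded
           if all(nxt[i] in (i + 1, NULL) for i in range(N - 1))
           and nxt[N - 1] is NULL]

print(len(unbounded), '->', len(bounded))   # expect: 625 -> 8
```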

