Order-sorted Homeomorphic Embedding Modulo Combinations of Associativity and/or Commutativity Axioms

2020 · Vol. 177 (3-4) · pp. 297-329
Author(s): María Alpuente, Angel Cuenca-Ortega, Santiago Escobar, José Meseguer

The homeomorphic embedding relation has been widely used for defining termination criteria of symbolic methods for program analysis, transformation, and verification. However, homeomorphic embedding has never been investigated in the context of order-sorted rewrite theories that support symbolic execution methods modulo equational axioms. This paper generalizes the symbolic homeomorphic embedding relation to order-sorted rewrite theories that may contain various combinations of associativity and/or commutativity axioms for different binary operators. We systematically measure the performance of different, increasingly efficient formulations of the homeomorphic embedding relation modulo axioms, which we implement in Maude. Our experimental results show that the most efficient version indeed pays off in practice.
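
The base relation being generalized can be made concrete with a small sketch. The following is a minimal, illustrative implementation of the classical homeomorphic embedding check on unsorted first-order terms (Kruskal's tree embedding); the order-sorted, modulo-A/C relation studied in the paper additionally accounts for sorts and equational axioms, and the `Term`/`embeds` names here are ours, not Maude's.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Term:
    """A first-order term: a function symbol applied to argument terms.
    Constants are 0-ary symbols."""
    symbol: str
    args: Tuple["Term", ...] = ()

def embeds(s: Term, t: Term) -> bool:
    """Classical homeomorphic embedding: s is embedded in t if s can be
    obtained from t by deleting symbols."""
    # Diving: s embeds into some immediate subterm of t.
    if any(embeds(s, ti) for ti in t.args):
        return True
    # Coupling: root symbols coincide and arguments embed pairwise.
    return (s.symbol == t.symbol
            and len(s.args) == len(t.args)
            and all(embeds(si, ti) for si, ti in zip(s.args, t.args)))

# Example: f(a) is embedded in f(g(a)) by deleting the g.
a = Term("a"); fa = Term("f", (a,)); fga = Term("f", (Term("g", (a,)),))
assert embeds(fa, fga)
```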

Author(s): Marcos Lordello Chaim, Daniel Soares Santos, Daniela Soares Cruzes

Buffer overflow (BO) is a well-known and widely exploited security vulnerability. Despite the extensive body of research, BO remains a threat to security-critical applications. The authors present a comprehensive systematic review of techniques intended to detect BO vulnerabilities before software is released to production. They found that most of the studies address several vulnerabilities or memory errors rather than being specific to BO detection. The authors organized them into seven categories: program analysis, testing, computational intelligence, symbolic execution, models, and code inspection. Program analysis, testing, and code inspection techniques are available for use by practitioners. However, the adoption of program analysis is hindered by its high number of false alarms; testing is broadly used, but in an ad hoc manner; and code inspection can be used in practice provided it is added as a task of the software development process. New techniques combining object code analysis with techniques from different categories appear to be a promising research avenue towards practical BO detection.


2017 · Vol. 8 (2) · pp. 59-75
Author(s): Xu-zhou Zhang, Yun-zhan Gong, Ya-Wen Wang

Static program analysis is a powerful technique for analyzing program behavior, but it suffers from scalability problems such as path explosion, which is caused by the presence of loops and function calls. This article applies a selective execution mechanism and a heuristic strategy to exploring paths through loops. The combined strategy tries to alleviate the path explosion problem in three ways: 1) exploring loops with different approaches according to their position relative to a specific target; 2) combining static analysis, dynamic execution, and symbolic execution to deal with the separated program; 3) applying a heuristic strategy to guide the path exploration. These approaches are integrated to automatically generate paths for specified targets in loop structures. Experimental results show that the proposed strategy handles combinations of different loops, outperforms some existing techniques in achieving coverage for programs containing loops, and is applicable in engineering practice.
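
The exact combination of selective execution and heuristics is specific to the article's tool; as a rough, assumption-laden sketch of the general idea of bounding symbolic exploration of a loop and deepening it toward a target, the toy below unrolls a hypothetical loop one bound at a time and asks a solver whether the target can be reached within that bound (requires the z3-solver package; the loop, the target predicate, and `reachable_within` are all invented for illustration).

```python
# Toy sketch: incrementally unroll a loop and ask a solver whether a target
# can be reached within the current bound, instead of eagerly enumerating all
# paths. The loop `while (i < n): s += i`, the target `s == 10`, and all names
# are invented for illustration.
from z3 import Int, Solver, sat

def reachable_within(bound: int) -> bool:
    """Symbolically execute exactly `bound` unrolled iterations of the loop
    and check whether the target predicate can hold at loop exit."""
    n, s0 = Int("n"), Int("s0")
    solver = Solver()
    solver.add(s0 == 0)              # initial state: s starts at 0
    s = s0
    for k in range(bound):
        solver.add(k < n)            # path condition: guard held on iteration k
        s = s + k                    # symbolic effect of the loop body
    solver.add(bound >= n)           # the loop exits after `bound` iterations
    solver.add(s == 10)              # the target predicate
    return solver.check() == sat

# Iterative deepening over the loop bound, stopping at the first feasible path.
for bound in range(8):
    if reachable_within(bound):
        print(f"target reachable after {bound} loop iterations")
        break
```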


Author(s): Bishoksan Kafle, Graeme Gange, Peter J. Stuckey, Peter Schachte, Harald Søndergaard

Precondition inference is a non-trivial problem with important applications in program analysis and verification. We present a novel iterative method for automatically deriving preconditions for the safety and unsafety of programs. Each iteration maintains over-approximations of the sets of safe and unsafe initial states, which are used to partition the program’s initial states into those known to be safe, those known to be unsafe, and those that remain unknown. We then construct revised programs with those unknown initial states and iterate the procedure until the approximations are disjoint or some termination criterion is met. An experimental evaluation of the method on a set of software verification benchmarks shows that it can infer precise preconditions (sometimes optimal) that are not obtainable using previous methods.
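
At a high level, the iteration can be pictured as in the sketch below, where the two over-approximation functions are placeholders for the abstraction machinery the paper actually uses (the function names and the toy state domain are ours).

```python
# Schematic view of the iterative precondition inference loop described above.
# `over_approx_safe` / `over_approx_unsafe` stand in for the actual
# over-approximation engines and are left abstract here.
from typing import Callable, Set

State = int  # toy initial-state domain

def infer_preconditions(
    initial: Set[State],
    over_approx_safe: Callable[[Set[State]], Set[State]],
    over_approx_unsafe: Callable[[Set[State]], Set[State]],
    max_iters: int = 20,
):
    unknown = set(initial)
    safe, unsafe = set(), set()
    for _ in range(max_iters):
        approx_safe = over_approx_safe(unknown)      # over-approx of safe states
        approx_unsafe = over_approx_unsafe(unknown)  # over-approx of unsafe states
        # States outside one over-approximation definitely belong to the other class.
        safe |= unknown - approx_unsafe
        unsafe |= unknown - approx_safe
        unknown = approx_safe & approx_unsafe        # still ambiguous states
        if not unknown:                              # approximations are disjoint
            break
    return safe, unsafe, unknown
```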


DOI: 10.29007/qwd8 · 2018
Author(s): Jens Katelaan, Christoph Matheja, Thomas Noll, Florian Zuleger

In this tool paper we present Harrsh – a tool for unified reasoning about symbolic-heap separation logic. Harrsh supports the analysis of robustness properties of the symbolic heap fragment of separation logic with user-defined inductive predicates. Robustness properties, such as satisfiability, reachability, and acyclicity, are important for a wide range of reasoning tasks in automated program analysis and verification based on separation logic. Harrsh makes use of heap automata, which offer a generic approach to reasoning about robustness properties. We report on experimental results for several robustness properties taken from the literature and compare against satisfiability checkers participating in a recent competition. We conclude that a generic approach to checking robustness is feasible and promising for the extension to further properties of interest.
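
As a toy illustration of bottom-up, automaton-style reasoning over inductive predicate definitions (deliberately ignoring the points-to assertions, separating conjunction, and pure constraints that Harrsh's heap automata actually track), the sketch below marks a user-defined predicate as satisfiable if it admits at least one finite unfolding; the encoding and names are ours.

```python
# Toy bottom-up "productivity" check over inductive predicate definitions:
# in this simplified setting a predicate is satisfiable if some rule's body
# only calls predicates already known to be satisfiable. This mirrors the
# bottom-up flavour of heap automata but omits the symbolic-heap semantics.
from typing import Dict, List

# Each predicate maps to a list of rules; a rule is the list of predicate
# calls occurring in its body (an empty list is a base case).
Definitions = Dict[str, List[List[str]]]

def satisfiable_predicates(defs: Definitions) -> set:
    sat: set = set()
    changed = True
    while changed:
        changed = False
        for pred, rules in defs.items():
            if pred in sat:
                continue
            if any(all(call in sat for call in rule) for rule in rules):
                sat.add(pred)
                changed = True
    return sat

# Example: a list-segment predicate has a base case, so it is satisfiable;
# a predicate whose every rule calls itself never finishes an unfolding.
defs = {
    "ls":  [[], ["ls"]],     # ls ::= emp | one cell * ls
    "bad": [["bad"]],        # bad ::= ... * bad   (no base case)
}
print(satisfiable_predicates(defs))  # {'ls'}
```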


2021 · Vol. 5 (OOPSLA) · pp. 1-30
Author(s): Yannis Smaragdakis, Neville Grech, Sifis Lagouvardos, Konstantinos Triantafyllou, Ilias Tsatiris

We present a static analysis approach that combines concrete values and symbolic expressions. This symbolic value-flow (“symvalic”) analysis models program behavior with high precision, e.g., full path sensitivity. To achieve deep modeling of program semantics, the analysis relies on a symbiotic relationship between a traditional static analysis fixpoint computation and a symbolic solver: the solver does not merely receive a complex “path condition” to solve, but is instead invoked repeatedly (often tens or hundreds of thousands of times), in close cooperation with the flow computation of the analysis. The result of the symvalic analysis architecture is a static modeling of program behavior that is much more complete than symbolic execution, much more precise than conventional static analysis, and domain-agnostic: no special-purpose definition of anti-patterns is necessary in order to compute violations of safety conditions with high precision. We apply the analysis to the domain of Ethereum smart contracts. This domain represents a fundamental challenge for program analysis approaches: despite numerous publications, research work has not been effective at uncovering vulnerabilities of high real-world value. In a systematic comparison of symvalic analysis with past tools, we find significantly increased completeness (shown as 83-96% statement coverage and more true error reports) combined with much higher precision, as measured by the rate of true positive reports. In terms of real-world impact, since the beginning of 2021 the analysis has resulted in the discovery and disclosure of several critical vulnerabilities concerning funds worth many millions of dollars. Six separate bug bounties totaling over $350K have been awarded for these disclosures.
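
To give a rough flavour of mixing concrete values with symbolic expressions in a single value-flow step, the toy below keeps both kinds of value for a variable and consults the solver per candidate to prune infeasible flows into a guarded sink (requires z3-solver; the program fragment, variable names, and guard are invented and not Ethereum-specific).

```python
# Toy flavour of a "symvalic" value-flow step: a variable's inferred values mix
# concrete integers with symbolic expressions over the program inputs, and the
# solver is consulted once per candidate to prune infeasible flows. The
# statement under analysis and all names are invented for illustration.
from z3 import Int, IntVal, Solver, sat

x = Int("x")                       # symbolic program input
candidates_for_a = [0, 1, x + 1]   # concrete values plus one symbolic expression

def can_satisfy(value, constant: int) -> bool:
    """Ask the solver whether `value == constant` is satisfiable."""
    expr = (IntVal(value) if isinstance(value, int) else value) == constant
    solver = Solver()
    solver.add(expr)
    return solver.check() == sat

# Statement under analysis: `if (a == 42) sink(a);`
# Only values of `a` that can satisfy the guard flow into the sink.
reaching_sink = [v for v in candidates_for_a if can_satisfy(v, 42)]
print(reaching_sink)               # [x + 1]
```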


2013 · Vol. 718-720 · pp. 2314-2317
Author(s): Yi Wang, Tong Hai Jiang, Jun Dong

Symbolic execution is a promising approach to software analysis and testing, but it still suffers from scalability issues; a significant challenge is how to handle loop-induced path explosion. This paper proposes a new approach to mitigate the scalability problem caused by loops in symbolic execution. By reusing previously calculated results captured from the satisfiability verification process, we implement a fast loop boundary coverage test strategy that eliminates a large number of redundant paths and unnecessary solver invocations. Experimental results on Symbolic (Java) PathFinder show that our strategy can greatly improve the efficiency of testing for programs containing loops.
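
The precise reuse strategy belongs to the authors' Symbolic PathFinder extension; the sketch below only illustrates the general idea of reusing satisfiability verdicts, by caching path conditions so that structurally repeated queries arising from loop unrolling skip the solver (requires z3-solver; the cache key and example constraints are ours).

```python
# Rough sketch of reusing satisfiability results across loop paths: path
# conditions are canonicalized and cached, so structurally identical queries
# (common when unrolling loops) do not trigger new solver calls.
from z3 import And, Int, Solver, sat

_sat_cache = {}

def is_feasible(path_condition) -> bool:
    """Check satisfiability of a path condition, reusing cached verdicts."""
    key = path_condition.sexpr()        # canonical textual form as the cache key
    if key not in _sat_cache:
        solver = Solver()
        solver.add(path_condition)
        _sat_cache[key] = (solver.check() == sat)
    return _sat_cache[key]

i, n = Int("i"), Int("n")
# Unrolling a loop tends to re-issue the same query; the second call below is
# answered from the cache without invoking the solver again.
print(is_feasible(And(i >= 0, i < n)))   # solver call
print(is_feasible(And(i >= 0, i < n)))   # cache hit
print(len(_sat_cache))                   # 1
```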


Author(s): Herbert Rocha, Rafael Menezes, Lucas C. Cordeiro, Raimundo Barreto

Map2Check is a software verification tool that combines fuzzing, symbolic execution, and inductive invariants. It automatically checks safety properties in C programs by adopting source code instrumentation to monitor data (e.g., memory pointers) from the program’s executions using the LLVM compiler infrastructure. For SV-COMP 2020, we extended Map2Check to exploit an iterative deepening approach using LibFuzzer and Klee to check for safety properties. We also use Crab-LLVM to infer program invariants based on reachability analysis. Experimental results show that Map2Check can handle a wide variety of safety properties in several intricate verification tasks from SV-COMP 2020.
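
Map2Check itself drives LibFuzzer and KLEE over instrumented LLVM bitcode; the following is only a schematic of an iterative-deepening alternation between a cheap engine and a more expensive one, with placeholder callbacks standing in for the two engines.

```python
# Schematic of an iterative-deepening alternation between a fuzzer and a
# symbolic executor, in the spirit of the approach described above. The engine
# callbacks are placeholders, not the tool's actual interfaces.
from typing import Callable, Optional

def iterative_deepening(
    fuzz: Callable[[int], Optional[str]],      # returns a counterexample or None
    symbolic: Callable[[int], Optional[str]],  # returns a counterexample or None
    max_depth: int = 10,
) -> Optional[str]:
    for depth in range(1, max_depth + 1):
        # Cheap engine first: time-boxed fuzzing with this depth budget.
        cex = fuzz(depth)
        if cex is not None:
            return cex
        # Then the more expensive symbolic engine with the same budget.
        cex = symbolic(depth)
        if cex is not None:
            return cex
    return None   # no property violation found within the budget

# Hypothetical engines for demonstration: fuzzing finds nothing, while the
# symbolic engine reports a violation once the depth budget reaches 3.
print(iterative_deepening(lambda d: None,
                          lambda d: "input=7 violates assert" if d >= 3 else None))
```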


2020
Author(s): Maria Paquin

Benchmark programs are an integral part of program analysis research. Researchers use benchmark programs to evaluate existing techniques and to test the feasibility of new approaches. The larger and more realistic the set of benchmarks, the more confident a researcher can be about the correctness and reproducibility of their results. However, obtaining an adequate set of benchmark programs has been a long-standing challenge in the program analysis community. In this thesis, we present the APT tool, a framework we designed and implemented to automate the generation of realistic benchmark programs suitable for program analysis evaluations. Our tool targets intra-procedural analyses that operate on an integer domain, specifically symbolic execution. The framework is composed of three main stages. In the first stage, the tool extracts potential benchmark programs suitable for symbolic execution from open-source repositories. In the second stage, the tool transforms the extracted programs into compilable, stand-alone benchmarks by removing external dependencies and nonlinear expressions. In the third stage, the benchmarks are verified and made available to the user. We have designed our transformation algorithms to remove program dependencies and nonlinear expressions while preserving semantic equivalence under the abstraction of symbolic analysis. That is, we want the information the analysis computes on the original program and on its transformed version to be equivalent. Our work provides static analysis researchers with concise, compilable benchmark programs that are relevant to symbolic execution, allowing them to focus their efforts on advancing analysis techniques. Furthermore, our work benefits the software engineering community by enabling static analysis researchers to perform benchmarking with a large, realistic set of programs, thus strengthening the empirical evidence for the advancements in static program analysis.
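
The thesis transforms C programs; purely to show the flavour of the second stage, the toy below abstracts a nonlinear product in a Python expression tree by a fresh, unconstrained input variable, which is one standard way to keep a benchmark within a linear symbolic-analysis fragment (the transformer, the `nondet_*` names, and the example are ours, not APT's actual implementation).

```python
# Toy illustration of abstracting nonlinear subexpressions: any product of two
# non-constant operands is replaced by a fresh, unconstrained input variable,
# which over-approximates the original program for a linear symbolic analysis.
import ast

class AbstractNonlinear(ast.NodeTransformer):
    def __init__(self):
        self.fresh = 0

    def visit_BinOp(self, node: ast.BinOp) -> ast.AST:
        self.generic_visit(node)                 # rewrite subexpressions first
        is_mult = isinstance(node.op, ast.Mult)
        both_symbolic = not isinstance(node.left, ast.Constant) and \
                        not isinstance(node.right, ast.Constant)
        if is_mult and both_symbolic:
            self.fresh += 1                      # introduce a fresh input variable
            return ast.copy_location(
                ast.Name(id=f"nondet_{self.fresh}", ctx=ast.Load()), node)
        return node

tree = ast.parse("r = x * y + 3 * z")
print(ast.unparse(AbstractNonlinear().visit(tree)))   # r = nondet_1 + 3 * z
```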


2021 · pp. 1-17
Author(s): Jim Prentzas, Ioannis Hatzilygeroudis

Neuro-symbolic approaches combine neural and symbolic methods. This paper explores aspects of the reasoning mechanisms of two neuro-symbolic approaches, namely neurules and connectionist expert systems. Both provide reasoning and explanation facilities. Neurules are a type of neuro-symbolic rule that tightly integrates the neural and symbolic components, giving pre-eminence to the symbolic component. Connectionist expert systems give pre-eminence to the connectionist component. This paper explores reasoning aspects of neurules and connectionist expert systems that have not been previously addressed. As far as neurules are concerned, an aspect that plays a role in conflict resolution (i.e., the order of neurules) is explored. Experimental results show an improvement in reasoning efficiency. As far as connectionist expert systems are concerned, variations of the reasoning mechanism are explored. Experimental results are presented for them as well, showing that one of the variations generally performs better than the others.
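
For readers unfamiliar with neurules, the sketch below shows the adaline-style firing test usually given in the neurules literature: each condition carries a significance factor, the rule carries a bias factor, and the conclusion is drawn when the weighted sum is positive (the class layout and the concrete numbers are invented for illustration).

```python
# Minimal sketch of how a single neurule fires: condition truth values
# (1 true, -1 false, 0 unknown) are weighted by significance factors and
# added to the rule's bias factor; the conclusion is drawn if the sum > 0.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Neurule:
    bias: float                          # significance factor sf0 of the rule
    conditions: List[Tuple[str, float]]  # (condition name, significance factor)
    conclusion: str

    def fires(self, truth: Dict[str, int]) -> bool:
        weighted = self.bias + sum(sf * truth.get(cond, 0)
                                   for cond, sf in self.conditions)
        return weighted > 0

rule = Neurule(bias=-1.5,
               conditions=[("fever", 2.0), ("cough", 1.0)],
               conclusion="flu-suspected")
print(rule.fires({"fever": 1, "cough": 1}))   # True:  -1.5 + 2.0 + 1.0 > 0
print(rule.fires({"fever": 1, "cough": -1}))  # False: -1.5 + 2.0 - 1.0 < 0
```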

