CoVEGI: Cooperative Verification via Externally Generated Invariants

Author(s):  
Jan Haltermann ◽  
Heike Wehrheim

Software verification has recently made enormous progress due to the development of novel verification methods and the speed-up of supporting technologies such as SMT solving. To keep software verification tools up to date with these advances, tool developers keep integrating newly designed methods into their tools, almost exclusively by re-implementing each method within their own framework. While this allows for conceptual re-use of methods, it nevertheless requires a new implementation for every new technique. In this paper, we employ cooperative verification to avoid re-implementation and to enable the use of novel tools as black-box components in verification. Specifically, cooperation is employed for a core ingredient of software verification: invariant generation. Finding an adequate loop invariant is key to the success of a verification run. Our framework, named CoVEGI, allows a master verification tool to delegate the task of invariant generation to one or several specialized helper invariant generators. Their results are then utilized within the verification run of the master verifier, allowing in particular for cross-checking the validity of the invariants. We experimentally evaluate our framework on an instance with two masters and three different invariant generators, using a number of benchmarks from SV-COMP 2020. The experiments show that the use of CoVEGI can increase the number of correctly verified tasks without increasing the resources used.
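
The master/helper cooperation described above reduces, at its core, to a simple orchestration loop. The following is a minimal sketch of that idea in Python; all names (cooperative_verify, the master and helper interfaces, the "unknown" verdict) are hypothetical illustrations, not CoVEGI's actual API:

# Minimal sketch of master/helper cooperation for invariant generation.
# All interface names here are hypothetical, not the CoVEGI API.

def cooperative_verify(master, helpers, task):
    """Run the master verifier; on demand, delegate invariant generation."""
    result = master.verify(task)                 # first attempt, no external help
    if result.status != "unknown":
        return result
    for helper in helpers:                       # ask specialized generators
        candidate = helper.generate_invariants(task)
        if candidate is None:
            continue
        # The master cross-checks the candidate before trusting it:
        # an invalid invariant is discarded rather than assumed.
        if master.validate_invariants(task, candidate):
            result = master.verify(task, invariants=candidate)
            if result.status != "unknown":
                return result
    return result

The essential design point is the cross-check: an externally produced invariant is validated by the master before it is trusted, so a faulty helper cannot compromise soundness.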

10.29007/prxp ◽  
2018 ◽  
Author(s):  
Jan Olaf Blech ◽  
Thanh-Hung Nguyen ◽  
Michael Perin

In this paper we present on-going work addressing the problem of automatically generating realistic and guaranteed correct invariants. Since invariant generation mechanisms are error-prone, after the computation of invariants by a verification tool we formally prove, using a higher-order theorem prover and automated techniques, that the generated invariants are indeed invariants of the considered systems. We consider invariants for BIP models. BIP (behavior, interaction, priority) is a language for specifying asynchronous component-based systems. Proving that an invariant holds often requires an induction on possible system execution traces. For this reason, apart from generating invariants that precisely capture a system's behavior, inductiveness of invariants is an important goal. We establish a notion of robust BIP models. These can be automatically constructed from our original non-robust BIP models and over-approximate their behavior. We argue that invariants of robust BIP models capture the behavior of systems in a more natural way than invariants of the corresponding non-robust BIP models. Robust BIP models take into account imprecision due to values delivered by sensors. Invariants of robust BIP models tend to be inductive and are also invariants of the original non-robust BIP model. Therefore they may be used by our verification tools, and it is easy to show their correctness in a higher-order theorem prover. The presented work is developed to verify the results of a deadlock-checking tool for embedded systems after each of its computations. Thereby, we gain confidence in the provided analysis results.
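
Inductiveness, the property emphasized above, can be stated independently of BIP: an invariant must hold in all initial states and be preserved by every transition. A minimal finite-state sketch (a toy illustration in Python, not the authors' higher-order theorem prover setting):

def is_inductive(states, initial_states, successors, invariant):
    """Check the two proof obligations of an inductive invariant
    on an explicitly given finite transition system."""
    # 1. Base case: every initial state satisfies the invariant.
    if not all(invariant(s) for s in initial_states):
        return False
    # 2. Induction step: from any state satisfying the invariant,
    #    every successor satisfies it as well.
    return all(invariant(t)
               for s in states if invariant(s)
               for t in successors(s))

# Example: a counter incremented by 2 from 0; "x is even" is inductive.
states = range(0, 10)
succ = lambda x: [(x + 2) % 10]
print(is_inductive(states, [0], succ, lambda x: x % 2 == 0))  # True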


Author(s):  
Vladimir Anatolyevich Gratinskiy ◽  
Evgeny Mikhailovich Novikov ◽  
Ilja Sergeevich Zakharov

Verification tools can produce various kinds of results while checking programs against requirement specifications. Experts, who search for errors and estimate the completeness of verification, rely mostly on verdicts, violation witnesses and code coverage reports. To apply verification tools in practice, when many program configurations and versions are checked against various requirements, experts need convenient tools for automating the assessment of verification results. In this paper, we propose new methods for expert evaluation of verification results, covering the problems that our experience in verifying large programs against a large number of requirement specifications has shown to be most significant. Some ideas are borrowed from the areas of testing and static analysis. However, the specific methods and technical solutions are unique, since the results provided by verification tools either have no counterpart in other areas or carry special semantics. The paper presents our approaches and their implementation in the Klever software verification framework.


Author(s):  
Ruiyang Song ◽  
Kuang Xu

We propose and analyze a temporal concatenation heuristic for solving large-scale finite-horizon Markov decision processes (MDPs), which divides the MDP into smaller sub-problems along the time horizon and generates an overall solution by simply concatenating the optimal solutions from these sub-problems. As a "black box" architecture, temporal concatenation works with a wide range of existing MDP algorithms. Our main results characterize the regret of temporal concatenation compared to the optimal solution. We provide upper bounds for general MDP instances, as well as a family of MDP instances for which the upper bounds are shown to be tight. Together, our results demonstrate temporal concatenation's potential for substantial speed-up at the expense of some performance degradation.
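
Stated procedurally, the heuristic splits the horizon into contiguous segments, solves each sub-MDP with any off-the-shelf solver, and stitches the per-stage decision rules back together. A minimal sketch in Python, with a hypothetical solve_subproblem standing in for the black-box solver:

def temporal_concatenation(horizon, solve_subproblem, num_segments):
    """Split a finite horizon into contiguous segments, solve each
    sub-MDP independently, and concatenate the per-stage policies."""
    bounds = [horizon * i // num_segments for i in range(num_segments + 1)]
    policy = []
    for i in range(num_segments):
        start, end = bounds[i], bounds[i + 1]
        # solve_subproblem returns one decision rule per stage in
        # [start, end); any exact finite-horizon MDP solver can serve
        # as the black box here.
        policy.extend(solve_subproblem(start, end))
    return policy  # one decision rule per stage of the full horizon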


2021 ◽  
Vol 9 (02) ◽  
pp. 95-99
Author(s):  
Abriza Mahandis Shama ◽  
Dian W. Chandra

PT. Emporia Digital Raya is a fintech company whose products include web and Android applications. However, for deployment PT Emporia Digital Raya still uses an outdated setup: a single VM, with Git as the only mechanism for delivering code to the server, even though application deployment practices have since advanced considerably. This study therefore builds a system based on the currently popular DevSecOps approach. DevSecOps requires tools such as Jenkins, SonarQube, and Docker. The core of this system is automation: the deployment process is no longer performed manually as before. It is hoped that the system will speed up developer work and improve code quality.
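
As a rough illustration of the automation described above, such a pipeline is an ordered chain of quality gates and build steps. The sketch below drives one from Python via shell commands; the project key, image name and stage order are invented placeholders, and in practice a CI server such as Jenkins would orchestrate these stages:

import subprocess

STAGES = [
    # Static analysis gate (SonarQube scanner).
    ["sonar-scanner", "-Dsonar.projectKey=example-app"],
    # Run the test suite before anything is shipped.
    ["python", "-m", "pytest"],
    # Package the application as a container image.
    ["docker", "build", "-t", "example-app:latest", "."],
    # Push the image so the target environment can pull it.
    ["docker", "push", "example-app:latest"],
]

def run_pipeline():
    for cmd in STAGES:
        print("stage:", " ".join(cmd))
        # check=True aborts the pipeline on the first failing stage,
        # which is the behaviour a CI server enforces.
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_pipeline()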


The conventional division of verification methods is analyzed. It is concluded that synthetic methods of software verification can be considered the most relevant, useful and productive ones. Implementing formal verification methods for computer-system software, which supplement the traditional methods of testing and debugging, makes it possible to improve the reliability and security of programs. Formal verification methods can guarantee that the verified properties hold for the system model. Nowadays, these methods are actively being developed in the direction of reducing the total cost of formal verification, supporting modern programming concepts, and minimizing "manual" work in the transition from the system model to its implementation. Their main feature is the ability to search for errors using a mathematical model, without recourse to an existing implementation of the software, which is convenient and economical. There are several specific techniques for analyzing formal models, such as deductive analysis, model checking and consistency checking. Each verification method is used in particular cases, depending on the goal. Synthetic methods of software verification are considered the most relevant, useful and efficient, as they try to combine the advantages of different verification approaches while avoiding their drawbacks. Significant progress has been made in the development of such methods and their adoption in industrial software development practice.
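
The simplest concrete instance of "searching for errors on a mathematical model" is explicit-state model checking: exhaustively explore the reachable states of a model and report any state violating the desired property. A minimal sketch in Python (a toy breadth-first checker, not tied to any particular tool):

from collections import deque

def check_invariant(initial, successors, holds):
    """Breadth-first search over the model's state space; returns a
    counterexample path to a violating state, or None if the property
    holds in every reachable state."""
    queue = deque((s, [s]) for s in initial)
    seen = set(initial)
    while queue:
        state, path = queue.popleft()
        if not holds(state):
            return path                  # error found on the model itself
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                          # property holds everywhere reachable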


Author(s):  
Shahram Rahimi ◽  
Rishath A. S. Rias ◽  
Elham S. Khorasani

The complexity of designing concurrent and highly-evolving interactive systems has grown to a point where system verification has become a hurdle. Fortunately, formal verification methods have arrived at the right time. They detect errors, inconsistencies and incompleteness at early development stages of a system formally modeled using a formal specification language. The π-calculus (Milner, 1999) is one such formal language, providing a strong mathematical base that can be used for verifying system specifications. But manually verifying the specifications of concurrent systems is a very tedious and error-prone task, especially if the specifications are large. Consequently, an automated verification tool is essential for efficient system design and development. In addition, formal verification tools are a vital ingredient for fully harnessing the potential of component-based software composition. The authors developed such an automated verification tool, which is highly portable and seamlessly integrates with the visualization, reduction and performance evaluation tools introduced in (Ahmad & Rahimi, 2008; Rahimi, 2006; Rahimi et al., 2001, 2008), providing a comprehensive tool for designing and analyzing multi-process/agent systems. The concept of open bisimulation (Sangiorgi, 1996) is utilized as the theoretical base for the design and implementation of the tool, which incorporates an expert system implemented in the Java Expert System Shell (JESS) (Friedman-Hill, 2003).
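
Open bisimulation for the full π-calculus is considerably more involved, but the flavour of bisimulation checking can be conveyed on a finite labelled transition system: two states are related if every labelled move of one can be matched by the other, recursively. A naive fixpoint sketch in Python (strong bisimulation on a finite LTS, a deliberate simplification of what the tool implements):

def bisimilar_pairs(states, transitions):
    """Greatest strong bisimulation on a finite LTS.
    transitions: dict mapping state -> set of (label, target) pairs."""
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # Every move of p must be matched by q, and vice versa,
            # with the successor pair still in the relation.
            ok = (all(any(a == b and (p2, q2) in rel
                          for (b, q2) in transitions[q])
                      for (a, p2) in transitions[p])
                  and all(any(a == b and (p2, q2) in rel
                              for (b, p2) in transitions[p])
                          for (a, q2) in transitions[q]))
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel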


2018 ◽  
Vol 10 (1) ◽  
pp. 31-38 ◽  
Author(s):  
S. Karim ◽  
A. Biswas ◽  
A. Bosu ◽  
F. R. Laboni ◽  
A. S. Julie ◽  
...  

The present study aims at the design of an immediate-release formulation of fexofenadine hydrochloride, exploring the effect of sodium starch glycolate as a super-disintegrant. Fexofenadine hydrochloride immediate-release tablets (formulations F-1, F-2, F-3, F-4 and F-5) using different ratios of sodium starch glycolate as a disintegrant were prepared by the direct compression method. Standard physicochemical tests were performed for all the formulations. Dissolution studies of the formulations were done in phosphate buffer, pH 6.8, using USP apparatus II (paddle apparatus) at 50 rpm. The percent release of fexofenadine hydrochloride from formulations F-1, F-2, F-3, F-4 and F-5 was 89.98%, 90.98%, 92.95%, 96.92% and 99.85%, respectively, after 1 h, and the release pattern followed zero-order kinetics. The release rate of formulation F-5 was higher than that of the other formulations and the studied market products. Sodium starch glycolate sped up the release of the drug from the core tablets; the release of fexofenadine hydrochloride from the tablets was directly proportional to the amount of sodium starch glycolate present in the formulation, thereby producing immediate action.
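
For reference, zero-order kinetics means the cumulative amount of drug released grows linearly with time, so the release rate is independent of the remaining drug concentration. In standard pharmacokinetics notation (not taken from the paper): Q(t) = Q0 + k0·t, where Q(t) is the cumulative amount released at time t, Q0 the amount initially in solution, and k0 the zero-order release constant.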


2013 ◽  
Vol 7 (2) ◽  
pp. 57-85
Author(s):  
Khaoula Marzouki ◽  
Amira Radhouani ◽  
Narjes Ben Rajeb

Electronic voting protocols have many advantages over traditional voting, but they are complex and subject to many kinds of attacks. Therefore, the use of formal verification methods is crucial to ensure certain security properties. We propose to model a recent remote electronic voting protocol in the applied pi-calculus. We focused on several security properties, such as fairness, which expresses the impossibility of obtaining partial results; eligibility, which requires that only legitimate voters can vote; coercion resistance, which ensures that no voter may vote under pressure; and verifiability, which requires that anyone can verify the accuracy of the final result. We proved, either manually or using the automated verification tool ProVerif, that the protocol satisfies these security properties.

