Formally Validating a Practical Verification Condition Generator

Author(s):  
Gaurav Parthasarathy ◽  
Peter Müller ◽  
Alexander J. Summers

Abstract. A program verifier produces reliable results only if both the logic used to justify the program’s correctness is sound and the implementation of the program verifier is itself correct. Whereas it is common to formally prove soundness of the logic, the implementation of a verifier typically remains unverified. Bugs in verifier implementations may compromise the trustworthiness of successful verification results. Since program verifiers used in practice are complex, evolving software systems, it is generally not feasible to formally verify their implementation. In this paper, we present an alternative approach: we validate successful runs of the widely-used Boogie verifier by producing a certificate which proves correctness of the obtained verification result. Boogie performs a complex series of program translations before ultimately generating a verification condition whose validity should imply the correctness of the input program. We show how to certify three of Boogie’s core transformation phases: the elimination of cyclic control flow paths, the (SSA-like) replacement of assignments by assumptions using fresh variables (passification), and the final generation of verification conditions. Similar translations are employed by other verifiers. Our implementation produces certificates in Isabelle, based on a novel formalisation of the Boogie language.
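To make the passification phase concrete, the following is a minimal, hypothetical Python sketch of the idea (it is not Boogie's actual implementation, and the statement representation is invented for illustration): each assignment introduces a fresh version of the assigned variable and is replaced by an assumption relating the new version to the old ones, so the resulting program contains no assignments.

```python
# Hypothetical sketch of passification: replace every assignment "x := e"
# by "assume x_i == e'", where x_i is a fresh version of x and e' refers to
# the current versions of all variables. Illustration only, not Boogie's code.

def passify(stmts, variables):
    """stmts: list of ('assign', var, expr) or ('assert'/'assume', expr) tuples,
    with expr given as a whitespace-separated string over program variables."""
    version = {v: 0 for v in variables}

    def cur(tok):
        return f"{tok}_{version[tok]}" if tok in version else tok

    def rename(expr):
        return " ".join(cur(tok) for tok in expr.split())

    out = []
    for stmt in stmts:
        if stmt[0] == "assign":
            _, var, expr = stmt
            rhs = rename(expr)        # right-hand side uses the old versions
            version[var] += 1         # introduce a fresh version of var
            out.append(("assume", f"{cur(var)} == {rhs}"))
        else:
            kind, expr = stmt
            out.append((kind, rename(expr)))
    return out


# "x := x + 1; assert x > 0" becomes "assume x_1 == x_0 + 1; assert x_1 > 0"
print(passify([("assign", "x", "x + 1"), ("assert", "x > 0")], {"x"}))
```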

Algorithms ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 335
Author(s):  
Hongwei Wei ◽  
Guanjun Lin ◽  
Lin Li ◽  
Heming Jia

Exploitable vulnerabilities in software systems are major security concerns. To date, machine learning (ML) based solutions have been proposed to automate and accelerate the detection of vulnerabilities. Most ML techniques aim to identify a unit of source code, be it a line or a function, as vulnerable. We argue that a code segment is vulnerable if it occurs in certain semantic contexts, such as the control flow and data flow; therefore, it is important for the detection to be context aware. In this paper, we evaluate the performance of mainstream word embedding techniques in the scenario of software vulnerability detection. Based on the evaluation, we propose a supervised framework leveraging pre-trained context-aware embeddings from language models (ELMo) to capture deep contextual representations, further summarized by a bidirectional long short-term memory (Bi-LSTM) layer for learning long-range code dependencies. The framework takes a source code function directly as input and produces corresponding function embeddings, which can be treated as feature sets for conventional ML classifiers. Experimental results showed that the proposed framework yielded the best performance in its downstream detection tasks. Using the feature representations generated by our framework, random forest and support vector machine classifiers outperformed four baseline systems on our data sets, demonstrating that a framework incorporating ELMo can effectively capture vulnerable data flow patterns and facilitate the vulnerability detection task.
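As a rough illustration of the pipeline described above, here is a hedged, minimal Python sketch (synthetic data and assumed dimensions; the paper uses pre-trained ELMo embeddings, for which the lookup below is only a placeholder): contextual token embeddings of a function are summarized by a Bi-LSTM into a fixed-size function embedding, which then serves as a feature vector for a conventional classifier such as a random forest.

```python
# Hedged sketch: token embeddings -> Bi-LSTM summary -> conventional classifier.
# The embedding lookup is a stand-in for the pre-trained ELMo representations.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

EMB_DIM, HIDDEN = 256, 128   # assumed sizes, not taken from the paper

class FunctionEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.bilstm = nn.LSTM(EMB_DIM, HIDDEN, batch_first=True, bidirectional=True)

    def forward(self, token_embeddings):              # (batch, seq_len, EMB_DIM)
        _, (h_n, _) = self.bilstm(token_embeddings)
        # concatenate the final forward and backward hidden states
        return torch.cat([h_n[0], h_n[1]], dim=-1)    # (batch, 2 * HIDDEN)

def embed_tokens(functions):
    """Placeholder for pre-trained contextual embeddings (ELMo in the paper)."""
    return torch.randn(len(functions), 50, EMB_DIM)

encoder = FunctionEncoder()
X = encoder(embed_tokens(["void f() { ... }"] * 8)).detach().numpy()
y = [0, 1, 0, 1, 0, 1, 0, 1]                          # toy vulnerability labels
clf = RandomForestClassifier().fit(X, y)              # downstream ML classifier
```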


2005 ◽  
Vol 12 (4) ◽  
pp. 505-513 ◽  
Author(s):  
C. Del Negro ◽  
L. Fortuna ◽  
A. Vicari

Abstract. The forecasting of lava flow paths is a complex problem in which temperature, rheology and flux rate all vary with space and time. The problem is more difficult to solve when lava runs down a real topography, considering that the relations between the characteristic parameters of the flow are typically nonlinear. An alternative approach to this problem that does not use standard differential equation methods is Cellular Nonlinear Networks (CNNs). The CNN paradigm is a natural and flexible framework for describing locally interconnected, simple, dynamic systems that have a lattice-like structure. CNNs consist of arrays of essentially simple, nonlinearly coupled dynamic circuits containing linear and nonlinear elements, able to process large amounts of information in real time. Two different approaches have been implemented to simulate lava flows. Firstly, a typical CNN technique for analysing spatio-temporal phenomena (such as autowaves) in 2-D and in 3-D has been utilized. Secondly, CNNs have been used as solvers of the partial differential equations of the Navier-Stokes treatment of Newtonian flow.
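For reference, the standard Chua-Yang CNN cell dynamics, which underlie the locally interconnected circuits mentioned above, can be written as follows (shown as general background; the specific templates used for the lava-flow simulations are not given in this abstract):

$$\dot{x}_{ij} = -x_{ij} + \sum_{(k,l)\in N_r(i,j)} A_{ij;kl}\, y_{kl} + \sum_{(k,l)\in N_r(i,j)} B_{ij;kl}\, u_{kl} + z_{ij}, \qquad y_{ij} = \tfrac{1}{2}\bigl(\lvert x_{ij}+1\rvert - \lvert x_{ij}-1\rvert\bigr),$$

where $x_{ij}$, $u_{ij}$ and $y_{ij}$ are the state, input and output of cell $(i,j)$, $N_r(i,j)$ is its $r$-neighbourhood, $A$ and $B$ are the feedback and control templates, and $z_{ij}$ is a bias term.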


2021 ◽  
Author(s):  
Johann Nortje

This thesis presents the design of a real-time visual performance system for live performances. Building on a research analysis of historical context and precedents, it is evident that software systems currently available to Live Cinema and VJ performers are often complex to navigate and counterintuitive to perform with. An alternative approach to visual performance system design is investigated in this thesis, where the spatial zone of the physical performance is used as the basis for the design, rather than purely placing the focus on software architecture. The investigation focuses on how the creation of live visual content can be achieved through the virtual and physical spatial relationships within the performance, and how the performer then interacts with these relationships through bodily response and navigation. This is achieved by combining the successes of contemporary visual performances, the interaction techniques used in pre-cinema instrumentation, and the use of projection mapping as a means of visually addressing the entire space of the performance. These investigations are demonstrated through a series of experiments and theoretical studies culminating in a set of design criteria, brought together in a final system design accompanied by a demonstrative performance. The significance of this research is to provide the design basis for a genuinely intuitive visual performance instrument, which can provide immediate results yet still require skill and experience to master. This will move the skill base of visual performance away from software navigation and towards the physical ability to create and perform complex visual compositions in real time.


2001 ◽  
Vol 148 (6) ◽  
pp. 175 ◽  
Author(s):  
I. Hayes ◽  
C. Fidge ◽  
K. Lermer

Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. WA225-WA232 ◽  
Author(s):  
Emily B. Voytek ◽  
Caitlin R. Rushlow ◽  
Sarah E. Godsey ◽  
Kamini Singha

Shallow subsurface flow is a dominant process controlling hillslope runoff generation, soil development, and solute reaction and transport. Despite their importance, the location and geometry of these flow paths are difficult to determine. In arctic environments, shallow subsurface flow paths are limited to a thin zone of seasonal thaw above permafrost, which is traditionally assumed to mimic the surface topography. We have used a combined approach of electrical resistivity tomography (ERT) and self-potential (SP) measurements to map shallow subsurface flow paths in and around water tracks, drainage features common to arctic hillslopes. ERT measurements delineate thawed zones in the subsurface that control flow paths, whereas SP is sensitive to groundwater flow. We have found that areas of low electrical resistivity in the water tracks were deeper than manual thaw depth estimates and varied from the surface topography. This finding suggests that traditional techniques might underestimate active-layer thaw and the extent of the flow path network on arctic hillslopes. SP measurements identify complex 3D flow paths in the thawed zone. Our results lay the groundwork for investigations into the seasonal dynamics, hydrologic connectivity, and climate sensitivity of spatially distributed flow path networks on arctic hillslopes.


2020 ◽  
Vol 5 (1) ◽  
Author(s):  
Misael Mongiovì ◽  
Andrea Fornaia ◽  
Emiliano Tramontana

Abstract. The availability of effective test suites is critical for the development and maintenance of reliable software systems. To increase test effectiveness, software developers tend to employ larger and larger test suites. The recent availability of software tools for automatic test generation makes building large test suites affordable, therefore contributing to accelerating this trend. However, large test suites, though more effective, are resource- and time-consuming and therefore cannot be executed frequently. Reducing them without decreasing code coverage is a needed compromise between the efficiency and effectiveness of testing, enabling a more regular check of the software under development. We propose a novel approach, namely REDUNET, to reduce a test suite while keeping the same code coverage. We integrate this approach in a complete framework for the automatic generation of efficient and effective test suites, which includes test suite generation, code coverage analysis, and test suite reduction. Our approach formulates test suite reduction as a set cover problem and applies integer linear programming and a network-based optimisation which takes advantage of the properties of the control flow graph. We find the optimal set of test cases that keeps the same code coverage in fractions of seconds on real software projects and test suites generated automatically by Randoop. The results on ten real software systems show that the proposed approach finds the optimal minimisation, achieving up to 90% reduction and more than 50% reduction on all systems under analysis. On the largest project, our reduction algorithm runs more than three times faster than both integer linear programming alone and the state-of-the-art Harrold-Gupta-Soffa heuristic.
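The set-cover view of test-suite reduction mentioned above can be sketched in a few lines of Python. The example below uses invented toy data and a simple greedy approximation rather than the ILP and network-based optimisation that REDUNET actually applies; it only illustrates the problem formulation (each test covers a set of code elements, and we seek a small subset of tests with the same total coverage).

```python
# Hypothetical sketch of test-suite reduction as set cover, solved greedily.
# REDUNET itself uses integer linear programming plus a network-based
# optimisation over the control flow graph; this is only an illustration.

coverage = {                     # toy data: test name -> covered code elements
    "t1": {"b1", "b2", "b3"},
    "t2": {"b2", "b4"},
    "t3": {"b3", "b4", "b5"},
    "t4": {"b5"},
}

def reduce_suite(coverage):
    universe = set().union(*coverage.values())   # everything the full suite covers
    uncovered, selected = set(universe), []
    while uncovered:
        # pick the test covering the most still-uncovered elements
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        selected.append(best)
        uncovered -= coverage[best]
    return selected

print(reduce_suite(coverage))    # ['t1', 't3'] keeps full coverage with fewer tests
```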


Author(s):  
Simon Bliudze ◽  
Panagiotis Katsaros ◽  
Saddek Bensalem ◽  
Martin Wirsing

Abstract. Full a posteriori verification of the correctness of modern software systems is practically infeasible due to the sheer complexity resulting from their intrinsic concurrent nature. An alternative approach consists of ensuring correctness by construction. We discuss the Rigorous System Design (RSD) approach, which relies on a sequence of semantics-preserving transformations to obtain an implementation of the system from a high-level model while preserving all the properties established along the way. In particular, we highlight some of the key requirements for the feasibility of such an approach, namely availability of (1) methods and tools for the design of correct-by-construction high-level models and (2) definition and proof of the validity of suitable domain-specific abstractions. We summarise the results of the extended versions of seven papers selected among those presented at the 1st and the 2nd International Workshops on Methods and Tools for Rigorous System Design (MeTRiD 2018–2019), indicating how they contribute to the advancement of the RSD approach.


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Mourad Badri ◽  
Fadel Toure

The aim of this paper is to evaluate empirically the relationship between a new metric (Quality Assurance Indicator, Qi) and the testability of classes in object-oriented systems. The Qi metric captures the distribution of the control flow in a system. We addressed testability from the perspective of unit testing effort. We collected data from five open source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used different metrics to quantify the corresponding JUnit test cases. Classes were classified, according to the required testing effort, into two categories: high and low. In order to evaluate the capability of the Qi metric to predict the testability of classes, we used the univariate logistic regression method. The performance of the prediction model was evaluated using Receiver Operating Characteristic (ROC) analysis. The results indicate that the univariate model based on the Qi metric is able to accurately predict the unit testing effort of classes.
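The evaluation procedure described above (a univariate logistic regression assessed with ROC analysis) can be sketched as follows. The data below is synthetic and the feature is only a stand-in for the Qi metric; the paper's actual data comes from JUnit-based effort measures on five open-source Java systems.

```python
# Hedged sketch: univariate logistic regression predicting a high/low
# testing-effort category from a single metric value, evaluated with ROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
qi = np.concatenate([rng.normal(0.3, 0.1, 100), rng.normal(0.6, 0.1, 100)])
effort = np.array([0] * 100 + [1] * 100)        # 0 = low, 1 = high testing effort

model = LogisticRegression().fit(qi.reshape(-1, 1), effort)   # univariate model
probs = model.predict_proba(qi.reshape(-1, 1))[:, 1]

auc = roc_auc_score(effort, probs)              # area under the ROC curve
fpr, tpr, _ = roc_curve(effort, probs)          # points of the ROC curve
print(f"AUC = {auc:.2f}")
```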

