Fundamental Approaches to Software Engineering - Lecture Notes in Computer Science
Latest Publications


TOTAL DOCUMENTS: 20 (five years: 20)

H-INDEX: 1 (five years: 1)

Published by: Springer International Publishing

ISBN: 9783030714994, 9783030715007

Author(s): Yixiong Chen, Yang Yang, Zhanyao Lei, Mingyuan Xia, Zhengwei Qi

Abstract: Modern RESTful services expose RESTful APIs to integrate with diverse applications. Most RESTful API parameters are weakly typed, which greatly enlarges the space of possible input values. This makes it difficult for automated testing tools to generate effective test cases that reveal web-service defects related to parameter validation. We call this phenomenon the type collapse problem. To remedy it, we introduce FET (Format-encoded Type) techniques, comprising the FET itself, the FET lattice, and FET inference, to model fine-grained information about API parameters. Enhanced by FET techniques, automated testing tools can generate targeted test cases. We demonstrate Leif, a trace-driven fuzzing tool, as a proof-of-concept implementation of the FET techniques. Experimental results on 27 commercial services show that FET inference precisely captures documented parameter definitions, which helps Leif discover 11 new bugs and reduce fuzzing time by 72% to 86% compared to state-of-the-art fuzzers.
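The core idea, inferring a format-aware type for a parameter from observed values and then generating format-targeted test inputs, can be sketched as follows. This is an illustrative toy only; the names infer_fet and generate_tests, the pattern set, and the boundary values are assumptions, not the paper's FET lattice or Leif's actual implementation.

```python
# Toy sketch of format-encoded type inference (illustrative only).
import re

FET_PATTERNS = {                      # coarse format classes, most specific first
    "date":  re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "int":   re.compile(r"^-?\d+$"),
}

def infer_fet(samples):
    """Pick the most specific format class matched by every observed value."""
    for name, pat in FET_PATTERNS.items():
        if samples and all(pat.match(s) for s in samples):
            return name
    return "string"                   # top of the toy lattice: untyped text

def generate_tests(fet):
    """Produce format-aware boundary values instead of random strings."""
    return {
        "date":   ["2021-02-29", "0000-01-01", "2021-13-01"],   # well-formed but invalid dates
        "email":  ["a@b", "@example.com", "x" * 300 + "@y.z"],
        "int":    ["-1", "0", "2147483648"],
        "string": ["", "x" * 4096],
    }[fet]

# traced values observed for a parameter -> targeted test cases
print(generate_tests(infer_fet(["2021-01-05", "2020-12-31"])))
```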


Author(s): Jyoti Prakash, Abhishek Tiwari, Christian Hammer

Abstract: Static analysis frameworks, such as Soot and Wala, are used by researchers to prototype and compare program analyses. These frameworks differ in their heap abstraction, their modeling of library classes, and their underlying intermediate representation (IR). Often, these variations pose a threat to the validity of results, as the implications of comparing the same analysis implemented in different frameworks are still unexplored. Earlier studies have focused on the precision, soundness, and recall of the algorithms implemented in these frameworks; however, little to no work has evaluated the effects of program representation. In this work, we fill this gap and study the impact of program representation on pointer analysis. Unfortunately, existing metrics are insufficient for such a comparison because they cannot isolate the individual aspects of program representation. Therefore, we define two novel metrics that measure analysis precision after isolating the influence of the class hierarchy and the intermediate representation. Our results establish that minor differences in the class hierarchy and IR do not impact program analysis significantly. Moreover, they reveal sources of unsoundness that can aid researchers in developing program analyses.
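For readers unfamiliar with what such pointer analyses compute, the following minimal Andersen-style inclusion analysis is a generic textbook sketch, not the paper's metrics or any framework's IR; it only illustrates the kind of points-to facts whose precision the two proposed metrics compare across representations.

```python
# Minimal Andersen-style points-to sketch (illustrative only).
def andersen(stmts):
    """stmts: list of ('alloc', x, site) | ('copy', x, y), meaning x = new / x = y."""
    pts = {}                                     # variable -> set of allocation sites
    changed = True
    while changed:                               # fixpoint over inclusion constraints
        changed = False
        for s in stmts:
            if s[0] == "alloc":
                tgt = pts.setdefault(s[1], set())
                if s[2] not in tgt:
                    tgt.add(s[2]); changed = True
            else:                                # copy: pts(x) must include pts(y)
                src = pts.get(s[2], set())
                tgt = pts.setdefault(s[1], set())
                if not src <= tgt:
                    tgt |= src; changed = True
    return pts

prog = [("alloc", "a", "o1"), ("copy", "b", "a"), ("alloc", "b", "o2")]
print(andersen(prog))   # {'a': {'o1'}, 'b': {'o1', 'o2'}}
```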


Author(s): Kaled M. Alshmrany, Rafael S. Menezes, Mikhail R. Gadelha, Lucas C. Cordeiro

Abstract: We describe and evaluate a novel white-box fuzzer for C programs that combines fuzzing and symbolic execution and applies Bounded Model Checking (BMC) to find security vulnerabilities. The fuzzer explores and analyzes C programs (1) to find execution paths that lead to property violations and (2) to incrementally inject labels that guide the fuzzer and the BMC engine toward producing test cases for code coverage. The tool participated successfully in Test-Comp'21, achieving a first place and a second place in two of the competition's categories.
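One ingredient mentioned above, incrementally injecting labels that serve as test goals, can be sketched at a very high level. The snippet below is a naive, hypothetical string-level instrumentation for illustration only; the actual tool instruments programs at the semantic level, not with regular expressions.

```python
# Illustrative sketch of "inject labels as test goals" (not the tool's real instrumentation).
import re

def inject_goals(c_source):
    """Naively place a GOAL(k) marker after every 'if (...) {' in a C snippet."""
    counter = 0
    def add_goal(match):
        nonlocal counter
        counter += 1
        return match.group(0) + f" GOAL({counter});"
    return re.sub(r"if\s*\([^)]*\)\s*\{", add_goal, c_source), counter

src = "int f(int x) { if (x > 0) { return 1; } if (x < -10) { return 2; } return 0; }"
instrumented, n_goals = inject_goals(src)
print(n_goals, "goals")      # each goal is a target for the fuzzer / BMC engine
print(instrumented)
```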


Author(s): Christian Lidström, Dilian Gurov

Abstract: When developing complex software and systems, contracts provide a means for controlling complexity by dividing responsibilities among the components of the system in a hierarchical fashion. In specific application areas, dedicated contract theories formalise the notion of contract and the operations on contracts in a manner that best supports the development of systems in that area. At the other end, contract meta-theories attempt to provide a systematic view of the various contract theories by axiomatising their desired properties. However, there is a noticeable gap between the most well-known contract meta-theory of Benveniste et al. [5], which focuses on the design of embedded and cyber-physical systems, and the established way of using contracts when developing general software, following Meyer's design-by-contract methodology [18]. At the core of this gap lies the notion of procedure: while it is a central unit of composition in software development, the meta-theory does not suggest an obvious way of treating procedures as components.

In this paper, we provide a first step towards a contract theory that takes procedures as the basic building block and is at the same time an instantiation of the meta-theory. To this end, we propose an abstract contract theory for sequential programming languages with procedures, based on denotational semantics. We show that, on the one hand, the specification of procedure contracts in Hoare logic, and their procedure-modular verification, can be cast naturally in the framework of our abstract contract theory. On the other hand, we also show that our contract theory fulfils the axioms of the meta-theory. In this way, we give further evidence for the utility of the meta-theory and prepare the ground for combining our instantiation with other, already existing instantiations.
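As a rough illustration of how a procedure contract in Hoare logic relates to a denotational reading, one might write the following; this notation and the sqrt example are illustrative and do not reproduce the paper's precise definitions.

```latex
% Illustrative notation only: a procedure contract as a Hoare triple and its
% denotational reading as a constraint on the procedure's state-pair relation.
\[
  \{P\}\; \mathit{proc}\; \{Q\}
  \quad\text{read as}\quad
  \llbracket \mathit{proc} \rrbracket \subseteq
  \{(\sigma,\sigma') \mid \sigma \models P \implies \sigma' \models Q\}
\]
\[
  \text{e.g.}\quad \{x \ge 0\}\; \mathtt{sqrt}\; \{r^2 \le x < (r+1)^2\}
\]
```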


Author(s): Jan Haltermann, Heike Wehrheim

Abstract: Software verification has recently made enormous progress due to the development of novel verification methods and the speed-up of supporting technologies like SMT solving. To keep software verification tools up to date with these advances, tool developers keep integrating newly designed methods into their tools, almost exclusively by re-implementing the method within their own framework. While this allows for a conceptual re-use of methods, it nevertheless requires a new implementation for every new technique.

In this paper, we employ cooperative verification in order to avoid re-implementation and to enable the use of novel tools as black-box components in verification. Specifically, cooperation is employed for a core ingredient of software verification: invariant generation. Finding an adequate loop invariant is key to the success of a verification run. Our framework, named CoVEGI, allows a master verification tool to delegate the task of invariant generation to one or several specialized helper invariant generators. Their results are then utilized within the verification run of the master verifier, allowing, in particular, for cross-checking the validity of the invariant. We experimentally evaluate our framework on an instance with two masters and three different invariant generators, using a number of benchmarks from SV-COMP 2020. The experiments show that the use of CoVEGI can increase the number of correctly verified tasks without increasing resource usage.
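The cooperation scheme described above can be summarized by a small control loop. The sketch below is a simplification under assumed interfaces; StubMaster, check_invariant, and the witness format are placeholders, not CoVEGI's actual API or exchange format.

```python
# Conceptual sketch of delegating invariant generation to helper tools.
class StubMaster:
    """Placeholder standing in for a real master verifier."""
    def check_invariant(self, program, inv):     # crosscheck the candidate invariant
        return inv is not None
    def verify(self, program, invariants=None):
        return "TRUE" if invariants else "UNKNOWN"

def cooperative_verify(program, master, helpers):
    for generate in helpers:                     # delegate invariant generation
        candidate = generate(program)            # e.g. a loop-invariant witness
        if candidate and master.check_invariant(program, candidate):
            verdict = master.verify(program, invariants=candidate)
            if verdict in ("TRUE", "FALSE"):
                return verdict
    return master.verify(program)                # fall back to the master alone

helpers = [lambda p: None, lambda p: {"loop@12": "i <= n"}]    # toy helpers
print(cooperative_verify("example.c", StubMaster(), helpers))  # -> TRUE
```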


Author(s): Aleksandar S. Dimovski, Sven Apel, Axel Legay

Abstract: Lifted (family-based) static analysis by abstract interpretation is capable of analyzing all variants of a program family simultaneously, in a single run, without generating any of the variants explicitly. The elements of the underlying lifted analysis domain are tuples, which maintain one property per variant. Still, enumerating properties in tuples explicitly, one by one for all variants, immediately yields combinatorial explosion. This is particularly apparent for program families that, apart from Boolean features, also contain numerical features with large domains, giving rise to astronomical configuration spaces.

The key to an efficient lifted analysis is proper handling of the variability-specific constructs of the language (e.g., feature-based runtime tests and #if directives). In this work, we introduce a new symbolic representation of the lifted abstract domain that can efficiently analyze program families with numerical features. It makes sharing between property elements that correspond to different variants explicitly possible. The elements of the new lifted domain are constraint-based decision trees, where decision nodes are labeled with linear constraints defined over numerical features and the leaf nodes belong to an existing single-program analysis domain. To illustrate the potential of this representation, we have implemented an experimental lifted static analyzer, called SPLNum²Analyzer, for inferring invariants of C programs. An empirical evaluation on BusyBox and on benchmarks from SV-COMP yields promising preliminary results, indicating that our decision-tree-based approach is effective and outperforms the baseline tuple-based approach.
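A constraint-based decision tree as described above can be pictured with a small data structure: decision nodes carry linear constraints over numerical features and leaves carry a single-program property. The Python sketch below uses intervals as leaf properties and a string constraint format purely for illustration; it is not SPLNum²Analyzer's implementation.

```python
# Simplified sketch of a constraint-based decision tree as a lifted-domain element.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    low: float           # interval property for the variants reaching this leaf
    high: float

@dataclass
class Node:
    constraint: str       # e.g. "SIZE <= 8", a linear constraint over a numerical feature
    then: "Tree"
    else_: "Tree"

Tree = Union[Leaf, Node]

def query(tree: Tree, features: dict) -> Leaf:
    """Walk the tree for one concrete variant (one feature assignment)."""
    while isinstance(tree, Node):
        var, op, k = tree.constraint.split()
        holds = features[var] <= int(k) if op == "<=" else features[var] > int(k)
        tree = tree.then if holds else tree.else_
    return tree

# one shared tree stands for the whole family instead of one tuple entry per variant
tree = Node("SIZE <= 8", Leaf(0, 8), Leaf(0, 1024))
print(query(tree, {"SIZE": 4}))    # Leaf(low=0, high=8)
```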


Author(s): Ziyuan Zhong, Yuchi Tian, Baishakhi Ray

Abstract: Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention.

While DNN robustness under norm-bounded perturbation has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g., a rotated or a rainy version of the original input, are especially concerning as they can occur naturally in the field without any active adversary and may lead to undesirable consequences. Thus, it is important to identify the inputs whose small variations may lead to erroneous DNN behavior. The very few studies that have looked at DNN robustness under natural variants, however, focus on estimating the overall robustness of DNNs across all the test data rather than localizing such error-producing points. This work aims to bridge this gap.

To this end, we study the local per-input robustness properties of DNNs and leverage those properties to build a white-box (DeepRobust-W) and a black-box (DeepRobust-B) tool to automatically identify non-robust points. Our evaluation of these methods on three DNN models spanning three widely used image classification datasets shows that they are effective in flagging points of poor robustness. In particular, DeepRobust-W and DeepRobust-B achieve F1 scores of up to 91.4% and 99.1%, respectively. We further show that DeepRobust-W can be applied to a regression problem in a domain beyond image classification. Our evaluation on three self-driving car models demonstrates that DeepRobust-W is effective in identifying points of poor robustness, with an F1 score of up to 78.9%.
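The flagging criterion can be conveyed with a minimal sketch: a test point is reported as non-robust if any of its natural variants is classified differently from the original. The snippet below uses brightness shifts as a stand-in for rotation or rain and a trivial stub classifier; the actual DeepRobust-W/B methods are not reproduced.

```python
# Minimal sketch of flagging non-robust points via natural variants (illustrative only).
import numpy as np

def natural_variants(image, deltas=(-0.2, -0.1, 0.1, 0.2)):
    """Hypothetical variant generator: brightness shifts, clipped to [0, 1]."""
    return [np.clip(image + d, 0.0, 1.0) for d in deltas]

def is_non_robust(model_predict, image):
    """Flag the point if any variant is classified differently from the original."""
    original = model_predict(image)
    return any(model_predict(v) != original for v in natural_variants(image))

# toy stand-in for a trained classifier: thresholds the mean pixel value
model_predict = lambda img: int(img.mean() > 0.5)
x = np.full((28, 28), 0.45)             # a point close to the decision boundary
print(is_non_robust(model_predict, x))  # True: a small variant flips the label
```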


Author(s): Yong-Jun Shin, Eunho Cho, Doo-Hwan Bae

Abstract: Proactive adaptation, in which adaptation for a system's reliable goal achievement is performed by predicting changes in the environment, is considered an effective alternative to reactive adaptation, in which adaptation is performed only after changes have been observed. Such predictions of environmental change may be uncertain, so it is necessary to verify and confirm an adaptation's consequences before execution. To resolve this uncertainty, probabilistic model checking (PMC) has been utilized to verify the effects of adaptation tactics on the goal of a self-adaptive system (SAS). However, PMC-based approaches suffer from the state-explosion problem when verifying complex SAS models and are limited to the modeling languages supported by the model checkers. In this paper, to overcome these limitations, we propose an efficient Proactive Adaptation approach based on STAtistical model checking (PASTA). Our approach allows SASs to mitigate the uncertainty of the future environment, faster than the PMC-based approach, by producing statistically sufficient samples for the verification of adaptation tactics based on statistical model checking (SMC) algorithms. We provide algorithmic processes, a reference architecture, and an open-source implementation skeleton of PASTA for engineers to apply it to SAS development. We evaluate PASTA on two SASs using real data and show that PASTA is efficient compared to the PMC-based approach. We also provide a comparative analysis of the advantages and disadvantages of PMC- and SMC-based proactive adaptation to guide engineers' decision-making for SAS development.
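At its core, SMC-based tactic verification estimates goal-satisfaction probabilities from sampled futures rather than from an exhaustive state-space exploration. The sketch below is a bare Monte-Carlo illustration under a hypothetical simulate stub; PASTA's actual algorithms, sample-size criteria, and architecture are not reproduced.

```python
# Back-of-the-envelope sketch of SMC-based adaptation-tactic selection.
import random

def estimate_goal_probability(simulate_tactic, tactic, n_samples=400):
    """Monte-Carlo estimate of P(goal satisfied) under the predicted environment."""
    hits = sum(simulate_tactic(tactic, random.random()) for _ in range(n_samples))
    return hits / n_samples

def choose_tactic(simulate_tactic, tactics):
    """Pick the tactic with the highest estimated probability of goal satisfaction."""
    return max(tactics, key=lambda t: estimate_goal_probability(simulate_tactic, t))

# toy simulator: 'slow_down' succeeds in 90% of sampled futures, 'keep' in 60%
simulate = lambda tactic, u: u < (0.9 if tactic == "slow_down" else 0.6)
print(choose_tactic(simulate, ["keep", "slow_down"]))   # usually 'slow_down'
```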


Author(s): Lucas Sakizloglou, Matthias Barkowsky, Holger Giese

Abstract: Structural runtime models provide a snapshot of the constituents of a system and their state. Capturing the history of runtime models, i.e., previous snapshots, has been shown to be useful for a number of purposes. Handling history at runtime, however, poses important challenges for tool support. We present the InTempo tool, which is based on the Eclipse Modeling Framework and encodes runtime models as graphs. Key features of InTempo, such as the integration of temporal requirements into graph queries, in-memory storage of the model, and a systematic method for containing the model's memory consumption, are intended to address issues that limit the available tool support. InTempo offers two operation modes, which support runtime and postmortem application scenarios, respectively.
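A temporal query over a runtime-model history can be illustrated with a toy snapshot store and a "held continuously over the last window" check. The sketch below is not InTempo's EMF-based graph query mechanism; the data layout and the operator are illustrative assumptions.

```python
# Toy sketch of querying a history of timestamped runtime-model snapshots.
def held_continuously(history, condition, now, window):
    """history: list of (timestamp, snapshot), oldest first; checks the last `window` time units."""
    relevant = [(t, s) for t, s in history if t >= now - window]
    return bool(relevant) and all(condition(s) for _, s in relevant)

history = [
    (0, {"pump": "on", "pressure": 3}),
    (5, {"pump": "on", "pressure": 7}),
    (9, {"pump": "on", "pressure": 8}),
]
# has the pressure stayed above 2 during the last 10 time units?
print(held_continuously(history, lambda s: s["pressure"] > 2, now=10, window=10))
```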


Author(s): Maria Maximova, Sven Schneider, Holger Giese

Abstract: The analysis of behavioral models is of high importance for cyber-physical systems, as such systems often encompass complex behavior based on, e.g., concurrent components with mutual exclusion or probabilistic failures on demand. The rule-based formalism of probabilistic timed graph transformation systems (PTGTSs) is a suitable choice when the states of the system can be understood as graphs and both timed and probabilistic behavior are important. However, model checking PTGTSs is limited to systems with rather small state spaces.

We present an approach for the analysis of large-scale systems modeled as probabilistic timed graph transformation systems by systematically decomposing their state spaces into manageable fragments. To obtain qualitative and quantitative analysis results for a large-scale system, we verify that results obtained for its fragments serve as overapproximations of the corresponding results for the large-scale system. Hence, our approach allows for the detection of violations of qualitative and quantitative safety properties for the large-scale system under analysis. We consider a running example in which we model shuttles driving on the tracks of a large-scale topology and for which we verify that shuttles never collide and are unlikely to execute emergency brakes. In our evaluation, we apply an implementation of our approach to this running example.
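The decomposition idea can be caricatured as checking a quantitative bound per fragment and taking the worst fragment result as a bound for the whole system. The sketch below is only a schematic illustration of that overapproximation argument, with made-up numbers; it is not the paper's PTGTS machinery.

```python
# Highly simplified sketch of fragment-wise checking with a worst-case bound.
def fragment_results(check_fragment, fragments):
    """check_fragment returns a fragment's worst-case probability of a bad event."""
    return {f: check_fragment(f) for f in fragments}

def system_bound(results):
    """Worst fragment result, used here as an assumed overapproximation for the composed system."""
    return max(results.values())

# toy numbers: per-track-fragment probability of an emergency brake
results = fragment_results(lambda f: {"t1": 0.002, "t2": 0.004}[f], ["t1", "t2"])
print(system_bound(results) <= 0.005)   # True: the bound holds for every fragment
```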

