Leveraging Textual Specifications for Grammar-Based Fuzzing of Network Protocols

Author(s):  
Samuel Jero ◽  
Maria Leonor Pacheco ◽  
Dan Goldwasser ◽  
Cristina Nita-Rotaru

Grammar-based fuzzing is a technique used to find software vulnerabilities by injecting well-formed inputs generated following rules that encode application semantics. Most grammar-based fuzzers for network protocols rely on human experts to manually specify these rules. In this work, we study the automated learning of protocol rules from textual specifications (i.e., RFCs). We evaluate the automatically extracted protocol rules by applying them to a state-of-the-art fuzzer for transport protocols and show that they lead to a smaller number of test cases while finding the same attacks as the system that uses manually specified rules.
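
The core mechanism the abstract describes, generating well-formed inputs from production rules, can be sketched in a few lines. The grammar below is a made-up toy protocol for illustration, not one of the rules extracted from an RFC:

```python
import random

# A toy grammar for a hypothetical text protocol; the rule names and
# messages are illustrative, not the paper's extracted transport rules.
GRAMMAR = {
    "<msg>": [["<verb>", " ", "<resource>", "\r\n"]],
    "<verb>": [["GET"], ["PUT"], ["DELETE"]],
    "<resource>": [["/status"], ["/data/", "<digit>"]],
    "<digit>": [[str(d)] for d in range(10)],
}

def generate(symbol="<msg>"):
    """Expand a non-terminal by recursively choosing random productions."""
    if symbol not in GRAMMAR:
        return symbol                      # terminal: emit as-is
    production = random.choice(GRAMMAR[symbol])
    return "".join(generate(part) for part in production)

random.seed(0)
for _ in range(3):
    print(repr(generate()))
```

Every message produced this way is syntactically valid by construction, which is what lets a grammar-based fuzzer probe semantic handling rather than input parsing.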

Author(s):  
Jaymie Strecker ◽  
Atif M. Memon

This chapter describes the state of the art in testing GUI-based software. Traditionally, GUI testing has been performed manually or semi-manually, with the aid of capture-replay tools. Since this process may be too slow and ineffective to meet the demands of today’s developers and users, recent research in GUI testing has pushed toward automation. Model-based approaches are being used to generate and execute test cases, implement test oracles, and perform regression testing of GUIs automatically. This chapter shows how research to date has addressed the difficulties of testing GUIs in today’s rapidly evolving technological world, and it points to the many challenges that lie ahead.


2019 ◽  
Vol 3 (4) ◽  
pp. 382-396 ◽  
Author(s):  
Ioannis Karageorgos ◽  
Mehmet M. Isgenc ◽  
Samuel Pagliarini ◽  
Larry Pileggi

Abstract. In today’s globalized integrated circuit (IC) ecosystem, untrusted foundries are often procured to build critical systems since they offer state-of-the-art silicon with the best performance available. On the other hand, ICs that originate from trusted fabrication cannot match the same performance level, since trusted fabrication is often available only on legacy nodes. Split-Chip is a dual-IC approach that leverages the performance of an untrusted IC and combines it with the guarantees of a trusted IC. In this paper, we provide a framework for chip-to-chip authentication that can further improve a Split-Chip system by protecting it from attacks that are unique to Split-Chip. A hardware implementation that utilizes an SRAM-based PUF as an identifier and public key cryptography for the handshake is discussed. Circuit characteristics are provided, where the trusted IC is designed in a 28-nm CMOS technology and the untrusted IC in a likewise commercial 16-nm CMOS technology. Most importantly, our solution does not require a processor for performing any of the handshake or cryptography tasks and is therefore not susceptible to software vulnerabilities and exploits.
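
The chip-to-chip authentication flow can be illustrated with a single challenge-response round. Note the heavy simplifications: a shared HMAC key stands in for both the SRAM-PUF-derived identity and the paper's public-key handshake, and all names are invented:

```python
import hashlib
import hmac
import secrets

# Simplified chip-to-chip challenge-response. The HMAC key stands in for
# an SRAM-PUF-derived secret, and the MAC check stands in for the paper's
# public-key handshake; this is a software sketch, not the hardware design.

class Chip:
    def __init__(self, puf_key):
        self._key = puf_key               # would come from the SRAM PUF

    def respond(self, challenge):
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def authenticate(enrolled_key, chip):
    challenge = secrets.token_bytes(16)   # fresh nonce defeats replay
    expected = hmac.new(enrolled_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, chip.respond(challenge))

key = secrets.token_bytes(32)
enrolled = key                            # recorded at a trusted facility
print(authenticate(enrolled, Chip(key)))           # genuine chip
print(authenticate(enrolled, Chip(b"clone" * 7)))  # impostor with wrong key
```

The fresh nonce per handshake is what prevents a captured response from being replayed, which is the property the paper's hardware handshake also relies on.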


2015 ◽  
Vol 8 (2) ◽  
pp. 205-220
Author(s):  
A. Praga ◽  
D. Cariolle ◽  
L. Giraud

Abstract. To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric advection model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach for advection was chosen to ensure mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude–longitude grid, Pangolin uses a quasi-area-preserving reduced latitude–longitude grid. The features of the regular grid are exploited to reduce the memory footprint and enable efficient parallel performance. In addition, a custom domain decomposition algorithm is presented. To assess the validity of the advection scheme, its results are compared with state-of-the-art models on algebraic test cases. Finally, parallel performance is reported in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
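
The mass-preservation property of flux-form schemes is easy to demonstrate in one dimension. The sketch below is a first-order upwind finite-volume scheme on a periodic 1-D domain, not Pangolin's 2-D reduced-grid algorithm:

```python
# 1-D first-order upwind finite-volume advection on a periodic domain.
# A minimal illustration of mass preservation in flux-form schemes.
N = 100
u = 1.0                      # constant wind speed
dx = 1.0 / N
dt = 0.5 * dx / u            # CFL number 0.5: stable for upwind

q = [1.0 if 0.25 <= i / N < 0.5 else 0.0 for i in range(N)]  # tracer field

def step(q):
    # flux[i] is the flux through the *left* face of cell i (upwind, u > 0)
    flux = [u * q[i - 1] for i in range(N)]          # i-1 wraps periodically
    return [q[i] - dt / dx * (flux[(i + 1) % N] - flux[i]) for i in range(N)]

mass0 = sum(q) * dx
for _ in range(200):
    q = step(q)
# Each face flux is added to one cell and subtracted from its neighbour,
# so the fluxes telescope and total mass is conserved to round-off.
```

The same telescoping argument carries over to 2-D flux-form schemes such as Pangolin's, which is why the finite-volume formulation guarantees mass preservation.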


Author(s):  
Yixiong Chen ◽  
Yang Yang ◽  
Zhanyao Lei ◽  
Mingyuan Xia ◽  
Zhengwei Qi

Abstract. Modern RESTful services expose RESTful APIs to integrate with diversified applications. Most RESTful API parameters are weakly typed, which greatly increases the possible input value space. This poses difficulties for automated testing tools to generate effective test cases that reveal web service defects related to parameter validation. We call this phenomenon the type collapse problem. To remedy this problem, we introduce FET (Format-encoded Type) techniques, including the FET, the FET lattice, and FET inference, to model fine-grained information for API parameters. Enhanced by FET techniques, automated testing tools can generate targeted test cases. We demonstrate Leif, a trace-driven fuzzing tool, as a proof-of-concept implementation of FET techniques. Experiment results on 27 commercial services show that FET inference precisely captures documented parameter definitions, which helps Leif discover 11 new bugs and reduce fuzzing time by 72%–86% compared to state-of-the-art fuzzers.
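
The idea behind FET inference, finding the most specific format shared by all observed values of a parameter, can be approximated with an ordered list of regular expressions. The patterns below are illustrative stand-ins, not the paper's actual FET lattice:

```python
import re

# A toy, ordered set of format patterns, most specific first. These are
# simplified stand-ins for the paper's FET lattice, not its actual types.
PATTERNS = [
    ("ipv4",   re.compile(r"^(\d{1,3}\.){3}\d{1,3}$")),
    ("date",   re.compile(r"^\d{4}-\d{2}-\d{2}$")),
    ("int",    re.compile(r"^-?\d+$")),
    ("string", re.compile(r"^.*$")),      # top of the lattice: matches anything
]

def infer_fet(observed_values):
    """Return the most specific pattern matching every observed value."""
    for name, pattern in PATTERNS:
        if all(pattern.match(v) for v in observed_values):
            return name
    return "string"

print(infer_fet(["2021-03-01", "2020-12-31"]))  # date
print(infer_fet(["42", "2020-12-31"]))          # string (no common format)
```

A fuzzer armed with the inferred type can then mutate values within and just outside that format, instead of sampling the whole untyped string space.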


Author(s):  
Roman M. Janssen ◽  
Henk Jansen ◽  
Jan-Willem van Wingerden

A novel frequency domain identification (FDI) strategy for the identification of radiation force models from frequency domain hydrodynamic data is proposed. First, a subspace identification method is augmented with a convex constraint that guarantees a stable solution. Then, in a second convex optimization problem, constraints on low- and high-frequency asymptotic behavior and passivity are enforced. This novel method, constrained frequency domain subspace identification (CFDSI), is validated by comparing both SISO and MIMO CFDSI results with the state-of-the-art FDI toolbox, which is part of the Marine Systems Simulator MATLAB toolbox. In two test cases, it is shown that the novel algorithm can successfully identify a model with either a SISO or a MIMO structure, where stability, passivity, and the desired low- and high-frequency asymptotic behavior are guaranteed. For the two test cases presented, the quality of the CFDSI models matches that of the state-of-the-art FDI models.


Author(s):  
Peter E. Klauser

The friction wedge is a critical component in the three-piece truck. This paper describes the current approach for modeling friction wedges and compares its implementation in the commercially available NUCARS™ and VAMPIRE® vehicle dynamics codes. NUCARS™ is a software package developed by Transportation Technology Center, Inc., while VAMPIRE® is a package developed by AEA Technology plc. Sample results from both codes are presented based on standalone test cases. Shortcomings of the “state-of-the-art” model are described and directions for future work are proposed.


Author(s):  
Leonardo Lamanna ◽  
Alessandro Saetti ◽  
Luciano Serafini ◽  
Alfonso Gerevini ◽  
Paolo Traverso

The automated learning of action models is widely recognised as a key and compelling challenge to address the difficulties of the manual specification of planning domains. Most state-of-the-art methods perform this learning offline from an input set of plan traces generated by the execution of (successful) plans. However, how to generate informative plan traces for learning action models is still an open issue. Moreover, plan traces might not be available for a new environment. In this paper, we propose an algorithm for learning action models online, incrementally during the execution of plans. Such plans are generated to achieve goals that the algorithm decides online in order to obtain informative plan traces and reach states from which useful information can be learned. We show some fundamental theoretical properties of the algorithm, and we experimentally evaluate the online learning of the action models over a large set of IPC domains.
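
The incremental flavour of such online learning can be sketched for STRIPS-style models: preconditions monotonically shrink and effects monotonically grow as successful executions are observed. The domain and predicate names below are invented:

```python
# Incremental (online) refinement of a STRIPS-style action model from
# observed transitions, in the spirit of the approach; the domain is made up.

class ActionModel:
    def __init__(self):
        self.preconditions = None     # unknown until the first observation
        self.add_effects = set()
        self.del_effects = set()

    def observe(self, state_before, state_after):
        """Refine the model with one successful execution of the action."""
        # Preconditions can only shrink: intersect with what held before.
        if self.preconditions is None:
            self.preconditions = set(state_before)
        else:
            self.preconditions &= state_before
        # Effects only grow as newly added/deleted facts are witnessed.
        self.add_effects |= state_after - state_before
        self.del_effects |= state_before - state_after

model = ActionModel()
model.observe({"at_a", "has_key"}, {"at_b", "has_key"})
model.observe({"at_a", "door_open"}, {"at_b", "door_open"})
print(model.preconditions)  # {'at_a'}
print(model.add_effects)    # {'at_b'}
```

Choosing goals that reach states where these sets are still uncertain is what makes the generated plan traces informative, which is the core of the online strategy described above.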


Author(s):  
David G. Gregory-Smith

The ERCOFTAC (European Research Community on Flow, Turbulence and Combustion) Seminar and Workshop was held with the aim of sharing information between academic and industrial organisations on the state of the art of 3D flow calculations for turbomachines. An important objective was the educational element for both established workers and new researchers in the area. The philosophy was one of openness and sharing of both successes and problems. Four test cases were selected from the open literature, covering a range of turbomachinery configurations. Five review lectures were given to provide a background for the discussion of the computational results. The workshop sessions indicated the importance of ensuring numerical accuracy, the need for future work particularly in turbulence and transition modelling, and the possibilities of adaptive and unstructured grids. Industrial participation was rather low; the problems an organisation faces in allocating resources for this sort of exercise are recognised and should be carefully considered for any future similar event.


Algorithms ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 287
Author(s):  
Naresh Patnana ◽  
Swapnajit Pattnaik ◽  
Tarun Varshney ◽  
Vinay Pratap Singh

In this investigation, self-learning salp swarm optimization (SLSSO) based proportional-integral-derivative (PID) controllers are proposed for the Doha reverse osmosis desalination plant. Since the Doha reverse osmosis plant (DROP) is an interacting two-input-two-output (TITO) system, a decoupler is designed to nullify the interaction dynamics. Once the decoupler is designed, two PID controllers are tuned for the two non-interacting loops by minimizing the integral-square-error (ISE). The ISEs for the two loops are expressed in terms of alpha and beta parameters to simplify the simulation, and are then minimized using the SLSSO algorithm. To show the effectiveness of the proposed algorithm, the controller tuning is also accomplished with several state-of-the-art algorithms, and a statistical analysis is presented to confirm the advantage of SLSSO. In addition, time-domain specifications are reported for different test cases, and step responses are shown for fixed and variable reference inputs on the two loops. The quantitative and qualitative results demonstrate the effectiveness of SLSSO for the DROP system.
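
The ISE criterion being minimized can be illustrated with a scalar closed loop: simulate the PID-controlled plant over a step reference and integrate the squared error. The first-order plant and the gains below are hypothetical, not the DROP loop models:

```python
# Closed-loop simulation of a PID controller on a first-order plant,
# scoring the gains by the integral-square-error (ISE). The plant and
# gains are illustrative; they are not the DROP's decoupled loop models.

def ise(kp, ki, kd, t_end=10.0, dt=0.001):
    y, integ, prev_err = 0.0, 0.0, 1.0
    total = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                       # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                  # plant: dy/dt = -y + u
        total += err * err * dt             # accumulate the ISE
        prev_err = err
    return total

# A tuned controller should achieve a far lower ISE than open loop.
print(ise(5.0, 2.0, 0.1), ise(0.0, 0.0, 0.0))
```

An optimizer such as SLSSO treats `ise(kp, ki, kd)` as the objective and searches the gain space for its minimum; here the dependence on the alpha/beta parametrization of the loops is omitted.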


Symmetry ◽  
2019 ◽  
Vol 11 (11) ◽  
pp. 1400 ◽  
Author(s):  
A. D. Shrivathsan ◽  
K. S. Ravichandran ◽  
R. Krishankumar ◽  
V. Sangeetha ◽  
Samarjit Kar ◽  
...  

Systematic regression testing is essential for maintaining software quality, but its cost is high. Test case prioritization (TCP) is a widely used approach to reduce this cost. Many researchers have proposed regression test case prioritization techniques, and clustering is one of the popular methods for prioritization. The task of selecting appropriate test cases and identifying faulty functions involves ambiguities and uncertainties. To alleviate this issue, two fuzzy-based clustering techniques for TCP are proposed in this paper, using a newly derived similarity coefficient and dominancy measure. The proposed techniques adopt grouping technology for clustering and the Weighted Arithmetic Sum Product Assessment (WASPAS) method for ranking. Initially, test cases are clustered using the similarity/dominancy measures and are later prioritized with the WASPAS method under both inter- and intra-cluster perspectives. The proposed algorithms are evaluated on real-world data obtained from the Software-artifact Infrastructure Repository (SIR). The evaluation shows that the proposed algorithms increase the likelihood of selecting more relevant test cases compared to recent state-of-the-art techniques. Finally, the strengths of the proposed algorithms are discussed in comparison with state-of-the-art techniques.
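
The cluster-then-rank pipeline can be sketched with a plain Jaccard coefficient in place of the paper's derived similarity/dominancy measures, and a greedy representative pass in place of WASPAS; all test and function names are invented:

```python
# Group test cases by coverage similarity, then prioritize one
# representative per cluster first. Jaccard similarity is a plain
# stand-in for the paper's similarity/dominancy measures, and the
# greedy ranking pass is a stand-in for WASPAS.

TESTS = {                       # test case -> set of covered functions
    "t1": {"f1", "f2"},
    "t2": {"f1", "f2", "f3"},
    "t3": {"f7", "f8"},
    "t4": {"f8"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(tests, threshold=0.4):
    clusters = []
    for name, cov in tests.items():
        for c in clusters:       # join the first sufficiently similar cluster
            if any(jaccard(cov, tests[m]) >= threshold for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

def prioritize(tests, clusters):
    # Inter-cluster pass: run the widest-covering test of each cluster first,
    # then the remaining tests (the intra-cluster pass).
    reps = [max(c, key=lambda n: len(tests[n])) for c in clusters]
    rest = [n for n in tests if n not in reps]
    return reps + rest

cl = cluster(TESTS)
print(cl)                     # [['t1', 't2'], ['t3', 't4']]
print(prioritize(TESTS, cl))  # ['t2', 't3', 't1', 't4']
```

Running diverse clusters early is what raises the chance of hitting distinct faults sooner, which is the rationale behind cluster-based prioritization.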

