On the Use of Model Checking for the Bounded and Unbounded Verification of Nonblocking Concurrent Data Structures

2021
Author(s):
David Friggens

Concurrent data structure algorithms have traditionally been designed using locks to regulate the behaviour of interacting threads, thus restricting access to parts of the shared memory to only one thread at a time. Since locks can lead to issues of performance and scalability, there has been interest in designing so-called nonblocking algorithms that do not use locks. However, designing and reasoning about concurrent systems is difficult, and even more so for nonblocking systems, as evidenced by the number of incorrect algorithms in the literature.

This thesis explores how the technique of model checking can aid the testing and verification of nonblocking data structure algorithms. Model checking is an automated verification method for finite state systems, and is able to produce counterexamples when verification fails. For verification, concurrent data structures are considered to be infinite state systems, as there is no bound on the number of interacting threads, the number of elements in the data structure, or the number of possible distinct data values. Thus, in order to analyse concurrent data structures with model checking, we must either place finite bounds upon them, or employ an abstraction technique that constructs a finite system with the same properties.

First, we discuss how nonblocking data structures can best be represented for model checking, and how to specify the properties we are interested in verifying. These properties are the safety property linearisability, and the progress properties wait-freedom, lock-freedom and obstruction-freedom.

Second, we investigate using model checking for exhaustive testing, by verifying bounded (and hence finite state) instances of nonblocking data structures, parameterised by the number of threads, the number of distinct data values, and the size of storage memory (e.g. array length, or maximum number of linked list nodes). It is widely held, based on anecdotal evidence, that most bugs occur in small instances. We investigate the smallest bounds needed to falsify a number of incorrect algorithms, which supports this hypothesis. We also investigate verifying a number of correct algorithms for a range of bounds. If an algorithm can be verified for bounds significantly higher than the minimum bounds needed for falsification, then we argue this provides a high degree of confidence in the general correctness of the algorithm. However, with the available hardware we were not able to verify any of the algorithms to high enough bounds to claim such confidence.

Third, we investigate using model checking to verify nonblocking data structures by employing the technique of canonical abstraction to construct finite state representations of the unbounded algorithms. Canonical abstraction represents abstract states as 3-valued logical structures, and allows the initial coarse abstraction to be refined as necessary by adding derived predicates. We introduce several novel derived predicates and show how these allow linearisability to be verified for linked-list-based nonblocking stack and queue algorithms. This is achieved within the standard canonical abstraction framework, in contrast to recent approaches that have added extra abstraction techniques on top to achieve the same goal. The finite state systems we construct using canonical abstraction are still relatively large, being exponential in the number of distinct abstract thread objects. We present an alternative application of canonical abstraction, which more coarsely collapses all threads in a state to be represented by a single abstract thread object. In addition, we define further novel derived predicates, and show that these allow linearisability to be verified for the same stack and queue algorithms far more efficiently.
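To make the kind of algorithm under discussion concrete, here is a minimal Java sketch (not taken from the thesis) of a Treiber-style lock-free stack; the class and method names are illustrative. Each operation retries a compare-and-swap until it succeeds, and the successful CAS is the natural candidate linearisation point.

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal Treiber-style lock-free stack: the kind of nonblocking,
// linked-list-based algorithm discussed above. Each operation takes effect
// atomically at a successful compareAndSet; a failed CAS simply retries,
// so no thread ever blocks on a lock.
final class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>(null);

    public void push(T value) {
        while (true) {
            Node<T> oldTop = top.get();
            Node<T> newTop = new Node<>(value, oldTop);
            if (top.compareAndSet(oldTop, newTop)) {
                return;                       // linearisation point: successful CAS
            }                                 // another thread interfered; retry
        }
    }

    public T pop() {
        while (true) {
            Node<T> oldTop = top.get();
            if (oldTop == null) {
                return null;                  // empty stack
            }
            if (top.compareAndSet(oldTop, oldTop.next)) {
                return oldTop.value;          // linearisation point: successful CAS
            }
        }
    }
}
```

Note that this stack is lock-free but not wait-free: a single thread can starve if its CAS keeps failing under contention, which is exactly the kind of distinction the progress properties above are meant to capture.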



Author(s):  
Tzu-Han Hsu
César Sánchez
Borzoo Bonakdarpour

This paper introduces a bounded model checking (BMC) algorithm for hyperproperties expressed in HyperLTL, which, to the best of our knowledge, is the first such algorithm. Just as the classic BMC technique for LTL primarily aims at finding bugs, our approach also targets identifying counterexamples. BMC for LTL reduces to SAT solving, because LTL describes a property by inspecting individual traces; our BMC approach naturally reduces to QBF solving, as HyperLTL allows explicit and simultaneous quantification over multiple traces. We report on successful and efficient model checking, implemented in our tool, of a rich set of experiments on a variety of case studies, including security, concurrent data structures, path planning for robots, and mutation testing.
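As an illustration of what quantifying over multiple traces means, the hypothetical Java sketch below checks a simple two-trace hyperproperty (observational determinism) on a toy system by brute-force enumeration of all bounded traces. The names (BoundedHyperCheck, K, trace) are invented for this sketch, and the paper's algorithm instead encodes the bounded unrolling symbolically and dispatches it to a QBF solver.

```java
import java.util.ArrayList;
import java.util.List;

// Bounded checking of a two-trace hyperproperty (observational determinism)
// by brute-force enumeration of all traces of length K. This is only a toy
// illustration of quantifying over pairs of traces, not the QBF reduction
// used by the paper.
public class BoundedHyperCheck {
    static final int K = 5;   // bound on trace length (hypothetical)

    // A toy system: the secret chooses one of two step functions; only
    // (state % 2) is observable. The "leaky" flag makes the observation
    // depend on the secret, violating the property.
    static List<Integer> trace(boolean secret, int lowInput, boolean leaky) {
        List<Integer> obs = new ArrayList<>();
        int state = lowInput;
        for (int i = 0; i < K; i++) {
            state = state + (leaky && secret ? 2 * i + 1 : 2 * i);  // one step
            obs.add(state % 2);                                     // low-visible output
        }
        return obs;
    }

    // forall t1, t2: equal low inputs imply equal low observations.
    static boolean observationalDeterminism(boolean leaky) {
        for (int low = 0; low < 2; low++) {
            for (boolean s1 : new boolean[]{false, true}) {
                for (boolean s2 : new boolean[]{false, true}) {
                    if (!trace(s1, low, leaky).equals(trace(s2, low, leaky))) {
                        return false;   // counterexample: a pair of traces disagrees
                    }
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println("secure variant: " + observationalDeterminism(false));
        System.out.println("leaky variant:  " + observationalDeterminism(true));
    }
}
```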


2011
Vol 55-57
pp. 2192-2196
Author(s):
Zhi Quan Dai
Yong Guan
Sheng Zhen Jin
Zhi Ping Shi
Xiao Juan Li
...  

SpaceWire is a high-speed data transmission bus standard proposed by ESA for aerospace applications. Hosted by the National Astronomical Observatories, Chinese Academy of Sciences, the Space Solar Telescope (SST) project adopts the SpaceWire bus standard as its space communication link. The SpaceWire communication circuit implemented by our group is part of the SST project. In order to improve the fault detection and fault correction capacity and the reliability of the SpaceWire bus, we add an error analysis and data storage module to the original six state modules of the standard protocol's exchange layer. We adopt a formal verification method based on model checking to verify the finite state machine of the SpaceWire control module. The properties, expressed as higher-order temporal logic formulas, are automatically verified to be true with the SMV model checker from Cadence. The verification results show that our SpaceWire bus implementation, together with the additional error analysis and data storage module, is faithful to the protocol specification. Therefore, the SpaceWire bus circuit can be integrated into the SST project.
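The exchange-layer state machine being verified has six standard states (ErrorReset, ErrorWait, Ready, Started, Connecting and Run). As a rough illustration of the model, here is a simplified Java sketch of such a state machine together with the kind of property one would check with a model checker; it is not the authors' SMV model, it omits all timing and flow-control detail, and the event names are assumptions made for this sketch.

```java
// A simplified sketch of the SpaceWire exchange-layer state machine with
// its six standard states. Timing, credit handling and several transitions
// are omitted; event names are illustrative.
enum LinkState { ERROR_RESET, ERROR_WAIT, READY, STARTED, CONNECTING, RUN }

enum LinkEvent { TIMEOUT, LINK_ENABLED, GOT_NULL, GOT_FCT, ERROR }

final class SpaceWireFsm {
    static LinkState step(LinkState s, LinkEvent e) {
        if (e == LinkEvent.ERROR) return LinkState.ERROR_RESET;   // any error restarts the link
        switch (s) {
            case ERROR_RESET: return e == LinkEvent.TIMEOUT      ? LinkState.ERROR_WAIT : s;
            case ERROR_WAIT:  return e == LinkEvent.TIMEOUT      ? LinkState.READY      : s;
            case READY:       return e == LinkEvent.LINK_ENABLED ? LinkState.STARTED    : s;
            case STARTED:     return e == LinkEvent.GOT_NULL     ? LinkState.CONNECTING : s;
            case CONNECTING:  return e == LinkEvent.GOT_FCT      ? LinkState.RUN        : s;
            default:          return s;                                                 // RUN
        }
    }

    // One property in the spirit of those verified with SMV: from every
    // state, an ERROR event leads back to ERROR_RESET. The state space is
    // tiny, so it can be checked exhaustively here.
    public static void main(String[] args) {
        for (LinkState s : LinkState.values()) {
            if (step(s, LinkEvent.ERROR) != LinkState.ERROR_RESET) {
                throw new AssertionError("error recovery violated in state " + s);
            }
        }
        System.out.println("error-recovery property holds on this sketch");
    }
}
```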


Author(s):  
Ranjit Biswas

The data structure “r-Train” (“Train” in short), where r is a natural number, is a new kind of powerful, robust data structure that can store homogeneous data dynamically in a flexible way, in particular large amounts of data. A train cannot, however, store heterogeneous data (by heterogeneous data, the authors mean data of various datatypes). In fact, the classical data structures (e.g., array, linked list, etc.) can store and handle homogeneous data only, not heterogeneous data. The advanced data structure “r-Atrain” (“Atrain” in short) is logically almost analogous to the data structure r-train (train), but with an advanced level of construction to accommodate heterogeneous data of large volumes. The data structure train can be viewed as a special case of the data structure atrain. It is important to note that neither of these two new data structures is a competitor of the other. By default, any heterogeneous data structure can also work as a homogeneous data structure. However, for working with a huge volume of homogeneous data, train is more suitable than atrain; for working with heterogeneous data, atrain is suitable while train is not applicable. The natural number r is suitably pre-decided and fixed by the programmer, depending upon the problem under consideration and upon the organization/industry for which the problem is posed.
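A minimal Java sketch of how we read the coach-based layout may help: homogeneous data stored in a linked chain of "coaches", each coach holding an array of r elements plus a link to the next coach. This is an illustrative reading, not the author's formal definition, and the class and method names are ours.

```java
// Illustrative sketch of the r-train idea: a linked chain of coaches, each
// an array of r slots. The author's definition also keeps availability
// information inside each coach, which is omitted here.
final class RTrain<T> {
    private final class Coach {
        final Object[] slots = new Object[r];
        int used = 0;
        Coach next;
    }

    private final int r;              // coach capacity, fixed in advance by the programmer
    private Coach head, tail;

    RTrain(int r) {
        if (r < 1) throw new IllegalArgumentException("r must be a natural number");
        this.r = r;
    }

    // Append an element, attaching a new coach when the current tail is full.
    void add(T value) {
        if (tail == null || tail.used == r) {
            Coach c = new Coach();
            if (tail == null) head = c; else tail.next = c;
            tail = c;
        }
        tail.slots[tail.used++] = value;
    }

    // Access by position: hop index / r coaches, then read slot index % r.
    @SuppressWarnings("unchecked")
    T get(int index) {
        Coach c = head;
        for (int hop = index / r; hop > 0; hop--) c = c.next;
        return (T) c.slots[index % r];
    }
}
```

A heterogeneous r-atrain would additionally need to record datatype information alongside the stored elements, which this homogeneous sketch deliberately omits.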


Author(s):  
JIGAR JAIN
Rushikesh Gaidhani
Pranjal Bahuguna

Non-blocking transactional data structures have been around for quite a while now. With a transactional structure we want to achieve atomicity of the multiple operations of a transaction, and consistency of the structure after a rollback. Older solutions such as software transactional memory (STM) and transactional boosting provide synchronization in an external layer over the structure itself. This introduces an overhead that is not strictly required, which is the problem we address. To solve this issue, researchers have proposed making structural changes to an existing data structure so that it becomes transactional and lock-free. In this work we present a sequential re-implementation of that solution: we applied the transactional transformation to a linked list and built a sequential version of it. Next, we introduced a multi-resource-lock version of the same structure, which uses locking to support multi-threaded operations on the linked list. As one would expect, we get a constant-time curve for the sequential version of the transactional structure. The lock-based multi-threaded version behaves differently: it performs worse than the sequential version because of the overhead of maintaining a resource vector. As the number of threads increases, the size of the resource vector grows and contention therefore increases. In future work we plan to fully re-implement a lock-free transactional linked data structure and again compare its results with those reported here.
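To clarify what atomicity of a transaction's operations and consistency after rollback amount to, here is a hypothetical, purely sequential Java sketch of a transactional set built on an undo log. It is not the re-implementation evaluated above, and all names are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.TreeSet;

// A sequential sketch of transactional semantics: every mutation inside a
// transaction records an undo action, so the whole group of operations
// either commits together or rolls back to the original state.
final class TransactionalIntSet {
    private final TreeSet<Integer> set = new TreeSet<>();
    private Deque<Runnable> undoLog;                 // non-null while a transaction is open

    void begin()  { undoLog = new ArrayDeque<>(); }  // must be called before insert/remove

    void insert(int key) {
        if (set.add(key)) undoLog.push(() -> set.remove(key));
    }

    void remove(int key) {
        if (set.remove(key)) undoLog.push(() -> set.add(key));
    }

    void commit() { undoLog = null; }                // keep all effects

    void abort() {                                   // undo effects in reverse order
        while (!undoLog.isEmpty()) undoLog.pop().run();
        undoLog = null;
    }

    boolean contains(int key) { return set.contains(key); }
}
```

For example, begin(); insert(1); insert(2); abort(); leaves the set unchanged, whereas commit() after the same insertions would keep both elements.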


2021
Vol 13 (4)
pp. 559
Author(s):
Milto Miltiadou
Neill D. F. Campbell
Darren Cosker
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points is still the most widely used approach to interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult. It is, therefore, important to research which data structures are most appropriate for storing and interpreting the data while voxelising full-waveform LiDAR data for 3D polygonal model creation. The data structures are tested in terms of time efficiency and memory consumption at run time, and are the following: (1) a 1D array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) an Octree; (4) Integral Volumes, which allows the sum of any cuboid area to be found in constant time; (5) Octree Max/Min, which is an upgraded octree; and (6) the Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. We show that Integral Volumes is the most time-efficient data structure, but it requires the most memory allocation. Furthermore, the 1D array and Integral Volumes require the allocation of coherent space in memory, including the empty voxels, while Voxel Hashing and the octree-related data structures do not need to allocate memory for empty voxels and, as shown in the tests conducted, therefore allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory. Each tested data structure has different benefits and downsides; therefore, each application should be examined individually.
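Integral Volumes generalises the classic summed-area table to three dimensions. The Java sketch below (with assumed names and types, not the paper's code) shows the standard construction, which makes clear both the constant-time cuboid queries and the dense memory allocation, including empty voxels, that the paper measures.

```java
// A sketch of a 3D summed-area table ("integral volume"): after an O(n^3)
// build over a dense voxel grid, the sum of any axis-aligned cuboid is
// obtained in constant time by inclusion-exclusion.
final class IntegralVolume {
    private final double[][][] s;   // s[x][y][z] = sum of all voxels with indices < (x, y, z)

    IntegralVolume(double[][][] voxels) {
        int nx = voxels.length, ny = voxels[0].length, nz = voxels[0][0].length;
        s = new double[nx + 1][ny + 1][nz + 1];
        for (int x = 1; x <= nx; x++)
            for (int y = 1; y <= ny; y++)
                for (int z = 1; z <= nz; z++)
                    s[x][y][z] = voxels[x - 1][y - 1][z - 1]
                            + s[x - 1][y][z] + s[x][y - 1][z] + s[x][y][z - 1]
                            - s[x - 1][y - 1][z] - s[x - 1][y][z - 1] - s[x][y - 1][z - 1]
                            + s[x - 1][y - 1][z - 1];
    }

    // Sum of voxels in [x0, x1] x [y0, y1] x [z0, z1], all bounds inclusive.
    double cuboidSum(int x0, int y0, int z0, int x1, int y1, int z1) {
        return s[x1 + 1][y1 + 1][z1 + 1]
                - s[x0][y1 + 1][z1 + 1] - s[x1 + 1][y0][z1 + 1] - s[x1 + 1][y1 + 1][z0]
                + s[x0][y0][z1 + 1] + s[x0][y1 + 1][z0] + s[x1 + 1][y0][z0]
                - s[x0][y0][z0];
    }
}
```

The trade-off described in the abstract is visible here: the prefix-sum array covers the whole grid, empty voxels included, which is exactly what the hashed and octree-based structures avoid.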


2018
Vol 18 (3-4)
pp. 470-483
Author(s):
GREGORY J. DUCK
JOXAN JAFFAR
ROLAND H. C. YAP

Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, with respect to given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
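As a purely illustrative analogue (the paper's analysis is static and built on CHR solvers, not runtime checking), the hypothetical Java sketch below shows what a shape-neutral integrity property looks like: it traverses whatever heap graph is reachable from a set of roots, whether list, tree, DAG or cyclic graph, and reports any reference to a node that is no longer recorded as allocated.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A dynamic, shape-neutral integrity check: traverse the reachable heap
// graph and verify that every reference points to a node still recorded
// as allocated. The graph shape (list, tree, DAG, cycle) is irrelevant.
final class HeapIntegrity {
    static final class Node {
        Node left, right;             // generic out-edges; shape does not matter
    }

    private final Set<Node> allocated = new HashSet<>();

    Node alloc() { Node n = new Node(); allocated.add(n); return n; }
    void free(Node n) { allocated.remove(n); }      // simulates deallocation

    boolean check(List<Node> roots) {
        Set<Node> seen = new HashSet<>();
        Deque<Node> work = new ArrayDeque<>();
        for (Node r : roots) if (r != null) work.push(r);
        while (!work.isEmpty()) {
            Node n = work.pop();
            if (!seen.add(n)) continue;                 // already visited (handles DAGs and cycles)
            if (!allocated.contains(n)) return false;   // dangling reference detected
            if (n.left != null)  work.push(n.left);
            if (n.right != null) work.push(n.right);
        }
        return true;
    }
}
```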

