A Compile/Run-time Environment for the Automatic Transformation of Linked List Data Structures

2008 ◽  
Vol 36 (6) ◽  
pp. 592-623 ◽  
Author(s):  
H. L. A. van der Spek ◽  
S. Groot ◽  
E. M. Bakker ◽  
H. A. G. Wijshoff

1982 ◽  
Vol 12 (4) ◽  
pp. 394-394
Author(s):  
Roger B. Dannenberg

2016 ◽  
Vol 1 (1) ◽  
Author(s):  
Benjamin Schiller ◽  
Clemens Deusser ◽  
Jeronimo Castrillon ◽  
Thorsten Strufe

2021 ◽  
Author(s):  
David Friggens

Concurrent data structure algorithms have traditionally been designed using locks to regulate the behaviour of interacting threads, thus restricting access to parts of the shared memory to only one thread at a time. Since locks can lead to issues of performance and scalability, there has been interest in designing so-called nonblocking algorithms that do not use locks. However, designing and reasoning about concurrent systems is difficult, and is even more so for nonblocking systems, as evidenced by the number of incorrect algorithms in the literature. This thesis explores how the technique of model checking can aid the testing and verification of nonblocking data structure algorithms.

Model checking is an automated verification method for finite state systems, and is able to produce counterexamples when verification fails. For verification, concurrent data structures are considered to be infinite state systems, as there is no bound on the number of interacting threads, the number of elements in the data structure, nor the number of possible distinct data values. Thus, in order to analyse concurrent data structures with model checking, we must either place finite bounds upon them, or employ an abstraction technique that will construct a finite system with the same properties.

First, we discuss how nonblocking data structures can best be represented for model checking, and how to specify the properties we are interested in verifying. These properties are the safety property linearisability, and the progress properties wait-freedom, lock-freedom and obstruction-freedom.

Second, we investigate using model checking for exhaustive testing, by verifying bounded (and hence finite state) instances of nonblocking data structures, parameterised by the number of threads, the number of distinct data values, and the size of storage memory (e.g. array length, or maximum number of linked list nodes). It is widely held, based on anecdotal evidence, that most bugs occur in small instances. We investigate the smallest bounds needed to falsify a number of incorrect algorithms, which supports this hypothesis. We also investigate verifying a number of correct algorithms for a range of bounds. If an algorithm can be verified for bounds significantly higher than the minimum bounds needed for falsification, then we argue this provides a high degree of confidence in the general correctness of the algorithm. However, with the available hardware we were not able to verify any of the algorithms to high enough bounds to claim such confidence.

Third, we investigate using model checking to verify nonblocking data structures by employing the technique of canonical abstraction to construct finite state representations of the unbounded algorithms. Canonical abstraction represents abstract states as 3-valued logical structures, and allows the initial coarse abstraction to be refined as necessary by adding derived predicates. We introduce several novel derived predicates and show how these allow linearisability to be verified for linked list based nonblocking stack and queue algorithms. This is achieved within the standard canonical abstraction framework, in contrast to recent approaches that have added extra abstraction techniques on top to achieve the same goal.

The finite state systems we construct using canonical abstraction are still relatively large, being exponential in the number of distinct abstract thread objects. We present an alternative application of canonical abstraction, which more coarsely collapses all threads in a state to be represented by a single abstract thread object. In addition, we define further novel derived predicates, and show that these allow linearisability to be verified for the same stack and queue algorithms far more efficiently.
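For concreteness, the following is a minimal sketch, not taken from the thesis, of a Treiber-style lock-free stack in Java: the classic linked-list-based nonblocking stack algorithm of the kind verified above. The class and member names are ours.

```java
import java.util.concurrent.atomic.AtomicReference;

// Treiber-style lock-free stack: push/pop retry a compare-and-set (CAS)
// instead of taking a lock, so the stack as a whole always makes progress
// (lock-freedom) even if individual threads are delayed.
public class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> oldTop;
        Node<T> newTop;
        do {
            oldTop = top.get();
            newTop = new Node<>(value, oldTop);
            // The successful CAS is the linearisation point of push.
        } while (!top.compareAndSet(oldTop, newTop));
    }

    public T pop() {
        Node<T> oldTop;
        do {
            oldTop = top.get();
            if (oldTop == null) {
                return null; // empty stack
            }
            // The successful CAS is the linearisation point of pop.
        } while (!top.compareAndSet(oldTop, oldTop.next));
        return oldTop.value;
    }
}
```

A bounded model-checking instance of such an algorithm fixes small parameters of the kind described above, e.g. two threads, two distinct data values and three list nodes, and then exhaustively explores every interleaving.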


Semantic Web ◽  
2021 ◽  
pp. 1-36
Author(s):  
Enrico Daga ◽  
Albert Meroño-Peñuela ◽  
Enrico Motta

Sequences are among the most important data structures in computer science. In the Semantic Web, however, little attention has been given to Sequential Linked Data. In previous work, we discussed the data models that Knowledge Graphs commonly use for representing sequences, and showed that these models have an impact on query performance and that this impact is invariant across triplestore implementations. However, the specific list operations that the management of Sequential Linked Data requires beyond the simple retrieval of an entire list or a range of its elements – e.g. adding or removing elements from a list – and their impact on the various list data models remain unclear. Covering this knowledge gap would be a significant step towards the realization of a Semantic Web list Application Programming Interface (API) that standardizes list manipulation and generalizes beyond specific data models. To address these challenges, we build on our previous work on the effects of various sequential data models for Knowledge Graphs, extending our benchmark and proposing a set of read-write Semantic Web list operations in SPARQL, with insert, update and delete support. To do so, we identify five classic list-based sequential data structures from computer science (linked list, doubly linked list, stack, queue, and array), from which we derive nine atomic read-write operations for Semantic Web lists. We propose a SPARQL implementation of these operations for five typical RDF data models and compare their performance by executing them against six increasing dataset sizes and four different triplestores. In light of our results, we discuss the feasibility of our devised API and reflect on the state of affairs of Sequential Linked Data.
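As an illustration, and not the paper's actual benchmark code, here is one plausible rendering of a single atomic operation, appending an element to an RDF linked list modelled with rdf:first/rdf:rest, as a SPARQL update executed from Java against Apache Jena's in-memory model. The list IRI and the inserted literals are invented for the example.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.update.UpdateAction;

public class ListAppend {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();

        // Seed a one-element rdf:List: ex:myList holds "a".
        UpdateAction.parseExecute(String.join("\n",
            "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>",
            "PREFIX ex:  <http://example.org/>",
            "INSERT DATA { ex:myList rdf:first \"a\" ; rdf:rest rdf:nil }"),
            model);

        // Atomic append: walk rdf:rest* to the tail cell, detach rdf:nil,
        // and attach a fresh cons cell holding the new element "b".
        UpdateAction.parseExecute(String.join("\n",
            "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>",
            "PREFIX ex:  <http://example.org/>",
            "DELETE { ?tail rdf:rest rdf:nil }",
            "INSERT { ?tail rdf:rest [ rdf:first \"b\" ; rdf:rest rdf:nil ] }",
            "WHERE  { ex:myList rdf:rest* ?tail . ?tail rdf:rest rdf:nil }"),
            model);

        model.write(System.out, "TURTLE");
    }
}
```

Note that even this single append traverses the entire list via the rdf:rest* path, which is exactly the kind of data-model-dependent cost a benchmark of list operations has to measure.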


2002 ◽  
Vol 12 (6) ◽  
pp. 567-600 ◽  
Author(s):  
Karl Crary ◽  
Stephanie Weirich ◽  
Greg Morrisett

Intensional polymorphism, the ability to dispatch to different routines based on types at run time, enables a variety of advanced implementation techniques for polymorphic languages, including tag-free garbage collection, unboxed function arguments, polymorphic marshalling and flattened data structures. To date, languages that support intensional polymorphism have required a type-passing (as opposed to type-erasure) interpretation where types are constructed and passed to polymorphic functions at run time. Unfortunately, type-passing suffers from a number of drawbacks: it requires duplication of run-time constructs at the term and type levels, it prevents abstraction, and it severely complicates polymorphic closure conversion. We present a type-theoretic framework that supports intensional polymorphism, but avoids many of the disadvantages of type passing. In our approach, run-time type information is represented by ordinary terms. This avoids the duplication problem, allows us to recover abstraction, and avoids complications with closure conversion. In addition, our type system provides another improvement in expressiveness; it allows unknown types to be refined in place, thereby avoiding certain beta-expansions required by other frameworks.
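The paper's setting is a typed intermediate language rather than Java, but its central idea, representing run-time type information as ordinary terms and dispatching on those terms, can be sketched in any language with erased generics. The names TypeRep, IntRep, PairRep and show below are ours (the sketch needs Java 21 for pattern switches over a sealed interface).

```java
// Run-time type information as ordinary terms: instead of passing types,
// we pass first-class values that describe types and dispatch on them.
public class TypeRepDemo {
    sealed interface TypeRep permits IntRep, PairRep {}
    record IntRep() implements TypeRep {}
    record PairRep(TypeRep fst, TypeRep snd) implements TypeRep {}

    // A "typecase": behaviour chosen by inspecting the term-level
    // representation; no types are constructed or passed at run time.
    static String show(TypeRep rep, Object value) {
        return switch (rep) {
            case IntRep r -> Integer.toString((Integer) value);
            case PairRep r -> {
                Object[] pair = (Object[]) value;
                yield "(" + show(r.fst(), pair[0]) + ", "
                          + show(r.snd(), pair[1]) + ")";
            }
        };
    }

    public static void main(String[] args) {
        TypeRep rep = new PairRep(new IntRep(), new PairRep(new IntRep(), new IntRep()));
        Object value = new Object[]{ 1, new Object[]{ 2, 3 } };
        System.out.println(show(rep, value)); // prints (1, (2, 3))
    }
}
```

In the paper's calculus the type system ties each representation term to the type it denotes, so this dispatch is statically known to be safe; Java cannot express that connection, which is why the casts in the sketch stand in for what the framework guarantees.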


2014 ◽  
Vol 668-669 ◽  
pp. 1198-1201
Author(s):  
Hong Mei Zhu ◽  
Liang Zhang ◽  
Wei Sun

In the Semantic Web, extensive reuse of existing large ontologies is one of the central ideas of ontology engineering. Ontology extraction should return a relevant sub-ontology that covers a given sub-vocabulary. Existing ontology extraction algorithms are relatively inefficient when they try to obtain a suitable ontology module from an ontology at run time. This paper proposes an ontology module extraction method. Related concepts and criteria of ontology module extraction are studied; data structures and identification and evaluation methods for ontology module extraction are discussed; preliminary experimental results and the corresponding analysis are also presented.
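The paper does not spell out its algorithm, so the following is only a generic sketch of the basic idea behind signature-driven module extraction: starting from a seed sub-vocabulary, repeatedly pull in every axiom that mentions a term already in the signature, enlarging the signature until a fixpoint is reached. Axiom and its terms() accessor are hypothetical stand-ins for a real ontology API.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ModuleExtractor {
    // Hypothetical stand-in for an ontology axiom and its vocabulary.
    record Axiom(String name, Set<String> terms) {}

    // Extract the sub-ontology (module) reachable from a seed sub-vocabulary:
    // keep adding axioms that share a term with the growing signature
    // until nothing changes (a fixpoint).
    static Set<Axiom> extractModule(List<Axiom> ontology, Set<String> seed) {
        Set<String> signature = new HashSet<>(seed);
        Set<Axiom> module = new HashSet<>();
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Axiom ax : ontology) {
                if (module.contains(ax)) continue;
                for (String term : ax.terms()) {
                    if (signature.contains(term)) {
                        module.add(ax);
                        signature.addAll(ax.terms());
                        changed = true;
                        break;
                    }
                }
            }
        }
        return module;
    }
}
```

Production module extractors (e.g. locality-based modules) use a more careful inclusion test than mere term sharing, so a sketch like this over-approximates the module; it is meant only to fix intuitions.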


Author(s):  
Ranjit Biswas

The data structure "r-Train" ("Train" in short), where r is a natural number, is a new kind of powerful, robust data structure that can store homogeneous data dynamically in a flexible way, in particular large amounts of it. But a train cannot store heterogeneous data (by heterogeneous data, the authors mean data of various datatypes). In fact, the classical data structures (e.g., array, linked list, etc.) can store and handle homogeneous data only, not heterogeneous data. The advanced data structure "r-Atrain" ("Atrain" in short) is logically almost analogous to the r-train (train), but with an advanced level of construction to accommodate heterogeneous data of large volumes. The train can be viewed as a special case of the atrain. It is important to note that neither of these two new data structures is a competitor of the other. By default, any heterogeneous data structure can also work as a homogeneous one. However, for working with a huge volume of homogeneous data, the train is more suitable than the atrain; for working with heterogeneous data, the atrain is suitable while the train is not applicable. The natural number r is chosen in advance and fixed by the programmer, depending upon the problem under consideration and upon the organization/industry for which the problem is posed.
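On this description, an r-train is close to what is elsewhere called an unrolled linked list: a linked chain of "coaches", each a fixed-size array of r homogeneous elements. The minimal Java sketch below captures that reading; the class names are ours, and bookkeeping details of the published structure (status fields, availability counts) are omitted.

```java
// A minimal sketch of the r-train idea: a linked list of "coaches",
// each coach a fixed array of r homogeneous elements.
public class RTrain<T> {
    private final int r; // coach capacity, fixed in advance by the programmer

    private static final class Coach<T> {
        final Object[] data;
        int used;          // number of occupied slots in this coach
        Coach<T> next;     // link to the next coach, or null at the end
        Coach(int r) { data = new Object[r]; }
    }

    private final Coach<T> head;
    private Coach<T> tail;

    public RTrain(int r) {
        this.r = r;
        head = tail = new Coach<>(r);
    }

    // Append: fill the last coach; attach a fresh coach when it is full.
    public void add(T element) {
        if (tail.used == r) {
            Coach<T> fresh = new Coach<>(r);
            tail.next = fresh;
            tail = fresh;
        }
        tail.data[tail.used++] = element;
    }

    // Indexed read (assumes 0 <= index < number of stored elements):
    // skip whole coaches, then index into the right one.
    @SuppressWarnings("unchecked")
    public T get(int index) {
        Coach<T> c = head;
        while (index >= c.used) { index -= c.used; c = c.next; }
        return (T) c.data[index];
    }
}
```

The choice of r trades array-like locality within a coach (large r) against list-like flexibility of growth (small r), which matches the remark that r is fixed per problem and per organization.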

