IMPLEMENTING A DOMAIN MODEL FOR DATA STRUCTURES

Author(s):  
DON BATORY
VIVEK SINGHAL
MARTY SIRKIN

We present a model of the data structure domain that is expressed in terms of the GenVoca domain modeling concepts [7]. We show how familiar data structures can be encapsulated as realms of plug-compatible, symmetric, and reusable components, and we show how complex data structures can be formed from their composition. The target application of our research is a precompiler for specifying and generating customized data structures.
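The paper targets a precompiler, but the layering idea behind GenVoca-style composition can be hedged into a short sketch. The Python below is purely illustrative (class names and interface are ours, not the paper's): components export the same interface they import, so they are plug-compatible and can be stacked to form a customized data structure.

```python
class ListStore:
    """Terminal component: a bare list-backed container."""
    def __init__(self):
        self.items = []
    def insert(self, x):
        self.items.append(x)
    def find(self, x):
        return x in self.items

class Counting:
    """Layer: adds a size counter on top of any component with the same interface."""
    def __init__(self, inner):
        self.inner, self.size = inner, 0
    def insert(self, x):
        self.inner.insert(x)
        self.size += 1
    def find(self, x):
        return self.inner.find(x)

class Logging:
    """Layer: traces every operation, then delegates to the inner component."""
    def __init__(self, inner):
        self.inner = inner
    def insert(self, x):
        print(f"insert({x!r})")
        self.inner.insert(x)
    def find(self, x):
        print(f"find({x!r})")
        return self.inner.find(x)

# Composition: because each layer imports and exports the same interface,
# layers can be reordered or swapped without touching their neighbours.
ds = Logging(Counting(ListStore()))
ds.insert(42)
assert ds.find(42)
```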

2007
Vol 10 (1)
Author(s):  
Jorge Villalobos
Danilo Pérez
Juan Castro
Camilo Jiménez

In a computer science curriculum, the data structures course is considered fundamental. In that course, students must develop the ability to design the most suitable data structures for solving a problem. They must also write an efficient algorithm to solve the problem. Students must understand that there are different types of data structures, each with associated algorithms of different complexity. A data structures laboratory is a set of computational tools that helps students experiment with the concepts introduced in the course. The main objective of this experimentation is to develop the abilities students need to manipulate complex data structures. This paper presents the main characteristics of the laboratory built to support the course. We illustrate the broad possibilities of the tool with an example.


2020
Author(s):  
Anil Kumar Bheemaiah

The Wiki Story, essential documentation in the manner of Javadoc, is added to GS Collections as a mutable data structure alongside collections like the Bag, leading to self-modifying programs with attribute-oriented programming inspired by work in LISP and AIML and by the Self language of BotLibre. In this paper we integrate OpenFaaS with Java 8 and RxJava for green coding and for the generalization to reusable components in remote functions on the edge or in the cloud.

Keywords: GS Collections, Eclipse Collections, XDoclet, Code Generators, OpenFaaS, Bayou Framework.

What: GS Collections has a new data structure candidate, the Wiki Story (WS). WS inherits from the user stories of the Agile process, with documentation embedded in it as comments, amenable directly to OOP programming. A WS data structure is defined by the this property and reflects a uniform XML, JSON, and HTML5 DOM structure. XDoclet is introduced as attribute-oriented programming with a set of attributes for Beans, Streams, Rx operators, and data structures.

How: Attributes define Rx and Rx++ programming with code generators from the XDoclet 2 library. Custom objects allow for the integration of Rx stream objects: sensor streams, event streams, Kinesis streams, DynamoDB streams, and many more. OpenFaaS is also integrated by query-based function integration, as remote-method or cloud-based method integration with attributes, called green coding, similar to the method queryCodeGenerator() (Bheemaiah, n.d.) of Bayou but extended to FaaS, with services as attributes ("How to Use Bayou – Bayou: Program Synthesis Powered by Bayesian Machine Learning", n.d.).

Why: We have added wikis to the user stories provided by Agile; attributes are added, allowing for a query-based tool for FaaS and XDoclet-based code generation analogous to the neural sketch learning of Bayou. Code generation as amplification is now so accessible that even casual coders can contribute well-generated code, an evolution of compiler back-end code.

Applications: Uniform high-quality code, optimized to score high on Sonar; code generators for amplification and green coding.
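The core mechanism here, attributes driving a code generator, can be hedged into a small sketch. The Python below is our own illustration (the abstract concerns XDoclet attributes in Java, not Python decorators): a decorator attaches generator-visible metadata to a class, and a toy generator walks the registry and emits stubs from that metadata.

```python
REGISTRY = []

def attribute(**meta):
    """Attach generator-visible metadata to a class (stands in for an XDoclet tag)."""
    def wrap(cls):
        cls.__meta__ = meta
        REGISTRY.append(cls)
        return cls
    return wrap

@attribute(doc="Embedded wiki-style documentation.", expose_as="faas")
class WikiStory:
    """A user story carrying its own documentation."""
    def __init__(self, text):
        self.text = text

def generate_stubs():
    """Toy generator: reads each registered class's attributes and emits boilerplate."""
    for cls in REGISTRY:
        meta = cls.__meta__
        if meta.get("expose_as") == "faas":
            print(f"# generated stub: deploy {cls.__name__} as a remote function")
            print(f"# doc: {meta['doc']}")

generate_stubs()
```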


2021
Vol 73 (1)
pp. 134-141
Author(s):  
A.R. Baidalina
S.A. Boranbayev
The article discusses ways of programming algorithms for complex data structures in Python. Knowledge of these structures and their corresponding algorithms is necessary when choosing the best methods for developing software. When studying the subject "Algorithms and Data Structures", it is important to understand the essence of data structures, because adapting a data structure to a specific problem requires understanding its internals and its algorithms. Examples are given of programming algorithms for dynamic lists and binary search trees in Python, a language now in wide use. Depth-first and breadth-first graph traversals are implemented clearly and efficiently using the Python dictionary, as sketched below.
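As an illustration of the last point, here is a compact sketch (ours, not taken from the article) of breadth-first and depth-first traversal over an adjacency list held in a Python dictionary.

```python
from collections import deque

graph = {           # adjacency list held in a dict
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def bfs(start):
    """Visit vertices level by level using a queue."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def dfs(start, seen=None):
    """Visit vertices depth-first by recursion."""
    seen = seen if seen is not None else set()
    seen.add(start)
    order = [start]
    for w in graph[start]:
        if w not in seen:
            order += dfs(w, seen)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']
```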


2021
Vol 13 (4)
pp. 559
Author(s):  
Milto Miltiadou
Neill D. F. Campbell
Darren Cosker
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points remains the most widely used approach to interpreting them, because the increased information stored within the waveform makes interpretation and handling difficult. It is therefore important to investigate which data structures are most appropriate for storing and interpreting the data. The data structures are tested in terms of time efficiency and memory consumption at run-time and are the following: (1) a 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) an Octree; (4) Integral Volumes, which allows finding the sum of any cuboid region in constant time; (5) Octree Max/Min, which is an upgraded octree; and (6) the Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. We show that Integral Volumes is the most time-efficient data structure, but it requires the most memory. Furthermore, the 1D-Array and Integral Volumes require the allocation of coherent space in memory, including the empty voxels, while Voxel Hashing and the octree-related data structures do not allocate memory for empty voxels and, as the tests conducted show, therefore allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory; each tested data structure has different benefits and downsides, so each application should be examined individually.
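To make the Integral Volumes idea concrete, here is a hedged NumPy sketch (our own illustration, not the authors' code): a 3D prefix-sum table lets the sum of any cuboid of voxels be read off with eight lookups, i.e. in constant time, at the cost of allocating the full padded grid, empty voxels included.

```python
import numpy as np

def integral_volume(vox):
    """Prefix sums along all three axes, zero-padded so P[i,j,k] = vox[:i,:j,:k].sum()."""
    p = vox.cumsum(0).cumsum(1).cumsum(2)
    return np.pad(p, ((1, 0), (1, 0), (1, 0)))

def cuboid_sum(P, i0, i1, j0, j1, k0, k1):
    """Sum of vox[i0:i1, j0:j1, k0:k1] via 3D inclusion-exclusion: eight lookups."""
    return (P[i1, j1, k1] - P[i0, j1, k1] - P[i1, j0, k1] - P[i1, j1, k0]
            + P[i0, j0, k1] + P[i0, j1, k0] + P[i1, j0, k0] - P[i0, j0, k0])

vox = np.random.rand(32, 32, 32)        # toy voxelised intensity grid
P = integral_volume(vox)
assert np.isclose(cuboid_sum(P, 2, 10, 4, 8, 0, 5), vox[2:10, 4:8, 0:5].sum())
```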


2018
Vol 18 (3-4)
pp. 470-483
Author(s):  
GREGORY J. DUCK
JOXAN JAFFAR
ROLAND H. C. YAP

Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning over data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, with respect to given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for properties using the type definitions from the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer, and equality solvers. A key property of our program analysis is that the target data-structure properties are shape neutral, i.e., the analysis does not check for properties relating to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure manipulating programs, including those that use lists, trees, DAGs, graphs, etc. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
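The shape-neutral idea can be illustrated outside CHR. The Python sketch below is our own analogy, not the SMCHR implementation: it checks a single low-level property, that every reachable pointer targets an allocated node, while remaining agnostic to whether the structure is a list, tree, DAG, or cyclic graph.

```python
class Node:
    """A heap node with arbitrary pointer fields (list/tree/DAG/graph alike)."""
    def __init__(self, value, *succ):
        self.value = value
        self.succ = list(succ)

def check_integrity(roots, allocated):
    """Shape-neutral check: every reachable pointer targets an allocated node."""
    seen = set()
    stack = list(roots)
    while stack:
        n = stack.pop()
        if id(n) in seen:
            continue            # cycles are fine; the property ignores shape
        seen.add(id(n))
        for s in n.succ:
            if s not in allocated:
                return False    # dangling pointer: integrity violated
            stack.append(s)
    return True

a, b = Node(1), Node(2)
a.succ.append(b)
b.succ.append(a)                 # a cycle, not a tree: still passes
assert check_integrity([a], {a, b})
```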


Author(s):  
Sudeep Sarkar ◽  
Dmitry Goldgof

There is a growing need for expertise both in image analysis and in software engineering. To date, these two areas have been taught separately in an undergraduate computer and information science curriculum. However, we have found that introduction to image analysis can be easily integrated in data-structure courses without detracting from the original goal of teaching data structures. Some of the image processing tasks offer a natural way to introduce basic data structures such as arrays, queues, stacks, trees and hash tables. Not only does this integrated strategy expose the students to image related manipulations at an early stage of the curriculum but it also imparts cohesiveness to the data-structure assignments and brings them closer to real life. In this paper we present a set of programming assignments that integrates undergraduate data-structure education with image processing tasks. These assignments can be incorporated in existing data-structure courses with low time and software overheads. We have used these assignment sets thrice: once in a 10-week duration data-structure course at the University of California, Santa Barbara and the other two times in 15-week duration courses at the University of South Florida, Tampa.
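As an example of the kind of assignment described (ours, not one from the paper): connected-component labelling of a binary image exercises both 2D arrays and queues in a single task.

```python
from collections import deque

def label_components(img):
    """Label 4-connected foreground regions; img is a 2D list of 0/1 pixels."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] == 1 and labels[y][x] == 0:
                next_label += 1
                q = deque([(y, x)])          # the queue drives the flood fill
                labels[y][x] = next_label
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return labels

img = [[1, 1, 0],
       [0, 0, 0],
       [0, 1, 1]]
print(label_components(img))   # two components: labels 1 and 2
```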


Algorithms
2018
Vol 11 (8)
pp. 128
Author(s):  
Shuhei Denzumi
Jun Kawahara
Koji Tsuda
Hiroki Arimura
Shin-ichi Minato
...  

In this article, we propose a succinct data structure for zero-suppressed binary decision diagrams (ZDDs). A ZDD represents sets of combinations efficiently, and various set operations can be performed on the ZDD without explicitly extracting combinations. Thanks to these features, ZDDs have been applied to web information retrieval, information integration, and data mining. However, to support rich manipulation of sets of combinations and future updates, ZDDs require considerable space, which means there is still room for compression. This paper introduces a new succinct data structure, called DenseZDD, for further compressing a ZDD when we do not need to perform set operations on it but want to test whether a given set is included in the family it represents, or to count the number of elements in the family. We also propose a hybrid method that combines DenseZDDs with ordinary ZDDs. Numerical experiments show that our data structures are about one third the size of ordinary ZDDs, while membership operations and random sampling on DenseZDDs are about ten times and three times faster, respectively, than on ordinary ZDDs for some datasets.
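For readers unfamiliar with ZDD queries, the following Python sketch (illustrative only, unrelated to the DenseZDD encoding) shows the two operations the paper optimises, membership and counting, on an ordinary pointer-based ZDD.

```python
from collections import namedtuple
from functools import lru_cache

# A ZDD node: a variable index plus a 0-edge (variable absent) and a
# 1-edge (variable present). Terminals are the booleans False (empty
# family) and True (the family containing only the empty set).
Node = namedtuple("Node", "var lo hi")

def member(node, items):
    """Is the set `items` in the family rooted at `node`? O(depth) time."""
    rest = sorted(items)
    while isinstance(node, Node):
        if rest and rest[0] < node.var:
            return False          # a skipped variable is absent in every set below
        if rest and rest[0] == node.var:
            rest.pop(0)
            node = node.hi        # take the 1-edge: variable present
        else:
            node = node.lo        # take the 0-edge: variable absent
    return node is True and not rest

@lru_cache(maxsize=None)
def count(node):
    """Number of sets in the represented family."""
    if isinstance(node, bool):
        return int(node)
    return count(node.lo) + count(node.hi)

# Family {{}, {1}, {1, 2}} over variables ordered 1 < 2:
z = Node(1, True, Node(2, True, True))
assert member(z, {1, 2}) and member(z, set()) and not member(z, {2})
assert count(z) == 3
```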

