Solving the Kinematics of the Planar Mechanism Using Data Structures of Assur Groups

2016 ◽  
Vol 8 (6) ◽  
Author(s):  
Yuanxi Sun ◽  
Wenjie Ge ◽  
Jia Zheng ◽  
Dianbiao Dong

This paper presents a systematic solution of the kinematics of planar mechanisms from the perspective of Assur groups. When a planar mechanism is decomposed into Assur groups, the order in which the groups must be solved is not known in advance. To solve this problem, the decomposed Assur groups are first classified into three types according to their calculability, which lays the foundation for an automatic solving algorithm. Second, a data structure for the Assur group is presented, which supplies the automatic solving algorithm with the input and output parameters of each group. All decomposed Assur groups are stored in a component stack, and their parameters are stored in parameter stacks. The algorithm inspects the identification flag of each Assur group in the component stack and its corresponding parameters in the parameter stacks to decide which group is currently calculable and which can be solved afterward. The proposed solution automatically generates a solving order for all Assur groups in the planar mechanism and allows Assur groups to be added, modified, or removed at any time. Two planar mechanisms are given as examples to illustrate the proposed solution in detail.
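To make the stack-based scheduling idea concrete, here is a minimal sketch; the class and parameter names are hypothetical, and the kinematic solution step of each group is elided:

```python
# Minimal sketch (hypothetical names) of the stack-based scheduling idea:
# an Assur group becomes calculable once all of its input parameters are known.

class AssurGroup:
    def __init__(self, flag, inputs, outputs):
        self.flag = flag        # identification flag (group type/class)
        self.inputs = inputs    # names of parameters this group consumes
        self.outputs = outputs  # names of parameters this group produces

def solve_mechanism(component_stack, known_params):
    """Repeatedly scan the component stack, solving every group whose
    input parameters are already present in the parameter store."""
    order, pending = [], list(component_stack)
    while pending:
        progress = False
        for group in pending[:]:
            if all(p in known_params for p in group.inputs):
                # placeholder for the group's actual kinematic solution step
                known_params.update({p: None for p in group.outputs})
                order.append(group.flag)
                pending.remove(group)
                progress = True
        if not progress:
            raise ValueError("no calculable Assur group remains")
    return order

# Example: group G2 depends on G1's output, so G1 is scheduled first.
g1 = AssurGroup("G1", inputs=["theta_drive"], outputs=["jointA"])
g2 = AssurGroup("G2", inputs=["jointA"], outputs=["jointB"])
print(solve_mechanism([g2, g1], {"theta_drive": 0.5}))  # ['G1', 'G2']
```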

2019 ◽  
Vol 2 ◽  
pp. 1-10
Author(s):  
Menelaos Kotsollaris ◽  
William Liu ◽  
Emmanuel Stefanakis ◽  
Yun Zhang

Abstract. Modern map visualizations are built on data structures for storing tile images, with efficiency and usability as the main concerns. The core function of a web tiled map management system is to serve tile images to the end user; several tiles combined constitute the web map. To this end, several data structures are showcased and analyzed. Specifically, this paper focuses on the SimpleFormat, which stores tiles directly on the file system; the ImageBlock, which divides each tile folder (the folder where tile images are stored) into subfolders containing multiple tiles before storing them on the file system; the LevelFilesSet, a data structure that creates dedicated random-access files in which the tile dataset is first stored and then parsed to retrieve tile images; and, finally, the LevelFilesBlock, a hybrid data structure that combines the ImageBlock and LevelFilesSet. This work marks the first time this hybrid approach has been implemented and applied in a web tiled map context. The JDBC API was used to integrate with a PostgreSQL database, which was then used for cross-testing among the data structures. Several benchmark tests in local and cloud environments were developed and assessed under different system configurations to compare the data structures and provide a thorough analysis of their efficiency. These benchmarks showed the efficiency of the LevelFilesSet, which retrieved tiles up to 3.3 times faster than the other data structures. Peripheral features and principles of implementing scalable web tiled map management systems across different software architectures and system configurations are also analyzed and discussed.
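As a rough illustration of the ImageBlock idea, the sketch below maps a tile coordinate to a block subfolder so that no directory accumulates too many files; the path layout and block size are assumptions, not the paper's exact scheme:

```python
import os

# Illustrative sketch of ImageBlock-style partitioning: tiles are grouped
# into fixed-size blocks of subfolders before being written to disk.

BLOCK = 64  # tiles per block along each axis (an assumed value)

def image_block_path(root, zoom, x, y):
    """Map tile (zoom, x, y) to its block subfolder and tile file."""
    bx, by = x // BLOCK, y // BLOCK
    return os.path.join(root, str(zoom), f"{bx}_{by}", f"{x}_{y}.png")

print(image_block_path("tiles", 12, 1500, 2700))
# tiles/12/23_42/1500_2700.png (on POSIX systems)
```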


2021 ◽  
Vol 13 (4) ◽  
pp. 559
Author(s):  
Milto Miltiadou ◽  
Neill D. F. Campbell ◽  
Darren Cosker ◽  
Michael G. Grant

In this paper, we investigate the performance of six data structures for managing voxelised full-waveform airborne LiDAR data during 3D polygonal model creation. While full-waveform LiDAR data have been available for over a decade, extraction of peak points remains the most widely used approach to interpreting them. The increased information stored within the waveform data makes interpretation and handling difficult, so it is important to investigate which data structures are most appropriate for storing and interpreting the data. The data structures are tested in terms of time efficiency and memory consumption at run time and are the following: (1) a 1D-Array, which guarantees coherent memory allocation; (2) Voxel Hashing, which uses a hash table for storing the intensity values; (3) an Octree; (4) Integral Volumes, which allows the sum of any cuboid area to be found in constant time; (5) Octree Max/Min, an upgraded octree; and (6) the Integral Octree, which is proposed here as an attempt to combine the benefits of octrees and Integral Volumes. It is shown that Integral Volumes is the most time-efficient data structure, but it requires the most memory. Furthermore, the 1D-Array and Integral Volumes require the allocation of coherent space in memory including the empty voxels, while Voxel Hashing and the octree-related data structures do not need to allocate memory for empty voxels and, as the tests conducted show, therefore allocate less memory. To sum up, there is a need to investigate how LiDAR data are stored in memory: each tested data structure has different benefits and downsides, so each application should be examined individually.
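For illustration, here is a minimal sketch of the Integral Volumes idea: a 3D prefix-sum grid answers any cuboid-sum query in constant time via inclusion-exclusion. The grid size and data are arbitrary assumptions:

```python
import numpy as np

# Sketch of Integral Volumes: after a one-off prefix-sum pass, the sum of
# any axis-aligned cuboid needs at most eight lookups (inclusion-exclusion).

def integral_volume(vox):
    return vox.cumsum(0).cumsum(1).cumsum(2)

def cuboid_sum(iv, x0, y0, z0, x1, y1, z1):
    """Sum of vox[x0:x1+1, y0:y1+1, z0:z1+1] in constant time."""
    s = iv[x1, y1, z1]
    if x0: s -= iv[x0-1, y1, z1]
    if y0: s -= iv[x1, y0-1, z1]
    if z0: s -= iv[x1, y1, z0-1]
    if x0 and y0: s += iv[x0-1, y0-1, z1]
    if x0 and z0: s += iv[x0-1, y1, z0-1]
    if y0 and z0: s += iv[x1, y0-1, z0-1]
    if x0 and y0 and z0: s -= iv[x0-1, y0-1, z0-1]
    return s

vox = np.random.rand(32, 32, 32)  # dense voxel intensities (toy data)
iv = integral_volume(vox)
assert np.isclose(cuboid_sum(iv, 2, 3, 4, 10, 11, 12),
                  vox[2:11, 3:12, 4:13].sum())
```

Note the trade-off the paper reports: the prefix-sum grid must be allocated densely, empty voxels included, which is exactly why Integral Volumes is fast but memory-hungry.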


2018 ◽  
Vol 18 (3-4) ◽  
pp. 470-483 ◽  
Author(s):  
Gregory J. Duck ◽  
Joxan Jaffar ◽  
Roland H. C. Yap

Abstract. Malformed data structures can lead to runtime errors such as arbitrary memory access or corruption. Despite this, reasoning about data-structure properties for low-level heap-manipulating programs remains challenging. In this paper we present a constraint-based program analysis that checks data-structure integrity, with respect to given target data-structure properties, as the heap is manipulated by the program. Our approach is to automatically generate a solver for the properties from the type definitions of the target program. The generated solver is implemented using a Constraint Handling Rules (CHR) extension of built-in heap, integer, and equality solvers. A key property of our program analysis is that the target data-structure properties are shape-neutral, i.e., the analysis does not check for properties tied to a given data-structure graph shape, such as doubly-linked lists versus trees. Nevertheless, the analysis can detect errors in a wide range of data-structure-manipulating programs, including those that use lists, trees, DAGs, and graphs. We present an implementation that uses the Satisfiability Modulo Constraint Handling Rules (SMCHR) system. Experimental results show that our approach works well for real-world C programs.
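The paper's analysis is static and CHR-based; as a loose analogue only, the sketch below checks a node-local (hence shape-neutral) property over an arbitrary heap graph at run time, working unchanged on lists, trees, DAGs, and cyclic graphs. All names and the property are invented:

```python
# Rough, hypothetical analogue of a shape-neutral integrity check: traverse
# whatever heap graph is reachable and verify a per-node property derived
# from the type definition, making no assumption about the graph's shape.

class Node:
    def __init__(self, key, children=()):
        self.key = key
        self.children = list(children)

def check_integrity(root, prop):
    """Visit every reachable node once; return the nodes violating `prop`."""
    seen, stack, bad = set(), [root], []
    while stack:
        n = stack.pop()
        if id(n) in seen:
            continue
        seen.add(id(n))
        if not prop(n):
            bad.append(n)
        stack.extend(n.children)
    return bad

# Invented property: keys must be non-negative (a type-level invariant).
a, b = Node(1), Node(-5)
a.children.append(b); b.children.append(a)  # a cyclic graph, still handled
print([n.key for n in check_integrity(a, lambda n: n.key >= 0)])  # [-5]
```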


Author(s):  
Sudeep Sarkar ◽  
Dmitry Goldgof

There is a growing need for expertise in both image analysis and software engineering. To date, these two areas have been taught separately in undergraduate computer and information science curricula. However, we have found that an introduction to image analysis can be easily integrated into data-structure courses without detracting from the original goal of teaching data structures. Some image processing tasks offer a natural way to introduce basic data structures such as arrays, queues, stacks, trees, and hash tables. Not only does this integrated strategy expose students to image-related manipulations at an early stage of the curriculum, it also imparts cohesiveness to the data-structure assignments and brings them closer to real life. In this paper we present a set of programming assignments that integrates undergraduate data-structure education with image processing tasks; one such assignment flavor is sketched below. These assignments can be incorporated into existing data-structure courses with low time and software overheads. We have used these assignment sets three times: once in a 10-week data-structure course at the University of California, Santa Barbara, and twice in 15-week courses at the University of South Florida, Tampa.
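As an example of how an image task exercises a basic data structure (our own sketch, not necessarily one of the paper's assignments), a queue drives a breadth-first flood fill over an image grid:

```python
from collections import deque

# Classic assignment flavor: a FIFO queue performs BFS flood fill,
# relabeling one 4-connected region of a small integer image.

def flood_fill(img, start, new_label):
    """Relabel the 4-connected region containing `start` in place."""
    rows, cols = len(img), len(img[0])
    old = img[start[0]][start[1]]
    if old == new_label:
        return img
    q = deque([start])
    while q:
        r, c = q.popleft()
        if 0 <= r < rows and 0 <= c < cols and img[r][c] == old:
            img[r][c] = new_label
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return img

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
print(flood_fill(img, (0, 0), 9))  # [[9, 9, 1], [9, 1, 1], [1, 1, 0]]
```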


Algorithms ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 128 ◽  
Author(s):  
Shuhei Denzumi ◽  
Jun Kawahara ◽  
Koji Tsuda ◽  
Hiroki Arimura ◽  
Shin-ichi Minato ◽  
...  

In this article, we propose a succinct data structure for zero-suppressed binary decision diagrams (ZDDs). A ZDD represents sets of combinations efficiently, and various set operations can be performed on a ZDD without explicitly extracting combinations. Thanks to these features, ZDDs have been applied to web information retrieval, information integration, and data mining. However, to support rich manipulation of sets of combinations and future updates, ZDDs require considerable space, which means there is still room for compression. This paper introduces a new succinct data structure, called DenseZDD, that further compresses a ZDD when we do not need to perform set operations on it but only want to test whether a given set belongs to the family represented by the ZDD, or to count the number of elements in the family. We also propose a hybrid method that combines DenseZDDs with ordinary ZDDs. Numerical experiments show that our data structures are about three times smaller than ordinary ZDDs, and that membership operations and random sampling on DenseZDDs are about ten times and three times faster, respectively, than on ordinary ZDDs for some datasets.
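For orientation, here is a minimal sketch of the membership operation that DenseZDD accelerates, over a plain tuple-encoded ZDD; this naive encoding is an assumption for illustration, not the paper's succinct representation:

```python
# Sketch of ZDD membership. A node is (var, lo, hi); 0 and 1 are terminals.
# Semantics: family(node) = family(lo) U { S U {var} : S in family(hi) },
# with variables tested in increasing order along any path.

def member(node, items):
    """Is the combination `items` (a set of ints) in the family at `node`?"""
    remaining = sorted(items)
    while node not in (0, 1):
        var, lo, hi = node
        if remaining and remaining[0] < var:
            return False      # an item was skipped: not in the family
        if remaining and remaining[0] == var:
            remaining.pop(0)
            node = hi         # item present: take the 1-edge
        else:
            node = lo         # item absent: take the 0-edge
    return node == 1 and not remaining

# Family {{1,2},{1,3}}: the root tests item 1, then items 2 and 3 follow.
z23 = (2, (3, 0, 1), 1)       # suffix family {{2}, {3}}
root = (1, 0, z23)            # prepend item 1 to every combination
print(member(root, {1, 3}), member(root, {2}))  # True False
```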


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Inanç Birol ◽  
Justin Chu ◽  
Hamid Mohamadi ◽  
Shaun D. Jackman ◽  
Karthika Raghavan ◽  
...  

De novo assembly of the genome of a species is essential in the absence of a reference genome sequence. Many scalable assembly algorithms use the de Bruijn graph (DBG) paradigm to reconstruct genomes, in which a table of subsequences of a certain length is derived from the reads and their overlaps are analyzed to assemble sequences. Although longer subsequences unlock longer genomic features for assembly, the associated increase in compute resources limits the practicability of DBG relative to other assembly archetypes already designed for longer reads. Here, we revisit the DBG paradigm to adapt it to the changing sequencing technology landscape and introduce three data structure designs for spaced seeds in the form of paired subsequences. These data structures address the memory and run-time constraints imposed by longer reads. We observe that when a fixed distance separates seed pairs, sequence specificity increases with gap length. Further, we note that Bloom filters are well suited to storing spaced seeds implicitly while remaining tolerant to sequencing errors. Building on this concept, we describe a data structure for tracking the frequencies of observed spaced seeds. These data structure designs will have applications in genome, transcriptome, and metagenome assemblies, as well as read error correction.
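A rough sketch of the spaced-seed idea backed by an ordinary Bloom filter follows; the seed length, gap, filter size, and hashing scheme are illustrative assumptions, not the paper's design:

```python
import hashlib

# Sketch: a spaced seed here is two k-mers separated by a fixed gap; only
# the pair (never the gap) is hashed into the Bloom filter, so the seeds
# are stored implicitly and sequencing errors inside the gap do not matter.

K, GAP, M_BITS, N_HASH = 8, 4, 1 << 20, 3   # illustrative parameters

def spaced_seeds(read):
    span = 2 * K + GAP
    for i in range(len(read) - span + 1):
        yield read[i:i+K] + read[i+K+GAP:i+span]   # left + right k-mer

def _hashes(seed):
    for j in range(N_HASH):
        h = hashlib.sha256(f"{j}:{seed}".encode()).digest()
        yield int.from_bytes(h[:8], "big") % M_BITS

class BloomFilter:
    def __init__(self):
        self.bits = bytearray(M_BITS // 8)
    def add(self, seed):
        for h in _hashes(seed):
            self.bits[h >> 3] |= 1 << (h & 7)
    def __contains__(self, seed):
        return all(self.bits[h >> 3] & (1 << (h & 7)) for h in _hashes(seed))

bf = BloomFilter()
seeds = list(spaced_seeds("ACGTACGTACGGTACCAGTACGTA"))
for s in seeds:
    bf.add(s)
print(seeds[0] in bf, "GGGGGGGG" + "CCCCCCCC" in bf)  # True False (w.h.p.)
```

Replacing each bit with a small counter would turn this into the frequency-tracking variant the abstract mentions, at the cost of extra memory per cell.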


2019 ◽  
Vol 2 (2) ◽  
pp. 82-89
Author(s):  
Nor Tasik Misbahrudin

Waqf is a voluntary charity that cannot be disposed of, and its ownership cannot be transferred once it is declared a waqf asset. Waqf institutions play an important role in supporting the development of the Muslim ummah through wealth distribution. State Islamic Religious Councils (SIRCs) in Malaysia are the sole trustees that manage and develop waqf assets. Based on the selected inputs and outputs, the intermediary approach treats cash waqf received as output and the total expenditure of SIRCs as input; under this approach, SIRCs act as intermediaries between the waqif (giver) and the beneficiaries. This paper therefore analyzes the efficiency of waqf institutions in Malaysia using the output-oriented Data Envelopment Analysis (DEA) method under the Variable Returns to Scale (VRS) assumption. Four SIRCs were selected as decision-making units (DMUs) for the period 2011 to 2015. The results indicate that the yearly changes in average technical efficiency are driven by both pure technical efficiency and scale efficiency; however, the inefficiency of Malaysian waqf institutions is mostly attributable to pure technical efficiency rather than scale. The year 2012 showed the highest average technical efficiency, at 73.9%, as most institutions operated at an optimal level of input to produce output. The results thus suggest that both technical and scale efficiency should be improved to achieve the most efficient and productive level of performance, so that the institutions fulfill their objective as intermediaries between waqif and beneficiaries.
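For orientation, here is a minimal sketch of the output-oriented VRS (BCC) linear program underlying such an analysis, solved with scipy's linprog on invented toy data (the SIRC figures are not reproduced here). Dropping the convexity constraint sum(lambda) = 1 would give the CRS model instead:

```python
import numpy as np
from scipy.optimize import linprog

# Output-oriented VRS DEA for DMU o: maximise phi subject to
#   sum_j lambda_j x_j <= x_o   (inputs of the reference combination)
#   sum_j lambda_j y_j >= phi * y_o   (outputs must cover phi * y_o)
#   sum_j lambda_j = 1, lambda >= 0   (VRS convexity)
# Technical efficiency is 1 / phi*. Data below are invented toy values.

X = np.array([[5.0], [8.0], [6.0], [9.0]])  # input: total expenditure
Y = np.array([[3.0], [7.0], [4.0], [6.0]])  # output: cash waqf received

def vrs_output_efficiency(o):
    n = len(X)
    c = np.r_[-1.0, np.zeros(n)]                 # variables [phi, lambda_j]
    A_in = np.c_[np.zeros((X.shape[1], 1)), X.T]  # inputs <= x_o
    A_out = np.c_[Y[o][:, None], -Y.T]            # phi*y_o - sum <= 0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[X[o], np.zeros(Y.shape[1])]
    A_eq = np.r_[0.0, np.ones(n)][None, :]        # VRS: sum lambda = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return 1.0 / res.x[0]                         # efficiency in (0, 1]

print([round(vrs_output_efficiency(o), 3) for o in range(4)])
# roughly [1.0, 1.0, 0.923, 0.857] for this toy data
```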


2021 ◽  
Vol 4 (1) ◽  
pp. 9-14
Author(s):  
Abdujabbor Abidov ◽  

This article is devoted to the development of a model for determining the standard of living of the population. It also considers the use of data warehouses, the communication models of e-government that underlie digital platforms, big data, issues of the digital economy, the choice of data structures, and methods for formally modeling relationships. As a result, a model was developed using the poverty criteria set out in the Poverty Measurement Toolkit for determining the international poverty line.


Author(s):  
Satya Swesty Widiyana ◽  
Rus Indiyanto

Abstract. This study addresses problems at the Heaven Store, ranging from turnover that does not reach its target and inconsistent product displays across branches to the small number of customer referrals, all symptoms of weak customer satisfaction. Because the input and output values obtained from each branch differ, customers demand that the Heaven Store correct weaknesses in the efficiency of customer service and satisfaction. We respond to this challenge with the study "Analysis of Service Efficiency Measurement Using the Data Envelopment Analysis (DEA) Method at the Heaven Store in West Surabaya". The study assists the management of the Heaven Store in measuring the efficiency of its five branches so that service quality can be improved using Data Envelopment Analysis (DEA), a method that determines the relative efficiency of similar organizational units rather than letting the unit concerned judge its own efficiency. This analysis is intended to help management attract customers to buy the products sold at the Heaven Store. After calculating the DEA CRS mathematical model, the fifth branch of the Heaven Store obtained an efficiency of 0.8479688; after improving its inputs and outputs according to the targets of the DEA CRS model, the relative efficiency of DMU 5 increased from 0.8479688 (inefficient) to 1.000000 (efficient). Keywords: Data Envelopment Analysis, customer satisfaction, efficiency

