PERFORMANCE APPRAISAL OF TREAP AND HEAP SORT ALGORITHMS

2020 ◽  
Vol 18 (1) ◽  
pp. 1-10
Author(s):  
A. D. GBADEBO ◽  
A. T. AKINWALE ◽  
S. AKINLEYE

The task of storing items so that an item can be accessed quickly given its key is a ubiquitous problem in many organizations. A treap uses both a key and a priority when searching in databases. When the keys are drawn from a large totally ordered set, the items are usually stored in some sort of search tree, the simplest form of which is a binary search tree. In such a tree, a set X of n items is stored at the nodes of a rooted binary tree, with some item y ∈ X chosen to be stored at the root. A heap, as a data structure, is an array object that can be viewed as a nearly complete binary tree in which each node corresponds to the array element that stores its value. Both algorithms were subjected to sorting under the same experimental environment and conditions, implemented by means of threads that invoke the two methods simultaneously. The server records each individual search time, and these records formed the basis of the comparison. Treap was found to be faster than heap sort in sorting and searching for elements on systems with homogeneous properties.
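
To make the key/priority idea concrete, the following is a minimal sketch of a treap in Python, not the authors' implementation: insertion follows binary-search-tree order on the key and then rotates to restore a max-heap order on randomly assigned priorities, which keeps the expected search cost logarithmic.

```python
import random

class TreapNode:
    """A treap node carries a search key (BST order) and a random priority (heap order)."""
    def __init__(self, key):
        self.key = key
        self.priority = random.random()  # heap property is maintained on this value
        self.left = None
        self.right = None

def rotate_right(node):
    l = node.left
    node.left, l.right = l.right, node
    return l

def rotate_left(node):
    r = node.right
    node.right, r.left = r.left, node
    return r

def insert(root, key):
    """Insert by key as in a BST, then rotate to restore the max-heap property on priorities."""
    if root is None:
        return TreapNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.priority > root.priority:
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key)
        if root.right.priority > root.priority:
            root = rotate_left(root)
    return root

def search(root, key):
    """Standard BST search; expected O(log n) because random priorities keep the tree balanced."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root
```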

2018 ◽  
Vol 7 (2.4) ◽  
pp. 46 ◽  
Author(s):  
Shubhanshi Singhal ◽  
Akanksha Kaushik ◽  
Pooja Sharma

Due to the drastic growth of digital data, data deduplication has become a standard component of modern backup systems. It reduces data redundancy, saves storage space, and simplifies the management of data chunks. The process is performed in three steps: chunking, fingerprinting, and indexing of fingerprints. In chunking, data files are divided into chunks, and the chunk boundary is decided by the value of the divisor. For each chunk, a unique identifying value, known as a fingerprint, is computed using a hash signature (e.g., MD5, SHA-1, SHA-256). Finally, these fingerprints are stored in the index to detect redundant chunks, i.e., chunks having the same fingerprint values. The chunk size is an important factor in chunking and should be optimal for good deduplication performance. The genetic algorithm (GA), which is gaining much popularity, can be applied to find the best value of the divisor. Indexing also enhances the performance of the system by reducing the search time; binary search tree (BST) based indexing has a time complexity of O(log n), which is the minimum among the searching algorithms considered. A new model is proposed in which a GA finds the value of the divisor; this is the first attempt to apply a GA in the field of data deduplication. The second improvement in the proposed system is that a BST index tree is used to index the fingerprints. The performance of the proposed system is evaluated on the VMDK, Linux, and Quanto datasets, and a good improvement in deduplication ratio is achieved.
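
The sketch below illustrates the three deduplication steps in Python under simplifying assumptions (fixed-size chunking controlled by a divisor-like parameter, SHA-1 fingerprints, and an unbalanced BST index); it is not the authors' GA-tuned system, and all names are illustrative.

```python
import hashlib

def chunk(data: bytes, divisor: int):
    """Illustrative fixed-size chunking; the divisor-like parameter sets the chunk boundary."""
    return [data[i:i + divisor] for i in range(0, len(data), divisor)]

class FingerprintIndex:
    """A plain BST keyed on chunk fingerprints; lookup/insert is O(log n) on average."""
    def __init__(self):
        self.key = None
        self.left = self.right = None

    def insert(self, fp):
        """Return True if fp is new (unique chunk), False if it is a duplicate."""
        if self.key is None:
            self.key = fp
            return True
        if fp == self.key:
            return False
        child = 'left' if fp < self.key else 'right'
        if getattr(self, child) is None:
            setattr(self, child, FingerprintIndex())
        return getattr(self, child).insert(fp)

def deduplicate(data: bytes, divisor: int = 4096):
    """Chunk, fingerprint, and index; only chunks with unseen fingerprints are kept."""
    index, unique = FingerprintIndex(), []
    for c in chunk(data, divisor):
        fp = hashlib.sha1(c).hexdigest()  # chunk fingerprint
        if index.insert(fp):
            unique.append(c)
    return unique
```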


1991 ◽  
Vol 34 (1) ◽  
pp. 23-30 ◽  
Author(s):  
Peter Arpin ◽  
John Ginsburg

Abstract: A partially ordered set P is said to have the n-cutset property if for every element x of P, there is a subset S of P all of whose elements are noncomparable to x, with |S| ≤ n, and such that every maximal chain in P meets {x} ∪ S. It is known that if P has the n-cutset property then P has at most 2^n maximal elements. Here we are concerned with the extremal case. We let Max P denote the set of maximal elements of P. We establish the following result. THEOREM: Let n be a positive integer. Suppose P has the n-cutset property and that |Max P| = 2^n. Then P contains a complete binary tree T of height n with Max T = Max P and such that C ∩ T is a maximal chain in T for every maximal chain C of P. Two examples are given to show that this result does not extend to the case when n is infinite. However the following is shown. THEOREM: Suppose that P has the ω-cutset property and that |Max P| = 2^ω. If P — Max P is countable then P contains a complete binary tree of height ω.


2019 ◽  
Vol 11 (1) ◽  
pp. 49-70
Author(s):  
Mohsin Altaf Wani ◽  
Manzoor Ahmad

Modern GPUs perform computation at a very high rate compared to CPUs; as a result, they are increasingly used for general-purpose parallel computation. Constructing a statically optimal binary search tree is an optimization problem: find the arrangement of nodes in a binary search tree that minimizes the average search time. Knuth's modification to the dynamic programming algorithm improves the time complexity to O(n^2). We develop several GPU-based implementations of this algorithm using different approaches. Choosing the GPU implementation best suited to a given workload provides a speedup of up to four times over the other GPU-based implementations. We achieve a speedup factor of 409 on an older GTX 570 and a speedup factor of 745 on a more modern GTX 1060 when compared to a conventional single-threaded CPU-based implementation.
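
For reference, the following is a minimal single-threaded Python sketch of the O(n^2) dynamic program with Knuth's root-range restriction, assuming the standard cost model in which each key has a known access frequency; it does not reproduce the paper's GPU implementations.

```python
def optimal_bst_cost(freq):
    """
    Expected-cost dynamic program for a statically optimal BST.
    Knuth's restriction of candidate roots to [root[i][j-1], root[i+1][j]]
    brings the total work down from O(n^3) to O(n^2).
    freq[i] is the access frequency of the i-th smallest key.
    """
    n = len(freq)
    pre = [0] * (n + 1)                       # prefix sums: w(i, j) in O(1)
    for i, f in enumerate(freq):
        pre[i + 1] = pre[i] + f
    w = lambda i, j: pre[j + 1] - pre[i]

    cost = [[0] * n for _ in range(n)]
    root = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]
        root[i][i] = i

    for length in range(2, n + 1):            # interval length
        for i in range(0, n - length + 1):
            j = i + length - 1
            best, best_r = float('inf'), i
            # Knuth's bound: the optimal root lies in [root[i][j-1], root[i+1][j]]
            for r in range(root[i][j - 1], root[i + 1][j] + 1):
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                c = left + right + w(i, j)
                if c < best:
                    best, best_r = c, r
            cost[i][j], root[i][j] = best, best_r
    return cost[0][n - 1]
```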


2021 ◽  
Vol 2 (1) ◽  
pp. 19-23
Author(s):  
Agus Muliadi ◽  
Khairul Muttaqin

A Multilevel Marketing (MLM) business visualizes its business network as a hierarchical tree consisting of relationships between nodes. In practice, each node spawns branches, so the visualized tree keeps growing in height and consuming space, which degrades data access. To form the tree structure from the relationships between nodes with hierarchically ordered data, and to save time in searching and accessing data, an information system for the MLM business process was built by applying the binary tree algorithm. A binary tree is a hierarchical organization of nodes in which each node has no more than two children; the nodes below a node are called its children, and the node above a node is called its parent. A binary search tree is searched by following the left branch when the value being sought is smaller than the information at the root, or the right branch when it is greater. The goal is to store sorted data in the information system in accordance with the hierarchy of the MLM system, so that data retrieval can be done quickly and easily.
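
A minimal Python sketch of the left-smaller/right-larger placement rule described above follows; the member IDs and field names are illustrative, not the system's actual schema.

```python
class MemberNode:
    """A node in the MLM tree: smaller member IDs go left, larger IDs go right."""
    def __init__(self, member_id, name):
        self.member_id = member_id
        self.name = name
        self.left = None    # subtree of smaller IDs
        self.right = None   # subtree of larger IDs

def insert(root, member_id, name):
    """Place a new member according to BST order on the member ID."""
    if root is None:
        return MemberNode(member_id, name)
    if member_id < root.member_id:
        root.left = insert(root.left, member_id, name)
    elif member_id > root.member_id:
        root.right = insert(root.right, member_id, name)
    return root

def search(root, member_id):
    """Follow the left branch for smaller IDs and the right branch for larger IDs."""
    while root is not None and root.member_id != member_id:
        root = root.left if member_id < root.member_id else root.right
    return root
```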


Symmetry ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 1186
Author(s):  
Fahed Jubair ◽  
Mohammed Hawa

Pathfinding is the problem of finding the shortest path between a pair of nodes in a graph. In the context of uniform-cost undirected grid maps, heuristic search algorithms such as A★ and weighted A★ (WA★) have been dominantly used for pathfinding. However, the lack of knowledge about obstacle shapes in a grid map often leads heuristic search algorithms to unnecessarily explore areas where a viable path is not available. We refer to such areas in a grid map as blocked areas (BAs). This paper introduces a preprocessing algorithm that analyzes the geometry of obstacles in a grid map and stores knowledge about blocked areas in a memory-efficient balanced binary search tree. During actual pathfinding, a search algorithm consults the binary search tree to identify blocked areas in the grid map and therefore avoids exploring them, which significantly reduces the search time. The scope of the paper covers maps in which obstacles are represented as horizontal and vertical line segments. The impact of using blocked-area knowledge during pathfinding with A★ and WA★ is evaluated on a publicly available benchmark set consisting of sixty grid maps of mazes and rooms. In mazes, the search time for both A★ and WA★ is reduced by 28% on average; in rooms, it is reduced by 30% on average. This is achieved while preserving the search optimality of A★ and the search sub-optimality of WA★.
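
The paper's exact balanced-BST layout is not given in the abstract, so the Python sketch below only illustrates the lookup role it plays: per row, blocked x-intervals are kept in a sorted structure so that a search algorithm can test a cell in O(log n) before expanding it. The class and its interface are assumptions for illustration.

```python
import bisect

class BlockedAreaIndex:
    """
    Illustrative stand-in for the balanced-BST index: for each row, keep a
    sorted list of non-overlapping blocked x-intervals so that membership
    tests are logarithmic and blocked cells are skipped during search.
    """
    def __init__(self):
        self.rows = {}  # row -> sorted list of (x_start, x_end) intervals

    def add_blocked(self, row, x_start, x_end):
        bisect.insort(self.rows.setdefault(row, []), (x_start, x_end))

    def is_blocked(self, row, x):
        intervals = self.rows.get(row, [])
        i = bisect.bisect_right(intervals, (x, float('inf'))) - 1
        return i >= 0 and intervals[i][0] <= x <= intervals[i][1]
```

During node expansion in A★ or WA★, a successor cell would simply be discarded when `index.is_blocked(row, col)` returns True.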


2000 ◽  
Vol 11 (03) ◽  
pp. 485-513 ◽  
Author(s):  
SEONGHUN CHO ◽  
SARTAJ SAHNI

We develop a new class of weight-balanced binary search trees called β-balanced binary search trees (β-BBSTs). β-BBSTs are designed to have reduced internal path length and, as a result, are expected to exhibit good search-time characteristics. Individual search, insert, and delete operations in an n-node β-BBST take O(log n) time for [Formula: see text]. Experimental results comparing the performance of β-BBSTs, WB(α) trees, AVL trees, red/black trees, treaps, deterministic skip lists, and skip lists are presented. Two simplified versions of β-BBSTs are also developed.
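
The β-BBST balance rule itself is not spelled out in the abstract, so the following Python sketch only illustrates the general weight-balance idea that WB(α)-style trees (which β-BBSTs refine) rely on: every subtree must carry at least a bounded fraction of its parent's weight. Nodes are assumed to cache their subtree weight.

```python
def weight(node):
    """Weight of a subtree; an empty subtree has weight 1 by convention."""
    return 1 if node is None else node.weight  # assumes nodes cache their weight

def is_alpha_balanced(node, alpha):
    """
    Generic WB(alpha) criterion: at every node, each of the two subtrees carries
    at least an alpha fraction of the node's total weight. Rotations are used to
    restore this invariant after insertions and deletions.
    """
    if node is None:
        return True
    w = weight(node)
    if min(weight(node.left), weight(node.right)) < alpha * w:
        return False
    return is_alpha_balanced(node.left, alpha) and is_alpha_balanced(node.right, alpha)
```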


Author(s):  
Samina Saghir ◽  
Tasleem Mustafa

The increasing globalization of the software industry calls for an exploration of requirements engineering (RE) in software development organizations working at multiple locations. Requirements engineering is complicated even when performed at a single site, but it becomes far more complex when stakeholder groups define requirements across language, time-zone, and cultural boundaries. Requirements prioritization (RP) is an important part of software requirements engineering in which requirements are ranked in order to develop the best-quality software. In this research, a comparative study of requirements prioritization techniques was carried out to overcome the challenges introduced by the physical distribution of stakeholders across multiple locations. The objective of this study was to compare five techniques for prioritizing software requirements and to discuss the results in the context of global software engineering. The selected techniques were the Analytic Hierarchy Process (AHP), Cumulative Voting (CV), Value Oriented Prioritization (VOP), Binary Search Tree (BST), and the Numerical Assignment Technique (NAT). Finally, a framework for Global Software Engineering (GSE) is proposed to prioritize requirements for stakeholders at distributed locations.
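
For the BST technique among those compared, a minimal Python sketch of the usual procedure follows: each new requirement is inserted by pairwise comparison against existing nodes, and an in-order traversal yields the ranked list. The comparison callback standing in for a stakeholder's judgement is an assumption for illustration.

```python
class ReqNode:
    def __init__(self, requirement):
        self.requirement = requirement
        self.left = None    # requirements judged less important
        self.right = None   # requirements judged more important

def insert(root, requirement, is_more_important):
    """is_more_important(a, b) encodes a stakeholder's pairwise judgement of a over b."""
    if root is None:
        return ReqNode(requirement)
    if is_more_important(requirement, root.requirement):
        root.right = insert(root.right, requirement, is_more_important)
    else:
        root.left = insert(root.left, requirement, is_more_important)
    return root

def ranked(root):
    """In-order traversal lists requirements from least to most important."""
    return [] if root is None else ranked(root.left) + [root.requirement] + ranked(root.right)
```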


Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1778
Author(s):  
Binhao He ◽  
Meiting Xue ◽  
Shubiao Liu ◽  
Wei Luo

As one of the most important operations in relational databases, the join is data-intensive and time-consuming. Offloading this operation to field-programmable gate arrays (FPGAs) has therefore attracted much interest and has been broadly researched in recent years. However, the available SRAM-based join architectures are often resource-intensive, power-consuming, or low-throughput, and in these architectures a lower match rate does not lead to a shorter operation time. To address these issues, a Bloom filter (BF)-based parallel join architecture is presented in this paper. The architecture first uses the BF to discard tuples that cannot appear in the join result and classifies the remaining tuples into different channels. Second, a binary search tree is used to reduce the number of comparisons. The proposed method was implemented on a Xilinx FPGA, and the experimental results show that under a match rate of 50%, the architecture achieved a high join throughput of 145.8 million tuples per second and a maximum acceleration factor of 2.3 compared to existing SRAM-based join architectures.
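
The following is a software-level Python sketch of the Bloom-filter pre-filtering idea only, not the FPGA architecture: a filter is built over the keys of one relation, tuples of the other relation whose keys cannot match are discarded early, and only the survivors are joined. Filter sizes and hash choices are illustrative.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # derive num_hashes bit positions from salted SHA-1 digests
        for i in range(self.num_hashes):
            h = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

def bf_filtered_join(build_rel, probe_rel):
    """Equi-join on the first column; probe tuples rejected by the filter never reach the join."""
    bf, index = BloomFilter(), {}
    for row in build_rel:
        bf.add(row[0])
        index.setdefault(row[0], []).append(row)
    return [(p, b) for p in probe_rel if bf.might_contain(p[0])
            for b in index.get(p[0], [])]
```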


Cryptography ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 4
Author(s):  
Bayan Alabdullah ◽  
Natalia Beloff ◽  
Martin White

Data security has become crucial to most enterprise and government applications due to the increasing amount of data generated, collected, and analyzed. Many algorithms have been developed to secure data storage and transmission. However, most existing solutions require multi-round functions to prevent differential and linear attacks, which results in longer execution times and greater memory consumption and makes them unsuitable for large datasets or delay-sensitive systems. To address these issues, this work proposes a novel algorithm that uses, on the one hand, the reflection property of a balanced binary search tree data structure to minimize overhead and, on the other hand, a dynamic offset to achieve a high security level. The performance and security of the proposed algorithm were compared with the Advanced Encryption Standard (AES) and Data Encryption Standard (DES) symmetric encryption algorithms. The proposed algorithm achieved the lowest running time with comparable memory usage and satisfied the avalanche effect criterion at 50.1%. Furthermore, the randomness of the dynamic offset passed a series of National Institute of Standards and Technology (NIST) statistical tests.
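
The proposed cipher itself is not specified in the abstract, so the Python sketch below only illustrates how the reported avalanche criterion (roughly 50% of output bits changing for a one-bit input change) can be measured for any block-encryption function; `encrypt` is a placeholder, not the authors' algorithm.

```python
def hamming_bits(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche_ratio(encrypt, plaintext: bytes, bit: int = 0) -> float:
    """
    Flip one plaintext bit and report the fraction of ciphertext bits that change.
    A strong cipher should score close to 0.5 (the ~50% avalanche criterion).
    `encrypt` is any placeholder function bytes -> bytes of fixed output length.
    """
    flipped = bytearray(plaintext)
    flipped[bit // 8] ^= 1 << (bit % 8)
    c1, c2 = encrypt(plaintext), encrypt(bytes(flipped))
    return hamming_bits(c1, c2) / (len(c1) * 8)
```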


2020 ◽  
Vol 2020 ◽  
pp. 1-16
Author(s):  
Yu Zhang ◽  
Yin Li ◽  
Yifan Wang

Searchable symmetric encryption that supports dynamic multikeyword ranked search (SSE-DMKRS) has been intensively studied in recent years. Such a scheme allows data users to dynamically update documents and retrieve the most relevant documents efficiently. Previous schemes suffer from high computational costs because their time and space complexities are linear in the size of the dictionary generated from the dataset. In this paper, by utilizing a shallow neural network model called "Word2vec" together with a balanced binary tree structure, we propose a highly efficient SSE-DMKRS scheme. The "Word2vec" tool effectively converts documents and queries into groups of vectors whose dimensions are much smaller than the size of the dictionary, allowing us to significantly reduce the related space and time costs. Moreover, with the use of the tree-based index, our scheme achieves sublinear search time and supports dynamic operations such as insertion and deletion. Both theoretical and experimental analyses demonstrate that the efficiency of our scheme surpasses other schemes of the same kind, giving it wide applicability in real-world settings.
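
As a rough illustration of the dimensionality-reduction step only, the Python sketch below (assuming the gensim library; parameters and the averaging strategy are illustrative, not the authors' exact pipeline) trains Word2vec on a tokenized corpus and represents each document and query as the mean of its word vectors, so the vector dimension is fixed rather than dictionary-sized.

```python
import numpy as np
from gensim.models import Word2Vec  # assumes gensim 4.x

def doc_vectors(tokenized_docs, dim=100):
    """
    Train Word2vec on the corpus and embed each document as the mean of its
    word vectors: dim components instead of one component per dictionary term.
    """
    model = Word2Vec(tokenized_docs, vector_size=dim, min_count=1, workers=2)
    vecs = [np.mean([model.wv[w] for w in doc], axis=0) for doc in tokenized_docs]
    return vecs, model

# illustrative usage: embed two tiny documents and one query
docs = [["dynamic", "multikeyword", "ranked", "search"],
        ["balanced", "binary", "tree", "index"]]
vectors, model = doc_vectors(docs)
query_vec = np.mean([model.wv[w] for w in ["ranked", "search"]], axis=0)
```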

