Bloom Filter-Based Parallel Architecture for Accelerating Equi-Join Operation on FPGA

Electronics ◽  
2021 ◽  
Vol 10 (15) ◽  
pp. 1778
Author(s):  
Binhao He ◽  
Meiting Xue ◽  
Shubiao Liu ◽  
Wei Luo

As one of the most important operations in relational databases, the join is data-intensive and time-consuming. Thus, offloading this operation to field-programmable gate arrays (FPGAs) has attracted much interest and has been broadly researched in recent years. However, the available SRAM-based join architectures are often resource-intensive, power-consuming, or low-throughput, and a lower match rate does not lead to a shorter operation time. To address these issues, a Bloom filter (BF)-based parallel join architecture is presented in this paper. This architecture first leverages the BF to discard the tuples that cannot appear in the join result and classifies the remaining tuples into different channels. Second, a binary search tree is used to reduce the number of comparisons. The proposed method was implemented on a Xilinx FPGA, and the experimental results show that under a match rate of 50%, our architecture achieved a high join throughput of 145.8 million tuples per second and a maximum acceleration factor of 2.3 compared to existing SRAM-based join architectures.
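The abstract describes a two-stage idea: a Bloom filter first discards probe tuples that cannot appear in the join result, and only the surviving tuples go through exact matching. Below is a minimal software sketch of that filtering idea, assuming illustrative names and parameters; the paper's design is an FPGA hardware architecture, and the hash index here merely stands in for its channel classification and BST-based comparison stages.

```python
# Software-only sketch of Bloom-filter-prefiltered equi-join (illustrative, not the paper's FPGA design).
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1 << 16, num_hashes=3):   # sizes are hypothetical
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

def equi_join(build, probe):
    """build, probe: lists of (key, payload) tuples."""
    bf, index = BloomFilter(), {}
    for key, payload in build:
        bf.add(key)
        index.setdefault(key, []).append(payload)   # stand-in for the BST matching stage
    result = []
    for key, payload in probe:
        if not bf.might_contain(key):               # cheap filter: skip tuples that cannot match
            continue
        for other in index.get(key, []):            # exact match on the survivors only
            result.append((key, other, payload))
    return result
```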

2013 ◽  
Vol 7 (4) ◽  
pp. 11-21
Author(s):  
K. Saravanan ◽  
A. Senthilkumar

In this article, the authors investigate Bloom filters and introduce a new improved variant that uses a secure modified hash function and an improved mapping scheme with an efficient parallel architecture. This novel architecture provides efficient, relatively fast membership querying and compact information representation with a negligible false positive rate. It is a relatively low-power, secure design with a much lower false positive ratio than traditional Bloom filters. The design has been evaluated and tested using a Xilinx 65 nm Virtex-5 field-programmable gate array as the target technology. The performance metrics are false positive ratio, power, speed, and compactness.
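For reference, the false positive ratio metric reported here is conventionally estimated from the filter size, the hash count, and the number of inserted elements. The short sketch below computes that standard approximation; it does not reproduce the paper's modified hash function or mapping scheme.

```python
# Standard Bloom filter false-positive estimate: p ≈ (1 - e^(-k*n/m))^k
import math

def false_positive_ratio(m_bits, k_hashes, n_items):
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

def optimal_hash_count(m_bits, n_items):
    # k minimizing p for given m and n: k = (m/n) * ln 2
    return max(1, round((m_bits / n_items) * math.log(2)))

# Example: a 65,536-bit filter holding 5,000 keys
m, n = 1 << 16, 5000
k = optimal_hash_count(m, n)
print(k, false_positive_ratio(m, k, n))
```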


Author(s):  
Samina Saghir ◽  
Tasleem Mustafa

Increasing globalization of the software industry requires an exploration of requirements engineering (RE) in software development organizations spread across multiple locations. The requirements engineering task is complicated even when performed at a single site, but it becomes far more complex when stakeholder groups must define well-designed requirements across language, time zone, and cultural boundaries. Requirements prioritization (RP) is an imperative part of software requirements engineering in which requirements are ranked to develop the best-quality software. In this research, a comparative study of requirements prioritization techniques was carried out to overcome the challenges introduced by the physical distribution of stakeholders across multiple locations. The objective of this study was to compare five techniques for prioritizing software requirements and to discuss the results for global software engineering. The selected techniques were Analytic Hierarchy Process (AHP), Cumulative Voting (CV), Value Oriented Prioritization (VOP), Binary Search Tree (BST), and Numerical Assignment Technique (NAT). Finally, a framework for Global Software Engineering (GSE) is proposed to prioritize requirements for stakeholders at distributed locations.
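Of the five techniques compared, the Binary Search Tree approach is the most directly algorithmic: each requirement is placed in the tree by pairwise "is more important than" comparisons, and a traversal yields the ranking. A hedged sketch follows, assuming a hypothetical is_more_important callback (a stakeholder's pairwise judgement) and a toy importance ordering.

```python
# Sketch of BST-based requirements prioritization (callback and ordering are hypothetical).
class Node:
    def __init__(self, requirement):
        self.requirement = requirement
        self.left = None    # less important requirements
        self.right = None   # more important requirements

def insert(root, requirement, is_more_important):
    if root is None:
        return Node(requirement)
    if is_more_important(requirement, root.requirement):
        root.right = insert(root.right, requirement, is_more_important)
    else:
        root.left = insert(root.left, requirement, is_more_important)
    return root

def ranked(root):
    # Reverse in-order traversal: highest priority first.
    if root is None:
        return []
    return ranked(root.right) + [root.requirement] + ranked(root.left)

# Usage with a toy importance ordering standing in for stakeholder judgements:
order = {"security": 3, "reporting": 1, "login": 2}
root = None
for req in order:
    root = insert(root, req, lambda a, b: order[a] > order[b])
print(ranked(root))  # ['security', 'login', 'reporting']
```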


Cryptography ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 4
Author(s):  
Bayan Alabdullah ◽  
Natalia Beloff ◽  
Martin White

Data security has become crucial to most enterprise and government applications due to the increasing amount of data generated, collected, and analyzed. Many algorithms have been developed to secure data storage and transmission. However, most existing solutions require multi-round functions to prevent differential and linear attacks, which results in longer execution times and greater memory consumption that are not suitable for large datasets or delay-sensitive systems. To address these issues, this work proposes a novel algorithm that uses, on the one hand, the reflection property of a balanced binary search tree data structure to minimize overhead and, on the other hand, a dynamic offset to achieve a high security level. The performance and security of the proposed algorithm were compared to the Advanced Encryption Standard (AES) and Data Encryption Standard (DES) symmetric encryption algorithms. The proposed algorithm achieved the lowest running time with comparable memory usage and satisfied the avalanche effect criterion with a value of 50.1%. Furthermore, the randomness of the dynamic offset passed a series of National Institute of Standards and Technology (NIST) statistical tests.
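The abstract does not give the cipher itself, but the avalanche effect figure it reports (50.1%) is conventionally measured by flipping a single plaintext bit and counting how many ciphertext bits change, with roughly 50% being ideal. A hedged sketch of that measurement follows; the encrypt callable is a placeholder, not the proposed algorithm.

```python
# Conventional avalanche-effect measurement (encrypt is a placeholder cipher).
def bit_difference_ratio(a: bytes, b: bytes) -> float:
    total = len(a) * 8
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / total

def avalanche_effect(encrypt, plaintext: bytes, bit_index: int = 0) -> float:
    flipped = bytearray(plaintext)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)   # flip one input bit
    return bit_difference_ratio(encrypt(plaintext), encrypt(bytes(flipped)))
```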


2021 ◽  
Author(s):  
ZEGOUR Djamel Eddine

Today, Red-Black trees are a popular data structure typically used to implement dictionaries, associative arrays, and symbol tables within some compilers (C++, Java, …) and many other systems. In this paper, we present an improvement of the delete algorithm for this kind of binary search tree. The proposed algorithm is promising since it colors the tree differently while reducing color changes by about 29%. Moreover, the maintenance operations that re-establish the Red-Black tree balance properties are reduced by about 11%. As a consequence, the proposed algorithm saves about 4% of running time when insert and delete operations are used together, while preserving the search performance of the standard algorithm.
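The maintenance operations that re-establish the Red-Black balance properties after a delete are rotations and recolorings. As background only, here is the textbook left-rotation primitive that such rebalancing relies on; it is not the paper's improved algorithm, and the node and tree shapes shown are assumptions.

```python
# Background sketch: standard Red-Black tree node and left rotation (not the improved delete).
RED, BLACK = "red", "black"

class RBNode:
    def __init__(self, key, color=RED):
        self.key, self.color = key, color
        self.left = self.right = self.parent = None

class RBTree:
    def __init__(self):
        self.root = None

def rotate_left(tree, x):
    """Textbook left rotation: x's right child y takes x's place; x becomes y's left child."""
    y = x.right
    x.right = y.left
    if y.left is not None:
        y.left.parent = x
    y.parent = x.parent
    if x.parent is None:
        tree.root = y
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x
    x.parent = y
```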


2018 ◽  
Vol 7 (2.4) ◽  
pp. 46 ◽  
Author(s):  
Shubhanshi Singhal ◽  
Akanksha Kaushik ◽  
Pooja Sharma

Due to the drastic growth of digital data, data deduplication has become a standard component of modern backup systems. It reduces data redundancy, saves storage space, and simplifies the management of data chunks. This process is performed in three steps: chunking, fingerprinting, and indexing of fingerprints. In chunking, data files are divided into chunks, and the chunk boundary is decided by the value of the divisor. For each chunk, a unique identifying value is computed using a hash signature (e.g., MD5, SHA-1, SHA-256), known as a fingerprint. Finally, these fingerprints are stored in an index to detect redundant chunks, i.e., chunks having the same fingerprint values. In chunking, the chunk size is an important factor that should be optimal for better performance of the deduplication system. The genetic algorithm (GA) is gaining much popularity and can be applied to find the best value of the divisor. Secondly, indexing also enhances the performance of the system by reducing the search time. Binary search tree (BST) based indexing has a time complexity of O(log n), which is the minimum among the searching algorithms considered. A new model is proposed that uses a GA to find the value of the divisor; this is the first attempt to apply a GA in the field of data deduplication. The second improvement in the proposed system is that a BST index tree is used to index the fingerprints. The performance of the proposed system is evaluated on VMDK, Linux, and Quanto datasets, and a good improvement in deduplication ratio is achieved.
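As an illustration of the pipeline the abstract outlines (divisor-driven chunking, fingerprinting, and duplicate detection via an index), a hedged sketch follows. The rolling-hash chunker and its parameters are illustrative stand-ins rather than the paper's GA-tuned divisor, and a plain dictionary stands in for the BST index.

```python
# Sketch of a deduplication pipeline: divisor-based chunk boundaries, SHA-256 fingerprints,
# and an index lookup to skip duplicate chunks (parameters are illustrative).
import hashlib

def chunk(data: bytes, divisor: int = 4096, window: int = 48, min_size: int = 1024):
    chunks, start = [], 0
    for i in range(min_size, len(data)):
        h = int.from_bytes(hashlib.sha1(data[max(start, i - window):i]).digest()[:4], "big")
        if h % divisor == divisor - 1 and i - start >= min_size:   # divisor decides the boundary
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def deduplicate(data: bytes, index: dict) -> int:
    """index maps fingerprint -> chunk; returns how many bytes were newly stored."""
    stored = 0
    for c in chunk(data):
        fp = hashlib.sha256(c).hexdigest()   # unique chunk fingerprint
        if fp not in index:                  # chunks with a known fingerprint are redundant
            index[fp] = c
            stored += len(c)
    return stored
```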


Author(s):  
Chengwen Chris Wang ◽  
Daniel Sleator
