FMPC: A Fast Multi-Dimensional Packet Classification Algorithm

2014 ◽  
Vol 644-650 ◽  
pp. 3365-3370
Author(s):  
Zhen Hong Guo ◽  
Lin Li ◽  
Qing Wang ◽  
Meng Lin ◽  
Rui Pan

With the rapid development of the Internet, the number of firewall rules is increasing. The enormous quantity of rules challenges the performance of packet classification, which has already become a bottleneck in firewalls. This paper proposes a fast multi-dimensional packet classification algorithm based on BSOL (Binary Search On Leaves), named FMPC (Fast Multi-dimensional Packet Classification). Different from BSOL, FMPC cuts all dimensions at the same time to decompose rule spaces and stores the leaf spaces into hash tables; FMPC constructs a Bloom filter for every hash table and stores them in embedded SRAM. When classifying a packet, FMPC performs parallel queries on the Bloom filters and determines how to visit the hash tables according to the results. Algorithm analysis and simulation results show that the average number of hash-table lookups of FMPC is 1 when classifying a packet, which is much smaller than that of BSOL; in the worst case, the number of hash-table lookups of FMPC is O(log(w_max + 1)), which is also smaller than that of BSOL in a multi-dimensional environment, where w_max is the length, in bits, of the longest dimension.
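The lookup flow described in this abstract can be illustrated with a minimal sketch: one Bloom filter guarding each leaf-space hash table, queried first so that, ideally, only one hash table is actually probed per packet. The names (`leaf_key`, `leaf_spaces`) and sizes below are illustrative assumptions, not taken from the paper.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

# One (Bloom filter, hash table) pair per decomposed leaf space.
leaf_spaces = [(BloomFilter(), {}) for _ in range(4)]

def insert_rule(space_id, leaf_key, rule):
    bf, table = leaf_spaces[space_id]
    bf.add(leaf_key)
    table[leaf_key] = rule

def classify(leaf_key):
    # Query all Bloom filters "in parallel" (sequentially here), then probe
    # only the hash tables whose filters report a possible match.
    for bf, table in leaf_spaces:
        if bf.might_contain(leaf_key):
            rule = table.get(leaf_key)   # Bloom false positives fall through
            if rule is not None:
                return rule
    return None  # default rule
```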

2002 ◽  
Vol 03 (03n04) ◽  
pp. 105-128 ◽  
Author(s):  
KUN SUK KIM ◽  
SARTAJ SAHNI

Waldvogel et al. [9] have proposed a collection of hash tables (CHT) organization for an IP router table. Each hash table in the CHT contains prefixes of the same length together with markers for longer-length prefixes. IP lookup can be done with O(log l_dist) hash-table searches, where l_dist is the number of distinct prefix lengths (also equal to the number of hash tables in the CHT). Srinivasan and Varghese [8] have proposed the use of controlled prefix expansion to reduce the value of l_dist. The details of their algorithm to reduce the number of lengths are given in [7]. The complexity of this algorithm is O(nW^2), where n is the number of prefixes and W is the length of the longest prefix. The algorithm of [7] does not minimize the storage required by the prefixes and markers for the resulting set of prefixes. We develop an algorithm that minimizes the storage requirement but takes O(nW^3 + kW^4) time, where k is the desired number of distinct lengths. We also propose improvements to the heuristic of [7].
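A hedged sketch of the CHT idea referenced above, assuming a simplified marker placement (markers at every shorter length rather than only on the binary-search path): prefixes of each length live in their own hash table, and a binary search over the sorted distinct lengths needs only O(log l_dist) hash-table probes.

```python
def build_cht(prefixes, lengths):
    # prefixes: {bitstring prefix: next hop}; lengths: distinct prefix lengths
    sorted_lengths = sorted(lengths)
    tables = {l: {} for l in sorted_lengths}
    for p, nh in prefixes.items():
        tables[len(p)][p] = ("prefix", nh)
        # Simplification: add a marker at every shorter length.
        for l in sorted_lengths:
            if l < len(p) and p[:l] not in tables[l]:
                tables[l][p[:l]] = ("marker", None)
    return tables, sorted_lengths

def lookup(addr_bits, tables, sorted_lengths):
    lo, hi, best = 0, len(sorted_lengths) - 1, None
    while lo <= hi:                       # O(log l_dist) hash-table probes
        mid = (lo + hi) // 2
        l = sorted_lengths[mid]
        entry = tables[l].get(addr_bits[:l])
        if entry is None:
            hi = mid - 1                  # no match: try shorter lengths
        else:
            kind, nh = entry
            if kind == "prefix":
                best = nh                 # remember the longest match so far
            lo = mid + 1                  # match or marker: try longer lengths
    return best
```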


2011 ◽  
Vol 2011 ◽  
pp. 1-10
Author(s):  
Mahmood Ahmadi ◽  
Stephan Wong

Within packet processing systems, lengthy memory accesses greatly reduce performance. To overcome this limitation, network processors utilize many different techniques, for example, multilevel memory hierarchies, special hardware architectures, and hardware threading. In this paper, we introduce a multilevel memory architecture for counting Bloom filters. Based on the probabilities of incrementing the counters in the counting Bloom filter, a multi-level cache architecture called the cached counting Bloom filter (CCBF) is presented, where each cache level stores the items with the same counters. To test the CCBF architecture, we implement a software packet classifier that utilizes basic tuple space search using a 3-level CCBF. The results of the mathematical analysis and of the CCBF implementation for packet classification show that the proposed cache architecture decreases the number of memory accesses when compared to a standard Bloom filter. Based on the mathematical analysis of the CCBF, the number of accesses is decreased by at least 53%. The implementation results of the software packet classifier are at most 7.8% (3.5% on average) below the corresponding mathematical analysis results. This difference is due to parameters of the packet classification application such as the number of tuples, the distribution of rules across the tuples, and the hash functions used.
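A minimal sketch of the CCBF idea, under the assumption that counters are placed in cache levels according to their value so the (very common) small counters are served from the fastest level; the sizes, the value-to-level mapping, and the 3-level layout are illustrative, not the paper's exact design.

```python
import hashlib

M, K = 1 << 12, 3
LEVELS = 3

def level_of_value(c):
    # Illustrative mapping: small counters in the fastest level.
    return 0 if c <= 1 else (1 if c <= 3 else 2)

levels = [dict() for _ in range(LEVELS)]   # level -> {bit position: counter}
accesses = [0] * LEVELS                    # memory accesses per level, for analysis

def _positions(key):
    for i in range(K):
        h = hashlib.sha256(f"{i}:{key}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % M

def _read(pos):
    # Probe levels from fastest to slowest until the counter is found.
    for lvl, table in enumerate(levels):
        accesses[lvl] += 1
        if pos in table:
            return lvl, table[pos]
    return 0, 0  # a zero counter is implicit (never stored)

def insert(key):
    for pos in _positions(key):
        lvl, c = _read(pos)
        levels[lvl].pop(pos, None)
        c += 1
        levels[level_of_value(c)][pos] = c   # counter migrates with its value

def might_contain(key):
    return all(_read(pos)[1] > 0 for pos in _positions(key))
```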


2020 ◽  
Vol 10 (15) ◽  
pp. 5218 ◽  
Author(s):  
Hayoung Byun ◽  
Hyesook Lim

Hash-based data structures have been widely used in many applications. An intrinsic problem of hashing is collision, in which two or more elements are hashed to the same value. If a hash table is heavily loaded, more collisions occur. Elements that cannot be stored in a hash table because of collisions cause search failures. Many variant structures have been studied to reduce the number of collisions, but none of them completely solves the collision problem. In this paper, we claim that a functional Bloom filter (FBF) provides a lower search failure rate than hash tables when a hash table is heavily loaded. In other words, a hash table can be replaced with an FBF because the FBF achieves a lower search failure rate when storing a large amount of data in a limited amount of memory. While hash tables need to store each input key in addition to its return value, a functional Bloom filter stores return values without input keys, because the different index combinations produced by each input key can be used to identify it. We theoretically compare the search failure rate of the FBF with those of hash-based data structures such as the multi-hash table, cuckoo hash table, and d-left hash table. We also provide simulation results to validate the theoretical results. The simulation results show that the search failure rates of hash tables are larger than that of the functional Bloom filter when the load factor is larger than 0.6.
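A minimal sketch of a functional Bloom filter along the lines described above: cells store return values rather than keys, and a cell written by keys carrying different values becomes a "conflict" cell. The cell count, hash scheme, and return conventions below are illustrative assumptions.

```python
import hashlib

EMPTY, CONFLICT = None, "*"
M, K = 1 << 12, 3
cells = [EMPTY] * M

def _positions(key):
    for i in range(K):
        h = hashlib.sha256(f"{i}:{key}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % M

def insert(key, value):
    for pos in _positions(key):
        if cells[pos] is EMPTY:
            cells[pos] = value
        elif cells[pos] != value:
            cells[pos] = CONFLICT     # keys with different values collided here

def lookup(key):
    seen = set()
    for pos in _positions(key):
        if cells[pos] is EMPTY:
            return None               # definitely not stored
        if cells[pos] != CONFLICT:
            seen.add(cells[pos])
    if len(seen) == 1:
        return seen.pop()             # all non-conflict cells agree on one value
    return "indeterminable"           # search failure: conflicts or disagreement
```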


2018 ◽  
Author(s):  
Justin Chu ◽  
Hamid Mohamadi ◽  
Emre Erhan ◽  
Jeffery Tse ◽  
Readman Chiu ◽  
...  

Alignment-free classification of sequences against collections of sequences has enabled high-throughput processing of sequencing data in many bioinformatics analysis pipelines. Originally hash-table based, the indexing of k-mer sequences has seen much work on improving efficiency and reducing memory requirements through probabilistic indexing strategies. These efforts have led to lower-memory, highly efficient indexes, but they often lack sensitivity in the face of sequencing errors or polymorphism because they are k-mer based. To address this, we designed a new memory-efficient data structure that can tolerate mismatches using multiple spaced seeds, called a multi-index Bloom filter. Implemented as part of BioBloom Tools, we demonstrate our algorithm in two applications: read binning for targeted assembly and taxonomic read assignment. Our tool shows higher sensitivity and specificity for read binning than BWA MEM in an order of magnitude less time. For taxonomic classification, we show higher sensitivity than CLARK-S in an order of magnitude less time while using half the memory.
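The mismatch tolerance of spaced seeds can be illustrated with a hedged sketch: instead of exact k-mers, a read contributes only the "care" positions of each seed pattern, so a mismatch at a don't-care position still hits the index. The seed patterns and the single set per reference (standing in for a Bloom filter) are simplifications of the multi-index design, not the tool's actual implementation.

```python
SEEDS = ["1111001111", "1100111100"]   # '1' = care position, '0' = don't care

def spaced_kmers(seq, seed):
    w = len(seed)
    for i in range(len(seq) - w + 1):
        window = seq[i:i + w]
        # Keep only the care positions; don't-care bases are ignored.
        yield "".join(c for c, s in zip(window, seed) if s == "1")

def classify(read, reference_filters):
    # reference_filters: {reference name: set of spaced k-mers}
    best, best_hits = None, 0
    for name, index in reference_filters.items():
        hits = sum(1 for seed in SEEDS
                   for km in spaced_kmers(read, seed) if km in index)
        if hits > best_hits:
            best, best_hits = name, hits
    return best   # reference with the most spaced-seed hits, or None
```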


2009 ◽  
Vol 29 (2) ◽  
pp. 500-502
Author(s):  
Deng PAN ◽  
Da-fang ZHANG ◽  
Kun XIE ◽  
Ji ZHANG

2018 ◽  
Vol 15 (10) ◽  
pp. 117-128 ◽  
Author(s):  
Jinyuan Zhao ◽  
Zhigang Hu ◽  
Bing Xiong ◽  
Keqin Li

Named Data Networking (NDN) is a fast-growing architecture proposed as an alternative to the existing IP architecture. NDN allows users to request data identified by a unique name without any information about the hosting entity. NDN supports in-network caching of contents, multi-path forwarding, and data security. In NDN, packet-forwarding decisions are driven by lookup operations on the content names of NDN packets. An NDN node maintains a set of routing tables that aid in forwarding decisions. Forwarding NDN packets depends on looking up these NDN tables and performing Longest Prefix Matching (LPM) against them. NDN names are unbounded and of variable length. These features, along with large and dynamic NDN tables, pose several challenges, including increased memory requirements and delayed lookup operations. To this end, there is a need for an efficient data structure that supports fast lookup operations with low memory overhead. Several lookup techniques have been proposed in this direction. Traversing trie structures is slow since every trie level requires a memory access. Hash tables incur additional hash computations on names and suffer from collisions. Bloom filters suffer from false positives and do not support deletions. Improving the performance of these structures can lead to a better lookup solution. This survey paper explores different lookup structures for NDN networks. Performance is measured with respect to lookup rate and memory efficiency.
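As a point of reference for the lookup structures surveyed above, here is a minimal sketch of hash-table-based longest prefix matching on hierarchical NDN names: one hash table per number of name components, probed from the longest candidate prefix downward. The FIB contents and entry names are illustrative.

```python
fib = {}   # {number of components: {name prefix: next-hop face}}

def fib_insert(prefix, face):
    comps = prefix.strip("/").split("/")
    fib.setdefault(len(comps), {})["/" + "/".join(comps)] = face

def lpm_lookup(name):
    comps = name.strip("/").split("/")
    # Try the longest possible prefix first, shortening one component at a time;
    # each attempt is one hash-table lookup.
    for n in range(len(comps), 0, -1):
        table = fib.get(n)
        if table:
            face = table.get("/" + "/".join(comps[:n]))
            if face is not None:
                return face
    return None  # no matching FIB entry

fib_insert("/edu/univ/cs", 1)
fib_insert("/edu/univ", 2)
print(lpm_lookup("/edu/univ/cs/video/seg1"))   # -> 1 (longest matching prefix)
```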


Author(s):  
Jungwon Lee ◽  
Seoyeon Choi ◽  
Dayoung Kim ◽  
Yunyoung Choi ◽  
Wookyung Sun

Because the development of the internet of things (IoT) requires technology that transfers information between objects without human intervention, the core of IoT security will be secure authentication between devices or between devices and servers. Software-based authentication may be a security vulnerability in IoT, but hardware-based security technology can provide a strong security environment. Physical unclonable functions (PUFs) are hardware security elements suitable for lightweight applications. PUFs can generate challenge-response pairs (CRPs) that cannot be controlled or predicted, by exploiting inherent physical variations that occur in the manufacturing process. In particular, the pulse-width memristive PUF (PWM-PUF) improves security performance by applying different write pulse widths and bank structures. A Bloom filter (BF) is a probabilistic data structure that answers membership queries using a small amount of memory. Bloom filters can improve search performance and reduce memory usage, and they are used in areas such as networking, security, big data, and IoT. In this paper, we propose a structure that applies Bloom filters to the PWM-PUF to reduce PUF data transmission errors. The proposed structure uses two different Bloom filter types that store different information and are located in front of and behind the PWM-PUF, improving security by removing challenges from attacker access. Simulation results show that the proposed structure decreases the data transmission error rate and the reuse rate as the Bloom filter size increases; they also show that the proposed structure improves PWM-PUF security with a very small Bloom filter memory.
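The general idea of gating a PUF with two Bloom filters can be sketched as follows; this is an illustrative sketch under assumed policies (a front filter admitting only enrolled challenges, a back filter blocking reuse), not the paper's exact PWM-PUF design.

```python
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _pos(self, x):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{x}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, x):
        for p in self._pos(x):
            self.bits[p] = 1

    def query(self, x):
        return all(self.bits[p] for p in self._pos(x))

front_bf, back_bf = BloomFilter(), BloomFilter()

def enroll(challenge):
    front_bf.add(challenge)          # challenges known to the verifier

def respond(challenge, puf):
    if not front_bf.query(challenge):
        return None                  # unknown challenge: likely an attacker probe
    if back_bf.query(challenge):
        return None                  # challenge already consumed: block reuse
    back_bf.add(challenge)
    return puf(challenge)            # forward to the PUF only once per challenge
```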


Author(s):  
Adityas Widjajarto ◽  
Muharman Lubis ◽  
Vreseliana Ayuningtyas

The rapid development of information technology has made security extremely important. Apart from easy access, there are also threats to vulnerabilities: the number of cyber-attacks in 2019 reached a total of 1,494,281 around the world, as reported by the national cyber and crypto agency (BSSN) honeynet project. Thus, vulnerability analysis should be conducted to prepare for worst-case scenarios by anticipating attacks with a proper response strategy. A vulnerability is a system or design weakness that is exploited when an intruder executes commands, accesses unauthorized data, or carries out denial-of-service attacks. The study was performed using the AlienVault software for the vulnerability assessment. The results were analysed with a risk estimation formula based on the number of vulnerabilities found in relation to each threat. Threats were obtained from the analysis of sample walkthroughs, as a reference for frequently exploited weaknesses. The risk estimation indicated a highest score of 73 (seventy-three) among the 5 (five) risk types identified; the results were then re-analysed using the spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege (STRIDE) framework, which indicated that the network function does not accommodate one of the existing risk types, namely spoofing.

