Designing Cuckoo Based Pending Interest Table for CCN Networks

Author(s):  
Mohammad Alhisnawi ◽  
Aladdin Abdulhassan

Content Centric Networking (CCN) is a modern architecture that has received wide attention in current research as a substitute for the current IP-based architecture. Many studies have investigated this novel architecture, but only a few have focused on the Pending Interest Table (PIT), a very important component of every CCN router. The PIT plays a fundamental role in packet processing in both the upstream process (Interest packets) and the downstream process (Data packets). The PIT must be fast enough not to become an obstruction in packet processing, and it must also be large enough to store a great deal of incoming information. In this paper, we propose a new PIT design and implementation for the CCN router, named CF-PIT. Our PIT design relies on modifying and utilizing an approximate data structure called the Cuckoo filter (CF). The Cuckoo filter has ideal characteristics, such as high insertion/query/deletion performance and acceptable storage demands and false-positive probability, which, together with our modification, make it convenient for PIT implementation. The experimental results show that our CF-PIT design achieves high performance from several points of view, making it very suitable for implementation in CCN routers.
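The Cuckoo-filter operations the abstract relies on can be sketched as follows. This is a minimal illustrative Python implementation of a standard cuckoo filter with partial-key cuckoo hashing; class and parameter names are hypothetical choices of ours, not the paper's CF-PIT code:

```python
import hashlib
import random

class CuckooFilter:
    """Minimal cuckoo filter with partial-key cuckoo hashing."""

    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        # num_buckets must be a power of two so the XOR trick below
        # stays an involution: alt(alt(i, fp), fp) == i
        self.num_buckets = num_buckets
        self.bucket_size = bucket_size
        self.max_kicks = max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item):
        # short (1-byte) fingerprint of the item
        return hashlib.sha256(item.encode()).digest()[0]

    def _index(self, item):
        h = hashlib.md5(item.encode()).digest()
        return int.from_bytes(h[:4], "big") % self.num_buckets

    def _alt_index(self, index, fp):
        # second candidate bucket: i2 = i1 XOR hash(fingerprint)
        h = hashlib.md5(bytes([fp])).digest()
        return (index ^ int.from_bytes(h[:4], "big")) % self.num_buckets

    def insert(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        i2 = self._alt_index(i1, fp)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        # both buckets full: evict a resident fingerprint and relocate it
        i = random.choice((i1, i2))
        for _ in range(self.max_kicks):
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._alt_index(i, fp)
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False  # filter is considered full

    def contains(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._alt_index(i1, fp)]

    def delete(self, item):
        fp = self._fingerprint(item)
        i1 = self._index(item)
        for i in (i1, self._alt_index(i1, fp)):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```

The high insert/query/delete throughput the abstract cites comes from touching at most two buckets per operation; deletion, which plain Bloom filters lack, is what makes a cuckoo filter attractive for a PIT, where entries are removed when Data packets arrive.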

2016 ◽  
Vol 51 (4) ◽  
pp. 67-81 ◽  
Author(s):  
Antoine Kaufmann ◽  
Simon Peter ◽  
Naveen Kr. Sharma ◽  
Thomas Anderson ◽  
Arvind Krishnamurthy

2016 ◽  
Vol 44 (2) ◽  
pp. 67-81 ◽  
Author(s):  
Antoine Kaufmann ◽  
Simon Peter ◽  
Naveen Kr. Sharma ◽  
Thomas Anderson ◽  
Arvind Krishnamurthy

2021 ◽  
Vol 13 (3) ◽  
pp. 78
Author(s):  
Chuanhong Li ◽  
Lei Song ◽  
Xuewen Zeng

The continuous increase in network traffic has sharply increased the demand for high-performance packet processing systems. For a high-performance packet processing system based on multi-core processors, the packet scheduling algorithm is critical because of the significant role it plays in load distribution, which determines system throughput; it has therefore attracted intensive research attention. However, scheduling is not an easy task, since the canonical flow-level packet scheduling algorithm is vulnerable to traffic locality, while the packet-level packet scheduling algorithm fails to maintain cache affinity. In this paper, we propose an adaptive, throughput-first packet scheduling algorithm for DPDK-based packet processing systems. Exploiting DPDK's burst-oriented packet receiving and transmitting, we propose using the Subflow as both the scheduling unit and the adjustment unit, so that the proposed algorithm not only retains the advantages of flow-level packet scheduling when no adjustment occurs but also avoids packet loss as much as possible when the target core may be overloaded. Experimental results show that the proposed method outperforms Round-Robin, HRW (Highest Random Weight), and CRC32 in system throughput and packet loss rate.
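The Subflow idea described above can be illustrated with a short Python sketch: packets of the same flow arriving within one burst form a subflow, the subflow keeps its previous core for cache affinity, and it migrates only when the target core's backlog exceeds a threshold. The queue model, names, and threshold here are hypothetical simplifications of ours, not the paper's DPDK implementation:

```python
from collections import defaultdict

NUM_CORES = 4
QUEUE_LIMIT = 64  # hypothetical per-core backlog threshold

queues = [[] for _ in range(NUM_CORES)]   # per-core packet backlogs
flow_to_core = {}                         # current flow-to-core assignment

def schedule_burst(burst):
    """Dispatch one burst; each packet is a (flow_id, payload) pair."""
    # group the burst into subflows: packets of the same flow in one burst
    subflows = defaultdict(list)
    for flow_id, payload in burst:
        subflows[flow_id].append((flow_id, payload))
    for flow_id, pkts in subflows.items():
        # default: keep the flow on its previous core (cache affinity)
        core = flow_to_core.get(flow_id, hash(flow_id) % NUM_CORES)
        # adjustment: migrate the subflow if the target core is overloaded
        if len(queues[core]) + len(pkts) > QUEUE_LIMIT:
            core = min(range(NUM_CORES), key=lambda c: len(queues[c]))
        flow_to_core[flow_id] = core
        queues[core].extend(pkts)
```

Because adjustment happens per subflow rather than per packet, a flow changes cores at most once per burst, which is how the scheme balances cache affinity against overload-induced packet loss.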


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Mahdi Torabzadehkashi ◽  
Siavash Rezaei ◽  
Ali HeydariGorji ◽  
Hosein Bobarshad ◽  
Vladimir Alves ◽  
...  

In the era of big data applications, the demand for more sophisticated data centers and high-performance data processing mechanisms is increasing drastically. Data are originally stored in storage systems, so to process data, application servers need to fetch them from storage devices, which imposes the cost of moving the data through the system. This cost is directly related to the distance between the processing engines and the data, which is the key motivation for the emergence of distributed processing platforms such as Hadoop that move processing closer to the data. Computational storage devices (CSDs) push the "move processing to data" paradigm to its ultimate boundaries by deploying embedded processing engines inside storage devices. In this paper, we introduce Catalina, an efficient and flexible computational storage platform that provides a seamless environment to process data in-place. Catalina is the first CSD equipped with a dedicated application processor running a full-fledged operating system that provides filesystem-level data access for applications, so a vast spectrum of applications can be ported to run on Catalina CSDs. Due to these unique features, to the best of our knowledge, Catalina is the only in-storage processing platform that can be seamlessly deployed in clusters to run distributed applications such as Hadoop MapReduce and HPC applications in-place, without any modifications to the underlying distributed processing framework. As a proof of concept, we built a fully functional Catalina prototype and a CSD-equipped platform using 16 Catalina CSDs to run Intel HiBench Hadoop and HPC benchmarks and investigate the benefits of deploying Catalina CSDs in distributed processing environments. The experimental results show up to 2.2× improvement in performance and 4.3× reduction in energy consumption for running Hadoop MapReduce benchmarks. Additionally, thanks to the Neon SIMD engines, the performance and energy efficiency of DFT algorithms are improved up to 5.4× and 8.9×, respectively.


Author(s):  
Salvatore Di Girolamo ◽  
Andreas Kurth ◽  
Alexandru Calotoiu ◽  
Thomas Benz ◽  
Timo Schneider ◽  
...  

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yating Li ◽  
Chi Zhou ◽  
Di Wu ◽  
Min Chen

Purpose: Advances in information technology now permit the recording of massive and diverse process data, thereby making data-driven evaluations possible. This study discusses whether teachers' information literacy can be evaluated based on their online information behaviors on online learning and teaching platforms (OLTPs).
Design/methodology/approach: First, to evaluate teachers' information literacy, process data from teachers on an OLTP were combined to describe nine third-level indicators across the richness, diversity, usefulness and timeliness analysis dimensions. Second, propensity score matching (PSM) and difference tests were used to analyze the differences between the performance groups with reduced selection bias. Third, to effectively predict the information literacy score of each teacher, four sets of input variables were used for prediction with supervised learning models.
Findings: The results show that the high-performance group performs better than the low-performance group on six indicators. In addition, information-based teaching and behavioral research data best reflect the level of information literacy. In the future, deeper exploration is needed, with richer online information behavioral data and a more effective evaluation model, to increase evaluation accuracy.
Originality/value: Evaluation based on online information behaviors has concrete application scenarios, positively correlated results and prediction interpretability. Therefore, information literacy evaluations based on behaviors have great potential and favorable prospects.
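Propensity score matching, as used in the design above, pairs each unit in one performance group with the most similar unit in the other group on a previously estimated propensity score before running difference tests. A greedy stdlib-only sketch (the function name and caliper value are hypothetical choices of ours, not the study's procedure):

```python
def match_nearest(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    treated, control: lists of propensity scores in [0, 1].
    Returns (treated_score, control_score) pairs whose score
    difference is within the caliper; each control is used once.
    """
    available = sorted(control)
    pairs = []
    for t in sorted(treated):
        if not available:
            break
        best = min(available, key=lambda c: abs(c - t))
        if abs(best - t) <= caliper:
            pairs.append((t, best))
            available.remove(best)
    return pairs
```

The caliper discards matches that are too dissimilar, which is what reduces selection bias before the between-group difference tests are run.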


Author(s):  
Mitsutaka Kimura ◽  
Mitsuhiro Imaizumi ◽  
Takahito Araki

Code error correction methods have been important techniques in radio environments and video stream transmission. In general, when a server transmits data packets to a client, it resends only the lost packets; however, this method introduces transmission delay. To prevent this delay, lost packets can instead be restored on the client side using an error correction packet; this code error correction method, called Hybrid Automatic Repeat reQuest (Hybrid ARQ), has been widely researched. On the other hand, congestion control schemes have been important techniques in data communication. Some packet losses are caused by network congestion, and to prevent them, congestion control is performed by prolonging packet transmission intervals, as in the High-performance and Flexible Protocol (HpFP). In this paper, we present a stochastic model of congestion control based on the packet transmission interval, with Hybrid ARQ for data transmission. That is, if a packet loss occurs, the data packet received in error is restored by the error correction packet; moreover, if errors occur in data packets, congestion control is performed by prolonging packet transmission intervals. The mean time until packet transmission succeeds is derived analytically, and the window size that maximizes the quantity of packets delivered per unit of time until the transmission succeeds is discussed.
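The window-size trade-off in the last sentence (a larger window amortizes the repair overhead but fails more often) can be made concrete with a toy numeric model: assume an independent per-packet loss probability p and one repair packet per window that can restore a single loss. This simplified model is our illustration, not the paper's analytical derivation:

```python
def success_prob(w, p):
    """P(a window of w data packets is recoverable): zero losses,
    or exactly one loss repaired by the single repair packet."""
    return (1 - p) ** w + w * p * (1 - p) ** (w - 1)

def goodput(w, p):
    """Useful data packets delivered per packet slot, counting the
    one-packet repair overhead per window of w data packets."""
    return w * success_prob(w, p) / (w + 1)

def best_window(p, w_max=64):
    """Window size maximizing goodput under this toy model."""
    return max(range(1, w_max + 1), key=lambda w: goodput(w, p))
```

Under this model a higher loss probability pushes the optimal window size down, mirroring the congestion-control intuition of prolonging transmission intervals when errors occur.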


2014 ◽  
Vol 575 ◽  
pp. 848-853
Author(s):  
Kai Zhang ◽  
Guo Xi Li ◽  
Jing Zhong Gong ◽  
Bao Zhong Wu ◽  
Meng Zhang ◽  
...  

Because non-geometric process parameters are not considered during assembly process planning, it is difficult to control the production cycle, cost, quality, reliability, stability and consistency of high-performance mechanical systems. To change this situation, prototype software has been developed that takes the non-geometric process parameters into account. Based on the alignment information, this paper concentrates on the data modeling of the system. With the system, the manufacturing process can achieve assembly materials management, assembly process planning, alignment process monitoring, alignment data collection and statistical analysis. After the process data are analyzed, design parameters can be refined and assembly performance optimized.

