Path Sensitive Cache Analysis Using Cache Miss Paths

Author(s):  
Kartik Nagar ◽  
Y. N. Srikant
2013 ◽  
Vol 380-384 ◽  
pp. 1969-1972
Author(s):  
Bo Yuan ◽  
Jin Dou Fan ◽  
Bin Liu

Traditional network processors (NPs) adopt either a local memory mechanism or a cache mechanism as their hierarchical memory structure. The local memory mechanism usually offers only a small on-chip memory space, which is not suited to varied and complicated applications. The cache mechanism is better at handling temporary data that must be read and written frequently, but in deep packet processing a cache miss occurs when reading each segment of a packet. We propose a cooperative mechanism of local memory and cache, in which packet data and temporary data are stored in local memory and cache, respectively. Analysis and experimental evaluation show that the cooperative mechanism can improve the performance of network processors and reduce processing latency at little extra resource cost.
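The split described above can be sketched as a toy cost model: packet-segment reads go to local memory at a fixed low latency, while temporary data goes through a small cache. This is an illustrative sketch, not the authors' implementation; the class name, LRU policy, and latency numbers are assumptions chosen only to show why separating the two data kinds avoids per-segment cache misses.

```python
# Toy model of the cooperative local-memory/cache mechanism.
# All latency figures (in cycles) are illustrative assumptions.

class CooperativeMemory:
    LOCAL_LATENCY = 1        # assumed on-chip local-memory access cost
    CACHE_HIT_LATENCY = 1    # assumed cache-hit cost
    CACHE_MISS_LATENCY = 20  # assumed off-chip miss penalty

    def __init__(self, cache_lines=4):
        self.cache = []               # LRU order: oldest entry first
        self.cache_lines = cache_lines
        self.cycles = 0

    def read_packet_segment(self, addr):
        # Packet data lives in local memory: fixed low latency,
        # and no cache miss per segment read.
        self.cycles += self.LOCAL_LATENCY

    def read_temp_data(self, addr):
        # Temporary data goes through the cache (LRU replacement).
        if addr in self.cache:
            self.cache.remove(addr)
            self.cycles += self.CACHE_HIT_LATENCY
        else:
            self.cycles += self.CACHE_MISS_LATENCY
            if len(self.cache) >= self.cache_lines:
                self.cache.pop(0)     # evict least recently used
        self.cache.append(addr)

mem = CooperativeMemory()
for seg in range(8):                  # 8 packet-segment reads: no misses
    mem.read_packet_segment(seg)
for _ in range(3):                    # reused temporaries: hits after warm-up
    for addr in ("counter", "flow_state"):
        mem.read_temp_data(addr)
print(mem.cycles)                     # → 52
```

With a pure cache, each of the eight segment reads would also risk the 20-cycle miss penalty; routing them to local memory caps their cost at one cycle each.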


2011 ◽  
Vol 8 (3) ◽  
pp. 7-10 ◽  
Author(s):  
Tomasz Dudziak ◽  
Jörg Herter

Author(s):  
Valentin Touzeau ◽  
Claire Maïza ◽  
David Monniaux ◽  
Jan Reineke

2012 ◽  
Vol 2012 ◽  
pp. 1-12 ◽  
Author(s):  
Shaily Mittal ◽  
Nitin

Nowadays, manufacturers of Multiprocessor System-on-Chip (MPSoC) architectures for embedded systems focus on providing increased concurrency rather than increased clock speed. However, managing concurrency is a tough task; one major issue is synchronizing concurrent accesses to shared memory. An important characteristic of any system design process is memory configuration and data flow management. Although it is very important to select a correct memory configuration, it can be equally imperative to choreograph the data flow between the various levels of memory in an optimal manner. Memory map is a multiprocessor simulator that choreographs data flow in the individual caches of multiple processors and in shared memory systems. The simulator allows the user to specify cache reconfigurations and the number of processors within the application program, and it evaluates the cache miss and hit rates for each configuration phase, taking reconfiguration costs into account. The code is open source and written in Java.
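The per-phase accounting the abstract describes can be sketched as follows. The actual simulator is in Java; this Python sketch only illustrates the idea of evaluating hit/miss rates per configuration phase and charging a reconfiguration cost between phases. The direct-mapped cache model, cost constants, and function names are all invented for illustration.

```python
# Illustrative sketch (not the Memory map simulator itself): per-phase
# hit/miss accounting with a fixed cost charged at each reconfiguration.

class DirectMappedCache:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.tags = [None] * num_sets
        self.hits = self.misses = 0

    def access(self, addr):
        idx, tag = addr % self.num_sets, addr // self.num_sets
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.tags[idx] = tag

def simulate(phases, reconfig_cost=100):
    """phases: list of (num_sets, address_trace); returns (stats, cost).

    Costs are assumed: 1 cycle per hit, 10 per miss, plus a flat
    reconfiguration charge whenever the configuration changes.
    """
    total_cost, stats = 0, []
    for i, (num_sets, trace) in enumerate(phases):
        if i > 0:
            total_cost += reconfig_cost   # charge the reconfiguration
        cache = DirectMappedCache(num_sets)
        for addr in trace:
            cache.access(addr)
        stats.append((cache.hits, cache.misses))
        total_cost += cache.hits + 10 * cache.misses
    return stats, total_cost

# Same trace under two configurations: 0 and 4 conflict in a 4-set
# cache but not in an 8-set cache.
stats, cost = simulate([(4, [0, 4, 0, 4]), (8, [0, 4, 0, 4])])
print(stats, cost)   # → [(0, 4), (2, 2)] 162
```

The example shows the trade-off such a simulator exposes: the larger configuration eliminates the conflict misses, but whether reconfiguring pays off depends on the reconfiguration cost relative to the miss penalty saved.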


2020 ◽  
Vol 10 (7) ◽  
pp. 2226
Author(s):  
Junghwan Kim ◽  
Myeong-Cheol Ko ◽  
Jinsoo Kim ◽  
Moon Sun Shin

This paper proposes an elaborate route prefix caching scheme for fast packet forwarding in named data networking (NDN), a next-generation Internet architecture. Name lookup is a crucial function of an NDN router, which forwards a packet based on its name rather than an IP address, and it carries out a complex process to find the longest matching prefix for the content name. Moreover, the size of a name prefix is variable and unbounded, which makes name lookup even more complicated and time-consuming. Name lookup can be sped up by route prefix caching, but a problem arises when non-leaf prefixes are cached: a cached non-leaf prefix may not be the longest match. The proposed prefix caching scheme can cache non-leaf prefixes, as well as leaf prefixes, without incurring this problem. For this purpose, a Bloom filter is kept for each prefix. The Bloom filter, which is widely used for membership checking, is utilized to indicate the branch information of a non-leaf prefix. The experimental results show that the proposed caching scheme achieves a much higher hit ratio than other caching schemes. Furthermore, how much the parameters of the Bloom filter affect the cache miss count is quantitatively evaluated; the best performance is achieved with merely 8-bit Bloom filters and two hash functions.
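The per-prefix Bloom filter can be sketched as follows, using the abstract's best-performing parameters (8 bits, two hash functions). This is a hedged sketch of the general idea, not the paper's exact design: the hash construction, function names, and the example name components are assumptions. The filter records which child branches exist below a cached non-leaf prefix; a negative query proves the cached prefix is the longest match, while a positive one (possibly a false positive) forces a full lookup.

```python
# Illustrative per-prefix Bloom filter: 8 bits, two hash functions,
# recording the child branches of a cached non-leaf prefix.
# Hash derivation and names are assumptions for illustration.

import hashlib

FILTER_BITS = 8   # filter size reported best in the paper
NUM_HASHES = 2    # hash-function count reported best

def _bit_positions(component):
    # Derive NUM_HASHES bit positions from one SHA-256 digest.
    digest = hashlib.sha256(component.encode()).digest()
    return [digest[i] % FILTER_BITS for i in range(NUM_HASHES)]

def make_filter(child_components):
    # Set the bits for every child branch of the non-leaf prefix.
    bits = 0
    for comp in child_components:
        for pos in _bit_positions(comp):
            bits |= 1 << pos
    return bits

def may_have_branch(bits, component):
    # Bloom-filter query: no false negatives, possible false positives.
    return all(bits >> pos & 1 for pos in _bit_positions(component))

# A cached non-leaf prefix, say /com/example, with two child branches.
bf = make_filter(["video", "news"])
print(may_have_branch(bf, "video"), may_have_branch(bf, "news"))  # → True True
```

Because a Bloom filter never yields false negatives, a `False` answer for the next name component safely certifies that no longer matching prefix exists below the cached one; false positives only cost an occasional unnecessary full lookup, never an incorrect forwarding decision.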

