cache miss
Recently Published Documents


Total documents: 110 (last five years: 19)
H-index: 12 (last five years: 2)

2021, Vol. 11 (21), pp. 9981
Author(s): Ozoda Makhkamova, Doohyun Kim

Chatbot technologies have made our lives easier. To create a chatbot with high intelligence, a significant amount of knowledge processing is required. However, this can slow down the reaction time; hence, a mechanism to enable a quick response is needed. This paper proposes a cache mechanism to improve the response time of the chatbot service; while a CPU cache exploits the locality of references within binary code execution, our cache mechanism for chatbots uses the frequency and relevance information that potentially exists within the set of Q&A pairs. The proposed idea is to enable the broker in a multi-layered structure to analyze and store the keyword-wise relevance of the set of Q&A pairs from chatbots. In addition, the cache mechanism accumulates the frequency of the input questions by monitoring the conversation history. When a cache miss occurs, the broker selects a chatbot according to the frequency and relevance, and then delivers the query to the selected chatbot to obtain a response. This mechanism showed a significant increase in the cache hit ratio as well as an improvement in the average response time.
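As a rough illustration of the broker-side caching idea described above, the following Python sketch keeps answered Q&A pairs in a cache, accumulates keyword frequencies from the conversation history, and on a cache miss scores chatbots by frequency-weighted keyword relevance before forwarding the query. The class and method names (BrokerCache, on_miss), the LRU eviction choice, the scoring rule, and the assumed chatbot interface (name, ask()) are illustrative assumptions, not details taken from the paper.

    from collections import OrderedDict

    class BrokerCache:
        """Illustrative broker-side cache: stores Q&A pairs, tracks keyword
        frequencies from the conversation history, and on a miss picks the
        chatbot whose answers look most relevant to the query keywords."""

        def __init__(self, capacity=1000):
            self.capacity = capacity
            self.qa_cache = OrderedDict()   # question -> cached answer, in LRU order
            self.keyword_freq = {}          # keyword -> how often it has been asked
            self.relevance = {}             # (chatbot name, keyword) -> relevance score

        def lookup(self, question):
            if question in self.qa_cache:
                self.qa_cache.move_to_end(question)   # cache hit: refresh LRU position
                return self.qa_cache[question]
            return None                               # cache miss

        def on_miss(self, question, chatbots):
            keywords = question.lower().split()
            for kw in keywords:
                self.keyword_freq[kw] = self.keyword_freq.get(kw, 0) + 1

            # score each chatbot by frequency-weighted keyword relevance
            def score(bot):
                return sum(self.keyword_freq[kw] * self.relevance.get((bot.name, kw), 0.0)
                           for kw in keywords)

            best = max(chatbots, key=score)
            answer = best.ask(question)               # forward the query to the selected chatbot
            self.qa_cache[question] = answer
            if len(self.qa_cache) > self.capacity:
                self.qa_cache.popitem(last=False)     # evict the least recently used pair
            return answer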


2021, Vol. 7, pp. e435
Author(s): Adnan Mahmood Qureshi, Nadeem Anjum, Rao Naveed Bin Rais, Masood Ur-Rehman, Amir Qayyum

As a promising next-generation network architecture, named data networking (NDN) supports name-based routing and in-network caching to retrieve content in an efficient, fast, and reliable manner. Most studies on NDN have proposed innovative and efficient caching mechanisms and retrieval of content via efficient routing. However, very few studies have targeted the vulnerabilities in the NDN architecture that a malicious node can exploit to perform a content poisoning attack (CPA). Such an attack can pollute the in-network caches and the routing of content, and consequently isolate legitimate content in the network. In the past, several efforts have been made to propose mitigation strategies for the content poisoning attack, but to the best of our knowledge, no specific work has been done to address an emerging attack surface in NDN, which we call an interest flooding attack. Handling this attack surface can make content poisoning attack mitigation schemes more effective, secure, and robust. Hence, in this article, we propose the addition of a security mechanism to the CPA mitigation scheme, namely Name-Key Based Forwarding and Multipath Forwarding Based Inband Probe, in which we block the malicious face of compromised consumers by monitoring the cache-miss ratio values and the queue capacity at the edge routers. The malicious face is blocked when the cache-miss ratio hits a threshold value, which is adjusted dynamically by monitoring the cache-miss ratio and queue capacity values. The experimental results show that we successfully mitigate the vulnerability of the CPA mitigation scheme by detecting and blocking the flooding interface, at the cost of very little verification overhead at the NDN routers.
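A minimal sketch of the blocking rule summarized above might look as follows: each consumer-facing interface (face) at an edge router accumulates a cache-miss ratio, and the face is blocked once that ratio exceeds a threshold that tightens as the router's queue fills. The class name, the 100-interest warm-up, and the linear threshold formula are assumptions made for illustration; the paper's exact adjustment rule is not reproduced here.

    class EdgeRouterGuard:
        """Sketch of the face-blocking rule: block a consumer-facing interface
        when its cache-miss ratio exceeds a threshold that tightens as the
        router's queue fills up. Formula and constants are illustrative."""

        def __init__(self, base_threshold=0.8, queue_capacity=1000):
            self.base_threshold = base_threshold
            self.queue_capacity = queue_capacity
            self.queue_len = 0              # updated by the router's queueing logic
            self.stats = {}                 # face id -> (misses, total interests)
            self.blocked = set()

        def record_interest(self, face_id, cache_hit):
            if face_id in self.blocked:
                return False                # drop traffic arriving on a blocked face
            misses, total = self.stats.get(face_id, (0, 0))
            self.stats[face_id] = (misses + (0 if cache_hit else 1), total + 1)
            return True

        def dynamic_threshold(self):
            # the fuller the queue, the lower the tolerated cache-miss ratio
            load = self.queue_len / self.queue_capacity
            return self.base_threshold * (1.0 - 0.5 * load)

        def enforce(self):
            for face_id, (misses, total) in self.stats.items():
                if total > 100 and misses / total >= self.dynamic_threshold():
                    self.blocked.add(face_id)   # suspected flooding face is blocked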


Author(s): Tiancheng Qin, S. Rasoul Etesami

We consider a generalization of the standard cache problem called file-bundle caching, where different queries (tasks), each containing l ≥ 1 files, sequentially arrive. An online algorithm that does not know the sequence of queries ahead of time must adaptively decide on what files to keep in the cache to incur the minimum number of cache misses. Here a cache miss refers to the case where at least one file in a query is missing among the cache files. In the special case where l = 1, this problem reduces to the standard cache problem. We first analyze the performance of the classic least recently used (LRU) algorithm in this setting and show that LRU is a near-optimal online deterministic algorithm for file-bundle caching with regard to competitive ratio. We then extend our results to a generalized (h, k)-paging problem in this file-bundle setting, where the performance of the online algorithm with a cache size k is compared to an optimal offline benchmark of a smaller cache size h < k. In this latter case, we provide a randomized O(l ln(k/(k − h)))-competitive algorithm for our generalized (h, k)-paging problem, which can be viewed as an extension of the classic marking algorithm. We complete this result by providing a matching lower bound for the competitive ratio, indicating that the performance of this modified marking algorithm is within a factor of 2 of any randomized online algorithm. Finally, we look at the distributed version of the file-bundle caching problem where there are m ≥ 1 identical caches in the system. In this case, we show that for m = l + 1 caches, there is a deterministic distributed caching algorithm that is (l² + l)-competitive and a randomized distributed caching algorithm that is O(l ln(2l + 1))-competitive when l ≥ 2. We also provide a general framework to devise other efficient algorithms for the distributed file-bundle caching problem and evaluate the performance of our results through simulations.
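The file-bundle miss model and the LRU policy analyzed above can be illustrated with a short sketch: a query of l files counts as one miss if any of its files is absent from the cache, and all files of the query are then brought into (or refreshed in) an LRU-ordered cache. This is a minimal interpretation written for illustration, not the authors' implementation.

    from collections import OrderedDict

    def file_bundle_lru(queries, cache_size):
        # Count cache misses for file-bundle caching under LRU: a query is a
        # miss if any of its files is absent; all its files are then cached
        # (or refreshed), evicting the least recently used files if needed.
        cache = OrderedDict()               # file -> None, kept in LRU order
        misses = 0
        for query in queries:
            if any(f not in cache for f in query):
                misses += 1                 # at least one missing file -> one miss
            for f in query:
                if f in cache:
                    cache.move_to_end(f)
                else:
                    cache[f] = None
                    if len(cache) > cache_size:
                        cache.popitem(last=False)   # evict least recently used file
        return misses

    # example with l = 2 files per query and a cache of size k = 3
    print(file_bundle_lru([{"a", "b"}, {"b", "c"}, {"a", "b"}, {"c", "d"}], 3))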


Author(s): Viktor Shamparov, Murad Neiman-Zade

In this research-in-progress report, we propose a novel approach to unified cache usage analysis for implementing data layout optimizations in the LCC compiler for the Elbrus and SPARC architectures. The approach consists of three parts. The first part is generalizing two methods of estimating the number of cache misses and choosing the one more applicable in the compiler. The second part is finding a practical solution to the problem of minimizing the number of cache misses. The third part is implementing this analysis in the compiler and using its results to drive data layout transformations.
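Since the report only outlines the analysis, the following sketch shows the flavor of a cache-miss estimator that a data layout transformation could be evaluated against: a cold-cache model in which a strided access pattern misses once per cache line touched. The function name, the 64-byte line size, and the linear model are assumptions for illustration; the estimators actually generalized in the LCC compiler are not reproduced here.

    def estimated_misses(num_accesses, stride_bytes, line_size=64):
        # Cold-cache estimate for a sequential strided access pattern:
        # each access that touches a new cache line counts as one miss.
        if stride_bytes >= line_size:
            return num_accesses             # every access lands on a new line
        total_bytes = num_accesses * stride_bytes
        return (total_bytes + line_size - 1) // line_size

    # effect of a layout transformation: a loop reads one 4-byte field
    # of a 32-byte record 10,000 times
    print(estimated_misses(10_000, 32))     # array-of-structs layout: ~10000 misses
    print(estimated_misses(10_000, 4))      # field split into its own array: ~625 misses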


2020, Vol. 10 (7), pp. 2226
Author(s): Junghwan Kim, Myeong-Cheol Ko, Jinsoo Kim, Moon Sun Shin

This paper proposes an elaborate route prefix caching scheme for fast packet forwarding in named data networking (NDN), a next-generation Internet architecture. The name lookup is a crucial function of the NDN router, which delivers a packet based on its name rather than an IP address. It carries out a complex process to find the longest matching prefix for the content name. Moreover, the size of a name prefix is variable and unbounded, which makes the name lookup even more complicated and time-consuming. The name lookup can be sped up by using route prefix caching, but this may cause a problem when non-leaf prefixes are cached. The proposed prefix caching scheme can cache non-leaf prefixes, as well as leaf prefixes, without incurring any such problem. For this purpose, a Bloom filter is kept for each prefix. The Bloom filter, which is widely used for checking membership, is utilized to indicate the branch information of a non-leaf prefix. The experimental results show that the proposed caching scheme achieves a much higher hit ratio than other caching schemes. Furthermore, how much the parameters of the Bloom filter affect the cache miss count is quantitatively evaluated. The best performance can be achieved with merely 8-bit Bloom filters and two hash functions.
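A small sketch of the per-prefix Bloom filter idea, using the 8-bit filter and two hash functions that performed best in the evaluation: the filter attached to a cached non-leaf prefix records the next name components under which longer prefixes exist, so a lookup can tell when the cached entry is safely the longest match. The class name, the MD5-based hashing, and the fallback rule are illustrative assumptions, not the paper's design details.

    import hashlib

    class TinyBloom:
        """8-bit Bloom filter with two hash functions, attached to a cached
        non-leaf prefix to record the next name components under which
        longer prefixes exist."""

        def __init__(self, bits=8, hashes=2):
            self.bits, self.hashes = bits, hashes
            self.field = 0

        def _positions(self, item):
            digest = hashlib.md5(item.encode()).digest()
            return [digest[i] % self.bits for i in range(self.hashes)]

        def add(self, item):
            for pos in self._positions(item):
                self.field |= 1 << pos

        def maybe_contains(self, item):
            return all(self.field & (1 << pos) for pos in self._positions(item))

    # cached non-leaf prefix "/video" has longer prefixes "/video/hd" and "/video/live"
    branches = TinyBloom()
    branches.add("hd")
    branches.add("live")

    # looking up "/video/sd": if "sd" is definitely not a recorded branch, the
    # cached entry for "/video" is safely the longest matching prefix; otherwise
    # the router falls back to a full FIB lookup to avoid a wrong match
    print(branches.maybe_contains("sd"))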


Electronics, 2020, Vol. 9 (1), pp. 112
Author(s): Takanori Nakazawa, Suhua Tang, Sadao Obana

Recently, inter-vehicle communication, which helps to avoid collision accidents (through driving safety support systems) and facilitates self-driving (through the dissemination of road and traffic information), has attracted much attention. In this paper, in order to efficiently collect road/traffic information in a request/response manner, a basic method, Content-centric network (CCN) for Vehicular network (CV), is first proposed, which applies the CCN cache function to inter-vehicle communication. Content naming and routing, which take vehicle mobility into account, are investigated. On this basis, the CV method is extended (called ECV) to avoid the cache miss problem caused by vehicle movement, and is further enhanced (called ECV+) to exploit the cache buffer in vehicles more efficiently, caching content with a probability decided by the channel usage rate. Extensive evaluations on the network simulator Scenargie, with a realistic open street map, confirm that the CV method and its extensions (ECV, ECV+) effectively reduce the average number of hops of data packets (by up to 47%, 63%, and 83%, respectively) and greatly improve the content acquisition success rate (by up to 356%, 444%, and 689%, respectively), compared to a method without a cache mechanism.
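The ECV+ enhancement caches content with a probability tied to the channel usage rate; a minimal sketch of such a rule is shown below, assuming a simple linear relationship (the paper's exact probability function is not reproduced).

    import random

    def should_cache(channel_usage_rate):
        # ECV+-style probabilistic caching sketch: the busier the wireless
        # channel, the less likely a vehicle caches a passing content item
        # (a linear rule is assumed here purely for illustration).
        probability = max(0.0, 1.0 - channel_usage_rate)
        return random.random() < probability

    # a vehicle observing 30% channel usage caches with probability about 0.7
    print(should_cache(0.3))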


Author(s): A. A. Prihozhy

Caches are an intermediate level between the fast CPU and the slow main memory. Their aim is to store copies of frequently used data and to reduce the access time to main memory. Caches exploit temporal and spatial locality during program execution. When the processor accesses memory, the cache behavior depends on whether the data is in the cache: a cache hit occurs if it is, and a cache miss occurs otherwise. In the latter case, the cache may have to evict other data. The misses produce processor stalls and slow down the computations. The replacement policy chooses which data to evict, trying to predict future accesses to memory. The hit and miss rates depend on the cache type: direct mapped, set associative, or fully associative. The least recently used (LRU) replacement policy serves the sets. The miss rate also strongly depends on the executed algorithm. All-pairs shortest paths algorithms solve many practical problems, and it is important to know which algorithm and which cache type match best. This paper presents a technique of simulating a direct mapped, k-way set associative, or fully associative cache during algorithm execution, to measure the frequency of operations that read data into the cache and write data back to memory. We have measured these frequencies versus the cache size, the data block size, the amount of processed data, the type of cache, and the type of algorithm. After comparing the basic and blocked Floyd-Warshall algorithms, we conclude that the blocked algorithm localizes data accesses well within one block, but it does not localize data dependencies among blocks. The direct mapped cache performs significantly worse than the associative cache; its performance can be improved by appropriately mapping virtual addresses to physical locations.
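The simulation technique described above can be sketched as follows for the direct mapped case: every memory access is mapped to a cache line by its block address, and hits and misses are counted as the algorithm touches addresses. The line count, block size, and the row-major matrix traversal used as a driver are illustrative assumptions, not the paper's configuration.

    class DirectMappedCache:
        """Minimal direct mapped cache simulator: each access is mapped to a
        line by its block address, and hits and misses are counted."""

        def __init__(self, num_lines=256, block_size=64):
            self.num_lines = num_lines
            self.block_size = block_size
            self.tags = [None] * num_lines
            self.hits = 0
            self.misses = 0

        def access(self, address):
            block = address // self.block_size
            index = block % self.num_lines
            if self.tags[index] == block:
                self.hits += 1
            else:
                self.misses += 1            # miss: the resident block is evicted
                self.tags[index] = block

    # drive the simulator with a row-major traversal of an n-by-n matrix of doubles
    cache = DirectMappedCache()
    n = 512
    for i in range(n):
        for j in range(n):
            cache.access((i * n + j) * 8)
    print(cache.misses, cache.hits)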

