IMPROVING peLIFO CACHE REPLACEMENT POLICY: HARDWARE REDUCTION AND THREAD-AWARE EXTENSION

2014 ◽  
Vol 23 (04) ◽  
pp. 1450046
Author(s):  
ENRIQUE SEDANO ◽  
SILVIO SEPULVEDA ◽  
FERNANDO CASTRO ◽  
DANIEL CHAVER ◽  
RODRIGO GONZALEZ-ALBERQUILLA ◽  
...  

Studying the behavior of blocks during their lifetime in the cache can provide useful information to reduce the miss rate and therefore improve processor performance. Following this rationale, the peLIFO replacement algorithm [M. Chaudhuri, Proc. MICRO'09, New York, 12–16 December, 2009, pp. 401–412], which dynamically learns the number of cache ways required to satisfy short-term reuses while preserving the remaining ways for long-term reuses, was recently proposed. In this paper, we propose several changes to the original peLIFO policy to reduce its implementation complexity, and we extend the algorithm to a shared-cache environment, using dynamic information about thread behavior to improve cache efficiency. Experimental results confirm that our simplification techniques reduce the required hardware with a negligible performance penalty, while the best of our thread-aware extension proposals reduces CPI by 8.7% and 15.2% on average compared to the original peLIFO and LRU, respectively, for a set of 43 multi-programmed workloads on an 8 MB 16-way set-associative shared L2 cache.
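The core idea the abstract describes, reserving some ways of a set for short-term reuse and protecting the rest for long-term reuse, can be illustrated with a minimal sketch. This is not the paper's exact peLIFO algorithm (which learns its escape points probabilistically); here the learned depth `k` is assumed fixed for simplicity.

```python
class PeLIFOSet:
    """Illustrative sketch of a fill-stack replacement policy: the victim is
    taken at a learned depth k, so the top k ways serve short-term reuse
    while deeper ways are preserved for long-term reuse.

    Assumption: k is fixed here; peLIFO proper learns it dynamically."""

    def __init__(self, ways, k):
        self.ways = ways
        self.k = k          # escape point (hypothetical fixed value)
        self.stack = []     # fill stack, most recently filled block first

    def access(self, tag):
        """Return True on hit, False on miss (with fill/eviction)."""
        if tag in self.stack:
            return True     # hit: fill order is unchanged (unlike LRU)
        if len(self.stack) < self.ways:
            self.stack.insert(0, tag)
        else:
            victim_idx = min(self.k, len(self.stack) - 1)
            self.stack.pop(victim_idx)   # evict at depth k, not the bottom
            self.stack.insert(0, tag)
        return False
```

With `k = 1`, newly filled blocks churn near the top of the stack while older blocks at the bottom survive, which is the partitioning effect peLIFO exploits.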

Author(s):  
A. V. Vishnekov ◽  
E. M. Ivanova

The paper investigates ways to increase computing-system performance by improving cache-memory efficiency, and analyzes the efficiency indicators of replacement algorithms. We show the necessity of automated or automatic means for cache-memory tuning under current conditions of program-code execution, namely dynamic control of cache replacement algorithms: swapping the current replacement algorithm for one that is more effective under the current computation conditions. We develop methods for caching-policy control based on classifying the program type as cyclic, sequential, locally-point, or mixed. We suggest a procedure for selecting an effective replacement algorithm using decision-support methods based on the current statistics of caching parameters. The paper gives an analysis of existing cache replacement algorithms and proposes a decision-making procedure for selecting an effective one based on methods of ranking alternatives, preferences, and hierarchy analysis. The critical number of cache hits, the average data-query execution time, and the average cache latency are selected as triggers for swapping out the current replacement algorithm. The main advantage of the proposed approach is its universality: the adaptive decision-making procedure allows varying the criteria used to evaluate the replacement algorithms, their efficiency, and their suitability for different types of program code. Dynamically swapping in a more efficient replacement algorithm during program execution improves the performance of the computer system.
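The swap-triggering idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's procedure: the hit-rate threshold, the measurement window, and the per-policy score estimates are all assumptions standing in for the paper's ranking and hierarchy-analysis methods.

```python
class AdaptivePolicySelector:
    """Sketch: monitor caching statistics and swap the active replacement
    policy when the measured hit rate falls below a floor.

    Assumptions: a single hit-rate trigger (the paper also uses query time
    and cache latency) and externally supplied per-policy scores."""

    def __init__(self, policies, hit_rate_floor=0.6):
        self.policies = policies            # e.g. {'LRU': ..., 'LFU': ...}
        self.hit_rate_floor = hit_rate_floor
        self.active = next(iter(policies))  # start with the first policy
        self.hits = 0
        self.accesses = 0

    def record(self, hit):
        """Feed one cache access outcome into the current window."""
        self.accesses += 1
        if hit:
            self.hits += 1

    def maybe_swap(self, scores):
        """scores: estimated hit rate of each candidate policy for the
        current workload type (cyclic, sequential, locally-point, mixed)."""
        if self.accesses == 0:
            return self.active
        if self.hits / self.accesses < self.hit_rate_floor:
            self.active = max(scores, key=scores.get)
            self.hits = self.accesses = 0   # restart the measurement window
        return self.active
```

The design point mirrors the abstract: the trigger (here, a hit-rate floor) is decoupled from the selection criterion (the `scores` ranking), so either can be replaced without touching the other.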


2018 ◽  
Vol 27 (07) ◽  
pp. 1850114
Author(s):  
Cheng Qian ◽  
Libo Huang ◽  
Qi Yu ◽  
Zhiying Wang

Hardware prefetching has always been a crucial mechanism for improving processor performance. However, an efficient prefetch operation requires high prefetch accuracy; otherwise, it may degrade system performance. Prior studies propose an adaptive priority-controlling method to make better use of prefetch accesses, which improves performance in two-level cache systems. However, this method does not perform well in a more complex memory hierarchy, such as a three-level cache system. Thus, it is still necessary to explore the efficiency of prefetching, in particular in complex hierarchical memory systems. In this paper, we propose a composite hierarchy-aware method called CHAM, which works at the middle-level cache (MLC). Using prefetch accuracy as an evaluation criterion, CHAM improves the efficiency of prefetch accesses through (1) a dynamic adaptive prefetch control mechanism that schedules the priority and data transfer of prefetch accesses across the cache hierarchy at runtime and (2) a prefetch-efficiency-oriented hybrid cache replacement policy that selects the most suitable policy. To demonstrate its effectiveness, we have performed extensive experiments on 28 benchmarks from SPEC CPU2006 and two benchmarks from BioBench. Compared with a similar adaptive method, CHAM improves the MLC demand hit rate by 9.2% and system performance by 1.4% on average in a single-core system. On a 4-core system, CHAM improves the demand hit rate by 33.06% and system performance by 10.1% on average.
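The accuracy-driven control that CHAM builds on can be illustrated with a small sketch. This is not CHAM's mechanism; it only shows the underlying bookkeeping: count issued prefetches, count those later hit by demand accesses, and derive a priority level from the ratio. The threshold value is an assumption.

```python
class PrefetchAccuracyController:
    """Sketch: track prefetch accuracy (useful / issued) over a window and
    map it to the priority given to prefetch accesses at the MLC.

    Assumption: a single fixed accuracy threshold of 0.5."""

    HIGH, LOW = 'high', 'low'

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.issued = 0
        self.useful = 0

    def on_prefetch_issued(self):
        self.issued += 1

    def on_prefetch_hit(self):
        # A demand access hit a prefetched block: that prefetch was useful.
        self.useful += 1

    def priority(self):
        """High priority while accuracy is unknown or above the threshold;
        low priority once measured accuracy drops below it."""
        if self.issued == 0:
            return self.HIGH
        accuracy = self.useful / self.issued
        return self.HIGH if accuracy >= self.threshold else self.LOW
```

In a real hierarchy this decision would also govern whether prefetched data is installed at the MLC or bypassed toward lower levels, which is part of what CHAM's runtime scheduling adds on top.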


Author(s):  
Federico Varese

Organized crime is spreading like a global virus as mobs take advantage of open borders to establish local franchises at will. That at least is the fear, inspired by stories of Russian mobsters in New York, Chinese triads in London, and Italian mafias throughout the West. As this book explains, the truth is more complicated. The author has spent years researching mafia groups in Italy, Russia, the United States, and China, and argues that mafiosi often find themselves abroad against their will, rather than through a strategic plan to colonize new territories. Once there, they do not always succeed in establishing themselves. The book spells out the conditions that lead to their long-term success, namely sudden market expansion that is neither exploited by local rivals nor blocked by authorities. Ultimately the inability of the state to govern economic transformations gives mafias their opportunity. In a series of matched comparisons, the book charts the attempts of the Calabrese 'Ndrangheta to move to the north of Italy, and shows how the Sicilian mafia expanded to early twentieth-century New York, but failed around the same time to find a niche in Argentina. The book explains why the Russian mafia failed to penetrate Rome but succeeded in Hungary. A pioneering chapter on China examines the challenges that triads from Taiwan and Hong Kong find in branching out to the mainland. This book is both a compelling read and a sober assessment of the risks posed by globalization and immigration for the spread of mafias.


2020 ◽  
Author(s):  
Jessica Kasten ◽  
Elizabeth Lewis ◽  
Sari Lelchook ◽  
Lynn Feinberg ◽  
Edem Hado

1980 ◽  
Vol 1 (2) ◽  
pp. 145-159
Author(s):  
Edward F. Harris ◽  
Nicholas F. Bellantoni

Archaeologically defined inter-group differences in the Northeast subarea are assessed with a phenetic analysis of published craniometric information. Spatial distinctions in the material culture are in good agreement with those defined by the cranial metrics. The fundamental dichotomy, between the Ontario Iroquois and the eastern grouping of New York and New England, suggests a long-term dissociation between these two groups relative to their ecologic adaptations, trade relationships, trait-list associations, and natural and cultural barriers to gene flow.


Author(s):  
Karen Ahlquist

This chapter charts how canonic repertories evolved in very different forms in New York City during the nineteenth century. The unstable succession of entrepreneurial touring troupes that visited the city adapted both repertory and individual pieces to the audience’s taste, from which there emerged a major theater, the Metropolitan Opera, offering a mix of German, Italian, and French works. The stable repertory in place there by 1910 resembles to a considerable extent that performed in the same theater today. Indeed, all of the twenty-five operas most often performed between 1883 and 2015 at the Metropolitan Opera were written before World War I. The repertory may seem haphazard in its diversity, but that very condition proved to be its strength in the long term. This chapter is paired with Benjamin Walton’s “Canons of real and imagined opera: Buenos Aires and Montevideo, 1810–1860.”


2021 ◽  
Vol 2 (3) ◽  
pp. 1-24
Author(s):  
Chih-Kai Huang ◽  
Shan-Hsiang Shen

The next-generation 5G cellular networks are designed to support internet of things (IoT) networks; network components and services are virtualized and run either in virtual machines (VMs) or containers. Moreover, edge clouds (which are closer to end users) are leveraged to reduce end-to-end latency, especially for IoT applications that require short response times. However, computational resources are limited in edge clouds. To minimize overall service latency, it is crucial to determine carefully which services should be provided in edge clouds so that more mobile or IoT devices are served locally. In this article, we propose a novel service cache framework called S-Cache, which automatically caches popular services in edge clouds. In addition, we design a new cache replacement policy to maximize the cache hit rate. Our evaluations use real log files from Google to form two datasets for evaluating performance. The proposed cache replacement policy is compared with other policies such as greedy-dual-size-frequency (GDSF) and least-frequently-used (LFU). The experimental results show that the cache hit rate is improved by 39% on average, and the average latency of our cache replacement policy decreases by 41% and 38% on average in these two datasets. This indicates that our approach is superior to other existing cache policies and is more suitable for multi-access edge computing environments. In the implementation, S-Cache relies on OpenStack to clone services to edge clouds and direct the network traffic. We also evaluate the cost of cloning a service to an edge cloud. The cloning cost of various real applications is studied by experiments under the presented framework and different environments.
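For context on the GDSF baseline the article compares against, a minimal sketch follows. GDSF assigns each cached object a priority L + frequency * cost / size and evicts the lowest-priority object, updating the aging value L to the evicted priority. The uniform cost model and field layout here are illustrative assumptions, not S-Cache's implementation.

```python
class GDSFCache:
    """Minimal greedy-dual-size-frequency (GDSF) cache sketch.

    Priority of an entry: L + freq * cost / size. Evictions remove the
    lowest-priority entry and advance the aging value L to its priority,
    so long-resident entries must stay frequently used to survive.
    Assumption: cost defaults to 1.0 (uniform fetch cost)."""

    def __init__(self, capacity):
        self.capacity = capacity    # total size budget
        self.used = 0
        self.L = 0.0                # inflation (aging) value
        self.entries = {}           # key -> [priority, freq, size, cost]

    def get(self, key):
        """Return True on hit; refresh the entry's frequency and priority."""
        e = self.entries.get(key)
        if e is None:
            return False
        e[1] += 1                               # bump frequency
        e[0] = self.L + e[1] * e[3] / e[2]      # recompute priority
        return True

    def put(self, key, size, cost=1.0):
        """Insert a missed object, evicting lowest-priority entries as needed."""
        while self.used + size > self.capacity and self.entries:
            victim = min(self.entries, key=lambda k: self.entries[k][0])
            self.L = self.entries[victim][0]    # age the clock on eviction
            self.used -= self.entries[victim][2]
            del self.entries[victim]
        if size <= self.capacity:
            self.entries[key] = [self.L + cost / size, 1, size, cost]
            self.used += size
```

The size term is what distinguishes GDSF from plain LFU: a large, rarely reused service is evicted before a small, popular one, which matters when edge-cloud capacity is the bottleneck.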

