cache efficiency
Recently Published Documents

TOTAL DOCUMENTS: 30 (FIVE YEARS: 7)
H-INDEX: 7 (FIVE YEARS: 0)

2021 ◽ Author(s): Ke Yang, Xiaosong Ma, Saravanan Thirumuruganathan, Kang Chen, Yongwei Wu

2021 ◽ Author(s): Eduardo Romero-Gainza, Christopher Stewart, Angela Li, Kyle Hale, Nathaniel Morris

Author(s): Yaru Fu, Quan Yu, Angus K. Y. Wong, Zheng Shi, Hong Wang, ...

2020 ◽ Vol 245 ◽ pp. 03011
Author(s): Maiken Pedersen, Balazs Konya, David Cameron, Mattias Ellert, Aleksandr Konstantinov, ...

The Worldwide LHC Computing Grid (WLCG) today comprises a range of resource types: cloud centers, large and small HPC centers, volunteer computing, and traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the Nordugrid ARC-CE, a non-intrusive and lightweight service originally developed for HPC centers where no middleware could be installed on the worker nodes. The NT1 runs ARC in the native Nordugrid mode, which, unlike the pilot mode, leaves job data transfers to ARC. ARC's data transfer capabilities, together with the ARC Cache, are its most important features. In this article we describe the data staging and cache functionality of an ARC-CE set up as an edge service to an HPC or cloud resource, and show the efficiency gain this model provides over a traditional pilot model, especially for sites with remote storage.
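The edge-service model above can be sketched as a cache-aware staging step: before fetching an input file from remote storage, the service consults a shared on-disk cache and hard-links a hit into the job directory, so each file crosses the WAN at most once. This is a minimal Python sketch of the idea, assuming a POSIX filesystem; the cache layout, the SHA-1 key scheme, and the `fetch` callback are illustrative assumptions, not ARC's actual implementation.

```python
import hashlib
import os

def stage_input(url, job_dir, fetch, cache_dir):
    """Stage one input file into job_dir, reusing the shared cache on a hit.

    `fetch(url, path)` downloads url to path; it is invoked only on a cache miss.
    """
    os.makedirs(cache_dir, exist_ok=True)
    os.makedirs(job_dir, exist_ok=True)
    key = hashlib.sha1(url.encode()).hexdigest()    # cache key derived from the source URL
    cached = os.path.join(cache_dir, key)
    if not os.path.exists(cached):                  # miss: transfer once from remote storage
        fetch(url, cached)
    dest = os.path.join(job_dir, os.path.basename(url))
    os.link(cached, dest)                           # hit path: hard-link, no second copy
    return dest
```

Two jobs requesting the same input trigger only one remote transfer; the second job gets a hard link into its sandbox, which is where the efficiency gain over a pilot model (where each job fetches its own inputs) comes from.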


2019 ◽ Vol 25 (4) ◽ pp. 216-222
Author(s): D. R. Potapov

Author(s): Xi Luo, Ying An

Content centric networking (CCN) is a new networking paradigm designed to meet the growing demand for content access. Because of its important role in accelerating content retrieval and reducing network transmission load, in-network caching has become one of the core technologies in CCN and has attracted wide attention. Existing caching schemes often give insufficient consideration to node cache status and the temporal validity of user requests, which greatly reduces the cache efficiency of the network. In this paper, a cache pressure-aware caching scheme is proposed that comprehensively accounts for factors such as content popularity, cache occupancy rate, cache replacement rate, and the validity period of the Interest packet to achieve reasonable cache placement and replacement. Simulation results show that the proposed scheme effectively improves the cache hit rate and resource utilization while decreasing the average response hops.
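A pressure-aware placement decision of this kind can be sketched as a score that weighs a chunk's popularity against the node's cache pressure, gated by the Interest packet's remaining validity. The weights, the linear pressure formula, and the threshold below are illustrative assumptions for the sketch, not the scheme proposed in the paper.

```python
def cache_pressure(occupancy, replacement_rate):
    # Pressure grows both with how full the cache is (occupancy, in [0, 1])
    # and with how fast it churns (replacement rate, in [0, 1]).
    # Equal weights are an assumption, not the paper's calibration.
    return 0.5 * occupancy + 0.5 * replacement_rate

def should_cache(popularity, ttl_remaining, occupancy, replacement_rate, threshold=0.5):
    """Decide whether a node should cache a passing content chunk."""
    if ttl_remaining <= 0:
        # The Interest has expired: caching the chunk serves no pending request.
        return False
    # Popular content is worth caching, but less so on a node under pressure.
    score = popularity * (1.0 - cache_pressure(occupancy, replacement_rate))
    return score >= threshold
```

The intended behavior: a popular chunk is cached on a lightly loaded node, while the same chunk is passed through on a node whose cache is nearly full and churning, pushing placement toward nodes with spare capacity.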


2019 ◽ Vol 159 ◽ pp. 1182-1189
Author(s): Avelino Palma Pimenta Junior, Jair Minoro Abe

10.29007/jhd7 ◽ 2018 ◽ Author(s): Armin Biere

One of the design principles of the state-of-the-art SAT solver Lingeling is to use data structures that are as compact as possible. These reduce memory usage and increase cache efficiency, and thus improve run-time, particularly when running multiple solver instances on multi-core machines, as in our parallel portfolio solver Plingeling and our cube-and-conquer solver Treengeling. The scheduler of a dozen inprocessing algorithms is an important aspect of Lingeling as well. In this talk we explain these design and implementation aspects of Lingeling and discuss new directions in solver design.
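The compact-data-structure idea can be illustrated by how a clause database is laid out: instead of one heap allocation per clause, all literals go into a single flat integer array, with each clause terminated by a sentinel, so the literals of a clause sit contiguously in memory. This is a hedged sketch of the general technique in Python's `array` module, not Lingeling's actual C representation.

```python
from array import array

def pack_clauses(clauses):
    """Pack a list of clauses into one flat int array plus per-clause offsets.

    Literals are nonzero DIMACS-style ints; 0 terminates each clause.
    One contiguous buffer keeps a clause's literals on adjacent cache lines,
    unlike a list of separately allocated per-clause lists.
    """
    flat = array("i")
    offsets = array("i")          # start index of each clause within `flat`
    for clause in clauses:
        offsets.append(len(flat))
        flat.extend(clause)
        flat.append(0)            # sentinel, following the DIMACS convention
    return flat, offsets

def unpack_clause(flat, start):
    """Read one clause back out of the flat array, stopping at the 0 sentinel."""
    lits = []
    i = start
    while flat[i] != 0:
        lits.append(flat[i])
        i += 1
    return lits
```

Iterating such an array during propagation touches memory sequentially, which is the cache-friendliness the abstract refers to; a pointer-chasing layout would take a cache miss per clause instead.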

