Scheduling Local and Remote Memory in Cluster Computers

2011
Vol 59 (3)
pp. 1533-1551
Author(s):
Mónica Serrano
Julio Sahuquillo
Salvador Petit
Houcine Hassan
José Duato


2021
Vol 11 (14)
pp. 6486
Author(s):  
Mei-Ling Chiang ◽  
Wei-Lun Su

NUMA multi-core systems divide system resources into several nodes. When the load becomes imbalanced between cores, the kernel scheduler's load-balancing mechanism migrates threads between cores or across NUMA nodes. After a cross-node migration, a thread must use remote memory accesses to reach memory still allocated on its previous node, which degrades performance. Threads to be migrated must be selected effectively and efficiently, since the related operations run on the critical path of the kernel scheduler. This study focuses on improving inter-node load balancing for multithreaded applications. We propose a thread-aware selection policy that considers the distribution of each thread group's threads across nodes when migrating one thread for inter-node load balancing. The policy selects a thread whose thread group has the least exclusive thread distribution, that is, whose member threads are spread most evenly across nodes, so that the migration has less influence on the group's data mapping and thread mapping. We further devise several enhancements that eliminate superfluous evaluations for multithreaded processes, making the selection procedure more efficient. Experimental results for the commonly used PARSEC 3.0 benchmark suite show that the Linux kernel modified with the proposed selection policy improves performance by 10.7% compared with the unmodified Linux kernel.
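
To make the selection criterion concrete, here is a minimal, self-contained C sketch of the heuristic as described in the abstract: among the migration candidates, it picks the thread whose thread group is least exclusively concentrated on the overloaded source node. All structures, fields, and the scoring function are hypothetical illustrations; the actual policy is implemented inside the Linux kernel's load-balancing code and is not reproduced here.

```c
/*
 * Illustrative user-space sketch of the thread-aware selection heuristic.
 * All types, fields, and the scoring function are hypothetical; the
 * paper's policy lives inside the Linux kernel's load-balancing path.
 */
#include <stdio.h>
#include <limits.h>

#define MAX_NODES 4

/* Hypothetical per-thread-group bookkeeping: how many member threads
 * currently run on each NUMA node. */
struct thread_group_stats {
    int group_id;
    int threads_per_node[MAX_NODES];
};

/* A migration candidate: a thread plus the stats of its thread group. */
struct candidate {
    int tid;
    const struct thread_group_stats *group;
};

/* Score how exclusively a group's threads sit on the overloaded source
 * node (per-mille of the group's threads on src_node).  A lower score
 * means the group is spread more evenly, so migrating one of its threads
 * disturbs the group's data mapping and thread mapping less. */
static int exclusivity_score(const struct thread_group_stats *g, int src_node)
{
    int total = 0;

    for (int n = 0; n < MAX_NODES; n++)
        total += g->threads_per_node[n];
    if (total == 0)
        return INT_MAX;
    return g->threads_per_node[src_node] * 1000 / total;
}

/* Pick the candidate whose thread group has the least exclusive
 * distribution on the source node. */
static int pick_thread_to_migrate(const struct candidate *cands, int ncands,
                                  int src_node)
{
    int best_tid = -1, best_score = INT_MAX;

    for (int i = 0; i < ncands; i++) {
        int score = exclusivity_score(cands[i].group, src_node);
        if (score < best_score) {
            best_score = score;
            best_tid = cands[i].tid;
        }
    }
    return best_tid;
}

int main(void)
{
    /* Group 1 is concentrated on node 0; group 2 is spread evenly, so a
     * thread from group 2 is the cheaper migration candidate. */
    struct thread_group_stats g1 = { 1, { 4, 0, 0, 0 } };
    struct thread_group_stats g2 = { 2, { 1, 1, 1, 1 } };
    struct candidate cands[] = { { 101, &g1 }, { 202, &g2 } };

    printf("migrate tid %d off node 0\n",
           pick_thread_to_migrate(cands, 2, 0));
    return 0;
}
```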


2021
Vol 49 (4)
pp. 12-17
Author(s):  
Feilong Liu ◽  
Claude Barthels ◽  
Spyros Blanas ◽  
Hideaki Kimura ◽  
Garret Swart

Networks with Remote Direct Memory Access (RDMA) support are becoming increasingly common. RDMA, however, offers a limited programming interface to remote memory that consists of read, write and atomic operations. With RDMA alone, completing the most basic operations on remote data structures often requires multiple round-trips over the network. Data-intensive systems strongly desire higher-level communication abstractions that support more complex interaction patterns. A natural candidate to consider is MPI, the de facto standard for developing high-performance applications in the HPC community. This paper critically evaluates the communication primitives of MPI and shows that using MPI in the context of a data processing system comes with its own set of insurmountable challenges. Based on this analysis, we propose a new communication abstraction named RDMO, or Remote Direct Memory Operation, that dispatches a short sequence of reads, writes and atomic operations to remote memory and executes them in a single round-trip.
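
To illustrate why plain RDMA verbs force several dependent round-trips on a remote data structure while an RDMO needs only one, here is a small, self-contained C simulation. The "remote" memory is just a local array, and every rdma_/rdmo_ function is an invented stand-in that counts simulated round-trips; none of these names correspond to a real verbs API or to the paper's actual RDMO interface.

```c
/*
 * Hypothetical, self-contained sketch contrasting plain RDMA with the
 * RDMO idea from the abstract.  The "remote" memory is a local array,
 * and the rdma_/rdmo_ helpers are invented stand-ins that only count
 * simulated round-trips; this is not a real verbs or RDMO API.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t remote_mem[16];   /* simulated remote heap              */
static int round_trips;           /* one per simulated network request  */

/* Plain RDMA: each one-sided operation costs one round-trip. */
static uint64_t rdma_read(uint64_t addr)
{
    round_trips++;
    return remote_mem[addr];
}

static uint64_t rdma_cas(uint64_t addr, uint64_t expect, uint64_t swap)
{
    uint64_t old = remote_mem[addr];
    round_trips++;
    if (old == expect)
        remote_mem[addr] = swap;
    return old;
}

/* Popping the head of a remote linked list with RDMA alone needs three
 * dependent round-trips: read the head, read head->next, CAS the head. */
static uint64_t list_pop_rdma(uint64_t head_addr)
{
    uint64_t head = rdma_read(head_addr);        /* round-trip 1 */
    uint64_t next = rdma_read(head);             /* round-trip 2 */
    rdma_cas(head_addr, head, next);             /* round-trip 3 */
    return head;
}

/* RDMO: ship the whole read-read-CAS sequence to the remote side and
 * execute it there, so the caller pays a single round-trip. */
static uint64_t rdmo_list_pop(uint64_t head_addr)
{
    round_trips++;                               /* one round-trip total */
    uint64_t head = remote_mem[head_addr];
    uint64_t next = remote_mem[head];
    remote_mem[head_addr] = next;
    return head;
}

int main(void)
{
    /* Remote list: slot 0 is the head pointer, nodes hold "next" links. */
    remote_mem[0] = 3;  remote_mem[3] = 7;  remote_mem[7] = 0;

    round_trips = 0;
    printf("RDMA pop -> node %llu, %d round-trips\n",
           (unsigned long long)list_pop_rdma(0), round_trips);

    round_trips = 0;
    printf("RDMO pop -> node %llu, %d round-trips\n",
           (unsigned long long)rdmo_list_pop(0), round_trips);
    return 0;
}
```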

