Cori: Dancing to the Right Beat of Periodic Data Movements over Hybrid Memory Systems

Author(s): Thaleia Dimitra Doudali, Daniel Zahka, Ada Gavrilovska
2019, Vol 16 (2), pp. 1-26
Author(s): Xiaoyuan Wang, Haikun Liu, Xiaofei Liao, Ji Chen, Hai Jin, ...

Author(s): M. Ben Olson, Tong Zhou, Michael R. Jantz, Kshitij A. Doshi, M. Graham Lopez, ...
2015, Vol 2015, pp. 1-9
Author(s): Sol Ji Kang, Sang Yeon Lee, Keon Myung Lee

As problem sizes and complexity increase, several parallel and distributed programming models and frameworks have been developed to handle such problems efficiently. This paper briefly reviews parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared-memory systems, MPI is the de facto industry standard for distributed-memory systems, and MapReduce has become the de facto standard for large-scale data-intensive applications. The qualitative pros and cons of each framework are well known, but quantitative performance measurements give a clearer picture of which framework suits a given application. Two benchmark problems are chosen to compare the frameworks: the all-pairs-shortest-path problem and the data join problem. The paper presents parallel programs for both problems implemented on each of the three frameworks, reports experimental results on a cluster of computers, and discusses which framework is the right tool for which job by analyzing the characteristics and performance of the paradigms.
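The all-pairs-shortest-path benchmark mentioned above maps naturally onto the Floyd-Warshall algorithm. Purely as an illustration, not as the paper's actual code, the sketch below shows how its OpenMP (shared-memory) variant might look; the vertex count N, the randomly generated graph, and the scheduling clause are assumptions made for this example.

```c
/* Illustrative sketch (not from the paper): the all-pairs-shortest-path
 * benchmark via Floyd-Warshall, parallelized with OpenMP on shared memory.
 * N, the random graph, and the schedule clause are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N   1024          /* number of vertices (assumed) */
#define INF 1000000000    /* "no edge" sentinel; 2*INF still fits in a 32-bit int */

static int dist[N][N];

int main(void) {
    /* Build an arbitrary weighted digraph: 0 on the diagonal, random
       weights on roughly 90% of edges, INF elsewhere. */
    srand(42);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            dist[i][j] = (i == j) ? 0 : ((rand() % 10) ? rand() % 100 + 1 : INF);

    double t0 = omp_get_wtime();
    for (int k = 0; k < N; k++) {
        /* For a fixed k the row updates are independent: with non-negative
           weights, dist[i][k] and dist[k][j] are not modified in iteration k. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                int via_k = dist[i][k] + dist[k][j];
                if (via_k < dist[i][j])
                    dist[i][j] = via_k;
            }
    }
    double t1 = omp_get_wtime();

    printf("Floyd-Warshall, %d vertices, %d threads: %.3f s\n",
           N, omp_get_max_threads(), t1 - t0);
    return 0;
}
```

Compiling with an OpenMP-enabled compiler (e.g., gcc -O2 -fopenmp) and varying OMP_NUM_THREADS gives the kind of shared-memory scaling numbers such a comparison relies on; an MPI version would instead partition the distance matrix across processes, and a MapReduce version would express each iteration as map and reduce stages over key-value pairs.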


Author(s): Evangelos Vasilakis, Vassilis Papaefstathiou, Pedro Trancoso, Ioannis Sourdis
IEEE Access, 2019, Vol 7, pp. 103517-103529
Author(s): Wei Liu, Haikun Liu, Xiaofei Liao, Hai Jin, Yu Zhang
