Hybrid Parallel FEM on PC Cluster with Multi-core Processor

Author(s): T. Yamaguchi, Y. Kawase, T. Murase
2019, Vol 139 (7), pp. 802-811
Author(s): Kenta Fujimoto, Shingo Oidate, Yuhei Yabuta, Atsuyuki Takahashi, Takuya Yamasaki, ...
2010, Vol 30 (2), pp. 547-550
Author(s): Shuai Peng, Dong-mei Li, Zhao-hui Li
2021
Author(s): Bashar Romanous, Skyler Windh, Ildar Absalyamov, Prerna Budhkar, Robert Halstead, ...

Abstract: The join and group-by aggregation are two memory-intensive operators that affect the performance of relational databases. Hashing is a common approach used to implement both operators. Recent paradigm shifts in multi-core processor architectures have reinvigorated research into how the join and group-by aggregation operators can leverage these advances. However, the poor spatial locality of the hashing approach has hindered performance on multi-core processor architectures, which rely on large cache hierarchies for latency mitigation. Multithreaded architectures can better cope with poor spatial locality by masking memory latency with many outstanding requests. Nevertheless, the number of parallel threads, even in the most advanced multithreaded processors such as UltraSPARC, is not enough to fully cover the main-memory access latency. In this paper, we explore the hardware re-configurability of FPGAs to enable deeper execution pipelines that maintain hundreds (instead of tens) of outstanding memory requests across four FPGAs, drastically increasing concurrency and throughput. We present two end-to-end in-memory accelerators for the join and group-by aggregation operators using FPGAs. Both accelerators use massive multithreading to mask the long memory delays of traversing linked-list data structures, while concurrently managing hundreds of thread states across four FPGAs locally. We explore how content-addressable memories (CAMs) can be intermixed within our multithreaded designs to act as a synchronizing cache, which enforces locks and merges jobs together before they are written to memory. Throughput results for our hash-join accelerator show a speedup between 2× and 3.4× over the best multi-core approaches with comparable memory bandwidths on uniform and skewed datasets. The accelerator for the hash-based group-by aggregation operator demonstrates that leveraging CAMs achieves an average speedup of 3.3×, with a best case of 9.4×, in throughput over CPU implementations across five types of data distributions.
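The chained-hash build and probe that these accelerators target can be sketched in plain C. The single-threaded sketch below illustrates the linked-list traversal whose dependent pointer loads cause the long memory latencies the abstract describes; the bucket count, hash function, and integer keys are hypothetical choices for illustration, not the paper's FPGA pipeline.

/* Minimal sketch of a chained hash join (build + probe), assuming
 * integer keys; illustrative only, not the paper's FPGA design. */
#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 1024  /* power of two, so we can mask instead of mod */

typedef struct Node {
    int key;
    int payload;
    struct Node *next;  /* linked-list chain traversed on probe */
} Node;

static Node *buckets[NBUCKETS];

static unsigned hash(int key) {
    return ((unsigned)key * 2654435761u) & (NBUCKETS - 1);
}

/* Build phase: insert each tuple of the build relation at its chain head. */
static void build(const int *keys, const int *payloads, int n) {
    for (int i = 0; i < n; i++) {
        Node *node = malloc(sizeof *node);
        node->key = keys[i];
        node->payload = payloads[i];
        unsigned b = hash(keys[i]);
        node->next = buckets[b];  /* chain grows at the head */
        buckets[b] = node;
    }
}

/* Probe phase: each lookup walks a chain of dependent pointer loads;
 * these are the long-latency accesses the FPGA design masks by keeping
 * hundreds of thread states in flight. */
static long probe(const int *keys, int n) {
    long matches = 0;
    for (int i = 0; i < n; i++) {
        for (Node *p = buckets[hash(keys[i])]; p; p = p->next)
            if (p->key == keys[i])
                matches++;
    }
    return matches;
}

int main(void) {
    int bkeys[] = {1, 2, 3, 4}, bpay[] = {10, 20, 30, 40};
    int pkeys[] = {2, 3, 3, 5};
    build(bkeys, bpay, 4);
    printf("matches: %ld\n", probe(pkeys, 4));  /* prints 3 */
    return 0;
}

On a CPU, each probe stalls on the cache miss of every chain hop; the paper's approach instead overlaps hundreds of such traversals, with CAMs synchronizing concurrent updates to the same bucket.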


2003, Vol 15 (03), pp. 109-114
Author(s): Yang-Yao Niu, Shou-Cheng Tcheng

In this study, parallel computing technology is applied to the simulation of aortic blood flow. A third-order upwind flux extrapolation with a dual-time integration method, based on an artificial compressibility solver, is used to solve the Navier-Stokes equations. The original FORTRAN code was converted to MPI code and tested on a 64-CPU IBM SP2 parallel computer and a 32-node PC cluster. The test results show a significant reduction in computing time, with a super-linear speedup achieved for up to 32 CPUs on the PC cluster. The speedup reaches 49 when using 64 IBM SP2 processors. These tests show the promising potential of parallel processing for prompt simulation of aortic flow problems.
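A minimal sketch, assuming a 1-D domain decomposition, of the halo-exchange pattern such a FORTRAN-to-MPI conversion typically introduces: each rank owns a slab of the flow field and swaps boundary values with its neighbours every pseudo-time step. The field u, the size NLOCAL, and the rank topology are illustrative assumptions, not the authors' solver.

/* Sketch of MPI halo exchange for a 1-D domain decomposition.
 * Illustrative only; the actual solver, upwind scheme, and data
 * layout of the paper are not reproduced here. */
#include <mpi.h>
#include <stdio.h>

#define NLOCAL 100  /* interior cells per rank (hypothetical size) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* u[0] and u[NLOCAL+1] are ghost cells holding neighbour data. */
    double u[NLOCAL + 2];
    for (int i = 0; i <= NLOCAL + 1; i++)
        u[i] = (double)rank;  /* dummy initial field */

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Halo exchange: swap boundary values so the stencil can be
     * evaluated at subdomain edges each pseudo-time step. */
    MPI_Sendrecv(&u[1],          1, MPI_DOUBLE, left,  0,
                 &u[NLOCAL + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOCAL],     1, MPI_DOUBLE, right, 1,
                 &u[0],          1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("halo exchange done on %d ranks\n", size);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with, e.g., mpirun -np 4, each rank exchanges one ghost value per side per step. For scale, the reported speedup of 49 on 64 SP2 processors corresponds to a parallel efficiency of roughly 49/64 ≈ 77%.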

