Evaluation of OpenMP SIMD Directives on Xeon Phi Coprocessors
Author(s): Christian Ponte, Jorge Gonzalez-Dominguez, Maria J. Martin

2018, Vol 175, pp. 02009
Author(s): Carleton DeTar, Steven Gottlieb, Ruizi Li, Doug Toussaint

With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, bringing more levels of parallelism, deeper memory hierarchies, and greater programming complexity. It has been necessary to adapt the MILC code to these new processors, starting with NVIDIA GPUs and, more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider the performance of the MILC code with MPI and OpenMP, as well as optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and the gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
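The abstract does not include code; as a rough illustration of the hybrid MPI + OpenMP model it refers to, with SIMD vectorization of inner loops as used on Xeon Phi, a minimal sketch might look like the following. The axpy-style kernel and all names are illustrative stand-ins, not taken from MILC, QOPQDP, or QPhiX.

```cpp
// Minimal illustrative sketch (not MILC/QPhiX code): hybrid MPI + OpenMP with
// SIMD vectorization of the inner loop. The kernel is a stand-in for the
// real lattice operators.
#include <cstdio>
#include <vector>
#include <mpi.h>
#include <omp.h>

int main(int argc, char** argv) {
    int provided;
    // Request thread support so OpenMP threads can coexist with MPI ranks.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    const int n = 1 << 20;                       // local problem size per rank
    std::vector<double> x(n, 1.0), y(n, 2.0);

    double local = 0.0;
    // Threads split the iteration space; "simd" asks the compiler to vectorize
    // each thread's chunk (512-bit vectors on Knights Landing).
    #pragma omp parallel for simd reduction(+:local)
    for (int i = 0; i < n; ++i) {
        y[i] = 0.5 * x[i] + y[i];
        local += y[i] * y[i];
    }

    // Ranks combine their partial sums across the machine.
    double global = 0.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) std::printf("global norm^2 = %f\n", global);

    MPI_Finalize();
    return 0;
}
```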


Author(s): Adam Jundt, Ananta Tiwari, William A. Ward, Roy Campbell, Laura Carrington

Author(s): Arunmoezhi Ramachandran, Jerome Vienne, Rob Van Der Wijngaart, Lars Koesterke, Ilya Sharapov

2017, Vol 2017, pp. 1-8
Author(s): Cem Bozkus, Basilio B. Fraguela

In recent years, vast amounts of data of different kinds are being generated, from pictures and videos taken with our cameras to software logs from sensor networks and Internet routers operating day and night. This has led to new big data problems, which require new algorithms able to handle such large volumes of data and which are, as a result, very computationally demanding. In this paper, we parallelize one of these new algorithms, the HyperLogLog algorithm, which estimates the number of distinct items in a large data set with minimal memory usage, lowering the typical memory requirement of this kind of calculation from O(n) to O(1). We have implemented parallelizations based on OpenMP and OpenCL and evaluated them on a standard multicore system, an Intel Xeon Phi, and two GPUs from different vendors. The results of our experiments, in which we reach a speedup of 88.6 with respect to an optimized sequential implementation, are very positive, particularly considering that this kind of algorithm must run on large amounts of data.
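The sketch below is an illustrative HyperLogLog estimator parallelized with OpenMP, not the authors' implementation. It assumes a precision of p = 14 (so m = 2^14 registers) and a splitmix64-style hash, both arbitrary choices for the example; each thread updates a private register array, and the arrays are merged with an element-wise maximum.

```cpp
// Illustrative HyperLogLog with OpenMP (assumed parameters, not the paper's code).
#include <cmath>
#include <cstdint>
#include <vector>
#include <omp.h>

constexpr int P = 14;            // precision: m = 2^P registers (example value)
constexpr int M = 1 << P;

// splitmix64-style mixer, standing in for whatever hash the paper uses.
static uint64_t hash64(uint64_t x) {
    x += 0x9e3779b97f4a7c15ULL;
    x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
    x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
    return x ^ (x >> 31);
}

double hll_estimate(const std::vector<uint64_t>& data) {
    std::vector<uint8_t> regs(M, 0);              // shared, merged registers

    #pragma omp parallel
    {
        std::vector<uint8_t> local(M, 0);         // per-thread registers
        #pragma omp for nowait
        for (size_t i = 0; i < data.size(); ++i) {
            uint64_t h = hash64(data[i]);
            uint32_t idx = (uint32_t)(h >> (64 - P));      // top P bits pick a register
            uint64_t rest = (h << P) | (1ULL << (P - 1));  // sentinel bit avoids clz(0)
            uint8_t rank = (uint8_t)(__builtin_clzll(rest) + 1);  // GCC/Clang builtin
            if (rank > local[idx]) local[idx] = rank;
        }
        #pragma omp critical                      // merge: element-wise maximum
        for (int j = 0; j < M; ++j)
            if (local[j] > regs[j]) regs[j] = local[j];
    }

    double sum = 0.0;
    for (int j = 0; j < M; ++j) sum += std::ldexp(1.0, -regs[j]);  // 2^(-reg[j])
    const double alpha = 0.7213 / (1.0 + 1.079 / M);
    return alpha * M * M / sum;                   // raw estimate, no range corrections
}
```

The element-wise-max merge is what makes the estimator easy to parallelize: per-item updates are independent, and the only synchronization is the final merge, done here in a critical section (a user-defined OpenMP reduction would be an equivalent alternative).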

