Performance Analysis
Recently Published Documents
(FIVE YEARS 11514)



Pengbo Dou, Teng Jia, Peng Chu, Yanjun Dai, Chunhui Shou. Energy, 2022, Vol. 243, pp. 123125.

W. Beno Wincy, M. Edwin, U. Arunachalam, S. Joseph Sekhar. Fuel, 2022, Vol. 313, pp. 123018.

Sourabh Kulkarni, Mario Michael Krell, Seth Nabarro, Csaba Andras Moritz. 2022, Vol. 18 (2), pp. 1-24.

Epidemiology models are central to understanding and controlling large-scale pandemics. Several epidemiology models require simulation-based inference, such as Approximate Bayesian Computation (ABC), to fit their parameters to observations. ABC inference is highly amenable to efficient hardware acceleration. In this work, we develop parallel ABC inference for a stochastic epidemiology model of COVID-19. The statistical inference framework is implemented and compared on Intel's Xeon CPU, NVIDIA's Tesla V100 GPU, Google's V2 Tensor Processing Unit (TPU), and Graphcore's Mk1 Intelligence Processing Unit (IPU), and the results are discussed in the context of their computational architectures. Results show that TPUs are 3×, GPUs are 4×, and IPUs are 30× faster than Xeon CPUs. Extensive performance analysis indicates that the difference between the IPU and the GPU can be attributed to the IPU's higher communication bandwidth, closer proximity of memory to compute, and higher compute power. The proposed framework scales across 16 IPUs, with scaling overhead not exceeding 8% for the experiments performed. We present an example of our framework in practice, performing inference on the epidemiology model across three countries and giving a brief overview of the results.
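The data-parallel structure that makes ABC inference amenable to accelerators can be illustrated with a minimal sketch. The stochastic SIR model, uniform priors, and quantile-based tolerance rule below are illustrative assumptions, not the paper's actual model or framework; the key point is that every candidate parameter draw is simulated independently, so the workload maps naturally onto many-core hardware such as GPUs, TPUs, and IPUs.

```python
import numpy as np

def simulate_sir(beta, gamma, n_days=60, population=1_000_000, i0=10, rng=None):
    """Stochastic SIR simulation with daily binomial transition counts.

    A hypothetical stand-in for the epidemiology model in the paper;
    returns the daily new-infection trajectory for one parameter draw.
    """
    if rng is None:
        rng = np.random.default_rng()
    s, i = population - i0, i0
    new_cases = []
    for _ in range(n_days):
        # Binomial draws approximate the stochastic transition counts.
        infections = rng.binomial(s, 1.0 - np.exp(-beta * i / population))
        recoveries = rng.binomial(i, 1.0 - np.exp(-gamma))
        s -= infections
        i += infections - recoveries
        new_cases.append(infections)
    return np.array(new_cases)

def abc_rejection(observed, n_sims=2000, seed=0):
    """ABC rejection sampling: draw parameters from the prior, simulate,
    and keep draws whose trajectory lies close to the observations.
    Each of the n_sims simulations is independent, which is what makes
    the method easy to parallelise across accelerator cores."""
    rng = np.random.default_rng(seed)
    betas = rng.uniform(0.05, 0.8, n_sims)   # prior on infection rate
    gammas = rng.uniform(0.05, 0.5, n_sims)  # prior on recovery rate
    dists = np.array([
        np.linalg.norm(
            simulate_sir(b, g, n_days=len(observed), rng=rng) - observed)
        for b, g in zip(betas, gammas)
    ])
    epsilon = np.quantile(dists, 0.05)  # keep the closest 5% of draws
    keep = dists <= epsilon
    return betas[keep], gammas[keep]

# Generate synthetic "observed" data from known parameters, then infer them.
rng = np.random.default_rng(42)
observed = simulate_sir(0.3, 0.1, rng=rng)
post_beta, post_gamma = abc_rejection(observed)
```

On an accelerator, the serial list comprehension over parameter draws would instead be vectorised or sharded across cores, which is the parallelisation the abstract describes.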

Omar Rafae Alomar, Hareth Maher Abd, Mothana M. Mohamed Salih, Firas Aziz Ali. 2022, Vol. 13 (4), pp. 101684.

Yujiang Jiang, Guangjian Wang, Qing Luo, Shuaidong Zou. 2022, Vol. 170, pp. 104697.
