Evaluating POWER Architecture for Distributed Training of Generative Adversarial Networks

Author(s): Ahmad Hesam, Sofia Vallecorsa, Gulrukh Khattak, Federico Carminati
Author(s): Sofia Vallecorsa, Federico Carminati, Gulrukh Khattak, Damian Podareanu, Valeriu Codreanu, et al.

2019, Vol. 214, pp. 06025
Author(s): Jean-Roch Vlimant, Felice Pantaleo, Maurizio Pierini, Vladimir Loncar, Sofia Vallecorsa, et al.

In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high-energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training of neural-network models has been made tractable by improved optimization methods and by the advent of general-purpose GPUs (GP-GPUs), which are well adapted to the highly parallelizable task of training neural networks. Despite these advancements, training large models over large datasets can take days to weeks, and finding the best model architecture and settings can take many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework using the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup of training generative adversarial networks on a dataset composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter.
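The abstract above describes MPI-driven, data-parallel training of Keras or PyTorch models. Below is a minimal sketch of that pattern using Horovod, an MPI-based framework, as a stand-in for the authors' tooling; the model, dataset, and hyperparameters are illustrative placeholders, not the configuration used in the paper.

```python
# Minimal sketch of MPI-based data-parallel training with Horovod.
# The dense model and the random dataset are illustrative stand-ins
# for the GAN and calorimeter data discussed in the abstract.
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one MPI rank per process

# Pin each rank to a single GPU so ranks do not contend for devices.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(64,)),
    tf.keras.layers.Dense(1),
])

# Scale the learning rate with the number of ranks, then wrap the
# optimizer so gradients are averaged across ranks via MPI allreduce.
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(loss='mse', optimizer=opt)

# Broadcast the initial weights from rank 0 so all ranks start identically.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

x = np.random.rand(1024, 64).astype('float32')  # placeholder data
y = np.random.rand(1024, 1).astype('float32')
model.fit(x, y, batch_size=64, epochs=2, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Launched with, e.g., `horovodrun -np 4 python train.py`, each rank processes its share of the batch and gradients are averaged over MPI allreduce, which is the data-parallel scheme the abstract describes.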


2019, Vol. 214, pp. 09005
Author(s): Steven Farrell, Wahid Bhimji, Thorsten Kurth, Mustafa Mustafa, Deborah Bard, et al.

Initial studies have suggested that generative adversarial networks (GANs) show promise as fast simulations within HEP. These studies, while promising, have been insufficiently precise and, like GANs in general, suffer from stability issues. We apply GANs to generate full particle-physics events (not individual physics objects), explore conditioning of generated events on physics-theory parameters, and evaluate the precision and generalization of the produced datasets. We apply this to SUSY mass-parameter interpolation and pileup generation. We also discuss recent developments in convergence and in representations that match the structure of the detector better than images. In addition, we describe ongoing work making use of large-scale distributed resources on the Cori supercomputer at NERSC, as well as developments to control distributed training via interactive Jupyter notebook sessions. This will allow tackling high-resolution detector data, along with model selection and hyperparameter tuning, in a productive yet scalable deep-learning environment.
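As an illustration of the conditioning idea in this abstract, the sketch below builds a Keras generator that takes a theory parameter (here a hypothetical SUSY mass value) alongside the latent noise vector, so generated events can be interpolated across parameter values. The layer sizes and the flattened event representation are assumptions for the sketch, not the authors' architecture.

```python
# Minimal sketch of a generator conditioned on a physics-theory parameter.
# LATENT_DIM and EVENT_DIM are assumed values, not taken from the paper.
import tensorflow as tf

LATENT_DIM = 100   # dimensionality of the noise vector (assumed)
EVENT_DIM = 256    # flattened event/detector representation (assumed)

def build_conditional_generator():
    noise = tf.keras.Input(shape=(LATENT_DIM,), name='noise')
    mass = tf.keras.Input(shape=(1,), name='susy_mass')  # conditioning input

    # Embed the scalar condition, then merge it with the noise vector
    # so every downstream layer sees the theory parameter.
    cond = tf.keras.layers.Dense(16, activation='relu')(mass)
    x = tf.keras.layers.Concatenate()([noise, cond])
    x = tf.keras.layers.Dense(512, activation='relu')(x)
    x = tf.keras.layers.Dense(512, activation='relu')(x)
    event = tf.keras.layers.Dense(EVENT_DIM, activation='tanh')(x)

    return tf.keras.Model([noise, mass], event,
                          name='conditional_generator')

generator = build_conditional_generator()
generator.summary()
```

Feeding the conditioning value as a separate input is what makes parameter interpolation possible: at inference time one can sweep the mass input across values never seen during training and inspect how the generated events respond.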


2020
Author(s): Federico Carminati, Sofia Vallecorsa, Gulrukh Khattak, Valeriu Codreanu, Damian Podareanu, et al.
