How to GAN LHC events

2019 ◽  
Vol 7 (6) ◽  
Author(s):  
Anja Butter ◽  
Tilman Plehn ◽  
Ramon Winterhalder

Event generation for the LHC can be supplemented by generative adversarial networks, which generate physical events and avoid highly inefficient event unweighting. For top pair production we show how such a network describes intermediate on-shell particles, phase space boundaries, and tails of distributions. In particular, we introduce the maximum mean discrepancy to resolve sharp local features. The approach can be extended in a straightforward manner to include, for instance, off-shell contributions, higher orders, or approximate detector effects.
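As a concrete illustration of the maximum mean discrepancy ingredient, the following is a minimal PyTorch sketch (not the authors' code) of a Gaussian-kernel MMD^2 penalty that can be added to a generator loss to resolve sharp local features such as invariant-mass peaks; the batch shapes, kernel bandwidth, and toy mass spectra are illustrative assumptions.

import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel values between the rows of x and y.
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(real, fake, sigma=1.0):
    # Biased estimator of the squared maximum mean discrepancy.
    return (gaussian_kernel(real, real, sigma).mean()
            + gaussian_kernel(fake, fake, sigma).mean()
            - 2 * gaussian_kernel(real, fake, sigma).mean())

# Toy usage: compare a real and a generated invariant-mass batch and
# add the result to the generator loss with some coefficient.
real_masses = torch.randn(512, 1) * 5 + 173.0   # stand-in for a top-mass peak
fake_masses = torch.randn(512, 1) * 8 + 170.0   # stand-in for generator output
loss_mmd = mmd2(real_masses, fake_masses, sigma=2.0)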

2010 ◽  
Vol 82 (1) ◽  
Author(s):  
André H. Hoang ◽  
Christoph J. Reißer ◽  
Pedro Ruiz-Femenía

2021 ◽  
pp. 107754632199356
Author(s):  
Jia Luo ◽  
Jinying Huang ◽  
Jiancheng Ma ◽  
Hongmei Li

Generative models have been applied in many fields and can be evaluated with many methods, but the proper evaluation metric varies with the application field, which makes the evaluation of generative adversarial networks inherently challenging. In this study, conditional deep convolutional generative adversarial networks were applied to mechanical fault diagnosis and then evaluated. We propose three evaluation metrics for conditional deep convolutional generative adversarial networks: the Jensen-Shannon divergence, the kernel maximum mean discrepancy, and the 1-nearest neighbor classifier. These metrics were used to distinguish generated samples from real samples, to test for mode collapse, and to detect overfitting, based on the dataset of the Electrical Engineering Laboratory of Case Western Reserve University and a planetary gearbox dataset measured in the laboratory. The Jensen-Shannon divergence could not reliably distinguish generated samples from real samples, whereas the other two metrics (kernel maximum mean discrepancy and the 1-nearest neighbor classifier) did, verifying the applicability of conditional deep convolutional generative adversarial networks to mechanical fault diagnosis.
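As a rough illustration of the three metrics, the following NumPy/SciPy/scikit-learn sketch computes a binned Jensen-Shannon distance, a Gaussian-kernel MMD estimate, and the leave-one-out 1-nearest-neighbor two-sample accuracy; the random feature vectors stand in for diagnosis features, and the sketch is an assumption about the setup, not the paper's implementation.

import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.neighbors import KNeighborsClassifier

def kernel_mmd(real, fake, sigma=1.0):
    # Biased estimate of the squared MMD with a Gaussian kernel.
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(real, real).mean() + k(fake, fake).mean() - 2 * k(real, fake).mean()

def one_nn_accuracy(real, fake):
    # Leave-one-out 1-NN accuracy on the pooled sample: values near 0.5
    # mean generated samples are indistinguishable from real ones,
    # values near 1.0 mean they are easily told apart.
    x = np.vstack([real, fake])
    y = np.array([0] * len(real) + [1] * len(fake))
    hits = 0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        clf = KNeighborsClassifier(n_neighbors=1).fit(x[mask], y[mask])
        hits += int(clf.predict(x[i:i + 1])[0] == y[i])
    return hits / len(x)

real = np.random.randn(200, 16)        # stand-in for real vibration features
fake = np.random.randn(200, 16) * 1.1  # stand-in for generated features

# Jensen-Shannon distance between binned marginals of one feature.
h_real, edges = np.histogram(real[:, 0], bins=30, density=True)
h_fake, _ = np.histogram(fake[:, 0], bins=edges, density=True)
print(jensenshannon(h_real, h_fake), kernel_mmd(real, fake), one_nn_accuracy(real, fake))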


2021 ◽  
Vol 10 (2) ◽  
Author(s):  
Bob Stienen ◽  
Rob Verheyen

We explore the use of autoregressive flows, a type of generative model with tractable likelihood, as a means of efficiently generating physical particle collider events. The usual maximum likelihood loss function is supplemented by an event weight, allowing for inference from event samples with variable, and even negative, event weights. To illustrate the efficacy of the model, we perform experiments with leading-order top pair production events at an electron collider with importance sampling weights, and with next-to-leading-order top pair production events at the LHC that involve negative weights.
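A minimal sketch of such a weighted likelihood loss, under the assumption that each event weight simply multiplies its log-likelihood; the Gaussian stand-in model and the normalization by the summed absolute weights are illustrative choices, and in the actual setup an autoregressive flow would supply log_prob.

import torch

def weighted_nll(log_prob, weights):
    # Each event contributes its (possibly negative) weight times its
    # log-likelihood; negative weights push density away from those events.
    return -(weights * log_prob).sum() / weights.abs().sum()

# Toy usage: fit the mean of a Gaussian to weighted 1-D "events".
events = torch.randn(1000, 1) + 2.0
weights = (torch.rand(1000) < 0.9).float() * 2 - 1  # ~10% negative, NLO-like

mu = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu], lr=0.05)
for _ in range(200):
    log_p = torch.distributions.Normal(mu, 1.0).log_prob(events).squeeze(-1)
    loss = weighted_nll(log_p, weights)
    opt.zero_grad()
    loss.backward()
    opt.step()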


2020 ◽  
Vol 3 (2) ◽  
Author(s):  
Anja Butter ◽  
Tilman Plehn ◽  
Ramon Winterhalder

Subtracting event samples is a common task in LHC simulation and analysis, and standard solutions tend to be inefficient. We employ generative adversarial networks to produce new event samples whose phase space distribution corresponds to the sum or difference of the input samples. We first illustrate with a toy example how such a network beats the statistical limitations of the training data. We then show how such a network can be used to subtract background events or to include non-local collinear subtraction events at the level of unweighted 4-vector events.
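One way to phrase such a subtraction as a GAN objective (a hypothetical sketch, not necessarily the paper's exact architecture) is to let the discriminator compare the base sample B against the generated events pooled with the subtraction sample S; if the pool matches B, the generator has learned B - S, with the relative sample sizes carrying the normalization.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))  # noise -> 4-vector
D = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))  # event -> logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(base_events, sub_events, n_gen):
    # Pool generated events with the subtraction sample; the discriminator
    # should fail to tell the pool from the base sample.
    pool = torch.cat([G(torch.randn(n_gen, 8)), sub_events])
    # Discriminator update: base sample labeled 1, pooled sample labeled 0.
    d_loss = (bce(D(base_events), torch.ones(len(base_events), 1))
              + bce(D(pool.detach()), torch.zeros(len(pool), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator update: make the pooled sample look like the base sample.
    pool = torch.cat([G(torch.randn(n_gen, 8)), sub_events])
    g_loss = bce(D(pool), torch.ones(len(pool), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Toy usage with stand-in 4-vector batches.
for _ in range(3):
    train_step(torch.randn(256, 4), torch.randn(64, 4), n_gen=192)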


2020 ◽  
Vol 10 (18) ◽  
pp. 6405
Author(s):  
Zhaokun Zhou ◽  
Yuanhong Zhong ◽  
Xiaoming Liu ◽  
Qiang Li ◽  
Shu Han

Generative adversarial networks (GANs) have had a revolutionary influence on sample generation. Maximum mean discrepancy GANs (MMD-GANs) show competitive performance compared with other GANs. However, the loss function of MMD-GANs is an empirical estimate of the maximum mean discrepancy (MMD) and is not precise in measuring the distance between sample distributions, which inhibits MMD-GAN training. We propose an efficient divide-and-conquer model, called DC-MMD-GANs, which constrains the MMD loss function to a tight bound on the deviation between the empirical estimate and the expected value of MMD, and which accelerates the training process. DC-MMD-GANs consist of a division step and a conquer step. In the division step, we learn an embedding of the training images with an auto-encoder and partition the training images into adaptive subsets through k-means clustering on the embedding. In the conquer step, the sub-models are fed the subsets separately and trained synchronously. The loss values of all sub-models are integrated into a new weighted-sum loss function. The new loss function, with its tight deviation bound, provides more precise gradients for improving performance. Experimental results show that with a fixed number of iterations, DC-MMD-GANs converge faster and achieve better performance than standard MMD-GANs on the CelebA and CIFAR-10 datasets.
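A minimal sketch of the division step under stated assumptions (toy fully connected auto-encoder, flattened CIFAR-10-like images, k = 4 clusters); each resulting subset would then feed one MMD-GAN sub-model, with the sub-model losses combined into a weighted sum.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Toy auto-encoder for flattened 32x32x3 images; the architecture is assumed.
enc = nn.Sequential(nn.Linear(3072, 256), nn.ReLU(), nn.Linear(256, 64))
dec = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3072))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

images = torch.rand(1024, 3072)  # stand-in for CIFAR-10-like training images
for _ in range(100):             # brief reconstruction training
    loss = ((dec(enc(images)) - images) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Partition the images into adaptive subsets via k-means on the embeddings.
with torch.no_grad():
    z = enc(images).numpy()
labels = torch.from_numpy(KMeans(n_clusters=4, n_init=10).fit_predict(z))
subsets = [images[labels == k] for k in range(4)]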

