OASIS: Optimal Analysis-Specific Importance Sampling for event generation

2021 ◽  
Vol 10 (2) ◽  
Author(s):  
Konstantin Matchev ◽  
Prasanth Shyamsundar

We propose a technique called Optimal Analysis-Specific Importance Sampling (OASIS) to reduce the number of simulated events required for a high-energy experimental analysis to reach a target sensitivity. We provide recipes to obtain the optimal sampling distributions, which preferentially focus the event generation on the regions of phase space with high utility to the experimental analysis. OASIS conserves resources at all stages of the Monte Carlo pipeline, including full-detector simulation, and is complementary to approaches that seek to speed up the simulation pipeline.
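As a minimal illustration of the reweighting idea behind analysis-specific importance sampling (a toy example, not the OASIS recipes themselves): events are drawn from a proposal distribution that oversamples a high-utility region of phase space, and each event carries a weight p/q so the estimate remains unbiased.

```python
import math
import random

def mc_estimate(sample, weight, f, n, seed=0):
    """Generic importance-sampling estimator of E_p[f(x)], where
    `sample(rng)` draws from a proposal q and `weight(x) = p(x)/q(x)`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample(rng)
        total += weight(x) * f(x)
    return total / n

# Toy "analysis": events distributed as p = Uniform(0, 1), but only the
# tail x > 0.9 passes the analysis cut (e.g. a high-pT selection).
def f(x):
    return 1.0 if x > 0.9 else 0.0

# Plain generation: q = p, unit weights.
plain = mc_estimate(lambda rng: rng.random(), lambda x: 1.0, f, 10000)

# Utility-biased generation: q(x) = 2x oversamples the tail
# (inverse-CDF sampling), corrected by the weight p/q = 1/(2x).
biased = mc_estimate(lambda rng: math.sqrt(rng.random()),
                     lambda x: 1.0 / (2.0 * x), f, 10000)
# Both estimate the same tail acceptance of 0.1; the biased run puts far
# more of its generated events in the region that matters to the analysis.
```

The variance reduction comes entirely from where the proposal spends its events: the weighted estimator is unbiased for any valid q, so the proposal can be tuned to the analysis without changing the answer.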

Author(s):  
Jimmy Ming-Tai Wu ◽  
Qian Teng ◽  
Shahab Tayeb ◽  
Jerry Chun-Wei Lin

The high average-utility itemset mining (HAUIM) model was established to provide a fairer measure than generic high-utility itemset mining (HUIM) for revealing interesting patterns. In practical applications, the database changes dynamically as insertion/deletion operations are performed on it. Several works were designed to handle the insertion process, but fewer studies focused on the deletion process for knowledge maintenance. In this paper, we develop a PRE-HAUI-DEL algorithm that utilizes the pre-large concept on HAUIM for handling transaction deletion in dynamic databases. The pre-large concept serves as a buffer on HAUIM that reduces the number of database scans while the database is updated, particularly under transaction deletion. Two upper-bound values are also established to prune unpromising candidates early, which speeds up the computation. Experimental results show that the designed PRE-HAUI-DEL algorithm performs well compared to the Apriori-like model in terms of runtime, memory, and scalability on dynamic databases.
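A minimal sketch of the average-utility measure and the pre-large two-threshold partition on which such maintenance algorithms build (the toy data and threshold ratios are invented for illustration):

```python
# Toy transaction database: item -> purchased quantity, plus unit profits.
profit = {"a": 3, "b": 1, "c": 5}
db = [
    {"a": 2, "b": 4},   # T1
    {"a": 1, "c": 2},   # T2
    {"b": 3, "c": 1},   # T3
]

def average_utility(itemset, db, profit):
    """Average utility au(X): for each transaction containing X, add the
    utility of X (quantity x unit profit) divided by |X|."""
    total = 0.0
    for t in db:
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset) / len(itemset)
    return total

def classify(au, total_utility, upper, lower):
    """Pre-large partition: 'large' above the upper threshold ratio,
    'pre-large' between the two ratios (kept as a buffer so the database
    need not be rescanned on every update), else 'small'."""
    if au >= upper * total_utility:
        return "large"
    if au >= lower * total_utility:
        return "pre-large"
    return "small"

# Total utility of the database, used to turn the ratios into thresholds.
tu = sum(sum(t[i] * profit[i] for i in t) for t in db)
```

When transactions are deleted, only itemsets whose status crosses one of the two thresholds can change category, which is what bounds the number of rescans.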


2021 ◽  
Vol 251 ◽  
pp. 03055
Author(s):  
John Blue ◽  
Braden Kronheim ◽  
Michelle Kuchera ◽  
Raghuram Ramanujan

Detector simulation in high energy physics experiments is a key yet computationally expensive step in the event simulation process. There has been much recent interest in using deep generative models as a faster alternative to the full Monte Carlo simulation process in situations in which the utmost accuracy is not necessary. In this work we investigate the use of conditional Wasserstein Generative Adversarial Networks to simulate both hadronization and the detector response to jets. Our model takes the 4-momenta of jets formed from partons post-showering and pre-hadronization as inputs and predicts the 4-momenta of the corresponding reconstructed jet. Our model is trained on fully simulated tt̄ events using the publicly available GEANT-based simulation of the CMS Collaboration. We demonstrate that the model produces accurate conditional reconstructed jet transverse momentum (pT) distributions over a wide range of pT for the input parton jet. Our model takes only a fraction of the time necessary for conventional detector simulation methods, running on a CPU in less than a millisecond per event.
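The network learns a map from a parton-level four-momentum to a reconstructed one. That interface can be mimicked by a toy parametric smearing, the kind of fast simulation such a generative model would replace; the 10% Gaussian resolution here is an assumed illustrative number, not a fitted CMS response:

```python
import math
import random

def smear_jet(parton_p4, rng, res=0.10):
    """Toy stand-in for a learned detector response: take a parton-jet
    four-momentum (E, px, py, pz) and return a 'reconstructed' one whose
    components are scaled by a Gaussian energy-scale fluctuation.
    res = 0.10 is an illustrative assumption."""
    s = max(rng.gauss(1.0, res), 0.0)   # truncate unphysical negative scales
    e, px, py, pz = parton_p4
    return (e * s, px * s, py * s, pz * s)

def pt(p4):
    """Transverse momentum from the (E, px, py, pz) convention used above."""
    return math.hypot(p4[1], p4[2])

rng = random.Random(1)
parton = (100.0, 60.0, 50.0, 40.0)
reco_pts = [pt(smear_jet(parton, rng)) for _ in range(5000)]
mean_pt = sum(reco_pts) / len(reco_pts)
# The smeared pT distribution is centred on the parton-jet pT, with a
# width set by the assumed resolution.
```

A conditional GAN replaces the hand-written `smear_jet` with a generator network whose noise input plays the role of `rng`, letting the response shape be learned from full simulation rather than parametrised.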


2019 ◽  
Vol 22 ◽  
pp. 88
Author(s):  
K. Balasi ◽  
C. Markou ◽  
K. Tzamarioudaki ◽  
P. Rapidis ◽  
E. Drakopoulou ◽  
...  

The response of an underwater neutrino detector is discussed in order to investigate its performance in the detection of muons and high-energy neutrinos. The aforementioned telescope consists of autonomous battery-operated detector strings attached to a central 4-floor tower. To this aim, we utilised a fast detector simulation program, SIRENE, to simulate the hits from Cherenkov photons at ultra-high energies (as high as 10²⁰ eV). In order to optimize the detector, different configurations and characteristics of the photomultiplier tubes inside the optical modules of the telescope were also examined.
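The photon yield that such a hit simulation starts from follows the Frank-Tamm formula. A minimal sketch (the refractive index and the wavelength band are assumed illustrative values, not SIRENE's configuration):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def cherenkov_photons_per_m(n, beta=1.0, lam1=300e-9, lam2=600e-9):
    """Frank-Tamm estimate of the number of Cherenkov photons emitted per
    metre of charged-particle track in the wavelength band [lam1, lam2]
    (metres), for refractive index n and particle speed beta (in units of c):
        dN/dx = 2*pi*alpha * (1 - 1/(beta*n)^2) * (1/lam1 - 1/lam2)."""
    if beta * n <= 1.0:
        return 0.0  # below the Cherenkov threshold: no emission
    band = 1.0 / lam1 - 1.0 / lam2
    return 2.0 * math.pi * ALPHA * (1.0 - 1.0 / (beta * n) ** 2) * band

# Sea water, n ~ 1.35 (assumed value for illustration): a relativistic
# muon radiates a few times 10^4 photons per metre in this band.
yield_per_m = cherenkov_photons_per_m(1.35)
```

The simulation then propagates these photons through absorption and scattering in the water before registering hits on the photomultiplier tubes.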


2019 ◽  
Vol 214 ◽  
pp. 02019
Author(s):  
V. Daniel Elvira

Detector simulation has become fundamental to the success of modern high-energy physics (HEP) experiments. For example, the Geant4-based simulation applications developed by the ATLAS and CMS experiments played a major role in enabling them to produce physics measurements of unprecedented quality and precision, with a faster turnaround from data taking to journal submission than any previous hadron collider experiment. The material presented here contains highlights of a recent review on the impact of detector simulation in particle physics collider experiments, published in Ref. [1]. It includes examples of applications to detector design and optimization, software development and testing of computing infrastructure, and modeling of physics objects and their kinematics. The cost and economic impact of simulation in the CMS experiment is also presented. A discussion of future detector simulation needs, challenges, and potential solutions to address them is included at the end.


2015 ◽  
Vol 52 (02) ◽  
pp. 519-537 ◽  
Author(s):  
Jere Koskela ◽  
Paul Jenkins ◽  
Dario Spanò

Full likelihood inference under Kingman's coalescent is a computationally challenging problem to which importance sampling (IS) and the product of approximate conditionals (PAC) methods have been applied successfully. Both methods can be expressed in terms of families of intractable conditional sampling distributions (CSDs), and rely on principled approximations for accurate inference. Recently, more general Λ- and Ξ-coalescents have been observed to provide better modelling fits to some genetic data sets. We derive families of approximate CSDs for finite sites Λ- and Ξ-coalescents, and use them to obtain ‘approximately optimal’ IS and PAC algorithms for Λ-coalescents, yielding substantial gains in efficiency over existing methods.
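The structure shared by the IS and PAC schemes, proposing each step of the latent genealogy from an approximate conditional distribution and correcting with a weight target/proposal, can be sketched on a toy Kingman example where the true step distribution is known and the "approximate CSD" is an arbitrary bias (everything below is illustrative, not the finite-sites proposals derived in the paper):

```python
import random
from itertools import combinations

def sis_first_pair_prob(n, target_pair, n_runs=20000, seed=0):
    """Toy sequential importance sampler on Kingman coalescent topologies:
    estimate the probability that `target_pair` is the first pair of
    lineages to coalesce.  The proposal biases the choice toward pairs
    containing lineage 1 (a stand-in for an approximate CSD) and corrects
    with the importance weight target/proposal."""
    rng = random.Random(seed)
    target = tuple(sorted(target_pair))
    total = 0.0
    for _ in range(n_runs):
        pairs = list(combinations(range(1, n + 1), 2))
        # Proposal: pairs containing lineage 1 are twice as likely.
        props = [2.0 if 1 in p else 1.0 for p in pairs]
        z = sum(props)
        u, acc, chosen, q = rng.random() * z, 0.0, None, 0.0
        for p, w in zip(pairs, props):
            acc += w
            if u <= acc:
                chosen, q = p, w / z
                break
        # Under Kingman's coalescent every pair is equally likely, so the
        # target step probability is 1/len(pairs).
        weight = (1.0 / len(pairs)) / q
        total += weight * (chosen == target)
    return total / n_runs

# For n = 4 lineages the exact answer is 1 / C(4,2) = 1/6.
est = sis_first_pair_prob(4, (3, 4))
```

The estimator stays unbiased however poor the proposal is; the quality of the approximate CSD only controls the variance, which is why "approximately optimal" proposals translate into efficiency gains.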


2019 ◽  
Vol 810 ◽  
pp. 101-106 ◽  
Author(s):  
Petr Haušild ◽  
Jaroslav Čech ◽  
Veronika Kadlecová ◽  
Miroslav Karlík ◽  
Filip Průša ◽  
...  

In this paper, a recently developed ternary FeAl20Si20 (wt.%) alloy with promising high-temperature oxidation and wear resistance was prepared by mechanical alloying in a high-energy ball mill. The possibility of speeding up the mechanical alloying process by replacing the elemental aluminium (and partly silicon) powder with a pre-alloyed powder (AlSi30), containing a relatively fine dispersion of Si in the Al-Si eutectic, was examined. The microstructure, phase composition and mechanical properties after various times of mechanical alloying were characterized. The effect of using the pre-alloyed powders on the kinetics of mechanical alloying is compared with the results obtained on batches prepared from elemental powders.


2019 ◽  
Vol 34 (18) ◽  
pp. 1950093
Author(s):  
Guang Yang ◽  
Bingfang Yang ◽  
Biaofeng Hou ◽  
Hengheng Bi

In the framework of the littlest Higgs model with T-parity (LHT), we investigate the single production of a vector-like top partner [Formula: see text] decaying to [Formula: see text] in the leptonic channel at a high-energy [Formula: see text] collision. We utilize polarized electron and photon beams to enhance the signal and propose a search strategy by performing a detector simulation. On the basis of the current limits from the precision electroweak data and Higgs data, we find that the top partner mass can be excluded up to 1350 (1380) GeV with an integrated luminosity of 1000 fb⁻¹ and 1400 (1470) GeV with an integrated luminosity of 3000 fb⁻¹ for the [Formula: see text] TeV (2.4 TeV) case at the [Formula: see text] level. If the center-of-mass energy is improved to 3.0 TeV, the limits on the top partner mass will reach 1450 (1550) GeV with integrated luminosities of 1000 (3000) fb⁻¹.
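The way such exclusion limits improve with integrated luminosity follows from simple event-count scaling: signal and background yields both grow linearly with luminosity, so a counting significance s/√b grows like √L. A sketch with hypothetical yields (the numbers are not taken from the analysis):

```python
import math

def significance(s, b):
    """Simple counting significance s/sqrt(b), a common proxy for the
    exclusion level quoted in collider sensitivity studies."""
    return s / math.sqrt(b)

def scale_lumi(s, b, lumi_ratio):
    """Event counts grow linearly with integrated luminosity, so the
    significance grows with its square root."""
    return significance(s * lumi_ratio, b * lumi_ratio)

# Hypothetical yields at a reference luminosity of 1000 fb^-1
# (illustrative numbers only).
s0, b0 = 30.0, 400.0
z1 = significance(s0, b0)      # at 1000 fb^-1
z3 = scale_lumi(s0, b0, 3.0)   # at 3000 fb^-1: improved by sqrt(3)
```

This √L scaling is why tripling the luminosity moves the mass reach by only a modest amount, consistent with the 1400 GeV versus 1470 GeV limits quoted above.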


Author(s):  
Philipp Roloff ◽  
Ulrike Schnoor ◽  
Rosa Simoniello ◽  
Boruo Xu

The Compact Linear Collider (CLIC) is a future electron–positron collider that will allow measurements of the trilinear Higgs self-coupling in double Higgs boson events produced at its high-energy stages, with collision energies from √s = 1.4 to 3 TeV. The sensitivity to the Higgs self-coupling is driven by measurements of the cross section and the invariant mass distribution of the Higgs-boson pair in the W-boson fusion process, e⁺e⁻ → HHνν̄. It is enhanced by including the cross-section measurement of ZHH production at 1.4 TeV. The expected sensitivity of CLIC to Higgs pair production through W-boson fusion is studied for the decay channels bb̄bb̄ and bb̄WW* using full detector simulation, including all relevant backgrounds, at √s = 1.4 TeV with an integrated luminosity of L = 2.5 ab⁻¹ and at √s = 3 TeV with L = 5 ab⁻¹. Combining e⁺e⁻ → HHνν̄ and ZHH cross-section measurements at 1.4 TeV with differential measurements in e⁺e⁻ → HHνν̄ events at 3 TeV, CLIC will be able to measure the trilinear Higgs self-coupling with a relative uncertainty of −8% and +11% at 68% C.L., assuming the Standard Model. In addition, prospects for simultaneous constraints on the trilinear Higgs self-coupling and the Higgs–gauge coupling HHWW are derived based on the HHνν̄ measurement.
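The asymmetric uncertainty on the self-coupling arises because the cross section depends non-linearly on the coupling modifier κλ. A toy sketch of the extraction (the quadratic coefficients and the 5% cross-section uncertainty are hypothetical, not CLIC's parametrisation):

```python
def xsec_ratio(kl, a=0.30, b=1.6, c=2.3):
    """Toy parametrisation of sigma(HHvv)/sigma_SM as a quadratic in the
    self-coupling modifier kl = lambda/lambda_SM.  The coefficients are
    hypothetical, normalised so that xsec_ratio(1.0) == 1 (the quadratic
    shape reflects the interfering diagrams in double-Higgs production)."""
    return (a - b * kl + c * kl * kl) / (a - b + c)

def interval_68(delta=0.05, lo=0.5, hi=1.5, step=1e-4):
    """Scan kl and keep the region where chi2 = ((R - 1)/delta)^2 <= 1,
    i.e. a 68% C.L. interval around the SM point for a measurement with
    relative cross-section uncertainty `delta`."""
    n = int(round((hi - lo) / step))
    pts = [lo + i * step for i in range(n + 1)]
    inside = [kl for kl in pts
              if ((xsec_ratio(kl) - 1.0) / delta) ** 2 <= 1.0]
    return min(inside), max(inside)

kl_lo, kl_hi = interval_68()
```

Because the slope of σ(κλ) differs on either side of the minimum, equal cross-section deviations map to unequal coupling deviations, which is the mechanism behind asymmetric intervals like the −8%/+11% quoted above.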


2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
Jerry Chun-Wei Lin ◽  
Wensheng Gan ◽  
Tzung-Pei Hong ◽  
Binbin Zhang

Association-rule mining is commonly used to discover useful and meaningful patterns from very large databases. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable for real-world applications, since the items purchased by a customer carry other factors, such as profit or quantity. High-utility mining was designed to address this limitation of association-rule mining by considering both the quantity and profit measures. Most high-utility mining algorithms are designed to handle static databases. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinational explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations, without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computation. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.
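A minimal sketch of the utility-list structure such algorithms build on: each itemset keeps (tid, itemset-utility, remaining-utility) triples, and the list of a longer itemset is obtained by joining shorter ones instead of rescanning the database. The toy data are invented, and the join shown is the basic single-item case, omitting the prefix-utility subtraction needed for longer prefixes:

```python
# Toy database: per transaction, item -> utility (quantity x unit profit).
# Items are assumed processed in the fixed total order a < b < c.
db = {
    1: {"a": 4, "b": 2, "c": 6},
    2: {"a": 3, "c": 1},
    3: {"b": 5, "c": 2},
}
ORDER = ["a", "b", "c"]

def build_utility_list(item):
    """Utility-list of a single item: (tid, iutil, rutil) triples, where
    rutil is the remaining utility of items after `item` in the order
    (used for upper-bound pruning)."""
    later = ORDER[ORDER.index(item) + 1:]
    ul = []
    for tid, t in db.items():
        if item in t:
            rutil = sum(t[i] for i in later if i in t)
            ul.append((tid, t[item], rutil))
    return ul

def join(ul_x, ul_y):
    """Join two single-item utility-lists to get the list of the 2-itemset
    without rescanning the database: keep common tids, sum the iutils,
    take the rutil of the later item."""
    by_tid = {tid: (iu, ru) for tid, iu, ru in ul_y}
    out = []
    for tid, iu, ru in ul_x:
        if tid in by_tid:
            iu2, ru2 = by_tid[tid]
            out.append((tid, iu + iu2, ru2))
    return out

def utility(ul):
    """Total utility of the itemset represented by a utility-list."""
    return sum(iu for _, iu, _ in ul)
```

Since each list carries both the exact utility and a remaining-utility upper bound, unpromising extensions can be pruned from the lists alone, which is what avoids candidate generation.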

