The HepMC C++ Monte Carlo event record for High Energy Physics

2001 ◽ Vol 134 (1) ◽ pp. 41-46
Author(s): Matt Dobbs ◽ Jørgen Beck Hansen

2015 ◽ Vol 2015 ◽ pp. 1-7
Author(s): S. V. Chekanov

A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.
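
To illustrate the kind of scripted access such a repository enables, here is a minimal Python sketch that streams a remote event file to disk in chunks. The URL and file name are hypothetical placeholders, not actual HepSim endpoints, and the real HepSim tools may differ.

    # Minimal sketch of scripted access to an online Monte Carlo sample.
    # The URL is a hypothetical placeholder, not a real HepSim endpoint.
    import urllib.request

    SAMPLE_URL = "https://example.org/hepsim/pythia8_ttbar.promc"  # hypothetical

    def download_sample(url, dest):
        """Stream a remote event file to local disk in fixed-size chunks."""
        with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
            while True:
                chunk = response.read(1 << 20)  # read 1 MiB at a time
                if not chunk:
                    break
                out.write(chunk)

    if __name__ == "__main__":
        download_sample(SAMPLE_URL, "pythia8_ttbar.promc")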


2014 ◽ Vol 2014 ◽ pp. 1-13
Author(s): Florin Pop

Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and very low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods is an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing makes it possible to run simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The methods covered are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory as used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
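
As a concrete instance of the Markovian Monte Carlo methods surveyed here, the following Python sketch runs a random-walk Metropolis chain on a simple target density; it is a generic textbook illustration, not code from the paper.

    # Random-walk Metropolis sketch: sample x from an unnormalized density
    # p(x) = exp(-x^2 / 2) and estimate the second moment <x^2> (expect ~1).
    import math
    import random

    def metropolis(p, x0, steps, width=1.0):
        """Markov chain whose stationary distribution is proportional to p."""
        x, samples = x0, []
        for _ in range(steps):
            proposal = x + random.uniform(-width, width)
            # Accept with probability min(1, p(proposal) / p(x)).
            if random.random() < min(1.0, p(proposal) / p(x)):
                x = proposal
            samples.append(x)
        return samples

    if __name__ == "__main__":
        density = lambda x: math.exp(-0.5 * x * x)  # standard normal, unnormalized
        chain = metropolis(density, x0=0.0, steps=100_000)
        kept = chain[10_000:]  # discard burn-in
        print("estimate of <x^2>:", sum(v * v for v in kept) / len(kept))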


Author(s): Daniele Andreotti ◽ Armando Fella ◽ Eleonora Luppi

The BaBar experiment has collected data since 1999 to examine the violation of charge-parity (CP) symmetry in high energy physics. Event simulation for the experiment is a compute-intensive task owing to the complexity of the Monte Carlo simulation implemented on the GEANT engine. Data needed as input for the simulation (stored in the ROOT format) are classified into two categories: conditions data, which describe the detector status when data are recorded, and background trigger data, which provide the noise signals necessary for a realistic simulation. In this chapter, the grid approach is applied to the BaBar production framework using the INFN-GRID network.
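
A rough sketch of the input flow described above: each generated event is paired with conditions data and a randomly chosen background trigger record before simulation. All class and field names here are hypothetical illustrations, not the actual BaBar framework.

    # Hypothetical sketch of assembling simulation inputs: each event is
    # paired with detector conditions and a background-trigger record.
    import random
    from dataclasses import dataclass

    @dataclass
    class SimInput:
        event_id: int
        conditions: dict  # detector status at data-taking time
        background: dict  # noise-trigger record mixed into the event

    def build_sim_inputs(n_events, conditions, background_pool):
        """Pair each event with conditions and a random background trigger."""
        return [SimInput(i, conditions, random.choice(background_pool))
                for i in range(n_events)]

    if __name__ == "__main__":
        conditions = {"run": 4242, "hv_status": "nominal"}  # illustrative values
        backgrounds = [{"trigger": k, "occupancy": 0.01 * k} for k in range(10)]
        for sim in build_sim_inputs(3, conditions, backgrounds):
            print(sim)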


Author(s): Manuel Alejandro Segura ◽ Julian Salamanca ◽ Edwin Munevar

Specialized documentation written from a pedagogical standpoint to provide scientific and technological training for teachers and researchers who are starting out in the analysis of high energy physics (HEP) experiments is scarce. The lack of such material prolongs young scientists' learning process and raises the cost of experimental research. In this paper we present the Monte Carlo technique applied to simulating the threshold energy for producing the final-state particles of a specific two-body process (A + B → C + D), as a pedagogical environment for approaching an experimental analysis both computationally and conceptually. The active/interactive teaching-learning process presented here is expected to serve as an educational resource that shortens young scientists' learning curve and saves time and costs in HEP scientific research.
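
For beam A incident on a fixed target B, the invariant is s = m_A² + m_B² + 2 E_A m_B, and the threshold follows from requiring √s = m_C + m_D. The Python sketch below evaluates the resulting formula; the example reaction and masses are assumptions chosen for illustration, not values from the paper.

    # Lab-frame threshold beam energy for A + B -> C + D with B at rest:
    # s = m_A^2 + m_B^2 + 2*E_A*m_B, and sqrt(s) = m_C + m_D at threshold,
    # so E_A_thr = ((m_C + m_D)^2 - m_A^2 - m_B^2) / (2 * m_B).
    def threshold_energy(m_a, m_b, m_c, m_d):
        """Threshold energy of beam A; masses and energies in GeV."""
        return ((m_c + m_d) ** 2 - m_a ** 2 - m_b ** 2) / (2.0 * m_b)

    if __name__ == "__main__":
        # Illustrative reaction (an assumption): pi- + p -> K0 + Lambda.
        m_pi, m_p, m_k0, m_lam = 0.1396, 0.9383, 0.4976, 1.1157
        e_thr = threshold_energy(m_pi, m_p, m_k0, m_lam)
        print(f"threshold pion energy: {e_thr:.3f} GeV")  # roughly 0.907 GeV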


2005 ◽ Vol 20 (16) ◽ pp. 3880-3882
Author(s): Daniel Wicke

The DØ experiment faces many challenges in enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte Carlo simulations for the experiment have been produced on remote systems. For data analysis, a system of regional analysis centers (RACs) was established to supply the associated institutes with data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all the important computing tasks of a high energy physics experiment on systems distributed worldwide.
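
The tiered model described above can be pictured as a small topology: a central facility feeds regional analysis centers, which in turn supply their associated institutes. The site names in this Python sketch are hypothetical, not the actual DØ site list.

    # Illustrative tiered distribution topology; site names are hypothetical.
    TOPOLOGY = {
        "central-facility": ["rac-europe", "rac-americas"],
        "rac-europe": ["institute-a", "institute-b"],
        "rac-americas": ["institute-c"],
    }

    def distribution_path(site, root="central-facility"):
        """Chain of sites a dataset traverses from the root to `site`."""
        def walk(node, path):
            if node == site:
                return path + [node]
            for child in TOPOLOGY.get(node, []):
                found = walk(child, path + [node])
                if found:
                    return found
            return None
        return walk(root, [])

    if __name__ == "__main__":
        print(distribution_path("institute-b"))
        # ['central-facility', 'rac-europe', 'institute-b']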

