New techniques for pile-up simulation in ATLAS

2019 ◽  
Vol 214 ◽  
pp. 02044
Author(s):  
Tadej Novak

The high luminosity of the LHC leads to many proton-proton interactions per beam crossing in ATLAS, known as pile-up. In order to understand the ATLAS data and extract physics results, it is important to model these effects accurately in the simulation. As the pile-up rate grows towards an eventual 200 interactions per crossing at the HL-LHC, the computing resources required for the simulation increase accordingly, and the current approach of simulating the pile-up interactions along with the hard-scatter for each Monte Carlo production is no longer feasible. The new ATLAS “overlay” approach to pile-up simulation is presented. Here a pre-combined set of minimum-bias interactions, either from simulation or from real data, is created once, and a single event drawn from this set is overlaid with the hard-scatter event being simulated. This leads to significant savings in CPU time. This contribution discusses the technical aspects of the implementation in the ATLAS simulation and production infrastructure and compares the performance, both in terms of computing and physics, to the previous approach.
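To make the overlay idea concrete, here is a minimal, hypothetical sketch (toy data structures, not the ATLAS Athena implementation): a pre-combined minimum-bias event is drawn from a library prepared once, and its detector response is added channel by channel to the hard-scatter event, instead of re-simulating all pile-up interactions for every hard-scatter event.

```python
import random

# Toy event model: an "event" is a dict mapping channel id -> deposited energy.
# These names and structures are illustrative only, not the ATLAS classes.

def overlay(hard_scatter, premixed_pileup):
    """Merge a pre-combined pile-up event into a hard-scatter event."""
    combined = dict(hard_scatter)
    for channel, energy in premixed_pileup.items():
        combined[channel] = combined.get(channel, 0.0) + energy
    return combined

# The pre-combined pile-up sample is produced once (from simulated or real
# zero-bias data) and reused: each hard-scatter event draws a single event.
pileup_library = [{1: 0.3, 7: 1.2}, {2: 0.5, 7: 0.1}, {3: 2.0}]

hard_scatter_event = {7: 45.0, 42: 13.5}
overlaid_event = overlay(hard_scatter_event, random.choice(pileup_library))
print(overlaid_event)
```

The saving comes from amortisation: the expensive simulation or collection of the pile-up contribution is done once, and per-event work reduces to the overlay step above.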

2019 ◽  
Vol 97 (5) ◽  
pp. 498-508 ◽  
Author(s):  
L. Medina ◽  
R. Tomás ◽  
G. Arduini ◽  
M. Napsuciale

The High-Luminosity Large Hadron Collider (HL-LHC) experiments will operate at unprecedented levels of event pile-up from proton–proton collisions at 14 TeV centre-of-mass energy. In this paper, we study the performance of the baseline and of a series of alternative scenarios in terms of the delivered integrated luminosity and its quality (pile-up density). A new figure of merit, the effective pile-up density, is introduced; it reflects the expected detector efficiency in the reconstruction of event vertices for a given operational scenario and thus acts as a link between the machine and the experiments. Alternative scenarios have been proposed either to improve on the baseline performance or to provide operational schemes in the case of particular limitations. Simulations of the evolution of their optimum fills with the latest set of HL-LHC parameters are performed with β*-levelling, and the results are discussed in terms of both the integrated luminosity and the effective pile-up density. The crab kissing scheme, a scenario proposed for pile-up density control, is re-evaluated from this new perspective with updated beam and optics parameters. Estimates of the impact of crab cavity noise, full crab crossing, and a reduced burn-off cross section on the expected integrated luminosity are also presented.
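For orientation, a minimal sketch of how pile-up and its line density follow from the levelled luminosity, assuming a Gaussian longitudinal luminous region; the parameter values below are illustrative, not the official HL-LHC baseline set.

```python
import math

# Illustrative HL-LHC-like numbers (not the official baseline parameter set).
L_inst   = 5.0e34          # levelled instantaneous luminosity [cm^-2 s^-1]
sigma_in = 81e-27          # inelastic pp cross section [cm^2] (81 mb)
n_bunch  = 2748            # colliding bunch pairs
f_rev    = 11245.0         # LHC revolution frequency [Hz]
sigma_z  = 45.0            # r.m.s. length of the luminous region [mm]

# Average pile-up: interactions per bunch crossing.
mu = L_inst * sigma_in / (n_bunch * f_rev)

# Peak line pile-up density for a Gaussian longitudinal profile:
# rho(z) = mu / (sqrt(2*pi)*sigma_z) * exp(-z^2 / (2*sigma_z^2)), maximal at z = 0.
rho_peak = mu / (math.sqrt(2.0 * math.pi) * sigma_z)

print(f"pile-up mu           ~ {mu:.0f} events per crossing")
print(f"peak pile-up density ~ {rho_peak:.2f} events/mm")
```

The effective pile-up density defined in the paper additionally folds in the vertex-reconstruction efficiency of the detectors; the quantities above are only the raw geometric ingredients.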


2018 ◽  
Vol 35 (10) ◽  
pp. 2061-2078 ◽  
Author(s):  
Sid-Ahmed Boukabara ◽  
Kayo Ide ◽  
Yan Zhou ◽  
Narges Shahroudi ◽  
Ross N. Hoffman ◽  
...  

Observing system simulation experiments (OSSEs) are used to simulate and assess the impacts of new observing systems planned for the future, or the impacts of adopting new techniques for exploiting data or for forecasting. This study focuses on the impacts of satellite data on global numerical weather prediction (NWP) systems. Since OSSEs are based on simulations of nature and of the observations, reliable results require that the OSSE system be validated. This validation involves cycles of assessment and calibration of the individual system components, as well as of the complete system, with the end goal of reproducing the behavior of real-data observing system experiments (OSEs). This study investigates the accuracy of the calibration of an OSSE system, here the Community Global OSSE Package (CGOP), before any explicit tuning has been performed, by intercomparing the OSSE summary assessment metrics (SAMs) with those obtained from parallel real-data OSEs. The main conclusion is that, based on the SAMs, the CGOP is able to reproduce aspects of the analysis and forecast performance of parallel OSEs despite the simplifications employed in the OSSEs. This conclusion holds even when the SAMs are stratified by various subsets (the tropics only, temperature only, etc.).
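SAMs condense many verification statistics into normalized impact scores. The sketch below shows one common way such a score can be formed, the fractional change in forecast RMSE of an experiment relative to its control, aggregated over variables and lead times; the arrays and function names are hypothetical and not the actual CGOP code.

```python
import numpy as np

def fractional_rmse_impact(err_experiment, err_control):
    """Normalized impact score: negative values mean the experiment
    (e.g. adding a satellite data type) reduced the forecast error."""
    rmse_exp = np.sqrt(np.mean(err_experiment ** 2, axis=-1))
    rmse_ctl = np.sqrt(np.mean(err_control ** 2, axis=-1))
    return (rmse_exp - rmse_ctl) / rmse_ctl

rng = np.random.default_rng(0)
# Hypothetical forecast-minus-truth errors: (variable, lead time, grid point).
ctl = rng.normal(0.0, 1.00, size=(3, 5, 1000))
exp = rng.normal(0.0, 0.97, size=(3, 5, 1000))

impact = fractional_rmse_impact(exp, ctl)   # shape (variable, lead time)
print(impact.mean(axis=1))                  # one summary number per variable
```

Stratifying such scores (tropics only, temperature only, etc.) amounts to restricting the aggregation axes before averaging, which is how the comparison between OSSE and OSE behavior is carried out in practice.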


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Georges Aad ◽  
Anne-Sophie Berthold ◽  
Thomas Calvet ◽  
Nemer Chiedde ◽  
Etienne Marie Fortin ◽  
...  

The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the Phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, with an expected pile-up of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which makes energy reconstruction by the calorimeter detector more difficult. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pile-up, new machine-learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in the assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural-network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between the neural-network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels per FPGA for the most resource-efficient networks. Moreover, the achieved latency is about 200 ns. These performance parameters show that a neural-network-based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
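As an illustration of the inference arithmetic that has to fit into FPGA resources, here is a toy NumPy sketch of a small convolutional network acting on a stream of digitized pulse samples. The pulse shape, network size and weights are untrained placeholders, not the published ATLAS networks; in practice the network is trained offline and its fixed-point weights are loaded into the firmware.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_valid(x, kernel, bias):
    """Plain 'valid' 1D convolution over the ADC sample stream."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel + bias for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

# Toy digitized pulse stream: 40 MHz samples with two overlapping pulses.
samples = np.zeros(32)
pulse = np.array([0.0, 0.6, 1.0, 0.7, 0.3, 0.1])       # hypothetical pulse shape
samples[5:5 + len(pulse)] += 40.0 * pulse               # hard-scatter deposit
samples[8:8 + len(pulse)] += 15.0 * pulse               # overlapping pile-up deposit
samples += rng.normal(0.0, 0.5, size=samples.shape)     # electronics noise

# Untrained toy weights; only the data flow (convolution -> nonlinearity ->
# linear readout) is meant to be representative of what runs on the FPGA.
w_conv, b_conv = rng.normal(size=6), 0.0
w_out,  b_out  = rng.normal(size=27), 0.0               # 32 - 6 + 1 = 27 features

features = relu(conv1d_valid(samples, w_conv, b_conv))
energy_estimate = features @ w_out + b_out
print(f"reconstructed energy (toy, untrained): {energy_estimate:.2f}")
```

Keeping the kernel and layer sizes small, as above, is what makes the multiply-accumulate budget and latency compatible with time-division multiplexing of several hundred channels per device.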


Several different methods of using multi-wavelength anomalous scattering data are described and illustrated by application to the solution of the known protein structure, core streptavidin, for which data at three wavelengths were available. Three of the methods depend on the calculation of Patterson-like functions for which the Fourier coefficients involve combinations of the anomalous structure amplitudes from either two or three wavelengths. Each of these maps should show either vectors between anomalous scatterers or vectors between anomalous scatterers and non-anomalous scatterers. While they do so when ideal data are used, with real data they give little information; it is concluded that these methods are far too sensitive to errors in the data and to the scaling of the data sets to each other. Another Patterson-type function, the Ps function, which uses only single-wavelength data, can be made more effective by combining the information from several wavelengths. Two analytical methods are described, called AGREE and ROTATE, both of which were very successfully applied to the core streptavidin data. They are both made more effective by preprocessing the data with a procedure called REVISE, which brings a measure of mutual consistency to the data from different wavelengths. The best phases obtained from AGREE lead to a map with a conventional correlation coefficient of 0.549, which should readily be interpretable in terms of a structural model.
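For reference, a Patterson-type map is the Fourier transform of squared structure-factor amplitudes, so its peaks fall on interatomic vectors; the functions discussed above use combinations of anomalous amplitudes from two or three wavelengths as the coefficients. Below is a one-dimensional toy sketch of the general construction, with illustrative atom positions unrelated to the streptavidin data.

```python
import numpy as np

n = 256                          # grid points along a 1D toy "cell"
rho = np.zeros(n)                # toy density with two point atoms
rho[40] = 3.0
rho[100] = 5.0

F = np.fft.fft(rho)              # structure factors of the toy density
patterson = np.fft.ifft(np.abs(F) ** 2).real   # Patterson = FT of |F|^2

# Peaks appear at the origin and at the interatomic vector +/- 60 grid steps.
peaks = np.argsort(patterson)[::-1][:3]
print(sorted(peaks))             # expect values near 0, 60 and 196 (= -60 mod 256)
```

The sensitivity to errors noted in the abstract follows directly from this construction: the coefficients are small differences of large measured amplitudes, so measurement and inter-wavelength scaling errors propagate strongly into the map.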


2009 ◽  
Vol os16 (4) ◽  
pp. 157-163 ◽  
Author(s):  
Vernon P Holt ◽  
Russ Ladwa

Mentoring and coaching, as they are currently practised, are relatively new techniques for working with people. The roots of the current approach can be traced back to the psychotherapist Carl Rogers, who developed a new ‘person-centred approach’ to counselling and quickly realised that this approach was also appropriate for many types of relationship, from education to family life. Rogers’ thinking was deeply influenced by dialogues with his friend, the existentialist philosopher Martin Buber. Developments in psychology building upon this new person-centred approach include transactional analysis (TA) and neurolinguistic programming (NLP). More recently, solutions-focused approaches have been used and a related approach to leadership in the business environment, strengths-based leadership, has been developed. In recent years, developments in neuroscience have greatly increased understanding not only of how the brain is ‘wired up’ but also of how it is specifically wired to function as a social organ. The increased understanding in these areas can be considered in the context of emotional and social intelligence. These concepts and knowledge have been drawn together into a more structured discipline with the development of the approach known as positive psychology, the focus of which is on the strengths and virtues that contribute to good performance and authentic happiness.


2019 ◽  
Author(s):  
Mike W.-L. Cheung

Conventional meta-analytic procedures assume that effect sizes are independent. When effect sizes are non-independent, conclusions based on these conventional models can be misleading or even wrong. Traditional approaches, such as averaging the effect sizes and selecting one effect size per study, are usually used to remove the dependence of the effect sizes. These ad-hoc approaches, however, may lead to missed opportunities to utilize all available data to address the relevant research questions. Both multivariate meta-analysis and three-level meta-analysis have been proposed to handle non-independent effect sizes. This paper gives a brief introduction to these new techniques for applied researchers. The first objective is to highlight the benefits of using these methods to address non-independent effect sizes. The second objective is to illustrate how to apply these techniques with real data in R and Mplus. Researchers may modify the sample R and Mplus code to fit their data.
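As a conceptual illustration (the paper's own worked examples are in R and Mplus), the following Python sketch fits a three-level meta-analysis by maximum likelihood, with hypothetical effect sizes nested within studies and known sampling variances; the data and variable names are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: several effect sizes per study with known sampling
# variances -- the non-independence a three-level model is designed to handle.
effects = [np.array([0.30, 0.42]),          # study 1
           np.array([0.10, 0.25, 0.18]),    # study 2
           np.array([0.55])]                # study 3
variances = [np.array([0.02, 0.03]),
             np.array([0.04, 0.02, 0.03]),
             np.array([0.05])]

def neg_loglik(params):
    beta, log_t2_within, log_t2_between = params
    t2_w, t2_b = np.exp(log_t2_within), np.exp(log_t2_between)
    nll = 0.0
    for y, v in zip(effects, variances):
        # Marginal covariance: sampling + within-study + shared between-study variance.
        V = np.diag(v + t2_w) + t2_b
        r = y - beta
        _, logdet = np.linalg.slogdet(V)
        nll += 0.5 * (logdet + r @ np.linalg.solve(V, r))
    return nll

fit = minimize(neg_loglik, x0=np.array([0.0, np.log(0.01), np.log(0.01)]),
               method="Nelder-Mead")
beta_hat = fit.x[0]
tau2_within, tau2_between = np.exp(fit.x[1:])
print(f"pooled effect = {beta_hat:.3f}, "
      f"tau^2 within = {tau2_within:.4f}, tau^2 between = {tau2_between:.4f}")
```

In contrast to averaging or picking one effect size per study, every observed effect contributes here, and the two variance components separate within-study from between-study heterogeneity.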


2021 ◽  
Vol 251 ◽  
pp. 02042
Author(s):  
Maria Girone ◽  
David Southwick ◽  
Viktor Khristenko ◽  
Miguel F. Medeiros ◽  
Domenico Giordano ◽  
...  

The Large Hadron Collider (LHC) will enter a new phase beginning in 2027 with the upgrade to the High Luminosity LHC (HL-LHC). The increase in the number of simultaneous collisions, coupled with a more complex structure of a single event, will result in each LHC experiment collecting, storing, and processing exabytes of data per year. The amount of generated and/or collected data greatly outweighs the expected available computing resources. In this paper, we discuss efficient usage of HPC resources as a prerequisite for data-intensive science at exascale. First, we discuss the experience of porting the CMS hadron and electromagnetic calorimeter reconstruction code to Nvidia GPUs within the DEEP-EST project; second, we look at the tools, and their adoption, for benchmarking the variety of resources available at HPC centers. Finally, we touch on one of the most important aspects of the future of HEP: how to handle the flow of petabytes of data to and from computing facilities, be they clouds or HPCs, for exascale data processing in a flexible, scalable and performant manner. These investigations are a key contribution to the technical work within the HPC collaboration among CERN, SKA, GEANT and PRACE.
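As a hedged illustration of workload-based benchmarking (not the actual HEP benchmarking tools mentioned above), one can time a fixed, reproducible reference workload on each resource and compare the resulting event throughput; everything below is a stand-in example.

```python
import time
import numpy as np

def reference_workload(n_events=2000, samples_per_event=4096, seed=0):
    """Fixed, reproducible stand-in for a per-event reconstruction workload."""
    rng = np.random.default_rng(seed)
    checksum = 0.0
    for _ in range(n_events):
        pulse = rng.normal(size=samples_per_event)
        # FFT-based filtering as a stand-in for per-event signal processing.
        spectrum = np.fft.rfft(pulse)
        spectrum[samples_per_event // 8:] = 0.0
        checksum += float(np.abs(np.fft.irfft(spectrum, n=samples_per_event)).sum())
    return checksum

start = time.perf_counter()
reference_workload()
elapsed = time.perf_counter() - start
print(f"throughput score: {2000 / elapsed:.1f} events/s")
```

A score defined this way is comparable across heterogeneous machines because the workload, input sizes and random seed are fixed; only the hardware and software stack vary.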


2020 ◽  
Author(s):  
Miriam K. Forbes

Goldberg’s (2006) bass-ackwards approach to elucidating the hierarchical structure of individual differences data has been used widely to improve our understanding of the relationships among constructs of varying levels of granularity. The traditional approach has been to extract a single component on the first level of the hierarchy, two components on the second level, and so on, treating the correlations between the components on adjoining levels akin to path coefficients in a hierarchical structure. This article proposes three modifications to the current approach, with a particular focus on examining associations among all levels of the hierarchy: 1) identify and remove redundant components that perpetuate through multiple levels of the hierarchy; 2) (optionally) identify and remove artefactual components; and 3) plot the strongest component correlations among the remaining components to identify their hierarchical associations. Together these steps can offer a simpler and more complete picture of the underlying hierarchical structure among a set of observed variables. The rationale for each step is described, illustrated in a hypothetical example, and then applied to real data with specific methodological recommendations. The results are compared to the traditional bass-ackwards approach, and basic code is provided to apply the proposed modifications to other data.
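A minimal sketch of the traditional bass-ackwards skeleton that these modifications build on (simulated data and unrotated principal components via scikit-learn; rotation and the proposed redundancy checks are omitted): extract one component at level 1, two at level 2, and so on, then correlate component scores across adjoining levels.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical item-level data with a rough three-factor structure.
latent = rng.normal(size=(500, 3))
loadings = rng.normal(size=(3, 12))
X = latent @ loadings + rng.normal(scale=0.5, size=(500, 12))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Level k of the hierarchy keeps the component scores of a k-component solution.
n_levels = 4
scores = [PCA(n_components=k).fit_transform(X) for k in range(1, n_levels + 1)]

# Correlations between component scores on adjoining levels play the role of
# path coefficients in the traditional bass-ackwards hierarchy.
for level in range(n_levels - 1):
    upper, lower = scores[level], scores[level + 1]
    k = upper.shape[1]
    cross = np.corrcoef(upper.T, lower.T)[:k, k:]
    print(f"level {level + 1} -> level {level + 2}:\n{np.round(cross, 2)}")
```

The article's modifications act on exactly these outputs: components whose scores correlate near 1.0 across levels are flagged as redundant, and the remaining cross-level correlations, not only those between adjoining levels, are plotted to recover the hierarchy.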


2018 ◽  
Vol 192 ◽  
pp. 00032 ◽  
Author(s):  
Rosamaria Venditti

The High-Luminosity Large Hadron Collider (HL-LHC) is a major upgrade of the LHC, expected to deliver an integrated luminosity of up to 3000/fb over one decade. The very high instantaneous luminosity will lead to about 200 proton-proton collisions per bunch crossing (pile-up) superimposed on each event of interest, providing extremely challenging experimental conditions. The scientific goals of the HL-LHC physics program include precise measurements of the properties of the recently discovered standard model Higgs boson and searches for physics beyond the standard model (heavy vector bosons, SUSY, dark matter and exotic long-lived signatures, to name a few). In this contribution we present the strategy of the CMS experiment to investigate the feasibility of such searches and to quantify the increase in sensitivity in the HL-LHC scenario.

