Editorial: Heterogeneous Computing for AI and Big Data in High Energy Physics

2021 ◽  
Vol 4 ◽  
Author(s):  
Daniele D’Agostino ◽  
Daniele Cesini

2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Florin Pop

Modern physics rests on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and temperatures approaching absolute zero are frontiers for many theoretical models. Simulation with stable numerical methods is an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High-performance computing makes it possible to run such simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high-performance computing. The methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory as used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT system architectures, programming paradigms, and storage capabilities.
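The Monte Carlo methods surveyed in this paper share one core idea: draw samples from a probability distribution and estimate physical quantities from the sample. As a minimal illustrative sketch (a toy example, not taken from the paper), one can estimate a particle's mean lifetime from simulated exponential decay times via inverse-transform sampling; the function name and parameters here are hypothetical:

```python
import math
import random

def monte_carlo_lifetime(true_lifetime=2.2e-6, n_events=100_000, seed=42):
    """Toy Monte Carlo: sample decay times t from an exponential
    distribution with mean `true_lifetime`, then estimate the
    lifetime back from the sample mean."""
    rng = random.Random(seed)
    # Inverse-transform sampling: t = -tau * ln(u), with u ~ U(0, 1)
    times = [-true_lifetime * math.log(rng.random()) for _ in range(n_events)]
    return sum(times) / n_events

estimate = monte_carlo_lifetime()
```

The statistical error of the estimate shrinks as tau / sqrt(N), which is why such simulations are run at large scale and in parallel, and why they become data-intensive.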


2021 ◽  
Vol 4 ◽  
Author(s):  
Sezen Sekmen ◽  
Gian Michele Innocenti ◽  
Bo Jayatilaka

This editorial summarizes the contributions to the Frontiers Research topic “Innovative Analysis Ecosystems for HEP Data”, established under the Big Data and AI in High Energy Physics section and appearing under the Frontiers in Big Data and Frontiers in Artificial Intelligence journals.


2020 ◽  
Vol 3 ◽  
Author(s):  
Marco Rovere ◽  
Ziheng Chen ◽  
Antonio Di Pilato ◽  
Felice Pantaleo ◽  
Chris Seez

One of the challenges of high granularity calorimeters, such as that to be built to cover the endcap region in the CMS Phase-2 Upgrade for HL-LHC, is that the large number of channels causes a surge in the computing load when clustering numerous digitized energy deposits (hits) in the reconstruction stage. In this article, we propose a fast and fully parallelizable density-based clustering algorithm, optimized for high-occupancy scenarios, where the number of clusters is much larger than the average number of hits in a cluster. The algorithm uses a grid spatial index for fast querying of neighbors and its timing scales linearly with the number of hits within the range considered. We also show a comparison of the performance on CPU and GPU implementations, demonstrating the power of algorithmic parallelization in the coming era of heterogeneous computing in high-energy physics.
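To illustrate the grid-index idea described above: binning hits into square cells whose side equals the search radius means that all neighbors of a hit lie in the 3x3 block of surrounding cells, so each neighbor query touches only local occupancy rather than all N hits. The following is a simplified density-based toy in that spirit, not the authors' actual implementation; the parameter names (`d_c` for the search radius, `rho_min` for the seeding density) are hypothetical:

```python
import math
from collections import defaultdict

def grid_cluster(hits, d_c=1.0, rho_min=2):
    """Toy density-based clustering of 2D hits using a grid spatial
    index. Returns one cluster id per hit (-1 marks an outlier)."""
    def cell(x, y):
        return (math.floor(x / d_c), math.floor(y / d_c))

    # Grid index: cell side = search radius, so all neighbors within
    # d_c of a hit lie in the 3x3 block of cells around it.
    grid = defaultdict(list)
    for i, (x, y) in enumerate(hits):
        grid[cell(x, y)].append(i)

    def neighbors(i):
        xi, yi = hits[i]
        cx, cy = cell(xi, yi)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    if j != i:
                        xj, yj = hits[j]
                        if (xi - xj) ** 2 + (yi - yj) ** 2 <= d_c ** 2:
                            out.append(j)
        return out

    nbr = [neighbors(i) for i in range(len(hits))]
    rho = [len(n) for n in nbr]            # local density = neighbor count

    # Process hits in order of decreasing density; ties broken by index.
    order = sorted(range(len(hits)), key=lambda i: -rho[i])
    pos = {i: k for k, i in enumerate(order)}

    labels = [-1] * len(hits)
    next_id = 0
    for i in order:
        earlier = [j for j in nbr[i] if pos[j] < pos[i]]
        if not earlier:
            if rho[i] >= rho_min:          # local density maximum: seed cluster
                labels[i] = next_id
                next_id += 1
        else:
            # Follow the highest-density neighbor, which was already
            # assigned a label in an earlier iteration.
            labels[i] = labels[min(earlier, key=pos.get)]
    return labels
```

Because density computation and cluster-following are per-hit operations over a bounded neighborhood, this structure parallelizes naturally, which is the property the article exploits in its GPU implementation.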


2018 ◽  
Vol 5 (1) ◽  
pp. 205395171876883 ◽  
Author(s):  
Andrew Bartlett ◽  
Jamie Lewis ◽  
Luis Reyes-Galindo ◽  
Neil Stephens

This paper argues that analyses of the ways in which Big Data has been enacted in other academic disciplines can provide us with concepts that will help understand the application of Big Data to social questions. We use examples drawn from our Science and Technology Studies (STS) analyses of -omic biology and high energy physics to demonstrate the utility of three theoretical concepts: (i) primary and secondary inscriptions, (ii) crafted and found data, and (iii) the locus of legitimate interpretation. These help us to show how the histories, organisational forms, and power dynamics of a field lead to different enactments of big data. The paper suggests that these concepts can be used to help us to understand the ways in which Big Data is being enacted in the domain of the social sciences, and to outline in general terms the ways in which this enactment might be different to that which we have observed in the ‘hard’ sciences. We contend that the locus of legitimate interpretation of Big Data biology and physics is tightly delineated, found within the disciplinary institutions and cultures of these disciplines. We suggest that when using Big Data to make knowledge claims about ‘the social’ the locus of legitimate interpretation is more diffuse, with knowledge claims that are treated as being credible made from other disciplines, or even by those outside academia entirely.


2021 ◽  
Author(s):  
Luca Giommi ◽  
Valentin Kuznetsov ◽  
Daniele Bonacorsi ◽  
Daniele Spiga
