Segmentation of EM showers for neutrino experiments with deep graph neural networks

2021, Vol 16 (12), pp. P12035
Author(s): V. Belavin, E. Trofimova, A. Ustyuzhanin

Abstract We introduce the first algorithm for the reconstruction of multiple showers from data collected with electromagnetic (EM) sampling calorimeters. Such detectors are widely used in high energy physics to measure the energy and kinematics of incoming particles. In this work, we consider the case when many electrons pass through an Emulsion Cloud Chamber (ECC) brick, initiating electron-induced electromagnetic showers, as happens with long exposure times or a large incoming particle flux. For example, the SHiP experiment plans to use emulsion detectors for dark matter searches and neutrino physics. The expected full flux of the SHiP experiment is about 10²⁰ particles over five years. To reduce the costs associated with replacing the ECC brick and with off-line data taking (emulsion scanning), it was decided to increase the exposure time. We therefore expect to observe many overlapping showers, which turns EM shower reconstruction into a challenging point-cloud segmentation problem. Our reconstruction pipeline consists of a graph neural network that predicts an adjacency matrix, followed by a clustering algorithm. We propose a new layer type (EmulsionConv) that takes into account the geometrical properties of shower development in the ECC brick. For the clustering of overlapping showers, we use a modified hierarchical density-based clustering algorithm. Our method does not use any prior information about the incoming particles and identifies up to 87% of electromagnetic showers in emulsion detectors. The achieved energy resolution over 16,577 showers is σE/E = (0.095 ± 0.005) + (0.134 ± 0.011)/√E. The main test bench for the shower reconstruction algorithm will be SND@LHC.
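
As a rough sketch of the two-stage pipeline described above (a network scores the edges of a hit graph to predict an adjacency matrix, and a clustering step then groups the hits), here is a minimal, hypothetical Python illustration. The distance-based edge score is a placeholder for the trained EmulsionConv network, and connected components stand in for the paper's modified hierarchical density-based clustering; none of the names or thresholds below come from the paper.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
# Toy basetrack positions: two separated "showers" in (x, y, z).
hits = np.vstack([rng.normal([0, 0, 0], 0.3, (100, 3)),
                  rng.normal([2, 0, 0], 0.3, (100, 3))])

# Stage 1: build a k-nearest-neighbour hit graph and score each edge.
# The distance-based score below is a placeholder for the GNN's prediction.
tree = cKDTree(hits)
dists, idx = tree.query(hits, k=6)            # 5 neighbours + the hit itself
rows = np.repeat(np.arange(len(hits)), 5)
cols = idx[:, 1:].ravel()
scores = np.exp(-dists[:, 1:].ravel() / 0.1)  # placeholder edge score in (0, 1]

# Stage 2: threshold the scores into an adjacency matrix and cluster it.
keep = scores > 0.2
adj = csr_matrix((np.ones(keep.sum()), (rows[keep], cols[keep])),
                 shape=(len(hits), len(hits)))
n_showers, labels = connected_components(adj, directed=False)
print(n_showers, "candidate showers")  # ideally the two blobs, plus outliers
```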

2019, Vol 214, pp. 05026
Author(s): Jiaheng Zou, Tao Lin, Weidong Li, Xingtao Huang, Ziyan Deng, ...

SNiPER is a general purpose offline software framework for high energy physics experiments. It provides several features that are attractive to neutrino experiments, such as the event buffer: more than one event is kept in the buffer according to a customizable time window, making it easy for users to perform event correlation analysis. We also implemented MT-SNiPER to support multithreaded computing based on Intel TBB. In MT-SNiPER, the event loop is split into pieces, and each piece is dispatched to a task. The global buffer, an extension and enhancement of the event buffer, is implemented for MT-SNiPER. The global buffer is accessible to all threads and keeps all the events being processed in memory. When a task becomes available, a subset of the events is dispatched to it. There can be overlaps between the subsets in different tasks due to the time window; however, it is ensured that each event is processed only once. On the task side, the subsets of events are managed locally by a normal event buffer, so the global buffer is transparent to most user algorithms. With the global buffer, the multithreaded computing of MT-SNiPER becomes more practical.
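
The dispatch logic described above can be sketched as follows; this is a hypothetical toy in Python, not the actual SNiPER/MT-SNiPER API, and all class and method names are invented for illustration. Subsets handed to different tasks may overlap because of the time window, but a processed flag guarantees each event is processed exactly once.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    time: float
    processed: bool = False

@dataclass
class GlobalBuffer:
    window: float                       # correlation time window
    events: list = field(default_factory=list)

    def subset_for_task(self, t0: float, t1: float):
        """Events in [t0 - window, t1 + window]; overlaps between tasks are allowed."""
        return [e for e in self.events
                if t0 - self.window <= e.time <= t1 + self.window]

def run_task(subset):
    # A task processes only the events not yet claimed by another task.
    for e in subset:
        if not e.processed:
            e.processed = True          # in MT-SNiPER this must be thread-safe

buf = GlobalBuffer(window=1.0, events=[Event(t) for t in range(10)])
run_task(buf.subset_for_task(0, 4))     # piece 1 of the event loop
run_task(buf.subset_for_task(5, 9))     # piece 2; overlaps piece 1 via the window
print(sum(e.processed for e in buf.events))  # 10: each event processed once
```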


2020, Vol 3
Author(s): Marco Rovere, Ziheng Chen, Antonio Di Pilato, Felice Pantaleo, Chris Seez

One of the challenges of high granularity calorimeters, such as the one to be built to cover the endcap region in the CMS Phase-2 Upgrade for the HL-LHC, is that the large number of channels causes a surge in computing load when clustering the numerous digitized energy deposits (hits) in the reconstruction stage. In this article, we propose a fast and fully parallelizable density-based clustering algorithm, optimized for high-occupancy scenarios where the number of clusters is much larger than the average number of hits in a cluster. The algorithm uses a grid spatial index for fast querying of neighbors, and its timing scales linearly with the number of hits within the range considered. We also compare the performance of CPU and GPU implementations, demonstrating the power of algorithmic parallelization in the coming era of heterogeneous computing in high-energy physics.
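
The grid spatial index mentioned above is the key to the linear scaling: binning hits into a uniform grid lets a fixed-radius neighbor query inspect only the 3×3 block of cells around a hit, so the per-hit cost does not grow with the total number of hits at fixed occupancy. A minimal sketch of the idea (not the actual implementation) is:

```python
import numpy as np
from collections import defaultdict

def build_grid(xy, cell):
    """Bin 2D hit positions into a uniform grid: (ix, iy) -> list of hit indices."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(xy):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def neighbors(i, xy, grid, cell, radius):
    """Hits within `radius` of hit i, scanning only the 3x3 adjacent cells."""
    ix, iy = int(xy[i, 0] // cell), int(xy[i, 1] // cell)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((ix + dx, iy + dy), []):
                if j != i and np.hypot(*(xy[j] - xy[i])) <= radius:
                    found.append(j)
    return found

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(10_000, 2))  # toy hit positions
cell = 1.0                                   # cell size must be >= query radius
grid = build_grid(xy, cell)
print(len(neighbors(0, xy, grid, cell, radius=1.0)))
```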


2021, Vol 2021 (3)
Author(s): Konstantin T. Matchev, Prasanth Shyamsundar

Abstract We provide a prescription called ThickBrick to train optimal machine-learning-based event selectors and categorizers that maximize the statistical significance of a potential signal excess in high energy physics (HEP) experiments, as quantified by any of six different performance measures. For analyses where the signal search is performed in the distribution of some event variables, our prescription ensures that only the information complementary to those event variables is used in event selection and categorization. This eliminates a major misalignment with the physics goals of the analysis (maximizing the significance of an excess) that exists in the training of typical ML-based event selectors and categorizers. In addition, this decorrelation of event selectors from the relevant event variables prevents the background distribution from becoming peaked in the signal region as a result of event selection, thereby ameliorating the challenges imposed on signal searches by systematic uncertainties. Our event selectors (categorizers) use the output of machine-learning-based classifiers as input and apply optimal selection cutoffs (categorization thresholds) that are functions of the event variables being analyzed, as opposed to flat cutoffs (thresholds). These optimal cutoffs and thresholds are learned iteratively, using a novel approach with connections to Lloyd's k-means clustering algorithm. We provide a public Python implementation of our prescription, also called ThickBrick, along with usage examples.
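
As a toy illustration of why the cutoff should be a function of the event variable rather than flat, the sketch below scans, in each bin of a hypothetical event variable m, for the classifier-score cutoff that maximizes s/√b. This is a deliberately simplified stand-in: ThickBrick's actual prescription learns its cutoffs iteratively, in a Lloyd's-algorithm-like fashion, against one of six significance measures.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy events: an event variable m (e.g. a mass) and a classifier score p.
m_sig = rng.normal(125, 5, 5_000);     p_sig = rng.beta(5, 2, 5_000)
m_bkg = rng.uniform(100, 150, 50_000); p_bkg = rng.beta(2, 5, 50_000)

bins = np.linspace(100, 150, 11)
cut_candidates = np.linspace(0, 1, 101)

def best_cut_in_bin(ps, pb):
    """Scan cutoffs on the classifier score, maximizing s / sqrt(b) in this bin."""
    s = (ps[:, None] > cut_candidates).sum(axis=0)
    b = (pb[:, None] > cut_candidates).sum(axis=0)
    z = np.where(b > 0, s / np.sqrt(np.maximum(b, 1)), 0.0)
    return cut_candidates[np.argmax(z)]

cuts = []
for lo, hi in zip(bins[:-1], bins[1:]):
    ps = p_sig[(m_sig >= lo) & (m_sig < hi)]
    pb = p_bkg[(m_bkg >= lo) & (m_bkg < hi)]
    cuts.append(best_cut_in_bin(ps, pb))

print(np.round(cuts, 2))  # the cutoff varies with m instead of being flat
```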


2020, Vol 245, pp. 11006
Author(s): Mario Lassnig, Martin Barisits, Paul J Laycock, Cédric Serfon, Eric W Vaandering, ...

For many scientific projects, data management is an increasingly complicated challenge. The number of data-intensive instruments generating unprecedented volumes of data is growing, and their accompanying workflows are becoming more complex. Their storage and computing resources are heterogeneous and distributed across numerous geographical locations belonging to different administrative domains and organisations. These locations do not necessarily coincide with the places where data is produced, nor with those where it is stored, analysed by researchers, or archived for safe long-term storage. To address these challenges, the data management system Rucio was developed to allow the high-energy physics experiment ATLAS at the LHC to manage its large volumes of data in an efficient and scalable way. But ATLAS is not alone, and several diverse scientific projects have started evaluating, adopting, and adapting the Rucio system for their own needs. As the Rucio community has grown, many improvements have been introduced, customisations have been added, and many bugs have been fixed. Additionally, new dataflows have been investigated and operational experiences have been documented. In this article we collect the common successes, pitfalls, and oddities that arose in the evaluation efforts of multiple diverse experiments, and compare them with the ATLAS experience. This includes the high-energy physics experiments Belle II and CMS, the neutrino experiment DUNE, the scattering radar experiment EISCAT3D, the gravitational wave observatories LIGO and VIRGO, the SKA radio telescope, and the dark matter search experiment XENON.


In this chapter, some applications of micropattern detectors are described. Their main application is the tracking of charged particles in high-energy physics. However, a great deal of research and development is currently under way that may open exciting new fields of application, for example dark matter searches, medical applications, and homeland security. The authors start with the traditional applications in high-energy physics and astrophysics. Later, the focus shifts to promising developments oriented towards new applications. These innovative applications include: imaging of charged particles and energetic photons with unprecedented 2-D spatial resolution (e.g. in mammography), time projection chambers capable of operating in a high flux of particles (e.g. the upgraded ALICE TPC), and visualization of ultraviolet and visible photons. Finally, a short description is given of the international collaboration RD51, established at CERN to promote the development of micropattern detectors and their applications.


2020, Vol 10 (1)
Author(s): Amlan Datta, Biplob Barman, Stephen Magill, Shariar Motakef

Abstract Wavelength shifting photon detection systems (PDS) are critical functional components in the noble liquid detectors used for high energy physics (HEP) experiments and dark matter searches. The vacuum ultraviolet (VUV) scintillation light emitted in liquid argon (LAr) and liquid xenon (LXe) detectors is shifted to longer wavelengths, enabling its efficient detection with state-of-the-art photodetectors such as silicon photomultipliers (SiPMs). The currently used organic wavelength shifting materials, such as 1,1,4,4-tetraphenyl butadiene (TPB), have several disadvantages and are unreliable for long-term use. In this study, we demonstrate the application of inorganic perovskite cesium lead bromide (CsPbBr3) quantum dots (QDs) as highly efficient wavelength shifters. The absolute photoluminescence quantum yield of the PDS fabricated using these QDs exceeds 70%. The CsPbBr3-based PDS enhanced the SiPM signal by up to 3 times compared with a 3 µm-thick TPB-based PDS. The emission spectrum of the QDs was optimized to match the highest quantum efficiency region of the SiPMs. In addition, we have demonstrated deposition of the QD-based wavelength shifting material on a large-area PDS substrate using low-capital-cost, widely scalable solution-based techniques, providing a pathway to meter-scale PDS fabrication and widespread use in other wavelength shifting applications.


2014, Vol 47 (3), pp. 1087-1096
Author(s): Rocco Caliandro, Danilo Benny Belviso

RootProf is a multi-purpose program which implements multivariate analysis of one-dimensional profiles. Series of measurements, performed on related samples or on the same sample while varying some external stimulus, are analysed to find trends in the data, classify them, and extract quantitative information. Qualitative analysis is performed using principal component analysis or correlation analysis. In both cases the data set is projected into a latent variable space, where a clustering algorithm classifies the data points. Group separation is quantified by statistical tools. Quantitative phase analysis of a series of profiles is implemented by whole-profile fitting or by an unfolding procedure, and relies on a variety of pre-processing methods. Supervised quantitative analysis can be applied, provided a priori information on some samples is available. RootProf can be applied to measurements from different techniques, which can be combined by means of a covariance analysis. A specific analysis for powder diffraction data allows estimation of the average size of crystal domains. RootProf borrows its graphics and data analysis capabilities from the ROOT framework, developed for high-energy physics experiments.
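
As an illustration of the qualitative-analysis workflow described above (project a series of one-dimensional profiles into a latent variable space, then let a clustering algorithm classify the points there), here is a minimal scikit-learn sketch; it mirrors the idea only, not RootProf's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 500)

# Toy data set: two groups of related 1D profiles (peak at 3 vs. peak at 7).
profiles = np.vstack(
    [np.exp(-(x - c) ** 2) + 0.05 * rng.normal(size=x.size)
     for c in [3.0] * 10 + [7.0] * 10]
)

# Project the profiles into a low-dimensional latent variable space ...
scores = PCA(n_components=2).fit_transform(profiles)

# ... and let a clustering algorithm classify the data points there.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(labels)  # the two groups of profiles separate cleanly
```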


2004, Vol 19 (23), pp. 3807-3818
Author(s): Robert Foot

Mirror matter-type dark matter is a dark matter candidate that is particularly well motivated from high energy physics. The theoretical motivation and experimental evidence are pedagogically reviewed, with emphasis on the implications of recent orthopositronium experiments, the DAMA/NaI dark matter search, and anomalous meteorite events.

