Track Finding with Deep Neural Networks

2019 ◽  
Vol 20 (4) ◽  
Author(s):  
Marcin Kucharczyk ◽  
Marcin Wolter

High Energy Physics experiments require fast and efficient methods to reconstruct the tracks of charged particles. Commonly used algorithms are sequential, and the CPU time required grows rapidly with the number of tracks. Neural networks can speed up the process thanks to their capability to model complex non-linear data dependencies and to find all tracks in parallel. In this paper we describe the application of a deep neural network to the reconstruction of straight tracks in a toy two-dimensional model. It is planned to apply this method to the experimental data taken by the MUonE experiment at CERN.
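
The paper itself does not include code; the following is a minimal, hypothetical Python sketch of the general idea only. The network architecture, grid size and the single-track simplification are assumptions made here, not the authors' setup: a small fully connected network is trained to regress the slope and intercept of one straight track from a binarised 2D hit map of a toy detector.

```python
# Hypothetical toy sketch (not the authors' code): regress straight-track
# parameters (slope, intercept) from a binarised 2D hit map with a small DNN.
import numpy as np
import tensorflow as tf

GRID = 32          # toy detector discretised into GRID x GRID cells
N_EVENTS = 20000   # size of the toy training sample

def make_event(rng):
    """Generate one toy event: a straight track y = a*(x-0.5) + b plus noise hits."""
    a, b = rng.uniform(-1, 1), rng.uniform(0.2, 0.8)
    img = np.zeros((GRID, GRID), dtype=np.float32)
    for ix in range(GRID):
        x = ix / GRID
        iy = int((a * (x - 0.5) + b) * GRID)
        if 0 <= iy < GRID:
            img[iy, ix] = 1.0
    for _ in range(5):                       # a few random noise hits
        img[rng.integers(GRID), rng.integers(GRID)] = 1.0
    return img.ravel(), np.array([a, b], dtype=np.float32)

rng = np.random.default_rng(0)
data = [make_event(rng) for _ in range(N_EVENTS)]
X = np.stack([d[0] for d in data])
y = np.stack([d[1] for d in data])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(GRID * GRID,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                # regress (slope, intercept)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=128, validation_split=0.1)
```

A multi-track version, as studied in the paper, would output parameters (or hit assignments) for all tracks at once rather than a single pair.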

2020 ◽  
Vol 245 ◽  
pp. 05001
Author(s):  
Rosen Matev

High energy physics experiments traditionally have large software codebases primarily written in C++, and the LHCb physics software stack is no exception. Compiling from scratch can easily take 5 hours or more for the full stack, even on an 8-core VM. In a development workflow, incremental builds often do not significantly speed up compilation, because even a change to the modification time of a widely used header leads to many compiler and linker invocations. Using powerful shared servers is not practical, as users have no control and maintenance is an issue. Even though support for building partial checkouts on top of published project versions exists, by far the most practical development workflow involves full project checkouts because of off-the-shelf tool support (git, IntelliSense, etc.). This paper details a deployment of distcc, a distributed compilation server, on opportunistic resources such as development machines. The best-performing mode of operation is achieved when preprocessing remotely and profiting from the shared CernVM File System. A 10-fold (30-fold) speedup in elapsed (real) time is achieved when compiling Gaudi, the base of the LHCb stack, comparing local compilation on a 4-core VM to remote compilation on 80 cores, where the bottleneck becomes non-distributed work such as linking. Compilation results are cached locally using ccache, allowing for even faster rebuilding. A recent distributed, memcached-based shared cache is tested, as well as a more modern distributed system by Mozilla, sccache, backed by S3 storage. These allow for global sharing of compilation work, which can speed up both central CI builds and local development builds. Finally, we explore remote caching and execution services based on Bazel, and how they apply to Gaudi-based software for distributing not only compilation but also linking and even testing.
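
As a rough, hypothetical illustration of this kind of setup (host names, core counts and the choice of a CMake/Ninja build are assumptions, not the configuration described in the paper), a distcc pump-mode build could be driven like this:

```python
# Hypothetical helper showing how remote preprocessing ("pump" mode) with
# distcc could be wired into a CMake/Ninja build. Hosts and slot counts are
# made up; requires distcc/distcc-pump to be installed on client and workers.
import os
import subprocess

env = dict(os.environ)
# "host/slots,cpp,lzo" enables pump-mode preprocessing on the worker and
# LZO compression of the transferred sources.
env["DISTCC_HOSTS"] = "build01/40,cpp,lzo build02/40,cpp,lzo localhost/4"

src_dir, build_dir = "Gaudi", "Gaudi/build"
subprocess.run(
    ["cmake", "-S", src_dir, "-B", build_dir, "-G", "Ninja",
     "-DCMAKE_CXX_COMPILER_LAUNCHER=distcc"],   # prefix every compile with distcc
    check=True, env=env,
)
# "pump" starts the include server needed for remote preprocessing and then
# runs the build; -j is set roughly to the total number of remote slots.
subprocess.run(["pump", "ninja", "-C", build_dir, "-j", "80"], check=True, env=env)

# Local result caching with ccache can be layered on top (e.g. via
# CCACHE_PREFIX=distcc); the interplay with pump mode is glossed over here.
```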


2020 ◽  
Author(s):  
Mariana Petris ◽  
Daniel Bartos ◽  
Mihai Petrovici ◽  
Laura Radulescu ◽  
Victor Simion ◽  
...  

Author(s):  
Preeti Kumari ◽  
Kavita Lalwani ◽  
Ranjit Dalal ◽  
Ashutosh Bhardwaj ◽  
...  

2014 ◽  
Author(s):  
John Cumalat ◽  
Kevin Stenson ◽  
Stephen Wagner

2005 ◽  
Vol 20 (16) ◽  
pp. 3874-3876 ◽  
Author(s):  
B. Abbott ◽  
P. Baringer ◽  
T. Bolton ◽  
Z. Greenwood ◽  
E. Gregores ◽  
...  

The DØ experiment at Fermilab's Tevatron will record several petabytes of data over the next five years in pursuing the goals of understanding nature and searching for the origin of mass. The computing resources required to analyze these data far exceed the capabilities of any one institution. Moreover, the widely scattered geographical distribution of DØ collaborators poses further serious difficulties for the optimal use of human and computing resources. These difficulties will be exacerbated in future high energy physics experiments, like the LHC. The computing grid has long been recognized as a solution to these problems. This technology is being made a more immediate reality to end users in DØ by developing a grid in the DØ Southern Analysis Region (DØSAR), DØSAR-Grid, using all available resources within it and a home-grown local task manager, McFarm. We will present the architecture in which DØSAR-Grid is implemented, the technologies used, the functionality of the grid, and the experience gained from operating it in simulation, reprocessing and data analysis for a currently running HEP experiment.
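
The core idea of spreading work over regional sites can be illustrated with a purely hypothetical stand-in (this is not McFarm or SAM-Grid code; site names are real DØSAR members but the capacities and the greedy policy are invented here for illustration):

```python
# Illustrative stand-in only: split an analysis over regional sites by
# assigning bundles of input files to the site with the least load per CPU.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Site:
    name: str
    cpus: int
    queue: List[List[str]] = field(default_factory=list)

def dispatch(files: List[str], sites: List[Site], files_per_job: int = 10) -> None:
    """Greedy placement: each job goes to the least-loaded site per CPU."""
    for i in range(0, len(files), files_per_job):
        job = files[i:i + files_per_job]
        target = min(sites, key=lambda s: len(s.queue) / s.cpus)
        target.queue.append(job)

sites = [Site("UTA", 80), Site("OU", 40), Site("LTU", 24)]   # made-up capacities
dispatch([f"run_{n:05d}.raw" for n in range(1000)], sites)
for s in sites:
    print(s.name, len(s.queue), "jobs queued")
```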


2016 ◽  
Vol 93 (9) ◽  
Author(s):  
Pierre Baldi ◽  
Kevin Bauer ◽  
Clara Eng ◽  
Peter Sadowski ◽  
Daniel Whiteson

2017 ◽  
Vol 12 (12) ◽  
pp. P12004-P12004 ◽  
Author(s):  
F. Arteche ◽  
C. Rivetta ◽  
M. Iglesias ◽  
I. Echeverria ◽  
A. Pradas ◽  
...  

2016 ◽  
Vol 69 (6) ◽  
pp. 1130-1134 ◽  
Author(s):  
M. J. Kim ◽  
H. Park ◽  
H. J. Kim
