Reconstruction of Charged Particle Tracks in Realistic Detector Geometry Using a Vectorized and Parallelized Kalman Filter Algorithm

2020 ◽  
Vol 245 ◽  
pp. 02013
Author(s):  
Giuseppe Cerati ◽  
Peter Elmer ◽  
Brian Gravelle ◽  
Matti Kortelainen ◽  
Vyacheslav Krutelyov ◽  
...  

One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD and SIMT architectures that are now prevalent in high-performance hardware. Previously we observed significant parallel speedups, with physics performance comparable to CMS standard tracking, on Intel Xeon, Intel Xeon Phi, and (to a limited extent) NVIDIA GPUs. While early tests were based on artificial events occurring inside an idealized barrel detector, we showed subsequently that our mkFit software builds tracks successfully from complex simulated events (including detector pileup) occurring inside a geometrically accurate representation of the CMS-2017 tracker. Here, we report on advances in both the computational and physics performance of mkFit, as well as progress toward integration with CMS production software. Recently we have improved the overall efficiency of the algorithm by preserving short track candidates at a relatively early stage rather than attempting to extend them over many layers. Moreover, mkFit formerly produced an excess of duplicate tracks; these are now explicitly removed in an additional processing step. We demonstrate that with these enhancements, mkFit becomes a suitable choice for the first iteration of CMS tracking, and eventually for later iterations as well. We plan to test this capability in the CMS High Level Trigger during Run 3 of the LHC, with an ultimate goal of using it in both the CMS HLT and offline reconstruction for the HL-LHC CMS tracker.
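As a rough illustration of the vectorization idea described in this abstract, the sketch below applies a Kalman filter update step to a whole batch of track candidates at once using NumPy broadcasting. This is not the mkFit code itself (which is written in C++ with its own SIMD matrix layout); the state and measurement dimensions are simplified placeholders.

```python
# Minimal sketch (not mkFit): a Kalman filter *update* step applied to a
# batch of track candidates in parallel, mimicking SIMD-style vectorization
# with NumPy broadcasting.  Dimensions are simplified for illustration.
import numpy as np

def batched_kalman_update(x, P, z, H, R):
    """x: (N, n) states, P: (N, n, n) covariances,
    z: (N, m) measurements, H: (m, n) projection, R: (m, m) measurement noise."""
    y = z - x @ H.T                            # innovation for every candidate, (N, m)
    S = H @ P @ H.T + R                        # innovation covariance, (N, m, m)
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain, (N, n, m)
    x_new = x + np.einsum('nij,nj->ni', K, y)  # updated states, (N, n)
    P_new = P - K @ H @ P                      # updated covariances, (N, n, n)
    return x_new, P_new

# Toy usage: 8 track candidates, 4-parameter state, 2D hit measurement.
N, n, m = 8, 4, 2
rng = np.random.default_rng(0)
x = rng.normal(size=(N, n))
P = np.tile(np.eye(n), (N, 1, 1))
z = rng.normal(size=(N, m))
H = np.zeros((m, n)); H[0, 0] = H[1, 1] = 1.0
R = 0.1 * np.eye(m)
x_upd, P_upd = batched_kalman_update(x, P, z, H, R)
```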

2019 ◽  
Vol 214 ◽  
pp. 01039
Author(s):  
Khalil Bouaouda ◽  
Stefan Schmitt ◽  
Driss Benchekroun

Online selection is an essential step to collect the most relevant collisions from the very large number of collisions occurring inside the ATLAS detector at the Large Hadron Collider (LHC). The Fast TracKer (FTK) is a hardware-based track finder, built to greatly improve the ATLAS trigger system's capabilities for identifying interesting physics processes through track-based signatures. After each Level-1 trigger, the FTK reconstructs all tracks with pT > 1 GeV, so that the high-level trigger system gains access to track information at an early stage. FTK track reconstruction starts with a pattern recognition step: patterns are found with hits in seven out of eight possible detector layers. Disabled detector modules, as often encountered during LHC operation, lead to efficiency losses. To recover efficiency, WildCard (WC) algorithms are implemented in the FTK system. The WC algorithm recovers the lost efficiency but also causes a high combinatorial background and thus increased data volumes in the FTK system, possibly exceeding hardware limitations. To overcome this, a refined algorithm to select patterns is developed and investigated in this article.
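A simplified, hypothetical sketch of the 7-out-of-8-layer pattern matching with a wildcard for a disabled module is shown below; the data layout and names are illustrative and do not reproduce the FTK firmware or its super-strip encoding.

```python
# Illustrative sketch only: match a stored pattern of coarse detector
# addresses ("super-strips") against the hits of one event, requiring at
# least 7 of 8 layers to fire.  A layer flagged as a WildCard (e.g. because
# its module is disabled) counts as matched regardless of hits.
WILDCARD = None  # marker for a layer covered by the WC algorithm

def pattern_matches(pattern, hit_superstrips_per_layer, min_layers=7):
    """pattern: list of 8 super-strip IDs (or WILDCARD);
    hit_superstrips_per_layer: list of 8 sets of super-strip IDs with hits."""
    matched = 0
    for layer, superstrip in enumerate(pattern):
        if superstrip is WILDCARD:
            matched += 1                      # disabled module: always accept
        elif superstrip in hit_superstrips_per_layer[layer]:
            matched += 1                      # hit found in the expected position
    return matched >= min_layers

# Toy usage: layer 3 is disabled and covered by a WildCard.
pattern = [12, 40, 7, WILDCARD, 91, 5, 33, 18]
hits = [{12}, {40, 41}, {7}, set(), {91}, {5}, {2}, {18}]
print(pattern_matches(pattern, hits))  # True: 6 real matches + 1 wildcard = 7 layers
```

The WildCard recovers efficiency at the cost of combinatorics: every pattern crossing the disabled layer fires regardless of whether a compatible hit exists there, which is the data-volume problem the refined pattern selection addresses.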


2019 ◽  
Vol 214 ◽  
pp. 02002
Author(s):  
Giuseppe Cerati ◽  
Peter Elmer ◽  
Brian Gravelle ◽  
Matti Kortelainen ◽  
Vyacheslav Krutelyov ◽  
...  

The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making the track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted for many-core SIMD processors that are becoming dominant in high-performance systems. This paper summarizes the latest extensions to our software that allow it to run on the realistic CMS-2017 tracker geometry using CMSSW-generated events, including pileup. The reconstructed tracks can be validated against either the CMSSW simulation that generated the detector hits, or the CMSSW reconstruction of the tracks. In general, the code’s computational performance has continued to improve while the above capabilities were being added. We demonstrate that the present Kalman filter implementation is able to reconstruct events with comparable physics performance to CMSSW, while providing generally better computational performance. Further plans for advancing the software are discussed.
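To make the validation step concrete, here is a hedged sketch of how reconstructed tracks might be matched to simulated tracks by the fraction of shared hits; the 75% threshold and function names are assumptions for illustration and are not the actual CMSSW validation criteria.

```python
# Hedged sketch of a hit-based validation: a reconstructed track is matched
# to a simulated track if a sufficient fraction of its hits comes from that
# simulated track.  The threshold is illustrative, not the CMSSW value.
from collections import Counter

def match_track(reco_hit_ids, hit_to_sim, min_shared_fraction=0.75):
    """reco_hit_ids: hit IDs on the reconstructed track;
    hit_to_sim: dict mapping hit ID -> simulated-track ID (or None for noise)."""
    sim_ids = [hit_to_sim.get(h) for h in reco_hit_ids]
    counts = Counter(s for s in sim_ids if s is not None)
    if not counts:
        return None                     # no simulated hits at all: fake track
    best_sim, shared = counts.most_common(1)[0]
    if shared / len(reco_hit_ids) >= min_shared_fraction:
        return best_sim                 # matched: contributes to efficiency
    return None                         # below threshold: counted as fake

def efficiency(matched_sim_ids, all_sim_ids):
    """Fraction of simulated tracks matched by at least one reconstructed track."""
    return len(set(matched_sim_ids) & set(all_sim_ids)) / max(len(all_sim_ids), 1)
```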


Author(s):  
Corey A. Honl ◽  
Ryan M. Rudnitzki

The following paper describes the release of the 220GL engine and APG2000/3000 Enginator™ product lines from Waukesha Engine. The major elements of the release that will be covered include the installation and calibration of the ESM® control system, the development of new capabilities to control fuel injection and its associated features, the integration of Waukesha-introduced components on the 220GL, high-level product strategy and justification, and early-stage performance figures from development testing.


2020 ◽  
Vol 245 ◽  
pp. 07044
Author(s):  
Frank Berghaus ◽  
Franco Brasolin ◽  
Alessandro Di Girolamo ◽  
Marcus Ebert ◽  
Colin Roy Leavett-Brown ◽  
...  

The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides around 100,000 cores, which are critical to ATLAS during data taking. When ATLAS is not recording data, such as during the long shutdowns of the LHC, this large compute resource is used to generate and process simulation data for the experiment. At the beginning of the second long shutdown of the Large Hadron Collider, the HLT farm, including the Sim@P1 infrastructure, was upgraded. Previous papers emphasised the need for simple, reliable, and efficient tools and assessed various options to quickly switch between data acquisition operation and offline processing. In this contribution, we describe the new mechanisms put in place for the opportunistic exploitation of the HLT farm for offline processing and give the results from the first months of operation.


2020 ◽  
Vol 245 ◽  
pp. 01031
Author(s):  
Thiago Rafael Fernandez Perez Tomei

The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented on custom-designed electronics, and the High Level Trigger, a streamlined version of the CMS offline reconstruction software running on a computer farm. During its second phase the LHC will reach a luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹ with a pileup of 200 collisions, producing an integrated luminosity greater than 3000 fb⁻¹ over the full experimental run. To fully exploit the higher luminosity, the CMS experiment will introduce a more advanced Level-1 Trigger and increase the full readout rate from 100 kHz to 750 kHz. CMS is designing an efficient data-processing hardware trigger that will include tracking information and high-granularity calorimeter information. The current Level-1 conceptual design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency system for large throughput and sophisticated data correlation across diverse sources. The higher luminosity, event complexity and input rate present an unprecedented challenge to the High Level Trigger, which aims to achieve a similar efficiency and rejection factor as today despite the higher pileup and purer preselection. In this presentation we will discuss the ongoing studies and prospects for the online reconstruction and selection algorithms for the high-luminosity era.
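A back-of-the-envelope sketch of the rate implications follows. The 750 kHz Level-1 accept rate comes from the abstract; the event size and HLT output rate used below are assumptions made purely to illustrate the arithmetic, not CMS design figures.

```python
# Rough arithmetic sketch for the Phase-2 trigger.  The 750 kHz Level-1
# accept rate is from the text; the event size and HLT output rate are
# *assumed* values chosen only to illustrate the scale of the problem.
l1_rate_hz = 750e3          # Level-1 accept rate (from the text)
event_size_bytes = 7.5e6    # assumed Phase-2 event size
hlt_output_rate_hz = 7.5e3  # assumed HLT output rate

hlt_input_bandwidth = l1_rate_hz * event_size_bytes   # bytes/s into the HLT
rejection_factor = l1_rate_hz / hlt_output_rate_hz    # input events per kept event

print(f"HLT input bandwidth: {hlt_input_bandwidth / 1e12:.2f} TB/s")
print(f"Required HLT rejection factor: {rejection_factor:.0f}")
```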


2019 ◽  
Vol 214 ◽  
pp. 07017
Author(s):  
Jean-Marc Andre ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Olivier Chaze ◽  
...  

The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier-1 Grid sites combined. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and hence can access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to its static mode of operation, where it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, occurring once or more a day. For that, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
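The core idea of following LHC beam states to decide when opportunistic VMs may run can be sketched as below. This is an illustrative toy, not the production CMS code: the state names, the load threshold, and the ramp-up condition near the end of a fill are all assumptions.

```python
# Illustrative sketch (not the CMS implementation) of beam-state-driven
# hibernation of opportunistic cloud VMs on the HLT farm.  State names and
# the ramp-up condition are assumptions for the sake of the example.
RUN_STATES = {"NO_BEAM", "BEAM_DUMP", "INTERFILL"}      # cloud may use the farm
DAQ_STATES = {"STABLE_BEAMS", "RAMP", "SQUEEZE"}        # DAQ has priority

def desired_vm_state(beam_state, daq_hlt_load):
    """Return 'running' or 'hibernated' for the opportunistic VMs."""
    if beam_state in DAQ_STATES:
        # Towards the end of a fill the DAQ load drops; ramp VMs back up early.
        if beam_state == "STABLE_BEAMS" and daq_hlt_load < 0.5:
            return "running"
        return "hibernated"
    if beam_state in RUN_STATES:
        return "running"
    return "hibernated"  # unknown state: stay out of the DAQ's way

print(desired_vm_state("STABLE_BEAMS", daq_hlt_load=0.9))  # hibernated
print(desired_vm_state("INTERFILL", daq_hlt_load=0.0))     # running
```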


2020 ◽  
Vol 245 ◽  
pp. 05004
Author(s):  
Rosen Matev ◽  
Niklas Nolte ◽  
Alex Pearce

For Run 3 of the Large Hadron Collider, the final stage of the LHCb experiment’s high-level trigger must process 100 GB/s of input data. This corresponds to an input rate of 1 MHz, an order of magnitude larger than in Run 2. The trigger is responsible for selecting all physics signals that form part of the experiment’s broad research programme, and as such defines thousands of analysis-specific selections that together comprise tens of thousands of algorithm instances. The configuration of such a system needs to be extremely flexible to handle the large number of different studies it must accommodate. However, it must also be robust and easy to understand, allowing analysts to implement and understand their own selections without the possibility of error. A Python-based system for configuring the data and control flow of the Gaudi-based trigger application is presented. It is designed to be user-friendly by using functions for modularity and by removing the indirection layers employed in Run 2. Robustness is achieved by reducing global state and instead building the data-flow graph in a functional manner, whilst keeping the full call stack configurable.
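The functional style described above can be illustrated with a toy example: selections are plain functions that return immutable data-flow nodes, and the graph is built by composition rather than by mutating global state. This is not the real Gaudi/LHCb configuration API; all names and the decay example are invented for the sketch.

```python
# Toy illustration of functional data-flow configuration.  Not the actual
# LHCb/Gaudi API: Node, make-functions and the decay line are invented.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str
    inputs: tuple = field(default_factory=tuple)

def make_kaons():
    return Node("ChargedKaonMaker")

def make_pions():
    return Node("ChargedPionMaker")

def combine(name, children, mass_window):
    # The cut is part of the node's identity, so two selections with
    # different windows are distinct nodes in the graph.
    lo, hi = mass_window
    return Node(f"{name}[{lo}-{hi} MeV]", tuple(children))

def d0_to_kpi_line():
    # The whole selection is assembled by composing functions; no global
    # registry or hidden state is consulted.
    return combine("D0ToKPi", [make_kaons(), make_pions()], (1800, 1930))

line = d0_to_kpi_line()
print(line.name, [c.name for c in line.inputs])
```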


2014 ◽  
Vol 31 ◽  
pp. 1460297 ◽  
Author(s):  
Valentina Gori

The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms that can run on the available computing power, the sustainable output rate, and the selection efficiency. Here we present the performance of the main triggers used during the 2012 data taking, ranging from simple single-object selections to more complex algorithms that combine different objects and apply analysis-level reconstruction and selection. We discuss the optimisation of the triggers and the specific techniques used to cope with the increasing LHC pile-up, reducing its impact on the physics performance.
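To make the distinction between single-object and combined-object selections concrete, here is a toy sketch; all thresholds and the event format are invented for illustration and do not correspond to the actual 2012 CMS trigger menu.

```python
# Toy sketch of the two kinds of selections mentioned above: a single-object
# trigger (one muon above a pT threshold) and a combined-object trigger
# (a muon plus missing transverse energy).  Thresholds are illustrative only.
def single_muon_trigger(event, pt_threshold=24.0):
    """Accept if any muon has pT above the threshold (GeV)."""
    return any(mu["pt"] > pt_threshold for mu in event["muons"])

def muon_plus_met_trigger(event, mu_pt=15.0, met_threshold=80.0):
    """Accept if a softer muon is accompanied by large missing energy."""
    has_muon = any(mu["pt"] > mu_pt for mu in event["muons"])
    return has_muon and event["met"] > met_threshold

event = {"muons": [{"pt": 27.3}, {"pt": 9.1}], "met": 95.0}
print(single_muon_trigger(event), muon_plus_met_trigger(event))  # True True
```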


2019 ◽  
Vol 214 ◽  
pp. 01046
Author(s):  
Stewart Martin-Haugh

We present an implementation of the ATLAS High Level Trigger (HLT) that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the HLT to meet future challenges from the evolution of computing hardware and upgrades of the Large Hadron Collider (LHC) and the ATLAS detector. During the LHC data-taking period starting in 2021, the luminosity will reach up to three times the original design value. In the following data-taking period (starting in 2026), upgrades to the ATLAS trigger architecture will increase the HLT input rate by a factor of 4-10, while the luminosity will increase by a further factor of 2-3. AthenaMT provides a uniform interface for offline and trigger algorithms, facilitating the use of offline code in the HLT. Trigger-specific optimizations provided by the framework include early event rejection and reconstruction within restricted geometrical regions. We report on the current status, including experience of migrating trigger selections to this new framework, and present the next steps towards a full implementation of the redesigned ATLAS trigger.
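The two trigger-specific optimizations named above can be sketched as follows. This is a hedged illustration only: the function names, the chain representation, and the region-of-interest format are assumptions, not the AthenaMT interfaces.

```python
# Hedged sketch of early event rejection and regionally restricted
# reconstruction.  Names and data structures are illustrative, not AthenaMT.
def run_chain(event, steps):
    """steps: list of callables(event) -> bool; stop at the first failure."""
    for step in steps:
        if not step(event):
            return False        # early rejection: later, costlier steps never run
    return True

def reconstruct_in_roi(hits, roi):
    """Keep only hits inside a rectangular (eta, phi) region of interest."""
    (eta_lo, eta_hi), (phi_lo, phi_hi) = roi
    return [h for h in hits
            if eta_lo < h["eta"] < eta_hi and phi_lo < h["phi"] < phi_hi]

# Toy usage: a cheap calorimeter cut guards an expensive tracking step.
hits = [{"eta": 0.4, "phi": 1.1}, {"eta": 2.3, "phi": -0.2}]
steps = [lambda ev: ev["calo_et"] > 20.0,
         lambda ev: len(reconstruct_in_roi(ev["hits"], ((0.0, 1.0), (0.5, 1.5)))) > 0]
print(run_chain({"calo_et": 35.0, "hits": hits}, steps))  # True
```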


2019 ◽  
Vol 214 ◽  
pp. 01006
Author(s):  
Jean-Marc André ◽  
Ulf Behrens ◽  
James Branson ◽  
Philipp Brummer ◽  
Sergio Cittolin ◽  
...  

The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events, which are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced during the Long Shutdown 2 of the LHC taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and InfiniBand technologies. More powerful computers may allow the currently separate functionality of the readout and builder units to be combined into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today’s system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and corresponding changes to the software implementation will be discussed.
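The numbers quoted in this abstract (2 MB events, 100 kHz, roughly 750 sources, 100 Gb/s links) already fix the aggregate and per-source bandwidths, as the short arithmetic sketch below shows; the derived figures are straightforward consequences of those inputs, not additional measurements.

```python
# Back-of-the-envelope sketch using only the figures quoted in the abstract:
# 2 MB events at 100 kHz built from about 750 fragment sources over
# 100 Gb/s interconnects.
event_size_bytes = 2e6
event_rate_hz = 100e3
n_sources = 750
link_speed_bps = 100e9                                    # 100 Gb/s links

aggregate_bandwidth = event_size_bytes * event_rate_hz    # bytes/s through the builder
fragment_size = event_size_bytes / n_sources              # bytes per source per event
per_source_rate_bps = fragment_size * event_rate_hz * 8   # bits/s leaving one source

print(f"Aggregate event-building throughput: {aggregate_bandwidth * 8 / 1e12:.1f} Tb/s")
print(f"Average fragment size: {fragment_size / 1e3:.1f} kB")
print(f"Per-source output: {per_source_rate_bps / 1e9:.2f} Gb/s "
      f"({per_source_rate_bps / link_speed_bps:.1%} of one 100 Gb/s link)")
```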

