The CMS Electromagnetic Calorimeter workflow

2020 · Vol 245 · pp. 01024
Author(s): Chiara Rovelli

The CMS experiment at the LHC features an electromagnetic calorimeter (ECAL) made of lead tungstate scintillating crystals. The ECAL energy response is fundamental for both triggering purposes and offline analysis. Due to the challenging LHC radiation environment, the response of both crystals and photodetectors to particles evolves with time. Therefore, continuous monitoring and correction of these ageing effects are crucial. Fast, reliable, and efficient workflows are set up to have a first set of corrections computed within 48 hours of data-taking, making use of dedicated data streams and processing. Such corrections, stored in relational databases, are then accessed during the prompt offline reconstruction of the CMS data. Twice a week, the calibrations used in the trigger are also updated in the database and accessed during data-taking. In this note, the design of the CMS ECAL data handling and processing is reviewed.
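The pattern described here, storing time-dependent corrections keyed by an interval of validity (IOV) and looking up the one that covers a given run, can be sketched as follows. This is a minimal illustration only, not the CMS conditions software; the sqlite backend, table layout, and column names are assumptions made for the example.

```python
# Minimal sketch of IOV-based calibration storage. Table and column
# names are hypothetical; CMS uses its own conditions-database software.
import sqlite3

conn = sqlite3.connect("conditions.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS ecal_corrections (
        tag        TEXT,     -- calibration tag, e.g. trigger vs. prompt reco
        since_run  INTEGER,  -- first run for which this payload is valid
        payload    BLOB      -- serialized per-channel correction factors
    )
""")

def write_payload(tag, since_run, payload):
    """Append a new correction set; it is valid from `since_run` onwards."""
    conn.execute("INSERT INTO ecal_corrections VALUES (?, ?, ?)",
                 (tag, since_run, payload))
    conn.commit()

def read_payload(tag, run):
    """Fetch the payload whose IOV covers `run`: the newest since_run <= run."""
    row = conn.execute(
        "SELECT payload FROM ecal_corrections "
        "WHERE tag = ? AND since_run <= ? "
        "ORDER BY since_run DESC LIMIT 1", (tag, run)).fetchone()
    return row[0] if row else None
```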

2018 · Vol 46 · pp. 1860055
Author(s): Somnath Choudhury

Since Run 1 of the LHC, CMS has taken the opportunity to further improve particle reconstruction. A number of improvements were made to the hadronic tau reconstruction and identification algorithms. In particular, the reconstruction of the tau decay products leaving deposits in the electromagnetic calorimeter was improved to better model the signal of π⁰ mesons from τ decays. This modification improves the energy response and removes the tau footprint from the isolation area. In addition, improvements were made to the discriminators that combine isolation and tau lifetime variables, and the rejection of electrons misidentified as hadronic taus was improved using multivariate techniques. The results of these improvements, obtained with 13 TeV data from LHC Run 2, are presented, and the validation of tau identification using a variety of techniques is highlighted.
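The footprint-removal idea can be made concrete with a short sketch: particles already identified as tau decay products are excluded from the isolation sum, so the tau does not contaminate its own isolation measure. The cone size, event model, and all field names here are hypothetical, not the CMS implementation.

```python
# Illustrative sketch of charged isolation with tau footprint removal.
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the eta-phi plane."""
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def isolation(tau, candidates, cone=0.5):
    """Sum the pT of candidates inside the isolation cone around the tau,
    skipping candidates that belong to the tau footprint."""
    footprint = {id(p) for p in tau["decay_products"]}
    return sum(
        p["pt"] for p in candidates
        if id(p) not in footprint
        and delta_r(p["eta"], p["phi"], tau["eta"], tau["phi"]) < cone
    )
```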


2020 · Vol 245 · pp. 09007
Author(s): Carles Acosta-Silva, Antonio Delgado Peris, José Flix Molina, Jaime Frey, José M. Hernández, et al.

In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate, and use non-Grid compute resources. Accessing and making efficient use of Cloud and High Performance Computing (HPC) resources presents a diversity of challenges for the CMS experiment. In particular, network limitations at the compute nodes in HPC centers prevent CMS pilot jobs from connecting to the central HTCondor pool to receive payload jobs for execution. To cope with this limitation, new features have been developed in both HTCondor and the CMS resource acquisition and workload management infrastructure. In this novel approach, a bridge node is set up outside the HPC center and the communications between HTCondor daemons are relayed through a shared file system. This forms the basis of the CMS strategy to enable the exploitation of the resources of the Barcelona Supercomputing Center (BSC), the main Spanish HPC site. CMS payloads are claimed by HTCondor condor_startd daemons running at the nearby PIC Tier-1 center and routed to BSC compute nodes through the bridge. This fully enables the connectivity of the CMS HTCondor-based central infrastructure to BSC resources via the PIC HTCondor pool. Other challenges include building custom Singularity images with CMS software releases, bringing conditions data to payload jobs, and custom data handling between BSC and PIC. This report describes the initial technical prototype, its deployment and tests, and future steps. A key aspect of the technique described in this contribution is that it could be employed in similar network-restrictive HPC environments elsewhere.
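The bridge concept, relaying messages through a shared file system when compute nodes lack outbound network connectivity, can be sketched generically. The actual mechanism is implemented inside HTCondor itself; the paths, message format, and polling loop below are illustrative assumptions only, not HTCondor configuration.

```python
# Conceptual sketch of a file-system message relay between two hosts that
# cannot reach each other over the network but both mount a shared volume.
import json, os, time, uuid

SHARED = "/shared/bridge"  # hypothetical file system mounted on both sides

def send(mailbox, message):
    """Atomically drop a message file into the recipient's mailbox."""
    os.makedirs(os.path.join(SHARED, mailbox), exist_ok=True)
    tmp = os.path.join(SHARED, mailbox, f".{uuid.uuid4().hex}.tmp")
    with open(tmp, "w") as f:
        json.dump(message, f)
    # The rename makes the complete message visible in a single step.
    os.rename(tmp, tmp.replace(".tmp", ".msg"))

def poll(mailbox, interval=5.0):
    """Block until a message appears, then consume and return it."""
    box = os.path.join(SHARED, mailbox)
    while True:
        msgs = (sorted(f for f in os.listdir(box) if f.endswith(".msg"))
                if os.path.isdir(box) else [])
        if msgs:
            path = os.path.join(box, msgs[0])
            with open(path) as f:
                message = json.load(f)
            os.remove(path)
            return message
        time.sleep(interval)
```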


2012 · Vol 184 (1) · pp. 196-214
Author(s): Xiaofeng Ding, Xiang Lian, Lei Chen, Hai Jin

2018 · Vol 7 (3.1) · pp. 63
Author(s): R Revathy, R Aroul Canessane

Data are vital for decision making. If data have low veracity, decisions based on them are unlikely to be sound. The Internet of Things (IoT) burdens big data with error, inconsistency, incompleteness, deception, and model approximation. Improving data veracity is critical to addressing these challenges. In this article, we summarize the key characteristics and challenges of IoT that affect data handling and decision making. We review the landscape of measuring and improving data veracity and of mining uncertain data streams. We also propose five recommendations for the future development of veracious big IoT data analytics, related to the heterogeneous and distributed nature of IoT data, autonomous decision making, context-aware and domain-optimized methodologies, data cleaning and processing techniques for IoT edge devices, and privacy-preserving, personalized, and secure data management.
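As a concrete illustration of scoring data veracity on IoT readings, a toy completeness-and-consistency check might look like the following sketch. The fields, plausibility ranges, weights, and threshold are all hypothetical, chosen only to make the idea tangible.

```python
# Toy veracity score: penalize missing fields (completeness) and
# physically implausible values (consistency). All parameters hypothetical.
def veracity_score(reading, expected_fields=("device_id", "ts", "temp")):
    score = 1.0
    missing = [f for f in expected_fields if reading.get(f) is None]
    score -= 0.3 * len(missing)          # completeness penalty per field
    temp = reading.get("temp")
    if temp is not None and not (-40.0 <= temp <= 125.0):
        score -= 0.5                     # out-of-range sensor value
    return max(score, 0.0)

readings = [
    {"device_id": "a1", "ts": 1700000000, "temp": 21.5},
    {"device_id": "a2", "ts": 1700000060, "temp": 999.0},  # sensor error
    {"device_id": "a3", "ts": None, "temp": 20.1},         # missing timestamp
]
trusted = [r for r in readings if veracity_score(r) >= 0.8]
```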


2016
Author(s): Mikhail Ippolitov, Valery Lebedev, Vladislav Manko, Iouri Sibiriak, Alexander Akindinov, et al.

2017 · Vol 60 (1) · pp. 28-34
Author(s): M. S. Ippolitov, V. A. Lebedev, V. I. Manko, Yu. G. Sibiriak, A. V. Akindinov, et al.

Author(s): Michele De Gruttola, Salvatore Di Guida, Vincenzo Innocente, Antonio Pierro
