FCC-ee overview: new opportunities create new challenges

2022 ◽  
Vol 137 (1) ◽  
Author(s):  
Alain Blondel ◽  
Patrick Janot

Abstract With its high luminosity, its clean experimental conditions, and a range of energies that cover the four heaviest particles known today, FCC-ee offers a wealth of physics possibilities, with high potential for discoveries. The FCC-ee is an essential and complementary step towards a 100 TeV hadron collider, and as such offers a uniquely powerful combined physics program. This vision is the backbone of the 2020 European Strategy for Particle Physics. One of the main challenges is now to design experimental systems that can, demonstrably, fully exploit these extraordinary opportunities.

2018 ◽  
Vol 192 ◽  
pp. 00032 ◽  
Author(s):  
Rosamaria Venditti

The High-Luminosity Large Hadron Collider (HL-LHC) is a major upgrade of the LHC, expected to deliver an integrated luminosity of up to 3000/fb over one decade. The very high instantaneous luminosity will lead to about 200 proton-proton collisions per bunch crossing (pileup) superimposed on each event of interest, providing extremely challenging experimental conditions. The scientific goals of the HL-LHC physics program include precise measurements of the properties of the recently discovered standard model Higgs boson and searches for physics beyond the standard model (heavy vector bosons, SUSY, dark matter, and exotic long-lived signatures, to name a few). In this contribution we present the strategy of the CMS experiment to investigate the feasibility of such searches and to quantify the gain in sensitivity in the HL-LHC scenario.
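For orientation, the back-of-the-envelope sketch below shows where a pileup figure of roughly 200 comes from. The instantaneous luminosity, inelastic cross-section, and bunch numbers are assumed values typical of HL-LHC projections, not numbers taken from this abstract.

```python
# Back-of-the-envelope pileup estimate for HL-LHC-like conditions.
# All input numbers are illustrative assumptions, not values from the abstract.

L_inst = 7.5e34          # instantaneous luminosity [cm^-2 s^-1], ultimate HL-LHC target
sigma_inel = 80e-27      # inelastic pp cross-section [cm^2] (~80 mb, 1 mb = 1e-27 cm^2)
n_bunches = 2760         # colliding bunch pairs (approximate)
f_rev = 11245.0          # LHC revolution frequency [Hz]

interaction_rate = L_inst * sigma_inel      # inelastic interactions per second
crossing_rate = n_bunches * f_rev           # bunch crossings per second
mu = interaction_rate / crossing_rate       # mean pileup per crossing

print(f"interaction rate ~ {interaction_rate:.2e} Hz")
print(f"mean pileup      ~ {mu:.0f} collisions per bunch crossing")
```

With these assumed inputs the estimate comes out near 190 collisions per crossing, consistent with the ~200 pileup quoted above.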


2019 ◽  
Vol 69 (1) ◽  
pp. 389-415 ◽  
Author(s):  
M. Benedikt ◽  
A. Blondel ◽  
P. Janot ◽  
M. Klein ◽  
M. Mangano ◽  
...  

After 10 years of physics at the Large Hadron Collider (LHC), the particle physics landscape has greatly evolved. Today, a staged Future Circular Collider (FCC), consisting of a luminosity-frontier highest-energy electron–positron collider (FCC-ee) followed by an energy-frontier hadron collider (FCC-hh), promises the most far-reaching physics program for the post-LHC era. FCC-ee will be a precision instrument used to study the Z, W, Higgs, and top particles, and will offer unprecedented sensitivity to signs of new physics. Most of the FCC-ee infrastructure could be reused for FCC-hh, which will provide proton–proton collisions at a center-of-mass energy of 100 TeV and could directly produce new particles with masses of up to several tens of TeV. This collider will also measure the Higgs self-coupling and explore the dynamics of electroweak symmetry breaking. Thermal dark matter candidates will be either discovered or conclusively ruled out by FCC-hh. Heavy-ion and electron–proton collisions (FCC-eh) will further contribute to the breadth of the overall FCC program. The integrated FCC infrastructure will serve the particle physics community through the end of the twenty-first century. This review combines key contents from the first three volumes of the FCC Conceptual Design Report.


2021 ◽  
Vol 251 ◽  
pp. 03061
Author(s):  
Gordon Watts

Array operations are one of the most concise ways of expressing the filtering and simple aggregation operations that are the hallmark of a particle physics analysis: selection, basic vector operations, and histogram filling. The High-Luminosity run of the Large Hadron Collider (HL-LHC), scheduled to start in 2026, will require physicists to regularly skim datasets that are over a PB in size and to repeatedly run over datasets of hundreds of TB – too big to fit in memory. Declarative programming techniques are a way of separating the intent of the physicist from the mechanics of finding the data and of using distributed computing to process it and make histograms. This paper describes a library that implements a declarative distributed framework based on array programming. This prototype library provides a framework in which different sub-systems cooperate, via plug-ins, to produce plots. The prototype has a ServiceX data-delivery sub-system and an awkward-array sub-system cooperating to generate the requested data or plots: ServiceX runs against ATLAS xAOD data and flat ROOT TTrees, while the awkward-array sub-system operates on the columnar data that ServiceX produces.
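As a flavour of the array-programming style the paper builds on (generic awkward-array and NumPy calls only; the ServiceX and plug-in machinery itself is not reproduced here), the hedged sketch below applies a jet selection to a toy jagged collection and fills a histogram without any explicit event loop. The event content and cut values are invented for illustration.

```python
# Minimal sketch of array-style selection and histogramming with awkward arrays.
import awkward as ak
import numpy as np

# A toy jagged event record: variable-length jet lists per event (invented values).
events = ak.Array({
    "jet_pt":  [[55.2, 23.1], [102.4], [], [31.0, 29.5, 88.8]],
    "jet_eta": [[0.3, -1.2],  [2.1],   [], [-0.7, 1.9, 0.1]],
})

# Selection: central jets above a pT threshold, expressed as whole-array operations.
mask = (events.jet_pt > 30.0) & (abs(events.jet_eta) < 2.4)
selected_pt = events.jet_pt[mask]

# Flatten the jagged result and fill a histogram.
counts, edges = np.histogram(np.asarray(ak.flatten(selected_pt)), bins=20, range=(0, 200))
print(counts)
```

The intent (which jets to keep, what to histogram) is stated once as array expressions; how the data are located and iterated over is left entirely to the library, which is the separation the declarative framework exploits.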


2005 ◽  
Vol 20 (22) ◽  
pp. 5276-5286
Author(s):  
JAMES E. BRAU

Research and development of detector technology are critical to the future particle physics program. The goals of the International Linear Collider, in particular, require advances that are challenging, despite the progress driven in recent years by the needs of the Large Hadron Collider. The ILC detector goals and challenges are described and the program to address them is summarized.


2022 ◽  
Vol 17 (01) ◽  
pp. C01011
Author(s):  
A. Samalan ◽  
M. Tytgat ◽  
G.A. Alves ◽  
F. Marujo ◽  
F. Torres Da Silva De Araujo ◽  
...  

Abstract During the upcoming High-Luminosity phase of the Large Hadron Collider (HL-LHC), the integrated luminosity delivered by the accelerator will increase to 3000 fb−1. The expected experimental conditions in that period in terms of background rates, event pileup, and the probable aging of the current detectors present a challenge for all the existing experiments at the LHC, including the Compact Muon Solenoid (CMS) experiment. To ensure a highly performing muon system for this period, several upgrades of the Resistive Plate Chamber (RPC) system of CMS are currently being implemented. These include the replacement of the readout system of the present detectors and the installation of two new RPC stations with improved chamber and front-end electronics designs. The current overall status of this CMS RPC upgrade project is presented.


2020 ◽  
Vol 35 (33) ◽  
pp. 2030022
Author(s):  
Aleksandr Alekseev ◽  
Simone Campana ◽  
Xavier Espinal ◽  
Stephane Jezequel ◽  
Andrey Kirianov ◽  
...  

The experiments at CERN's Large Hadron Collider use the Worldwide LHC Computing Grid (WLCG) for their distributed computing infrastructure. Through distributed workload and data management systems, the WLCG provides seamless access to hundreds of grid, HPC, and cloud-based computing and storage resources, distributed worldwide, to thousands of physicists. The LHC experiments annually process more than an exabyte of data using an average of 500,000 distributed CPU cores, enabling hundreds of new scientific results from the collider. However, the resources available to the experiments have been insufficient to meet data processing, simulation, and analysis needs over the past five years as the volume of data from the LHC has grown. The problem will be even more severe during the next LHC phases: the High-Luminosity LHC will be a multi-exabyte challenge in which the envisaged storage and compute needs are a factor of 10 to 100 above the expected technology evolution. The particle physics community therefore needs to evolve its current computing and data organization models, changing the way the infrastructure is used and managed, with a focus on optimizations that improve performance and efficiency without neglecting the simplification of operations. In this paper we highlight a recent R&D project on a scientific data lake and federated data storage.
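As a small, purely illustrative example of the federated-access pattern behind a data lake (placeholder URL and tree name; this is not the specific R&D stack discussed in the paper), the sketch below reads a few columns from a ROOT file directly over the XRootD protocol instead of copying it locally.

```python
# Illustrative remote read over the XRootD protocol, as used for federated storage.
# Requires XRootD support for uproot (e.g. the fsspec-xrootd package).
# The URL and tree name below are placeholders; substitute a file you can read.
import uproot

url = "root://some-data-federation.example//store/data/sample.root"  # placeholder

with uproot.open(url) as f:      # streams the file remotely, no local copy
    tree = f["Events"]           # assumed TTree name, purely illustrative
    arrays = tree.arrays(["run", "event"], entry_stop=1000)  # read only needed columns
    print(arrays["run"][:5])
```

Reading only the requested columns over the network, rather than replicating whole datasets to every site, is the kind of access pattern a data lake and federated storage are meant to make efficient.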


2001 ◽  
Vol 16 (supp01a) ◽  
pp. 425-427
Author(s):  
WENDY TAYLOR

In the spring of 2001, the upgraded Fermilab Tevatron will begin its collider physics run with proton–antiproton collisions at a center-of-mass energy of close to 2 TeV, where it is expected to deliver an integrated luminosity of 2 fb−1 in the first two years. The DØ detector is undergoing an extensive upgrade in order to take full advantage of the high-luminosity running conditions. The upgraded detector's new silicon vertex detector, fiber tracker, and lepton trigger capabilities make a rich B physics program possible at DØ. This paper describes the prospects for several DØ B physics measurements, including CP violation in [Formula: see text] decays, Bs mixing, and the [Formula: see text] lifetime.


2021 ◽  
Vol 251 ◽  
pp. 02054
Author(s):  
Olga Sunneborn Gudnadottir ◽  
Daniel Gedon ◽  
Colin Desmarais ◽  
Karl Bengtsson Bernander ◽  
Raazesh Sainudiin ◽  
...  

In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are used in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can easily be modified to address a variety of different decision problems. In this paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCnet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and has previously been trained on a single GPU; it shows a clustering accuracy of 81% when applied to the problem of multi-class classification of simulated jet events. Our implementation adds distributed training by means of the Horovod framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of Parquet files to split the data across compute nodes, the distributed training makes the model scalable to arbitrarily large input data, something that will be essential for use with real LHC data sets. We find that the model is well suited for distributed training, with the training time decreasing roughly in inverse proportion to the number of GPUs used. However, a more exhaustive, and possibly distributed, hyper-parameter search is still required to reach the accuracy reported for the original UCluster method.
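For readers unfamiliar with the recipe, the hedged sketch below shows the standard Horovod-on-TensorFlow-2 pattern that such a migration relies on, with a toy Keras model standing in for UCluster's ABCnet and randomly generated data in place of the Parquet shards; none of this is the project's actual training code.

```python
# Minimal Horovod + TensorFlow 2 training sketch (toy model, not UCluster itself).
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per GPU, launched e.g. with `horovodrun -np 4 python train.py`

# Pin each process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Toy stand-in for the clustering network; real inputs would come from Parquet shards.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Scale the learning rate with the number of workers and wrap the optimizer.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

# Random placeholder data; each worker would normally read its own data shard.
x = tf.random.normal((1024, 16))
y = tf.random.uniform((1024,), maxval=4, dtype=tf.int32)

model.fit(
    x, y,
    batch_size=64,
    epochs=2,
    # Keep all workers' weights in sync from the start.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```

Because each worker processes its own shard and gradients are averaged across workers every step, wall-clock training time drops roughly as the number of GPUs grows, which matches the scaling behaviour reported above.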

