GridPP: the UK grid for particle physics

Author(s):  
D. Britton ◽  
A.J. Cass ◽  
P.E.L. Clarke ◽  
J. Coles ◽  
D.J. Colling ◽  
...  

The start-up of the Large Hadron Collider (LHC) at CERN, Geneva, presents a huge challenge in processing and analysing the vast amounts of scientific data that will be produced. The architecture of the worldwide grid that will handle 15 PB of particle physics data annually from this machine is based on a hierarchical tiered structure. We describe the development of the UK component (GridPP) of this grid from a prototype system to a full exploitation grid for real data analysis. This includes the physical infrastructure, the deployment of middleware, operational experience and the initial exploitation by the major LHC experiments.

2020 ◽  
Vol 35 (33) ◽  
pp. 2030022
Author(s):  
Aleksandr Alekseev ◽  
Simone Campana ◽  
Xavier Espinal ◽  
Stephane Jezequel ◽  
Andrey Kirianov ◽  
...  

The experiments at CERN’s Large Hadron Collider use the Worldwide LHC Computing Grid (WLCG) as their distributed computing infrastructure. Through distributed workload and data management systems, it provides thousands of physicists with seamless access to hundreds of grid, HPC and cloud-based computing and storage resources distributed worldwide. The LHC experiments annually process more than an exabyte of data using an average of 500,000 distributed CPU cores, enabling hundreds of new scientific results from the collider. However, the resources available to the experiments have been insufficient to meet data processing, simulation and analysis needs over the past five years as the volume of data from the LHC has grown. The problem will be even more severe for the next LHC phases: the High Luminosity LHC will be a multi-exabyte challenge, with envisaged storage and compute needs a factor of 10 to 100 above the expected technology evolution. The particle physics community needs to evolve its current computing and data organization models, changing the way it uses and manages the infrastructure, with a focus on optimizations that improve performance and efficiency without neglecting the simplification of operations. In this paper we highlight a recent R&D project on a scientific data lake and federated data storage.


Author(s):  
Paul Collier

The Large Hadron Collider (LHC) is a 27 km circumference hadron collider, built at CERN to explore the energy frontier of particle physics. Approved in 1994, it was commissioned and began operation for data taking in 2009. The design and construction of the LHC presented many engineering and logistical challenges, which involved pushing a number of technologies well beyond their level at the time. Since the start-up of the machine, there has been a very successful 3-year run with an impressive amount of data delivered to the LHC experiments. With an increasingly large stored energy in the beam, the operation of the machine itself presented many challenges, and some of these will be discussed. Finally, the planning for the next 20 years is outlined, with progressive upgrades of the machine, first to nominal energy and then to progressively higher collision rates. At each stage the technical challenges are illustrated with a few examples.


2014 ◽  
Vol 23 (2) ◽  
pp. 169-191 ◽  
Author(s):  
Neil McHugh ◽  
Morag Gillespie ◽  
Jana Loew ◽  
Cam Donaldson

While lending for small businesses and business start-up is a long-standing feature of economic policy in the UK and Scotland, little is known about the support available for those taking the first steps into self-employment, particularly people from poorer communities. This paper presents the results of a project that aimed to address this gap. It mapped provision of support for enterprise, including microcredit (small loans for enterprise of £5,000 or less) and grants available to people in deprived communities. It found more programmes offering grants than loans. Grant programmes, although more likely to be time-limited and often linked to European funding, were generally better targeted at poor communities than loan programmes, which were more financially sustainable. The introduction of the Grameen Bank to Scotland will increase access to microcredit, but this paper argues that there is a place – and a need – for both loans and grants to support enterprise development across Scotland. A Scottish economic strategy should take account of all levels of enterprise development and, in striving towards a fairer Scotland, should ensure that the poorest people and communities are not excluded from self-employment because of the lack of the small amounts of support necessary to take the first steps.


Author(s):  
Ákos Vinkó ◽  
Péter Bocz

The increasing demand for guided transportation modes in urban areas generates the need for high-frequency services. Due to the frequent services, the track deterioration process is accelerated; therefore, exact knowledge of track quality is highly important for every railway company in order to provide a high-quality service level. For monitoring tramway tracks, an unconventional vehicle dynamics measurement setup was developed, which records the data of 3-axis wireless accelerometers mounted on the wheel discs of a regular in-service tram. In the implementation of the prototype system, bogie side-frame and car-body mounted sensors were also fitted to the instrumented vehicle to compare the efficiency of these conventional solutions with the developed arrangement. During the first test period, the instrumented vehicle operated as a dedicated inspection vehicle, in order to keep a constant velocity and help determine the factors influencing the results. Accelerations are processed to obtain the track irregularities, in order to determine whether the track needs to be repaired. Real data come from measurements taken on tram line 49 in Budapest, Hungary, and they have been validated by comparing the results to the actual state of the track as provided by a track geometry monitoring trolley and visual inspection. This paper presents the methods developed for validation and the analysis of preliminary results from the wheel-disc-mounted accelerometers. This vehicle dynamics measurement system is cheap to implement, and no significant modification of the vehicle is required. Therefore, in-service vehicles equipped with this system may offer a good opportunity for monitoring tramway track through multiple passes over the same track section.
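The abstract does not detail the signal-processing chain, but a common approach (not necessarily the authors' method) to recovering a track-irregularity profile from wheel or axle accelerations is detrending followed by double numerical integration. A minimal sketch, with all function names and sample data illustrative only:

```python
# Illustrative sketch: acceleration -> displacement-like irregularity signal
# via detrending (a crude high-pass stage) and double trapezoidal integration.
# The paper's actual filtering and integration scheme is not given in the
# abstract; this only shows the generic technique.

def detrend(signal):
    """Remove the mean to suppress sensor bias before integration."""
    mean = sum(signal) / len(signal)
    return [s - mean for s in signal]

def integrate(signal, dt):
    """Cumulative trapezoidal integration with a zero initial condition."""
    out = [0.0]
    for a, b in zip(signal, signal[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

def irregularity(accel, dt):
    """Acceleration -> velocity -> displacement, detrending at each stage."""
    velocity = integrate(detrend(accel), dt)
    return integrate(detrend(velocity), dt)

accel = [0.0, 0.1, 0.4, 0.1, 0.0, -0.1, -0.4, -0.1]  # fabricated samples, m/s^2
profile = irregularity(accel, dt=0.01)  # one value per input sample
```

In practice a proper high-pass filter would replace the mean removal, since drift and low-frequency sensor noise otherwise dominate after two integrations.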


2012 ◽  
Vol 522 ◽  
pp. 770-775
Author(s):  
Yu Zheng ◽  
Yan Rong Ni ◽  
Deng Zhe Ma

In order to satisfy the need for fast and convenient customization of manufacturing scientific data sharing services, the data service customization process and its key technologies were studied. First, the data resource model and the customization-oriented professional data service model were developed. Then the stages of service customization, from resource registration and service definition through service parsing to service generation, were analyzed, with emphasis on the parsing engine, based on service-parsing technology, and the incubator, based on service-generation technology. Finally, a prototype system was developed and validated by an example.


2005 ◽  
Vol 19 (1) ◽  
pp. 75-81
Author(s):  
Stratis Koutsoukos ◽  
John Shutt ◽  
John Sutherland

The authors were the principals in the evaluation of the Prince's Trust's Young People's Business Start-up Programme, 1994–1999, as it operated nationally in the UK and in the Yorkshire and the Humber region of England. In this paper they report the methodologies used in the evaluation and the key findings. They then use their reflections on both the research process and its outcomes to comment on small business policy.


2021 ◽  
Vol 251 ◽  
pp. 02054
Author(s):  
Olga Sunneborn Gudnadottir ◽  
Daniel Gedon ◽  
Colin Desmarais ◽  
Karl Bengtsson Bernander ◽  
Raazesh Sainudiin ◽  
...  

In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are utilised in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can be easily modified to provide solutions for a variety of different decision problems. In this paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCnet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and was previously trained on a single GPU; it shows a clustering accuracy of 81% when applied to the problem of multi-class classification of simulated jet events. Our implementation adds distributed training functionality by utilising the Horovod distributed training framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of parquet files to split the data between different compute nodes, distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC data sets. We find that the model is well suited to distributed training, with the training time decreasing in direct relation to the number of GPUs used. However, further improvement through a more exhaustive, and possibly distributed, hyper-parameter search is required in order to achieve the reported accuracy of the original UCluster method.
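One simple way to split a set of parquet files across Horovod workers, as the abstract describes, is rank-based sharding: worker `rank` out of `size` total takes every `size`-th file, so each file is read by exactly one worker. A minimal sketch (the function and file names are illustrative, not taken from the UCluster code):

```python
# Illustrative rank-based file sharding for distributed data loading.
# In a real Horovod job, `rank` and `size` would come from hvd.rank()
# and hvd.size() after hvd.init(); here they are plain integers.

def shard_files(files, rank, size):
    """Return the subset of input files assigned to worker `rank` of `size`."""
    return [f for i, f in enumerate(files) if i % size == rank]

files = [f"events_{i:03d}.parquet" for i in range(10)]
for rank in range(4):
    print(rank, shard_files(files, rank, size=4))
```

Because the shards are disjoint and cover every file, no coordination between workers is needed beyond agreeing on the sorted file list, which suits parquet's file-level splittability well.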


2021 ◽  
Vol 9 ◽  
Author(s):  
N. Demaria

The High Luminosity Large Hadron Collider (HL-LHC) at CERN will constitute a new frontier for particle physics after the year 2027. Experiments will undergo a major upgrade in order to meet this challenge, in which innovative sensors and electronics will play a central role. This paper describes recent developments in 65 nm CMOS technology for readout ASICs in future High Energy Physics (HEP) experiments. These allow unprecedented performance in terms of speed, noise, power consumption and granularity of the tracking detectors.


2021 ◽  
Vol 16 (11) ◽  
pp. P11014
Author(s):  
M. Abbas ◽  
M. Abbrescia ◽  
H. Abdalla ◽  
A. Abdelalim ◽  
S. AbuZeid ◽  
...  

After the Phase-2 high-luminosity upgrade to the Large Hadron Collider (LHC), the collision rate and therefore the background rate will significantly increase, particularly in the high-η region. To improve both the tracking and triggering of muons, the Compact Muon Solenoid (CMS) Collaboration plans to install triple-layer Gas Electron Multiplier (GEM) detectors in the CMS muon endcaps. Demonstrator GEM detectors were installed in CMS during 2017 to gain operational experience and perform a preliminary investigation of detector performance. We present the results of triple-GEM detector performance studies performed in situ during normal CMS and LHC operations in 2018. The distribution of cluster size and the efficiency to reconstruct high-pT muons in proton-proton collisions are presented, as well as the measurement of the environmental background rate producing hits in the GEM detector.

