Adapting ATLAS@Home to trusted and semi-trusted resources

2020 ◽  
Vol 245 ◽  
pp. 03027
Author(s):  
David Cameron ◽  
Vincent Garonne ◽  
Paul Millar ◽  
Shaojun Sun ◽  
Wenjing Wu

ATLAS@Home is a volunteer computing project which enables members of the public to contribute computing power to run simulations of the ATLAS experiment at CERN’s Large Hadron Collider. The computing resources provided to ATLAS@Home increasingly come not only from traditional volunteers, but also from data centres or office computers at institutes associated with ATLAS. ATLAS@Home was designed around not giving out sensitive credentials to volunteers, which means that a sandbox is needed to bridge data transfers between trusted and untrusted domains. As the scale of ATLAS@Home increases, this sandbox becomes a potential data management bottleneck. This paper explores solutions to this problem based on relaxing that constraint for trusted volunteers, allowing direct data transfer to grid storage and avoiding the intermediate sandbox. Fully trusted resources such as grid worker nodes can run with full access to grid storage, whereas semi-trusted resources such as student desktops can be provided with “macaroons”: time-limited access tokens which can only be used for specific files. The steps towards implementing these solutions, as well as initial results with real ATLAS simulation tasks, are discussed along with the experience gained so far and the next steps in the project.
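To make the token flow concrete, the sketch below shows how a trusted service might obtain a macaroon from a dCache-style WebDAV door and hand it to a semi-trusted node for a direct upload. This is a minimal sketch, not the project's actual code: the endpoint, file path, and proxy location are hypothetical, and the caveat and validity fields follow dCache's macaroon-request convention as an assumption.

```python
# Minimal sketch: obtaining a macaroon scoped to one file and one hour,
# then using it to upload directly to grid storage. Host, paths and the
# proxy location are hypothetical placeholders.
import requests

DOOR = "https://dcache.example.org:2880"          # hypothetical storage endpoint
PATH = "/atlas/scratchdisk/sim/output.root"       # hypothetical target file

# The trusted service authenticates with its grid proxy and asks the door
# for a token valid only for uploading this specific file for one hour.
resp = requests.post(
    DOOR + PATH,
    headers={"Content-Type": "application/macaroon-request"},
    json={"caveats": ["activity:UPLOAD"], "validity": "PT1H"},
    cert="/tmp/x509up_u1000",                     # combined proxy cert/key file
)
resp.raise_for_status()
macaroon = resp.json()["macaroon"]

# The semi-trusted volunteer node presents the macaroon as a bearer token;
# no long-lived grid credential ever leaves the trusted domain.
with open("output.root", "rb") as f:
    put = requests.put(DOOR + PATH, data=f,
                       headers={"Authorization": "Bearer " + macaroon})
put.raise_for_status()
```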

2019 ◽  
Vol 214 ◽  
pp. 03011 ◽  
Author(s):  
David Cameron ◽  
Wenjing Wu ◽  
Alexander Bogdanchikov ◽  
Riccardo Bianchi

The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone significant developments and, as a result, has become one of the largest resources contributing to ATLAS computing, expanding its scope beyond traditional volunteers to exploit idle computing power in ATLAS data centres. Replacing virtualization on Linux with container technology has made the entry barrier significantly lower for data centre participation, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level, and we describe how ATLAS@Home was incorporated into this new paradigm.
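As an illustration of the container-based approach, a BOINC-style wrapper on a Linux data centre node can launch the payload in a container rather than a VirtualBox VM. The sketch below is a hypothetical example of that idea; the image path, bind mounts, and payload command are assumptions, not the project's actual configuration.

```python
# Hypothetical sketch: launching a containerized ATLAS simulation payload
# instead of booting a virtual machine. Image path, bind mount and payload
# command are placeholders for illustration.
import subprocess

cmd = [
    "singularity", "exec",
    "--bind", "/var/lib/boinc/slot0:/data",           # job input/output directory
    "/cvmfs/atlas.cern.ch/containers/atlas-sim.sif",  # hypothetical image path
    "athena.py", "/data/simulate.py",                 # hypothetical payload command
]
# Run the payload; the exit code would be reported back to the BOINC server.
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode)
```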


2005 ◽  
Vol 20 (16) ◽  
pp. 3871-3873 ◽  
Author(s):  
DAVID MALON

Each new generation of collider experiments confronts the challenge of delivering an event store having at least the performance and functionality of current-generation stores, in the presence of an order of magnitude more data and new computing paradigms (object orientation just a few years ago; grid and service-based computing today). The ATLAS experiment at the Large Hadron Collider, for example, will produce 1.6-megabyte events at 200 Hz, an annual raw data volume of 3.2 petabytes. With derived and simulated data, the total volume may approach 10 petabytes per year. Scale, however, is not the only challenge. In the Large Hadron Collider (LHC) experiments, the preponderance of computing power will come from outside the host laboratory. More significantly, no single site will host a complete copy of the event store: data will be distributed, not simply replicated for convenience, and many physics analyses will routinely require distributed (grid) computing. This paper uses the emerging ATLAS computing model to provide a glimpse of how next-generation event stores are taking shape, touching on key issues in navigation, distribution, scale, coherence, data models and representation, metadata infrastructure, and the role(s) of databases in event store management.
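The quoted figures can be checked with simple arithmetic. The sketch below is a back-of-the-envelope verification; the roughly 10^7 seconds of accelerator live time per year is an assumption not stated in the abstract.

```python
# Back-of-the-envelope check of the quoted data rates.
event_size_mb = 1.6       # megabytes per event
rate_hz = 200             # events per second
live_seconds = 1e7        # assumed operational seconds per year (not in the text)

throughput_mb_s = event_size_mb * rate_hz              # 320 MB/s off the detector
raw_pb_per_year = throughput_mb_s * live_seconds / 1e9 # 1 PB = 1e9 MB (decimal)
print(raw_pb_per_year)    # 3.2 petabytes of raw data per year, as quoted
```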


2017 ◽  
Vol 7 (1) ◽  
pp. 379-393 ◽  
Author(s):  
Javier Barranco ◽  
Yunhai Cai ◽  
David Cameron ◽  
Matthew Crouch ◽  
Riccardo De Maria ◽  
...  

The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and since 2011 has been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.


Science ◽  
2010 ◽  
Vol 329 (5997) ◽  
pp. 1305-1305 ◽  
Author(s):  
B. Knispel ◽  
B. Allen ◽  
J. M. Cordes ◽  
J. S. Deneva ◽  
D. Anderson ◽  
...  

Einstein@Home aggregates the computer power of hundreds of thousands of volunteers from 192 countries to mine large data sets. It has now found a 40.8-hertz isolated pulsar in radio survey data from the Arecibo Observatory taken in February 2007. Additional timing observations indicate that this pulsar is likely a disrupted recycled pulsar. PSR J2007+2722’s pulse profile is remarkably wide with emission over almost the entire spin period; the pulsar likely has closely aligned magnetic and spin axes. The massive computing power provided by volunteers should enable many more such discoveries.


2015 ◽  
Vol 8 (10) ◽  
pp. 8981-9020 ◽  
Author(s):  
C. Zhang ◽  
L. Liu ◽  
G. Yang ◽  
R. Li ◽  
B. Wang

Data transfer, which means transferring data fields between two component models or rearranging data fields among processes of the same component model, is a fundamental operation of a coupler. Most state-of-the-art couplers currently use an implementation based on the point-to-point (P2P) communication of the Message Passing Interface (MPI) (hereafter the "P2P implementation"). In this paper, we reveal the drawbacks of the P2P implementation, including low communication bandwidth due to small message sizes, a large and variable number of MPI messages, and jams during communication. To overcome these drawbacks, we propose a butterfly implementation for data transfer. Although the butterfly implementation outperforms the P2P implementation in many cases, it degrades performance in some cases because the total message size it transfers is larger than that of the P2P implementation. To improve data transfer in all cases, we design and implement an adaptive data transfer library that combines the advantages of both the butterfly and P2P implementations. Performance evaluation shows that the adaptive data transfer library significantly improves the performance of data transfer in most cases and does not decrease performance in any case. The adaptive data transfer library is now publicly available and has been incorporated into the coupler C-Coupler1 to improve its data transfer performance. We believe that it can also improve other couplers.
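To make the butterfly pattern concrete, the sketch below contrasts it with naive P2P using mpi4py. This is an illustrative sketch under stated assumptions, not the paper's library: real couplers exchange model fields, whereas here each rank merely merges dictionaries so the structure of the log2(P) exchange stages is visible.

```python
# Illustrative butterfly exchange (allgather-style) with mpi4py.
# Run with a power-of-two number of ranks, e.g.: mpiexec -n 4 python butterfly.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

data = {rank: f"field chunk from rank {rank}"}  # stand-in for local model data

stage = 1
while stage < size:
    partner = rank ^ stage          # pairwise partner for this stage
    # Each stage halves the number of remaining messages and doubles their
    # size, which is the bandwidth advantage over many small P2P messages.
    received = comm.sendrecv(data, dest=partner, source=partner)
    data.update(received)
    stage <<= 1

# After log2(size) stages, every rank holds the merged data of all ranks.
assert len(data) == size
```

In contrast, a P2P implementation would send one (often small) message per communicating pair, giving up to P*(P-1) messages instead of P*log2(P) larger ones; the adaptive library described above chooses between the two schemes depending on which moves less total data.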


2021 ◽  
Vol 2105 (1) ◽  
pp. 012026
Author(s):  
Stamatios Tzanos

In conjunction with the High Luminosity upgrade of the Large Hadron Collider accelerator at CERN, the ATLAS detector is also being upgraded to handle the significantly higher data rates. The ATLAS muon end-cap system upgrade centres on the replacement of the Small Wheel. The New Small Wheel (NSW) is expected to combine high tracking precision with upgraded information for the Level-1 trigger. To accomplish this, small-strip Thin Gap Chamber (sTGC) and MicroMegas detector technologies are being deployed. Because of their installation location in ATLAS, the effects of the Barrel Toroid and End-Cap Toroid magnets on the NSW must be measured. For the final installation in ATLAS, each sTGC large double wedge will be equipped with Hall-effect magnetic field sensors to monitor the magnetic field near the NSW. The readout is done with an Embedded Local Monitor Board (ELMB) called the MDT DCS Module (MDM). For the integration of this hardware in the experiment, a detector control system was first developed to test the functionality of all sensors before their installation on the detectors. Subsequently, another detector control system was developed for the commissioning of the sensors. Finally, a detector control system based on the above two is under development for the expert panels of the ATLAS experiment. In this paper, the sensor readout, the connectivity mapping, and the detector control systems are presented.
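The pre-installation functionality test described above amounts to polling every Hall sensor on a wedge and flagging implausible readings. The sketch below is entirely hypothetical: the channel count, field range, and the read_sensor() helper are stand-ins for the actual ELMB/MDM readout chain, which the abstract does not specify.

```python
# Hypothetical sketch of a sensor functionality check before installation.
# SENSORS_PER_WEDGE, the field range and read_sensor() are assumptions.
from typing import Callable

SENSORS_PER_WEDGE = 8            # assumed channel count, for illustration
B_MIN, B_MAX = -2.0, 2.0         # assumed plausible field range in tesla

def check_wedge(read_sensor: Callable[[int], float]) -> list[int]:
    """Return the channels whose reading fails the sanity check."""
    bad = []
    for ch in range(SENSORS_PER_WEDGE):
        b = read_sensor(ch)      # one Hall-probe reading in tesla
        if not (B_MIN <= b <= B_MAX):
            bad.append(ch)       # dead, disconnected or mis-mapped channel
    return bad
```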


2017 ◽  
Vol 17 (2) ◽  
pp. 77-86
Author(s):  
Isabelle Darmon ◽  
Carlos Frade

This article addresses some fundamental affinities between theatre and teaching and is based on emerging work in a long-term experiment which we began in the conference ‘Weber/Simmel Antagonisms: Staged Dialogues’, held at the University of Edinburgh on December 2015. Aimed at exploring the possibilities of the theatrical and dialogical forms for teaching the classics of social and cultural theory, it is a risky experiment whose initial results are presented in this special issue. In order to introduce the dialogues and situate them in the context of the broader project, the article does three things: first, it expounds the process of subjectivation at work in both theatre and teaching and explores some of the modalities of the subjective shift sought for in the public and students. Second, it explains the specificity of this experiment by contrasting it with other uses of theatrical dialogue in teaching. Finally, before briefly introducing each of the dialogues, the article clarifies the fundamental difference between the dialogical form and debate, as radically separating them is at the heart of any experiment in subjectivation, away from the stirring of opinions.


2019 ◽  
Vol 214 ◽  
pp. 07007
Author(s):  
Petr Fedchenkov ◽  
Andrey Shevel ◽  
Sergey Khoruzhnikov ◽  
Oleg Sadov ◽  
Oleg Lazo ◽  
...  

ITMO University (ifmo.ru) is developing a cloud of geographically distributed data centres (DCs), that is, data centres located hundreds or thousands of kilometres apart. Geographically distributed data centres promise a number of advantages for end users, such as the opportunity to add further DCs, and service availability through redundancy and geographical distribution. Services like data transfer, computing, and data storage are provided to users in the form of virtual objects, including virtual machines, virtual storage, and virtual data transfer links.
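The availability-through-redundancy idea can be sketched as a virtual object backed by replicas at several sites, with requests falling back to the next site when one is unreachable. The model below is a toy illustration; the class names, sites, and availability check are all assumptions.

```python
# Toy sketch: a virtual storage object replicated across distant data
# centres, served from the first reachable replica. All names hypothetical.
from dataclasses import dataclass

@dataclass
class DataCentre:
    name: str
    up: bool                         # stand-in for a real health check

@dataclass
class VirtualStorage:
    replicas: list[DataCentre]       # same virtual object at several sites

    def serve_from(self) -> DataCentre:
        for dc in self.replicas:     # first reachable replica wins
            if dc.up:
                return dc
        raise RuntimeError("no replica available")

vol = VirtualStorage([DataCentre("DC-A", up=False), DataCentre("DC-B", up=True)])
print(vol.serve_from().name)         # falls back to the second site: DC-B
```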


2019 ◽  
Vol 214 ◽  
pp. 01034
Author(s):  
Ralf Spiwoks ◽  
Aaron Armbruster ◽  
German Carrillo-Montoya ◽  
Magda Chelstowska ◽  
Patrick Czodrowski ◽  
...  

The Muon to Central Trigger Processor Interface (MUCTPI) of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN is being upgraded for the next run of the LHC in order to use optical inputs and to provide full-precision information for muon candidates to the topological trigger processor (L1TOPO) of the Level-1 trigger system. The new MUCTPI is implemented as a single ATCA blade with high-end processing FPGAs which eliminate double-counting of muon candidates in overlapping regions, send muon candidates to L1TOPO and muon multiplicities to the Central Trigger Processor (CTP), and provide readout data to the data acquisition system of the experiment. A Xilinx Zynq System-on-Chip (SoC) with a programmable logic part and a processor part is used for communication with the processing FPGAs and the run control system. The processor part, based on ARM processor cores, runs embedded Linux prepared using the framework of the Linux Foundation's Yocto project. The ATLAS run control software was ported to the processor part, and a run control application was developed which receives, at configuration, all data necessary for the overlap handling and candidate counting of the processing FPGAs. During running, the application provides ample monitoring of the physics data and of the operation of the hardware.
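The two FPGA functions named above, overlap removal and multiplicity counting, can be sketched schematically. The code below is illustrative only: the overlap lookup table, candidate format, and pT thresholds are assumptions, not the firmware's actual encoding.

```python
# Schematic sketch of MUCTPI-style overlap removal and multiplicity counting.
# The overlap table and candidate format are hypothetical.
from collections import Counter

# Pairs of trigger regions known to overlap geometrically; in hardware this
# mapping is loaded at configuration, as described in the text.
OVERLAPS = {("sectorA", 3): ("sectorB", 0)}

def remove_overlaps(candidates):
    """Keep only the higher-pT candidate of each overlapping pair."""
    kept = list(candidates)
    for cand in candidates:
        partner = OVERLAPS.get((cand["sector"], cand["roi"]))
        for other in candidates:
            if partner == (other["sector"], other["roi"]) and other["pt"] >= cand["pt"]:
                if cand in kept:
                    kept.remove(cand)   # drop the double-counted candidate
    return kept

def multiplicities(candidates, thresholds=(4, 6, 10, 15, 20, 25)):
    """Count candidates above each pT threshold (GeV), as sent to the CTP."""
    return Counter(t for c in candidates for t in thresholds if c["pt"] >= t)

cands = [
    {"sector": "sectorA", "roi": 3, "pt": 7.0},
    {"sector": "sectorB", "roi": 0, "pt": 12.0},   # overlaps with the first
]
print(multiplicities(remove_overlaps(cands)))      # only the 12 GeV muon counts
```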

