Enabling ATLAS big data processing on Piz Daint at CSCS

2020, Vol. 245, pp. 09005.
Author(s): F G Sciacca

Predictions of the LHC computing requirements for Run 3 and Run 4 (HL-LHC) over the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. Concentrating computational resources in fewer, larger, and more efficient centres should increase the cost-efficiency of the operation and, thus, of the data processing. Large-scale general-purpose HPC centres could play a crucial role in such a model. We report on the technical challenges and solutions adopted to enable the processing of ATLAS experiment data on the European flagship HPC Piz Daint at CSCS, now acting as a pledged WLCG Tier-2 centre. With the transition of the Tier-2 from classic to HPC resources now finalised, we also report on performance figures over two years of production running and on efforts towards a deeper integration of the HPC resource within the ATLAS computing framework at different tiers.
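
The abstract does not spell out how payloads are launched on the HPC system, but a common pattern for running experiment workloads on a Slurm-managed machine is to wrap the payload in a container and submit a batch script. The sketch below illustrates that pattern only; the container runtime, image name, paths, payload command, and resource constraints are illustrative placeholders, not the configuration actually used at CSCS.

```python
# Minimal sketch (not from the paper): generate and submit a Slurm batch script
# that runs an experiment payload inside a container on an HPC system.
# Image name, paths, payload command, and constraints are hypothetical.
import subprocess
import tempfile

BATCH_TEMPLATE = """#!/bin/bash
#SBATCH --job-name=atlas-sim
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36
#SBATCH --time=12:00:00
#SBATCH --constraint=mc

# Run the (hypothetical) payload inside a container so the experiment
# software environment is decoupled from the host operating system.
singularity exec /scratch/images/atlas-payload.sif \\
    /srv/run_payload.sh --input {input_file}
"""

def submit(input_file: str) -> str:
    """Write a batch script for one input file and hand it to sbatch."""
    with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as fh:
        fh.write(BATCH_TEMPLATE.format(input_file=input_file))
        script = fh.name
    result = subprocess.run(["sbatch", script],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "Submitted batch job 123456"

if __name__ == "__main__":
    print(submit("/scratch/data/evgen.pool.root"))
```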

2019, Vol. 214, pp. 03023.
Author(s): F G Sciacca, M Weber

Predictions of the LHC computing requirements for Run 3 and Run 4 (HL-LHC) over the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. The use of large-scale computational resources at HPC centres worldwide is expected to substantially increase the cost-efficiency of the processing. In order to pave the path towards the HL-LHC data processing, the Swiss Institute of Particle Physics (CHIPP) has taken the strategic decision to migrate the processing of all the Tier-2 workloads for ATLAS and other LHC experiments from a dedicated x86_64 cluster, which has been in continuous operation and evolution since 2007, to Piz Daint, the current European flagship HPC, which ranks third in the TOP500 at the time of writing. We report on the technical challenges and solutions adopted to migrate to Piz Daint, and on the experience and measured performance for ATLAS in over one year of running in production.


Author(s): Harald Kruggel-Emden, Frantisek Stepanek, Ante Munjiza

The time- and event-driven discrete element methods are increasingly applied to realistic industrial-scale applications. However, they are still computationally very demanding. Realistic modeling is often limited or even impeded by the cost of the computational resources required. In this paper the time-driven and event-driven discrete element methods are reviewed, with particular attention to the available algorithms. Their options for simultaneously modeling an interstitial fluid are discussed. A potential extension of the time-driven method, currently under development, which functions as a link between event- and time-driven methods, is suggested and briefly addressed.
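
To make the distinction concrete, a time-driven DEM advances all particles by a fixed time step and resolves whatever contacts exist at that instant, whereas an event-driven method jumps from collision to collision. The sketch below shows one fixed-step update with a linear spring-dashpot normal contact model; the brute-force O(N^2) contact search, the parameter values, and the symplectic Euler update are illustrative simplifications, not the authors' algorithms.

```python
# Minimal sketch of one time-driven DEM step for spheres with a linear
# spring-dashpot normal contact force; parameters are illustrative.
import numpy as np

def dem_step(pos, vel, radii, masses, dt, k_n=1e4, c_n=5.0):
    """Advance particle positions/velocities by one fixed time step dt."""
    n = len(radii)
    forces = np.zeros_like(pos)
    for i in range(n):                       # brute-force contact detection
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0.0:                # particles are in contact
                normal = d / dist
                v_rel = np.dot(vel[j] - vel[i], normal)
                f = (k_n * overlap - c_n * v_rel) * normal  # spring + dashpot
                forces[i] -= f
                forces[j] += f
    vel = vel + dt * forces / masses[:, None]   # semi-implicit Euler update
    pos = pos + dt * vel
    return pos, vel
```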


Author(s): Zahid Raza, Deo P. Vidyarthi

Grid is a parallel and distributed computing network system comprising heterogeneous computing resources spread over multiple administrative domains that offers high-throughput computing. Since the Grid operates at a large scale, there is always a possibility of failure, ranging from hardware to software, and the penalty paid for these failures can be very high. The system needs to be tolerant to the various possible failures which, in spite of many precautions, are bound to happen. Replication is a strategy often used to introduce fault tolerance in the system and to ensure successful execution of the job even when some of the computational resources fail. Though replication incurs a heavy cost, a selective degree of replication can offer a good compromise between performance and cost. This chapter proposes a co-scheduler that can be integrated with the main scheduler for the execution of jobs submitted to a computational Grid. The main scheduler may have any performance optimization criteria; the integration of the co-scheduler is an added advantage towards fault tolerance. The chapter evaluates the performance of the co-scheduler with a main scheduler designed to minimize the turnaround time of a modular job, introducing module replication to counter the effects of node failures in a Grid. A simulation study reveals that the model works well under various conditions, resulting in a graceful degradation of the scheduler's performance while improving the overall reliability offered to the job.
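
The core idea, placing each job module on several distinct nodes so that a node failure does not abort the whole job, can be sketched in a few lines. The placement policy (random selection), the replication degree, and the success criterion below are illustrative assumptions, not the chapter's actual co-scheduler model.

```python
# Illustrative sketch (not the chapter's model): replicate each job module on
# r distinct nodes so the job survives individual node failures.
import random

def schedule_with_replication(modules, nodes, r=2):
    """Map every module to r distinct nodes chosen at random."""
    return {m: random.sample(nodes, k=r) for m in modules}

def job_succeeds(placement, failed_nodes):
    """The job completes if each module has at least one surviving replica."""
    return all(any(n not in failed_nodes for n in replicas)
               for replicas in placement.values())

# Example: 4 modules, 6 nodes, degree-2 replication, one node fails.
plan = schedule_with_replication(["m1", "m2", "m3", "m4"],
                                 ["n1", "n2", "n3", "n4", "n5", "n6"], r=2)
print(plan)
print("job survives failure of n3:", job_succeeds(plan, failed_nodes={"n3"}))
```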


1983, Vol. 213 (2-3), pp. 317-327.
Author(s): Jun Kokame, Motonobu Takano, Tomoko Oshikubo, Kurazo Chiba, Kumataro Ukai, ...

Very-large-scale integration (VLSI) offers new opportunities in computer architecture. The cost of a processor has been reduced to that of a few thousand bytes of memory, with the result that parallel computers can be constructed as easily and economically as their sequential predecessors. In particular, a parallel computer constructed by replication of a standard computing element is well suited to the mass-production economics of the technology. The emergence of the new parallel computers has stimulated the development of new programming languages and algorithms. One example is the Occam language which has been designed to enable applications to be expressed in a form suitable for execution on a variety of parallel architectures. Further developments in language and architecture will enable processing resources to be allocated and deallocated as freely as memory, giving rise to some hope that users of general-purpose parallel computers will be freed from the current need to design algorithms to suit specific architectures.
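
The "replication of a standard computing element" idea survives today as the processor-farm pattern: identical workers are instantiated as needed and fed independent work items. The sketch below is a loose modern analogue in Python rather than Occam (which would express the same structure with channels and PAR); the worker function and pool size are placeholders.

```python
# Loose modern analogue (Python, not Occam) of the processor-farm pattern:
# identical worker processes are replicated and fed independent work items,
# so adding workers is as routine as allocating more memory.
from multiprocessing import Pool

def work_item(x: int) -> int:
    """Stand-in computation executed by every replicated worker."""
    return x * x

if __name__ == "__main__":
    with Pool(processes=4) as farm:      # four identical "computing elements"
        results = farm.map(work_item, range(16))
    print(results)
```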


2018.
Author(s): Naomi Liza Indigo, James Smith, Jonathan K Webb, Ben Phillips

The uptake of baits is a key variable in management actions aimed at the vaccination, training, or control of many vertebrate species. Increasingly, however, it is appreciated that individuals of the target species vary in their likelihood of taking baits. To optimise a baiting program, then, we require knowledge, not only on the rate of bait uptake, and how this changes with bait availability, but also knowledge on the proportion of the target population that will take a bait. The invasive cane toad (Rhinella marina) is a major threat to northern quolls (Dasyurus hallucatus), which are poisoned when they attack this novel toxic prey item. Conditioned taste aversion baits (cane toad sausages) can be delivered in the field to train individual northern quolls to avoid toads. Here we report on a large-scale field trial across eleven sites across one large property in Western Australia. Camera trapping and statistical modelling was used to estimate the proportion of baitable animals in the population, and the proportion of these that were baited at varying bait availabilities. Population estimates varied at each site from 3.5 (0.76 SD) to 18 (1.58 SD) individual quolls per site, resulting in a range across sites of 0.6-4 baits available per individual. Bait uptake increased with increasing bait availability. We also estimate that only 62% of individual quolls are baitable, and that a baiting rate of 3 baits per individual (rather than per area) will result in almost all of these baitable individuals being treated. We compared our statistical method with prior data informing the probability of being baitable; and with probability of being baitable set to 1; this resulted in largely differing estimates in relation to an appropriate baiting rate. Data and models such as ours provide wildlife managers with information critical to informed decision making and are fundamental to estimate the cost-efficiency of any baiting campaign.
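
One simple way to estimate a "proportion baitable" jointly with an uptake-availability curve is a two-parameter binomial model fitted by maximum likelihood. The sketch below uses an assumed functional form (a saturating exponential times the baitable fraction) and invented per-site counts purely for illustration; it is not the authors' model or data.

```python
# Illustrative sketch (not the authors' model): probability a quoll is baited
# is modelled as p_baitable * (1 - exp(-a * baits_per_individual)); the two
# parameters are fitted by maximum likelihood from hypothetical site counts.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Hypothetical per-site data: quolls present, quolls baited, baits per quoll.
n_quolls  = np.array([10, 14,  6, 18,  9])
n_baited  = np.array([ 5,  8,  3, 11,  4])
bait_rate = np.array([1.0, 2.0, 0.6, 3.0, 1.5])

def neg_log_lik(params):
    p_baitable, a = params
    p_take = p_baitable * (1.0 - np.exp(-a * bait_rate))
    return -np.sum(binom.logpmf(n_baited, n_quolls, p_take))

fit = minimize(neg_log_lik, x0=[0.5, 1.0],
               bounds=[(0.01, 1.0), (0.01, 10.0)])
print("estimated proportion baitable:", fit.x[0])
print("estimated uptake rate parameter:", fit.x[1])
```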


Author(s): Eric MORALES-AGUILAR, Selma E. SANTILLAN-FLORES, Juan M. GONZÁLEZ-LÓPEZ, Efrain VILLALVAZO-LAUREANO

This paper proposes the design and construction of a didactic vending machine; students are immersed in work against a delivery date, much like companies that operate on a project basis. The vending machine to be developed dispenses four different products, has a control panel and a 16 x 2 LCD screen that shows the cost of the chosen product, gives change, and performs its data processing via an Arduino. A 3D simulation is carried out to ensure the compatibility of all components. An innovation of the prototype is that it sends a text message when a product is about to run out, containing the product description, the machine number, and its location; this provides the supplier with better control over their machines at large scale. A comprehensive financial investment analysis is performed to ensure the viability of the project.
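
The controller logic described (select product, compute change, trigger a low-stock text message) can be summarised in a short platform-neutral sketch. The prices, stock levels, threshold, and alert text below are invented placeholders, and the real firmware would run on the Arduino and drive the LCD and a messaging module rather than print to a console.

```python
# Platform-neutral sketch of the dispensing logic the paper attributes to the
# Arduino controller; products, prices, stock, and alert text are hypothetical.
PRODUCTS = {  # slot -> [name, price, stock]
    1: ["Water", 10.0, 5],
    2: ["Juice", 15.0, 5],
    3: ["Chips", 12.0, 5],
    4: ["Cookie", 8.0, 5],
}
LOW_STOCK = 1  # remaining units that trigger the restock text message

def vend(slot: int, inserted: float, machine_id: str = "VM-01"):
    """Dispense the product in `slot` if paid and in stock; return the outcome."""
    name, price, stock = PRODUCTS[slot]
    if inserted < price or stock == 0:
        return {"dispensed": False, "change": inserted, "alert": None}
    PRODUCTS[slot][2] = stock - 1
    alert = None
    if PRODUCTS[slot][2] <= LOW_STOCK:       # low-stock notification to supplier
        alert = f"{machine_id}: '{name}' almost out ({PRODUCTS[slot][2]} left)"
    return {"dispensed": True, "change": round(inserted - price, 2), "alert": alert}

print(vend(2, 20.0))   # dispenses Juice and returns 5.0 in change
```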


2020, Vol. 54 (6), pp. 1775-1791.
Author(s): Nazila Aghayi, Samira Salehpour

The concept of cost efficiency has become tremendously popular in data envelopment analysis (DEA), as it serves to assess a decision-making unit (DMU) in terms of producing minimum-cost outputs. A large variety of precise and imprecise models have been put forward to measure cost efficiency for the DMUs which have a role in constructing the production possibility set; yet there is not an extensive literature on cost efficiency (CE) measurement for sample DMUs (SDMUs). In an effort to remedy the shortcomings of current models, herein is introduced a generalized cost efficiency model that is capable of operating in a fuzzy environment, involving different types of fuzzy numbers, while preserving Farrell's decomposition of cost efficiency. Moreover, to the best of our knowledge, the present paper is the first to measure cost efficiency by using vectors. Ultimately, a useful example is provided to confirm the applicability of the proposed methods.
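
For reference, the standard crisp DEA cost-minimisation model and the Farrell decomposition the abstract refers to are as follows; the paper's fuzzy, vector-valued extension is not reproduced here.

```latex
% Standard (crisp) DEA cost model for DMU_o with input prices c_o,
% inputs x_j, outputs y_j over n observed DMUs.
\begin{align*}
C_o^{*} \;=\; \min_{x,\;\lambda}\quad & c_o^{\top} x \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_j \le x, \\
& \sum_{j=1}^{n} \lambda_j y_j \ge y_o, \\
& \lambda_j \ge 0, \quad j = 1,\dots,n.
\end{align*}
% Cost efficiency and its Farrell decomposition into technical and
% allocative components:
\[
\mathrm{CE}_o = \frac{C_o^{*}}{c_o^{\top} x_o}, \qquad
\mathrm{CE}_o = \mathrm{TE}_o \times \mathrm{AE}_o .
\]
```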

