Production experience and performance for ATLAS data processing on a Cray XC-50 at CSCS

2019 ◽  
Vol 214 ◽  
pp. 03023
Author(s):  
F G Sciacca ◽  
M Weber

Predictions of the requirements for LHC computing for Run 3 and Run 4 (HL-LHC) over the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. The use of large-scale computational resources at HPC centres worldwide is expected to increase the cost-efficiency of the processing substantially. To pave the way towards HL-LHC data processing, the Swiss Institute of Particle Physics (CHIPP) has taken the strategic decision to migrate the processing of all Tier-2 workloads for ATLAS and other LHC experiments from a dedicated x86_64 cluster, in continuous operation and evolution since 2007, to Piz Daint, the current European flagship HPC system, which ranks third in the TOP500 at the time of writing. We report on the technical challenges and solutions adopted in migrating to Piz Daint, and on the experience and measured performance for ATLAS in over one year of production running.

2020 ◽  
Vol 245 ◽  
pp. 09005
Author(s):  
F G Sciacca

Predictions of the requirements for LHC computing for Run 3 and Run 4 (HL-LHC) over the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. Concentrating computational resources in fewer, larger, and more efficient centres should increase the cost-efficiency of the operation and, thus, of the data processing. Large-scale general-purpose HPC centres could play a crucial role in such a model. We report on the technical challenges and solutions adopted to enable the processing of the ATLAS experiment data on the European flagship HPC Piz Daint at CSCS, now acting as a pledged WLCG Tier-2 centre. With the transition of the Tier-2 from classic to HPC resources now finalised, we also report on performance figures over two years of production running and on efforts towards a deeper integration of the HPC resource within the ATLAS computing framework at different tiers.


Author(s):  
Harald Kruggel-Emden ◽  
Frantisek Stepanek ◽  
Ante Munjiza

The time- and event-driven discrete element methods are increasingly applied to realistic industrial-scale applications. However, they are still computationally very demanding, and realistic modeling is often limited, or even impeded, by the cost of the required computational resources. In this paper the time-driven and event-driven discrete element methods are reviewed, with particular attention to the available algorithms. Their options for simultaneously modeling an interstitial fluid are discussed. A potential extension of the time-driven method, currently under development, that functions as a link between the event- and time-driven methods is suggested and briefly addressed.
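As an illustration of the time-driven approach, the minimal sketch below advances a small set of spherical particles with a linear spring-dashpot normal contact model and an explicit integration step. The contact law, parameters, and brute-force neighbour search are illustrative choices, not taken from the paper.

```python
# Minimal time-driven DEM sketch: linear spring-dashpot normal contacts,
# semi-implicit Euler time integration. All parameters are illustrative.
import numpy as np

def dem_step(pos, vel, radius, mass, dt, k_n=1e4, c_n=5.0):
    """Advance particle positions/velocities by one time step."""
    n = len(pos)
    force = np.zeros_like(pos)
    # Brute-force pair check; real codes use cell or neighbour lists.
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[j] - pos[i]
            dist = np.linalg.norm(rij)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0.0:                       # particles in contact
                normal = rij / dist
                rel_vn = np.dot(vel[j] - vel[i], normal)
                # Linear spring (repulsion) + dashpot (damping), normal direction only
                fn = (k_n * overlap - c_n * rel_vn) * normal
                force[i] -= fn
                force[j] += fn
    vel = vel + dt * force / mass[:, None]          # update velocities first
    pos = pos + dt * vel                            # then positions
    return pos, vel

# Usage example: two approaching particles
pos = np.array([[0.0, 0.0], [0.021, 0.0]])
vel = np.array([[0.1, 0.0], [-0.1, 0.0]])
radius = np.array([0.01, 0.01])
mass = np.array([1e-3, 1e-3])
for _ in range(100):
    pos, vel = dem_step(pos, vel, radius, mass, dt=1e-5)
print(pos, vel)
```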


2017 ◽  
Vol 14 (2) ◽  
pp. 172988141666366 ◽  
Author(s):  
Imen Chaari ◽  
Anis Koubaa ◽  
Hachemi Bennaceur ◽  
Adel Ammar ◽  
Maram Alajlan ◽  
...  

This article presents the results of the 2-year iroboapp research project, which aims at devising path planning algorithms for large grid maps with much faster execution times while tolerating very small slacks with respect to the optimal path. We investigated both exact and heuristic methods. We contributed the design, analysis, evaluation, implementation and experimental assessment of several algorithms for grid map path planning, covering both exact and heuristic methods. We also designed an innovative algorithm called relaxed A-star that has linear complexity with relaxed constraints and provides near-optimal solutions with an extremely reduced execution time compared to A-star. We evaluated the performance of the different algorithms and concluded that relaxed A-star is the best path planner, as it provides a good trade-off among all the metrics, but we noticed that heuristic methods have good features that can be exploited to improve the solution of the relaxed exact method. This led us to design new hybrid algorithms that combine our relaxed A-star with heuristic methods, improving the solution quality of relaxed A-star at the cost of slightly higher execution time while remaining much faster than A-star for large-scale problems. Finally, we demonstrate how to integrate the relaxed A-star algorithm into the robot operating system as a global path planner and show that it outperforms the default path planner, with an execution time 38% faster on average.
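The sketch below is an illustrative grid planner in the spirit of the relaxed approach described above: each cell keeps the first g-value assigned to it and is never re-opened, trading guaranteed optimality for speed. It is not the authors' RA* implementation; the Manhattan heuristic, 4-connectivity, and unit step costs are assumptions.

```python
# Illustrative "relaxed" A*-style grid planner: cells are expanded at most
# once and never re-opened, so the returned cost may be slightly sub-optimal.
import heapq

def relaxed_astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle. Returns path cost to goal or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    g = {start: 0}
    open_list = [(h(start), start)]
    closed = set()
    while open_list:
        _, cur = heapq.heappop(open_list)
        if cur == goal:
            return g[cur]
        if cur in closed:
            continue
        closed.add(cur)
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                if nb not in g:        # relaxed: first g-value kept, never improved
                    g[nb] = g[cur] + 1
                    heapq.heappush(open_list, (g[nb] + h(nb), nb))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(relaxed_astar(grid, (0, 0), (2, 0)))   # cost 6 around the obstacles
```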


Author(s):  
Zahid Raza ◽  
Deo P. Vidyarthi

The Grid is a parallel and distributed computing network system comprising heterogeneous computing resources spread over multiple administrative domains that offers high-throughput computing. Since the Grid operates at a large scale, there is always a possibility of failure, ranging from hardware to software, and the penalty paid for these failures may be very large. The system needs to be tolerant of the various possible failures which, in spite of many precautions, are bound to happen. Replication is a strategy often used to introduce fault tolerance into the system and to ensure successful execution of a job even when some of the computational resources fail. Though replication incurs a heavy cost, a selective degree of replication can offer a good compromise between performance and cost. This chapter proposes a co-scheduler that can be integrated with the main scheduler for the execution of jobs submitted to a computational Grid. The main scheduler may have any performance-optimization criterion; the integration of the co-scheduler adds fault tolerance on top of it. The chapter evaluates the performance of the co-scheduler with a main scheduler designed to minimize the turnaround time of a modular job, introducing module replication to counter the effects of node failures in a Grid. A simulation study reveals that the model works well under various conditions, resulting in a graceful degradation of the scheduler's performance while improving the overall reliability offered to the job.
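A simple calculation, not taken from the chapter, illustrates the trade-off that selective replication exploits: if node failures are independent with probability p, a module replicated on k nodes fails only when all k replicas fail, so the success probability of a modular job grows quickly with the replication degree while the resource cost grows only linearly.

```python
# Illustrative reliability-vs-cost trade-off behind module replication
# (not the chapter's exact model): a module replicated on k nodes completes
# unless all k replicas fail; a modular job needs every module to complete.
def module_success(p_fail, k):
    return 1.0 - p_fail ** k

def job_success(p_fail, k, n_modules):
    return module_success(p_fail, k) ** n_modules

p_fail, n_modules = 0.10, 20
for k in (1, 2, 3):
    print(f"replicas={k}  resource cost ~{k}x  "
          f"job success probability = {job_success(p_fail, k, n_modules):.3f}")
# replicas=1 -> ~0.122, replicas=2 -> ~0.818, replicas=3 -> ~0.980
```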


Energies ◽  
2018 ◽  
Vol 11 (12) ◽  
pp. 3448 ◽  
Author(s):  
Simone Pedrazzi ◽  
Giulio Allesina ◽  
Alberto Muscio

This article shows the influence of an anti-fouling nano-coating on the electrical energy produced by a string of photovoltaic modules. The coating effect was evaluated by comparing the energy produced by two strings of the same PV power plant: one of them was only cleaned, while the other was cleaned and treated with the coating before the monitoring campaign. The PV plant is located in Modena, in northern Italy. A first monitoring campaign of nine days after the treatment shows that the coating increases the energy production of the PV array by about 1.82%. Results indicate that the increase is higher during sunny days than during cloudy days. A second monitoring campaign of the same length, but five months later, shows that the energy gain decreases from 1.82% to 0.69% due to ageing of the coating, which is guaranteed for one year by the manufacturer. A techno-economic analysis shows that at the moment the yearly economic gain is 0.43 € per square meter of panel, while the cost of the treatment is about 1 € per square meter. However, large-scale diffusion could reduce the production cost and thus increase the affordability of the coating.
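Using only the figures quoted above (about 1 € per square meter for the treatment and 0.43 € per square meter of yearly gain), a simple payback estimate, ignoring discounting and coating degradation, looks as follows.

```python
# Simple payback check using the figures quoted in the abstract.
treatment_cost_eur_per_m2 = 1.00   # quoted cost of the coating treatment
yearly_gain_eur_per_m2    = 0.43   # quoted yearly economic gain

simple_payback_years = treatment_cost_eur_per_m2 / yearly_gain_eur_per_m2
print(f"simple payback: {simple_payback_years:.1f} years")   # ~2.3 years
# The coating is only guaranteed for one year, which is why the abstract
# points to cheaper large-scale production as the route to affordability.
```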


2019 ◽  
Vol 141 (11) ◽  
Author(s):  
Ayush Raina ◽  
Christopher McComb ◽  
Jonathan Cagan

Humans as designers have quite versatile problem-solving strategies. Computer agents, on the other hand, can access large-scale computational resources to solve certain design problems. Hence, if agents can learn from human behavior, a synergetic human-agent problem-solving team can be created. This paper presents an approach to extract human design strategies and implicit rules, purely from historical human data, and use them for design generation. A two-step framework that learns to imitate human design strategies from observation is proposed and implemented. This framework makes use of deep learning constructs to learn to generate designs without any explicit information about objectives and performance metrics. The framework is designed to interact with the problem through a visual interface, as humans did when solving the problem. It is trained to imitate a set of human designers by observing their design state sequences without inducing problem-specific modeling bias or extra information about the problem. Furthermore, an end-to-end agent is developed that uses this deep learning framework as its core, in conjunction with image processing, to map pixels to design moves as a mechanism for generating designs. Finally, the designs generated by a computational team of these agents are compared with actual human data for teams solving a truss design problem. Results demonstrate that these agents are able to create feasible and efficient truss designs without guidance, showing that this methodology allows agents to learn effective design strategies.
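A minimal behavior-cloning sketch of the underlying idea is given below: a small convolutional policy maps an image of the current design state to a distribution over discrete design moves and is trained on observed human state-move pairs. The network architecture, image size, and move encoding are assumptions for illustration and do not reproduce the authors' framework.

```python
# Behavior-cloning sketch: a CNN policy maps a rendered design-state image
# to logits over candidate design moves. Sizes and encodings are assumptions.
import torch
import torch.nn as nn

class DesignPolicy(nn.Module):
    def __init__(self, n_moves=64, img_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * (img_size // 4) ** 2
        self.head = nn.Linear(feat_dim, n_moves)

    def forward(self, x):                    # x: (batch, 1, H, W) design-state image
        return self.head(self.features(x))   # logits over candidate design moves

policy = DesignPolicy()
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

states = torch.rand(8, 1, 64, 64)            # stand-in for rendered design states
human_moves = torch.randint(0, 64, (8,))     # stand-in for observed human moves
for _ in range(10):
    optimiser.zero_grad()
    loss = loss_fn(policy(states), human_moves)   # imitate the recorded human move
    loss.backward()
    optimiser.step()
```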


2021 ◽  
Vol 4 ◽  
Author(s):  
Andreas Zeiselmair ◽  
Bernd Steinkopf ◽  
Ulrich Gallersdörfer ◽  
Alexander Bogensperger ◽  
Florian Matthes

The energy system is becoming increasingly decentralized. This development requires integrating and coordinating a rising number of actors and small units in a complex system. Blockchain could provide a base infrastructure for new tools and platforms that address these tasks in various aspects, ranging from dispatch optimization or dynamic load adaptation to (local) market mechanisms. Many of these applications are currently in development and subject to research projects. In decentralized energy markets especially, the optimized allocation of energy products demands complex computation. Combining such computations with distributed ledger technologies leads to bottlenecks and challenges regarding privacy requirements and performance, due to limited storage and computational resources. Verifiable computation techniques promise a solution to these issues. This paper presents an overview of verifiable computation technologies, including trusted oracles, zkSNARKs, and multi-party computation. We further analyze their application in blockchain environments with a focus on energy-related use cases. We evaluate these solution approaches for a specific optimization problem concerning renewable energy certificates and, as a case study, demonstrate an implementation of a Simplex optimization using zkSNARKs. We conclude with an assessment of the applicability of the described verifiable computation techniques and address limitations for large-scale deployment, followed by an outlook on current development trends.
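The snippet below illustrates the general "verify instead of recompute" principle that motivates such techniques, using a plain linear-programming optimality certificate rather than an actual zkSNARK circuit: the prover returns both a primal solution and a dual certificate, and the verifier only checks feasibility and that the two objective values coincide. The function names and problem data are invented for illustration.

```python
# Verify-don't-recompute illustration: the prover solves max c^T x s.t. Ax <= b,
# x >= 0 and also returns a dual solution y; the verifier checks feasibility of
# both and that the objectives match (strong duality), which is far cheaper
# than re-running the Simplex optimisation. This is NOT a zkSNARK.
import numpy as np

def verify_lp_certificate(A, b, c, x, y, tol=1e-8):
    primal_feasible  = np.all(A @ x <= b + tol) and np.all(x >= -tol)
    dual_feasible    = np.all(A.T @ y >= c - tol) and np.all(y >= -tol)
    objectives_match = abs(c @ x - b @ y) <= tol
    return primal_feasible and dual_feasible and objectives_match

# Tiny example: maximise x1 + x2 subject to x1 <= 1, x2 <= 1
A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0])
x_opt = np.array([1.0, 1.0])   # claimed primal optimum from the "prover"
y_opt = np.array([1.0, 1.0])   # accompanying dual certificate
print(verify_lp_certificate(A, b, c, x_opt, y_opt))   # True
```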


Author(s):  
Takuma Oide ◽  
Akiko Takahashi ◽  
Atsushi Takeda ◽  
Takuo Suganuma

To provide stable and continuous network services in the event of large-scale natural disasters, computers must use extremely limited network and computational resources effectively, without imposing additional administrative burdens. The authors propose a P2P Information Sharing System for affected areas based on their structured P2P network, the Well-distribution Algorithm for an Overlay Network (WAON). By applying the WAON framework, the system configures the P2P network autonomously using the remaining nodes and achieves load balancing dynamically, without additional network maintenance costs. The system can therefore perform well in an unstable network environment such as that during a disaster. The authors designed and implemented the system and evaluated its overall behavior and performance in simulations assuming the real scenario of the Great East Japan Earthquake. Results show that the authors' system can efficiently distribute safety confirmation information about victims among the remaining nodes.
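The abstract does not specify the internals of WAON, so the sketch below uses generic consistent hashing only to illustrate the property such a system relies on: when nodes disappear, data keys are re-mapped onto whichever nodes remain without central coordination, and only the keys owned by the failed node move. Node and key names are hypothetical.

```python
# Generic structured-P2P sketch (consistent hashing on a ring); this does not
# reproduce the WAON algorithm, it only shows keys being spread over the
# remaining nodes after a failure without any central re-configuration.
import hashlib
from bisect import bisect_right

def _h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** 32)

def responsible_node(key, nodes):
    """Map a data key to the first live node clockwise on the hash ring."""
    ring = sorted((_h(n), n) for n in nodes)
    pos = bisect_right([p for p, _ in ring], _h(key))
    return ring[pos % len(ring)][1]

nodes = ["node-A", "node-B", "node-C", "node-D"]
keys = [f"safety-record-{i}" for i in range(6)]
print({k: responsible_node(k, nodes) for k in keys})
# If node-B fails, only the keys it owned are re-assigned; the rest stay put.
print({k: responsible_node(k, [n for n in nodes if n != "node-B"]) for k in keys})
```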


Author(s):  
Jannatul Ferdows Nipa ◽  
Md. Hasan Tarek Mondal ◽  
Md Atikul Islam

A straw chopper is a mechanical device used to chop fodder uniformly into small pieces so that it can be mixed with other grasses and fed to livestock. The objective of this research was to design and develop an animal fodder chopping machine affordable to dairy farmers. The machine parts were drawn in AutoCAD software and the construction was carried out in a local workshop. After development of the machine, performance tests were carried out on a farm with fodder commonly grown in Bangladesh (namely straw, grass, and maize). The performance of the developed machine was evaluated in terms of chopping efficiency, machine productivity, and energy consumption. An economic analysis of the straw chopping machine assessed its cost effectiveness for resource-poor farmers. The chopping efficiency and machine productivity varied from 93 to 96% and from 192 to 600 kg·h⁻¹, respectively. The energy consumption during the chopping process ranged between 0.0025 and 0.01 kWh for the different types of fodder. The break-even point of the fodder chopping machine was 3 793 kg of cut straw and the payback period was within one year, depending on use.
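The break-even figure follows from a standard calculation of the kind used in the economic analysis; the sketch below shows the formula with hypothetical cost and margin values chosen only to land near the reported 3 793 kg (the actual cost inputs are not given in the abstract).

```python
# Standard break-even calculation; the machine cost and per-kg margin are
# hypothetical placeholders (the abstract reports a break-even of 3 793 kg
# of cut straw but not the underlying cost figures).
machine_cost  = 20000.0   # hypothetical fixed cost of the chopper (currency units)
margin_per_kg = 5.27      # hypothetical (revenue - operating cost) per kg chopped

break_even_kg = machine_cost / margin_per_kg
print(f"break-even quantity: {break_even_kg:.0f} kg")       # ~3795 kg with these numbers

# The payback period then depends on utilisation, e.g. 20 kg chopped per day:
daily_use_kg = 20.0
print(f"payback: {break_even_kg / daily_use_kg:.0f} days")  # ~190 days, under one year
```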

