System Performance and Cost Modelling in LHC computing

2019 ◽  
Vol 214 ◽  
pp. 03019
Author(s):  
Catherine Biscarat ◽  
Tommaso Boccali ◽  
Daniele Bonacorsi ◽  
Concezio Bozzi ◽  
Davide Costanzo ◽  
...  

The increase in the scale of LHC computing expected for Run 3 and even more so for Run 4 (HL-LHC) over the next ten years will certainly require radical changes to the computing models and the data processing of the LHC experiments. Translating the requirements of the physics programmes into computing resource needs is a complicated process and subject to significant uncertainties. For this reason, WLCG has established a working group to develop methodologies and tools intended to characterise the LHC workloads, better understand their interaction with the computing infrastructure, calculate their cost in terms of resources and expenditure, and assist experiments, sites and the WLCG project in the evaluation of their future choices. This working group started in November 2017 and has about 30 active participants representing experiments and sites. In this contribution we present the activities, the results achieved and the future directions.

2020 ◽  
Vol 245 ◽  
pp. 03014
Author(s):  
Catherine Biscarat ◽  
Tommaso Boccali ◽  
Daniele Bonacorsi ◽  
Concezio Bozzi ◽  
Davide Costanzo ◽  
...  

The increase in the scale of LHC computing during Run 3 and Run 4 (HL-LHC) will certainly require radical changes to the computing models and the data processing of the LHC experiments. The working group established by WLCG and the HEP Software Foundation to investigate all aspects of the cost of computing and how to optimise it has continued producing results and improving our understanding of this process. In particular, experiments have developed more sophisticated ways to calculate their resource needs, and we now have a much more detailed process to calculate infrastructure costs; this includes studies on the impact of HPC- and GPU-based resources on meeting the computing demands. We have also developed and perfected tools to quantitatively study the performance of experiments' workloads, and we are actively collaborating with other activities related to data access, benchmarking and technology cost evolution. In this contribution we present our recent developments and results and outline the directions of future work.


2020 ◽  
Vol 245 ◽  
pp. 04027
Author(s):  
X. Espinal ◽  
S. Jezequel ◽  
M. Schulz ◽  
A. Sciabà ◽  
I. Vukotic ◽  
...  

HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These challenges are as much economic as technical. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model, with respect to resource needs, costs and operational complexity. To better understand these models in depth, analyses of traces of current data accesses and simulations of the impact of new concepts have been carried out. In parallel, evaluations of the required technologies took place, in testbed and production environments at small and large scale. We will give an overview of the activities and results of the working group, describe the models and summarise the results of the technology evaluation, focusing on the impact of storage consolidation in the form of Data Lakes, where the use of streaming caches has emerged as a successful approach to reduce the impact of latency and bandwidth limitations. We will describe the experience with and evaluation of these approaches in different environments and usage scenarios. In addition, we will present the results of the analysis and modelling efforts based on the data access traces of the experiments.


Author(s):  
Bill Karakostas

To improve the overall impact of the Internet of Things (IoT), intelligent capabilities must be developed at the edge of the IoT 'Cloud'. 'Smart' IoT objects must not only communicate with their environment, but also use embedded knowledge to interpret signals and, by making inferences, augment their knowledge of their own state and that of their environment. Thus, intelligent IoT objects must improve their capabilities to make autonomous decisions without reliance on external computing infrastructure. In this chapter, we illustrate the concept of smart autonomous logistics objects with a proof-of-concept prototype built using an embedded version of the Prolog language, running on a Raspberry Pi credit-card-sized single-board computer to which an RFID reader is attached. The intelligent object combines the RFID readings from its environment with embedded knowledge to infer new knowledge about its status. We test the system's performance in a simulated environment consisting of logistics objects.
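The inference loop the chapter describes, an object that merges incoming RFID readings with embedded rules to derive new facts about its own status, can be sketched as follows. This is a minimal illustration in Python rather than the embedded Prolog the prototype actually uses, and all rule names and tag identifiers are hypothetical:

```python
# Hypothetical sketch of a smart logistics object's inference step.
# The real prototype embeds Prolog rules on a Raspberry Pi with an
# attached RFID reader; rules and tags below are illustrative only.

# Embedded knowledge: (location tag, item tag) -> inferred status.
RULES = {
    ("at_dock", "pallet_42"): "loaded",
    ("in_transit", "truck_7"): "moving",
}

class SmartObject:
    def __init__(self):
        self.state = "idle"       # the object's belief about its own status
        self.knowledge = set()    # accumulated inferred facts

    def observe(self, location_tag, item_tag):
        """Combine one RFID reading with embedded rules to infer new knowledge."""
        inferred = RULES.get((location_tag, item_tag))
        if inferred is not None:
            self.knowledge.add(inferred)
            self.state = inferred  # autonomous decision, no external infrastructure
        return self.state

obj = SmartObject()
print(obj.observe("at_dock", "pallet_42"))  # -> loaded
```

The point of the design is that both the rules and the inference step live on the device itself, so the object can update its state from local sensor readings even when disconnected from any cloud backend.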


Author(s):  
J. Abson ◽  
A. Prall ◽  
I. D. P. Wootton

This paper completes the description of the Phoenix system by outlining the additional programs necessary to maintain the data files in a satisfactory condition and prevent them from becoming overfilled. The standards of training required by the operating staff are discussed and an assessment is made of the system performance in terms of cost/benefit. This was achieved by observing the time spent by staff during a period when the throughput of work was accurately measured. From these figures it is possible to estimate the needs of another laboratory. Finally, the continued extension of the computer facilities into other pathology disciplines and the provision of terminals in the hospital is described.


2019 ◽  
Vol 8 (1) ◽  
pp. 48-52
Author(s):  
ELVIANNA ◽  
Nurul Saepul ◽  
Doni Kristianto

This report is based on an analysis of the existing data collection system and the design of a new system at PT. SINAR MUSTIKA BINTAN SPBU Km 19 East Bintan. The analysis shows that manual data processing causes several problems that make the system's performance less efficient. To fix the deficiencies in the system currently in use, a new system was designed to increase work efficiency. The data processing application developed, built using Borland Delphi 7.0, helps solve the problems that existed before. With this new system and application, it is hoped that work processes will become more efficient.


Author(s):  
Naushaba Degani ◽  
Sharon Gushue ◽  
Alex Yurkiewich ◽  
Emmalin Buajitti ◽  
Matthew Kumar ◽  
...  

Introduction
We report on key performance indicators to highlight quality and variation in health care. Given Ontario's diverse geography, we have prioritized improving measurement across the rural-urban continuum. This will improve our ability to discern the impact of geography on health care and health status to inform planning and decision making.

Objectives and Approach
Building on previous work to advance measurement of equity in health care, we struck a technical working group of experts to review methods for stratifying health system performance data by geographic location in the Ontario context. These methods were applied to a set of key performance indicators. The working group's review of the results of this analysis will lead to recommendations for the best method to refine and standardize how geographic location is measured and stratified. This will improve our ability to discern the impact of geography on health system performance and health status for our suite of public-reporting products.

Results
The technical working group identified three methodologies for consideration, all using linked postal code data: Population Centre (POPCTR), Statistical Area Classification (SAC) and a hybrid POPCTR/SAC methodology. These methods were tested against a set of key performance indicators across dimensions of quality including timeliness, effectiveness, population health and health outcomes. The results show that, in the health system performance dimensions of effectiveness and timeliness, as well as for a subset of health outcomes, there is variation in performance across the urban-rural continuum, though not always in a linear way. This may reflect differences in health care access, health risk factors, or sociodemographic and socioeconomic characteristics across the urban-rural continuum. More definitive conclusions and recommendations will be available when the working group meets to review the results.

Conclusion/Implications
Identifying a robust methodology for measuring performance across geographic locations will improve our ability to discern the impact of geography on health care, including where geography may affect access to and effectiveness of services as well as health outcomes. This information will enable better health system planning and decision-making.

