Transparent Integration of Opportunistic Resources into the WLCG Compute Infrastructure

2021 ◽  
Vol 251 ◽  
pp. 02039
Author(s):  
Michael Böhler ◽  
René Caspart ◽  
Max Fischer ◽  
Oliver Freyermuth ◽  
Manuel Giffels ◽  
...  

The inclusion of opportunistic resources, for example from High Performance Computing (HPC) centers or cloud providers, is an important contribution to bridging the gap between existing resources and the future needs of the LHC collaborations, especially for the HL-LHC era. However, the integration of these resources poses new challenges and often needs to happen in a highly dynamic manner. To enable an effective and lightweight integration of these resources, the tools COBalD and TARDIS have been developed at KIT. In this contribution we report on the infrastructure we use to dynamically offer opportunistic resources to collaborations in the World Wide LHC Computing Grid (WLCG). The core components are COBalD/TARDIS, HTCondor, CVMFS and modern virtualization technology. The challenging task of managing the opportunistic resources is performed by COBalD/TARDIS. We showcase the challenges, the solutions employed, and the experience gained in provisioning opportunistic resources from several resource providers, such as university clusters, HPC centers and cloud setups, in a multi-VO environment. This work can serve as a blueprint for provisioning resources from other providers.
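To make the feedback idea concrete, the following is a minimal, self-contained Python sketch of demand-driven provisioning in the spirit of COBalD/TARDIS. All names in it (OpportunisticPool, adjust_demand, the low/high thresholds) are illustrative assumptions for this sketch, not the actual COBalD API.

```python
# A toy model of the supply-demand feedback loop behind COBalD/TARDIS-style
# opportunistic provisioning: grow the pool while it is busy, shrink it when
# it sits idle. Names and thresholds are illustrative assumptions only.

class OpportunisticPool:
    """Toy model of an opportunistic resource pool (e.g. HPC or cloud slots)."""

    def __init__(self):
        self.demand = 1.0       # resources we ask the provider for
        self.utilisation = 0.0  # fraction of granted resources doing work
        self.allocation = 0.0   # fraction of granted resources claimed by jobs


def adjust_demand(pool, low=0.5, high=0.9, step=1.0):
    """One feedback iteration over the pool's observed usage."""
    if pool.allocation >= high:
        pool.demand += step                          # saturated: request more
    elif pool.utilisation <= low:
        pool.demand = max(0.0, pool.demand - step)   # mostly idle: release some
    return pool.demand


# Example: a busy pool triggers scale-up, an idle one scales back down.
pool = OpportunisticPool()
pool.allocation, pool.utilisation = 0.95, 0.90
print(adjust_demand(pool))  # 2.0
pool.allocation, pool.utilisation = 0.10, 0.10
print(adjust_demand(pool))  # 1.0
```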

2020 ◽  
Vol 71 (3) ◽  
pp. 263-267
Author(s):  
M. Serik ◽  
G. Zh. Yerlanova

At present, alongside the rapid worldwide development of computer technology, increasingly effective ways of solving problems of practical importance are being sought, and high performance computing leads this effort. The development of modern society is therefore closely tied to the training of experienced, up-to-date specialists in information technology. This, in turn, depends on the inclusion of new courses in the curriculum and full coverage of these topics in the courses taught. This article analyzes high performance computing courses taught at the experimental sites and abroad and, on that basis, determines the topics of a special course and the content recommended for implementation in the educational process. During the training, the students' competencies in high performance computing were identified.


Author(s):  
Peter V Coveney

We introduce a definition of Grid computing which is adhered to throughout this Theme Issue. We compare the evolution of the World Wide Web with current aspirations for Grid computing and indicate areas that need further research and development before a generally usable Grid infrastructure becomes available. We discuss work that has been done in order to make scientific Grid computing a viable proposition, including the building of Grids, middleware developments, computational steering and visualization. We review science that has been enabled by contemporary computational Grids, and associated progress made through the widening availability of high performance computing.


Author(s):  
Mohammad Samadi Gharajeh

Grid systems and cloud servers are two distributed networks that deliver computing resources (e.g., file storage) to users' services via a large and often global network of computers. Virtualization technology can enhance the efficiency of these networks by dedicating the available resources to multiple execution environments. This chapter describes applications of virtualization technology in grid systems and cloud servers, presenting different aspects of virtualized networks from systematic and pedagogical perspectives. Virtual machine abstraction virtualizes high-performance computing environments to increase service quality. In addition, grid virtualization engines and virtual clusters are used in grid systems to carry out users' services efficiently in virtualized environments. The chapter also explains various virtualization technologies in cloud servers. The evaluation analyzes the performance of high-performance computing and virtualized grid systems in terms of bandwidth, latency, number of nodes, and throughput.
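As a concrete illustration of dedicating available resources to multiple execution environments, the toy Python sketch below splits a host's CPUs among virtual machines in proportion to configured weights. The names (VirtualMachine, share_cpus) are hypothetical; a real deployment would rely on a hypervisor or cloud API rather than this simplified model.

```python
# A simplified model of proportional-share resource dedication among
# execution environments. Hypothetical names; not a hypervisor API.

from dataclasses import dataclass


@dataclass
class VirtualMachine:
    name: str
    weight: int  # relative share of the host, e.g. from a service class


def share_cpus(vms, host_cpus):
    """Split host CPUs among VMs in proportion to their weights."""
    total = sum(vm.weight for vm in vms)
    return {vm.name: host_cpus * vm.weight / total for vm in vms}


# Example: a 32-core host shared by a grid worker and two cloud services.
vms = [VirtualMachine("grid-worker", 4),
       VirtualMachine("web-frontend", 2),
       VirtualMachine("db-backend", 2)]
print(share_cpus(vms, 32))
# {'grid-worker': 16.0, 'web-frontend': 8.0, 'db-backend': 8.0}
```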


ChemInform ◽  
2008 ◽  
Vol 39 (29) ◽  
Author(s):  
Drew Bullard ◽  
Alberto Gobbi ◽  
Matthew A. Lardy ◽  
Charles Perkins ◽  
Zach Little

Author(s):  
Al Geist ◽  
Daniel A Reed

Commodity clusters revolutionized high-performance computing when they first appeared two decades ago. As scale and complexity have grown, new challenges in reliability and systemic resilience, energy efficiency and optimization, and software complexity have emerged that suggest the need for re-evaluation of current approaches. This paper reviews the state of the art and reflects on some of the challenges likely to be faced when building trans-petascale computing systems, using insights and perspectives drawn from operational experience and community debates.


Author(s):  
Geetha J. ◽  
Uday Bhaskar N ◽  
Chenna Reddy P.

Data intensive systems aim to efficiently process "big" data. Several data processing engines, modeled around the MapReduce paradigm, have evolved over the past decade. This article explores Hadoop's MapReduce engine and proposes techniques to obtain a higher level of optimization by borrowing concepts from the world of High Performance Computing; consequently, the power consumed and heat generated are lowered. The article designs a system with a pipelined dataflow, in contrast to the existing unregulated "bursty" flow of network traffic; the ability to carry out Map and Reduce tasks in parallel; and support for modern high-performance computing concepts via Remote Direct Memory Access (RDMA). To substantiate the claimed performance gain, the authors provide an algorithm for RoCE-enabled MapReduce and a mathematical derivation contrasting its runtime with that of vanilla Hadoop, proving mathematically that the proposed system runs 1.67 times faster than the vanilla version of Hadoop.
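The pipelining idea can be illustrated with a short Python sketch in which the reduce stage consumes map output as a stream, rather than waiting for a bursty all-at-once shuffle. This is a conceptual illustration only, assuming a simple word-count workload; it does not model Hadoop internals or the RDMA/RoCE transport.

```python
# Pipelined dataflow in miniature: the map stage is a generator, so the
# reduce stage folds each (key, value) pair into its totals as soon as the
# pair is produced, overlapping Map and Reduce work instead of shuffling
# everything in one burst.

from collections import defaultdict


def map_phase(records):
    """Emit (word, 1) pairs one at a time - output streams, never batches."""
    for record in records:
        for word in record.split():
            yield word, 1


def pipelined_reduce(pairs):
    """Consume the map stream incrementally, keeping running totals."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)


# Example: word count with map and reduce overlapped via the generator pipeline.
lines = ["big data big compute", "data flows in a pipeline"]
print(pipelined_reduce(map_phase(lines)))
# {'big': 2, 'data': 2, 'compute': 1, 'flows': 1, 'in': 1, 'a': 1, 'pipeline': 1}
```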


Author(s):  
А.С. Антонов ◽  
И.В. Афанасьев ◽  
Вл.В. Воеводин

This paper provides an overview of the current state of supercomputer technology. The review is written from different points of view, ranging from the design features of modern computing devices to the architecture of large supercomputer complexes. It includes descriptions of the most powerful supercomputers in the world and in Russia as of early 2021, as well as some less powerful systems that are interesting from other points of view. It also highlights development trends in the supercomputer industry and describes the best-known projects for building future exascale supercomputers.

