Optimizing Provisioning of LCG Software Stacks with Kubernetes

2020 · Vol 245 · pp. 05030
Author(s):  
Richard Bachmann, Gerardo Ganis, Dmitri Konstantinov, Ivan Razumov, Johannes Martin Heinz

The building, testing, and deployment of coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC (Large Hadron Collider) experiments, the CERN Beams department, and data analysis services such as SWAN. These software stacks comprise a large number of packages (Monte Carlo generators, machine learning tools, Python modules, HEP-specific software), all available for several compilers, operating systems, and hardware architectures. Along with several releases per year, development builds are provided each night to allow for quick updates and testing of development versions of packages such as ROOT and Geant4. The nightly builds also make it possible to test new compilers and new configurations. Timely provisioning of these development and release stacks requires a large amount of computing resources, and a dedicated infrastructure, based on the Jenkins continuous integration system, has been developed for this purpose. Resources are taken from the CERN OpenStack cloud; Puppet configurations are used to control the environment on virtual machines, which are either used directly as resource nodes or as hosts for Docker containers. Containers are used increasingly to optimize the usage of our resources and to ensure a consistent build environment while providing quick access to new Linux flavours and specific configurations. To add build resources on demand more easily, we investigated the integration of a CERN-provided Kubernetes cluster into the existing infrastructure. In this contribution we present the status of this prototype, focusing on the new challenges faced, such as the integration of these ephemeral build nodes into CERN’s IT infrastructure, job priority control, and debugging of job failures.
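To illustrate how an ephemeral build node might be requested from a Kubernetes cluster, the minimal sketch below submits a containerized build as a Kubernetes Job using the official Python client. The image name, namespace, and build command are hypothetical placeholders, not the actual CERN configuration.

```python
# Minimal sketch: launch an ephemeral build node as a Kubernetes Job.
# Assumes a reachable cluster; image, namespace, and command are illustrative.
from kubernetes import client, config

def submit_build_job(name: str, image: str, command: list[str]) -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    container = client.V1Container(name=name, image=image, command=command)
    pod_spec = client.V1PodSpec(containers=[container], restart_policy="Never")
    template = client.V1PodTemplateSpec(spec=pod_spec)
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1JobSpec(template=template, backoff_limit=0),
    )
    client.BatchV1Api().create_namespaced_job(namespace="build", body=job)

if __name__ == "__main__":
    # Hypothetical nightly stack build inside a CentOS container.
    submit_build_job("nightly-build", "centos:7", ["/bin/sh", "-c", "run_build.sh"])
```

The Job abstraction fits this use case because the build node exists only for the lifetime of one build and is garbage-collected afterwards.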

Currently, resources in data centers are used very inefficiently: storage systems are loaded on average at about 25%, and servers and network resources at up to 30%. After implementing virtualization, resource utilization in a well-managed server environment rises from roughly 30% to as much as 90%. Virtualization undoubtedly provides many advantages in an infrastructure. One of the most important is the ability to easily create and manage backups of virtual machines, as well as to recover quickly after disasters or accidents. Recovery time is many times shorter than when applications and the operating system are hosted on a physical server, while with proper management the loss of information ranges from zero to minimal. The weekly and daily backups available in Proxmox VE are not always flexible enough to organize backups properly in an IT infrastructure. Most companies and organizations run virtual and physical servers that play a significant role but whose data and operating systems change very rarely. With the existing methods, weekly backups must be set up for such servers to ensure data reliability and fast recovery in the event of a disaster or accident. This paper aims to research and propose approaches that extend the built-in backup process in Proxmox VE with monthly backups. The research also discusses optimizing the backup-creation process to reduce network traffic between nodes and storage, as well as reducing the amount of data kept in storage.
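As a hedged illustration of what a monthly extension might look like (not the paper's actual implementation), the sketch below wraps Proxmox's vzdump tool so that a guest is backed up only once per calendar month; the storage name "backup-nfs", the mount path, and the guest IDs are hypothetical placeholders.

```python
# Illustrative sketch only: a monthly vzdump wrapper for Proxmox VE.
# Storage target, mount path, and VM IDs are hypothetical placeholders.
import datetime
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/mnt/pve/backup-nfs/dump")  # assumed mount point

def monthly_backup_exists(vmid: int, today: datetime.date) -> bool:
    """Check for an archive of this guest created in the current month."""
    stamp = today.strftime("%Y_%m_")  # vzdump archive names embed the date
    return any(stamp in p.name and f"-{vmid}-" in p.name
               for p in BACKUP_DIR.glob("vzdump-*"))

def run_monthly_backup(vmids: list[int]) -> None:
    today = datetime.date.today()
    for vmid in vmids:
        if monthly_backup_exists(vmid, today):
            continue  # guest already backed up this month
        subprocess.run(
            ["vzdump", str(vmid),
             "--mode", "snapshot",        # consistent backup without downtime
             "--compress", "zstd",        # reduce data written to storage
             "--storage", "backup-nfs"],  # hypothetical storage target
            check=True,
        )

if __name__ == "__main__":
    run_monthly_backup([100, 101])  # hypothetical guest IDs
```

Skipping guests that already have a current-month archive is one simple way to cut both network traffic and stored data for rarely changing servers.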


Author(s):  
Ganesh Chandra Deka, Prashanta Kumar Das

Virtualization technology enables organizations to benefit from different services, operating systems, and software without increasing their IT infrastructure liabilities. Virtualization software partitions physical servers into multiple Virtual Machines (VMs), where each VM represents a complete system with a complete computing environment. This chapter discusses the installation and deployment procedures of VMs using the Xen, KVM, and VMware hypervisors. Microsoft Hyper-V is introduced at the end of the chapter.
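A KVM deployment of the kind the chapter describes can be approximated with the libvirt Python bindings; the sketch below defines and starts a minimal guest. The domain XML (guest name, memory size, disk path) is a hypothetical example, not taken from the chapter.

```python
# Minimal sketch: create and start a KVM guest via libvirt's Python bindings.
# The domain definition below is a hypothetical example configuration.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

def create_vm() -> None:
    conn = libvirt.open("qemu:///system")  # connect to the local KVM hypervisor
    try:
        dom = conn.defineXML(DOMAIN_XML)   # register the guest persistently
        dom.create()                       # boot it
        print(f"Started guest: {dom.name()}")
    finally:
        conn.close()

if __name__ == "__main__":
    create_vm()
```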


2019 · Vol 214 · pp. 05020
Author(s):  
Javier Cervantes Villanueva, Gerardo Ganis, Dmitri Konstantinov, Grigorii Latyshev, Pere Mato Vila, ...

Building, testing, and deploying coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC experiments, the CERN Beams Department, and data analysis services such as SWAN. These software stacks include several packages (Grid middleware, Monte Carlo generators, machine learning tools, Python modules), all available for a large number of compilers, operating systems, and hardware architectures. To address this challenge, we developed an infrastructure around a tool called lcgcmake. Dedicated modules are responsible for building the packages and controlling the dependencies in a reliable and scalable way. The distribution relies on a robust and automatic system responsible for building and testing the packages, installing them on CernVM-FS, and packaging the binaries in RPMs and tarballs. This system is orchestrated through Jenkins on build machines provided by the CERN OpenStack facility. The results are published through user-friendly web pages. In this paper we present an overview of these infrastructure tools and policies. We also discuss the role of this effort within the HEP Software Foundation (HSF). Finally, we discuss the evolution of the infrastructure towards container (Docker) technologies and the future directions and challenges of the project.
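The core of reliable dependency control is building packages in topological order; the sketch below is a simplified stand-in for that idea, not lcgcmake's actual implementation, and the package graph shown is invented.

```python
# Illustrative sketch of dependency-ordered package building, in the spirit of
# lcgcmake's dependency control. Not the actual lcgcmake implementation.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical package -> dependencies graph.
DEPENDS = {
    "Python": set(),
    "ROOT": {"Python"},
    "Geant4": set(),
    "experiment-sw": {"ROOT", "Geant4"},
}

def build(package: str) -> None:
    print(f"building {package} ...")  # stand-in for the real build step

def build_stack() -> None:
    # static_order() yields a package only after all its dependencies,
    # so every build sees its prerequisites already installed.
    for package in TopologicalSorter(DEPENDS).static_order():
        build(package)

if __name__ == "__main__":
    build_stack()
```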


Author(s):  
Udayon Misra

The concluding chapter takes up what it sees as some of the major unresolved issues of Partition politics. While it tries to trace the roots of the violence centred on land in several areas of Assam, especially in the Bodo-inhabited region, it shows how issues such as the controversy over the cut-off year for immigrants to acquire citizenship are carry-overs from Partition days. Other major issues discussed include the status of Hindu refugees and displaced persons in the state, the National Register of Citizens, and the larger question of language and Assamese identity. It shows how, with the new wave of immigrants being assimilated into the Assamese nationality, that nationality's transformation is underway, and how this transformation itself throws up new challenges and equations.


Author(s):  
Valentin Tablan, Ian Roberts, Hamish Cunningham, Kalina Bontcheva

Cloud computing is increasingly being regarded as a key enabler of the ‘democratization of science’, because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research: GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security, and fault tolerance. We also include a cost–benefit analysis and usage evaluation.
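As a rough illustration of the data-parallel pattern the platform automates across cloud VMs (this is not the GATECloud.net API), the sketch below fans a stand-in annotator out over a document collection with a local process pool; the annotator itself is a trivial placeholder.

```python
# Rough illustration of data-parallel NLP annotation, the pattern GATECloud.net
# automates across cloud VMs. This is not the platform's actual API.
from concurrent.futures import ProcessPoolExecutor

def annotate(document: str) -> list[str]:
    """Stand-in annotator: a trivial whitespace tokenizer."""
    return document.split()

def annotate_corpus(documents: list[str]) -> list[list[str]]:
    # Each document is independent, so the corpus splits cleanly across
    # workers; the platform performs the analogous split across VMs.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(annotate, documents))

if __name__ == "__main__":
    corpus = ["GATE is an NLP toolkit.", "Cloud platforms scale it out."]
    print(annotate_corpus(corpus))
```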


2020 · Vol 20 (2) · pp. 133-145
Author(s):  
Hyeongho Choi, Euipyeong Lee

A total of 32,000 firefighters from 451 fire departments in 41 prefectures were mobilized to support fire extinguishing and lifesaving in the Hyogo Prefecture Nanbu Earthquake that occurred on January 17, 1995. Based on this experience, the emergency fire response team for disaster response (EFRT) was established by the Fire and Disaster Management Agency (FDMA) on June 30, 1995. When large-scale disasters occur over wide areas, EFRTs in Japan are dispatched to the disaster sites to assist firefighting, either on demand or by order of the commissioner of the FDMA. This study analyzed the background that required establishing the EFRT; the process and details of the legislation; the establishment of basic plans, organizations, and operation plans; assistance dispatch along with the plan for receiving outside support; registration and the plan for reinforcing equipment; the status of training in preparation for assistance dispatch; and activity results, in order to provide basic information for preparing for large-scale disasters and establishing response policies in Korea.


2020 · Vol 14 (1) · pp. 1-22
Author(s):  
Muhammad Akrom Adabi, Neny Muthi'atul Awwaliyah

Abstract
The Qur’an, which has the status of the Muslim holy book, is experiencing "alienation" because it is considered unable to make practical contributions to the various new challenges that arise. The Qur’an and Pancasila, the two important anchors of Indonesian Muslims, are expected to do more than merely keep up with the times: they must truly be able to fill the void and play an active role through their values, bringing progress to Indonesia with a distinctive character in the face of the Industry 4.0 era. This paper reviews the strengthening of the Muslim Hub as a strategy for dealing with Industry 4.0 through the contextualization of the values of the Qur’an and Pancasila. The study rests on two pieces of empirical evidence: first, the monastic orders, whose pious members also achieved remarkable material success; and second, the Protestant sects, which achieved remarkable success in the early phase of modern capitalism. The study uses Max Weber's theory of the Protestant ethic. In The Protestant Ethic and the Spirit of Capitalism, Weber carried out a thorough analysis of the relationship between capitalism and religion, showing how strongly religion shapes the character of its adherents. Extended to a broader field of study, ideology, whether religious or state, plays a powerful role in influencing the behaviour of its followers. Keywords: contextualization, al-Qur’an, Pancasila, Industry 4.0.


Author(s):  
Petr Zach, Martin Pokorný, Jiří Balej, Michal Šturma

Managing a computer classroom is undoubtedly a difficult task for the administrator, who has to prepare virtual operating systems for education. It is quite common that lecturers need to edit a particular machine during the semester, and that is where the main problems can appear. Deploying such changes is not only very time-consuming; virtual machine inconsistencies can also appear during the process. The main part of this paper focuses on the system process diagrams and their pseudocode. First, the machine is created on the remote server by the lecturer or administrator. After proper approval, the machine can be deployed. The lecturer then specifies the date, time, and destinations of the virtual machine deployment. Once these details are approved, the virtual machine is automatically deployed at the specified time. The automatic deployment also includes an initial configuration of the virtual machine at the remote desktop and its post-install configuration (hostname, MAC address, etc.). Once all steps are completed, the process is marked as succeeded. We present an automated solution that makes it possible to easily manage a computer classroom with virtual operating systems. The proposed solution should deliver greater flexibility, more reliability, and faster deployment in comparison with the current solution used in our computer classroom. The proposal can also manipulate already deployed machines to make simple changes (e.g. software updates). The main advantage is the improved automation of the classroom management process.
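As a hedged sketch of the approve-then-schedule-then-deploy workflow described above (class, state, and method names are invented for illustration and are not taken from the paper's pseudocode):

```python
# Illustrative sketch of the approve -> schedule -> deploy workflow described
# above. Class, state, and method names are invented for illustration.
import datetime
from enum import Enum, auto

class State(Enum):
    CREATED = auto()
    APPROVED = auto()
    SCHEDULED = auto()
    DEPLOYED = auto()

class VirtualMachineDeployment:
    def __init__(self, name: str):
        self.name = name
        self.state = State.CREATED
        self.when: datetime.datetime | None = None
        self.destinations: list[str] = []

    def approve(self) -> None:
        self.state = State.APPROVED

    def schedule(self, when: datetime.datetime, destinations: list[str]) -> None:
        assert self.state is State.APPROVED, "deployment must be approved first"
        self.when, self.destinations = when, destinations
        self.state = State.SCHEDULED

    def deploy(self) -> None:
        # Placeholder for copying the image and applying the post-install
        # configuration (hostname, MAC address, ...) on each classroom host.
        for host in self.destinations:
            print(f"deploying {self.name} to {host}")
        self.state = State.DEPLOYED  # marked as succeeded
```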


Author(s):  
Jean K. Chalaby

As media globalization has progressed, transnational media have evolved, and this article contends that a new generation has emerged. The first, which developed in the latter part of the twentieth century, consists of cross-border TV networks and formats. The second is the rise of streaming platforms. During the first generation, the transnational remained a professional practice out of viewers’ reach. With the arrival of the second generation, the transnational has become an everyday mode of media consumption and interaction. Online entertainment services have altered the status of the transnational within TV culture, and what was once at the margins now sits at the core. This article theorizes the notion of the transnational before examining the first and second generations of cross-border media. Considering the advent of streaming, it divides the market into three spaces: subscription video on demand (SVoD), advertising video on demand (AVoD) and video sharing. This article demonstrates how transnational consumption makes SVoD platforms more cosmopolitan than cross-border TV networks. Turning to video-sharing platforms – YouTube in particular – it argues that in the history of TV culture this constitutes a shift in the status of the transnational, turning a professional practice into a popular one performed by millions. Based on interviews, this article shows how international access lowers the threshold of economic viability for content creators, while users get involved in cross-border conversations through memetic videos and comments. It is no longer place but technology that determines the fate of stories and ideas, and internet delivery has loosened the ties between TV culture and national culture more than ever.


2021
Author(s):  
Jérôme Benveniste, Salvatore Dinardo, Christopher Buchhaupt, Michele Scagliola, Marcello Passaro, ...

The scope of this presentation is to feature and provide an update on the ESA G-POD/SARvatore family of altimetry services portfolio for the exploitation of CryoSat-2 and Sentinel-3 data from L1A (FBR) data products up to SAR/SARin Level-2 geophysical data products. At present, the following on-line and on-demand services compose the portfolio:

- The SARvatore (SAR Versatile Altimetric TOolkit for Research & Exploitation) for CryoSat-2 and Sentinel-3 services, developed by the Altimetry Team in the R&D division at ESA-ESRIN. These processor prototypes are versatile and allow users to customize and adapt the processing at L1b and L2 to their specific requirements by setting a list of configurable options. The scope is to provide users with specific processing options not available in the operational processing chains (e.g. range walk correction, stack sub-setting, extended receiving window, zero padding, high posting rate and burst weighting at L1b; SAMOSA+, SAMOSA++ and ALES+ SAR retrackers at L2). A Join & Share Forum (https://wiki.services.eoportal.org/tiki-custom_home.php) allows users to post questions and report issues. A data repository is also available to the community to avoid redundant reprocessing of already processed data (https://wiki.services.eoportal.org/tiki-index.php?page=SARvatore+Data+Repository&highlight=repository).

- The TUDaBo SAR-RDSAR (Technical University Darmstadt – University Bonn SAR-Reduced SAR) for CryoSat-2 and Sentinel-3 service. It allows users to generate reduced SAR, unfocused SAR and LRMC data. Several configurable L1b and L2 processing options and retrackers (BMLE3, SINC2, TALES, SINCS) are available. The processor will be extended during an additional activity related to the ESA HYDROCOASTAL project (https://www.satoc.eu/projects/hydrocoastal/) to account, in the open ocean, for the vertical motion of the wave particles (VMWP) in unfocused SAR and in a simplified form of the fully focused SAR called here Low Resolution Range Cell Migration Correction-Focused (LRMC-F).

- The ALES+ SAR for CryoSat-2 and Sentinel-3 service. It allows users to process official L1b data and produces L2 NetCDF products by applying the empirical ALES+ SAR subwaveform retracker, including a dedicated SSB solution, developed by the Technische Universität München in the frame of the ESA Sea Level CCI (http://www.esa-sealevel-cci.org/) and BALTIC+ SEAL (http://balticseal.eu/) projects.

- The Aresys Fully Focused SAR for CryoSat-2 service. Currently under development, it will provide the capability to produce CS-2 FF-SAR L1b products thanks to the Aresys 2D transformed frequency domain AREALT-FF1 processor prototype. Output products will also include geophysical corrections and threshold peak and ALES-like subwaveform retracker estimates.

The G-POD graphical interface allows users to select, in all the services, a geographical area of interest within the time frame related to the L1A (FBR) and L1b data products available in the service catalogue. After task submission, users can follow the status of the processing in real time. The output data products are generated in standard NetCDF format and are therefore compatible with the multi-mission Broadview Radar Altimetry Toolbox (BRAT, http://www.altimetry.info) and typical tools. Services are open and free of charge (supported by ESA) for worldwide scientific applications and available, after registration and activation (to be requested for each chosen service at [email protected]), at https://gpod.eo.esa.int.
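Because the services deliver standard NetCDF products, a downloaded file can be inspected with common tooling; the minimal sketch below uses the netCDF4 Python library, with a hypothetical file name and variable name.

```python
# Minimal sketch: inspect a downloaded G-POD NetCDF product with the netCDF4
# library. File and variable names below are hypothetical placeholders.
from netCDF4 import Dataset

with Dataset("output_L2_product.nc") as nc:  # hypothetical file name
    print(nc.variables.keys())               # list available geophysical fields
    ssh = nc.variables["ssh"][:]              # hypothetical variable name
    print(ssh.mean())
```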

