Expanding the Capabilities for Backups of a PaaS-Type Virtualization System

Currently, resources in data centers are used extremely inefficiently. Storage systems are loaded on average at about 25%, and servers and network resources at up to 30%. After implementing virtualization, the resource load level in a well-managed server environment increases from 30% to 90%. Virtualization undoubtedly provides many advantages in an infrastructure. One of the most important is the ability to easily create and manage backups of virtual machines, as well as to recover quickly after disasters or accidents. Recovery time is many times shorter than when applications and the operating system are hosted on a physical server, while information loss, with proper management, ranges from zero to minimal. The weekly and daily backups available in Proxmox VE are not always flexible enough to organize backups properly in an IT infrastructure. Most companies and organizations operate virtual and physical servers that play a significant role but whose data and operating systems change very rarely. With the existing methods, weekly backups have to be configured for such servers to ensure data reliability and quick recovery in the event of a disaster or accident. The paper aims to research and propose approaches that can extend the built-in backup process of Proxmox VE by adding monthly backups. The research discusses optimizing the backup creation process to reduce network traffic between nodes and storage, as well as optimizing the data kept in storage.
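As a minimal sketch of how a monthly cycle could be bolted onto the stock daily/weekly scheduler, the Python wrapper below (an illustration, not the authors' implementation) invokes Proxmox VE's vzdump tool for a fixed set of VMs; the VM IDs and the storage name are placeholders, and the script is assumed to be triggered by a cron entry on a Proxmox VE node.

```python
import subprocess
from datetime import date

# Hypothetical monthly-backup wrapper around Proxmox VE's vzdump CLI.
# VM IDs and the storage name "backup-store" are placeholders.
MONTHLY_VMIDS = ["100", "101"]
STORAGE = "backup-store"


def run_monthly_backup() -> None:
    """Run a snapshot-mode vzdump for each configured VM."""
    for vmid in MONTHLY_VMIDS:
        subprocess.run(
            ["vzdump", vmid,
             "--mode", "snapshot",   # back up without stopping the guest
             "--compress", "zstd",   # reduce traffic towards the backup storage
             "--storage", STORAGE],
            check=True,
        )


if __name__ == "__main__":
    # Guard so a more frequent scheduler still produces only one backup per month.
    if date.today().day == 1:
        run_monthly_backup()
```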

2021 ◽ Vol 1 ◽ pp. 17-23
Author(s): Yu.M. Lysetskyi, S.V. Kozachenko

Every year the amount of generated data grows exponentially, which entails an increase in both the number and the capacity of data storage systems. The highest capacity is required for systems used to store backups and archives, file storages with shared access, testing and development environments, virtual machine storages, and corporate or public web services. To solve such tasks, manufacturers today offer three types of storage systems: block and file storages, which have already become a standard for implementing IT infrastructures, and software-defined storage systems. The latter make it possible to create data storages on non-specialized equipment, such as a group of x86-64 server nodes managed by general-purpose operating systems. The main feature of software-defined data storages is the transfer of storage functions from the hardware level to the software level, where these functions are defined not by the physical features of the hardware but by the software selected for solving specific tasks. Today there are three main technologies characterized by a scalable architecture that allows efficiency and storage volume to be increased by adding new nodes to a single pool: Ceph, DELL EMC VxFlex OS, and HP StoreVirtual VSA. Software-defined data storages have the following advantages: fault tolerance, efficiency, flexibility and economy. Their utilization allows increasing the efficiency of an IT infrastructure and reducing its maintenance costs; building a hybrid infrastructure that uses internal and external cloud resources; improving the efficiency of both services and users by providing reliable connectivity through the most convenient devices; and building a portal as a single point of control for services and resources.
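To make the "storage functions in software" idea concrete, the fragment below stores and reads back an object in a Ceph cluster through the librados Python bindings; the configuration file path and pool name are placeholders, offered as an assumed illustration rather than part of the survey.

```python
import rados

# Illustrative use of Ceph's librados Python bindings: all storage operations
# go through software APIs running on commodity x86-64 nodes. The configuration
# file path and the pool name "demo-pool" are placeholders.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

stats = cluster.get_cluster_stats()  # pooled capacity grows as nodes are added
print("cluster capacity (kB):", stats["kb"], "used (kB):", stats["kb_used"])

ioctx = cluster.open_ioctx("demo-pool")
ioctx.write_full("greeting", b"hello from software-defined storage")
print(ioctx.read("greeting"))

ioctx.close()
cluster.shutdown()
```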


2020 ◽ Vol 245 ◽ pp. 05030
Author(s): Richard Bachmann, Gerardo Ganis, Dmitri Konstantinov, Ivan Razumov, Johannes Martin Heinz

The building, testing and deployment of coherent large software stacks is very challenging, in particular when they consist of the diverse set of packages required by the LHC (Large Hadron Collider) experiments, the CERN Beams department and data analysis services such as SWAN. These software stacks comprise a large number of packages (Monte Carlo generators, machine learning tools, Python modules, HEP (High Energy Physics) specific software), all available for several compilers, operating systems and hardware architectures. Along with several releases per year, development builds are provided each night to allow for quick updates and testing of development versions of packages such as ROOT, Geant4, etc. The nightly builds also make it possible to test new compilers and new configurations. Timely provisioning of these development and release stacks requires a large amount of computing resources, and a dedicated infrastructure, based on the Jenkins continuous integration system, has been developed for this purpose. Resources are taken from the CERN OpenStack cloud; Puppet configurations are used to control the environment on virtual machines, which are either used directly as resource nodes or as hosts for Docker containers. Containers are used more and more to optimize the usage of our resources and to ensure a consistent build environment, while providing quick access to new Linux flavours and specific configurations. In order to add build resources on demand more easily, we investigated the integration of a CERN-provided Kubernetes cluster into the existing infrastructure. In this contribution we present the status of this prototype, focusing on the new challenges faced, such as the integration of these ephemeral build nodes into CERN’s IT infrastructure, job priority control, and debugging of job failures.
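As an illustration of what provisioning an ephemeral build node on a Kubernetes cluster could look like, the sketch below uses the official Python client; the image name, namespace, command and resource requests are placeholder values rather than CERN's actual configuration.

```python
from kubernetes import client, config

# Illustrative only: launch a short-lived build pod on an existing cluster.
# Image, namespace, command and resource requests are placeholders.
config.load_kube_config()  # use the local kubeconfig credentials
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="ephemeral-build-node",
        labels={"role": "build"},
    ),
    spec=client.V1PodSpec(
        restart_policy="Never",  # one-shot build job, not a long-lived service
        containers=[
            client.V1Container(
                name="builder",
                image="registry.example.org/stack-builder:latest",
                command=["/bin/sh", "-c", "./run-nightly-build.sh"],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "8Gi"},
                ),
            )
        ],
    ),
)

core.create_namespaced_pod(namespace="build", body=pod)
print("Submitted build pod:", pod.metadata.name)
```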


Author(s): Ganesh Chandra Deka, Prashanta Kumar Das

Virtualization technology enables organizations to take advantage of different services, operating systems, and software without increasing their IT infrastructure liabilities. Virtualization software partitions physical servers into multiple Virtual Machines (VMs), where each VM represents a complete system with its own computing environment. This chapter discusses the installation and deployment procedures of VMs using the Xen, KVM, and VMware hypervisors. Microsoft Hyper-V is introduced at the end of the chapter.
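As a small example for the KVM case, the sketch below defines and starts a guest through the libvirt Python bindings; the domain XML is deliberately minimal and the disk image path is a placeholder, so it illustrates the general procedure rather than the chapter's exact steps.

```python
import libvirt

# Minimal, illustrative KVM domain definition; the disk image path and
# resource sizes are placeholders.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local KVM/QEMU hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest as a persistent domain
dom.create()                           # boot the guest
print("Started domain:", dom.name(), "active:", bool(dom.isActive()))
conn.close()
```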


Author(s): JYOTHI GANIG B S, RAMESH NAMBURI

Data centers manage complex server environments, including physical and virtual machines, across a wide variety of platforms and often in geographically dispersed locations. Information Technology managers are responsible for ensuring that servers in these increasingly complex environments are properly configured and monitored throughout the IT life cycle. They also face the challenge of managing physical and virtual environments side by side, since both must be centralized, optimized and maintained. If the variation and complexity can be taken out of a process to make it more consistent, it can be automated. Through the use of virtual provisioning software, provisioning and re-purposing of infrastructure will become increasingly automatic. Staff will physically rack once, cable once, and thereafter (remotely) reconfigure repeatedly, effortlessly, as needed. An automatic infrastructure will rapidly change which servers are running what software and how those servers are connected to network and storage. It will re-purpose machines according to the real-time demands of the business. It will enable capacity to be "dialed up" or "dialed down". And it will bring up a failed server on new hardware, with the same network and storage access and addressing, within minutes, all without needing to make physical machine, cable, LAN connection or SAN access changes.
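A purely illustrative sketch of such a "dial up / dial down" control loop is shown below; the pool class, thresholds and server names are hypothetical and demonstrate only the shape of the automation, not any specific provisioning product.

```python
import random

# Purely illustrative "dial up / dial down" loop: re-purpose spare servers
# between workload pools based on observed utilization. Pool names, server
# names and thresholds are hypothetical.
SCALE_UP = 0.80    # add a server when average utilization exceeds 80%
SCALE_DOWN = 0.30  # release a server when it falls below 30%


class Pool:
    def __init__(self, name, servers, min_size=1):
        self.name, self.servers, self.min_size = name, servers, min_size

    def utilization(self):
        # Stand-in for a query against a real monitoring system.
        return random.uniform(0.10, 0.95)


def rebalance(pools, spare):
    for pool in pools:
        util = pool.utilization()
        if util > SCALE_UP and spare:
            pool.servers.append(spare.pop())   # reimage + attach LAN/SAN profiles
            print(f"{pool.name}: scaled up at {util:.0%}, now {len(pool.servers)} servers")
        elif util < SCALE_DOWN and len(pool.servers) > pool.min_size:
            spare.append(pool.servers.pop())   # drain and return to the spare pool
            print(f"{pool.name}: scaled down at {util:.0%}, now {len(pool.servers)} servers")


if __name__ == "__main__":
    spare_servers = ["blade-07", "blade-08"]
    pools = [Pool("web", ["blade-01", "blade-02"]), Pool("batch", ["blade-03"])]
    rebalance(pools, spare_servers)
```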


2017 ◽ Vol 26 (1) ◽ pp. 113-128
Author(s): Gamal Eldin I. Selim, Mohamed A. El-Rashidy, Nawal A. El-Fishawy

Informatics ◽ 2021 ◽ Vol 8 (1) ◽ pp. 13
Author(s): Konstantinos Papadakis-Vlachopapadopoulos, Ioannis Dimolitsas, Dimitrios Dechouniotis, Eirini Eleni Tsiropoulou, Ioanna Roussaki, ...

With the advent of 5G verticals and the Internet of Things paradigm, Edge Computing has emerged as the dominant service delivery architecture, placing augmented computing resources in the proximity of end users. The resource orchestration of edge clouds relies on the concept of network slicing, which provides logically isolated computing and network resources. However, although there is significant progress on automating resource orchestration within a single cloud or edge cloud datacenter, orchestration across multiple infrastructure or administrative domains is still an open challenge. Towards exploiting the network service marketplace to its full capacity, while remaining aligned with the ETSI Network Function Virtualization architecture, this article proposes a novel Blockchain-based service orchestrator that leverages the automation capabilities of smart contracts to establish cross-service communication between network slices of different tenants. In particular, we introduce a multi-tier architecture of a Blockchain-based network marketplace and design the lifecycle of the cross-service orchestration. For the evaluation of the proposed approach, we set up cross-service communication in an edge cloud and demonstrate that the orchestration overhead is lower than that of other cross-service solutions.
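As a hedged sketch of how a tenant-side client might talk to such an orchestration contract, the fragment below uses web3.py; the contract address, ABI and function name (requestCrossServiceLink) are assumptions for illustration and do not reflect the paper's actual interface.

```python
from web3 import Web3

# Illustrative only: a tenant requesting a cross-slice link through a
# hypothetical orchestration smart contract via web3.py.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # placeholder node endpoint

ORCHESTRATOR_ADDR = "0x0000000000000000000000000000000000000000"  # placeholder address
ORCHESTRATOR_ABI = [{
    "name": "requestCrossServiceLink",  # hypothetical contract function
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [{"name": "sliceA", "type": "string"},
               {"name": "sliceB", "type": "string"}],
    "outputs": [],
}]

contract = w3.eth.contract(address=ORCHESTRATOR_ADDR, abi=ORCHESTRATOR_ABI)

# Tenant A requests a link between its slice and tenant B's slice; the contract
# is expected to record the agreement and trigger the orchestration workflow.
tx_hash = contract.functions.requestCrossServiceLink(
    "slice-tenant-a", "slice-tenant-b"
).transact({"from": w3.eth.accounts[0]})

receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Cross-service request recorded in block", receipt.blockNumber)
```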


2021 ◽ Vol 1088 (1) ◽ pp. 012076
Author(s): Titi Andriani, Muhammad Hidayatullah, Dekky Saputra, Shinta Esabella, G Gunawan

2015 ◽ Vol 19 (4) ◽ pp. 537-540
Author(s): Bin Wang, Xiaolin Chang, Jiqiang Liu

2014 ◽ Vol 1046 ◽ pp. 508-511
Author(s): Jian Rong Zhu, Yi Zhuang, Jing Li, Wei Zhu

Reducing energy consumption while improving the utility of a data center is one of the key problems in the cloud computing environment. In this paper, we use energy consumption and data center utility as objective functions to set up a virtual machine scheduling model based on multi-objective optimization (VMSA-MOP), and we design a virtual machine scheduling algorithm based on NSGA-II to solve the model. Experimental results show that, compared with other virtual machine scheduling algorithms, our algorithm can obtain relatively optimal scheduling results.
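A compact, hedged re-implementation sketch of the general idea (not the paper's VMSA-MOP model) is shown below, using the pymoo library's NSGA-II to trade off a toy energy estimate against a toy utility measure for VM placement; all loads, power figures and sizes are invented for illustration.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

# Toy problem: place N_VMS virtual machines onto N_HOSTS hosts, minimizing an
# invented energy estimate and maximizing an invented utility (encoded as
# minimizing its negative). Not the paper's actual VMSA-MOP formulation.
N_VMS, N_HOSTS = 12, 4
VM_LOAD = np.random.default_rng(0).uniform(0.05, 0.30, size=N_VMS)  # fake CPU demand


class VmPlacement(ElementwiseProblem):
    def __init__(self):
        # One decision variable per VM: the index of its host, relaxed to a
        # float and floored during evaluation.
        super().__init__(n_var=N_VMS, n_obj=2, xl=0.0, xu=N_HOSTS - 1e-6)

    def _evaluate(self, x, out, *args, **kwargs):
        hosts = np.floor(x).astype(int)
        load = np.zeros(N_HOSTS)
        for vm, h in enumerate(hosts):
            load[h] += VM_LOAD[vm]
        active = load > 0
        energy = np.sum(active * (100 + 150 * np.minimum(load, 1.0)))  # idle + dynamic power
        utility = np.mean(np.minimum(load, 1.0)[active]) if active.any() else 0.0
        out["F"] = [energy, -utility]


res = minimize(VmPlacement(), NSGA2(pop_size=40), ("n_gen", 60), seed=1)
print("Pareto front samples (energy, -utility):\n", res.F[:5])
```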


Materials ◽ 2021 ◽ Vol 14 (8) ◽ pp. 2077
Author(s): Oliver Zeman, Michael Schwenn, Martin Granig, Konrad Bergmeister

The assessment of already installed anchorages for a possible exceedance of the service load level is a question that is gaining more and more importance, especially in building maintenance. Bonded anchors are of particular interest here, since a capacity reduction or a load exceedance can cause damage to the bond behavior between mortar and concrete that needs to be detected. This article investigates the extent to which ultrasonic methods can be used to make a prediction about the condition of anchorages in concrete and about their load history, and a promising innovative assessment method has been developed. The challenges in carrying out the experimental investigations are the arrangement of the transducers, the design of the test set-up and the applicability of direct, indirect or semidirect ultrasonic transmission. The experimental investigations carried out on a test concrete mix and a bonded anchor system show that damage to the concrete structure can be detected by means of ultrasound. The results indicate the formation of cracks and therefore a weakening of the response determined by means of direct, indirect and semidirect ultrasonic transmission. However, for application under non-laboratory conditions and on anchors with an unknown load history, calibration with a reference anchor and identification of the maximum load are required. This enables the other loaded anchors to be referenced to the unloaded condition and allows an estimation of the load history of individual anchors.

