An Architecture Model for a Distributed Virtualization System

2019 ◽  
Vol 19 (2) ◽  
pp. e17 ◽  
Author(s):  
Pablo Andrés Pessolani

The thesis presents an architecture model for a Distributed Virtualization System (DVS), which can expand a virtual execution environment from a single physical machine to several nodes of a cluster. With current virtualization technologies, the computing power and resource usage of Virtual Machines (or Containers) are limited to the physical machine where they run. To deliver high levels of performance and scalability, cloud applications are usually partitioned across several Virtual Machines (or Containers) located on different nodes of a virtualization cluster. Developers often adopt that processing model because the same instance of the operating system is not available on each node where their components run. The proposed architecture model suits current trends in software development because it is inherently distributed. It combines and integrates Virtualization and Distributed Operating Systems technologies, taking the benefits of both worlds and providing the same isolated instance of a Virtual Operating System on each cluster node. Although it requires changes to existing operating systems, thousands of legacy applications would not require modification to obtain its benefits. A Distributed Virtualization System is suitable for delivering high-performance cloud services with provider-class features such as high availability, replication, migration, and load balancing. Furthermore, it is able to run several isolated instances of different guest Virtual Operating Systems concurrently, allocating a subset of nodes to each instance and sharing nodes between them. Currently, a prototype runs on a cluster of commodity hardware with two kinds of Virtual Operating Systems tailored for internet services (a web server) as a proof of concept.
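
To make the node-allocation idea concrete, here is a minimal Python sketch, entirely illustrative and not taken from the thesis, of a DVS-style cluster that pins each Virtual Operating System instance to a subset of nodes while allowing instances to share nodes; all class and method names are hypothetical.

```python
# Illustrative sketch (not from the thesis): a Distributed Virtualization
# System allocates cluster nodes to isolated Virtual Operating System (VOS)
# instances, and instances may share nodes. All names are hypothetical.

class Cluster:
    def __init__(self, node_ids):
        self.nodes = set(node_ids)
        self.allocations = {}  # vos_name -> set of node ids

    def allocate(self, vos_name, requested_nodes):
        """Pin a VOS instance to a subset of cluster nodes."""
        subset = set(requested_nodes)
        if not subset <= self.nodes:
            raise ValueError(f"unknown nodes: {subset - self.nodes}")
        self.allocations[vos_name] = subset

    def shared_nodes(self, a, b):
        """Nodes hosting both VOS instances (sharing is allowed by design)."""
        return self.allocations[a] & self.allocations[b]

cluster = Cluster(range(8))
cluster.allocate("web-vos", {0, 1, 2, 3})
cluster.allocate("db-vos", {3, 4, 5})
print(cluster.shared_nodes("web-vos", "db-vos"))  # {3}
```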

2019 ◽  
Vol 8 (10) ◽  
pp. 427
Author(s):  
Li ◽  
Wang ◽  
Guan ◽  
Xie ◽  
Huang ◽  
...  

With the diversification of terminal equipment and operating systems, higher demands are placed on map rendering performance. A traditional map rendering engine relies on the graphics library of its host operating system, which leads to problems such as the inability to work across operating systems, low rendering performance, and inconsistent rendering styles. With the development of hardware, graphics processing units (GPUs) have appeared on a variety of platforms, and how to use GPU hardware to improve map rendering performance has become a critical challenge. To address these problems, this study proposes a cross-platform, high-performance map rendering engine (Graphics Library engine, GL engine) that uses mask drawing technology and texture dictionary text rendering technology. Built on the OpenGL graphics library, it can be used on different hardware platforms and different operating systems, and it maintains a consistent map rendering style across platforms. Benchmark results show that GL engine performs 1.75 times and 1.54 times better than a general map rendering engine on iOS and Android, respectively, and that its vector tile rendering is 11.89 times and 9.52 times faster than Mapbox rendering on iOS and Android, respectively.
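
The texture dictionary technique can be pictured as a glyph atlas cache. The following Python sketch is my illustration of the idea, not GL engine code: each glyph is assigned an atlas cell once and reused for every later label; the atlas size, cell size, and all names are assumptions.

```python
# Minimal sketch of "texture dictionary" text rendering: rasterize each glyph
# once into a cell of a texture atlas, cache its coordinates, and reuse them
# for every subsequent label instead of re-rasterizing the text.

class GlyphAtlas:
    def __init__(self, atlas_size=1024, cell=32):
        self.atlas_size = atlas_size
        self.cell = cell
        self.slots = {}        # char -> (u, v) top-left corner of its cell
        self.next_index = 0

    def lookup(self, char):
        """Return cached atlas coordinates, rasterizing only on first use."""
        if char not in self.slots:
            per_row = self.atlas_size // self.cell
            idx = self.next_index
            self.slots[char] = ((idx % per_row) * self.cell,
                                (idx // per_row) * self.cell)
            self.next_index += 1
            # A real engine would rasterize the glyph bitmap here and upload
            # it into this atlas cell (e.g., via glTexSubImage2D in OpenGL).
        return self.slots[char]

atlas = GlyphAtlas()
coords = [atlas.lookup(c) for c in "Main St"]  # first pass fills the cache
coords = [atlas.lookup(c) for c in "Main St"]  # second pass is lookups only
print(coords)
```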


Author(s):  
А.В. Баранов ◽  
Е.А. Киселёв

Building cloud services for high-performance computing is difficult, first because of the high overhead of virtualization and, second, because of the specifics of job and resource management systems in scientific supercomputer centers. This paper considers an approach to building PaaS and SaaS cloud services based on the joint operation of the Proxmox VE cloud platform and the parallel job management system used as the resource manager at the Joint Supercomputer Center of the Russian Academy of Sciences. Purpose. The purpose of this paper is to develop methods and technologies for building high-performance computing cloud services in scientific supercomputer centers. Methodology. To build a cloud environment for high-performance scientific computing (HPC), a corresponding three-level model and a method of combining flows of supercomputer jobs of various types were applied. Results. A technology for HPC cloud services based on the free Proxmox VE software platform has been developed. The Proxmox VE platform has been integrated with the domestic supercomputer job management system called SUPPZ. Experimental estimates of the overhead introduced into the high-performance computing process by the Proxmox components are obtained. Findings. An approach to integrating a supercomputer job management system with a virtualization platform is proposed. The presented approach represents supercomputer jobs as virtual machines or containers. Using the Proxmox VE platform as an example, the influence of a virtual environment on the execution time of parallel programs is investigated experimentally. The applicability of the proposed approach to building PaaS and SaaS cloud services in shared scientific supercomputing centers is substantiated for the class of applications for which the overhead introduced by the Proxmox components is acceptable.
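
As a rough illustration of the integration, the sketch below submits a job as an LXC container through the Proxmox VE REST API. The endpoint and parameter names follow the publicly documented Proxmox API as I understand it, but the host, token, template, and sizing values are placeholders, and the SUPPZ side of the integration is omitted entirely.

```python
# Hypothetical sketch: represent a supercomputer job as an LXC container
# created via the Proxmox VE REST API. All concrete values are placeholders.

import requests

PROXMOX = "https://proxmox.example.org:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=root@pam!suppz=<token-uuid>"}

def submit_job_as_container(node, vmid, cores, memory_mb):
    """Create a container sized to the job's resource request."""
    payload = {
        "vmid": vmid,
        "ostemplate": "local:vztmpl/job-runtime.tar.gz",  # placeholder image
        "cores": cores,
        "memory": memory_mb,
    }
    r = requests.post(f"{PROXMOX}/nodes/{node}/lxc",
                      headers=HEADERS, data=payload, verify=False)
    r.raise_for_status()
    return r.json()["data"]  # task id of the asynchronous creation

# e.g. a 4-core, 8 GiB job placed by the resource manager onto node "n01":
# submit_job_as_container("n01", vmid=1101, cores=4, memory_mb=8192)
```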


Author(s):  
Frederick M. Proctor ◽  
Justin R. Hibbits

General-purpose computers are increasingly being used for serious control applications, due to their prevalence, low cost and high performance. Real-time operating systems are available for PCs that overcome the nondeterminism inherent in desktop operating systems. Depending on the timing requirements, however, many users can get by with a non-real-time operating system. This paper discusses timing techniques applicable to non-real-time operating systems, using Linux as an example, and compares them with the performance that can be obtained with true real-time OSes.
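
One such technique is to sleep toward absolute deadlines rather than for relative intervals, so that scheduling jitter does not accumulate across cycles. The Python sketch below is a generic illustration of that pattern, not the paper's code (the paper discusses Linux timing APIs; the 10 ms period and all names here are assumptions).

```python
# Sketch of a periodic control loop on a non-real-time OS: advance an
# absolute deadline each cycle and sleep until it, so per-cycle sleep jitter
# does not drift the schedule. Also track the worst observed lateness.

import time

PERIOD_NS = 10_000_000  # 10 ms control period (illustrative)

def control_step():
    pass  # read sensors, compute output, write actuators

def periodic_loop(cycles=100):
    deadline = time.monotonic_ns()
    worst_lateness = 0
    for _ in range(cycles):
        deadline += PERIOD_NS
        # Sleep until the absolute deadline, not for a relative interval.
        remaining = deadline - time.monotonic_ns()
        if remaining > 0:
            time.sleep(remaining / 1e9)
        worst_lateness = max(worst_lateness,
                             time.monotonic_ns() - deadline)
        control_step()
    return worst_lateness / 1e6  # milliseconds

print(f"worst observed lateness: {periodic_loop():.3f} ms")
```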


Author(s):  
Ram Prasad Patnaik ◽  
Dambaru Dhara Nahak

Virtualization is a technology that transforms today's powerful computer hardware, designed to run a single operating system and a single application, into a platform that runs multiple virtual machines, each with its own independent operating system. We often observe that server resources are underutilized. Virtualization allows us to efficiently utilize the resources available on a physical machine. In a virtualized environment, different virtual machines can run different guest operating systems (different versions of Windows, Linux, Solaris, etc.). The most important concept to understand in virtualization is that the operating systems of the virtual machines are independent of the physical server's operating system. This paper is an attempt to illustrate the concept of virtualization and its implementation by means of a live case study, which we implemented during the development of a leading ETL tool for a client. The case study elaborates the implementation details of virtualized DB clustering and server consolidation.


Data mining is a vital process used in many leading technologies of this information era, and Eclat growth is one of the best-performing data mining algorithms. This work is intended to create a streamlined interface for the Eclat growth algorithm to run in multi-core processor-based cloud computing environments. Recent improvements in processor manufacturing make it possible to build multi-core high-performance Central Processing Units (CPUs) and Graphics Processing Units (GPUs), and many cloud services already provide access to virtual machines with such high-powered processors. The process of blending these technologies with Eclat growth is proposed here under the name "Multi-core Processing Cloud Eclat Growth" (MPCEG), to achieve higher processing speeds without compromising standard data mining metrics such as Accuracy, Precision, Recall, and F1-Score. New procedures for cloud parallel processing, GPU utilization, annihilation of floating-point arithmetic errors by fixed-point replacement in GPUs, and hierarchical offloading aggregation are introduced in the construction of the proposed MPCEG.
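
For readers unfamiliar with Eclat, the following compact Python sketch, my illustration rather than the MPCEG implementation, shows the core idea the paper parallelizes: a vertical item-to-tid-set layout in which the support of an itemset is computed by intersecting tid-sets. A multi-core variant would fan the recursive calls out across processes or GPU kernels.

```python
# Minimal Eclat: vertical database (item -> set of transaction ids) plus
# depth-first enumeration of frequent itemsets via tid-set intersection.

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

# Build the vertical layout: item -> tid-set.
tidsets = {}
for tid, items in enumerate(transactions):
    for item in items:
        tidsets.setdefault(item, set()).add(tid)

def eclat(prefix, candidates, min_support, results):
    """Recursively extend `prefix` with items whose tid-sets stay frequent."""
    for i, (item, tids) in enumerate(candidates):
        if len(tids) >= min_support:
            itemset = prefix | {item}
            results[frozenset(itemset)] = len(tids)
            # Extend only with later items to avoid duplicate itemsets.
            deeper = [(other, tids & otids)
                      for other, otids in candidates[i + 1:]]
            eclat(itemset, deeper, min_support, results)

results = {}
eclat(set(), sorted(tidsets.items()), min_support=2, results=results)
print(results)  # e.g. frozenset({'bread', 'milk'}) -> 2, frozenset({'milk'}) -> 3
```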


Interprocess Communication (IPC) is used by cooperating processes for communication and synchronization. With the advent of distributed systems and microkernel operating systems, IPC has been used to design systems around cooperation. This raised the requirement of improving communication and synchronization for better system performance. Here, a mechanism for synchronization between processes that reduces process waiting time using POSIX (Portable Operating System Interface) threads is proposed to perform and synchronize a given task.
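
As an illustration of the kind of primitive involved (not the paper's proposed mechanism), the Python sketch below uses a condition variable, the counterpart of the POSIX pthread_cond_t, so that a waiting thread sleeps until work arrives instead of burning CPU time polling.

```python
# Producer/consumer with a condition variable: the consumer blocks in wait()
# and is woken exactly when an item is available, minimizing waiting overhead.
# Python's threading module mirrors the POSIX mutex/condition primitives.

import threading
from collections import deque

queue = deque()
ready = threading.Condition()

def producer(n):
    for i in range(n):
        with ready:
            queue.append(i)
            ready.notify()        # wake one waiting consumer immediately

def consumer(n):
    consumed = 0
    while consumed < n:
        with ready:
            while not queue:      # re-check: guards against spurious wakeups
                ready.wait()      # sleep; no CPU is burned while waiting
            queue.popleft()
        consumed += 1

t1 = threading.Thread(target=producer, args=(5,))
t2 = threading.Thread(target=consumer, args=(5,))
t2.start(); t1.start()
t1.join(); t2.join()
print("done")
```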


2003 ◽  
Vol 13 (02) ◽  
pp. 95-122 ◽  
Author(s):  
GEOFFROY VALLÉE ◽  
RENAUD LOTTIAUX ◽  
LOUIS RILLING ◽  
JEAN-YVES BERTHOU ◽  
IVAN DUTKA MALHEN ◽  
...  

In this paper, we present fundamental mechanisms for global process and memory management in an efficient single system image cluster operating system designed to execute workloads composed of high performance sequential and parallel applications. Their implementation in Kerrighed, our proposed distributed operating system, is composed of a set of Linux modules and a patch of less than 200 lines of code to the Linux kernel. Kerrighed is a unique single system image cluster operating system providing the standard Unix interface as well as distributed OS mechanisms such as load balancing on all cluster nodes. Our support for standard Unix interface includes support for multi-threaded applications and a checkpointing facility for both sequential and shared memory parallel applications. We present an experimental evaluation of the Kerrighed system and demonstrate the feasibility of the single system image approach at the kernel level.
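
As a toy illustration of the kind of cluster-wide decision such mechanisms enable (my sketch, not Kerrighed code), a single-system-image scheduler can place a newly created process on the least-loaded node so the cluster behaves like one machine:

```python
# Hypothetical placement step for an SSI scheduler: pick the node with the
# lowest normalized CPU load as the target for a newly created process.

def pick_node(loads):
    """loads: node id -> normalized CPU load. Returns the target node."""
    return min(loads, key=loads.get)

cluster_loads = {"node1": 0.85, "node2": 0.30, "node3": 0.55}
target = pick_node(cluster_loads)
print(f"new process placed on {target}")  # node2
```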


Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing is considered an on-demand resource service, with applications hosted in data centers on a pay-per-use basis. To allocate resources appropriately and satisfy user needs, an effective and reliable resource allocation method is required. Because of increased user demand, resource allocation has become a complex and challenging task: when a physical machine is overloaded, Virtual Machines share its load by utilizing the resources of other physical machines. Previous studies fall short on energy consumption and time management when Virtual Machines on different servers are kept in a powered-on state. AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating Virtual Machines from an ad hoc sub-server. EXECUTION MODEL: The execution of the research is carried out in two stages: first, the Virtual Machines and Physical Machines are located with respect to the server, and subsequently the allocation is cross-validated. The Modified Best Fit Decreasing algorithm is used to sort the Virtual Machines, and Multi-Machine Job Scheduling is used to place jobs on an appropriate host. An Artificial Neural Network classifier allocates jobs to hosts. Measures such as Service Level Agreement violation and energy consumption are considered, and fruitful results are obtained: a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
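
A schematic Python sketch of Modified Best Fit Decreasing placement follows; the single-dimension CPU capacities and all names are assumptions for illustration, not the authors' implementation.

```python
# Best Fit Decreasing over one resource dimension: sort VMs by decreasing
# demand, then place each on the host whose remaining capacity it fits most
# tightly, leaving the least unusable slack.

def mbfd(vms, hosts):
    """vms: {name: cpu_demand}; hosts: {name: cpu_capacity} (mutated)."""
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        # Among hosts that can fit the VM, pick the tightest fit.
        fitting = {h: cap - demand for h, cap in hosts.items() if cap >= demand}
        if not fitting:
            placement[vm] = None      # a real system would power on a host
            continue
        best = min(fitting, key=fitting.get)
        hosts[best] -= demand
        placement[vm] = best
    return placement

vms = {"vm1": 30, "vm2": 55, "vm3": 20}
hosts = {"hostA": 60, "hostB": 50}
print(mbfd(vms, hosts))  # {'vm2': 'hostA', 'vm1': 'hostB', 'vm3': 'hostB'}
```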


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Roberto Rodriguez-Zurrunero ◽  
Ramiro Utrilla ◽  
Elena Romero ◽  
Alvaro Araujo

Wireless Sensor Networks (WSNs) are a growing research area, as a large number of portable devices are being developed. This makes operating systems (OS) useful for homogenizing the development of these devices, reducing design times, and providing tools for developing complex applications. This work presents an operating system scheduler for resource-constrained wireless devices that adapts task scheduling to changing environments. The proposed adaptive scheduler allows the execution of low-priority tasks to be dynamically delayed while maintaining real-time capabilities for high-priority ones. The scheduler is therefore useful in nodes with rechargeable batteries, as it reduces their energy consumption when the battery level is low by delaying the least critical tasks. The adaptive scheduler has been implemented and tested on real nodes, and the results show that node lifetime can be increased by up to 70% in some scenarios, at the expense of increased latency for low-priority tasks.
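
The adaptive policy can be sketched as follows; the battery thresholds and delay factors below are invented for illustration and are not the values used in the paper.

```python
# Sketch of adaptive scheduling: high-priority tasks keep their nominal
# period, while low-priority periods are stretched as the battery drains.

import heapq

def delay_factor(battery_level):
    """Stretch low-priority periods as the battery empties (assumed policy)."""
    if battery_level > 0.5:
        return 1.0      # healthy battery: nominal periods
    if battery_level > 0.2:
        return 2.0      # low: run low-priority tasks half as often
    return 4.0          # critical: aggressively defer them

def schedule(tasks, battery_level, horizon):
    """tasks: (name, period, is_high_priority). Returns (time, name) firings."""
    factor = delay_factor(battery_level)
    queue = [(0.0, name, period * (1.0 if high else factor), high)
             for name, period, high in tasks]
    heapq.heapify(queue)
    timeline = []
    while queue and queue[0][0] < horizon:
        t, name, eff_period, high = heapq.heappop(queue)
        timeline.append((round(t, 1), name))
        heapq.heappush(queue, (t + eff_period, name, eff_period, high))
    return timeline

tasks = [("sample_sensor", 1.0, True), ("upload_log", 2.0, False)]
print(schedule(tasks, battery_level=0.15, horizon=8))
# sample_sensor fires every second; upload_log is stretched to every 8 s.
```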


Computing ◽  
2021 ◽  
Author(s):  
Antonio Brogi ◽  
Jose Carrasco ◽  
Francisco Durán ◽  
Ernesto Pimentel ◽  
Jacopo Soldani

Trans-cloud applications consist of multiple interacting components deployed across different cloud providers and at different service layers (IaaS and PaaS). In such complex deployment scenarios, fault handling and recovery need to deal with heterogeneous cloud offerings and to take into account inter-component dependencies. We propose a methodology for self-healing trans-cloud applications from failures occurring in application components or in the cloud services hosting them, both during deployment and while they are being operated. The proposed methodology reduces the time application components rely on faulted services and hence reside in “unstable” states, where they can suddenly fail in cascade or exhibit erroneous behaviour. We also present an open-source prototype illustrating the feasibility of our proposal, which we have exploited to carry out an extensive evaluation based on controlled experiments and monkey testing.
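
A minimal sketch of the dependency-aware recovery idea (my illustration, not the prototype's code): when a component faults, restart it together with everything that transitively depends on it, in dependency order, so nothing keeps running on top of a faulted service.

```python
# Dependency-aware self-healing: compute the transitive dependents of a
# faulted component, then restart the affected set in dependency order.

deps = {                      # component -> components it depends on
    "frontend": {"api"},
    "api": {"database"},
    "database": set(),
}

def dependents_of(component):
    """All components that (transitively) depend on the faulted one."""
    affected, frontier = set(), {component}
    while frontier:
        nxt = {c for c, ds in deps.items() if ds & frontier} - affected
        affected |= nxt
        frontier = nxt
    return affected

def heal(faulted):
    to_restart = {faulted} | dependents_of(faulted)
    order = []
    # Restart a component only after all its dependencies are back.
    while to_restart:
        ready = {c for c in to_restart if not (deps[c] & to_restart)}
        order += sorted(ready)
        to_restart -= ready
    return order

print(heal("database"))  # ['database', 'api', 'frontend']
```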

