Topology-Aware Resource-Efficient Placement for High Availability Clusters Over Geo-Distributed Cloud Infrastructure

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 107234-107246 ◽  
Author(s):  
Truong-Xuan Do ◽  
Younghan Kim

Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1553
Author(s):  
Marian Rusek ◽  
Grzegorz Dwornicki

The introduction of virtualization containers and container orchestrators has fundamentally changed the landscape of cloud application development. Containers provide a practical way to implement microservice-based architectures, whose repeatable, generic patterns make the development of reliable distributed applications more approachable and efficient. Orchestrators shift accidental complexity from inside the application into the automated cloud infrastructure. Existing container orchestrators are centralized systems that assign containers to cloud servers only at startup. In this paper, we propose a swarm-like distributed cloud management system that uses live migration of containers to dynamically reassign application components to different servers. It is based on the idea of "pheromone" robots: an additional mobile agent process is placed inside each application container to control the migration process. The number of parallel container migrations needed to reach an optimal state of the cloud is obtained using models, experiments, and simulations. We show that in the most common scenarios the proposed swarm-like algorithm performs better than existing systems and, due to its architecture, is also more scalable and more resilient to container death. It also adapts automatically to an influx of containers and to the addition of new servers to the cloud.
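
As a rough, illustrative sketch of how such a "pheromone"-style agent might decide on a migration, the Python fragment below shows one decision step run by the agent living inside a container. The callables get_local_load, get_neighbor_loads, and migrate_to, the threshold, and the probabilistic rule are assumptions made for this example, not the authors' actual algorithm or interface.

```python
import random

MIGRATION_THRESHOLD = 0.2  # assumed load gap that triggers a migration attempt


def migration_step(get_local_load, get_neighbor_loads, migrate_to):
    """One decision step of the (hypothetical) agent running inside a container."""
    local = get_local_load()            # e.g. CPU utilization of the current host, 0..1
    neighbors = get_neighbor_loads()    # {server_id: load} for reachable servers
    if not neighbors:
        return None
    target, target_load = min(neighbors.items(), key=lambda kv: kv[1])
    gap = local - target_load
    if gap <= MIGRATION_THRESHOLD:
        return None                     # placement is locally balanced; stay put
    # Probabilistic move: the larger the load gap, the more likely the migration.
    if random.random() < gap:
        migrate_to(target)
        return target
    return None


# Example wiring with stub callables (no real migration is performed here):
chosen = migration_step(lambda: 0.9,
                        lambda: {"srv-2": 0.3, "srv-3": 0.6},
                        lambda server: None)
```

The probabilistic move is one plausible way to keep many agents from reacting to the same imbalance at once, in the spirit of swarm coordination without a central scheduler.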


2020 ◽  
Vol 8 (5) ◽  
pp. 2040-2044

Cloud technologies are booming in the field of information technology, but cloud computing sometimes suffers failures. These failures call for more reliable frameworks with high availability of the computers acting as nodes. A request made by a user is replicated and sent to several VMs; if one VM fails, another can respond, which increases reliability. A great deal of research has been done, and is still being carried out, on schemes for fault tolerance that increase reliability. Earlier schemes focus on only one way of dealing with faults, whereas the scheme proposed by the author in this paper is adaptive and addresses fault-tolerance issues across various cloud infrastructures. The proposed scheme behaves adaptively when selecting between replication and fine-grained checkpointing methods to attain a reliable cloud infrastructure that can handle different client requirements. In addition, the algorithm determines the best-suited fault tolerance method for every designated virtual node (Zheng, Zhou, Lyu, and King, 2012).
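
A minimal sketch of how an adaptive selector between replication and fine-grained checkpointing could look is given below; the node attributes, thresholds, and method names are illustrative assumptions, not the rules proposed in the paper.

```python
from dataclasses import dataclass


@dataclass
class VirtualNode:
    node_id: str
    failure_rate: float        # observed failures per hour (assumed metric)
    response_deadline_ms: int  # latency requirement of the client request


def select_ft_method(node: VirtualNode) -> str:
    """Pick replication for latency-critical or failure-prone nodes,
    otherwise fall back to cheaper fine-grained checkpointing."""
    if node.response_deadline_ms < 100 or node.failure_rate > 0.05:
        return "replication"    # duplicate the request on several VMs
    return "checkpointing"      # periodically save state, restart on failure


if __name__ == "__main__":
    print(select_ft_method(VirtualNode("vm-1", failure_rate=0.10, response_deadline_ms=500)))
    print(select_ft_method(VirtualNode("vm-2", failure_rate=0.01, response_deadline_ms=500)))
```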


2017 ◽  
Vol 26 (03) ◽  
pp. 1750001 ◽  
Author(s):  
Hana Teyeb ◽  
Nejib Ben Hadj-Alouane ◽  
Samir Tata ◽  
Ali Balma

In geo-distributed cloud systems, a key challenge faced by cloud providers is to optimally tune and configure the underlying cloud infrastructure. An important problem in this context is finding an optimal virtual machine (VM) placement that minimizes costs while ensuring good system performance. Moreover, because demand and traffic patterns fluctuate, it is crucial to dynamically adjust the VM placement scheme over time. Most existing studies, however, deal with this problem either by ignoring its dynamic aspect or by proposing solutions that are not suitable for a geographically distributed cloud infrastructure. In this paper, exact as well as heuristic solutions based on Integer Linear Programming (ILP) formulations are proposed. Our work also addresses the problem of scheduling VM migrations by finding the migration sequence of intercommunicating VMs that minimizes the resulting traffic on the backbone network. The proposed algorithms execute within a reasonable time frame and readjust the VM placement scheme according to the perceived demand. Our aim is to use VM migration as a tool for dynamically adjusting the VM placement scheme while minimizing the network traffic generated by VM communication and migration. Finally, we demonstrate the effectiveness of the proposed algorithms through extensive experiments and simulations.
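
To make the migration-sequencing idea concrete, the sketch below shows a simple greedy heuristic that moves first the VMs whose migration removes the most inter-datacenter traffic. It is an illustrative stand-in under assumed data structures, not the ILP-based exact or heuristic algorithms proposed by the authors.

```python
def order_migrations(pending, traffic, placement, target):
    """pending: VM ids that must move; traffic[(a, b)]: volume exchanged by VMs a and b;
    placement[vm] / target[vm]: current and target datacenter of each VM (all assumed)."""
    state = dict(placement)          # evolving placement as migrations are applied
    remaining = set(pending)
    order = []

    def backbone_traffic(p):
        # Total inter-VM traffic crossing datacenter boundaries under placement p.
        return sum(vol for (a, b), vol in traffic.items() if p[a] != p[b])

    while remaining:
        def gain(vm):
            trial = dict(state)
            trial[vm] = target[vm]
            return backbone_traffic(state) - backbone_traffic(trial)

        nxt = max(remaining, key=gain)   # migration with the largest immediate traffic drop
        state[nxt] = target[nxt]
        remaining.discard(nxt)
        order.append(nxt)
    return order


if __name__ == "__main__":
    traffic = {("v1", "v2"): 10, ("v2", "v3"): 4}
    placement = {"v1": "dc-A", "v2": "dc-B", "v3": "dc-B"}
    target = {"v1": "dc-B", "v2": "dc-B", "v3": "dc-A"}
    print(order_migrations(["v1", "v3"], traffic, placement, target))  # -> ['v1', 'v3']
```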


Author(s):  
Denis Zolotariov

The article is devoted to the research and development of a highly available distributed automated computing system for iterative algorithms, based on a microservice architecture in a cloud infrastructure. The subject of the research is the practical foundations of building high-availability automated computing systems based on a microservice architecture in a cloud-based distributed infrastructure. The purpose of the article is to develop and substantiate practical recommendations for forming the infrastructure of a high-availability automated computing system based on a microservice architecture and for choosing its constituent elements and their components. The tasks of the work are to identify the necessary structural elements of a microservice automated computing system, to analyze the constituent components and the functional load of each of them, to set specific tasks for building each of them, and to justify the choice of tools for their creation. In the course of the research, methods of system analysis were used to decompose the complex system into elements and each element into functional components; the tools used were Apache Kafka, Kafkacat, Wolfram Mathematica, nginx, Lumen, Telegram, Dropbox, and MySQL. As a result of the study, it was found that the system infrastructure should consist of fault-tolerant interservice transport, a high-availability computing microservice, and communication microservices that deliver results to end clients and save or process them. For each of these, recommendations are provided on its formation and on the selection of implementation tools. Following the recommendations, one variant of such a system has been implemented; the principles of its operation are shown and its results are presented. It has been shown that, when using a Kafka queue, it is more efficient to publish results in batches rather than one at a time, since per-message publishing creates significant overhead on the queue servers and data latency for their clients. Recommendations are given on implementing a CI/CD system to build a continuous cycle of adding and improving microservices. Conclusions: practical foundations have been developed for implementing high-availability distributed automated computing systems based on a microservice architecture in a cloud infrastructure. The flexibility of such a system in processing results is demonstrated by the possibility of adding microservices and of using third-party analytical applications that can connect to the Kafka queue. The economic benefit of using the described system is shown, and directions for its future improvement are given.
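
As a hedged illustration of the batching recommendation, the sketch below publishes computation results to Kafka in batches using the kafka-python client; the broker address, topic name, record format, and batching parameters (linger_ms, batch_size) are assumptions for the example, not the configuration used in the article.

```python
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",        # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    linger_ms=100,         # wait up to 100 ms to fill a batch instead of sending each record alone
    batch_size=64 * 1024,  # target batch of 64 KiB per partition
)


def publish_results(results, topic="computation-results"):
    """Send a whole batch of computed results; the client groups them into
    far fewer broker requests than one-message-at-a-time publishing."""
    for record in results:
        producer.send(topic, record)
    producer.flush()       # block until the batch is actually delivered


publish_results([{"iteration": i, "value": i * i} for i in range(1000)])
```

With linger_ms and batch_size set, the client accumulates many small records into fewer broker requests, which matches the article's observation that per-message publishing is costly for both the queue servers and their clients.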


Author(s):  
Dapeng Wang ◽  
Jinsong Wu

This chapter discusses and surveys the concepts, demands, requirements, solutions, opportunities, challenges, and future perspectives and potential of Carrier Grade Cloud Computing (CGCC). It also introduces a carrier-grade distributed cloud computing architecture and discusses its benefits and advantages. Unlike independent cloud service providers, telecommunication operators can integrate their conventional communication networking capabilities with new cloud infrastructure services to provide inexpensive, high-quality cloud services, backed by their deep understanding of, and strong relationships with, individual and enterprise customers. The relevant design requirements and challenges may include performance, scalability, service-level agreement management, security, network optimization, and unified management. The key issues in CGCC design may include cost-effective hardware and software configurations, distributed infrastructure deployment models, and operation processes.

