Concurrent placement, capacity provisioning, and request flow control for a distributed cloud infrastructure

Author(s): Shuang Chen, Yanzhi Wang, Massoud Pedram
Electronics, 2021, Vol. 10(13), p. 1553
Author(s): Marian Rusek, Grzegorz Dwornicki

The introduction of virtualization containers and container orchestrators has fundamentally changed the landscape of cloud application development. Containers provide a practical way to implement microservice-based architectures, enabling repeatable, generic patterns that make the development of reliable distributed applications more approachable and efficient. Orchestrators shift accidental complexity from inside the application into the automated cloud infrastructure. Existing container orchestrators are centralized systems that schedule containers to cloud servers only at startup. In this paper, we propose a swarm-like distributed cloud management system that uses live migration of containers to dynamically reassign application components to different servers. It is based on the idea of “pheromone” robots: an additional mobile agent process is placed inside each application container to control the migration process. The number of parallel container migrations needed to reach an optimal state of the cloud is obtained using models, experiments, and simulations. We show that in the most common scenarios the proposed swarm-like algorithm performs better than existing systems, and that its architecture makes it more scalable and resilient to container death. It also adapts automatically to the influx of containers and to the addition of new servers to the cloud.
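The following is a minimal, illustrative sketch of the kind of per-container migration agent the abstract describes, assuming a pheromone-like signal derived from residual server capacity; the names (`Server`, `pheromone`, `decide_migration`) and the threshold-based rule are hypothetical simplifications, not the authors' implementation.

```python
# Hypothetical sketch of a per-container "pheromone" migration agent.
# Assumption: a server's residual capacity acts as its attracting pheromone;
# each agent decides independently whether to migrate its container.
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    capacity: float   # normalized CPU capacity
    load: float       # current utilization, 0 <= load <= capacity

def pheromone(server: Server) -> float:
    """Residual capacity serves as the attracting 'pheromone' signal."""
    return max(server.capacity - server.load, 0.0)

def decide_migration(current: Server, peers: List[Server],
                     container_load: float,
                     threshold: float = 0.2) -> Optional[Server]:
    """Pick a migration target for this agent's container, or None to stay.

    Migrate only when a peer offers noticeably more free capacity than the
    current host; the `threshold` adds hysteresis, damping oscillations when
    many agents decide in parallel.
    """
    candidates = [s for s in peers
                  if pheromone(s) - pheromone(current) > threshold
                  and pheromone(s) >= container_load]
    if not candidates:
        return None
    # Probabilistic, pheromone-weighted choice, as in ant-inspired heuristics.
    weights = [pheromone(s) for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

if __name__ == "__main__":
    servers = [Server("s1", 1.0, 0.9), Server("s2", 1.0, 0.3), Server("s3", 1.0, 0.5)]
    target = decide_migration(servers[0], servers[1:], container_load=0.2)
    print("migrate to:", target.name if target else "stay")
```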


2017, Vol. 26(03), p. 1750001
Author(s): Hana Teyeb, Nejib Ben Hadj-Alouane, Samir Tata, Ali Balma

In geo-distributed cloud systems, a key challenge faced by cloud providers is to optimally tune and configure the underlying cloud infrastructure. An important problem in this context is finding an optimal virtual machine (VM) placement that minimizes costs while ensuring good system performance. Moreover, because demand and traffic patterns fluctuate, it is crucial to dynamically adjust the VM placement scheme over time. Most existing studies, however, either ignore the dynamic aspect of this problem or propose solutions unsuited to a geographically distributed cloud infrastructure. In this paper, exact as well as heuristic solutions based on integer linear programming (ILP) formulations are proposed. Our work also addresses the problem of scheduling VM migrations, finding the migration sequence of intercommunicating VMs that minimizes the resulting traffic on the backbone network. The proposed algorithms execute within a reasonable time frame to readjust the VM placement scheme according to the perceived demand. Our aim is to use VM migration as a tool for dynamically adjusting the VM placement scheme while minimizing the network traffic generated by VM communication and migration. Finally, we demonstrate the effectiveness of the proposed algorithms through extensive experiments and simulations.
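As a concrete illustration of the ILP-based placement idea, the sketch below formulates a toy instance in Python with the PuLP library; the library choice, data, and cost model are assumptions for illustration, and the paper's formulations additionally cover migration scheduling and demand dynamics.

```python
# Toy ILP: assign VMs to data centers so that inter-VM traffic cost over the
# backbone is minimized, subject to per-site capacity. Placeholder data only.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

vms = ["vm1", "vm2", "vm3"]
sites = ["dc1", "dc2"]
capacity = {"dc1": 2, "dc2": 2}                   # max VMs per site
traffic = {("vm1", "vm2"): 5, ("vm2", "vm3"): 3}  # inter-VM traffic volume
dist = {("dc1", "dc1"): 0, ("dc1", "dc2"): 1,     # backbone cost between sites
        ("dc2", "dc1"): 1, ("dc2", "dc2"): 0}

pairs = [f"{u}|{v}" for u, v in traffic]
x = LpVariable.dicts("x", (vms, sites), cat=LpBinary)        # x[v][s]: v placed on s
y = LpVariable.dicts("y", (pairs, sites, sites), cat=LpBinary)  # linearizes x[u][su]*x[v][sv]

prob = LpProblem("vm_placement", LpMinimize)
prob += lpSum(traffic[(u, v)] * dist[(su, sv)] * y[f"{u}|{v}"][su][sv]
              for (u, v) in traffic for su in sites for sv in sites)

for v in vms:                       # each VM is placed exactly once
    prob += lpSum(x[v][s] for s in sites) == 1
for s in sites:                     # site capacity
    prob += lpSum(x[v][s] for v in vms) <= capacity[s]
for (u, v) in traffic:              # linking constraints for the linearization
    for su in sites:
        for sv in sites:
            prob += y[f"{u}|{v}"][su][sv] >= x[u][su] + x[v][sv] - 1

prob.solve()
for v in vms:
    print(v, "->", next(s for s in sites if value(x[v][s]) > 0.5))
```

Because the objective coefficients are nonnegative, minimization drives each y variable to 1 only when both linked placement variables are 1, so the linearization correctly prices the traffic between co-assigned VM pairs.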


Author(s): Dapeng Wang, Jinsong Wu

This chapter surveys the concepts, demands, requirements, solutions, opportunities, challenges, and future perspectives of Carrier Grade Cloud Computing (CGCC). It also introduces a carrier grade distributed cloud computing architecture and discusses its benefits and advantages. Unlike independent cloud service providers, telecommunication operators can integrate their conventional communications networking capabilities with new cloud infrastructure services to provide inexpensive, high-quality cloud services, leveraging their deep understanding of, and strong relationships with, individual and enterprise customers. The relevant design requirements and challenges include performance, scalability, service-level agreement management, security, network optimization, and unified management. The key issues in CGCC design include cost-effective hardware and software configurations, distributed infrastructure deployment models, and operation processes.


Author(s): Kahina Bessai, Samir Youcef, Ammar Oulamara, Claude Godart, Selmin Nurcan

The cloud computing paradigm is adopted for several advantages, such as reducing the cost incurred when using a set of resources. However, despite the many proven benefits of using a cloud infrastructure to run business processes, it still faces a major problem that can compromise its success: the lack of guidance for choosing between multiple offerings. Moreover, when running business processes it is difficult to automate all tasks, and several often-conflicting objectives must be taken into account. To address this, the authors propose a set of scheduling strategies for business processes in cloud contexts. More precisely, they propose three complementary bi-criteria approaches for scheduling business processes on distributed cloud resources, taking into account the elasticity that allows users to allocate and release compute resources (virtual machines) on demand and the pay-as-you-go business model. It is therefore reasonable to assume that the number of virtual machines is unbounded while the number of human resources is finite. Experimental results demonstrate that the proposed approaches perform well.
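To make the setting concrete, below is a simplified greedy sketch of bi-criteria scheduling under the stated assumptions (unbounded VMs billed per period, a finite pool of human resources); the weighting scheme, billing model, and all names are illustrative and do not reproduce the authors' three approaches.

```python
# Greedy sketch: automated tasks run on an unbounded pool of VMs billed per
# period, human tasks compete for a finite set of people. Each automated task
# goes either on a freshly leased VM or on an already leased one; the choice
# minimizes a weighted sum of finish time and extra cost (weight `alpha`).
# Task precedence constraints are ignored to keep the sketch short.
import math
from dataclasses import dataclass
from typing import List, Tuple

BILLING_UNIT = 60.0   # assumed VM billing quantum (time units)
VM_PRICE = 1.0        # assumed price per billing period

@dataclass(eq=False)
class VM:
    free_at: float = 0.0      # when the VM finishes its current task
    paid_until: float = 0.0   # end of the billing periods already purchased

@dataclass
class Human:
    name: str
    free_at: float = 0.0

@dataclass
class Task:
    name: str
    duration: float
    human: bool = False

def extra_cost(vm: VM, start: float, duration: float) -> float:
    """Price of extending the VM lease to cover [start, start + duration]."""
    end = start + duration
    if end <= vm.paid_until:
        return 0.0
    return math.ceil((end - vm.paid_until) / BILLING_UNIT) * VM_PRICE

def schedule(tasks: List[Task], humans: List[Human],
             alpha: float = 0.5) -> List[Tuple]:
    vms: List[VM] = []
    plan = []
    for t in tasks:
        if t.human:
            h = min(humans, key=lambda p: p.free_at)  # finite pool: earliest person
            start = h.free_at
            h.free_at = start + t.duration
            plan.append((t.name, h.name, start, start + t.duration, 0.0))
            continue
        # Elastic compute: compare a fresh VM against every already leased VM.
        candidates = [(VM(), 0.0)] + [(vm, vm.free_at) for vm in vms]
        def score(cand):
            vm, start = cand
            return alpha * (start + t.duration) + (1 - alpha) * extra_cost(vm, start, t.duration)
        vm, start = min(candidates, key=score)
        cost = extra_cost(vm, start, t.duration)
        end = start + t.duration
        if end > vm.paid_until:
            vm.paid_until += math.ceil((end - vm.paid_until) / BILLING_UNIT) * BILLING_UNIT
        vm.free_at = end
        if vm not in vms:
            vms.append(vm)
        plan.append((t.name, f"vm{vms.index(vm)}", start, end, cost))
    return plan

if __name__ == "__main__":
    humans = [Human("alice"), Human("bob")]
    tasks = [Task("approve", 3, human=True), Task("transform", 50),
             Task("archive", 5), Task("review", 2, human=True)]
    for row in schedule(tasks, humans):
        print(row)
```

Setting `alpha` closer to 1 favors earlier completion (more fresh VMs), while values closer to 0 favor packing automated tasks into billing periods that are already paid for, which is the kind of time/cost trade-off a bi-criteria scheduler must navigate.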

