Differentiated Bandwidth Guarantees for Cloud Data Centers

2013 ◽  
Vol 14 (03) ◽  
pp. 1360002
Author(s):  
YANGYANG LI ◽  
HONGBO WANG ◽  
JIANKANG DONG ◽  
JUNBO LI ◽  
SHIDUAN CHENG

By means of virtualization, computing and storage resources are effectively multiplexed by different applications in cloud data centers. However, useful approaches to sharing the internal network resources of cloud data centers are lacking. Ineffective network sharing not only degrades the performance of applications but also reduces the efficiency of data center operation. To guarantee the network performance of applications and provide fine-grained service differentiation, in this paper we propose a differentiated bandwidth guarantee scheme for data center networks. Utility functions are constructed according to the throughput- and delay-sensitive characteristics of different applications. Aiming to maximize the utility of all applications, the problem is formulated as a multi-objective optimization problem. We solve this problem using a heuristic algorithm, the elitist Non-Dominated Sorting Genetic Algorithm-II (NSGA-II), and apply multi-attribute decision making to refine the solutions. Extensive simulations show that our scheme provides minimum bandwidth guarantees and achieves more fine-grained service differentiation than existing approaches. The simulations also verify that the proposed mechanism is suitable for arbitrary data center architectures.
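As a rough illustration of this kind of formulation (not the authors' implementation, which the abstract does not detail), the sketch below casts bandwidth allocation for two application classes as a two-objective problem and solves it with the NSGA-II implementation in the pymoo library (version 0.6+ assumed). The utility shapes, application counts, and link capacity are all illustrative assumptions.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

CAPACITY = 10.0             # shared link capacity in Gbps (assumed)
N_TP, N_DL = 3, 3           # throughput- vs delay-sensitive applications

class BandwidthProblem(Problem):
    def __init__(self):
        super().__init__(n_var=N_TP + N_DL, n_obj=2, n_ieq_constr=1,
                         xl=0.1, xu=CAPACITY)

    def _evaluate(self, x, out, *args, **kwargs):
        tp, dl = x[:, :N_TP], x[:, N_TP:]
        u_tp = np.log1p(tp).sum(axis=1)                     # concave throughput utility
        u_dl = (1 / (1 + np.exp(-(dl - 1.0)))).sum(axis=1)  # sigmoid delay utility
        out["F"] = np.column_stack([-u_tp, -u_dl])  # pymoo minimizes objectives
        out["G"] = x.sum(axis=1) - CAPACITY         # total allocation <= capacity

res = minimize(BandwidthProblem(), NSGA2(pop_size=60), ("n_gen", 100), seed=1)
# res.X holds the Pareto front of allocations; a multi-attribute decision
# step (e.g., weighted normalization) would then select one operating point.
```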

Load balancing algorithms and service broker policies play a crucial role in determining the performance of cloud systems. User response time and data center request servicing time are largely affected by the load balancing algorithm and service broker policy in use. Several load balancing algorithms and service broker policies exist in the literature to perform data center allocation and virtual machine allocation for a given set of user requests. In this paper, we investigate the performance of the equally spread current execution (ESCE) load balancing algorithm with the closest data center (CDC) service broker policy in a cloud environment consisting of data centers with homogeneous and heterogeneous device characteristics and heterogeneous communication bandwidth between the different regions where the cloud data centers are deployed. We performed simulations using CloudAnalyst, an open-source tool, with different settings of device characteristics and bandwidth. The user response time and data center request servicing time are found to be considerably lower in the heterogeneous environment.
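The core of ESCE is simple to sketch: the balancer tracks in-flight requests per virtual machine and always assigns the next request to the least-loaded VM. The following minimal Python sketch (names and structure are illustrative, not CloudAnalyst's code) captures that rule.

```python
from collections import Counter

class ESCEBalancer:
    """Equally Spread Current Execution: keep active request counts even."""

    def __init__(self, vm_ids):
        self.active = Counter({vm: 0 for vm in vm_ids})  # in-flight requests per VM

    def allocate(self):
        # Assign the request to the VM currently executing the fewest requests.
        vm = min(self.active, key=self.active.get)
        self.active[vm] += 1
        return vm

    def release(self, vm):
        # Called when a request finishes on the VM.
        self.active[vm] -= 1

balancer = ESCEBalancer(["vm-0", "vm-1", "vm-2"])
for _ in range(5):
    print(balancer.allocate())   # spreads requests evenly across the three VMs
```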


Author(s):  
Deepika T. ◽  
Prakash P.

The flourishing development of the cloud computing paradigm provides several services to the industrial business world. Power consumption by cloud data centers is one of the crucial issues for service providers in the domain of cloud computing. Given the rapid technology enhancements in cloud environments and the ongoing expansion of data centers, power utilization in data centers is expected to grow unabated. A diverse set of numerous connected devices, engaged with the ubiquitous cloud, results in unprecedented power utilization by the data centers, accompanied by increased carbon footprints. Nearly a million physical machines (PMs) are running across these data centers, along with 5-6 million virtual machines (VMs). In the next five years, the power needs of this domain are expected to spiral up to 5% of global power production. Reducing VM power consumption in turn reduces PM power consumption; moreover, predicting the year-by-year changes in data center power consumption can aid cloud vendors in planning, since sudden fluctuations in power utilization can cause power outages in cloud data centers. This paper aims to forecast VM power consumption with the help of regressive predictive analysis, one of the Machine Learning (ML) techniques. The approach predicts future values using a Multi-Layer Perceptron (MLP) regressor, which achieves 91% accuracy during the prediction process.
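A minimal sketch of this style of regressive predictive analysis, using scikit-learn's MLPRegressor, is shown below. The synthetic feature set (CPU, memory, and I/O utilization driving a linear-plus-noise power model) is an assumption for illustration only; the paper's actual features and data are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 3))        # per-VM cpu, mem, io utilization (assumed)
watts = (120 + 180 * X[:, 0] + 40 * X[:, 1] + 20 * X[:, 2]
         + rng.normal(0, 5, 2000))           # synthetic power readings

X_train, X_test, y_train, y_test = train_test_split(X, watts, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32),
                                   max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out VMs:", model.score(X_test, y_test))
```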


2021 ◽  
Vol 17 (3) ◽  
pp. 155014772199721
Author(s):  
Mueen Uddin ◽  
Mohammed Hamdi ◽  
Abdullah Alghamdi ◽  
Mesfer Alrizq ◽  
Mohammad Sulleman Memon ◽  
...  

Cloud computing is a well-known technology that provides flexible, efficient, and cost-effective information technology solutions for multinationals to offer improved and enhanced quality of business services to end-users. The cloud computing paradigm grew out of grid and parallel computing models, as it uses virtualization, server consolidation, utility computing, and other computing technologies and models to provide better information technology solutions for large-scale computational data centers. The recently intensifying computational demands of multinational enterprises have motivated the growth of large, complex cloud data centers that handle the business, monetary, Internet, and commercial applications of different enterprises. A cloud data center encompasses thousands of physical server machines arranged in racks, along with network, storage, and other equipment, and requires an extensive amount of power to run the processes and services needed by business firms. This data center infrastructure leads to different challenges such as enormous power consumption, underutilization of installed equipment (especially physical server machines), and CO2 emissions contributing to global warming. In this article, we highlight data center issues in the context of Pakistan, where the data center industry faces huge power deficits and struggles to fulfill the power demands of providing data and operational services to business enterprises. The research investigates these challenges and provides solutions to reduce the number of installed physical server machines and their related equipment. We propose a server consolidation technique that increases the utilization of existing server machines by migrating their workloads to virtual server machines, in order to implement green, energy-efficient cloud data centers. To achieve this objective, we also introduce a novel Virtualized Task Scheduling Algorithm to manage and properly distribute physical server machine workloads onto virtual server machines. The results are generated from a case study performed in Pakistan, where the proposed server consolidation technique and virtualized task scheduling algorithm were applied to a tier-level data center. The case study demonstrates annual power savings of 23,600 W and overall cost savings of US$78,362. The results also show that the utilization ratio of the existing physical server machines increased from 10% to 30%, while the number of server machines was reduced by 50%, contributing enormously toward huge power savings.
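The paper's Virtualized Task Scheduling Algorithm is not specified in the abstract, so the sketch below illustrates the general consolidation idea with a standard stand-in heuristic: first-fit decreasing bin packing of VM demands onto as few physical servers as possible, so that surplus servers can be decommissioned or powered off. All numbers are illustrative assumptions.

```python
def consolidate(vm_demands, server_capacity):
    """Pack VM CPU demands (in cores) onto servers via first-fit decreasing."""
    servers = []  # each entry: [remaining_capacity, [vm indices hosted]]
    for vm, demand in sorted(enumerate(vm_demands), key=lambda p: -p[1]):
        for srv in servers:
            if srv[0] >= demand:          # first already-on server with room
                srv[0] -= demand
                srv[1].append(vm)
                break
        else:                             # no fit: power on one more server
            servers.append([server_capacity - demand, [vm]])
    return servers

placement = consolidate([4, 8, 2, 6, 3, 5], server_capacity=16)
print(f"{len(placement)} physical servers needed")   # 2, down from 6 one-VM hosts
for remaining, vms in placement:
    print("VMs", vms, "| free cores:", remaining)
```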


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Chunxia Yin ◽  
Jian Liu ◽  
Shunfu Jin

In recent years, the energy consumption of cloud data centers has continued to increase. A large number of servers run at low utilization, which results in a great waste of power. To save more energy in a cloud data center, we propose an energy-efficient task-scheduling mechanism with an on/sleep switching mode for servers in the virtualized cloud data center. The key idea is that when the number of idle VMs reaches a specified threshold, the server with the most idle VMs is switched to sleep mode after all its running tasks are migrated to other servers. From the perspective of the total number of tasks and the number of servers in sleep mode in the system, we establish a two-dimensional Markov chain to analyse the proposed energy-efficient mechanism. Using the matrix-geometric solution method, we mathematically estimate the energy consumption and the response performance. Both numerical and simulation experiments show that our proposed energy-efficient mechanism can effectively reduce energy consumption while guaranteeing response performance. Finally, by constructing a cost function, the number of VMs hosted on each server is optimized.
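The threshold rule itself can be sketched briefly. The following minimal Python sketch (data structures and the migration target choice are assumptions, not the authors' model) puts the server with the most idle VMs to sleep once the pool-wide idle count reaches the threshold, after migrating its running tasks.

```python
THRESHOLD = 4   # idle-VM count that triggers a sleep transition (assumed)

class Server:
    def __init__(self, name, vms):
        self.name, self.vms = name, vms       # vms: list of "idle"/"busy" slots
        self.asleep = False

    @property
    def idle(self):
        return self.vms.count("idle")

def maybe_sleep(servers):
    awake = [s for s in servers if not s.asleep]
    if sum(s.idle for s in awake) < THRESHOLD:
        return None                           # not enough spare capacity yet
    victim = max(awake, key=lambda s: s.idle) # server with the most idle VMs
    targets = [s for s in awake if s is not victim]
    if not targets:
        return None                           # never sleep the last awake server
    for _ in [v for v in victim.vms if v == "busy"]:
        targets[0].vms.append("busy")         # migrate running tasks away
    victim.vms, victim.asleep = [], True
    return victim.name

servers = [Server("s0", ["idle", "idle", "busy"]),
           Server("s1", ["idle", "busy", "idle"])]
print("slept:", maybe_sleep(servers))         # -> slept: s0
```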


2018 ◽  
Vol 113 ◽  
pp. 14-25
Author(s):  
Wei Jiang ◽  
Wanchun Jiang ◽  
Weiping Wang ◽  
Haodong Wang ◽  
Yi Pan ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2107
Author(s):  
Jaehak Lee ◽  
Heonchang Yu

With the evolution of cloud technology, the number of user applications is increasing, and computational workloads are becoming increasingly diverse and unpredictable. However, cloud data centers still exhibit low I/O performance because of scheduling policies that are based on the degree of physical CPU (pCPU) occupancy. Notably, existing scheduling policies cannot guarantee good I/O performance because of the uncertainty of the extent of I/O occurrence and the lack of fine-grained workload classification. To overcome these limitations, we propose ISACS, an I/O strength-aware credit scheduler for virtualized environments. Based on the Credit2 scheduler, ISACS provides a fine-grained workload-aware scheduling technique to mitigate I/O performance degradation in virtualized environments. Further, ISACS uses the event channel mechanism in the virtualization architecture to expand the scope of the scheduling information area and measures the I/O strength of each virtual CPU (vCPU) in the run-queue. ISACS then allocates two types of virtual credits to all vCPUs in the run-queue to increase I/O performance while preventing CPU performance degradation. Finally, through I/O load balancing, ISACS prevents I/O-intensive vCPUs from becoming concentrated on specific cores. Our experiments show that, compared with existing virtualization environments, ISACS provides higher I/O performance with a negligible impact on CPU performance.
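To make the credit idea concrete, here is a hedged sketch of the mechanism as described in the abstract (not the actual Xen/Credit2 code): each vCPU's I/O strength is estimated from its event-channel activity, and vCPUs earn extra virtual credit in proportion to that strength, capped so CPU-bound vCPUs are not starved. All constants and field names are illustrative.

```python
BASE_CREDIT = 256     # per-round credit for every vCPU (illustrative)
IO_BONUS_CAP = 128    # ceiling on virtual credit earned from I/O strength

def assign_credits(runqueue):
    """runqueue: list of dicts with 'name' and 'event_channel_signals'."""
    total = sum(v["event_channel_signals"] for v in runqueue) or 1
    for vcpu in runqueue:
        strength = vcpu["event_channel_signals"] / total   # I/O strength share
        bonus = min(IO_BONUS_CAP, int(IO_BONUS_CAP * strength * len(runqueue)))
        vcpu["credit"] = BASE_CREDIT + bonus
    # Higher credit -> scheduled earlier, shortening I/O response latency.
    return sorted(runqueue, key=lambda v: -v["credit"])

rq = [{"name": "vcpu-io",  "event_channel_signals": 900},
      {"name": "vcpu-cpu", "event_channel_signals": 30}]
for v in assign_credits(rq):
    print(v["name"], v["credit"])   # I/O-heavy vCPU is boosted, CPU one keeps base
```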


2020 ◽  
Vol 50 (6) ◽  
pp. 805-826
Author(s):  
Daniel Rosendo ◽  
Demis Gomes ◽  
Guto Leoni Santos ◽  
Leylane Silva ◽  
Andre Moreira ◽  
...  

Author(s):  
Cail Song ◽  
Bin Liang ◽  
Jiao Li

Existing virtual machine deployment algorithms may underutilize physical machines or consume excessive energy in data centers, resulting in a decline in the service quality of cloud data centers or rising operational costs, which ultimately decreases cloud service providers' earnings. To address this situation, a resource clustering algorithm for cloud data centers is proposed. The algorithm systematically analyzes the cloud data center model and physical machine utilization, establishes dynamic resource clustering rules through the k-means clustering algorithm, and deploys virtual machines based on the clustering results, so as to raise physical machine utilization and lower the energy consumption of cloud data centers. The experimental results indicate that, for compute-intensive virtual machines in cloud data centers, physical machine utilization under this algorithm is improved by 12% on average compared with the baseline algorithm, and the cloud data center's energy consumption is lowered by 15% on average. For general-purpose virtual machines, physical machine utilization is improved by 14% on average and energy consumption is lowered by 12% on average compared with the baseline algorithm. These results demonstrate that the method performs well in cloud data center resource management and may serve as a useful reference.
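A small sketch of the clustering-then-placement idea follows, using scikit-learn's KMeans (the feature choice of free CPU and memory, and the in-cluster tie-break, are assumed details, not the paper's exact rules): physical machines are grouped by residual capacity, and a new VM is placed into the cluster whose profile best matches its demand.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: physical machines; columns: free CPU cores, free memory in GB (assumed).
free_resources = np.array([[2, 8], [12, 48], [4, 16], [14, 60], [3, 12], [10, 40]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(free_resources)

def place_vm(demand):
    """Pick the cluster closest to the VM's (cpu, mem) demand, then the
    machine inside it with the most free CPU (illustrative tie-break)."""
    cluster = km.predict(np.array([demand]))[0]
    members = np.flatnonzero(km.labels_ == cluster)
    return members[np.argmax(free_resources[members, 0])]

print("host PM index:", place_vm([8, 32]))   # lands in the high-capacity cluster
```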


2021 ◽  
Vol 51 (4) ◽  
pp. 15-22
Author(s):  
Arjun Devraj ◽  
Liang Wang ◽  
Jennifer Rexford

Refraction networking is a promising censorship circumvention technique in which a participating router along the path to an innocuous destination deflects traffic to a covert site that is otherwise blocked by the censor. However, refraction networking faces major practical challenges due to performance issues and various attacks (e.g., routing-around-the-decoy and fingerprinting). Given that many sites are now hosted in the cloud, data centers offer an advantageous setting to implement refraction networking due to the physical proximity and similarity of hosted sites. We propose REDACT, a novel class of refraction networking solutions where the decoy router is a border router of a multi-tenant data center and the decoy and covert sites are tenants within the same data center. We highlight one specific example REDACT protocol, which leverages TLS session resumption to address the performance and implementation challenges in prior refraction networking protocols. REDACT also offers scope for other designs with different realistic use cases and assumptions.
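The abstract does not specify the REDACT wire format, so the following is a purely conceptual Python sketch of the deflection decision at a border router, under assumed details: the client's resumed TLS session ticket carries a covert tag (here, an HMAC under a key registered out of band), and tagged flows addressed to the decoy tenant are internally forwarded to the covert tenant.

```python
import hashlib
import hmac

SHARED_KEY = b"client-registered-secret"   # assumption: out-of-band registration

def covert_tagged(session_ticket: bytes) -> bool:
    # Assumed layout: last 32 bytes are an HMAC-SHA256 over the ticket body.
    body, tag = session_ticket[:-32], session_ticket[-32:]
    return hmac.compare_digest(tag, hmac.new(SHARED_KEY, body, hashlib.sha256).digest())

def route(dst_tenant: str, session_ticket: bytes) -> str:
    # Border router: deflect tagged flows to the covert tenant; pass others through.
    return "covert-tenant" if covert_tagged(session_ticket) else dst_tenant

ticket = b"resumed-session-state"
ticket += hmac.new(SHARED_KEY, ticket, hashlib.sha256).digest()
print(route("decoy-tenant", ticket))   # -> covert-tenant
```

Because both tenants sit in the same multi-tenant data center, this deflection never leaves the border, which is what sidesteps the routing-around-the-decoy attack described above.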

