An Energy-Efficient Resource Scheduling Algorithm for Cloud Computing based on Resource Equivalence Optimization

Author(s):  
Li Mao ◽  
De Yu Qi ◽  
Wei Wei Lin ◽  
Bo Liu ◽  
Ye Da Li

With the rapid growth of energy consumption in global data centers and IT systems, energy optimization has become an important problem for cloud data centers. By introducing the energy constraints of heterogeneous physical servers in cloud computing, an energy-efficient resource scheduling model for heterogeneous physical servers based on constraint satisfaction problems is presented. A model-solving method based on resource equivalence optimization is proposed, in which resources of the same equivalence class are pruned during allocation so as to reduce the solution space of the resource allocation model and speed up the model solution. Experimental results show that, compared with DynamicPower and MinPM, the proposed algorithm (EqPower) not only improves the performance of resource allocation but also reduces the energy consumption of the cloud data center.
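To illustrate the equivalence-pruning idea in general terms (this is a minimal sketch, not the authors' EqPower implementation; the `Server` structure and `place` routine are hypothetical), a backtracking placement search can skip servers whose residual state matches one already tried, since those branches are symmetric:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    cpu_free: int          # remaining CPU capacity (cores)
    mem_free: int          # remaining memory capacity (GB)
    power_per_core: float  # marginal power cost of one busy core

def place(vms: List[dict], servers: List[Server]) -> Optional[List[int]]:
    """Backtracking placement of VM demands onto servers.

    At each step, servers with identical residual state form one
    equivalence class, so only one representative per class is tried,
    pruning symmetric branches of the search tree."""
    if not vms:
        return []
    vm, rest = vms[0], vms[1:]
    seen = set()
    for i, s in enumerate(servers):
        key = (s.cpu_free, s.mem_free, s.power_per_core)
        if key in seen:        # equivalent to a server already tried
            continue
        seen.add(key)
        if s.cpu_free >= vm["cpu"] and s.mem_free >= vm["mem"]:
            s.cpu_free -= vm["cpu"]; s.mem_free -= vm["mem"]
            tail = place(rest, servers)
            if tail is not None:
                return [i] + tail
            s.cpu_free += vm["cpu"]; s.mem_free += vm["mem"]  # backtrack
    return None
```

The pruning step is what shrinks the solution space: a data center with many identical machines contributes only one candidate per equivalence class at each branching point.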

Author(s):  
Wei-Wei Lin ◽  
Liang Tan ◽  
James Z. Wang

Energy efficiency is one of the most important design considerations for a cloud data center. Recent approaches to energy-efficient resource management for data centers usually model the problem as a bin packing problem with the goal of minimizing the number of physical machines (PMs) employed. However, minimizing the number of PMs does not necessarily minimize energy consumption in a heterogeneous cloud environment. To address this, the paper models resource allocation in a heterogeneous cloud data center as a constraint satisfaction problem (CSP). By solving this CSP, an optimal resource allocation scheme, comprising a virtual machine provisioning algorithm and a virtual machine packing algorithm, is designed to minimize the energy consumption of a virtualized heterogeneous cloud data center. Performance studies show that the proposed scheme outperforms existing bin-packing-based approaches in terms of energy consumption in heterogeneous cloud data centers.
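To make the contrast with bin packing concrete, here is a hypothetical energy-aware placement step (an illustrative sketch, not the paper's algorithm): rather than accepting any PM that fits, it chooses the feasible PM with the smallest estimated increase in power draw, which matters when PM types differ. The linear power model and field names are assumptions.

```python
def energy_increase(pm, vm):
    """Estimated extra power (watts) if `vm` is placed on `pm`,
    using a simple linear power model: an idle component plus a
    per-utilization slope, both of which differ across PM types."""
    extra_util = vm["cpu"] / pm["cpu_capacity"]
    startup = pm["idle_power"] if pm["util"] == 0 else 0.0  # waking an idle PM costs its idle draw
    return startup + pm["dynamic_power"] * extra_util

def pick_pm(pms, vm):
    """Energy-aware placement: choose the feasible PM with the
    smallest marginal energy cost, not merely the fewest PMs."""
    feasible = [p for p in pms
                if (1 - p["util"]) * p["cpu_capacity"] >= vm["cpu"]]
    if not feasible:
        return None
    best = min(feasible, key=lambda p: energy_increase(p, vm))
    best["util"] += vm["cpu"] / best["cpu_capacity"]
    return best
```

Under a heterogeneous fleet, this rule may prefer spreading VMs across several efficient machines over packing them onto fewer power-hungry ones, which is exactly where pure bin packing falls short.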


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of those services on limited local resources. It provides access to distant computing resources via Web services, while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, the deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center, involving aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to cool data centers, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


2017 ◽  
Vol 10 (13) ◽  
pp. 162
Author(s):  
Amey Rivankar ◽  
Anusooya G

Cloud computing is the latest trend in large-scale distributed computing. It provides diverse services on demand from distributed resources such as servers, software, and databases. One of the challenging problems in cloud data centers is to balance the load of different reconfigurable virtual machines. Thus, providing a mechanism for efficient resource management will be very significant for the near future of cloud computing. Many load balancing algorithms have already been implemented and executed to manage resources efficiently and adequately. The objective of this paper is to analyze the shortcomings of existing algorithms and to implement a new algorithm that gives an optimized load balancing result.
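For reference, one of the simplest policies such comparisons start from is least-loaded dispatch, sketched below; this is an illustrative baseline under assumed names, not the algorithm proposed in the paper.

```python
import heapq

class LeastLoadedBalancer:
    """Dispatch each incoming task to the VM with the lowest current load.
    A common baseline against which optimized policies are compared."""

    def __init__(self, vm_ids):
        # min-heap of (current_load, vm_id)
        self.heap = [(0.0, vm) for vm in vm_ids]
        heapq.heapify(self.heap)

    def dispatch(self, task_load):
        load, vm = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (load + task_load, vm))
        return vm

balancer = LeastLoadedBalancer(["vm-1", "vm-2", "vm-3"])
for t in [0.4, 0.2, 0.7, 0.1]:
    print(balancer.dispatch(t))
```

More elaborate schemes extend this by accounting for task completion, VM heterogeneity, and migration cost, which is where the shortcomings analyzed in the paper arise.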


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Chunxia Yin ◽  
Jian Liu ◽  
Shunfu Jin

In recent years, the energy consumption of cloud data centers has continued to increase. A large number of servers run at low utilization, which results in a great waste of power. To save more energy in a cloud data center, we propose an energy-efficient task-scheduling mechanism with a switching-on/sleep mode for servers in the virtualized cloud data center. The key idea is that when the number of idle VMs reaches a specified threshold, the server with the most idle VMs is switched to sleep mode after all of its running tasks have been migrated to other servers. In terms of the total number of tasks and the number of servers in sleep mode in the system, we establish a two-dimensional Markov chain to analyse the proposed energy-efficient mechanism. Using the matrix-geometric solution method, we mathematically estimate the energy consumption and the response performance. Both numerical and simulation experiments show that the proposed energy-efficient mechanism effectively reduces energy consumption while guaranteeing response performance. Finally, by constructing a cost function, the number of VMs hosted on each server is optimized.
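The switching rule itself can be sketched procedurally as below; this is an illustrative sketch of the threshold-and-migrate idea under assumed data structures, not the paper's exact mechanism or its Markov-chain analysis.

```python
def consolidate(servers, idle_threshold):
    """One round of the threshold-based sleep rule described above.

    Each server dict carries 'id', 'idle_vms', 'running_tasks', 'asleep'."""
    awake = [s for s in servers if not s["asleep"]]
    if len(awake) < 2:
        return None
    if sum(s["idle_vms"] for s in awake) < idle_threshold:
        return None  # not enough idle VMs to trigger consolidation

    victim = max(awake, key=lambda s: s["idle_vms"])   # server with most idle VMs
    targets = [s for s in awake if s is not victim]
    if sum(s["idle_vms"] for s in targets) < len(victim["running_tasks"]):
        return None  # other servers cannot absorb the victim's tasks

    # migrate every running task to the target currently holding the most idle VMs
    for task in list(victim["running_tasks"]):
        dest = max(targets, key=lambda s: s["idle_vms"])
        dest["running_tasks"].append(task)
        dest["idle_vms"] -= 1
    victim["running_tasks"].clear()
    victim["asleep"] = True  # switch the emptied server to low-power sleep mode
    return victim["id"]
```

The threshold trades energy against responsiveness: a low threshold sleeps servers aggressively, while a high one keeps spare capacity awake, which is what the cost function in the paper balances.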


2021 ◽  
Author(s):  
ARIF ullah ◽  
Irshad Ahmed Abbasi ◽  
Muhammad Zubair Rehman ◽  
Tanweer Alam ◽  
Hanane Aznaoui

Abstract The infrastructure service model provides different kinds of virtual computing resources, such as networking, storage, and hardware, according to user demand. Host load prediction is an important element in cloud computing for improving resource allocation systems. Host initialization issues still exist in cloud computing; because of this problem, hardware resource allocation incurs several minutes of delay in the response process. To solve this issue, prediction techniques are used in the cloud data center to dynamically scale the cloud in order to maintain a high quality of service. Therefore, in this paper we propose a hybrid convolutional neural network and long short-term memory (CNN-LSTM) model for host load prediction. In the proposed hybrid model, a vector autoregression (VAR) method is first applied to the input data to filter out the linear interdependencies among the multivariate data. The residual data are then computed and fed into the convolutional neural network layer, which extracts complex features for each central processing unit and virtual machine usage component; after that, a long short-term memory network, which is suitable for modeling the temporal information of irregular trends in the time series components, is applied. Throughout this process, the main contribution is the use of the scaled polynomial constant unit (SPOCU) activation function, which is most suitable for this kind of model. Because of the high variability in data center workloads, accurate prediction is important in cloud systems. For this reason, two real-world load traces were used to evaluate the performance: one is a load trace from a Google data center, while the other comes from a traditional distributed system. The experimental results show that the proposed method achieves state-of-the-art performance with higher accuracy on both datasets compared with ARIMA-LSTM, VAR-GRU, VAR-MLP, and CNN models.
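As a rough structural sketch of such a hybrid predictor (not the authors' published architecture or hyperparameters), the VAR residuals can be fed through a 1-D convolution followed by an LSTM; the layer sizes are placeholders and ReLU stands in for the SPOCU activation used in the paper.

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """Illustrative CNN-LSTM host-load predictor: a 1-D convolution
    extracts local features from the VAR residual series, an LSTM
    models their temporal dependence, and a linear head forecasts
    the next load value."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.act = nn.ReLU()   # placeholder for the SPOCU activation
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):
        # x: (batch, time, features) -> Conv1d expects (batch, features, time)
        h = self.act(self.conv(x.transpose(1, 2)))
        out, _ = self.lstm(h.transpose(1, 2))
        return self.head(out[:, -1])   # next-step forecast from the last hidden state

# residuals: VAR residuals of CPU / VM usage series, shape (batch, window, features)
model = CNNLSTMForecaster(n_features=2)
pred = model(torch.randn(8, 24, 2))   # -> (8, 2) next-step load forecast
```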


2014 ◽  
Vol 2 (4) ◽  
pp. 32-51 ◽  
Author(s):  
Zhihui Lu ◽  
Soichi Takashige ◽  
Yumiko Sugita ◽  
Tomohiro Morimura ◽  
...  

Author(s):  
Sudhansu Shekhar Patra

Energy saving in a cloud computing environment is a multidimensional challenge: it can directly decrease operating costs and carbon dioxide emissions while improving system reliability. Task consolidation is the process of maximizing cloud computing resource utilization, which brings many benefits such as better use of resources, rationalization of maintenance, IT service customization, QoS, and reliable services. This article pursues energy saving through task consolidation by minimizing the number of unused resources in a cloud computing environment. Several task consolidation algorithms, namely MinIncreaseinEnergy, MaxUtilECTC, NoIdleMachineECTC, and NoIdleMachineMaxUtil, are presented that aim to optimize the energy consumption of the cloud data center. The outcomes show that the suggested algorithms surpass the existing ECTC, FCFSMaxUtil, and MaxMaxUtil algorithms in terms of CPU utilization and energy consumption.
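The general flavor of such consolidation heuristics can be captured by a greedy rule that assigns each arriving task to the active machine with the smallest power increase and activates an idle machine only when necessary; the sketch below is illustrative of this family, not the article's exact algorithms, and the linear power model and field names are assumptions.

```python
def energy_cost(machine_util, extra_util, idle_power=100.0, peak_power=200.0):
    """Power drawn (watts) by a machine at a given utilization,
    under an assumed linear idle-to-peak power model."""
    return idle_power + (peak_power - idle_power) * min(machine_util + extra_util, 1.0)

def consolidate_task(machines, task_util):
    """Greedy consolidation: place the task on the already-active machine
    with the smallest increase in power draw; activate an idle machine
    only if no active one has room."""
    best, best_delta = None, float("inf")
    for m in machines:
        if m["active"] and m["util"] + task_util <= 1.0:
            delta = energy_cost(m["util"], task_util) - energy_cost(m["util"], 0.0)
            if delta < best_delta:
                best, best_delta = m, delta
    if best is None:  # no active machine can host the task
        best = next((m for m in machines if not m["active"]), None)
        if best is None:
            return None  # the data center is saturated
        best["active"] = True
    best["util"] += task_util
    return best["id"]
```

Keeping tasks on already-active machines is what drives the "no idle machine" variants: an idle machine still draws its idle power, so leaving it off saves the largest share of energy.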


Cloud computing has led to the tremendous growth of IT organizations and serves as a means of delivering services to a large number of consumers globally by providing anywhere, anytime access to resources and services. The primary concern over the increasing energy consumption of cloud data centers is the massive emission of greenhouse gases, which contaminate the atmosphere and tend to worsen environmental conditions. The major part of this energy consumption comes from the large servers, high-speed storage devices, and cooling equipment present in cloud data centers. These serve as the basis for fulfilling the increasing need for computing resources and, in turn, add to the cost of resources. The goal is to focus on energy savings through effective utilization of resources, which necessitates a green-aware, energy-efficient framework for cloud data center networks. Software Defined Networking (SDN) is chosen because it allows the behaviour of the network to be studied from the overall perspective of the software layer, rather than through decisions made at each individual device as in conventional networks. The central objective of this paper is to survey various existing SDN-based energy-efficient cloud data center networks.

