A Decadal Walkthrough on Energy Modelling for Cloud Datacenters

Author(s):  
Ahan Chatterjee

Cloud computing is a rapidly growing field, and industries of every scale now depend on it. This large-scale adoption has resulted in enormous power consumption, which in turn increases the carbon footprint and harms the environment. Power usage in cloud servers therefore needs to be optimized. Various models have been proposed to tackle this problem. One is based on link load: it minimizes the per-bit energy consumption of network usage through energy-efficient routing and load balancing, with multi-constraint rerouting adopted on top. Another power model is a virtualization framework for multi-tenancy-oriented data centers, which accommodates heterogeneous networks among virtual machines in a virtual private cloud. A further strategy applies the cloud-partitioning concept using game theory. Other methods include a load-spreading algorithm based on shortest-path bridging, load balancing by speed scaling, load balancing using graph constraints, and the insert ranking method.

2020, Vol 10 (7), pp. 2323
Author(s):  
T. Renugadevi ◽  
K. Geetha ◽  
K. Muthukumar ◽  
Zong Woo Geem

Drastic variations in high-performance computing workloads have led to the commissioning of a large number of datacenters. To transform themselves into green datacenters, these facilities must reduce their energy consumption without compromising performance. Processor energy consumption is an important metric for power reduction in servers, as it accounts for up to 60% of total power consumption. In this research work, a power-aware algorithm (PA) and an adaptive harmony search algorithm (AHSA) are proposed for placing reserved virtual machines in datacenters to reduce server power consumption. Modifying the standard harmony search algorithm is necessary to suit this specific problem, whose global search space varies in each allocation interval. A task distribution algorithm is also proposed to distribute and balance the workload among servers and avoid over-utilization of servers; this sets it apart from traditional virtual machine consolidation approaches, which aim to keep the number of powered-on servers as low as possible. Different policies for overload host selection and virtual machine selection are discussed for load balancing. The observations confirm that AHSA outperforms the PA algorithm and the existing counterparts, yielding better results toward the objective.
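The abstract above does not give the AHSA details, but the core of any harmony-search placement can be sketched as follows: a memory of candidate VM-to-host placements is improvised upon using the harmony memory considering rate (HMCR) and pitch adjustment rate (PAR), with total server power as the objective. The linear power model, its idle/peak wattages, and all parameter values here are illustrative assumptions, not the paper's fitted values.

```python
import random

def power(util):
    """Hypothetical linear server power model: idle power plus a
    utilization-proportional dynamic part (watts). Figures are assumed."""
    return 70 + 30 * util

def total_power(placement, vm_cpu, host_cap):
    """Sum host power for a placement; capacity-violating placements
    are scored as infinitely expensive. Idle hosts are assumed off."""
    load = [0.0] * len(host_cap)
    for vm, host in enumerate(placement):
        load[host] += vm_cpu[vm]
    if any(l > c for l, c in zip(load, host_cap)):
        return float("inf")
    return sum(power(l / c) for l, c in zip(load, host_cap) if l > 0)

def harmony_search(vm_cpu, host_cap, hms=10, hmcr=0.9, par=0.3, iters=2000):
    """Minimal harmony search over VM placements (a sketch, not AHSA)."""
    n_vm, n_host = len(vm_cpu), len(host_cap)
    rand_placement = lambda: [random.randrange(n_host) for _ in range(n_vm)]
    memory = [rand_placement() for _ in range(hms)]
    for _ in range(iters):
        new = []
        for vm in range(n_vm):
            if random.random() < hmcr:            # reuse a memorized choice
                host = random.choice(memory)[vm]
                if random.random() < par:         # pitch-adjust to a neighbor
                    host = (host + random.choice([-1, 1])) % n_host
            else:                                 # random improvisation
                host = random.randrange(n_host)
            new.append(host)
        worst = max(range(hms),
                    key=lambda i: total_power(memory[i], vm_cpu, host_cap))
        if total_power(new, vm_cpu, host_cap) < total_power(memory[worst],
                                                            vm_cpu, host_cap):
            memory[worst] = new                   # replace the worst harmony
    return min(memory, key=lambda p: total_power(p, vm_cpu, host_cap))
```

The adaptive variant in the paper additionally reshapes the search space each allocation interval; this sketch keeps the space fixed for brevity.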


Author(s):  
Alekhya Orugonda ◽  
V. Kiran Kumar

Background: Minimizing bandwidth improves battery life, system reliability, and other environmental concerns, alongside energy optimization; providers also do everything within their power to reduce the amount of data that flows through their pipes. To increase resource utilization, task consolidation is an effective technique, greatly enabled by virtualization technologies, which facilitate the concurrent execution of several tasks and, in turn, reduce energy consumption. Representative approaches include MaxUtil, which aims to maximize resource utilization, and Energy-Conscious Task Consolidation, which explicitly takes into account both active and idle energy consumption. Method: This paper proposes an Energy Aware Cloud Load Balancing Technique (EACLBT) to improve performance in terms of energy and run time. It predicts the load of a host after VM allocation; if the prediction indicates the host would become overloaded, the VM is created on a different host instead. This minimizes the number of migrations caused by host overload, and the technique reduces both bandwidth and energy utilization. Results: The results show that the proposed energy-efficient method monitors energy consumption and supports static and dynamic system-level optimization. EACLBT reduces the number of powered-on physical machines and the average power consumption compared to other deployment algorithms with power saving. Besides minimizing bandwidth and energy consumption, it also reduces the number of executed instructions. Conclusion: This paper comprehensively describes EACLBT, which deploys virtual machines for power-saving purposes. Average power consumption is used as the performance metric, with the result of PALB as the baseline. It is shown that, on average, an idle server consumes approximately 70% of the power consumed by a server running at full CPU speed. The performance also holds better for common sub-utterance elimination. We can therefore say that the proposed EACLBT is effective in minimizing bandwidth and reducing energy consumption.
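The placement check EACLBT describes, predicting a host's post-allocation load and diverting the VM elsewhere when overload is predicted, can be sketched minimally as below. The data shapes, the first-fit scan order, and the 0.8 overload threshold are assumptions for illustration, not values from the paper.

```python
def pick_host(hosts, vm_load, threshold=0.8):
    """Place a VM on the first host whose *predicted* post-allocation
    utilization stays at or below the overload threshold; otherwise
    try the next host, avoiding a later migration.

    hosts: dict name -> (used_capacity, total_capacity), mutated on success.
    Returns the chosen host name, or None if every host would overload."""
    for name, (used, capacity) in hosts.items():
        predicted = (used + vm_load) / capacity
        if predicted <= threshold:
            hosts[name] = (used + vm_load, capacity)
            return name
    return None
```

For example, with `h1` already at 70% of capacity, a VM needing 15 units is predicted to push `h1` over the threshold, so it is created on `h2` instead; no migration is ever needed to correct the placement.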


2017, Vol 2017, pp. 1-13
Author(s):  
Zhihui Du ◽  
Rong Ge ◽  
Victor W. Lee ◽  
Richard Vuduc ◽  
David A. Bader ◽  
...  

We describe a family of power models that can capture the nonuniform power effects of speed scaling among homogeneous cores on multicore processors. These models depart from traditional ones, which assume that individual cores contribute to power consumption as independent entities. In our approach, we remove this independence assumption and employ statistical variables of core speed (average speed and the dispersion of the core speeds) to capture the comprehensive heterogeneous impact of subtle interactions among the underlying hardware. We systematically explore the model family, deriving basic and refined models that give progressively better fits, and analyze them in detail. The proposed methodology provides an easy way to build power models that reflect the realistic workings of current multicore processors more accurately. Moreover, unlike existing lower-level power models that require knowledge of microarchitectural details of the CPU cores and the last-level cache to capture core interdependency, ours are easier to use and scalable to emerging and future multicore architectures with more cores. These attributes make the models particularly useful to system users or algorithm designers who need a quick way to estimate power consumption. We evaluate the family of models on contemporary x86 multicore processors using the SPEC2006 benchmarks. Our best model yields an average prediction error as low as 5%.
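The key modelling idea, power as a function of speed statistics across cores rather than a sum of independent per-core terms, can be sketched as a tiny estimator. The functional form and every coefficient below are illustrative assumptions, not the fitted models from the paper.

```python
import statistics

def power_estimate(core_speeds, c0=25.0, c1=8.0, c2=3.0, alpha=2.0):
    """Sketch of a speed-statistics power model: a baseline term, a term in
    the *average* core speed, and a term in the *dispersion* (population
    std dev) of core speeds. Coefficients are made up for illustration;
    in practice they would be fitted by regression against measurements."""
    mu = statistics.mean(core_speeds)
    sigma = statistics.pstdev(core_speeds)
    return c0 + c1 * (mu ** alpha) + c2 * sigma
```

Note what an independent-cores model cannot express: two configurations with the same average speed but different spreads, e.g. `[2.0, 2.0, 2.0, 2.0]` versus `[1.0, 3.0, 1.0, 3.0]`, yield different estimates here because the dispersion term differs.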


Over the past few years, there has been keen research interest in load balancing and task scheduling in the cloud, as the extensive amount of data stored on servers leads to significantly increased load. This can be addressed with a hybrid algorithm in which the advantages of the honeybee behavior algorithm are integrated with fuzzy logic to conduct both task scheduling and load balancing in the cloud. The design of this hybrid algorithm aims to enhance prior approaches. It is developed based on the artificial bee colony (ABC) approach and merges the important QoS factors with power consumption so that the power that virtual machines (VMs) consume on the host can be precisely assessed, thereby enabling an efficient load balancing algorithm. The present study aims to evaluate the VMs' power consumption while taking into account crucial QoS factors for selecting the host and virtual machine best suited to receive a task. CloudSim was used to simulate the ILBA_HB algorithm. In terms of makespan, average response time, and degree of imbalance, the performance of the ILBA_HB algorithm is compared to that of the LBA_HB and HBB-LB algorithms. According to the results, the proposed algorithm outperformed LBA_HB and HBB-LB.
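The abstract does not publish the fuzzy rule base, but the idea of merging QoS factors with power consumption into a single host-selection score can be sketched with simple membership functions. The membership shapes, the weights, and the 250 W / 1 s normalization constants are all assumptions, not ILBA_HB's actual rules.

```python
def host_fitness(cpu_util, power_watts, response_ms,
                 w_cpu=0.4, w_power=0.3, w_resp=0.3):
    """Fuzzy-style fitness: map each metric into [0, 1] with a simple
    linear membership function, then take a weighted sum. Higher is
    better. All shapes and weights are illustrative assumptions."""
    free = max(0.0, 1.0 - cpu_util)                # more headroom is better
    frugal = max(0.0, 1.0 - power_watts / 250.0)   # assumed 250 W peak
    fast = max(0.0, 1.0 - response_ms / 1000.0)    # assumed 1 s worst case
    return w_cpu * free + w_power * frugal + w_resp * fast

def pick_best_host(hosts):
    """Choose the host with the highest fitness for the next task.
    hosts: dict name -> (cpu_util, power_watts, response_ms)."""
    return max(hosts, key=lambda h: host_fitness(*hosts[h]))
```

A lightly loaded, power-frugal, responsive host scores well on all three memberships and is selected ahead of a busy, hot, slow one.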


2019, Vol 8 (3), pp. 8527-8531

The Power consumption-Traffic aware-Improved Resource Intensity Aware Load balancing (PT-IRIAL) method was proposed to balance load in cloud computing by choosing the migration Virtual Machines (VMs) and the destination Physical Machines (PMs). In this paper, an Artificial Intelligence (AI) technique called Reinforcement Learning (RL) is introduced to determine an optimal time to migrate a selected VM to its selected destination PM. RL enables an agent to find the most appropriate time for VM migration based on resource utilization, power consumption, temperature, and traffic demand. RL is incorporated into the cloud environment by creating multiple state and action spaces. The state space is obtained by computing the resource utilization, power consumption, temperature, and traffic of the selected VMs. The action space consists of wait or migrate, learned through a reward function. Based on the learned actions, each selected VM either waits or migrates to its selected destination PM.
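The state/action/reward loop described above can be sketched as tabular Q-learning over a coarsely discretized state. The bucketing thresholds, learning rate, and discount factor are assumptions; the paper does not specify them in this abstract.

```python
import random

ACTIONS = ("wait", "migrate")

def discretize(util, power, temp, traffic):
    """State = (utilization, power, temperature, traffic), each bucketed
    as low/high. The thresholds here are illustrative assumptions."""
    return (util > 0.7, power > 150.0, temp > 60.0, traffic > 0.5)

def choose(q, state, eps=0.1):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

After enough experience in which migrating away from a hot, overloaded host is rewarded and waiting is penalized, the greedy policy picks "migrate" in that state and "wait" otherwise, which is exactly the timing decision PT-IRIAL delegates to the agent.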


Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Load balancing of non-preemptive independent tasks on virtual machines (VMs) is an important aspect of task scheduling in clouds. Whenever certain VMs are overloaded and the remaining VMs are underloaded with tasks for processing, the load has to be balanced to achieve optimal machine utilization. In this paper, we propose an algorithm named honey bee behavior inspired load balancing, which aims to achieve a well-balanced load across virtual machines for maximizing throughput. The proposed algorithm also balances the priorities of tasks on the machines in such a way that the waiting time of tasks in the queue is minimal. We have compared the proposed algorithm with existing load balancing and scheduling algorithms. The experimental results show that the algorithm is effective when compared with existing algorithms. Our approach shows a significant improvement in average execution time and a reduction in the waiting time of tasks in the queue, evaluated using the WorkflowSim simulator in Java.
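The honeybee analogy described above, tasks removed from overloaded VMs act like foraging bees that move to underloaded VMs, while priorities keep waiting times low, can be sketched as below. The queue-length load measure, the data shapes, and the priority ordering (lower number = higher priority) are assumptions for illustration.

```python
def balance(vms, threshold):
    """Honeybee-inspired sketch: surplus tasks on overloaded VMs 'forage'
    to the currently least-loaded VM; then each VM's queue is ordered so
    higher-priority tasks wait least.

    vms: list of {"queue": [{"priority": int, ...}, ...]} dicts, mutated."""
    for vm in vms:
        while len(vm["queue"]) > threshold:
            target = min(vms, key=lambda v: len(v["queue"]))
            if target is vm:          # no less-loaded VM exists; stop
                break
            target["queue"].append(vm["queue"].pop())
    # serve higher-priority (lower number) tasks first on every VM
    for vm in vms:
        vm["queue"].sort(key=lambda t: t["priority"])
```

With one VM holding four tasks and another idle, two surplus tasks migrate over and both queues end up sorted by priority, so no high-priority task sits behind low-priority work.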


2020, Vol 53 (5), pp. 1-41
Author(s):  
Weiwei Lin ◽  
Fang Shi ◽  
Wentai Wu ◽  
Keqin Li ◽  
Guangxin Wu ◽  
...  

2021
Author(s):  
Rashid Khogali

We synthesize online scheduling algorithms to optimally assign a set of arriving heterogeneous tasks to heterogeneous speed-scalable processors under the single threaded computing architecture. By using dynamic speed-scaling, where each processor's speed is able to dynamically change within hardware and software processing constraints, the goal of our algorithms is to minimize the total financial cost (in dollars) of response time and energy consumption (TCRTEC) of the tasks. In our work, the processors are heterogeneous in that they may differ in their hardware specifications with respect to maximum processing rate, power function parameters and energy sources. Tasks are heterogeneous in terms of computation volume, memory and minimum processing requirements. We also consider that the unit price of response time for each task is heterogeneous because the user may be willing to pay higher/lower unit prices for certain tasks, thereby increasing/decreasing their optimum processing rates. We model the overhead loading time incurred when a task is loaded by a given processor prior to its execution and assume it to be heterogeneous as well. Under the single threaded, single buffered computing architecture, we synthesize the SBDPP algorithm and its two other versions. Its first two versions allow the user to specify the unit price of energy and response time for executing each arriving task. The algorithm's second version extends the functionality of the first by allowing the user or the OS of the computing device to further modify a task's unit price of time or energy in order to achieve a linearly controlled operation point that lies somewhere in the economy-performance mode continuum of a task's execution. The algorithm's third version operates exclusively on the latter. We briefly extend the algorithm and its versions to consider migration, where an unfinished task is paused and resumed on another processor. 
The SBDPP algorithm is qualitatively compared against its two other versions. The SBDPP dispatcher is analytically shown to perform better than the well known Round Robin dispatcher in terms of the TCRTEC performance metric. Through simulations we deduce a relationship between the arrival rate of tasks, number of processors and response time of tasks. Under the Single threaded, multi-buffered computing architecture we have four contributions that constitute the SMBSPP algorithm. First, we propose a novel task dispatching strategy for assigning the tasks to the processors. Second, we propose a novel preemptive service discipline called Smallest remaining Computation Volume Per unit Price of response Time (SCVPPT) to schedule the tasks on the assigned processor. Third, we propose a dynamic speed-scaling function that explicitly determines the optimum processing rate of each task. Most of the simulations consider both stochastic and deterministic traffic conditions. Our simulation results show that SCVPPT outperforms the two known service disciplines, Shortest Remaining Processing Time (SRPT) and the First Come First Serve (FCFS), in terms of minimizing the TCRTEC performance metric. The results also show that the algorithm's dispatcher drastically outperforms the well known Round Robin dispatcher with cost savings exceeding 100% even when the processors are mildly heterogeneous. Finally, analytical and simulation results show that our speed scaling function performs better than a comparable speed scaling function in current literature. Under a fixed budget of energy, we synthesize the SMBAD algorithm which uses the micro-economic laws of Supply and Demand (LSD) to heuristically adjust the unit price of energy in order to extend battery life and execute more than 50% of tasks on a single processor (under the single threaded, multi buffered computing architecture). 
By extending all our multiprocessor algorithms to factor in the independent (battery) energy sources associated with each processor, we analytically show that load balancing effects are induced on heterogeneous parallel processors. This happens when the unit price of energy is adjusted by the battery level of each processor in accordance with LSD. Furthermore, we show that a variation of this load balancing effect also occurs when the heterogeneous processors share a single battery, as long as they operate at unconstrained processing rates.
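The speed-scaling trade-off underlying the TCRTEC objective can be worked out in closed form for the standard convex power model. For a task of volume V run at speed s with power k*s**a, response-time cost c_t*V/s, and energy cost c_e*k*V*s**(a-1), setting the derivative of the total cost to zero gives s* = (c_t / (c_e*k*(a-1)))**(1/a), clamped to the hardware's speed range. This is a textbook-style sketch under an assumed power function, not the thesis's exact speed-scaling function; all parameter defaults are illustrative.

```python
def optimal_speed(c_t, c_e, k=1.0, a=3.0, s_min=0.5, s_max=4.0):
    """Cost-minimizing processing rate for one task under dynamic speed
    scaling with power k*s**a (a > 1 assumed).

    Total cost(s) = c_t * V / s  +  c_e * k * V * s**(a - 1); V cancels
    out of the minimizer, so only the unit prices and power parameters
    matter. Higher c_t (user pays more for delay) raises the optimum;
    higher c_e (expensive energy) lowers it."""
    s_star = (c_t / (c_e * k * (a - 1))) ** (1.0 / a)
    return min(s_max, max(s_min, s_star))
```

This also illustrates the "economy-performance continuum" mentioned above: scaling a task's unit price of time up or down slides its optimum speed along this curve until a hardware bound is hit.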

