SinergyCloud: A simulator for evaluation of energy consumption in data centers and hybrid clouds

Author(s):  
Daniel G. Lago ◽  
Rodrigo A.C. da Silva ◽  
Edmundo R.M. Madeira ◽  
Nelson L.S. da Fonseca ◽  
Deep Medhi


Author(s):  
Uschas Chowdhury ◽  
Manasa Sahini ◽  
Ashwin Siddarth ◽  
Dereje Agonafer ◽  
Steve Branton

Modern data centers operate at high power levels to support increased power density, maintenance, and cooling, and account for almost 2 percent (70 billion kilowatt-hours) of total energy consumption in the US. IT components and the cooling system account for the major portion of this consumption. Although data centers are designed to perform efficiently, cooling high-density components remains a challenge, so alternative methods for improving cooling efficiency have become the main lever for reducing cooling cost. Because liquids offer higher specific heat capacity, density, and thermal conductivity, hybrid cooling can bring the benefits of liquid cooling to the hottest components in traditionally air-cooled servers. In this experiment, a 1U server is equipped with cold plates to cool the CPUs while the remaining components are cooled by fans. Predictive fan and pump failure analyses are performed, which also helps explore redundancy options and reduce cooling cost by improving cooling efficiency; planning redundancy requires knowledge of both planned and unplanned system failures. Since the main heat-generating components are liquid cooled, warm-water cooling can be employed to observe the effects of raised inlet conditions on a hybrid-cooled server under failure scenarios. ASHRAE liquid-cooling guidance class W4 is chosen, allowing operation over a coolant inlet range of 25°C to 45°C. The pump and fan failure scenarios are tested in separate experiments. Computational loads of idle, 10%, 30%, 50%, 70%, and 98% are applied while only one pump is powered, and the miniature dry-cooler fans are controlled externally to maintain a constant coolant inlet temperature. Because the remaining components such as the DIMMs and PCH are air cooled, maximum memory utilization is applied while the number of active fans is reduced step by step in the fan-failure scenario. Component temperatures and power consumption are recorded in each case for performance analysis.
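Below is a minimal sketch of how the fan-failure sweep described above could be scripted: the CPU load levels from the abstract are run against a decreasing number of active fans while component temperatures and server power are logged. The helper functions, fan counts, and settling time are illustrative placeholders for the bench instrumentation, which the abstract does not specify.

# Illustrative sketch only; instrumentation helpers are stand-in stubs.
import csv
import random
import time

CPU_LOADS = [0, 10, 30, 50, 70, 98]   # percent utilization (idle ... 98%), per the abstract
FAN_COUNTS = [4, 3, 2, 1]             # assumed fan-failure steps (illustrative)
INLET_C = 25.0                        # constant coolant inlet within ASHRAE W4 (25-45 C)

def set_cpu_load(pct): pass           # placeholder: drive a stress tool at pct% utilization
def set_active_fans(n): pass          # placeholder: disable fans to emulate failures
def read_sensors():                   # placeholder: return steady-state readings
    return {"cpu_c": 40 + random.random(), "dimm_c": 35.0,
            "pch_c": 45.0, "power_w": 180.0}

with open("fan_failure_sweep.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["load_pct", "fans", "inlet_c", "cpu_c", "dimm_c", "pch_c", "power_w"])
    for load in CPU_LOADS:
        for fans in FAN_COUNTS:
            set_cpu_load(load)
            set_active_fans(fans)
            time.sleep(1)             # stand-in for the wait to thermal steady state
            s = read_sensors()
            w.writerow([load, fans, INLET_C,
                        s["cpu_c"], s["dimm_c"], s["pch_c"], s["power_w"]])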


2018 ◽  
Vol 7 (2.8) ◽  
pp. 550 ◽  
Author(s):  
G Anusha ◽  
P Supraja

Cloud computing is a rapidly growing technology that provides the resources needed to perform complex tasks. These tasks are executed in data centers, which serve incoming workloads with resources such as CPU, storage, network bandwidth, and memory; this demand has driven a steady increase in the number of data centers worldwide. Data centers consume large volumes of energy, which leads to high operating costs. Computing resources, together with the air handling and cooling systems, are the main sources of power consumption, and energy consumption is roughly proportional to resource usage, so excessive consumption translates directly into large power bills. There is therefore a need to increase the energy efficiency of such data centers. We propose an Energy-Aware Dynamic Virtual Machine Consolidation (EADVMC) model that addresses the PM selection, VM selection, and VM placement phases, reducing energy consumption while maintaining Quality of Service (QoS) at an acceptable level.
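As an illustration, the following sketch shows the three phases named above in their simplest form: PM selection by utilization thresholds, VM selection from overloaded hosts, and power-aware VM placement. The thresholds, data layout, and policies are assumptions chosen for exposition, not the actual EADVMC parameters.

# Illustrative consolidation phases; thresholds and policies are assumed.
OVER_UTIL, UNDER_UTIL = 0.8, 0.2      # assumed CPU-utilization thresholds

def utilization(pm):
    return sum(vm["cpu"] for vm in pm["vms"]) / pm["capacity"]

def select_pms(pms):
    """PM selection: overloaded hosts must shed VMs, underloaded hosts are drained."""
    over = [p for p in pms if utilization(p) > OVER_UTIL]
    under = [p for p in pms if 0 < utilization(p) < UNDER_UTIL]
    return over, under

def select_vms(pm):
    """VM selection: migrate the smallest VMs off an overloaded host until it
    drops below the threshold (one simple policy among several possible)."""
    migrate = []
    for vm in sorted(pm["vms"], key=lambda v: v["cpu"]):
        if utilization(pm) <= OVER_UTIL:
            break
        pm["vms"].remove(vm)
        migrate.append(vm)
    return migrate

def place_vms(vms, pms):
    """VM placement: power-aware best fit -- put each VM on the active host that
    stays fullest without becoming overloaded."""
    for vm in vms:
        candidates = [p for p in pms
                      if utilization(p) + vm["cpu"] / p["capacity"] <= OVER_UTIL]
        if candidates:
            max(candidates, key=utilization)["vms"].append(vm)
        else:
            # fall back: place on the least-utilized host (e.g. wake a standby host)
            min(pms, key=utilization)["vms"].append(vm)

# Example run on two hosts of capacity 100
pms = [{"capacity": 100, "vms": [{"cpu": 90}]}, {"capacity": 100, "vms": [{"cpu": 10}]}]
over, under = select_pms(pms)
to_move = [vm for p in over for vm in select_vms(p)]
place_vms(to_move, pms)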


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading local resources increases the energy consumption of the transport network and the data centers, although it is advantageous in terms of the energy consumption of the end hosts. This chapter presents a detailed survey of existing mechanisms that aim at designing the Internet backbone with data centers, with the objective of energy-efficient delivery of cloud services. The survey is followed by a case study in which Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum power-saving cloud services, assuming that high-performance data centers are located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes in a taxonomy of their pros and cons, followed by a discussion of research challenges and opportunities.
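To make the flavor of such formulations concrete, here is a small sketch of a MILP that assigns cloud demands to data centers co-located with core nodes so that combined server and transport power is minimized, written with the PuLP modeling library. The node names, power coefficients, and transport-power terms are illustrative assumptions, not the chapter's actual IP-over-WDM model.

# Illustrative MILP sketch; all parameter values are assumed.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

demands = {"d1": 40, "d2": 25, "d3": 60}          # demand -> required server units
dcs = {"A": 100, "B": 80}                         # data center -> server capacity
p_server = 200.0                                  # W per server unit (assumed)
p_transport = {("d1", "A"): 50, ("d1", "B"): 120, # W per demand-DC route (assumed,
               ("d2", "A"): 90, ("d2", "B"): 40,  # e.g. proportional to hop count)
               ("d3", "A"): 30, ("d3", "B"): 150}

x = LpVariable.dicts("assign", [(d, c) for d in demands for c in dcs], cat=LpBinary)

prob = LpProblem("cloud_provisioning", LpMinimize)
prob += lpSum(x[d, c] * (demands[d] * p_server + p_transport[d, c])
              for d in demands for c in dcs)      # total power objective

for d in demands:                                 # each demand served by exactly one DC
    prob += lpSum(x[d, c] for c in dcs) == 1
for c in dcs:                                     # data center capacity constraint
    prob += lpSum(x[d, c] * demands[d] for d in demands) <= dcs[c]

prob.solve()
for (d, c), var in x.items():
    if var.value() == 1:
        print(f"{d} -> data center {c}")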


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of those services on limited local resources. It provides access to remote computing resources via Web services without exposing to the end user how the IT infrastructure is managed. Despite the novelties and advantages of cloud computing, deploying a large number of servers and data centers introduces the challenge of high energy consumption, and transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies within the data center, addressing aspects such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to cool data centers, which can be grouped mainly into virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Author(s):  
Federico Larumbe ◽  
Brunilde Sansò

This chapter addresses a set of optimization problems that arise in cloud computing regarding the location and resource allocation of the cloud computing entities: data centers, servers, software components, and virtual machines. The first problem is the location of new data centers and the selection of existing ones, since those decisions have a major impact on network efficiency, energy consumption, Capital Expenditures (CAPEX), Operational Expenditures (OPEX), and pollution. The chapter also addresses the Virtual Machine Placement Problem: deciding which server should host which virtual machine. The number of servers used, the cost, and the energy consumption depend strongly on those decisions, and network traffic between VMs and users, and among the VMs themselves, is also an important factor. The third problem presented in this chapter is the dynamic provisioning of VMs to clusters, or auto scaling, to minimize cost and energy consumption while satisfying the Service Level Agreements (SLAs). This important feature of cloud computing requires predictive models that accurately anticipate workload dimensions. For each problem, the authors describe and analyze models proposed in the literature and in industry, explain their advantages and disadvantages, and present challenging future research directions.
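As a simple illustration of the predictive auto-scaling idea mentioned above, the sketch below forecasts the next-interval request rate with exponential smoothing and sizes the VM cluster accordingly. The per-VM capacity, smoothing factor, and headroom are assumed values for exposition, not figures from the chapter.

# Illustrative predictive auto-scaling sketch; parameters are assumed.
import math

ALPHA = 0.5              # smoothing factor (assumed)
REQS_PER_VM = 100.0      # requests/s one VM can serve within the SLA (assumed)
HEADROOM = 1.2           # 20% safety margin against forecast error (assumed)
MIN_VMS = 1

def forecast(history, alpha=ALPHA):
    """Exponentially smoothed one-step-ahead forecast of the request rate."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

def vms_needed(history):
    predicted = forecast(history)
    return max(MIN_VMS, math.ceil(predicted * HEADROOM / REQS_PER_VM))

# Example: a rising workload trace (requests per second per interval)
trace = [120, 150, 210, 260, 330]
print("provision", vms_needed(trace), "VMs for the next interval")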


Author(s):  
Rashmi Rai ◽  
G. Sahoo

The ever-rising demand for computing services and the enormous amount of data generated every day have led to the proliferation of power-hungry data centers across the globe. These large-scale data centers consume huge amounts of power and emit considerable amounts of CO2. There has been significant work on reducing energy consumption and carbon footprints using heuristics for the dynamic virtual machine consolidation problem. Here we approach the problem differently by using utility functions, which are widely used in economic modeling to represent user preferences. Our approach employs a metaheuristic genetic algorithm whose fitness is evaluated with the utility function to drive virtual machine consolidation through migration within the cloud environment. Initial results, compared with the existing state of the art, show a modest but significant improvement in both energy consumption and overall SLA violations.
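A minimal sketch of this idea follows: a genetic algorithm searches over VM-to-host assignments, and each candidate is scored by a utility function that trades off estimated energy use against SLA violation. The power model, utility weights, and GA parameters are illustrative assumptions rather than those of the cited work.

# Illustrative GA with a utility-based fitness; all parameters are assumed.
import random

VM_CPU = [30, 20, 50, 10, 40, 25]      # VM CPU demands (illustrative)
HOST_CAP = [100, 100, 100]             # host capacities (illustrative)
P_IDLE, P_MAX = 100.0, 250.0           # simple linear host power model (assumed)
W_ENERGY, W_SLA = 0.5, 0.5             # utility weights (assumed)

def power(assignment):
    total = 0.0
    for h, cap in enumerate(HOST_CAP):
        load = sum(VM_CPU[v] for v, host in enumerate(assignment) if host == h)
        if load > 0:
            total += P_IDLE + (P_MAX - P_IDLE) * min(load / cap, 1.0)
    return total

def sla_violation(assignment):
    """Fraction of capacity by which hosts are overcommitted (proxy for SLA risk)."""
    over = 0.0
    for h, cap in enumerate(HOST_CAP):
        load = sum(VM_CPU[v] for v, host in enumerate(assignment) if host == h)
        over += max(0.0, load - cap) / cap
    return over

def utility(assignment):
    """Higher is better: penalize normalized energy use and SLA violation."""
    return -(W_ENERGY * power(assignment) / (P_MAX * len(HOST_CAP))
             + W_SLA * sla_violation(assignment))

def evolve(pop_size=40, generations=100, mutation_rate=0.1):
    pop = [[random.randrange(len(HOST_CAP)) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=utility, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(VM_CPU))      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:         # mutate one gene
                child[random.randrange(len(VM_CPU))] = random.randrange(len(HOST_CAP))
            children.append(child)
        pop = parents + children
    return max(pop, key=utility)

best = evolve()
print("best placement:", best, "utility:", round(utility(best), 4))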

