Flexibility-Based Energy and Demand Management in Data Centers: A Case Study for Cloud Computing

Energies, 2019, Vol 12 (17), pp. 3301
Author(s):  
Robert Basmadjian

The power demand (kW) and energy consumption (kWh) of data centers have increased drastically due to the growing communication and computation needs of IT services. Demand and energy management within data centers is therefore a necessity. Thanks to automated ICT infrastructure empowered by IoT technology, such management is becoming more feasible than ever. In this paper, we look at management from two different perspectives: (1) minimization of overall energy consumption and (2) reduction of peak power demand during demand-response periods. Both perspectives have a positive impact on the total cost of ownership of data centers. We exhaustively reviewed the mechanisms in data centers that provide flexibility, together with flexible contracts such as green service-level and supply-demand agreements. We extended the state of the art by introducing the methodological building blocks and foundations of management systems for the two perspectives above. We validated our results by conducting experiments on a lab-scale cloud computing data center at the premises of HPE in Milano. The obtained results support the theoretical model by highlighting the excellent potential of flexible service-level agreements in Green IT: 33% overall energy savings and 50% power demand reduction during demand-response periods in the case of data center federation.

Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of the services on limited local resources. Cloud computing provides access to distant computing resources via Web services while the end user is not aware of how the IT infrastructure is managed. Besides the novelties and advantages of cloud computing, deployment of a large number of servers and data centers introduces the challenge of high energy consumption. Additionally, transportation of IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects such as reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches toward cool data centers, which can be mainly grouped as studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Author(s):  
Brian J. Watson ◽  
Amip J. Shah ◽  
Manish Marwah ◽  
Cullen E. Bash ◽  
Ratnesh K. Sharma ◽  
...  

The environmental impact of data centers is significant and is growing rapidly. However, there are many opportunities for greater efficiency through integrated design and management of data center components. To that end, we propose a sustainable data center that replaces conventional services in the physical infrastructure with more environmentally friendly IT services. We have identified five principles for achieving this vision: data center scale lifecycle design, flexible and configurable building blocks, pervasive sensing, knowledge discovery and visualization, and autonomous control. We describe these principles and present specific use cases for their application. Successful implementation of the sustainable data center vision will require multi-disciplinary collaboration across various research and industry communities.


Author(s):  
Sardar Khaliq Uzaman ◽  
Atta ur Rehman Khan ◽  
Junaid Shuja ◽  
Tahir Maqsood ◽  
Faisal Rehman ◽  
...  

Data center facilities play a vital role in present and forthcoming information and communication technologies. Internet giants such as IBM, Microsoft, Google, Yahoo, and Amazon operate large data centers to provide cloud computing services and web hosting applications. Due to rapid growth in data center size and complexity, it is essential to highlight important design aspects and challenges of data centers. This article presents market segmentation of the leading data center operators and discusses the infrastructural considerations, namely energy consumption, power usage effectiveness, cost structure, and system reliability constraints. Moreover, it presents data center network design, classification of data center servers, recent developments, and future trends of the data center industry. Furthermore, the emerging paradigm of mobile cloud computing is discussed with respect to its research issues. Preliminary results for the energy consumption of task scheduling techniques are also provided.


2019, Vol 8 (1), pp. 18-21
Author(s):  
Lakshmi Digra ◽  
Sharanjeet Singh

Data centers are critical, energy-hungry infrastructures that run large-scale Internet-based services. Energy consumption models are essential for designing and improving energy-efficient operations and for reducing excessive energy consumption in data centers. This paper presents a survey of energy efficiency in data centers and explains why it matters. It also describes the increasing worldwide demand for data centers and the reasons data centers are energy-inefficient. We define the challenges of implementing changes in data centers and explain why and how the energy requirements of data centers are growing. We then compare the German data center market at the international level and examine the energy consumption of data centers and servers in Germany from 2010 to 2016.


2013, Vol 284-287, pp. 3597-3603
Author(s):  
Cheng Jen Tang ◽  
Miau Ru Dai

Demand response (DR) is an important ingredient of the emerging smart grid and is regarded as its killer application. The continuously growing energy consumption of data centers makes them promising candidates with significant potential for DR. Participating in DR programs gives data centers a financial resource in addition to service income. On the other hand, some government organizations offer considerable incentives to promote energy-saving actions for facilities holding certain certifications. The Leadership in Energy and Environmental Design (LEED) rating system developed by the U.S. Green Building Council (USGBC) is one of the most popular certification systems. LEED uses Power Usage Effectiveness (PUE) as one of its metrics for quantifying how energy-efficient a data center is. The goal of PUE is to improve the energy efficiency of a data center. DR programs, however, require participants to temporarily reduce their power demand on some occasions, with little concern for energy efficiency. To enjoy incentives from LEED certification, data center administrators need to know whether DR participation hampers the established PUE of their facilities. This paper examines power consumption models from prior studies and identifies the constraints introduced by PUE for data centers participating in DR programs. The examination reveals that the ratios of static power consumption to the dynamic power demand range of different types of data center equipment do affect PUE when demand reduction efforts are taken. With this finding, facility managers of data centers have a clear picture of what to expect from DR participation and what to adjust in their data center equipment.
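The interaction between static power and PUE during a DR event can be illustrated with a short sketch. This is not the paper's model; the power split and all numbers below are hypothetical, chosen only to show why curtailing dynamic load while static power stays fixed can worsen PUE.

```python
# Illustrative sketch (not the paper's model): how the static/dynamic power
# split of IT vs. facility equipment changes PUE during a DR event.
# All power figures below are hypothetical.

def pue(it_static, it_dynamic, fac_static, fac_dynamic):
    """PUE = total facility power / IT equipment power."""
    it = it_static + it_dynamic
    total = it + fac_static + fac_dynamic
    return total / it

# Normal operation: 200 kW IT (80 static / 120 dynamic),
# 100 kW cooling and power delivery (60 static / 40 dynamic).
baseline = pue(80, 120, 60, 40)   # (200 + 100) / 200 = 1.5

# DR event: dynamic loads are cut by 50%; static power is unaffected.
during_dr = pue(80, 60, 60, 20)   # (140 + 80) / 140 ≈ 1.571

# Absolute demand falls from 300 kW to 220 kW, yet PUE *worsens*,
# because static power dominates the curtailed mix.
print(round(baseline, 3), round(during_dr, 3))
```

The example makes the abstract's point concrete: a facility can meet its DR commitment while its reported PUE degrades, purely because of the static-to-dynamic power ratio.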


Author(s):  
Gurpreet Singh ◽  
Manish Mahajan ◽  
Rajni Mohana

BACKGROUND: Cloud computing is an on-demand service model in which applications run in data centers on a pay-per-use basis. To allocate resources appropriately and satisfy user needs, an effective and reliable resource allocation method is required. Because of growing user demand, resource allocation has become a complex and challenging task: when a physical machine is overloaded, Virtual Machines share its load by utilizing the resources of other physical machines. Previous studies fall short on energy consumption and time management when Virtual Machines on different servers are kept powered on. AIM AND OBJECTIVE: The main aim of this research work is to propose an effective resource allocation scheme for allocating Virtual Machines from an ad hoc sub-server. EXECUTION MODEL: The research was carried out in two sections: first, Virtual Machines and Physical Machines are located on the server; subsequently, the allocation is cross-validated. Virtual Machines are sorted with a Modified Best Fit Decreasing algorithm, and Multi-Machine Job Scheduling is used to place jobs on an appropriate host. An Artificial Neural Network classifier allocates jobs to hosts. Measures such as Service Level Agreement violation and energy consumption are considered, and fruitful results were obtained, with a 37.7% reduction in energy consumption and a 15% improvement in Service Level Agreement violation.
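The abstract does not spell out its *Modified* Best Fit Decreasing variant, but the classic Best Fit Decreasing heuristic it builds on can be sketched briefly: sort VMs by descending demand, then place each on the open host whose remaining capacity fits it most tightly. The VM demands and host capacity below are hypothetical.

```python
# Classic Best Fit Decreasing (BFD) sketch for VM placement; the paper's
# "Modified" variant is not specified in the abstract, so this shows only
# the base heuristic. Demands and capacities are hypothetical.

def best_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (by resource demand) onto the fewest equal-capacity hosts."""
    hosts = []       # remaining capacity of each opened host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        # choose the open host whose leftover capacity fits most tightly
        best = min((h for h in range(len(hosts)) if hosts[h] >= demand),
                   key=lambda h: hosts[h], default=None)
        if best is None:              # no open host fits: power on a new one
            hosts.append(host_capacity)
            best = len(hosts) - 1
        hosts[best] -= demand
        placement[vm] = best
    return placement, len(hosts)

demands = {"vm1": 50, "vm2": 30, "vm3": 40, "vm4": 60, "vm5": 20}
placement, used = best_fit_decreasing(demands, host_capacity=100)
print(used, placement)   # 200 units of demand pack into 2 hosts of 100
```

Sorting by decreasing demand first tends to leave small VMs to fill the gaps left by large ones, which is why BFD usually opens fewer hosts than first-come placement.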


Author(s):  
Ratnesh Sharma ◽  
Rocky Shih ◽  
Alan McReynolds ◽  
Cullen Bash ◽  
Chandrakant Patel ◽  
...  

Fresh water is one of the few resources which is scarce and has no replacement; it is also closely coupled to energy consumption. Fresh water usage for power generation and other cooling applications is well known and accounts for 40% of total freshwater withdrawal in the U.S. [1]. A significant amount of energy is embedded in the consumption of water for conveyance, treatment and distribution of water. Waste water treatment plants also consume a significant amount of energy. For example, water distribution systems and water treatment plants consume 1.3 MWh and 0.5 MWh [2], respectively, for every million gallons of water processed. Water consumption in data centers is often overlooked due to low cost impact compared to energy and other consumables. With the current trend towards local onsite generation [3], the role of water in data centers is more crucial than ever. Apart from actual water consumption, the impact of embedded energy in water is only beginning to be considered in water end-use analyses conducted by major utilities [4]. From a data center end-use perspective, water usage can be characterized as direct, for cooling tower operation, and indirect, for power generation to operate the IT equipment and cooling infrastructure [5]. In the past, authors have proposed and implemented metrics to evaluate direct and indirect water usage using an energy-based metric. These metrics allow assessment of water consumption at various power consumption levels in the IT infrastructure and enable comparison with other energy efficiency metrics within a data center or among several data centers [6]. Water consumption in data centers is a function of power demand, outside air temperature and water quality. While power demand affects both direct and indirect water consumption, water quality and outside air conditions affect direct water consumption. Water from data center infrastructure is directly discharged in various forms such as water vapor and effluent from cooling towers.
Classification of direct water consumption is one of the first steps towards optimization of water usage. Subsequently, data center processes can be managed to reduce water intake and discharge. In this paper, we analyze water consumption from data center cooling towers and propose techniques to reuse and reduce water in the data center.
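A back-of-envelope sketch of the "embedded energy" idea discussed above, using the per-million-gallon figures quoted in the abstract (1.3 MWh for distribution, 0.5 MWh for treatment). The cooling-tower water volume is a hypothetical example value, not data from the paper.

```python
# Embedded energy of a data center's direct water use, using the figures
# quoted in the abstract: 1.3 MWh (distribution) and 0.5 MWh (treatment)
# per million gallons. The annual volume below is hypothetical.

MWH_PER_MGAL_DISTRIBUTION = 1.3
MWH_PER_MGAL_TREATMENT = 0.5

def embedded_energy_mwh(gallons):
    """Energy embedded in conveying and treating the given water volume."""
    mgal = gallons / 1_000_000
    return mgal * (MWH_PER_MGAL_DISTRIBUTION + MWH_PER_MGAL_TREATMENT)

# e.g. a cooling tower drawing 5 million gallons per year:
annual = embedded_energy_mwh(5_000_000)
print(annual)  # 9.0 MWh/year embedded in the water supply chain
```

Even this crude figure shows why water reuse and reduction in the cooling loop pay off twice: once in withdrawn water and once in upstream energy.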


2021, Vol 12 (1), pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world with huge numbers of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs. Thus, optimizing the energy consumption of servers and networks in data centers can reduce operational costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate the computation and communication onto a smaller number of servers and network devices and then power off as many unneeded servers and network devices as possible.
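The consolidate-then-power-off strategy described above can be sketched with a toy first-fit-decreasing repack. The linear server power model and all load figures are hypothetical; the point is only to show how emptying servers converts their idle power into savings.

```python
# Toy sketch of the consolidation strategy: repack workloads onto as few
# servers as possible, then count the idle power saved by powering off the
# emptied servers. Power figures and loads are hypothetical.

IDLE_W, PEAK_W, CAPACITY = 100.0, 200.0, 1.0  # per server (load in [0, 1])

def power(load):
    """Linear server power model: idle power plus a load-proportional part."""
    return IDLE_W + (PEAK_W - IDLE_W) * load

def consolidate(loads):
    """First-fit-decreasing repack; returns per-server loads after packing."""
    servers = []
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= CAPACITY:
                servers[i] = used + load
                break
        else:                      # nothing fits: keep one more server on
            servers.append(load)
    return servers

before = [0.3, 0.2, 0.4, 0.1, 0.25, 0.15]   # six lightly loaded servers
after = consolidate(before)                  # packs into two servers
saved = sum(power(l) for l in before) - sum(power(l) for l in after)
print(len(after), round(saved, 1))           # idle power of 4 servers saved
```

Because total load is unchanged, the entire saving here is the idle power of the four servers that can be switched off, which is exactly the effect the abstract describes.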


Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading local resources increases the energy consumption of the transport network and the data centers, although it is advantageous in terms of the energy consumption of the end hosts. This chapter presents a detailed survey of the existing mechanisms that aim at designing the Internet backbone with data centers with the objective of energy-efficient delivery of cloud services. The survey is followed by a case study where Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum-power-saving cloud services, where high-performance data centers are assumed to be located at the core nodes of an IP-over-WDM network. The chapter is concluded by summarizing the surveyed schemes with a taxonomy including their pros and cons. The summary is followed by a discussion focusing on the research challenges and opportunities.
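The chapter's provisioning models are MILPs; as a toy stand-in, the same kind of decision can be brute-forced on a tiny instance: assign each service request to a data center so that total power is minimized subject to a delay bound. The data centers, delays, and power coefficients below are all hypothetical, and real MILP formulations would use a solver rather than enumeration.

```python
# Toy stand-in for an MILP provisioning model: enumerate assignments of
# requests to data centers, minimizing power subject to a delay guarantee.
# All node, power, and delay numbers are hypothetical.
from itertools import product

datacenters = {"A": {"power_per_req": 1.0}, "B": {"power_per_req": 1.4}}
delay = {("r1", "A"): 30, ("r1", "B"): 10,   # ms from each request's source
         ("r2", "A"): 12, ("r2", "B"): 25}
requests, MAX_DELAY = ["r1", "r2"], 20

best = None
for assign in product(datacenters, repeat=len(requests)):
    plan = dict(zip(requests, assign))
    if any(delay[(r, dc)] > MAX_DELAY for r, dc in plan.items()):
        continue                     # violates the delay guarantee
    cost = sum(datacenters[dc]["power_per_req"] for dc in plan.values())
    if best is None or cost < best[0]:
        best = (cost, plan)
print(best)   # cheapest plan that respects the 20 ms bound
```

The enumeration makes the trade-off explicit: the delay constraint forces r1 onto the more power-hungry data center B, so the minimum-power objective only applies within the feasible (delay-respecting) assignments, exactly the structure of the MILP described above.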

