Data Center Simulation for Resource Utilization Analysis at Extended Loads

Author(s):  
Mathew Koshy Karunattu ◽  
S. Sayanth ◽  
Akash Vijayan ◽  
Ansamma John ◽  
Manu J Pillai


2016 ◽  
Vol 2016 ◽  
pp. 1-13
Author(s):  
HeeSeok Choi ◽  
JongBeom Lim ◽  
Heonchang Yu ◽  
EunYoung Lee

We consider a cloud data center in which the service provider supplies virtual machines (VMs) on hosts, or physical machines (PMs), to its subscribers for computation in an on-demand fashion. For this cloud data center, we propose a task consolidation algorithm based on task classification (i.e., computation-intensive and data-intensive) and resource utilization (e.g., CPU and RAM). Furthermore, we design a VM consolidation algorithm to balance task execution time and energy consumption without violating a predefined service level agreement (SLA). Unlike existing research on VM consolidation or scheduling, which applies either no threshold or a single-threshold scheme, we focus on a double-threshold (upper and lower) scheme for VM consolidation. More specifically, when a host operates with resource utilization below the lower threshold, all VMs on the host are scheduled to be migrated to other hosts and the host is then powered down; when a host operates with resource utilization above the upper threshold, a VM is migrated away to avoid reaching 100% resource utilization. Based on experimental performance evaluations with real-world traces, we show that our task classification based energy-aware consolidation algorithm (TCEA) achieves a significant energy reduction without incurring predefined SLA violations.
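The double-threshold rule can be sketched as follows. This is a minimal illustration, not the paper's TCEA implementation; the threshold values, host names, and per-VM utilization figures are assumptions for the example.

```python
LOWER, UPPER = 0.2, 0.8  # illustrative utilization thresholds (fractions of capacity)

def consolidation_actions(hosts):
    """hosts: dict host_id -> list of per-VM utilization fractions.
    Returns (vms_to_migrate, hosts_to_power_down)."""
    migrate, power_down = [], []
    for host, vms in hosts.items():
        util = sum(vms)
        if util < LOWER:
            # Under-utilized: migrate every VM away, then power the host down.
            migrate.extend((host, i) for i in range(len(vms)))
            power_down.append(host)
        elif util > UPPER:
            # Over-utilized: migrate one VM (here, the largest)
            # so utilization does not approach 100%.
            biggest = max(range(len(vms)), key=lambda i: vms[i])
            migrate.append((host, biggest))
    return migrate, power_down

moves, offline = consolidation_actions({
    "h1": [0.05, 0.08],  # 0.13 < 0.2: drain and power down
    "h2": [0.50, 0.45],  # 0.95 > 0.8: migrate the largest VM
    "h3": [0.30, 0.20],  # within the band: left untouched
})
```

Hosts inside the band are deliberately left alone, which is what distinguishes the double-threshold scheme from single-threshold approaches.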


2014 ◽  
Vol 644-650 ◽  
pp. 2961-2964
Author(s):  
Xiao Long Tan ◽  
Wen Bin Wang ◽  
Yu Qin Yao

With the rapid growth of data volumes and Internet applications, the data center has been widely deployed as an efficient and promising infrastructure. Data centers provide platforms for a variety of network services and applications, such as video streaming and cloud computing. All of these services and applications demand storage, compute, bandwidth, and low latency. Existing data centers lack flexibility, so they offer poor support for QoS, deployability, manageability, and defense against attacks. Virtualized data centers are a good solution to these problems: compared with conventional data centers, they achieve better resource utilization, scalability, and flexibility.


Efficient resource utilization plays a vital role in cloud computing, since the shared computational power of the resources is offered on demand. During dynamic resource allocation, a server may become over-utilized or under-utilized, leading to excess energy consumption in the data center. The proposed system therefore detects over-utilization and under-utilization of CPU and RAM, and also considers network bandwidth usage, to reduce power consumption in the cloud data center. Hence, a novel method is used for minimizing power consumption in the data center.
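A sketch of flagging over- and under-utilized servers from CPU, RAM, and bandwidth usage, paired with a linear power model. The thresholds, weights, and power figures here are illustrative assumptions, not values from the abstract above.

```python
P_IDLE, P_MAX = 100.0, 250.0  # example server power draw in watts
UNDER, OVER = 0.25, 0.85      # illustrative utilization thresholds

def combined_util(cpu, ram, bw, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of CPU, RAM, and bandwidth utilization (each 0..1)."""
    return weights[0] * cpu + weights[1] * ram + weights[2] * bw

def classify(cpu, ram, bw):
    """Return the server's state and its estimated power draw."""
    u = combined_util(cpu, ram, bw)
    state = "under" if u < UNDER else "over" if u > OVER else "normal"
    power = P_IDLE + (P_MAX - P_IDLE) * u  # common linear power model
    return state, power

print(classify(0.10, 0.20, 0.05))  # lightly loaded server: flagged "under"
```

An "under" server is a candidate for draining and powering down; an "over" server is a candidate for VM migration.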


Author(s):  
SIVARANJANI BALAKRISHNAN ◽  
SURENDRAN DORAISWAMY

Data centers are becoming the main backbone of and centralized repository for all cloud-accessible services in on-demand cloud computing environments. In particular, virtual data centers (VDCs) facilitate the virtualization of all data center resources such as computing, memory, storage, and networking equipment as a single unit. It is necessary to use the data center efficiently to improve its profitability. The essential factor that significantly influences efficiency is the average number of VDC requests serviced by the infrastructure provider, and the optimal allocation of requests improves the acceptance rate. In existing VDC request embedding algorithms, data center performance factors such as resource utilization rate and energy consumption are not taken into consideration. This motivated us to design a strategy for improving the resource utilization rate without increasing the energy consumption. We propose novel VDC embedding methods based on row-epitaxial and batched greedy algorithms inspired by bioinformatics. These algorithms embed new requests into the VDC while reembedding previously allocated requests. Reembedding is done to consolidate the available resources in the VDC resource pool. The experimental testbed results show that our algorithms boost the data center objectives of high resource utilization (by improving the request acceptance rate), low energy consumption, and short VDC request scheduling delay, leading to an appreciable return on investment.
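The reembedding idea above can be illustrated with a much simpler stand-in: before admitting a new VDC request, repack all previously placed requests together with the new one. This single-resource first-fit-decreasing sketch is an assumption for illustration; the paper's row-epitaxial and batched greedy algorithms are considerably more involved.

```python
def first_fit_decreasing(requests, capacities):
    """Place each (id, demand) request on the first host with room,
    largest demands first. Returns id -> host index, or None on failure."""
    free = list(capacities)
    placement = {}
    for rid, demand in sorted(requests, key=lambda r: -r[1]):
        for h, room in enumerate(free):
            if demand <= room:
                free[h] -= demand
                placement[rid] = h
                break
        else:
            return None  # no host can take this request
    return placement

def admit(existing, new_request, capacities):
    """Try to admit new_request by reembedding all requests together."""
    return first_fit_decreasing(existing + [new_request], capacities)

# Two hosts of capacity 10. Placed in arrival order, a/b land on host 0
# (free 6) and c on host 1 (free 2), so a demand of 7 would be rejected.
# Reembedding consolidates the pool and accepts it.
print(admit([("a", 2), ("b", 2), ("c", 8)], ("d", 7), [10, 10]))
```

Raising the acceptance rate this way is exactly the lever the abstract identifies for improving utilization without extra hardware.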


2021 ◽  
Vol 39 (1B) ◽  
pp. 203-208
Author(s):  
Haider A. Ghanem ◽  
Rana F. Ghani ◽  
Maha J. Abbas

Data centers are the main nerve of the Internet because they host storage, cloud computing, and other services. All of these services require substantial work and resources, such as energy and cooling. The main problem is how to improve the operation of data centers by increasing resource utilization, using virtual host simulation to exploit all server resources. In this paper we consider memory resources: virtual machines are distributed to hosts by comparing each virtual machine's memory requirement with the memory available on each host and placing the virtual machine on the most appropriate host. This reduces the number of host machines in the data center and improves data center performance in terms of power consumption, the number of servers used, and cost.
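The memory-matching placement described above can be sketched as a best-fit heuristic that powers on a new host only when no running host fits. The memory sizes and the tightest-fit policy are illustrative assumptions, not the paper's exact method.

```python
def best_fit_by_memory(vm_mem, host_mem):
    """Assign each VM (RAM in MB) to the powered-on host whose remaining
    memory fits it most tightly. Returns the number of hosts powered on."""
    free = []  # remaining memory of each powered-on host
    for vm in sorted(vm_mem, reverse=True):  # place the largest VMs first
        fits = [i for i, f in enumerate(free) if f >= vm]
        if fits:
            i = min(fits, key=lambda i: free[i])  # tightest remaining gap
            free[i] -= vm
        else:
            free.append(host_mem - vm)  # power on another host
    return len(free)

# Five VMs packed onto 4096 MB hosts: two hosts suffice.
print(best_fit_by_memory([2048, 1024, 1024, 512, 512], 4096))
```

Fewer powered-on hosts translates directly into the power and cost savings the abstract claims.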


Electronics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 67
Author(s):  
Qazi Zia Ullah ◽  
Gul Muhammad Khan ◽  
Shahzad Hassan ◽  
Asif Iqbal ◽  
Farman Ullah ◽  
...  

Cloud computing use is increasing exponentially with the advent of Industrial Revolution 4.0 technologies such as the Internet of Things, artificial intelligence, and digital transformation. These technologies require cloud data centers to process massive volumes of workloads. As a result, data centers consume gigantic amounts of electrical energy, a large portion of which comes from fossil fuels, causing greenhouse gas emissions and thus contributing to global warming. An adaptive resource utilization mechanism for cloud data center resources is vital to cope with this problem: the adaptive system estimates resource utilization and then adjusts the resources accordingly. Cloud resource utilization estimation is a two-fold challenge: first, cloud workloads are sundry, and second, clients’ requests are uneven. In the literature, several machine learning models have been used to estimate cloud resources, among which artificial neural networks (ANNs) have shown better performance. Conventional ANNs have a fixed topology and allow training only of their weights, either by back-propagation or by neuroevolution such as a genetic algorithm. In this paper, we propose the Cartesian genetic programming (CGP) neural network (CGPNN). The CGPNN enhances the performance of a conventional ANN by allowing training of both its parameters and its topology, and it uses a built-in sliding window. We have trained the CGPNN with parallel neuroevolution, which searches for the global optimum along numerous directions. Resource utilization traces from the Bitbrains data center are used to validate the proposed CGPNN, and the results are compared with machine learning models from the literature on the same data set. The proposed method outstrips these models, achieving 97% prediction accuracy.
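The CGPNN itself evolves both weights and topology, which is beyond a short sketch; but the sliding-window setup such estimators share is simple to show. The window size, trace values, and the mean-of-window baseline predictor below are illustrative assumptions, not the paper's model.

```python
def sliding_windows(trace, w):
    """Turn a utilization trace into (last-w-samples, next-sample) pairs,
    the supervised form a sliding-window estimator trains on."""
    return [(trace[i:i + w], trace[i + w]) for i in range(len(trace) - w)]

def predict_next(trace, w):
    """Naive baseline: predict the next sample as the mean of the window.
    A learned model (e.g., the CGPNN) replaces this function."""
    window = trace[-w:]
    return sum(window) / len(window)

cpu = [0.31, 0.35, 0.33, 0.40, 0.38, 0.41]  # example CPU utilization trace
pairs = sliding_windows(cpu, 3)             # three training pairs
print(predict_next(cpu, 3))                 # mean of the last three samples
```

Any predictor that improves on this baseline over held-out trace segments is adding value; the paper reports such gains for the CGPNN.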


2014 ◽  
Vol 513-517 ◽  
pp. 2031-2034
Author(s):  
Hui Zhang ◽  
Yong Liu

Virtual machine migration is an effective method for improving the resource utilization of a cloud data center. Common migration methods use heuristic algorithms to allocate virtual machines, so their solutions easily fall into local optima. Therefore, this paper introduces a Migrating algorithm based on the Genetic Algorithm (MGA), which draws on genetic evolution theory to perform a global optimal search over the mapping of virtual machines to target nodes, and improves the genetic algorithm's objective function by feeding the resource utilization of each virtual machine and target node into the calculation as input factors. MGA is compared with Single Threshold (ST) and Double Threshold (DT) schemes through simulation experiments; the results show that MGA can effectively reduce the number of migrations and the number of host machines used.
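A minimal genetic-algorithm sketch of the idea behind MGA: a chromosome maps each VM to a host, and the fitness function rewards using fewer hosts while penalizing capacity violations. The loads, single-resource model, fitness form, and GA parameters are all illustrative assumptions, not the paper's.

```python
import random

VM_LOAD = [0.5, 0.4, 0.3, 0.3, 0.2]  # per-VM resource demand (assumed)
N_HOSTS = 5                          # host capacity normalized to 1.0

def fitness(chrom):
    """Higher is better: fewer powered-on hosts, heavy penalty for overload."""
    loads = [0.0] * N_HOSTS
    for vm, host in enumerate(chrom):
        loads[host] += VM_LOAD[vm]
    overload = sum(max(0.0, l - 1.0) for l in loads)
    hosts_used = sum(1 for l in loads if l > 0)
    return -(hosts_used + 10.0 * overload)

def evolve(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_HOSTS) for _ in VM_LOAD] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]         # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(VM_LOAD))
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.3:               # mutation: reassign one VM
                child[rng.randrange(len(child))] = rng.randrange(N_HOSTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()  # a VM-to-host mapping packing the VMs onto few hosts
```

Because the population explores many mappings at once, the search can escape the local optima that trap one-step migration heuristics such as ST and DT.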


Energies ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 2071
Author(s):  
Ce Chi ◽  
Kaixuan Ji ◽  
Penglei Song ◽  
Avinab Marahatta ◽  
Shikui Zhang ◽  
...  

The problem of high power consumption in data centers is becoming more and more prominent, and cooperatively optimizing the energy of IT systems and cooling systems has become an effective way to improve data center energy efficiency. In this paper, a model-free deep reinforcement learning (DRL)-based joint optimization method, MAD3C, is developed to overcome the high-dimensional state and action spaces of data center energy optimization. A hybrid AC-DDPG cooperative multi-agent framework is devised to improve cooperation between the IT and cooling systems for further energy efficiency gains. Within the framework, a scheduling baseline comparison method is presented to enhance stability, and an adaptive score is designed that takes multi-dimensional resources and resource utilization improvement into consideration. Experiments show that the proposed approach effectively reduces data center energy through cooperative optimization while guaranteeing training stability and improving resource utilization.

