A multi-output prediction model for physical machine resource usage in cloud data centers

Author(s):  
Yongde Zhang ◽  
Fagui Liu ◽  
Bin Wang ◽  
Weiwei Lin ◽  
Guoxiang Zhong ◽  
...  
Author(s):  
Anu Valiyaparambil Raveendran ◽  
Elizabeth Sherly

In this article, the authors study hotspots in cloud data centers, which arise when resources are insufficient to satisfy peak client demand. Resource utilization in cloud data centers is highly dynamic, and this can lead to hotspots: unfavorable situations that cause SLA violations in some scenarios. The authors use a trend-aware regression (TAR) method as a load prediction model and perform linear regression analysis to detect the formation of hotspots in the physical servers of cloud data centers. The prediction model provides an alarm period during which cloud administrators can either provision enough resources to avoid the hotspot or perform interference-aware virtual machine migration to balance the load across servers. The physical server resource utilization model is analyzed in terms of CPU utilization, memory utilization, and network bandwidth utilization. In the TAR model, the authors consider the degree of variation between the current points in the prediction window to forecast future points. The TAR model provides accurate predictions.
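The trend-based forecasting idea can be sketched as follows: fit a least-squares line to a sliding window of recent utilization samples and extrapolate one step ahead, raising an alarm when the forecast crosses a capacity threshold. The window size, the alarm threshold, and the function names are illustrative assumptions, not the authors' exact TAR formulation.

```python
# Sketch of trend-aware load forecasting over a sliding prediction window.
# The 0.85 alarm threshold and window contents are assumed for illustration.

def forecast_next(window):
    """Fit a least-squares line to the utilization samples in `window`
    (one sample per time step) and extrapolate one step ahead."""
    n = len(window)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(window) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, window))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    intercept = y_mean - slope * x_mean
    return intercept + slope * n  # predicted utilization at the next step

def hotspot_alarm(cpu_window, threshold=0.85):
    """Alarm when forecast CPU utilization crosses `threshold`, giving the
    administrator lead time to add resources or migrate VMs."""
    return forecast_next(cpu_window) >= threshold

rising = [0.60, 0.65, 0.71, 0.76, 0.82]
print(hotspot_alarm(rising))  # True: the upward trend forecasts ~0.87 utilization
```

The same check would be applied per resource dimension (CPU, memory, network bandwidth), with an alarm on any dimension flagging a potential hotspot.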


2021 ◽  
pp. 147-157
Author(s):  
Yashwant Singh Patel ◽  
Rishabh Jaiswal ◽  
Savyasachi Pandey ◽  
Rajiv Misra

2019 ◽  
Vol 7 (2) ◽  
pp. 524-536 ◽  
Author(s):  
Fahimeh Farahnakian ◽  
Tapio Pahikkala ◽  
Pasi Liljeberg ◽  
Juha Plosila ◽  
Nguyen Trung Hieu ◽  
...  

Author(s):  
Cail Song ◽  
Bin Liang ◽  
Jiao Li

Current virtual machine deployment algorithms either underutilize physical machines or consume excessive energy in data centers, which degrades the service quality of cloud data centers or raises operational costs, ultimately reducing cloud service providers' earnings. To address this, a resource clustering algorithm for cloud data centers is proposed. The algorithm analyzes the cloud data center model and physical machine utilization, establishes dynamic resource clustering rules using the k-means clustering algorithm, and deploys virtual machines according to the clustering results, thereby raising physical machine utilization and lowering the energy consumption of the data center. The experimental results indicate that, for compute-intensive virtual machines, the algorithm improves physical machine utilization by 12% on average over the baseline algorithm and lowers data center energy consumption by 15% on average. For general-purpose virtual machines, physical machine utilization is improved by 14% on average and energy consumption is lowered by 12% on average. These results demonstrate that the method is effective for resource management in cloud data centers and may serve as a useful reference.
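The cluster-then-place idea can be sketched as follows: group physical machines by their utilization vectors with k-means, then place a new VM on a machine from the most-utilized cluster that can still absorb it, so load is packed onto fewer machines and idle ones can be powered down. The cluster count, the (CPU, memory) feature choice, and the fit check are illustrative assumptions, not the paper's exact rules.

```python
# Minimal sketch: k-means over PM utilization vectors, then first-fit
# placement into the busiest cluster. All parameters are assumptions.
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on tuples of equal dimension."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

def place_vm(vm_demand, groups, capacity=1.0):
    """Prefer the busiest cluster whose machines can still fit the VM."""
    busiest_first = sorted(groups,
                           key=lambda g: -max(sum(p) for p in g) if g else 0)
    for g in busiest_first:
        for pm in sorted(g, key=lambda p: -sum(p)):  # fullest PM first
            if all(u + d <= capacity for u, d in zip(pm, vm_demand)):
                return pm
    return None  # no PM can host the VM

# Each PM described by (cpu_utilization, mem_utilization) in [0, 1].
pms = [(0.2, 0.3), (0.25, 0.2), (0.7, 0.8), (0.75, 0.7), (0.5, 0.5)]
centers, groups = kmeans(pms, k=2)
host = place_vm((0.2, 0.2), groups)
```

Packing the VM onto an already-busy but still-feasible machine is what raises per-machine utilization and allows lightly loaded machines to be consolidated away, which is the source of the reported energy savings.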


2019 ◽  
Vol 17 (3) ◽  
pp. 358-366
Author(s):  
Loiy Alsbatin ◽  
Gürcü Öz ◽  
Ali Ulusoy

Further growth of computing performance has begun to be limited by the increasing energy consumption of cloud data centers, so careful resource management is essential. Dynamic virtual machine consolidation is a successful approach to improving resource utilization and energy efficiency in cloud environments, and optimizing the online energy-performance trade-off directly influences Quality of Service (QoS). In this paper, a novel approach known as Percentage of Overload Time Fraction Threshold (POTFT) is proposed. It decides to migrate a Virtual Machine (VM) when the current Overload Time Fraction (OTF) value of a Physical Machine (PM) exceeds a defined percentage of the maximum allowed OTF value, so that the resulting OTF does not exceed the maximum allowed value after a migration decision or during the migration itself. The POTFT algorithm is also combined with VM quiescing to maximize the time until migration while meeting the QoS goal. A number of benchmark PM overload detection algorithms are implemented with different parameters for comparison against POTFT, with and without VM quiescing. We evaluate the algorithms through simulations with real-world workload traces; the results show that the proposed approaches outperform the benchmark PM overload detection algorithms and achieve a longer time until migration while keeping the average resulting OTF below the allowed values. Moreover, POTFT with VM quiescing minimizes the number of migrations subject to the QoS requirements and meets the OTF constraint with only a few quiescings.
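The POTFT decision rule can be sketched as follows: track each PM's Overload Time Fraction (time spent overloaded divided by total time) and trigger a migration once it crosses a fixed percentage of the QoS-allowed maximum, leaving headroom for the overload accrued during the migration itself. The threshold values, the overload test (utilization at full capacity), and the class name are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch of a POTFT-style overload detector. max_otf is the QoS limit on
# the overload time fraction; `percentage` reserves headroom below it.
# All numeric values here are assumptions for illustration.

class PotftDetector:
    def __init__(self, max_otf=0.1, percentage=0.8, overload_util=1.0):
        self.threshold = percentage * max_otf  # migrate before max_otf is hit
        self.overload_util = overload_util     # utilization counted as overload
        self.total_time = 0.0
        self.overload_time = 0.0

    def observe(self, utilization, dt=1.0):
        """Record one utilization sample; return True if a VM migration
        should be triggered on this PM."""
        self.total_time += dt
        if utilization >= self.overload_util:
            self.overload_time += dt
        otf = self.overload_time / self.total_time
        return otf >= self.threshold

det = PotftDetector(max_otf=0.1, percentage=0.8)
samples = [0.6, 0.7, 1.0, 0.9, 1.0, 1.0, 0.8, 0.9, 0.7, 1.0]
decisions = [det.observe(u) for u in samples]
```

VM quiescing would slot in as an alternative action when the threshold is crossed: instead of migrating immediately, the VM's load is throttled so the OTF stops accruing, deferring the migration while still meeting the OTF constraint.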

