Resource Utilization-Aware Scheduling Technique Based on Dynamic Cache Refresh Scheme in Large-Scale Cloud Data Centers

Author(s):  
HeeSeok Choi ◽  
Jihun Kang ◽  
Daewon Lee ◽  
KwangSik Chung ◽  
Heonchang Yu
Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 690
Author(s):  
Muhammad Ibrahim ◽  
Muhammad Imran ◽  
Faisal Jamil ◽  
Yun Jung Lee ◽  
Do-Hyeun Kim

The rapid demand for Cloud services has resulted in the establishment of large-scale Cloud Data Centers (CDCs), which ultimately consume a large amount of energy. This enormous energy consumption leads to high operating costs and carbon emissions. To reduce energy consumption while utilizing resources efficiently, various dynamic Virtual Machine (VM) consolidation approaches have been considered, e.g., the Predictive Anti-Correlated Placement Algorithm (PACPA), Resource-Utilization-Aware Energy Efficient (RUAEE), Memory-bound Pre-copy Live Migration (MPLM), Mixed migration strategy, and Memory/disk operation aware Live VM Migration (MLLM). Most of these techniques perform aggressive VM consolidation, which eventually degrades CDC performance in terms of resource utilization and energy consumption. In this paper, an Efficient Adaptive Migration Algorithm (EAMA) is proposed for effective dynamic migration and placement of VMs on Physical Machines (PMs). The proposed approach has two distinct features: first, it selects PM locations with optimum access delay to which the VMs are to be migrated; second, it reduces the number of VM migrations. Extensive simulation experiments have been conducted using the CloudSim toolkit. The results of the proposed approach are compared with the PACPA and RUAEE algorithms in terms of Service-Level Agreement (SLA) violation, resource utilization, number of hosts shut down, and energy consumption. Results show that the proposed EAMA approach reduces the number of migrations by 16% and 24% and SLA violations by 20% and 34%, and increases resource utilization by 8% to 17%, with 10% to 13% more hosts shut down, as compared to PACPA and RUAEE, respectively. Moreover, a 13% improvement in energy consumption has also been observed.
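The placement criterion described above — migrating a VM to the PM with optimum access delay among those with sufficient capacity — can be sketched as follows. This is a hypothetical illustration, not the authors' actual EAMA implementation; the PM attributes (`free_mips`, `access_delay_ms`) and the capacity check are assumptions.

```python
# Hypothetical sketch of EAMA's placement idea: among PMs with enough free
# capacity to host the migrating VM, pick the one with the lowest access
# delay. Field names and units are illustrative, not from the paper.

def select_target_pm(pms, vm_demand):
    """Return the feasible PM with minimum access delay, or None."""
    candidates = [pm for pm in pms if pm["free_mips"] >= vm_demand]
    if not candidates:
        return None
    return min(candidates, key=lambda pm: pm["access_delay_ms"])

pms = [
    {"id": "pm-0", "free_mips": 500,  "access_delay_ms": 12.0},
    {"id": "pm-1", "free_mips": 2000, "access_delay_ms": 4.5},
    {"id": "pm-2", "free_mips": 800,  "access_delay_ms": 7.3},
]
target = select_target_pm(pms, vm_demand=700)  # pm-0 lacks capacity
```

Filtering by capacity before minimizing delay also serves the second goal: a VM is never placed on a PM that would soon overload and force another migration.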


Author(s):  
Anu Valiyaparambil Raveendran ◽  
Elizabeth Sherly Sherly

In this article, the authors study hotspots in cloud data centers, which are caused by a lack of resources to satisfy sudden peak requests from clients. Resource utilization in cloud data centers is highly dynamic and may lead to hotspots, which are unfavorable situations that cause SLA violations in some scenarios. The authors use a trend-aware regression (TAR) method as a load prediction model and perform linear regression analysis to detect the formation of hotspots on the physical servers of cloud data centers. This prediction model provides an alarm period during which cloud administrators can either provision enough resources to avoid the hotspot or perform interference-aware virtual machine migration to balance the load across servers. The physical server resource utilization model is analyzed in terms of CPU, memory, and network bandwidth utilization. In the TAR model, the authors consider the degree of variation between the current points in the prediction window to forecast the future points, and the model provides accurate predictions.
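The trend-based forecast described above can be illustrated with a minimal sketch: fit a least-squares line to the utilization samples in the prediction window, extrapolate one step ahead, and raise an alarm if the forecast crosses a threshold. The window contents, threshold, and function names are assumptions, not the authors' TAR formulation.

```python
# Illustrative trend-aware forecast in the spirit of TAR: least-squares
# slope over the prediction window, extrapolated forward. Threshold and
# sample values are assumed for the example.

def forecast_utilization(window, steps_ahead=1):
    n = len(window)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(window) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, window))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den if den else 0.0
    intercept = y_mean - slope * x_mean
    # Extrapolate beyond the last sample in the window.
    return intercept + slope * (n - 1 + steps_ahead)

def hotspot_alarm(window, threshold=0.85):
    """True if the forecast utilization exceeds the hotspot threshold."""
    return forecast_utilization(window) > threshold

rising = [0.60, 0.66, 0.72, 0.78, 0.84]   # CPU utilization, steady upward trend
steady = [0.50, 0.52, 0.49, 0.51, 0.50]   # no trend, no alarm expected
```

Because the alarm fires on the *forecast* rather than the current sample, administrators get the lead time mentioned in the abstract to provision resources or trigger a migration before the hotspot materializes.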


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 158095-158112
Author(s):  
Wenlong Ni ◽  
Yuhong Zhang ◽  
Wei W. Li

2015 ◽  
Vol 16 (8) ◽  
pp. 942-959
Author(s):  
Yantao Sun ◽  
Min Chen ◽  
Limei Peng ◽  
Mohammad Mehedi Hassan ◽  
Abdulhameed Alelaiwi

In the present situation, it may be essential to build a simple data-sharing environment that monitors and protects against unauthorized modification of data. Mechanisms may therefore be needed that address significantly weakened networking with proper solutions; in some situations, blockchain-based data management may be used in the cloud environment. In a virtualized environment, resource allocation plays a significant role in performance, including the utilization of resources linked to the data center. Accurate allocation of virtual machines in cloud data centers is essential for the optimization problems that arise in cloud computing, and it may also be desirable to prioritize the virtual machines linked to cloud data centers. Consolidating dynamic virtual machines permits virtual server providers to optimize resource utilization and reduce energy consumption. Indeed, the tremendous rise in the computational power demanded by modern service applications has driven the establishment of large-scale virtualized data centers, and the joint collaboration of smart connected devices with data analytics enables numerous predictive maintenance applications. To obtain near-optimal and feasible results, it is desirable to simulate the algorithms and application codes before deployment; different approaches may also be needed to minimize development time and cost. In many cases, experimental results show that such simulation techniques minimize cache misses and improve execution time. This paper addresses the distribution of tasks along with the implementation mechanisms linked to virtual machines.
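The task-distribution theme of this abstract can be illustrated with a minimal greedy policy: assign each incoming task to the currently least-loaded virtual machine. This is a generic sketch under assumed names and load units, not the paper's specific mechanism.

```python
# Minimal sketch of task-to-VM distribution: greedy assignment of each
# task to the least-loaded VM so far. VM identifiers and task lengths
# are illustrative assumptions.

def distribute_tasks(task_lengths, vm_ids):
    loads = {vm: 0 for vm in vm_ids}
    assignment = {}
    for i, length in enumerate(task_lengths):
        vm = min(loads, key=loads.get)  # least-loaded VM at this point
        assignment[i] = vm
        loads[vm] += length
    return assignment, loads

assignment, loads = distribute_tasks([40, 30, 20, 10, 25], ["vm-a", "vm-b"])
```

A greedy least-loaded policy is a common baseline in simulators such as CloudSim; a consolidation-oriented scheduler would instead pack tasks onto fewer machines to allow idle hosts to be shut down.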


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2107
Author(s):  
Jaehak Lee ◽  
Heonchang Yu

With the evolution of cloud technology, the number of user applications is increasing, and computational workloads are becoming increasingly diverse and unpredictable. However, cloud data centers still exhibit low I/O performance because their scheduling policies are based on the degree of physical CPU (pCPU) occupancy. Notably, existing scheduling policies cannot guarantee good I/O performance because of the uncertainty in the extent of I/O occurrence and the lack of fine-grained workload classification. To overcome these limitations, we propose ISACS, an I/O strength-aware credit scheduler for virtualized environments. Based on the Credit2 scheduler, ISACS provides a fine-grained, workload-aware scheduling technique to mitigate I/O performance degradation in virtualized environments. Further, ISACS uses the event-channel mechanism of the virtualization architecture to expand the scope of the scheduling-information area and measures the I/O strength of each virtual CPU (vCPU) in the run-queue. ISACS then allocates two types of virtual credits to all vCPUs in the run-queue to increase I/O performance while preventing CPU performance degradation. Finally, through I/O load balancing, ISACS prevents I/O-intensive vCPUs from becoming concentrated on specific cores. Our experiments show that, compared with existing virtualization environments, ISACS provides higher I/O performance with a negligible impact on CPU performance.
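The credit-allocation idea behind ISACS — weighting each vCPU's share of credits by its measured I/O strength while keeping a floor for CPU-bound vCPUs — can be sketched as follows. All names, the budget, and the floor value are assumptions for illustration; the actual scheduler operates inside the hypervisor on Credit2 state.

```python
# Hedged sketch of I/O strength-aware credit allocation: each vCPU in the
# run-queue receives a floor of credits plus a share of a virtual-credit
# budget proportional to its observed I/O activity (e.g., event-channel
# events). Budget, floor, and vCPU names are assumed values.

def allocate_virtual_credits(io_strength, budget=1000, floor=50):
    """io_strength: {vcpu_id: I/O events observed in the last period}."""
    total = sum(io_strength.values())
    credits = {}
    for vcpu, strength in io_strength.items():
        if total:
            share = budget * strength / total
        else:
            share = budget / len(io_strength)  # no I/O seen: split evenly
        credits[vcpu] = floor + int(share)
    return credits

credits = allocate_virtual_credits({"vcpu0": 300, "vcpu1": 100, "vcpu2": 0})
```

Under this weighting, I/O-intensive vCPUs accumulate credits faster and are dispatched sooner, while the floor keeps purely CPU-bound vCPUs from being starved — mirroring the two goals the abstract attributes to ISACS's dual virtual credits.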


2013 ◽  
Vol 24 (5) ◽  
pp. 870-878 ◽  
Author(s):  
Xiaolong Xu ◽  
Jiaxing Wu ◽  
Geng Yang ◽  
Ruchuan Wang
