I/O Strength-Aware Credit Scheduler for Virtualized Environments

Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2107
Author(s):  
Jaehak Lee ◽  
Heonchang Yu

With the evolution of cloud technology, the number of user applications is increasing, and computational workloads are becoming increasingly diverse and unpredictable. However, cloud data centers still exhibit low I/O performance because of scheduling policies that are based on the degree of physical CPU (pCPU) occupancy. Notably, existing scheduling policies cannot guarantee good I/O performance because of the uncertainty of the extent of I/O occurrence and the lack of fine-grained workload classification. To overcome these limitations, we propose ISACS, an I/O strength-aware credit scheduler for virtualized environments. Based on the Credit2 scheduler, ISACS provides a fine-grained workload-aware scheduling technique to mitigate I/O performance degradation in virtualized environments. Further, ISACS uses the event channel mechanism in the virtualization architecture to expand the scope of the scheduling information area and measures the I/O strength of each virtual CPU (vCPU) in the run-queue. Then, ISACS allocates two types of virtual credits to all vCPUs in the run-queue to increase I/O performance while preventing CPU performance degradation. Finally, through I/O load balancing, ISACS prevents I/O-intensive vCPUs from becoming concentrated on specific cores. Our experiments show that, compared with existing virtualization environments, ISACS provides higher I/O performance with a negligible impact on CPU performance.
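The abstract does not spell out how the credits are computed; the Python sketch below is a minimal illustration, not the ISACS implementation, of one way to weight each vCPU's credit allocation by its share of observed event-channel I/O activity so that I/O-intensive vCPUs are boosted without starving CPU-bound ones. All names (io_events, base_credit, io_bonus_pool) are assumptions introduced here.

# Hypothetical sketch of I/O-strength-aware credit allocation (names assumed, not from the paper).
from dataclasses import dataclass

@dataclass
class VCpu:
    name: str
    io_events: int          # event-channel notifications observed in the last period
    credits: int = 0

def io_strength(vcpu: VCpu, total_events: int) -> float:
    """Share of the run-queue's I/O events attributed to this vCPU."""
    return vcpu.io_events / total_events if total_events else 0.0

def allocate_credits(runq: list[VCpu], base_credit: int = 1000, io_bonus_pool: int = 500) -> None:
    """Give every vCPU its base (CPU) credit, then distribute an extra
    I/O credit pool proportionally to measured I/O strength."""
    total_events = sum(v.io_events for v in runq)
    for v in runq:
        v.credits = base_credit + int(io_bonus_pool * io_strength(v, total_events))

# Example: the I/O-heavy vCPU receives a larger credit share, the others keep their base credit.
runq = [VCpu("vm1-vcpu0", io_events=800), VCpu("vm2-vcpu0", io_events=100), VCpu("vm3-vcpu0", io_events=0)]
allocate_credits(runq)
print([(v.name, v.credits) for v in runq])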

2021 ◽  
Vol 11 (3) ◽  
pp. 34-48
Author(s):  
J. K. Jeevitha ◽  
Athisha G.

To reduce energy consumption, this paper proposes three algorithms. The first identifies the load-balancing factors and redistributes the load. The second finds the most suitable server for each task using the most efficient first fit algorithm (MEFFA). The third processes tasks on the server efficiently using an energy-efficient virtual round robin (EEVRR) scheduling algorithm over a fat-tree topology architecture. The EEVRR algorithm improves quality of service (QoS) by enhancing task-scheduling performance and cutting delay in cloud data centers, and thereby increases energy efficiency.
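As a rough illustration of the second step, the sketch below performs a first-fit placement that scans servers in order of energy efficiency and picks the first one with enough free capacity. The Server fields and the function name are assumptions made here for illustration, not the paper's MEFFA specification.

# Hypothetical sketch of a "most efficient first fit" placement step (names and units assumed).
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity: int            # free capacity in abstract work units
    watts_per_unit: float    # energy cost per unit of work

def most_efficient_first_fit(servers: list[Server], task_size: int) -> Server | None:
    """Scan servers ordered by energy efficiency and place the task on the
    first one with enough free capacity (first fit)."""
    for server in sorted(servers, key=lambda s: s.watts_per_unit):
        if server.capacity >= task_size:
            server.capacity -= task_size
            return server
    return None  # no server can host the task

servers = [Server("s1", 10, 2.5), Server("s2", 40, 1.2), Server("s3", 5, 0.9)]
print(most_efficient_first_fit(servers, task_size=8).name)  # -> "s2" (s3 is greener but too small)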


2019 ◽  
Vol 16 (9) ◽  
pp. 3989-3994
Author(s):  
Jaspreet Singh ◽  
Deepali Gupta ◽  
Neha Sharma

Nowadays, cloud computing is developing quickly, and customers are demanding more services and better results. In the cloud domain, load balancing has become an extremely interesting and crucial research area. Numerous algorithms have been proposed to provide efficient mechanisms for distributing cloud users' requests across the pool of cloud resources. Load balancing in the cloud should also provide notable functional benefits to cloud users and, at the same time, prove valuable for cloud service providers. In this paper, the pre-existing load balancing techniques are explored. The paper provides a landscape for classifying distinct load balancing algorithms based on several parameters and also addresses the performance assessment of the various algorithms. This comparative assessment will help in proposing a competent load balancing technique for improving the performance of cloud data centers.


Load balancing algorithms and service broker policies play a crucial role in determining the performance of cloud systems. User response time and data center request servicing time are largely affected by the load balancing algorithm and service broker policy in use. Several load balancing algorithms and service broker policies exist in the literature to perform data center allocation and virtual machine allocation for a given set of user requests. In this paper, we investigate the performance of the equally spread current execution (ESCE) load balancing algorithm with the closest data center (CDC) service broker policy in a cloud environment consisting of homogeneous and heterogeneous device characteristics in data centers and heterogeneous communication bandwidth between the different regions where cloud data centers are deployed. We performed simulations using CloudAnalyst, an open-source tool, with different settings of device characteristics and bandwidth. The user response time and data center request servicing time were found to be considerably lower in the heterogeneous environment.
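For reference, the sketch below captures the general idea of ESCE as commonly described in the CloudAnalyst literature: keep a count of in-flight requests per VM and always dispatch the next request to the least-loaded VM. This is a simplified, assumed model, not CloudAnalyst's source code.

# Hypothetical sketch of Equally Spread Current Execution (ESCE) style VM selection.
def esce_pick_vm(active_requests: dict[str, int]) -> str:
    """Return the VM currently handling the fewest requests."""
    return min(active_requests, key=active_requests.get)

def dispatch(active_requests: dict[str, int], n_requests: int) -> list[str]:
    """Assign n_requests one by one, always to the least-loaded VM,
    so the load stays equally spread across the data center's VMs."""
    placements = []
    for _ in range(n_requests):
        vm = esce_pick_vm(active_requests)
        active_requests[vm] += 1
        placements.append(vm)
    return placements

vms = {"vm0": 0, "vm1": 2, "vm2": 1}   # current in-flight request counts
print(dispatch(vms, 4))                # -> ['vm0', 'vm0', 'vm2', 'vm0']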


The Internet of Things (IoT) and Internet of Mobile Things (IoMT) have acquired widespread popularity through their ease of deployment and support for innovative applications. The sensed and aggregated data from IoT and IoMT devices are transferred to the cloud through the Internet for analysis, interpretation, and decision making. To generate timely responses and send decisions back to end users or administrators, it is important to select appropriate cloud data centers that can process requests and produce responses in a shorter time. Among the several factors that determine the performance of integrated 6LoWPAN and cloud data centers, we analyze the available bandwidth between the various user bases (IoT and IoMT networks) and the cloud data centers. Amid the various services offered in the cloud, problems such as congestion, delay, and poor response time arise as the number of user requests increases. Load balancing/sharing algorithms are popular techniques for improving the performance of the cloud system. Load refers to the number of user requests (data) from different types of networks, such as IoT and IoMT, which are IPv6 compliant. In this paper, we investigate the impact of homogeneous and heterogeneous bandwidth between different regions on load balancing algorithms that map user requests (data) to various virtual machines in the cloud. We also investigate the influence of bandwidth across different regions in determining the response time for the data collected from data harvesting networks. We simulated the cloud environment with various bandwidth values between user bases and data centers and present the average response time for individual user bases. We used CloudAnalyst, an open-source tool, to simulate the proposed work. The obtained results can be used as a reference for mapping the mass data generated by various networks to appropriate data centers so that responses are produced in optimal time.
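As an illustration of how per-region bandwidth can dominate data center selection, the sketch below estimates response time as network latency plus transfer time plus processing time and picks the data center with the lowest estimate. All values and names (estimated_response_time, pick_data_center) are assumptions for illustration and are not taken from the paper or from CloudAnalyst.

# Hypothetical sketch: choosing the cloud data center with the lowest estimated
# response time for an IoT/IoMT user base, given per-region bandwidth (values assumed).
def estimated_response_time(latency_ms: float, bandwidth_mbps: float,
                            payload_mb: float, processing_ms: float) -> float:
    """Network latency + transfer time + data-center processing time, in ms."""
    transfer_ms = (payload_mb * 8 / bandwidth_mbps) * 1000
    return latency_ms + transfer_ms + processing_ms

def pick_data_center(dcs: dict[str, dict], payload_mb: float) -> str:
    """Return the data center with the lowest estimated response time."""
    return min(dcs, key=lambda dc: estimated_response_time(
        dcs[dc]["latency_ms"], dcs[dc]["bandwidth_mbps"], payload_mb, dcs[dc]["processing_ms"]))

data_centers = {
    "DC-Region0": {"latency_ms": 40, "bandwidth_mbps": 100, "processing_ms": 5},
    "DC-Region2": {"latency_ms": 15, "bandwidth_mbps": 20,  "processing_ms": 5},
}
print(pick_data_center(data_centers, payload_mb=2.0))  # -> "DC-Region0": higher bandwidth outweighs higher latency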

