Review of the quality of service scheduling mechanisms in cloud

2018 ◽  
Vol 7 (3) ◽  
pp. 1677 ◽  
Author(s):  
K R RemeshBabu ◽  
Philip Samuel

Cloud computing provides on-demand access to a large pool of heterogeneous computational and storage resources over the internet. Optimal scheduling mechanisms are needed to manage these heterogeneous resources efficiently: an optimal scheduler can improve Quality of Service (QoS) while maintaining efficiency and fairness among tasks. In large-scale distributed systems, the performance of these scheduling algorithms is crucial to overall efficiency. Cloud customers are now charged for the resources they consume or hold in reserve, so comparing scheduling algorithms from different perspectives is needed for further improvement. This paper provides a comparative study of resource allocation, load balancing, and virtual machine consolidation algorithms in cloud computing. The algorithms are evaluated on their ability to provide QoS for tasks and Service Level Agreement (SLA) guarantees for the jobs served. The study also identifies current and future research directions for QoS-enabled cloud scheduling.
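To make the scheduling problem these surveys compare more concrete, the sketch below shows a minimal QoS-aware assignment of tasks to virtual machines. It is an illustrative example only, not an algorithm from the paper; the VM speeds, task sizes, deadlines, and the earliest-deadline-first heuristic are all assumptions.

```python
# Hypothetical illustration: greedy QoS-aware assignment of tasks to VMs.
# A task's QoS requirement is modelled as a deadline; a VM is modelled by
# its processing speed (MIPS) and the work already queued on it.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    mips: float                 # processing speed
    queued_mi: float = 0.0      # million instructions already assigned

    def finish_time(self, task_mi: float) -> float:
        """Estimated completion time if the task is appended to this VM's queue."""
        return (self.queued_mi + task_mi) / self.mips

@dataclass
class Task:
    name: str
    length_mi: float            # task size in million instructions
    deadline: float             # QoS requirement in seconds

def schedule(tasks, vms):
    """Assign each task to the VM with the earliest finish time; flag SLA misses."""
    plan = []
    for task in sorted(tasks, key=lambda t: t.deadline):        # earliest deadline first
        best = min(vms, key=lambda vm: vm.finish_time(task.length_mi))
        finish = best.finish_time(task.length_mi)
        best.queued_mi += task.length_mi
        plan.append((task.name, best.name, finish, finish <= task.deadline))
    return plan

if __name__ == "__main__":
    vms = [VM("vm-1", mips=1000), VM("vm-2", mips=500)]
    tasks = [Task("t1", 4000, 5.0), Task("t2", 1000, 2.0), Task("t3", 3000, 8.0)]
    for name, vm, finish, ok in schedule(tasks, vms):
        print(f"{name} -> {vm}: finishes at {finish:.2f}s, meets deadline: {ok}")
```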

Author(s):  
Shailendra Raghuvanshi ◽  
Priyanka Dubey

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, applications, and services) that can be rapidly provisioned and released. Resource provisioning refers to the selection, deployment, and run-time management of software (e.g., database management systems, load balancers) and hardware resources (e.g., CPU, storage, and network) to ensure guaranteed performance for applications. It is an important and challenging problem in large-scale distributed systems such as cloud computing environments. There are many resource provisioning techniques, both static and dynamic, each with its own advantages and challenges. These techniques must meet Quality of Service (QoS) parameters such as availability, throughput, response time, security, and reliability, thereby avoiding Service Level Agreement (SLA) violations. This paper surveys static and dynamic resource provisioning techniques.
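As a minimal illustration of dynamic provisioning (in contrast to a static, fixed allocation), the sketch below scales a VM pool against utilisation thresholds. The thresholds, limits, and utilisation trace are hypothetical, not taken from the survey.

```python
# Hypothetical sketch of dynamic (threshold-based) provisioning: scale the VM
# pool up or down so that average utilisation stays inside a QoS band.

def provision(current_vms: int, avg_utilisation: float,
              scale_up_at: float = 0.80, scale_down_at: float = 0.30,
              min_vms: int = 1, max_vms: int = 20) -> int:
    """Return the new VM count for the next control interval."""
    if avg_utilisation > scale_up_at and current_vms < max_vms:
        return current_vms + 1          # add capacity before response time degrades
    if avg_utilisation < scale_down_at and current_vms > min_vms:
        return current_vms - 1          # release idle capacity to cut cost
    return current_vms                  # static behaviour inside the QoS band

if __name__ == "__main__":
    vms = 2
    for load in [0.55, 0.85, 0.90, 0.60, 0.20, 0.15]:   # observed utilisation per interval
        vms = provision(vms, load)
        print(f"utilisation={load:.2f} -> {vms} VM(s)")
```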


Author(s):  
Linlin Wu ◽  
Rajkumar Buyya

In recent years, extensive research has been conducted on Service Level Agreements (SLAs) for utility computing systems. An SLA is a formal contract used to guarantee that consumers' service quality expectations can be met. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including autonomous SLA management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of this extensive body of work. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed, and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and to highlight emerging challenges for future research.
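A concrete, if simplified, view of what such an SLA contains: the sketch below models an SLA as a list of QoS objectives and checks measurements against them. The metric names, targets, and measurements are illustrative assumptions, not an SLA language from the chapter.

```python
# Hypothetical sketch of how an SLA can be represented and checked in code.

from dataclasses import dataclass

@dataclass(frozen=True)
class SLAObjective:
    metric: str          # e.g. "availability", "response_time_ms"
    target: float        # agreed threshold
    higher_is_better: bool

    def violated(self, measured: float) -> bool:
        return measured < self.target if self.higher_is_better else measured > self.target

sla = [
    SLAObjective("availability", 0.999, higher_is_better=True),
    SLAObjective("response_time_ms", 200.0, higher_is_better=False),
]

measured = {"availability": 0.9995, "response_time_ms": 340.0}

for obj in sla:
    status = "VIOLATED" if obj.violated(measured[obj.metric]) else "met"
    print(f"{obj.metric}: target {obj.target}, observed {measured[obj.metric]} -> {status}")
```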


2012 ◽  
Vol 2 (3) ◽  
pp. 86-97
Author(s):  
Veena Goswami ◽  
Sudhansu Shekhar Patra ◽  
G. B. Mund

Cloud computing is a new computing paradigm in which clients access information and computing services through a web browser. Understanding the characteristics of service performance has become critical for service applications in cloud computing, and for the commercial success of this paradigm, the ability to deliver guaranteed Quality of Service (QoS) is crucial. Based on the Service Level Agreement, requests are processed in cloud centers in different modes. This paper analyzes a finite-buffer multi-server queuing system in which client requests have two arrival modes. It is assumed that each arrival mode is serviced by one or more virtual machines and that both modes have equal probabilities of receiving service. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for optimal parameter values for the system.
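For intuition about the kind of model analysed here, the sketch below computes steady-state probabilities for a finite-buffer multi-server (M/M/c/K) queue and derives the blocking probability and mean number in system. It aggregates the paper's two arrival modes into a single Poisson stream and uses illustrative rates, so it is a simplified stand-in rather than the paper's exact model.

```python
# Hypothetical sketch: steady-state probabilities of a finite-buffer
# multi-server (M/M/c/K) queue with illustrative rates.

from math import factorial

def mmck_probabilities(lam: float, mu: float, c: int, K: int):
    """Return [p_0, ..., p_K] for an M/M/c/K queue with arrival rate lam,
    per-server service rate mu, c servers, and total capacity K."""
    a = lam / mu
    weights = []
    for n in range(K + 1):
        if n <= c:
            weights.append(a ** n / factorial(n))
        else:
            weights.append(a ** n / (factorial(c) * c ** (n - c)))
    p0 = 1.0 / sum(weights)
    return [w * p0 for w in weights]

if __name__ == "__main__":
    probs = mmck_probabilities(lam=8.0, mu=3.0, c=4, K=10)
    blocking = probs[-1]                       # request rejected when buffer is full
    mean_in_system = sum(n * p for n, p in enumerate(probs))
    print(f"blocking probability P_K = {blocking:.4f}")
    print(f"mean number of requests in system L = {mean_in_system:.2f}")
```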


2020 ◽  
Vol 178 ◽  
pp. 375-385
Author(s):  
Ismail Zahraddeen Yakubu ◽  
Zainab Aliyu Musa ◽  
Lele Muhammed ◽  
Badamasi Ja’afaru ◽  
Fatima Shittu ◽  
...  

2020 ◽  
Vol 8 (1) ◽  
pp. 65-81 ◽  
Author(s):  
Pradeep Kumar Tiwari ◽  
Sandeep Joshi

It has already been shown that VMs tend to be over-utilized in the initial stages and under-utilized in the later stages. Because CPU utilization is random, some resources are heavily loaded while others sit idle. Load imbalance causes Service Level Agreement (SLA) violations and, combined with imperfect resource management, results in poor Quality of Service (QoS). An effective load balancing mechanism achieves balanced utilization, which maximizes throughput, availability, and reliability while reducing response and migration time. The proposed algorithm effectively minimizes response and migration time and maximizes reliability and throughput. This research also helps readers understand load balancing policies and includes an analysis of related work.
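A minimal sketch of the threshold-based migration idea behind such load balancers is shown below: when the utilisation gap between the hottest and coolest host exceeds a threshold, one VM's worth of load is migrated. The hosts, loads, and thresholds are assumptions for illustration, not the proposed algorithm itself.

```python
# Hypothetical sketch of threshold-based load balancing: if the gap between
# the most and least loaded hosts exceeds a threshold, migrate one VM's load.

def rebalance(host_loads: dict, vm_load: float = 0.10, threshold: float = 0.30):
    """Migrate vm_load from the hottest to the coolest host while the
    utilisation gap exceeds the threshold (and 2*vm_load, which guarantees
    each migration strictly reduces imbalance, so the loop terminates)."""
    migrations = []
    while True:
        hot = max(host_loads, key=host_loads.get)
        cool = min(host_loads, key=host_loads.get)
        if host_loads[hot] - host_loads[cool] <= max(threshold, 2 * vm_load):
            break
        host_loads[hot] -= vm_load
        host_loads[cool] += vm_load
        migrations.append((hot, cool))
    return migrations

if __name__ == "__main__":
    loads = {"host-a": 0.92, "host-b": 0.35, "host-c": 0.20}
    moves = rebalance(loads)
    print("migrations:", moves)
    print("balanced loads:", {h: round(l, 2) for h, l in loads.items()})
```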


Electronics ◽  
2018 ◽  
Vol 7 (11) ◽  
pp. 309 ◽  
Author(s):  
Hind Bangui ◽  
Said Rakrak ◽  
Said Raghay ◽  
Barbora Buhnova

Cloud computing has significantly enhanced the growth of the Internet of Things (IoT) by ensuring and supporting the Quality of Service (QoS) of IoT applications. However, cloud services are still far from IoT devices, and the transmission of IoT data suffers from network issues such as high latency, so cloud platforms cannot satisfy IoT applications that require real-time responses. The location of cloud services thus remains one of the challenges in the evolution of the IoT paradigm. Recently, edge cloud computing has been proposed to bring cloud services closer to IoT end-users; it is a promising paradigm whose pitfalls and challenges are not yet well understood. This paper surveys the state of edge computing with respect to the movement of services from centralized cloud platforms to decentralized platforms and examines the issues and challenges introduced by these highly distributed environments, to support engineers and researchers who might benefit from this transition.


2021 ◽  
Author(s):  
Hiren Kumar Deva Sarma

Quality of Service (QoS) is one of the most important considerations in computer networking and communication. Traditional networks incorporate various QoS frameworks to enhance service quality, but because of their distributed nature, providing QoS based on a Service Level Agreement (SLA) is a complex task for network designers and administrators. With the advent of software defined networking (SDN), ensuring QoS is expected to become feasible: since SDN has a logically centralized architecture, it can provide QoS guarantees that were extremely difficult to achieve in traditional network architectures. The emergence and popularity of machine learning (ML) and deep learning (DL) have opened up even more possibilities for QoS assurance. This article focuses on the ML and DL based QoS-aware protocols developed so far for SDN, covering the functional areas of traffic classification, QoS-aware routing, queuing, and scheduling. It presents a systematic and comprehensive study of the different ML and DL based approaches designed to improve overall QoS in SDN, and outlines research issues, challenges, and future research directions in this area.
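As a toy example of the traffic-classification building block discussed in the survey, the sketch below trains a classifier on simple flow features and maps the predicted class to a queue. The features, training data, and queue mapping are synthetic assumptions, not a protocol from the article (requires scikit-learn).

```python
# Hypothetical sketch of ML-based traffic classification for QoS in SDN:
# a classifier maps flow features to a traffic class so the controller can
# install class-specific queues or routes.

from sklearn.ensemble import RandomForestClassifier

# flow features: [mean packet size (bytes), mean inter-arrival time (ms), flow duration (s)]
X_train = [
    [120, 20.0, 300.0],   # VoIP-like: small packets, steady spacing, long-lived
    [100, 25.0, 250.0],
    [1400, 1.0, 60.0],    # bulk-transfer-like: large packets, bursty
    [1350, 0.8, 80.0],
    [800, 5.0, 10.0],     # web-like: medium packets, short flows
    [750, 6.0, 8.0],
]
y_train = ["voip", "voip", "bulk", "bulk", "web", "web"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# the controller would map each predicted class to a queue/priority
new_flow = [[110, 22.0, 280.0]]
predicted = clf.predict(new_flow)[0]
queue_for = {"voip": "priority-queue-0", "web": "queue-1", "bulk": "best-effort-queue-2"}
print(f"flow classified as '{predicted}' -> assign to {queue_for[predicted]}")
```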


Author(s):  
V. Goswami ◽  
S. S. Patra ◽  
G. B. Mund

In cloud computing, the virtualization of IT infrastructure enables consolidation and pooling of IT resources so they can be shared across diverse applications, offsetting the limitation of shrinking resources and growing business needs. Cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software, extending IT's existing capabilities. In the last few years, cloud computing has grown from a promising business concept to one of the fastest-growing segments of the IT industry. For the commercial success of this new computing paradigm, the ability to deliver guaranteed Quality of Service is crucial. Based on the Service Level Agreement, requests are processed in cloud centers in different modes. This chapter deals with Quality of Service and the optimal management of cloud centers with different arrival modes. For this purpose, the authors consider a finite-buffer multi-server queuing system where client requests have different arrival modes. It is assumed that each arrival mode is serviced by one or more virtual machines and that the modes have equal probabilities of receiving service. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for optimal parameter values for the system.
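To illustrate the parameter-search step, the sketch below uses a small genetic algorithm to choose the number of servers and the buffer size that minimise a cost combining capacity cost and an SLA penalty on blocked requests. The cost model, rates, and GA settings are illustrative assumptions, not the chapter's formulation; the blocking-probability helper repeats the earlier M/M/c/K formula so the example is self-contained.

```python
# Hypothetical sketch: genetic algorithm tuning queue parameters (servers c,
# buffer size K) to minimise operating cost plus an SLA penalty on blocking.

import random
from math import factorial

def blocking_probability(lam, mu, c, K):
    """Blocking probability P_K of an M/M/c/K queue."""
    a = lam / mu
    w = [a**n / factorial(n) if n <= c else a**n / (factorial(c) * c**(n - c))
         for n in range(K + 1)]
    return w[-1] / sum(w)

def cost(c, K, lam=8.0, mu=3.0, server_cost=1.0, buffer_cost=0.1, sla_penalty=50.0):
    return c * server_cost + K * buffer_cost + sla_penalty * blocking_probability(lam, mu, c, K)

def genetic_search(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [(rng.randint(1, 10), rng.randint(5, 30)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: cost(*ind))
        parents = pop[: pop_size // 2]                         # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            (c1, k1), (c2, k2) = rng.sample(parents, 2)
            c, K = rng.choice([c1, c2]), rng.choice([k1, k2])  # crossover
            if rng.random() < 0.3:                             # mutation on c
                c = min(10, max(1, c + rng.choice([-1, 1])))
            if rng.random() < 0.3:                             # mutation on K
                K = min(30, max(c, K + rng.choice([-2, 2])))
            children.append((c, K))
        pop = parents + children
    best = min(pop, key=lambda ind: cost(*ind))
    return best, cost(*best)

if __name__ == "__main__":
    (c, K), best_cost = genetic_search()
    print(f"best configuration: c={c} servers, K={K} capacity, cost={best_cost:.2f}")
```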


2019 ◽  
Vol 8 (3) ◽  
pp. 1457-1462

Cloud computing technology has gained the attention of researchers in recent years, and almost every application uses cloud computing in one way or another. Virtualization allows many virtual machines to run on a single physical computer by sharing its resources. Users can store their data in a datacenter, run their applications from anywhere over the internet, and pay according to service level agreement documents. This increases the demand for cloud services and may decrease the quality of service. This paper presents a priority-based selection of virtual machines by the cloud service provider. The virtual machines in the cloud datacenter are configured like Amazon EC2 instances, and the algorithm is simulated in the CloudSim simulator. The results show that the proposed priority-based virtual machine algorithm shortens the makespan by 11.43% and 5.81%, average waiting time by 28.80% and 24.50%, and the cost of using virtual machines by 21.24% and 11.54% compared to FCFS and ACO respectively, hence improving the quality of service.
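The sketch below contrasts FCFS ordering with a priority-ordered dispatch of the same tasks onto heterogeneous VMs and reports makespan and average waiting time. It is a hypothetical illustration of this kind of comparison, with made-up VM speeds, task lengths, and priorities rather than the paper's CloudSim setup.

```python
# Hypothetical sketch: FCFS vs priority-ordered dispatch of tasks onto VMs,
# reporting makespan and average waiting time.

def assign(tasks, vm_speeds):
    """Greedy list scheduling: each task goes to the VM that frees up first.
    Returns (makespan, average waiting time)."""
    free_at = [0.0] * len(vm_speeds)
    waits = []
    for length in tasks:
        vm = min(range(len(vm_speeds)), key=lambda i: free_at[i])
        waits.append(free_at[vm])                  # time spent queued before service
        free_at[vm] += length / vm_speeds[vm]
    return max(free_at), sum(waits) / len(waits)

if __name__ == "__main__":
    vm_speeds = [1000.0, 500.0]                    # MIPS of each VM
    # (task length in MI, priority; higher priority = more urgent)
    workload = [(4000, 1), (1000, 3), (6000, 2), (1500, 3), (2500, 1)]

    fcfs_order = [mi for mi, _ in workload]
    prio_order = [mi for mi, _ in sorted(workload, key=lambda t: -t[1])]

    for name, order in [("FCFS", fcfs_order), ("priority", prio_order)]:
        makespan, avg_wait = assign(order, vm_speeds)
        print(f"{name:9s} makespan={makespan:7.2f}s  avg wait={avg_wait:6.2f}s")
```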


2020 ◽  
Vol 17 (11) ◽  
pp. 5003-5009
Author(s):  
Puneet Banga ◽  
Sanjeev Rana

Under resource constraints and pressure to protect profit margins, service providers sometimes fail to deliver essential services to their clients. This raises the demand for efficient task scheduling that can meet multiple objectives, yet without a prior agreement scheduling remains an ad hoc exercise. The problem can be addressed when competent scheduling is executed directly over the Service Level Agreement (SLA), which defines the set of rules used to assure quality of service. There is therefore strong demand for SLA-driven scheduling that produces profitable results for both providers and clients. This article presents a fundamental approach that can be applied to existing scheduling techniques on the fly. Results show substantial improvement in average waiting time and average turnaround time, together with fairness, without compromising the provider's cost margin.
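A minimal sketch of applying an SLA-aware step over an existing scheduler "on the fly" is given below: jobs are reordered by SLA class before being passed to a base FCFS scheduler, and average waiting and turnaround times are compared. The SLA classes, burst times, and wrapper design are assumptions for illustration, not the article's approach.

```python
# Hypothetical sketch: wrap any base scheduler with an SLA-aware reordering
# step, then compare average waiting and turnaround times.

def fcfs(jobs):
    """Base scheduler: serve jobs in the order received (single resource)."""
    return list(jobs)

def sla_wrapper(base_scheduler):
    """Wrap a base scheduler: order by SLA class first (lower class = stricter SLA)."""
    def scheduler(jobs):
        return base_scheduler(sorted(jobs, key=lambda j: j["sla_class"]))
    return scheduler

def metrics(ordered_jobs):
    """Average waiting and turnaround time, non-preemptive, all jobs arrive at t=0."""
    clock, waits, turnarounds = 0.0, [], []
    for job in ordered_jobs:
        waits.append(clock)
        clock += job["burst"]
        turnarounds.append(clock)
    n = len(ordered_jobs)
    return sum(waits) / n, sum(turnarounds) / n

if __name__ == "__main__":
    jobs = [
        {"name": "j1", "burst": 8.0, "sla_class": 2},
        {"name": "j2", "burst": 2.0, "sla_class": 1},
        {"name": "j3", "burst": 5.0, "sla_class": 1},
        {"name": "j4", "burst": 3.0, "sla_class": 3},
    ]
    for label, sched in [("plain FCFS", fcfs), ("SLA over FCFS", sla_wrapper(fcfs))]:
        avg_wait, avg_tat = metrics(sched(jobs))
        print(f"{label:14s} avg waiting={avg_wait:.2f}  avg turnaround={avg_tat:.2f}")
```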

