Complications in Infrastructure as a Service Layer of Cloud and its Solution

Cloud computing plays a vital role in the current computing landscape, and the performance of Infrastructure as a Service (IaaS) is critical because of the variability across this area. The number of cloud users has grown rapidly, while the resources available to serve them remain limited. Infrastructure as a Service (IaaS) refers to the provisioning of physical computing infrastructure such as storage, compute, and networking services. IaaS cloud providers allocate these resources on demand from vast pools of resources located anywhere in the world, and monitoring these resources continuously is difficult. Monitoring resource availability and notifying users about available resources, governed by the Service Level Agreement (SLA), is one of the challenges addressed in the IaaS layer, and a solution is provided for it. The foremost objective of scheduling algorithms in a cloud environment is to use resources efficiently while balancing the load between them, so as to achieve the least possible execution time. Hence, a rank-based task scheduling algorithm is proposed to utilize resources efficiently and deliver high performance. Simulation results report Quality of Service (QoS) parameters such as task length (size), CPU, throughput, and bandwidth.
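The abstract does not spell out how the rank is computed, so the following is only a minimal sketch of one plausible reading: each task is ranked by a weighted combination of its length, CPU and bandwidth demands, and ranked tasks are placed on the currently least-loaded virtual machine. The class names, the weights and the load model are all assumptions for illustration, not the paper's actual scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    length: float      # instruction count / size
    cpu: float         # CPU demand
    bandwidth: float   # bandwidth demand

@dataclass
class VM:
    name: str
    load: float = 0.0                          # accumulated work
    assigned: list = field(default_factory=list)

def rank_based_schedule(tasks, vms, w=(0.5, 0.3, 0.2)):
    """Rank tasks by a weighted score of their demands, then assign each
    ranked task to the currently least-loaded VM (illustrative policy only)."""
    ranked = sorted(tasks,
                    key=lambda t: w[0] * t.length + w[1] * t.cpu + w[2] * t.bandwidth,
                    reverse=True)              # heaviest tasks are placed first
    for task in ranked:
        vm = min(vms, key=lambda v: v.load)    # least-loaded VM keeps the load balanced
        vm.assigned.append(task.name)
        vm.load += task.length
    return {vm.name: vm.assigned for vm in vms}

# Example usage with made-up tasks and two VMs
tasks = [Task("t1", 400, 2, 10), Task("t2", 150, 1, 5), Task("t3", 900, 4, 20)]
vms = [VM("vm1"), VM("vm2")]
print(rank_based_schedule(tasks, vms))
```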

2018, pp. 104-106. Author(s): Artur Vardanyan

Cluster computing is becoming increasingly practical for high-performance computing research and development. A computer cluster is a set of connected computers that work together so that they can be viewed as a single system. Clusters offer a scalable means of linking computers together to provide an expansive environment for hosting enterprise applications. As the number of nodes in cluster configurations grows, cluster administration becomes more challenging, so the challenges of cluster management need to be studied and a solution provided. Effective cluster management requires an effective task scheduling algorithm. With the explosive growth of information, the demand for computing is increasing sharply. Because of the large number of computing tasks, the scheduling algorithm is an important part of cluster computing and has a great influence on the quality of cluster service. Under a First-In-First-Out (FIFO) scheduling algorithm, some large tasks may occupy too many resources while some small tasks wait for a long time. This paper provides an overview of an improved scheduling algorithm that shortens task execution time and increases resource utilization.
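The improved algorithm itself is not described in this excerpt; the sketch below only illustrates the FIFO weakness the paper targets, using an assumed toy model in which tasks are dispatched to the earliest-free node. Reordering by estimated runtime (one common improvement over FIFO) noticeably lowers the mean completion time when small tasks would otherwise queue behind large ones.

```python
import heapq

def completion_times(task_runtimes, num_nodes=2):
    """Dispatch tasks in the given order to the earliest-free node and return
    each task's completion time (toy model of a homogeneous cluster)."""
    nodes = [0.0] * num_nodes            # time at which each node becomes free
    heapq.heapify(nodes)
    finish = []
    for runtime in task_runtimes:
        free_at = heapq.heappop(nodes)   # earliest-available node
        done = free_at + runtime
        finish.append(done)
        heapq.heappush(nodes, done)
    return finish

tasks = [100, 2, 3, 2, 120, 1]                     # a few large tasks, many small ones

fifo = completion_times(tasks)                     # FIFO: small tasks wait behind large ones
sjf = completion_times(sorted(tasks))              # shortest-job-first style reordering

print(sum(fifo) / len(fifo), sum(sjf) / len(sjf))  # mean completion time drops under SJF
```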


2022, Vol 12 (1), pp. 0-0

Resource allocation and scheduling algorithms are the two essential factors that determine the satisfaction of cloud users. The major cloud resources involved are servers, storage, network, databases, software and so on, depending on customers' requirements. In this competitive scenario, each service provider tries to use factors such as optimal resource configuration, pricing, Quality of Service (QoS) parameters and the Service Level Agreement (SLA) to benefit both cloud users and service providers. Since many researchers have proposed different scheduling algorithms and resource allocation strategies, it becomes a cumbersome task to conclude which ones really benefit customers and service providers. Hence, this paper analyses and presents the most relevant considerations that would help cloud researchers achieve their goals in terms of mapping tasks to cloud resources.
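As a hypothetical illustration of the mapping decision these considerations feed into, the sketch below scores candidate resources by a weighted mix of price, QoS rating and SLA compliance and picks the best feasible match for a task. The weights, field names and scoring form are assumptions for illustration, not any particular scheme from the surveyed literature.

```python
WEIGHTS = {"price": 0.4, "qos": 0.4, "sla": 0.2}   # assumed relative importance of the factors

def score(resource):
    # Lower price is better, so use its reciprocal; qos_rating and sla_compliance lie in [0, 1].
    return (WEIGHTS["price"] * (1.0 / resource["price_per_hour"])
            + WEIGHTS["qos"] * resource["qos_rating"]
            + WEIGHTS["sla"] * resource["sla_compliance"])

def map_task_to_resource(task, resources):
    """Pick the highest-scoring resource that can satisfy the task's CPU need."""
    feasible = [r for r in resources if r["cpus"] >= task["cpus"]]
    return max(feasible, key=score) if feasible else None

resources = [
    {"name": "vm-a", "cpus": 4, "price_per_hour": 0.20, "qos_rating": 0.9, "sla_compliance": 0.99},
    {"name": "vm-b", "cpus": 8, "price_per_hour": 0.35, "qos_rating": 0.8, "sla_compliance": 0.95},
]
print(map_task_to_resource({"cpus": 4}, resources)["name"])   # -> "vm-a"
```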


Author(s): Ge Weiqing, Cui Yanru

Background: To make up for the shortcomings of the traditional algorithm, the Min-Min and Max-Min algorithms are combined with the traditional genetic algorithm.
Methods: This paper proposes a new cloud computing task scheduling algorithm that uses Min-Min and Max-Min to generate the initial population and selects task completion time and load balancing as a double fitness function, which improves the quality of the initial population, the search ability of the algorithm, and its convergence speed.
Results: The simulation results show that the algorithm is superior to the traditional genetic algorithm and is an effective cloud computing task scheduling algorithm.
Conclusion: Finally, this paper discusses the possibility of fusing the two improved algorithms and completes a preliminary fusion, but the simulation results of the fused algorithm are not yet ideal and need further study.
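A minimal sketch of the approach described in the Methods section is given below, under assumed encodings: a schedule is a task-to-VM assignment vector, the initial population is seeded with Min-Min- and Max-Min-style greedy assignments, and the fitness combines completion time (makespan) with load imbalance. The crossover and mutation operators, rates and weights are illustrative choices, not the paper's exact ones.

```python
import random

def vm_loads(assign, task_len, vm_speed):
    """Per-VM finish time for a task->VM assignment (lists indexed by task/VM)."""
    loads = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        loads[vm] += task_len[t] / vm_speed[vm]
    return loads

def fitness(assign, task_len, vm_speed, alpha=0.7):
    # Double fitness: completion time (makespan) plus load imbalance; lower is better.
    loads = vm_loads(assign, task_len, vm_speed)
    return alpha * max(loads) + (1 - alpha) * (max(loads) - min(loads))

def heuristic_seed(task_len, vm_speed, largest_first):
    """Min-Min / Max-Min style seed: take the smallest (or largest) task first
    and place it on the VM that would finish it earliest."""
    loads = [0.0] * len(vm_speed)
    assign = [0] * len(task_len)
    for t in sorted(range(len(task_len)), key=lambda i: task_len[i], reverse=largest_first):
        vm = min(range(len(vm_speed)), key=lambda v: loads[v] + task_len[t] / vm_speed[v])
        assign[t] = vm
        loads[vm] += task_len[t] / vm_speed[vm]
    return assign

def ga_schedule(task_len, vm_speed, pop_size=20, generations=100):
    n_tasks, n_vms = len(task_len), len(vm_speed)
    # Initial population seeded with Min-Min- and Max-Min-like assignments plus random ones.
    pop = [heuristic_seed(task_len, vm_speed, False), heuristic_seed(task_len, vm_speed, True)]
    pop += [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size - 2)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, task_len, vm_speed))
        parents = pop[:pop_size // 2]                 # keep the fitter half
        while len(parents) < pop_size:
            p1, p2 = random.sample(parents[:pop_size // 2], 2)
            cut = random.randrange(1, n_tasks)
            child = p1[:cut] + p2[cut:]               # one-point crossover
            if random.random() < 0.1:                 # mutation: move one task to a random VM
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            parents.append(child)
        pop = parents
    return min(pop, key=lambda a: fitness(a, task_len, vm_speed))

print(ga_schedule([30, 5, 12, 40, 8, 22], [1.0, 2.0]))
```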


Author(s): Amandeep Kaur Sandhu, Jyoteesh Malhotra

This article describes how usage of the internet has increased rapidly over the last few years, driven largely by the growing popularity of multimedia applications. However, there is no guarantee of Quality of Service (QoS) to users. To meet the desired requirements, Internet Service Providers (ISPs) establish a Service Level Agreement (SLA) with clients that includes specific parameters such as bandwidth, reliability, cost and power consumption. ISPs maximize the number of SLAs and decrease energy consumption to raise their profit; as a result, users do not get the services for which they pay. Virtual Software Defined Networks (VSDNs) are flexible and manageable networks that can be used to achieve these goals. This article presents a shortest path algorithm that improves metrics such as energy consumption, bandwidth usage and successful allocation of nodes in the network using the VSDN approach. The results show a 40% increase in the performance of the proposed algorithms with respect to existing algorithms.
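The paper's exact cost function is not given in this abstract; the sketch below assumes a composite edge cost, a weighted sum of a link's energy cost and the inverse of its available bandwidth, and runs standard Dijkstra over it so the selected path favours low-energy, high-bandwidth links. The graph format, weights and field names are assumptions.

```python
import heapq

def shortest_path(graph, src, dst, w_energy=0.5, w_bw=0.5):
    """Dijkstra over an assumed composite edge cost: a weighted sum of the
    link's energy cost and the reciprocal of its available bandwidth."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                                  # stale heap entry
        for nbr, energy, bandwidth in graph.get(node, []):
            cost = w_energy * energy + w_bw * (1.0 / bandwidth)
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], dst                              # rebuild the path backwards
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# edges: node -> [(neighbour, energy_cost, available_bandwidth_Mbps)]
graph = {"a": [("b", 1.0, 100), ("c", 0.5, 10)],
         "b": [("d", 1.0, 100)],
         "c": [("d", 0.5, 10)]}
print(shortest_path(graph, "a", "d"))
```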


Author(s): Linlin Wu, Rajkumar Buyya

In recent years, extensive research has been conducted in the area of Service Level Agreements (SLAs) for utility computing systems. An SLA is a formal contract used to guarantee that consumers' service quality expectations can be met. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. A fundamental issue is the management of SLAs, including autonomous SLA management and trade-offs among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification of this extensive body of work. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environments. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-the-art systems and emerging challenges for future research.
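As a toy illustration of an SLA as a formal contract over QoS parameters (not any specific SLA language surveyed in the chapter), the sketch below models an agreement as a list of service level objectives that can be checked against measured values; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SLO:
    metric: str                    # e.g. "availability", "response_time_ms"
    target: float
    higher_is_better: bool = True

@dataclass
class SLA:
    provider: str
    consumer: str
    objectives: list

    def violations(self, measured):
        """Return the metrics whose measured value breaches the agreed target."""
        broken = []
        for slo in self.objectives:
            value = measured[slo.metric]
            ok = value >= slo.target if slo.higher_is_better else value <= slo.target
            if not ok:
                broken.append(slo.metric)
        return broken

sla = SLA("provider-x", "consumer-y",
          [SLO("availability", 0.999), SLO("response_time_ms", 200, higher_is_better=False)])
print(sla.violations({"availability": 0.995, "response_time_ms": 180}))   # -> ['availability']
```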


2012, Vol 2 (3), pp. 86-97. Author(s): Veena Goswami, Sudhansu Shekhar Patra, G. B. Mund

Cloud computing is a new computing paradigm in which information and computing services can be accessed by clients from a Web browser. Understanding the characteristics of computing service performance has become critical for service applications in cloud computing. For the commercial success of this new paradigm, the ability to deliver guaranteed Quality of Service (QoS) is crucial. Based on the Service Level Agreement, requests are processed in the cloud centers in different modes. This paper analyzes a finite-buffer multi-server queuing system in which client requests have two arrival modes. It is assumed that each arrival mode is serviced by one or more virtual machines, and both modes have equal probabilities of receiving service. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for the optimal values of various system parameters.
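The paper's analytical model and parameter values are not reproduced here; the sketch below is only a small discrete-event approximation, under assumed rates, of a finite-buffer multi-server queue fed by two pooled Poisson arrival modes with identical service treatment, estimating the blocking probability that a cost policy would trade off against capacity.

```python
import random
import heapq

def simulate(lam1, lam2, mu, servers, buffer, num_arrivals=200_000):
    """Toy model (all rates assumed): a finite-buffer queue with `servers`
    identical virtual machines and two Poisson arrival modes (rates lam1, lam2)
    that receive identical exponential service (rate mu). Returns the estimated
    blocking probability, i.e. the fraction of requests dropped because all
    servers were busy and the buffer was full."""
    t = 0.0
    departures = []                            # heap of in-service completion times
    queue = 0                                  # requests waiting in the finite buffer
    lost = 0
    for _ in range(num_arrivals):
        t += random.expovariate(lam1 + lam2)   # next arrival from either mode
        # Complete any services that finished before this arrival.
        while departures and departures[0] <= t:
            done = heapq.heappop(departures)
            if queue > 0:                      # a waiting request starts service
                queue -= 1
                heapq.heappush(departures, done + random.expovariate(mu))
        if len(departures) < servers:
            heapq.heappush(departures, t + random.expovariate(mu))
        elif queue < buffer:
            queue += 1
        else:
            lost += 1                          # finite buffer full: request is blocked
    return lost / num_arrivals

# Both arrival modes share the virtual machines with equal treatment, as assumed above.
print(simulate(lam1=4.0, lam2=4.0, mu=3.0, servers=3, buffer=5))
```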


Symmetry, 2020, Vol 12 (10), pp. 1638. Author(s): Mohammed A. Alsaih, Rohaya Latip, Azizol Abdullah, Shamala K. Subramaniam, Kamal Ali Alezabi

A crucial performance concern in distributed, decentralized environments such as clouds is how to guarantee that jobs complete within their estimated completion times while using the available resources' bandwidth fairly and efficiently and taking resource performance variations into account. Previously, several models including reservation, migration and replication heuristics have been implemented to address this concern under a variety of scheduling techniques; however, they leave some obstacles unresolved. This paper proposes a dynamic job scheduling model (DTSCA) that uses job characteristics to map jobs to the resources that minimize execution time, while using the available resource bandwidth fairly, so as to satisfy cloud users' Quality of Service (QoS) requirements and utilize the provider's resources efficiently. The scheduling algorithm matches job characteristics (length, expected execution time, expected bandwidth) against the characteristics of the available symmetrical and non-symmetrical resources (CPU, memory and available bandwidth). The strategy generates, for each job, an expectation value that is proportional to how that job's characteristics relate to those of all other jobs in total; the chosen virtual machine should then be closer to the job's expectation, and hence fairer. It also includes a feedback mechanism that reallocates failed jobs that do not meet the mapping criteria.
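The expectation formula itself is not given in the abstract, so the sketch below is a loose, assumed reading: each job's characteristics are normalized against the totals over all jobs, VM capacities are normalized the same way, each job goes to the VM whose normalized profile is closest, and jobs that miss an assumed mapping criterion are queued for reallocation, mimicking the feedback step. All names and thresholds are hypothetical.

```python
def normalize(vectors):
    """Express each vector component as a share of that component's total over all vectors."""
    totals = [sum(v[i] for v in vectors) for i in range(len(vectors[0]))]
    return [[v[i] / totals[i] for i in range(len(v))] for v in vectors]

def dtsca_like_map(jobs, vms, criterion=0.05):
    """jobs: (length, expected_exec_time, expected_bandwidth);
    vms: (cpu, memory, available_bandwidth). Returns (mapping, failed)."""
    job_exp = normalize(jobs)            # each job's share of the total demand
    vm_cap = normalize(vms)              # each VM's share of the total capacity
    mapping, failed = {}, []
    for j, exp in enumerate(job_exp):
        # Choose the VM whose normalized capacity profile is closest to the job's expectation.
        distances = [sum((exp[i] - cap[i]) ** 2 for i in range(3)) for cap in vm_cap]
        best = min(range(len(vms)), key=distances.__getitem__)
        if distances[best] < criterion:  # assumed mapping criterion
            mapping[j] = best
        else:
            failed.append(j)             # feedback step: reallocate these later
    return mapping, failed

jobs = [(400, 20, 10), (150, 8, 5), (900, 45, 25)]
vms = [(4, 8, 100), (8, 16, 200), (2, 4, 50)]
print(dtsca_like_map(jobs, vms))
```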


2020, Vol 178, pp. 375-385. Author(s): Ismail Zahraddeen Yakubu, Zainab Aliyu Musa, Lele Muhammed, Badamasi Ja’afaru, Fatima Shittu, ...
