Comparison of Algorithms for Workflow Applications in Cloud Computing

Author(s):  
Sriperambuduri Vinay Kumar ◽  
M. Nagaratna

The cloud computing model has evolved to deliver resources on a pay-per-use basis to businesses, service providers, and end-users. Workflow scheduling has become one of the research trends in cloud computing, as many applications in scientific, business, and big data processing can be expressed in the form of a workflow. Scheduling aims to execute scientific or synthetic workloads on the cloud while utilizing resources efficiently and meeting QoS requirements such as makespan, energy consumption, and cost. There has been extensive research in this area on scheduling workflow applications in distributed environments and on executing background tasks in IoT, event-driven, and web applications. This paper presents a comprehensive survey and classification of workflow scheduling algorithms designed for the cloud.
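As a hedged illustration of the makespan objective that the surveyed algorithms optimize (not an example from the paper itself), the sketch below computes the makespan of a workflow DAG once each task has been assigned to a VM; the task graph, runtimes, and assignment are hypothetical.

```python
# Minimal makespan computation for a scheduled workflow DAG (illustrative only).
# Tasks, runtimes, and VM assignments below are hypothetical examples.

from collections import defaultdict

# Workflow DAG: task -> list of predecessor tasks
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
runtime = {"A": 3.0, "B": 2.0, "C": 4.0, "D": 1.0}        # execution time per task
vm_of = {"A": "vm1", "B": "vm1", "C": "vm2", "D": "vm1"}  # assumed task-to-VM mapping

def makespan(deps, runtime, vm_of):
    order, seen = [], set()          # topological order via simple DFS
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for p in deps[t]:
            visit(p)
        order.append(t)
    for t in deps:
        visit(t)

    finish = {}                      # finish time of each task
    vm_free = defaultdict(float)     # time at which each VM becomes free
    for t in order:
        ready = max((finish[p] for p in deps[t]), default=0.0)
        start = max(ready, vm_free[vm_of[t]])
        finish[t] = start + runtime[t]
        vm_free[vm_of[t]] = finish[t]
    return max(finish.values())

print(makespan(deps, runtime, vm_of))  # overall completion time (makespan)
```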

2021 ◽  
Author(s):  
Kyle Chard

The computational landscape is littered with islands of disjoint resource providers, including commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated due to a lack of communication and coordination; they are also often proprietary, without standardised interfaces, protocols, or execution environments. The lack of standardisation and global transparency has the effect of binding consumers to individual providers. With the increasing ubiquity of computation providers there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. In order to realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation in which users can provision resources from a range of heterogeneous (service) providers.

Efficient resource allocation is difficult in large-scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain but have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level and are able to submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis and are therefore concerned only with their own allocations, essentially competing against one another. In a federated environment, the widespread adoption of utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment. The use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation.

The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO) based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to the VO. These contributed services are in effect membership “dues” and are used in the running of the VO's operations, for example allocation, advertising, and general management. DRIVE is independent of a particular class of provider (Service, Grid, or Cloud) and of any specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios. Protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol. In a global utility Grid no such trust exists; the same meta-scheduler architecture can be used with a secure protocol which ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation. A contract describes the individual requirements and obligations of each party. A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources.

This thesis presents several other contributions related to meta-scheduling and open service markets. To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated. Each strategy is shown to improve occupancy, utilisation, and profit using synthetic workloads based on a production Grid trace. The gRAVI service wrapping toolkit is presented to address the difficulty of web-enabling existing applications. The gRAVI toolkit has been extended for this thesis such that it creates economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution of this thesis is the definition and architecture of a Social Cloud: a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a Social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.
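The abstract describes economic, protocol-independent allocation without detailing any specific protocol. Purely as a hedged illustration of the simplest kind of economic protocol such a meta-scheduler might plug in, the sketch below runs a sealed-bid reverse auction among hypothetical providers; it is not DRIVE's actual negotiation protocol, and the provider names, bids, and cost model are assumptions.

```python
# Illustrative sealed-bid reverse auction for allocating one job to a provider.
# This is NOT the DRIVE protocol; providers, bids, and the cost model are hypothetical.

from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price: float      # price asked to run the job
    capacity: int     # free slots the provider currently has

def allocate(job_slots: int, bids: list[Bid]) -> Bid | None:
    """Pick the cheapest provider that can host the job (lowest price wins)."""
    feasible = [b for b in bids if b.capacity >= job_slots]
    return min(feasible, key=lambda b: b.price, default=None)

bids = [
    Bid("grid-site-a", price=12.0, capacity=8),
    Bid("cloud-b", price=9.5, capacity=4),
    Bid("cluster-c", price=11.0, capacity=16),
]

winner = allocate(job_slots=6, bids=bids)
print(winner)  # cluster-c wins: cloud-b is cheaper but lacks capacity
```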


The cloud/utility computing model requires dynamic assignment of tasks to cloud sites so that performance and demand handling are as effective as possible. Efficient load balancing and proper allocation of resources are vital for improving the performance of different services and making proper use of existing resources in the cloud computing environment. Cloud-based infrastructure faces numerous kinds of load concerns, such as CPU load, server load, memory load, and network load. An appropriate load balancing system therefore helps with detecting failures, reducing backlog, adapting to change, distributing resources properly, and improving reliability and client satisfaction in a distributed environment. This work reviews various popular load balancing algorithms. Modified round-robin algorithms are widely employed by large companies for scheduling and load balancing. An enhanced weighted round-robin algorithm is discussed, concentrating on efficient load balancing, effective task scheduling, and resource management.
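As a hedged illustration of the weighted round-robin idea the abstract refers to (not the enhanced variant it proposes), the sketch below cycles through servers in proportion to assumed weights; the server names and weights are hypothetical.

```python
# Minimal weighted round-robin dispatcher (illustrative, not the paper's enhanced algorithm).
# Servers with a higher weight receive proportionally more requests.

import itertools

def weighted_round_robin(servers):
    """servers: list of (name, weight) pairs; yields server names in weighted rotation."""
    expanded = [name for name, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

# Hypothetical pool: vm-a is assumed to be twice as powerful as vm-b or vm-c
pool = [("vm-a", 2), ("vm-b", 1), ("vm-c", 1)]
dispatcher = weighted_round_robin(pool)

for request_id in range(8):
    print(request_id, "->", next(dispatcher))
# vm-a handles 2 of every 4 requests; vm-b and vm-c handle one each
```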


Author(s):  
N. Krishnadas ◽  
R. Radhakrishna Pillai

Cloud Computing is emerging as a promising, cost-efficient computing paradigm which professionals believe is an entirely new trend that will represent the next level of Internet evolution. Though Cloud computing is ubiquitous, there is still no consensus on a proper definition and classification of the major Clouds in effect today. It also faces major criticism of being hype or a fad, and some researchers claim that it is just an extension of already established computing paradigms. This chapter attempts to address such criticisms by comprehensively analyzing Cloud definitions and diagnosing their components. It performs a comprehensive study of more than 30 definitions given by Cloud computing professionals and published in research papers. These definitions are then analyzed under more than fifteen components, each of which is discussed in the chapter. The study is backed by empirical work to understand Cloud computing from different angles and arrive at a comprehensive definition. It also analyzes present Cloud service providers and the level of services they provide to present a clear picture of Cloud computing. Based on this comparison, open issues in Cloud computing are discussed.



2014 ◽  
Vol 3 (2) ◽  
pp. 55-62 ◽  
Author(s):  
Arezoo Jahani ◽  
Leyli Mohammad Khanli ◽  
Seyed Naser Razavi

Cloud computing is a computing model that promises access to information resources on demand and on a subscription basis. In this environment, there are different types of user applications with different requirements. In addition, there are different cloud service providers that offer separate services with various quality attributes. Therefore, determining the best cloud computing service for users with specific applications is a serious problem. A service ranking system compares the different services based on quality of service (QoS) in order to select the most appropriate one. In this paper, we propose a W_SR (Weight Service Rank) approach for cloud service ranking that uses QoS features. Comprehensive experiments are conducted on a real-world QoS dataset covering more than 2500 web services around the world. The experimental results show that the execution time of our approach is lower than that of other approaches, and that it is more flexible and scalable as the number of services or users increases.
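As a hedged sketch of how weighted QoS ranking can work in general (the exact W_SR formulation is not given in the abstract), the example below normalises each QoS attribute and ranks services by a weighted sum; the services, attributes, and weights are hypothetical.

```python
# Illustrative weighted-sum QoS ranking (a generic sketch, not the paper's exact W_SR method).
# Higher-is-better attributes are normalised to [0, 1]; lower-is-better ones are inverted.

services = {  # hypothetical QoS measurements
    "svc-1": {"response_time": 120.0, "availability": 0.99,  "throughput": 45.0},
    "svc-2": {"response_time": 80.0,  "availability": 0.97,  "throughput": 60.0},
    "svc-3": {"response_time": 200.0, "availability": 0.999, "throughput": 30.0},
}
weights = {"response_time": 0.5, "availability": 0.3, "throughput": 0.2}  # assumed user weights
lower_is_better = {"response_time"}

def rank(services, weights):
    scores = {}
    for name, qos in services.items():
        score = 0.0
        for attr, w in weights.items():
            vals = [s[attr] for s in services.values()]
            lo, hi = min(vals), max(vals)
            norm = 0.5 if hi == lo else (qos[attr] - lo) / (hi - lo)
            if attr in lower_is_better:
                norm = 1.0 - norm      # invert so that lower raw values score higher
            score += w * norm
        scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(services, weights):
    print(f"{name}: {score:.3f}")      # best-ranked service printed first
```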


2020 ◽  
Vol 26 (6) ◽  
pp. 40-51
Author(s):  
Muhammad Faraz Manzoor ◽  
Adnan Abid ◽  
Muhammad Shoaib Farooq ◽  
Naeem A. Azam ◽  
Uzma Farooq

Cloud computing has become a very important computing model for processing data and executing computationally intensive applications on a pay-per-use basis. Resource allocation is the process in which resources are allocated to consumers by cloud providers based on their flexible requirements. As data volumes expand every day, allocating resources efficiently according to consumer demand has also become very important, keeping the Service Level Agreement (SLA) between service providers and consumers in view. This task of resource allocation becomes more challenging due to finite available resources and increasing consumer demands. Therefore, many unique models and techniques have been proposed to allocate resources efficiently. Across these models and techniques, the main aim of resource allocation is to limit the overhead and expense associated with it. This research presents a comprehensive, structured literature review on different aspects of resource allocation in cloud computing, including strategy, target resources, optimization, scheduling, and power. More than 50 articles on resource allocation in cloud computing, published between 2007 and 2019, have been shortlisted through a structured mechanism and reviewed under clearly defined objectives. The review presents a topical taxonomy of resource allocation dimensions, and articles under each category are discussed and analysed. Lastly, salient future directions in this area are discussed.


2015 ◽  
Vol 14 (10) ◽  
pp. 6176-6183
Author(s):  
S.J. Mohana ◽  
Dr. M. Saroja ◽  
Dr. M. Venkatachalam

Cloud computing is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers. This technological trend has enabled the realization of a new computing model in which shared resources, information, software, and other devices are provided according to client requirements at a specific time, as general utilities that can be leased and released by users through the Internet in an on-demand fashion. Cloud workflow scheduling is an NP-hard optimization problem, and many meta-heuristic algorithms have been proposed to solve it. Allocating resources to a large number of workflows in a cloud computing environment presents more difficulty than in network computing environments. A good task scheduler should adapt its scheduling strategy to the changing environment and the types of tasks. In this work, a modified ant colony optimization algorithm for cloud task scheduling is proposed. The goal of the modification is to enhance the performance of the basic ant colony optimization algorithm and optimize task execution time with a view to minimizing the makespan of a given task set.
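As a hedged sketch of the basic ant colony optimization idea for mapping independent tasks to VMs (not the modified algorithm proposed above), the example below builds solutions probabilistically from pheromone and a speed heuristic, then reinforces good assignments; task lengths, VM speeds, and parameters are assumptions.

```python
# Minimal ant colony optimization for mapping independent tasks to VMs (illustrative only;
# this is the basic ACO pattern, not the paper's modified algorithm).

import random

task_len = [400, 250, 900, 300, 650, 500]   # hypothetical task lengths (MI)
vm_speed = [200, 350, 500]                  # hypothetical VM speeds (MIPS)
n_tasks, n_vms = len(task_len), len(vm_speed)

ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 100.0  # pheromone weight, heuristic weight, evaporation, deposit
pheromone = [[1.0] * n_vms for _ in range(n_tasks)]

def makespan(assign):
    load = [0.0] * n_vms
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_speed[v]
    return max(load)

def build_solution():
    assign = []
    for t in range(n_tasks):
        # selection probability combines pheromone with a heuristic favouring faster VMs
        weights = [(pheromone[t][v] ** ALPHA) * (vm_speed[v] ** BETA) for v in range(n_vms)]
        assign.append(random.choices(range(n_vms), weights=weights)[0])
    return assign

best, best_ms = None, float("inf")
for _ in range(50):                          # iterations
    ants = [build_solution() for _ in range(10)]
    for a in ants:
        ms = makespan(a)
        if ms < best_ms:
            best, best_ms = a, ms
    # evaporate, then deposit pheromone proportional to solution quality
    for t in range(n_tasks):
        for v in range(n_vms):
            pheromone[t][v] *= (1 - RHO)
    for a in ants:
        for t, v in enumerate(a):
            pheromone[t][v] += Q / makespan(a)

print("best assignment:", best, "makespan:", round(best_ms, 2))
```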


Author(s):  
Savo Stupar ◽  
Mirha Bičo Ćar ◽  
Elvir Šahić

Cloud computing is a new technology that realizes the old idea that computer data processing can be delivered and charged for as a service. The goal of this chapter is to define the basics of the cloud computing concept and how this technology works, and then to explain where all the data used by cloud computing resides, how it is distributed, how and to what extent it is available, who the ultimate beneficiaries are, and what the advantages and disadvantages of applying this concept are. The cloud computing model consists of five essential characteristics, three service models, and four deployment models, and the elaboration of these concepts is an integral part of this chapter. Particular attention is paid to the possibility of data abuse in cloud computing and, relatedly, to protecting data against manipulation by service providers, as well as to the financial aspects of cloud computing.


Rapid growth in the use of cloud-based services brings many challenges, and providers of such services are looking for the best solutions to them. Users send requests for particular tasks, so at any given time many requests for different tasks arrive from different users; completing all of these tasks within a particular time is one of the biggest challenges in cloud computing and is addressed by workflow scheduling. In parallel, other key concerns such as better utilization of resources, reduced makespan, and energy saving are considered as QoS parameters. Researchers have conducted a great deal of QoS-parameter-oriented research to make scheduling efficient. The aim of this review paper is to provide a systematic review and a better understanding of ongoing research on the different approaches, as service providers are always looking for better solutions to these challenges in order to maintain service levels.


Author(s):  
Dinesh Kumar ◽  
Dr. Sunil Kumar

Cloud computing is the most recent emerging trend in distributed computing that delivers hardware infrastructure and software applications as services. Clients can consume these services based on an SLA which defines their required QoS parameters. By using cloud computing it is possible to reduce the investment in various resources such as computer hardware and software. The applications or processes hosted and executed on clouds consist of sets of tasks, and these tasks form a workflow. Therefore, task scheduling is a major issue, as resource usage has to be maximized without affecting the services facilitated by the cloud. Tasks are assigned to different virtual machines for execution, a process termed job scheduling. In the scheduling process, inter-dependent tasks are mapped to and managed on the distributed resources. For additional improvement, this paper proposes a hybrid optimization algorithm for workflow scheduling (HOWS) in a cloud environment. The first contribution of the proposed algorithm is a bees mating optimization (BMO) component, used to share physical infrastructure so that multiple service providers can optimize scheduling. The second contribution is a bacterial evolutionary algorithm component, used to provide flexible access to resources in order to optimize network resources. Combined, the hybrid optimization algorithm provides better task scheduling and more optimal resource allocation. The results and performance analysis show that the proposed technique performs efficiently in terms of energy efficiency and scalability without compromising security. The performance is evaluated using the CloudSim tool.
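The abstract names BMO and a bacterial evolutionary algorithm but gives no implementation detail. Purely as a hedged illustration of the general pattern of chaining two metaheuristic phases for task-to-VM mapping (a generic sketch, not HOWS itself), the example below seeds a local mutation-based refinement phase with the best assignment found by a global random-sampling phase; task lengths, VM speeds, and phase names are assumptions.

```python
# Generic two-phase hybrid metaheuristic for task-to-VM mapping (illustrative only;
# NOT the paper's HOWS/BMO/bacterial algorithm, just the chaining pattern).

import random

task_len = [320, 540, 150, 700, 410, 260]   # hypothetical task lengths
vm_speed = [250, 400, 600]                  # hypothetical VM speeds

def makespan(assign):
    load = [0.0] * len(vm_speed)
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_speed[v]
    return max(load)

def global_phase(samples=200):
    """Phase 1: broad exploration by random sampling of assignments."""
    return min(
        ([random.randrange(len(vm_speed)) for _ in task_len] for _ in range(samples)),
        key=makespan,
    )

def local_phase(assign, steps=500):
    """Phase 2: refine the phase-1 result by single-task reassignment (mutation)."""
    best, best_ms = list(assign), makespan(assign)
    for _ in range(steps):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
        if (ms := makespan(cand)) < best_ms:
            best, best_ms = cand, ms
    return best, best_ms

seed = global_phase()
assignment, ms = local_phase(seed)
print("assignment:", assignment, "makespan:", round(ms, 2))
```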

