DRIVE: A Distributed Economic Meta-Scheduler for the Federation of Grid and Cloud Systems

2021 ◽  
Author(s):  
Kyle Chard

The computational landscape is littered with islands of disjoint resource providers, including commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated due to a lack of communication and coordination; they are also often proprietary, without standardised interfaces, protocols, or execution environments. The lack of standardisation and global transparency has the effect of binding consumers to individual providers. With the increasing ubiquity of computation providers there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. In order to realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation in which users can provision resources from a range of heterogeneous (service) providers. Efficient resource allocation is difficult in large-scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain, but have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level and are able to submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis and are therefore concerned only with their own allocations, essentially competing against one another. In a federated environment, the widespread adoption of utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment. The use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation. The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO)-based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to the VO. These contributed services are in effect membership “dues” and are used in the running of the VO's operations – for example allocation, advertising, and general management. DRIVE is independent of any particular class of provider (Service, Grid, or Cloud) and of any specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios. Protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol. In a global utility Grid no such trust exists; the same meta-scheduler architecture can instead be used with a secure protocol which ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation. A contract describes the individual requirements and obligations of each party.
A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources. This thesis presents several other contributions related to meta-scheduling and open service markets. To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated. Each strategy is shown to improve occupancy, utilisation, and profit using synthetic workloads based on a production Grid trace. The gRAVI service-wrapping toolkit is presented to address the difficulty of web-enabling existing applications. The gRAVI toolkit has been extended for this thesis such that it creates economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution of this thesis is the definition and architecture of a Social Cloud – a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.
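As a rough illustration of how an economic meta-scheduler of this kind might allocate a job, the following Python sketch runs a sealed-bid reverse auction over a set of provider bids and forms a contract in two stages (tentative at allocation time, confirmed at dispatch). The class names, the lowest-price rule, and the confirmation check are illustrative assumptions, not DRIVE's actual protocol.

```python
from dataclasses import dataclass

# Illustrative sketch of a sealed-bid reverse auction with a two-stage contract:
# a tentative agreement is formed at allocation time and only confirmed when the
# job is dispatched, limiting the impact of allocation latency.
# Names and rules are assumptions, not DRIVE's protocol.

@dataclass
class Bid:
    provider: str
    price: float          # price asked to run the job
    capacity_ok: bool     # provider claims it can meet the job requirements

@dataclass
class Contract:
    provider: str
    job_id: str
    price: float
    confirmed: bool = False   # stage 1: tentative, stage 2: confirmed

def allocate(job_id: str, bids: list[Bid]) -> Contract | None:
    """Stage 1: pick the cheapest eligible bid and form a tentative contract."""
    eligible = [b for b in bids if b.capacity_ok]
    if not eligible:
        return None
    winner = min(eligible, key=lambda b: b.price)
    return Contract(provider=winner.provider, job_id=job_id, price=winner.price)

def confirm(contract: Contract, provider_still_available: bool) -> bool:
    """Stage 2: confirm the contract at dispatch; the provider may decline if
    its state changed since the tentative agreement."""
    contract.confirmed = provider_still_available
    return contract.confirmed

bids = [Bid("cloud-a", 0.12, True), Bid("grid-b", 0.09, True), Bid("cluster-c", 0.15, False)]
c = allocate("job-42", bids)
if c and confirm(c, provider_still_available=True):
    print(f"job-42 allocated to {c.provider} at {c.price}")
```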


Data ◽  
2018 ◽  
Vol 3 (4) ◽  
pp. 38 ◽  
Author(s):  
Altaf Hussain ◽  
Muhammad Aleem

Developers of resource-allocation and scheduling algorithms share test datasets (i.e., benchmarks) to enable others to compare the performance of newly developed algorithms. However, real cloud datasets are often hard to acquire due to users' data-confidentiality concerns and the policies maintained by Cloud Service Providers (CSPs). The availability of large-scale test datasets depicting the realistic high-performance computing requirements of cloud users is very limited. A publicly available real cloud dataset would therefore significantly encourage other researchers to compare and benchmark their applications using an open-source benchmark. To meet these objectives, the contemporary state of the art has been scrutinized to explore real workload behavior in Google cluster traces. Starting from smaller- to moderate-size cloud computing infrastructures, the dataset generation process is demonstrated using the Monte Carlo simulation method to produce the Google Cloud Jobs (GoCJ) dataset based on the analysis of Google cluster traces. With this article, the dataset is made publicly available to enable other researchers in the field to investigate and benchmark their scheduling and resource-allocation schemes for the cloud. The GoCJ dataset is archived and available in the Mendeley Data repository.
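The generation method described above can be illustrated with a short Monte Carlo sketch: job sizes are drawn from a discrete distribution over job classes intended to mimic the mix observed in Google cluster traces. The class names, probabilities, and size ranges below are placeholder assumptions rather than the values derived in the article.

```python
import random

# Minimal sketch of Monte Carlo dataset generation in the spirit of GoCJ:
# pick a job class according to its observed frequency, then draw a job size
# (in million instructions) uniformly within that class's range.
# Bins, probabilities, and ranges are illustrative placeholders.

JOB_CLASSES = [
    # (class name, probability, size range in million instructions)
    ("small",       0.40, (15_000,   55_000)),
    ("medium",      0.30, (59_000,   99_000)),
    ("large",       0.20, (101_000, 135_000)),
    ("extra-large", 0.10, (150_000, 337_500)),
]

def generate_gocj_like_dataset(n_jobs: int, seed: int = 42) -> list[int]:
    """Monte Carlo sampling of a synthetic job-size dataset."""
    rng = random.Random(seed)
    names, probs, ranges = zip(*JOB_CLASSES)
    sizes = []
    for _ in range(n_jobs):
        cls = rng.choices(range(len(names)), weights=probs)[0]
        lo, hi = ranges[cls]
        sizes.append(rng.randint(lo, hi))
    return sizes

jobs = generate_gocj_like_dataset(1000)
print(f"generated {len(jobs)} jobs, min={min(jobs)} MI, max={max(jobs)} MI")
```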


Author(s):  
Olexander Melnikov ◽  
Konstantin Petrov ◽  
Igor Kobzev ◽  
Viktor Kosenko ◽  
...  

The article considers the development and implementation of cloud services in the work of government agencies. A classification for choosing cloud service providers is offered, which can serve as a basis for decision making. The basics of cloud computing technology are analyzed. The COVID-19 pandemic has highlighted the benefits of cloud services for remote work, and government agencies at all levels need to move to cloud infrastructure. The article analyzes the prospects of cloud computing in Ukraine as the basis of the e-governance under development, which requires a flexible, large-scale, and economical technological base for the rapid provision of quality services. Moving electronic information interaction to the cloud makes it possible to reach a wide range of users at relatively low material cost. Automating processes and transferring them to the cloud environment makes it possible to speed up service provision and to minimize the time citizens need to obtain particular information. The article also lists the risks that exist in the transition to cloud services and the shortcomings that may arise in the process of using them.


2021 ◽  
Vol 12 (1) ◽  
pp. 74-83
Author(s):  
Manjunatha S. ◽  
Suresh L.

A data center is a cost-effective infrastructure for storing large volumes of data and hosting large-scale service applications. Cloud computing service providers are rapidly deploying data centers across the world with huge numbers of servers and switches. These data centers consume significant amounts of energy, contributing to high operational costs. Optimizing the energy consumption of servers and networks in data centers can therefore reduce operational costs. In a data center, power consumption is mainly due to servers, networking devices, and cooling systems. An effective energy-saving strategy is to consolidate computation and communication onto a smaller number of servers and network devices and then power off as many unneeded servers and network devices as possible.
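A minimal sketch of the consolidation idea, assuming a simple first-fit-decreasing packing of normalised VM loads onto identical servers; the capacities and loads are arbitrary example values, not a model of any particular data center.

```python
# Illustrative sketch of consolidation: pack workloads (e.g., VMs) onto as few
# servers as possible with first-fit decreasing, so the remaining servers can
# be powered off. Capacities and loads are arbitrary example values.

def consolidate(loads: list[float], capacity: float) -> list[list[float]]:
    """First-fit decreasing bin packing: returns the per-server placement."""
    servers: list[list[float]] = []
    for load in sorted(loads, reverse=True):
        for placed in servers:
            if sum(placed) + load <= capacity:
                placed.append(load)
                break
        else:
            servers.append([load])   # "power on" a new server only when needed
    return servers

vm_loads = [0.6, 0.3, 0.2, 0.5, 0.4, 0.1, 0.7, 0.2]   # normalised CPU demand
total_servers = 8                                      # one VM per server today
placement = consolidate(vm_loads, capacity=1.0)
print(f"active servers: {len(placement)}, can power off: {total_servers - len(placement)}")
```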


Author(s):  
Rosiah Ho

Cloud Computing is a prevalent issue for organizations nowadays. Different service providers are starting to roll out their Cloud services to organizations in both the commercial and industrial sectors. For an enterprise, the basic value proposition of Cloud Computing includes, but is not limited to, outsourcing the in-house computing infrastructure without capitalizing the investment needed to build and maintain it. Challenges never cease in striking a balance between Cloud deployment and the need to meet the continual rise in demand for computing resources. Cloud Computing becomes a strategic tool for an enterprise to increase its competitive advantage and to survive in the market. To reconcile this conflict, IT leaders must find a new IT operating model which enhances business agility and scalability and shifts away from traditional capital-intensive IT investments.


2015 ◽  
pp. 749-781
Author(s):  
João Barreto ◽  
Pierangelo Di Sanzo ◽  
Roberto Palmieri ◽  
Paolo Romano

By shifting data and computation away from local servers towards very large-scale, globally distributed data centers, Cloud Computing promises very compelling benefits for both cloud consumers and cloud service providers: freeing corporations from large IT capital investments via usage-based pricing schemes, drastically lowering barriers to entry and capital costs; leveraging the economies of scale for both service providers and users of the cloud; facilitating the deployment of services; and attaining unprecedented scalability levels. However, the promise of infinite scalability catalyzing much of the recent hype about Cloud Computing is still menaced by one major pitfall: the lack of programming paradigms and abstractions capable of bringing the power of parallel programming into the hands of ordinary programmers. This chapter describes Cloud-TM, a self-optimizing middleware platform aimed at simplifying the development and administration of applications deployed on large-scale Cloud Computing infrastructures.
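The snippet below is a hypothetical illustration (not Cloud-TM's actual API) of the kind of abstraction the chapter argues for: the programmer only declares that a section is atomic, while a runtime is responsible for concurrency control and, in a real platform, for distribution and self-optimisation. A plain lock stands in for that runtime here.

```python
from threading import Lock

# Hypothetical illustration of a transactional-style abstraction: the code
# inside the "with Atomic()" block is executed atomically, and the runtime
# (here trivially a single lock) hides the concurrency-control details.

class Atomic:
    """Stand-in for a (distributed) transactional context manager."""
    _lock = Lock()

    def __enter__(self):
        Atomic._lock.acquire()
        return self

    def __exit__(self, exc_type, exc, tb):
        Atomic._lock.release()
        return False   # re-raise exceptions, i.e. "abort" the transaction

account = {"balance": 100}

def withdraw(amount: int) -> bool:
    with Atomic():                       # the programmer only declares atomicity
        if account["balance"] >= amount:
            account["balance"] -= amount
            return True
        return False

print(withdraw(30), account["balance"])  # True 70
```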


Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5706
Author(s):  
Muhammad Shuaib Qureshi ◽  
Muhammad Bilal Qureshi ◽  
Muhammad Fayaz ◽  
Muhammad Zakarya ◽  
Sheraz Aslam ◽  
...  

Cloud computing is the de facto platform for deploying resource- and data-intensive real-time applications due to the collaboration of large-scale resources operating in cross-administrative domains. For example, real-time data are generated by smart devices (e.g., sensors in smart homes that monitor surroundings in real time, security cameras that produce video streams in real time, cloud gaming, social media streams, etc.). Such low-end devices form a microgrid with low computational and storage capacity and hence offload data onto the cloud for processing. Cloud computing still lacks mature time-oriented scheduling and resource allocation strategies that thoroughly consider stringent QoS requirements. Traditional approaches are sufficient only when applications have real-time and data constraints and cloud storage resources are co-located with computational resources, so that the data are locally available for task execution. Such approaches mainly focus on resource provisioning and latency, and are prone to missing deadlines during task execution due to the urgency of the tasks and limited user budget constraints. The timing and data requirements exacerbate the task scheduling and resource allocation problems. To cope with the aforementioned gaps, we propose a time- and cost-efficient resource allocation strategy for smart systems that periodically offload computational and data-intensive load to the cloud. The proposed strategy minimizes the data-file transfer overhead to computing resources by selecting appropriate pairs of computing and storage resources. The results show the effectiveness of the proposed technique in terms of resource selection and task processing within time and budget constraints when compared with other counterparts.
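A minimal sketch of the pairing idea, assuming a task with a known size, data volume, deadline, and budget: enumerate (compute, storage) pairs, account for data-transfer and execution time, and keep the cheapest pair that satisfies both constraints. The resource parameters and the simple cost model are illustrative assumptions, not the paper's formulation.

```python
from dataclasses import dataclass
from itertools import product

# Sketch: pick the (compute, storage) pair that meets the task's deadline and
# budget at the lowest cost, accounting for data-transfer time from storage to
# the compute resource. All parameters are illustrative.

@dataclass
class Compute:
    name: str
    mips: float            # processing speed (million instructions per second)
    price_per_sec: float

@dataclass
class Storage:
    name: str
    bandwidth_to: dict     # MB/s to each compute resource, keyed by name

def select_pair(task_mi, data_mb, deadline_s, budget, computes, storages):
    best = None
    for c, s in product(computes, storages):
        transfer = data_mb / s.bandwidth_to[c.name]   # data-transfer time
        execute = task_mi / c.mips                    # execution time
        time = transfer + execute
        cost = execute * c.price_per_sec
        if time <= deadline_s and cost <= budget:
            if best is None or cost < best[2]:
                best = (c.name, s.name, cost, time)
    return best   # None if no pair satisfies both constraints

computes = [Compute("vm-small", 2000, 0.01), Compute("vm-large", 8000, 0.05)]
storages = [Storage("store-eu", {"vm-small": 50, "vm-large": 200}),
            Storage("store-us", {"vm-small": 100, "vm-large": 20})]
print(select_pair(task_mi=400_000, data_mb=2_000, deadline_s=120, budget=5.0,
                  computes=computes, storages=storages))
```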


Author(s):  
Manasa Jonnagadla

Abstract: Cloud computing provides streamlined tools for exceptional business efficiency. Cloud service providers typically offer two types of plans: reserved and on-demand. Reserved plans provide low-cost long-term contracts, while on-demand plans are more expensive but available for short periods. Cloud resources must be provisioned wisely to meet current customer demands. Many existing works rely on low-cost reserved-resource strategies, which may lead to under- or over-provisioning. Resource allocation has become a difficult issue because mismatched provisioning causes high availability costs and cloud demand is highly variable. This article suggests a hybrid approach to allocating cloud resources to complex customer orders. The strategy is built in two stages: a reservation stage and a flexible on-demand stage. By treating each stage as an optimization problem, the overall provisioning cost can be reduced while maintaining service quality. Due to the uncertain nature of cloud requests, a stochastic optimization-based approach is adopted. The technique is used to assign individual cloud resources, and the results show its effectiveness.
Keywords: Cloud computing, Resource allocation, Demand
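A minimal sketch of the reserved/on-demand trade-off, assuming fixed unit prices and a known hourly demand trace: reserved capacity is paid for every hour, and any excess demand is served by more expensive on-demand instances. The article's stochastic optimisation over uncertain demand is not reproduced here; a stochastic variant would choose the reservation level minimising expected cost over many sampled demand traces.

```python
# Sketch of the two-plan cost trade-off: reserve capacity up front at a lower
# unit price, then cover any excess demand with more expensive on-demand
# instances. Prices and demand are example values only.

RESERVED_PRICE = 0.05    # cost per instance-hour, paid for every reserved instance
ON_DEMAND_PRICE = 0.12   # cost per instance-hour, paid only when actually used

def provisioning_cost(reserved: int, demand_per_hour: list[int]) -> float:
    """Total cost of one demand trace for a given number of reserved instances."""
    cost = 0.0
    for demand in demand_per_hour:
        on_demand = max(0, demand - reserved)      # overflow served on demand
        cost += reserved * RESERVED_PRICE + on_demand * ON_DEMAND_PRICE
    return cost

demand = [4, 6, 10, 7, 3, 12, 8, 5]               # instances needed each hour
# Sweep the reservation level to see under- vs over-provisioning effects.
for r in range(0, 13, 4):
    print(f"reserved={r:2d}  cost={provisioning_cost(r, demand):6.2f}")
```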


Author(s):  
Dr. Manish Jivtode

Cloud computing is viewed as one of the most promising technologies in computing today. It is a new concept of large-scale distributed computing that provides an open platform for every user on a pay-per-use basis. Cloud computing provides a number of interfaces and APIs for interacting with the services offered to users. With the development of distributed web-service applications, the security of data across the various layers of distributed computing is another important subject. This study describes the security of data accessed in a distributed environment across these layers.

