Cloud Computing in a Distributed Environment Implemented with Networking Technologies

Author(s):  
S. Sai Satyanarayana Reddy


2021
Author(s):
Kyle Chard

The computational landscape is littered with islands of disjoint resource providers, including commercial Clouds, private Clouds, national Grids, institutional Grids, clusters, and data centers. These providers are independent and isolated due to a lack of communication and coordination; they are also often proprietary, without standardised interfaces, protocols, or execution environments. The lack of standardisation and global transparency has the effect of binding consumers to individual providers. With the increasing ubiquity of computation providers there is an opportunity to create federated architectures that span both Grid and Cloud computing providers, effectively creating a global computing infrastructure. In order to realise this vision, secure and scalable mechanisms to coordinate resource access are required. This thesis proposes a generic meta-scheduling architecture to facilitate federated resource allocation in which users can provision resources from a range of heterogeneous (service) providers. Efficient resource allocation is difficult in large-scale distributed environments due to the inherent lack of centralised control. In a Grid model, local resource managers govern access to a pool of resources within a single administrative domain but have only a local view of the Grid and are unable to collaborate when allocating jobs. Meta-schedulers act at a higher level and are able to submit jobs to multiple resource managers; however, they are most often deployed on a per-client basis and are therefore concerned only with their own allocations, essentially competing against one another. In a federated environment the widespread adoption of utility computing models seen in commercial Cloud providers has re-motivated the need for economically aware meta-schedulers. Economies provide a way to represent the different goals and strategies that exist in a competitive distributed environment. The use of economic allocation principles effectively creates an open service market that provides efficient allocation and incentives for participation.

The major contributions of this thesis are the architecture and prototype implementation of the DRIVE meta-scheduler. DRIVE is a Virtual Organisation (VO) based distributed economic meta-scheduler in which members of the VO collaboratively allocate services or resources. Providers joining the VO contribute obligation services to the VO. These contributed services are in effect membership "dues" and are used in the running of the VO's operations – for example allocation, advertising, and general management. DRIVE is independent of any particular class of provider (Service, Grid, or Cloud) and of any specific economic protocol. This independence enables allocation in federated environments composed of heterogeneous providers in vastly different scenarios. Protocol independence facilitates the use of arbitrary protocols based on specific requirements and infrastructural availability. For instance, within a single organisation where internal trust exists, users can achieve maximum allocation performance by choosing a simple economic protocol. In a global utility Grid no such trust exists; there, the same meta-scheduler architecture can be used with a secure protocol which ensures the allocation is carried out fairly in the absence of trust. DRIVE establishes contracts between participants as the result of allocation. A contract describes the individual requirements and obligations of each party. A unique two-stage contract negotiation protocol is used to minimise the effect of allocation latency. In addition, due to the cooperative nature of the architecture and the use of secure, privacy-preserving protocols, DRIVE can be deployed in a distributed environment without requiring large-scale dedicated resources.

This thesis presents several other contributions related to meta-scheduling and open service markets. To overcome the perceived performance limitations of economic systems, four high-utilisation strategies have been developed and evaluated. Each strategy is shown to improve occupancy, utilisation, and profit using synthetic workloads based on a production Grid trace. The gRAVI service wrapping toolkit is presented to address the difficulty of web-enabling existing applications. The gRAVI toolkit has been extended for this thesis such that it creates economically aware (DRIVE-enabled) services that can be transparently traded in a DRIVE market without requiring developer input. The final contribution of this thesis is the definition and architecture of a Social Cloud – a dynamic Cloud computing infrastructure composed of virtualised resources contributed by members of a social network. The Social Cloud prototype is based on DRIVE and highlights the ease with which dynamic DRIVE markets can be created and used in different domains.
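To make the allocation model concrete, the following is a minimal, illustrative sketch of auction-style allocation with a two-stage (tentative, then binding) contract. The class and method names (Provider, bid, commit, negotiate) are hypothetical and do not reflect DRIVE's actual interfaces or protocols; in this toy version the cheapest bid simply wins, whereas DRIVE itself is protocol-independent.

```python
# Illustrative sketch only: auction-style allocation with a two-stage contract
# (tentative award, then binding confirmation). Names are hypothetical and are
# not taken from the DRIVE implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    job_id: str
    cpu_hours: float

@dataclass
class Contract:
    job_id: str
    provider: str
    price: float
    binding: bool = False  # False = tentative (stage 1), True = binding (stage 2)

class Provider:
    def __init__(self, name: str, price_per_hour: float, free_hours: float):
        self.name = name
        self.price_per_hour = price_per_hour
        self.free_hours = free_hours

    def bid(self, job: Job) -> Optional[float]:
        """Return a price bid, or None if the job does not fit."""
        if job.cpu_hours > self.free_hours:
            return None
        return job.cpu_hours * self.price_per_hour

    def commit(self, job: Job) -> bool:
        """Stage 2: re-check capacity and reserve it, making the contract binding."""
        if job.cpu_hours > self.free_hours:
            return False
        self.free_hours -= job.cpu_hours
        return True

def negotiate(job: Job, providers: list) -> Optional[Contract]:
    # Stage 1: collect bids and tentatively award the job to the cheapest provider.
    bids = [(price, p) for p in providers if (price := p.bid(job)) is not None]
    if not bids:
        return None
    price, winner = min(bids, key=lambda b: b[0])
    contract = Contract(job.job_id, winner.name, price)
    # Stage 2: the winner confirms, turning the tentative contract into a binding one.
    contract.binding = winner.commit(job)
    return contract

if __name__ == "__main__":
    pool = [Provider("grid-a", 0.05, 500), Provider("cloud-b", 0.08, 10_000)]
    print(negotiate(Job("job-1", 120.0), pool))
```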


Author(s):  
Punit Gupta ◽  
Ravi Shankar Jha

With the increase of information sharing over the internet or intranet, we require techniques to increase the availability of shared resources when large numbers of users try to access them at the same time. Many techniques have been proposed to make access easier and more secure in a distributed environment. Information retrieval plays an important role in serving the most relevant data with the least waiting time; this chapter discusses such techniques for information retrieval and sharing over the cloud infrastructure. Cloud computing services provide better performance in terms of resource sharing and resource access, with high reliability and scalability under high load.


Author(s):  
Santanu Dam ◽  
Gopa Mandal ◽  
Kousik Dasgupta ◽  
Parmartha Dutta

This book chapter proposes the use of Ant Colony Optimization (ACO), a computational intelligence technique, for balancing the load of virtual machines in cloud computing. Computational intelligence (CI) includes the study of designing bio-inspired artificial agents for finding probable optimal solutions. The central goal of CI can thus be stated as a basic understanding of the principles that allow intelligent behaviour found in nature to be mimicked in artificial systems. The basic idea of ACO is to design an intelligent multi-agent system inspired by the collective behaviour of ants; from the perspective of operations research, it is a meta-heuristic. Cloud computing is one of the emerging technologies; it enables applications to run on virtualized resources over a distributed environment. Despite this, some problems still need to be addressed, including load balancing. The proposed algorithm tries to balance the load and optimize response time by distributing the dynamic workload evenly across the entire system.
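As a rough illustration of how ACO can be applied to this problem, the sketch below assigns tasks to virtual machines using a pheromone value per task-VM pair and a heuristic that prefers the VM that would finish a task earliest. It is a generic ACO formulation under simple assumptions, not the specific algorithm proposed in the chapter.

```python
# Generic ACO sketch for task-to-VM assignment (load balancing by makespan).
import random

def aco_balance(task_lengths, vm_speeds, n_ants=20, n_iters=50,
                alpha=1.0, beta=2.0, rho=0.1):
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    pheromone = [[1.0] * n_vms for _ in range(n_tasks)]
    best_assign, best_makespan = None, float("inf")

    for _ in range(n_iters):
        for _ in range(n_ants):
            loads = [0.0] * n_vms          # work (e.g. MI) placed on each VM
            assign = []
            for t, length in enumerate(task_lengths):
                # Desirability = pheromone^alpha * heuristic^beta, where the
                # heuristic favours the VM that would finish this task earliest.
                weights = []
                for v in range(n_vms):
                    finish = (loads[v] + length) / vm_speeds[v]
                    weights.append((pheromone[t][v] ** alpha) * ((1.0 / finish) ** beta))
                v = random.choices(range(n_vms), weights=weights)[0]
                assign.append(v)
                loads[v] += length
            makespan = max(loads[v] / vm_speeds[v] for v in range(n_vms))
            if makespan < best_makespan:
                best_makespan, best_assign = makespan, assign
        # Evaporate pheromone, then reinforce the best assignment found so far.
        for t in range(n_tasks):
            for v in range(n_vms):
                pheromone[t][v] *= (1.0 - rho)
            pheromone[t][best_assign[t]] += 1.0 / best_makespan
    return best_assign, best_makespan

if __name__ == "__main__":
    tasks = [random.randint(10, 100) for _ in range(30)]  # task lengths
    vms = [1.0, 1.5, 2.0]                                  # relative VM speeds
    mapping, makespan = aco_balance(tasks, vms)
    print("best makespan:", round(makespan, 2))
```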


The cloud/utility computing model requires dynamic task assignment to cloud sites so that performance and demand handling are carried out as effectively as possible. Efficient load balancing and proper allocation of resources are vital for improving the execution of various services and making proper use of existing assets in the cloud computing environment. The cloud-based infrastructure faces numerous kinds of load concerns, such as CPU load, server load, memory drain, and network load. Thus, an appropriate load-balancing system helps in detecting failures, reducing backlog problems, improving adaptability, distributing resources properly, and increasing dependability and client satisfaction in a distributed environment. This paper reviews various popular load-balancing algorithms. Modified round robin algorithms are widely employed by large companies for scheduling and load balancing. An enhanced weighted round robin algorithm is discussed in this paper, concentrating on efficient load balancing, effective task scheduling, and resource management.
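For reference, the following is a minimal sketch of weighted round robin dispatch, here in its "smooth" form so that picks are interleaved rather than bursty. It only illustrates the baseline technique; the enhancements discussed in the paper are not reproduced.

```python
# Minimal smooth weighted round robin: servers receive requests in proportion
# to their weights, with picks interleaved instead of sent in bursts.
class SmoothWeightedRR:
    def __init__(self, weights):
        # weights: dict of server name -> positive integer weight (capacity share)
        self.weights = dict(weights)
        self.current = {name: 0 for name in weights}
        self.total = sum(weights.values())

    def next_server(self):
        # Each server's running score grows by its weight; the highest score
        # wins the request and is penalised by the total weight.
        for name, w in self.weights.items():
            self.current[name] += w
        chosen = max(self.current, key=self.current.get)
        self.current[chosen] -= self.total
        return chosen

if __name__ == "__main__":
    balancer = SmoothWeightedRR({"vm-small": 1, "vm-medium": 2, "vm-large": 4})
    print([balancer.next_server() for _ in range(7)])
```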


Author(s):  
Dr. Manish Jivtode

Cloud computing is viewed as one of the most promising technologies in computing today. It is a new concept of large-scale distributed computing that provides an open platform for every user on a pay-per-use basis. Cloud computing provides a number of interfaces and APIs for interacting with the services offered to users. With the development of distributed web service applications, the security of data has become another important subject across the various layers of distributed computing. This study describes the security of data accessed in a distributed environment across these layers.


2015
pp. 1702-1720
Author(s):  
Yoshito Kanamori ◽  
Minnie Yi-Miin Yen

Cloud computing is changing the way corporate computing operates and forcing the rapid evolution of computing service delivery. It is being facilitated by numerous technological approaches and a variety of business models. By building on the infrastructure of existing computing and networking technologies, different cloud service providers (CSPs) are able to unite their efforts and address a much broader business space. As a result, confusion has emerged and questions have arisen from both the Information Technology (IT) and business communities. How cloud environments differ from traditional models, and how these differences affect their adoption, are of major importance. In this chapter, the authors first clarify misperceptions by introducing the new threats and challenges involved in cloud environments. Specifically, security issues and concerns are depicted in three practical scenarios designed to illuminate the different security problems in each cloud deployment model. The chapter further discusses how to assess and control the concerns and issues pertaining to security and risk management implementations.


The paper presents a model of computational workflows based on end-user understanding and provides an overview of various computational architectures, such as computing clusters, Grid, Cloud Computing, and SOA, for building workflows in a distributed environment. A comparative analysis of the capabilities of these architectures for implementing computational workflows shows that workflows should be implemented on the basis of SOA, since it meets all the requirements for the basic infrastructure and provides a high degree of compute-node distribution, as well as node migration and integration with other systems in a heterogeneous environment. Using the Cloud Computing architecture may be efficient when building a basic information infrastructure for organizing distributed high-performance computing, since it supports the shared and coordinated usage of dynamically allocated distributed resources, allows high-performance computing systems to be created and virtualized in geographically dispersed data centers so that they can independently support the necessary QoS level, and, if necessary, can offer the Software as a Service (SaaS) model to end users. Nevertheless, the advantages of the Cloud Computing architecture do not allow the end user to design business processes automatically, "on the fly". At the same time, there is an obvious need to create semantically oriented computing workflows based on a service-oriented architecture using a microservices approach, ontologies, and metadata structures, which will allow workflows to be created "on the fly" in accordance with the requirements of the current request.
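As a loose illustration of composing a workflow "on the fly" from service metadata, the sketch below forward-chains over hypothetical service descriptors (what each service consumes and produces) until the requested output can be generated. The registry contents and field names are invented for the example and are not taken from the paper.

```python
# Illustrative sketch: planning a workflow from service metadata by forward
# chaining. Service names and data types are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceDescriptor:
    name: str
    consumes: frozenset   # data types the service needs as input
    produces: frozenset   # data types the service yields as output

REGISTRY = [
    ServiceDescriptor("ingest",  frozenset({"raw-file"}),      frozenset({"records"})),
    ServiceDescriptor("cleanse", frozenset({"records"}),       frozenset({"clean-records"})),
    ServiceDescriptor("analyse", frozenset({"clean-records"}), frozenset({"report"})),
]

def plan_workflow(available, goal, registry=REGISTRY):
    """Forward-chain over service metadata until `goal` becomes producible."""
    available = set(available)
    plan = []
    progress = True
    while goal not in available and progress:
        progress = False
        for svc in registry:
            if svc.name not in plan and svc.consumes <= available:
                plan.append(svc.name)
                available |= svc.produces
                progress = True
    return plan if goal in available else None

if __name__ == "__main__":
    print(plan_workflow({"raw-file"}, "report"))  # ['ingest', 'cleanse', 'analyse']
```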


Author(s):  
Sriperambuduri Vinay Kumar ◽
M. Nagaratna

The cloud computing model has evolved to deliver resources on a pay-per-use basis to businesses, service providers, and end users. Workflow scheduling has become one of the research trends in cloud computing, as many applications in scientific, business, and big-data processing can be expressed in the form of a workflow. Scheduling aims to execute scientific or synthetic workloads on the cloud by utilizing resources while meeting QoS requirements such as makespan, energy, and cost. There has been extensive research in this area on scheduling workflow applications in a distributed environment, executing background tasks in IoT applications, and supporting event-driven and web applications. This paper provides a comprehensive survey and classification of workflow scheduling algorithms designed for the cloud.
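To make the scheduling problem concrete, here is a small, generic list-scheduling sketch: tasks in a workflow DAG are visited in topological order and each is placed on the VM that gives it the earliest finish time (a simplified, HEFT-like heuristic). It is not any particular algorithm surveyed in this paper.

```python
# Generic list-based workflow scheduling sketch (earliest-finish-time placement).
def schedule_workflow(tasks, deps, vm_speeds):
    """
    tasks: dict task -> workload (e.g. million instructions)
    deps:  dict task -> list of predecessor tasks
    vm_speeds: list of relative VM speeds
    Returns (assignment, finish_times, makespan).
    """
    vm_ready = [0.0] * len(vm_speeds)  # time each VM becomes free
    finish = {}                        # finish time per task
    assignment = {}
    remaining = dict(tasks)

    # Visit tasks in topological order (all predecessors already scheduled).
    while remaining:
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, []))]
        for t in sorted(ready):
            earliest_start = max([finish[p] for p in deps.get(t, [])], default=0.0)
            # Pick the VM that lets this task finish earliest.
            options = [
                (max(earliest_start, vm_ready[v]) + remaining[t] / vm_speeds[v], v)
                for v in range(len(vm_speeds))
            ]
            end, vm = min(options)
            assignment[t], finish[t], vm_ready[vm] = vm, end, end
            del remaining[t]

    return assignment, finish, max(finish.values())

if __name__ == "__main__":
    tasks = {"prep": 100, "simulate": 400, "render": 200, "report": 50}
    deps = {"simulate": ["prep"], "render": ["prep"], "report": ["simulate", "render"]}
    print(schedule_workflow(tasks, deps, vm_speeds=[1.0, 2.0]))
```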

