Advances in Systems Analysis, Software Engineering, and High Performance Computing - Developing Interoperable and Federated Cloud Architecture
Latest Publications

Total Documents: 11 (Five Years: 0)
H-Index: 2 (Five Years: 0)
Published By: IGI Global
ISBN: 9781522501534, 9781522501541

Author(s): Tamas Pflanzner, Roland Tornyai, Ákos Zoltán Gorácz, Attila Kertesz

Cloud Computing has opened new ways of flexible resource provisioning for businesses that migrate IT applications and data to the cloud in response to new customer demands, and many businesses now plan to take advantage of this flexibility. Cloud Federations envisage a distributed, heterogeneous environment consisting of various cloud infrastructures, aggregating IaaS provider capabilities from both the commercial and academic areas. Recent solutions hide the diversity of multiple clouds and form a unified federation on top of them. Many approaches follow current trends in cloud application development and offer federation capabilities at the platform level, thus creating Platform-as-a-Service (PaaS) solutions. In this chapter the authors investigate the capabilities of PaaS solutions and present a classification of these tools: what levels of developer experience they offer, what types of APIs and developer tools they support, and what web GUIs they provide. Developer experience is measured by creating and executing sample applications with these PaaS tools.
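The classification axes described in the abstract (APIs, developer tools, web GUI, developer experience) can be sketched as a small data model. This is only an illustration of the survey's dimensions; the tool names and attribute values below are hypothetical, not the chapter's actual data.

```python
from dataclasses import dataclass

@dataclass
class PaaSTool:
    name: str
    api_types: list            # e.g. REST, language SDK
    developer_tools: list      # e.g. CLI, IDE plugin
    web_gui: bool
    developer_experience: str  # coarse rating from running a sample app

# Hypothetical catalogue entries for illustration only.
catalogue = [
    PaaSTool("ExamplePaaS-A", ["REST", "Python SDK"], ["CLI"], True, "good"),
    PaaSTool("ExamplePaaS-B", ["REST"], ["CLI", "IDE plugin"], False, "basic"),
]

# Group tools along one survey axis: whether they expose a web GUI.
with_gui = [t.name for t in catalogue if t.web_gui]
print(with_gui)  # ['ExamplePaaS-A']
```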


Author(s): Simon Ostermann, Gabor Kecskemeti, Salman Taherizadeh, Radu Prodan, Thomas Fahringer, ...

ENTICE is an H2020 European project aiming to research and create a novel Virtual Machine (VM) repository and operational environment for federated Cloud infrastructures to: (i) simplify the creation of lightweight and highly optimised VM images tuned for functional descriptions of applications; (ii) automatically decompose and distribute VM images based on multi-objective optimisation (performance, economic costs, storage size, and QoS needs) and a knowledge base and reasoning infrastructure to meet application runtime requirements; and (iii) elastically auto-scale applications on Cloud resources based on their fluctuating load with optimised VM interoperability across Cloud infrastructures and without provider lock-in, in order to finally fulfil the promises that virtualization technology has failed to deliver so far. In this chapter, we give an inside view into the ENTICE project architecture. Based on stakeholders that interact with ENTICE, we describe the different functionalities of the different components and services and how they interact with each other.
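The multi-objective trade-off in goal (ii) can be illustrated with a toy weighted-sum score over the four stated criteria. The weights and candidate plans below are invented for illustration; ENTICE itself relies on a knowledge base and reasoning infrastructure, not this simple scoring.

```python
# Score candidate VM-image distribution plans: higher performance/QoS is
# better (added), higher cost/storage size is worse (subtracted).
def score(plan, weights):
    return (weights["perf"] * plan["perf"]
            + weights["qos"] * plan["qos"]
            - weights["cost"] * plan["cost"]
            - weights["size"] * plan["size"])

weights = {"perf": 0.4, "qos": 0.3, "cost": 0.2, "size": 0.1}  # hypothetical
candidates = [
    {"name": "monolithic image",     "perf": 0.6, "qos": 0.7, "cost": 0.8, "size": 0.9},
    {"name": "decomposed fragments", "perf": 0.8, "qos": 0.8, "cost": 0.5, "size": 0.4},
]
best = max(candidates, key=lambda p: score(p, weights))
print(best["name"])  # decomposed fragments
```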


Author(s): Szilvia Varadi

Cloud Computing is a diverse research area that encompasses many aspects of sharing software and hardware solutions, including computing and storage resources, application runtimes, and complex application functionalities. In the supply of any goods and services, the law gives certain rights that protect the consumer and the provider, and this also applies to Cloud Computing. This new technology moves functions and responsibilities away from local ownership and management to a third-party provided service, and raises several legal issues, such as data protection, which require the service to comply with the necessary regulation. In this chapter the author investigates the revised legislation of the European Union resulting in the General Data Protection Regulation, which will be used to set up the new European Data Protection Framework. The author gathers and summarizes the most relevant changes this regulation brings to the field of Clouds, and relates them to the previous legislation, the Data Protection Directive, currently in force.


Author(s): Manoj V. Thomas, K. Chandrasekaran

Nowadays, the issue of identity and access management (IAM) has become an important research topic in cloud computing. In distributed computing environments like cloud computing, effective authentication and authorization are essential to ensure that unauthorized users do not access resources, thereby preserving the confidentiality, integrity, and availability of information hosted in the cloud environment. In this chapter, the authors discuss the issue of identity and access management in cloud computing, analyzing the work carried out by others in the area. They also cover various issues in the current IAM scenario in cloud computing, such as authentication, authorization, access control models, identity life cycle management, cloud Identity-as-a-Service, federated identity management, and identity and access management in the inter-cloud environment. The chapter concludes by discussing a few research issues in the area of identity and access management in cloud and inter-cloud environments.
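The authentication-then-authorization flow the chapter surveys can be sketched as a two-step check: validate a token, then apply a role-based access-control decision. Real cloud IAM systems (and the federated and inter-cloud variants discussed here) are far more elaborate; all names and data below are hypothetical.

```python
# Hypothetical role → permission map and active-session store.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "auditor": {"read"},
}
ACTIVE_TOKENS = {"tok-123": {"user": "alice", "role": "auditor"}}

def is_authorized(token, action):
    session = ACTIVE_TOKENS.get(token)  # authentication: is the token valid?
    if session is None:
        return False
    # authorization: does the session's role permit this action?
    return action in ROLE_PERMISSIONS.get(session["role"], set())

print(is_authorized("tok-123", "read"))    # True
print(is_authorized("tok-123", "delete"))  # False
```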


Author(s): Attila Csaba Marosi, Péter Kacsuk

Cloud Computing (CC) offers simple and cost-effective outsourcing in dynamic service environments and allows the construction of service-based applications extensible with the latest achievements of diverse research areas. CC is built on dedicated and reliable resources and provides uniform, seemingly unlimited capacities. Volunteer Computing (VC), on the other hand, uses volatile, heterogeneous, and unreliable resources. Starting from a definition of Cloud Computing, this chapter attempts to identify the required steps and formulate a definition of what can be considered the next evolutionary stage of Volunteer Computing: Volunteer Clouds (VCl). There are many idiosyncrasies of VC to overcome (e.g., volatility, heterogeneity, reliability, responsiveness, scalability, etc.). Heterogeneity exists in VC at different levels, whereas the vision of CC promises a homogeneous environment. The goal of this chapter is to identify methods and propose solutions that tackle these heterogeneities and thus make a step towards Volunteer Clouds.
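One simple way to cope with the volatility and heterogeneity mentioned above is capability matching: filter the pool of volunteer nodes down to those that are currently online and satisfy a task's requirements. This is a minimal sketch of that idea with invented field names, not the chapter's actual methods.

```python
# Hypothetical snapshot of volatile, heterogeneous volunteer nodes.
nodes = [
    {"id": "n1", "cpus": 4, "mem_gb": 8,  "online": True},
    {"id": "n2", "cpus": 1, "mem_gb": 2,  "online": True},
    {"id": "n3", "cpus": 8, "mem_gb": 16, "online": False},  # dropped out
]

def eligible(nodes, need_cpus, need_mem_gb):
    # Keep only online nodes whose capabilities meet the task requirements.
    return [n["id"] for n in nodes
            if n["online"] and n["cpus"] >= need_cpus and n["mem_gb"] >= need_mem_gb]

print(eligible(nodes, need_cpus=2, need_mem_gb=4))  # ['n1']
```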


Author(s): Ioan Petri, Javier Diaz-Montes, Mengsong Zou, Ali Reza Zamani, Thomas H Beach, ...

Cloud computing has emerged as an attractive platform for data-intensive applications. However, efficient computation of this kind of workload requires understanding how to store, process, and analyse large volumes of data in a timely manner. Many "smart city" applications, for instance, explore how data from building sensors can be combined to support applications such as emergency response and energy management. Enabling sensor data to be transmitted to a cloud environment for processing provides a number of benefits, such as scalability and on-demand provisioning of computational resources. In this chapter, we propose the use of a multi-layer cloud infrastructure that distributes processing over sensing nodes, multiple intermediate/gateway nodes, and large data centres. Our solution aims at utilising the pervasive computational capabilities located at the edge of the infrastructure and along the data path to reduce data movement to large data centres located "deep" within the infrastructure, and to make more efficient use of computing and network resources.
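The data-movement reduction described above can be sketched as a gateway node that forwards only a compact summary of raw sensor samples instead of every reading. The aggregation policy here is an invented stand-in for illustration; the chapter's infrastructure distributes processing more generally.

```python
# An intermediate/gateway node reduces raw building-sensor readings to a
# summary before anything is shipped to the distant data centre.
def gateway_summarize(readings):
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.0, 21.5, 22.0, 35.0]   # hypothetical per-minute temperature samples
summary = gateway_summarize(raw)
print(summary["count"], summary["max"])  # 4 35.0
```

The data centre then receives four numbers rather than the full sample stream, which is the efficiency argument made for processing along the data path.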


Author(s): Chetan Jaiswal, Vijay Kumar

Legacy database systems manage transactions under a concurrency control and a recovery protocol. The underlying operating system creates the transaction execution platform, and the database executes transactions concurrently. When the database system fails, the recovery manager applies "Undo" and/or "Redo" operations (depending on the recovery protocol) to restore a consistent state of the database. The recovery manager performs this set of operations as required by the transaction execution platform. The availability of "virtual" machines in the cloud has given us an architecture that makes it possible to eliminate the effect of a system or transaction failure by always taking the database to the next consistent state. We present a novel scheme for eliminating the effect of such failures by applying a transaction "roll-forward", which resumes execution from the point of failure. We refer to our system as AAP (Always Ahead Processing). Our work enables cloud providers to offer a transactional highly available DBMS (HA-DBMS) as an option, with multiple, not necessarily relational, data sources.
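The general roll-forward idea can be sketched with a redo log: each completed step is recorded, and after a crash, execution resumes from the first unlogged step instead of undoing work. This is only an illustration of the concept; it is not the AAP architecture from the chapter.

```python
# Resume a transaction after the last step already recorded in the redo log.
def run_transaction(steps, redo_log, fail_at=None):
    for i in range(len(redo_log), len(steps)):
        if fail_at is not None and i == fail_at:
            raise RuntimeError("simulated crash")
        redo_log.append(steps[i]())  # execute the step and log its effect

state = []
steps = [lambda: state.append("debit") or "debit",
         lambda: state.append("credit") or "credit"]

log = []
try:
    run_transaction(steps, log, fail_at=1)   # crash before the second step
except RuntimeError:
    pass
run_transaction(steps, log)                  # roll-forward: resume at step 1
print(state)  # ['debit', 'credit'] -- step 0 is not re-executed
```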


Author(s): Marcio R. M. Assis, Luiz Fernando Bittencourt, Rafael Tolosana-Calasanz, Craig A. Lee

With the maturation of Cloud Computing, the eyes of the scientific community and of specialized commercial institutions have turned to research on the use of multiple clouds. The main reason for this interest is the limitations that many cloud providers individually face in meeting all the inherent characteristics of this paradigm. Using multiple cloud organizations therefore opens the opportunity for providers to consume resources at more attractive prices, to increase resilience, and to monetize their own idle resources. From the customers' perspective, problems such as interruption of services, lack of interoperability leading to lock-in, and loss of quality of service due to locality are limiting factors for the adoption of Cloud Computing. This chapter presents an introduction to the conceptual characterization of Cloud Federation. Moreover, it presents the challenges in implementing federation architectures, the requirements for developing this type of organization, and the relevant architecture proposals.


Author(s): Javier Prades, Fernando Campos, Carlos Reaño, Federico Silla

Current data centers leverage virtual machines (VMs) in order to use hardware resources efficiently. VMs allow reducing equipment acquisition costs as well as decreasing overall energy consumption. However, although VMs have noticeably evolved to make smart use of the underlying hardware, the use of GPUs (Graphics Processing Units) for General Purpose computing (GPGPU) is still not efficiently supported. This concern can be addressed by remote GPU virtualization solutions, which may provide VMs with GPUs located in a remote node, detached from the host where the VMs are being executed. This chapter presents an in-depth analysis of how to provide GPU access to applications running inside VMs. This analysis is complemented with experimental results showing that remote GPU virtualization is an effective mechanism for providing GPU access to applications with negligible overhead. Finally, the approach is presented in the context of cloud federations for providing GPGPU as a Service.
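The core mechanism behind remote GPU virtualization is a proxy: the VM sees a local-looking GPU handle, but every call is marshalled and forwarded to the node that owns the physical GPU. In this sketch the "transport" is a plain function call; real solutions forward the GPU API over the network, and all class and method names here are invented.

```python
class RemoteGPUServer:
    """Stands in for the remote node that owns the physical GPU."""
    def execute(self, kernel, args):
        return kernel(*args)

class GPUProxy:
    """Local-looking GPU handle inside the VM; forwards every call."""
    def __init__(self, server):
        self.server = server

    def launch(self, kernel, *args):
        return self.server.execute(kernel, args)  # marshal + forward

gpu = GPUProxy(RemoteGPUServer())
result = gpu.launch(lambda a, b: [x + y for x, y in zip(a, b)], [1, 2], [3, 4])
print(result)  # [4, 6]
```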


Author(s): José Luis Vivas, Francisco Vilar Brasileiro, Abmar Barros, Giovanni Farias da Silva, Marcos Nóbrega Jr, ...

Many e-science initiatives are currently investigating the use of cloud computing to support all kinds of scientific activities. The objective of this chapter is to describe the architecture and deployment of the EUBrazilCC federated e-infrastructure, a Research & Development project that aims at providing a user-centric test bench enabling European and Brazilian research communities to test the deployment and execution of scientific applications on a federated intercontinental e-infrastructure. This e-infrastructure exploits existing resources consisting of virtualized data centers, supercomputers, and even opportunistically exploited desktops spread over a transatlantic geographic area. These heterogeneous resources are federated with the aid of appropriate middleware that provides the features necessary to achieve the established, challenging goals. In order to elicit the requirements and validate the resulting infrastructure, three complex scientific applications have been implemented, which are also presented here.

