cloud infrastructures
Recently Published Documents


TOTAL DOCUMENTS

327
(FIVE YEARS 100)

H-INDEX

21
(FIVE YEARS 3)

2022 ◽  
Vol 25 (1) ◽  
pp. 1-36
Author(s):  
Savvas Savvides ◽  
Seema Kumar ◽  
Julian James Stephen ◽  
Patrick Eugster

With the advent of the Internet of things (IoT), billions of devices are expected to continuously collect and process sensitive data (e.g., location, personal health factors). Due to the limited computational capacity available on IoT devices, the current de facto model for building IoT applications is to send the gathered data to the cloud for computation. While building private cloud infrastructures for handling large amounts of data streams can be expensive, using low-cost public (untrusted) cloud infrastructures for processing continuous queries over sensitive data raises strong concerns about data confidentiality. This article presents C3PO, a confidentiality-preserving continuous query processing engine that leverages the public cloud. The key idea is to intelligently utilize partially homomorphic and property-preserving encryption to perform as many computationally intensive operations as possible in the untrusted cloud without revealing plaintext. C3PO provides simple abstractions that hide from the developer the complexities of applying cryptographic primitives, reasoning about their performance, deciding which computations can be executed in an untrusted tier, and optimizing cloud resource usage. An empirical evaluation with several benchmarks and case studies shows the feasibility of our approach. We consider different classes of IoT devices that differ in their computational and memory resources (from a Raspberry Pi 3 to a very small device with a Cortex-M3 microprocessor) and, through the use of optimizations, demonstrate the feasibility of using partially homomorphic and property-preserving encryption on IoT devices.
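
A minimal, deliberately insecure sketch of the additive (Paillier-style) homomorphism this class of systems relies on: the untrusted tier sums encrypted readings without ever seeing plaintext. The toy parameters and function names are assumptions for illustration; C3PO's actual engine, key sizes, and abstractions are not reproduced here.

```python
import math
import random

# Toy primes for illustration only; real deployments use moduli of >= 2048 bits.
p, q = 293, 433
n, n_sq = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                # modular inverse (Python >= 3.8)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def add_encrypted(c1: int, c2: int) -> int:
    # Homomorphic addition: multiplying ciphertexts adds the plaintexts.
    return (c1 * c2) % n_sq

def decrypt(c: int) -> int:
    return ((pow(c, lam, n_sq) - 1) // n) * mu % n

# The untrusted tier aggregates encrypted sensor readings without holding keys.
readings = [21, 35, 44]
agg = encrypt(0)
for v in readings:
    agg = add_encrypted(agg, encrypt(v))
assert decrypt(agg) == sum(readings)   # only the key holder recovers the sum
```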


2022 ◽  
Vol 22 (2) ◽  
pp. 1-21
Author(s):  
Hongyang Yan ◽  
Nan Jiang ◽  
Kang Li ◽  
Yilei Wang ◽  
Guoyu Yang

At present, clients can outsource large amounts of complex computation, e.g., Internet of Things (IoT) tasks, to clouds under the “pay as you go” model. Outsourcing computation saves costs for clients and makes full use of existing cloud infrastructures. However, it is hard for clients to trust the clouds, even if blockchain is used as the trusted platform. In this article, we adapt the verification method of SETI@home to a setting with only two rational clouds, which seek to maximize their utilities. Utilities are defined as the incomes the clouds receive when they provide computation results to clients. More specifically, one client outsources two jobs to two clouds, and each job contains n tasks, which include k identical sentinels. The two clouds can either honestly compute each task or collude on the identical sentinel tasks by agreeing on random values. If the results on the identical sentinels are identical, the client regards the jobs as correctly computed without further verification. Obviously, rational clouds have an incentive to deviate by colluding and providing identical random results for a higher income. We discuss how to prevent collusion by using deposits, e.g., bitcoins. Furthermore, the utilities for each cloud can be assigned automatically by a smart contract. We prove that, given proper parameters, the two rational clouds will honestly send correct results to the client without collusion.
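
A small sketch of the client-side sentinel check described above; function names and data layout are illustrative assumptions, not the paper's protocol. Note that rational clouds could still pass this check by agreeing on random sentinel values, which is exactly the deviation the deposit mechanism and smart contract are designed to deter.

```python
import random

def make_jobs(tasks_a, tasks_b, k):
    """Embed the same k sentinel tasks into both jobs at random positions."""
    sentinels = [("sentinel", random.random()) for _ in range(k)]
    job_a = list(tasks_a) + sentinels
    job_b = list(tasks_b) + sentinels
    random.shuffle(job_a)
    random.shuffle(job_b)
    return job_a, job_b, sentinels

def accept(results_a, results_b, sentinels):
    """Accept without further verification iff both clouds agree on every sentinel."""
    return all(results_a[s] == results_b[s] for s in sentinels)
```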


2022 ◽  
Vol 15 (2) ◽  
pp. 1-27
Author(s):  
Andrea Damiani ◽  
Giorgia Fiscaletti ◽  
Marco Bacis ◽  
Rolando Brondolin ◽  
Marco D. Santambrogio

“Cloud-native” is the umbrella adjective describing the standard approach for developing applications that exploit cloud infrastructures’ scalability and elasticity at their best. As application complexity and user bases grow, designing for performance becomes a first-class engineering concern. As an answer to these needs, heterogeneous computing platforms have gained widespread attention as powerful tools to continue meeting SLAs for compute-intensive cloud-native workloads. We propose BlastFunction, an FPGA-as-a-Service full-stack framework that eases the adoption of FPGAs for cloud-native workloads and integrates with the spectrum of fundamental cloud service models. At the IaaS level, BlastFunction time-shares FPGA-based accelerators to provide multi-tenant access to accelerated resources without any code rewriting. At the PaaS level, BlastFunction accelerates functionalities leveraging the serverless model and scales functions proactively, depending on the workload’s performance. To further lower the adoption barrier for FPGAs, an accelerator registry hosts accelerated functions ready to be used within cloud-native applications, bringing the simplicity of a SaaS-like approach to developers. Through an extensive experimental campaign against state-of-the-art cloud scenarios, we show that BlastFunction achieves higher utilization and throughput than native execution, with minimal differences in latency and overhead. Moreover, the proposed scaling scheme outperforms the main serverless autoscaling algorithms in workload performance and in the number of scaling operations.
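
An illustrative sketch of a performance-driven scaling decision in the spirit described above; this is not BlastFunction's actual algorithm, and all names and thresholds below are assumptions.

```python
import math

def target_replicas(request_rate_rps: float, avg_service_time_s: float,
                    target_utilization: float = 0.7,
                    max_replicas: int = 16) -> int:
    """Replicas needed to keep accelerator utilization near the target."""
    demand = request_rate_rps * avg_service_time_s      # busy accelerators needed
    return max(1, min(max_replicas, math.ceil(demand / target_utilization)))

# E.g. 120 req/s at 50 ms per invocation and a 70% utilization target:
print(target_replicas(120, 0.05))   # -> 9 replicas
```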


2022 ◽  
Author(s):  
Zijun Li ◽  
Linsong Guo ◽  
Jiagan Cheng ◽  
Quan Chen ◽  
BingSheng He ◽  
...  

The development of cloud infrastructures has inspired the emergence of cloud-native computing. As the most promising architecture for deploying microservices, serverless computing has recently attracted more and more attention in both industry and academia. Due to its inherent scalability and flexibility, serverless computing is becoming attractive and more pervasive for ever-growing Internet services. Despite the momentum in the cloud-native community, the existing challenges and compromises still call for more advanced research and solutions to further explore the potential of the serverless computing model. As a contribution to this knowledge, this article surveys and elaborates on the research domains in the serverless context by decoupling the architecture into four stack layers: Virtualization, Encapsule, System Orchestration, and System Coordination. Inspired by the security model, we highlight the key implications and limitations of the works in each layer and make suggestions regarding open challenges for future serverless computing.


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

The synergistic confluence of pervasive sensing, computing, and networking is generating heterogeneous data at unprecedented scale and complexity. Cloud computing has emerged over the last two decades as a unique storage and computing resource supporting a diverse assortment of applications. Numerous organizations are migrating to the cloud to store and process their information. When cloud infrastructures and resources are insufficient to satisfy end users' requests, scheduling mechanisms are required. Task scheduling, especially in a distributed and heterogeneous system, is an NP-hard problem, since various task parameters must be considered for appropriate scheduling. In this paper, we propose a hybrid particle swarm optimization (PSO) and extremal optimization-based approach to task scheduling in the cloud. The algorithm optimizes the makespan, an important criterion when scheduling a number of tasks on different virtual machines. Experiments on synthetic and real-life workloads show that the method successfully schedules tasks and outperforms many state-of-the-art methods.
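
A hedged sketch of the makespan fitness such a scheduler would minimize: given task lengths, VM speeds, and an assignment of tasks to VMs, the makespan is the completion time of the most loaded VM. The hybrid PSO/extremal-optimization search itself is not reproduced here, and the names below are illustrative.

```python
def makespan(task_lengths, vm_speeds, assignment):
    """assignment[i] = index of the VM that runs task i."""
    finish = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        finish[vm] += length / vm_speeds[vm]
    return max(finish)

# Toy example: 5 tasks on 2 VMs (the second VM is twice as fast).
print(makespan([10, 20, 30, 40, 50], [1.0, 2.0], [0, 1, 0, 1, 1]))   # -> 55.0
```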


Author(s):  
Kapil Tarey

Abstract: Cloud computing refers to a computing environment in which traditional software systems, installations, and licensing concerns are replaced with comprehensive on-demand, "pay as you need" internet-based services. In this scenario, many cloud customers can request multiple cloud resources at the same time. As a result, there should be a plan in place to ensure that resources are provisioned efficiently for the requesting customers so that their needs are met. In cloud computing systems, resource management is a critical and difficult issue. It must meet numerous service quality requirements and, as a result, reduce SLA violations. This paper surveys different resource management techniques for cloud infrastructures.
Keywords: cloud, resource management and techniques


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3108
Author(s):  
Bence Ligetfalvi ◽  
Márk Emődi ◽  
József Kovács ◽  
Róbert Lovas

In Infrastructure-as-a-Service (IaaS) clouds, the development of a ready-to-use and reliable infrastructure can be a complex task due to the interconnected and dependent services that are deployed (and later operated) concurrently on virtual machines. Different timing conditions may change the overall initialisation sequence, which can lead to abnormal behaviour or failure in this non-deterministic environment. The overall motivation of our research is to improve the reliability of cloud-based infrastructures with minimal user interaction and to significantly accelerate the time-consuming debugging process. This paper focuses on the behaviour of cloud-based infrastructures during their deployment phase and introduces the adaptation of a replay- and active-control-enriched debugging technique, called macrostep, to the field of cloud orchestration, in order to support developers in troubleshooting deployment-related errors. The fundamental macrostep mechanisms, including the generation of collective breakpoint sets and the traversal method for such consistent global states, have been combined with the Occopus cloud orchestrator and the Neo4J graph database. The paper describes the novel approach, the design choices, and the implementation of the experimental debugger tool, together with a use case for validation purposes and some preliminary numerical results.
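
As a hedged illustration only: one simple way to enumerate candidate collective breakpoints is the cartesian product of each virtual machine's deployment steps, i.e. one local breakpoint per node per global state. The actual macrostep traversal, consistency pruning, and the Occopus/Neo4J integration are not reproduced here, and the node and step names below are made up.

```python
from itertools import product

deployment_steps = {
    "db-node":  ["boot", "install", "configure", "ready"],
    "app-node": ["boot", "install", "wait-for-db", "ready"],
}

# Every combination pairs one local breakpoint per node into a global state.
collective_breakpoints = [
    dict(zip(deployment_steps, combo))
    for combo in product(*deployment_steps.values())
]
print(len(collective_breakpoints))   # 16 candidate global states in this toy example
```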


Author(s):  
Adrián Bernal ◽  
M. Emilia Cambronero ◽  
Alberto Núñez ◽  
Pablo C. Cañizares ◽  
Valentín Valero

Abstract: In this paper, we investigate how to improve profits in cloud infrastructures by using price schemes and analyzing the users' interactions with the cloud provider. For this purpose, we consider two different types of client behavior, namely regular and high-priority users. Regular users do not require a continuous service and can wait to be attended to. In contrast, high-priority users require a continuous service, e.g., a 24/7 service, and usually need an immediate answer to any request. A complete framework has been implemented, which includes a UML profile that allows us to define specific cloud scenarios and the automatic transformations that produce the code for the cloud simulations in the Simcan2Cloud simulator. The engine of Simcan2Cloud has also been modified by adding specific SLAs and price schemes. Finally, we present a thorough experimental study of the performance results obtained from the simulations, which makes it possible to draw conclusions about how to improve the profit of the studied cloud by adjusting the different parameters and resource configurations.
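
An illustrative profit calculation, not Simcan2Cloud's model: high-priority users pay a premium for always-available service and trigger an SLA penalty when a request cannot be served immediately, while regular users simply wait. All parameter names and prices below are assumptions.

```python
def profit(regular_hours, high_priority_hours, rejected_hp_requests,
           base_price=0.10, hp_premium=2.5, sla_penalty=1.0,
           cost_per_hour=0.04, provisioned_hours=None):
    """Revenue from both user classes, minus SLA penalties and resource cost."""
    revenue = (regular_hours * base_price
               + high_priority_hours * base_price * hp_premium)
    penalties = rejected_hp_requests * sla_penalty
    hours = provisioned_hours if provisioned_hours is not None \
        else regular_hours + high_priority_hours
    return revenue - penalties - hours * cost_per_hour

print(profit(regular_hours=1000, high_priority_hours=400,
             rejected_hp_requests=12))   # -> 132.0 under these toy parameters
```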


2021 ◽  
Author(s):  
Chunming Rong ◽  
Jiahui Geng ◽  
Thomas J. Hacker ◽  
Haakon Bryhni ◽  
Martin G. Jaatun

Abstract: Modern information systems are built from a complex composition of networks, infrastructure, devices, services, and applications, interconnected by data flows that are often private and financially sensitive. 5G networks, which can create hyperlocalized services, have highlighted many of the deficiencies of the current practices used to create and operate information systems. Emerging cloud computing techniques, such as Infrastructure-as-Code (IaC) and elastic computing, offer a path toward a future re-imagining of how we create, deploy, secure, operate, and retire information systems. In this paper, we articulate the position that a comprehensive new approach is needed for all OSI layers from layer 2 up to the application layer, built on underlying principles that include reproducibility, continuous integration/continuous delivery, auditability, and versioning. There are obvious needs to redesign and optimize the protocols from the network layer to the application layer. Our vision seeks to augment existing cloud computing and networking solutions with support for multiple cloud infrastructures and seamless integration of cloud-based microservices. To address these issues, we propose an approach named Open Infrastructure as Code (OpenIaC), an attempt to provide a common open forum to integrate and build on advances in cloud computing and blockchain to address the needs of modern information architectures. The main mission of our OpenIaC approach is to provide services based on the principles of Zero Trust Architecture (ZTA) among a federation of connected resources based on Decentralized Identity (DID). Our objectives include the creation of an open-source hub with fine-grained access control for an open and connected infrastructure of shared resources (sensing, storage, computing, 3D printing, etc.) managed by blockchains and federations. Our proposed approach has the potential to provide a path for developing new platforms, business models, and a modernized information ecosystem necessary for 5G networks.


Author(s):  
Jessica Vandebon ◽  
Jose G. F. Coutinho ◽  
Wayne Luk

Abstract: This paper presents a Function-as-a-Service (FaaS) approach for deploying managed cloud functions onto heterogeneous cloud infrastructures. Current FaaS systems, such as AWS Lambda, allow domain-specific functionality, such as AI, HPC, and image processing, to be deployed in the cloud while abstracting users from infrastructure and platform concerns. Existing approaches, however, use a single type of resource configuration to execute all function requests. In this paper, we present a novel FaaS approach that allows cloud functions to be executed effectively across heterogeneous compute resources, including hardware accelerators such as GPUs and FPGAs. We implement heterogeneous scheduling to tailor resource selection to each request, taking both performance and cost concerns into account. In this way, our approach makes use of different processor types and quantities (e.g., 2 CPU cores), each uniquely suited to handle different types of workload, potentially providing improved performance at a reduced cost. We validate our approach in three application domains: machine learning, bioinformatics, and physics, targeting a hardware platform with a combined computational capacity of 24 FPGAs and 12 CPU cores. Compared with traditional FaaS, our approach achieves a cost improvement of up to 8.9 times for non-uniform traffic, while maintaining performance objectives.
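
A hedged sketch of per-request heterogeneous scheduling in the spirit described above, not the paper's actual scheduler: pick the cheapest resource type whose estimated execution time still meets the request's deadline. The resource profiles are made-up numbers for illustration.

```python
RESOURCE_PROFILES = {
    # type: (estimated execution time in s, cost per invocation in $)
    "cpu-2core": (4.0, 0.0008),
    "gpu":       (0.6, 0.0050),
    "fpga":      (0.9, 0.0030),
}

def select_resource(deadline_s: float) -> str:
    """Cheapest resource meeting the deadline; fastest one if none does."""
    feasible = [(cost, name) for name, (t, cost) in RESOURCE_PROFILES.items()
                if t <= deadline_s]
    if not feasible:
        return min(RESOURCE_PROFILES, key=lambda n: RESOURCE_PROFILES[n][0])
    return min(feasible)[1]

print(select_resource(5.0))   # relaxed deadline -> 'cpu-2core' (cheapest)
print(select_resource(1.0))   # tight deadline   -> 'fpga'
```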

