An Efficient Cloud Storage Model for Heterogeneous Cloud Infrastructures

2011, Vol 23, pp. 510-515
Author(s): Dejun Wang

Author(s): Esteban Lopez-Falcon, Andrei Tchernykh, Nikolay Chervyakov, Mikhail Babenko, Elena Nepretimova, ...

2018, Vol 15 (4), pp. 82-96
Author(s): Lei Wu, Yuandou Wang

Cloud computing, with dependable, consistent, pervasive, and inexpensive access to geographically distributed computational capabilities, is becoming an increasingly popular platform for executing scientific applications such as scientific workflows. Scheduling multiple workflows over cloud infrastructures and resources is well recognized to be NP-hard, which makes it difficult to meet the various types of Quality-of-Service (QoS) requirements involved. In this work, the authors consider a multi-objective scientific workflow scheduling framework based on a dynamic game-theoretic model. It aims at reducing makespans and cloud cost while maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). The authors use randomly generated scientific workflow templates as test cases and carry out extensive real-world tests on third-party commercial clouds. Experimental results show that the proposed framework outperforms traditional ones by achieving lower makespans, lower cost, and better system fairness.
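
To make the multi-objective trade-off concrete, the sketch below greedily places each workflow task on the VM with the lowest weighted-sum score over estimated finish time, monetary cost, and load imbalance (a crude fairness proxy). This is a minimal illustration only: the scoring rule, VM speeds, and prices are assumptions for demonstration, task dependencies are ignored, and it does not reproduce the authors' dynamic game-theoretic model.

```python
# Minimal illustrative sketch (not the authors' game-theoretic algorithm):
# a greedy heuristic that places each task on the VM with the lowest
# weighted-sum score over estimated finish time, monetary cost, and load
# imbalance. Task dependencies are ignored for brevity; VM speeds and
# prices are hypothetical.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class VM:
    name: str
    speed: float             # relative compute speed (1.0 = baseline)
    price_per_hour: float    # hypothetical on-demand price
    busy_until: float = 0.0  # time (s) at which this VM becomes free


@dataclass
class Task:
    name: str
    work: float  # abstract work units; runtime on a VM = work / vm.speed


def score(vm: VM, task: Task, vms: list[VM],
          w_time: float = 1.0, w_cost: float = 1.0, w_fair: float = 1.0) -> float:
    runtime = task.work / vm.speed
    finish = vm.busy_until + runtime
    cost = runtime / 3600.0 * vm.price_per_hour
    mean_load = sum(v.busy_until for v in vms) / len(vms)
    imbalance = abs(finish - mean_load)  # drift of this VM's load from the mean
    return w_time * finish + w_cost * cost + w_fair * imbalance


def schedule(tasks: list[Task], vms: list[VM]) -> dict[str, str]:
    """Greedily assign each task (largest first) to the lowest-scoring VM."""
    assignment: dict[str, str] = {}
    for task in sorted(tasks, key=lambda t: t.work, reverse=True):
        best = min(vms, key=lambda vm: score(vm, task, vms))
        best.busy_until += task.work / best.speed
        assignment[task.name] = best.name
    return assignment


if __name__ == "__main__":
    vms = [VM("small", 1.0, 0.05), VM("medium", 2.0, 0.12), VM("large", 4.0, 0.30)]
    tasks = [Task(f"t{i}", w) for i, w in enumerate([120, 60, 300, 45, 90])]
    print(schedule(tasks, vms))
```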


Author(s): Jessica Vandebon, Jose G. F. Coutinho, Wayne Luk

This paper presents a Function-as-a-Service (FaaS) approach for deploying managed cloud functions onto heterogeneous cloud infrastructures. Current FaaS systems, such as AWS Lambda, allow domain-specific functionality, such as AI, HPC, and image processing, to be deployed in the cloud while abstracting users from infrastructure and platform concerns. Existing approaches, however, use a single type of resource configuration to execute all function requests. In this paper, we present a novel FaaS approach that allows cloud functions to be effectively executed across heterogeneous compute resources, including hardware accelerators such as GPUs and FPGAs. We implement heterogeneous scheduling to tailor resource selection to each request, taking into account performance and cost concerns. In this way, our approach makes use of different processor types and quantities (e.g. 2 CPU cores), each uniquely suited to handle different types of workload, potentially providing improved performance at a reduced cost. We validate our approach in three application domains: machine learning, bioinformatics, and physics, targeting a hardware platform with a combined computational capacity of 24 FPGAs and 12 CPU cores. Compared to traditional FaaS, our approach achieves a cost improvement of up to 8.9 times for non-uniform traffic, while maintaining performance objectives.
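
To illustrate how per-request heterogeneous scheduling might weigh performance against cost, the sketch below routes each function invocation to the cheapest resource type whose estimated latency meets the request's performance objective, falling back to the fastest resource when none qualifies. The resource names, latency estimates, and prices are hypothetical profiling data, and this simple rule is a stand-in for, not a reproduction of, the paper's scheduler.

```python
# Minimal illustrative sketch (a simplified stand-in for the paper's
# scheduler): pick the cheapest resource type whose estimated latency
# meets the performance objective; otherwise fall back to the fastest.
# Resource names, latencies, and prices below are hypothetical.
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class Resource:
    kind: str                   # e.g. "cpu-2core", "gpu", "fpga"
    est_latency_s: float        # estimated latency of the function on this resource
    cost_per_invocation: float  # hypothetical price per invocation


def select_resource(profile: list[Resource], latency_objective_s: float) -> Resource:
    """Cheapest resource meeting the latency objective; fastest as fallback."""
    feasible = [r for r in profile if r.est_latency_s <= latency_objective_s]
    if feasible:
        return min(feasible, key=lambda r: r.cost_per_invocation)
    return min(profile, key=lambda r: r.est_latency_s)


if __name__ == "__main__":
    profile = [
        Resource("cpu-2core", est_latency_s=4.0, cost_per_invocation=0.0010),
        Resource("cpu-8core", est_latency_s=1.5, cost_per_invocation=0.0035),
        Resource("fpga", est_latency_s=0.4, cost_per_invocation=0.0060),
    ]
    print(select_resource(profile, latency_objective_s=2.0))  # cheapest feasible: cpu-8core
    print(select_resource(profile, latency_objective_s=0.1))  # none feasible: fastest (fpga)
```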

