Data Placement Oriented Scheduling Algorithm for Scheduling Scientific Workflow in Cloud: A Budget-Aware Approach

2020 ◽  
Vol 13 (5) ◽  
pp. 871-883
Author(s):  
Avinash Kaur ◽  
Pooja Gupta ◽  
Parminder Singh ◽  
Manpreet Singh

Background: A large number of communities and enterprises deploy numerous scientific workflow applications on cloud services. Aims: The main aim of the cloud service provider is to execute workflows with minimal budget and makespan. Most existing budget- and makespan-aware techniques were designed for traditional computing platforms and are not applicable to cloud platforms, which have distinct resource management methods and service-based pricing strategies. Methods: In this paper, we study the joint optimization of cost and makespan when scheduling workflows in IaaS clouds and propose a novel workflow scheduling scheme that also incorporates data placement. Results: The scheme is built around DPO-HEFT (Data Placement Oriented HEFT), an algorithm that closely integrates a data placement mechanism with the list scheduling heuristic HEFT. Extensive experiments on real-world and synthetic workflows demonstrate the efficacy of our scheme. Conclusion: Our scheme achieves significantly better cost and makespan trade-off fronts with remarkably higher hypervolume and runs up to hundreds of times faster than state-of-the-art algorithms.
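
For readers unfamiliar with HEFT, the following is a minimal sketch of the list-scheduling core that DPO-HEFT extends: tasks are ordered by upward rank and then greedily mapped to the resource giving the earliest finish time. This is a simplified, non-insertion-based version; the paper's data placement mechanism is not reproduced, and all task, VM, and timing structures are illustrative assumptions.

```python
# Simplified HEFT core (non-insertion-based). DPO-HEFT additionally folds a
# data placement mechanism into this loop, which is not shown here.

def upward_rank(task, succ, avg_compute, avg_comm, memo):
    """rank_u(t) = avg_compute(t) + max over successors s of (avg_comm(t,s) + rank_u(s))."""
    if task not in memo:
        tail = max((avg_comm.get((task, s), 0.0)
                    + upward_rank(s, succ, avg_compute, avg_comm, memo)
                    for s in succ.get(task, [])), default=0.0)
        memo[task] = avg_compute[task] + tail
    return memo[task]

def heft(tasks, succ, avg_compute, avg_comm, exec_time, vms):
    """exec_time[(task, vm)] is the assumed execution time of `task` on `vm`."""
    memo = {}
    order = sorted(tasks, key=lambda t: upward_rank(t, succ, avg_compute, avg_comm, memo),
                   reverse=True)
    preds = {t: [p for p in tasks if t in succ.get(p, [])] for t in tasks}
    vm_free = {vm: 0.0 for vm in vms}   # time each VM becomes idle
    finish, placement = {}, {}
    for t in order:                     # schedule in decreasing-rank order
        best_vm, best_ft = None, float("inf")
        for vm in vms:                  # earliest-finish-time resource selection
            ready = max([finish[p] for p in preds[t]] + [vm_free[vm]])
            ft = ready + exec_time[(t, vm)]
            if ft < best_ft:
                best_vm, best_ft = vm, ft
        placement[t], finish[t] = best_vm, best_ft
        vm_free[best_vm] = best_ft
    return placement, finish
```

In DPO-HEFT, the ready-time computation would additionally reflect where each task's input data has been placed; that data-aware part is exactly what this generic sketch omits.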

2019 ◽  
Vol 28 (06) ◽  
pp. 1930006 ◽  
Author(s):  
Pingping Lu ◽  
Gongxuan Zhang ◽  
Zhaomeng Zhu ◽  
Xiumin Zhou ◽  
Jin Sun ◽  
...  

Scientific workflow is a common model for organizing large scientific computations. It borrows the concept of workflow from business activities to manage complicated processes in scientific computing automatically or semi-automatically. Workflow scheduling, which maps tasks in workflows to parallel computing resources, has been extensively studied over the years. In recent years, with the rise of cloud computing as a new large-scale distributed computing model, studying the workflow scheduling problem in the cloud has become highly significant. Compared with traditional distributed computing platforms, cloud platforms have unique characteristics such as the self-service resource management model and the pay-as-you-go billing model. Therefore, workflow scheduling in the cloud needs to be reconsidered. When scheduling workflows in clouds, the monetary cost and the makespan of workflow executions are a concern for both the cloud service providers (CSPs) and the customers. In this paper, we study a series of cost- and time-aware workflow scheduling algorithms in cloud environments, aiming to provide researchers with a choice of appropriate cloud workflow scheduling approaches for various scenarios. We conducted a broad review of different cloud workflow scheduling algorithms and categorized them based on their optimization objectives and constraints. We also discuss possible future research directions for cloud workflow scheduling.


Author(s):  
Chinmai Shetty ◽  
Sarojadevi H ◽  
Suraj Prabhu

The flexibility provided by cloud service providers at reduced cost has popularized the cloud tremendously. The cloud service provider must schedule incoming requests dynamically. In a cloud environment, tasks must be scheduled such that proper resource utilization is achieved; hence, task scheduling plays a significant role in the functionality and performance of cloud computing systems. While many approaches exist for improving task scheduling in the cloud, it remains an unresolved issue. In the proposed framework, we attempt to optimize the usage of cloud computing resources by applying machine learning techniques. The framework dynamically selects the scheduling algorithm for each incoming request rather than arbitrarily assigning a task to a scheduling algorithm: a neural network predicts, at run-time, which scheduling algorithm is best for the incoming request. The framework considers the scheduling parameters cost, throughput, makespan, and degree of imbalance. The candidate scheduling algorithms are 1) MET, 2) MCT, 3) Sufferage, 4) Min-min, 5) Min-mean, and 6) Min-var. The framework includes four neural networks, one predicting the best algorithm for each scheduling parameter considered for optimization. The PCA algorithm is used to extract relevant features from the input data set. The proposed framework shows scope for improving overall system performance by dynamically selecting a precise scheduling algorithm for each incoming user request.
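
The selection pipeline described above can be sketched as follows: PCA reduces the request features, and one small classifier per scheduling parameter picks which candidate algorithm to dispatch the request to. The feature layout, synthetic labels, and network sizes below are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch: PCA feature reduction + one neural network per objective
# choosing among the six candidate schedulers named in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

ALGORITHMS = ["MET", "MCT", "Sufferage", "Min-min", "Min-mean", "Min-var"]
OBJECTIVES = ["cost", "throughput", "makespan", "degree_of_imbalance"]

rng = np.random.default_rng(0)
X = rng.random((500, 12))                      # 12 hypothetical request/VM features
labels = {o: rng.integers(0, len(ALGORITHMS), 500) for o in OBJECTIVES}

pca = PCA(n_components=5).fit(X)               # keep the most informative components
X_reduced = pca.transform(X)

# One small neural network per optimization objective, as in the abstract.
models = {o: MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_reduced, y)
          for o, y in labels.items()}

def pick_algorithm(request_features, objective="makespan"):
    """Predict the candidate scheduler for an incoming request and objective."""
    reduced = pca.transform(request_features.reshape(1, -1))
    return ALGORITHMS[int(models[objective].predict(reduced)[0])]

print(pick_algorithm(rng.random(12)))
```

In practice the labels would come from measuring which candidate scheduler actually performed best on historical requests, rather than the random placeholders used here.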


2021 ◽  
Vol 9 (2) ◽  
pp. 913-928
Author(s):  
Yadaiah Balagoni et al.

Cloud services are offered to consumers based on Service Level Agreements (SLAs) signed between the Cloud Service Provider (CSP) and the consumer. Due to on-demand provisioning of resources, the number of cloud consumers has grown exponentially. Job scheduling is one of the areas that has attracted researchers seeking to improve the performance of cloud management systems. Along with their on-premise infrastructure, Small and Medium Enterprises (SMEs) also depend on public cloud infrastructure (leading to a hybrid cloud) for seamless business continuity. In this context, ensuring SLAs and effectively managing hybrid cloud resources are major challenges. Hence, there is a need for an effective scheduling algorithm that considers multiple objective functions such as SLA (deadline), cost, and energy while making scheduling decisions. Most state-of-the-art schedulers in hybrid cloud environments consider a single objective function, which in real-world settings is inadequate for scheduling effectiveness. To overcome this problem, we propose an integrated framework that ensures SLAs (deadline), cost effectiveness, and energy efficiency through an underlying scheduling algorithm known as SCE-TS. The algorithm is evaluated with different workloads and SLAs on a cloud platform. The empirical study reveals that the proposed framework improves scheduling efficiency in terms of meeting SLAs, cost, and energy efficiency, and a comparison with the state of the art shows it to be effective in making scheduling decisions in a cloud environment.
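
The abstract does not detail how SCE-TS combines its three objectives, so the following is only a generic sketch of a deadline-, cost-, and energy-aware placement decision of the kind it targets; the VM attributes and weights are illustrative assumptions.

```python
# Hedged sketch: keep only VMs that can meet the SLA deadline, then choose the
# one with the lowest weighted cost/energy score. Not the actual SCE-TS policy.
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    is_public: bool           # hybrid cloud: on-premise vs public capacity
    speed: float              # relative processing speed
    price_per_sec: float      # 0.0 for already-owned on-premise machines
    power_watts: float

def pick_vm(task_length, deadline, vms, w_cost=0.6, w_energy=0.4):
    """Filter by deadline feasibility, then minimize a weighted cost/energy score.
    A real scheduler would normalize cost and energy before weighting them."""
    feasible = []
    for vm in vms:
        runtime = task_length / vm.speed
        if runtime <= deadline:
            cost = runtime * vm.price_per_sec
            energy = runtime * vm.power_watts
            feasible.append((w_cost * cost + w_energy * energy, vm, runtime))
    if not feasible:
        return None           # SLA cannot be met; caller may burst to the public cloud
    _, vm, runtime = min(feasible, key=lambda x: x[0])
    return vm, runtime

vms = [Vm("onprem-1", False, 1.0, 0.0, 150.0),
       Vm("public-large", True, 2.5, 0.002, 90.0)]
print(pick_vm(task_length=1_000, deadline=600, vms=vms))
```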


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 563
Author(s):  
Babu Rajendiran ◽  
Jayashree Kanniappan

Nowadays, many business organizations operate in the cloud in order to reduce their operating costs and to select the best service from among many cloud providers. The increasing number of cloud services available on the market requires the cloud consumer to be careful in selecting the most apt Cloud Service Provider, one that satisfies functional as well as QoS parameters. Many disciplines of computer-based applications use standardized ontologies to represent information in their fields, which indicates the need for an ontology-based representation here as well. The proposed generic model can help service consumers identify the interrelations among QoS parameters in the cloud service selection ontology at run-time, and help service providers enhance their business by interpreting these relations. The ontology has been developed using the intended QoS attributes from various service providers. A generic model has been developed and tested with the developed ontology.
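
As a rough illustration of what such an ontology might capture, the sketch below expresses a few QoS parameters and a hypothetical "influences" relation with rdflib. The class and property names and the example.org namespace are assumptions, not the authors' ontology.

```python
# Illustrative only: QoS parameter interrelations for cloud service selection
# modelled as RDF triples with rdflib.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

QOS = Namespace("http://example.org/cloud-qos#")
g = Graph()
g.bind("qos", QOS)

# QoS parameters as classes, plus a hypothetical "influences" relation.
for param in ("Availability", "Throughput", "ResponseTime", "Cost"):
    g.add((QOS[param], RDF.type, RDFS.Class))
g.add((QOS.influences, RDF.type, RDF.Property))
g.add((QOS.Throughput, QOS.influences, QOS.ResponseTime))
g.add((QOS.Availability, QOS.influences, QOS.Cost))

# A provider advertising a measured value for one parameter.
g.add((QOS.ProviderA, QOS.offers, QOS.Availability))
g.add((QOS.ProviderA, QOS.availabilityValue, Literal(99.95)))

print(g.serialize(format="turtle"))
```

A consumer-side tool could query such a graph at run-time to see which advertised parameters indirectly affect the ones it cares about.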


2022 ◽  
Vol 15 (2) ◽  
pp. 1-27
Author(s):  
Andrea Damiani ◽  
Giorgia Fiscaletti ◽  
Marco Bacis ◽  
Rolando Brondolin ◽  
Marco D. Santambrogio

“Cloud-native” is the umbrella adjective describing the standard approach for developing applications that exploit cloud infrastructures’ scalability and elasticity at their best. As application complexity and user bases grow, designing for performance becomes a first-class engineering concern. As an answer to these needs, heterogeneous computing platforms gained widespread attention as powerful tools to continue meeting SLAs for compute-intensive cloud-native workloads. We propose BlastFunction, an FPGA-as-a-Service full-stack framework to ease FPGAs’ adoption for cloud-native workloads, integrating with the vast spectrum of fundamental cloud models. At the IaaS level, BlastFunction time-shares FPGA-based accelerators to provide multi-tenant access to accelerated resources without any code rewriting. At the PaaS level, BlastFunction accelerates functionalities leveraging the serverless model and scales functions proactively, depending on the workload’s performance. Further lowering the FPGAs’ adoption barrier, an accelerators’ registry hosts accelerated functions ready to be used within cloud-native applications, bringing the simplicity of a SaaS-like approach to developers. After an extensive experimental campaign against state-of-the-art cloud scenarios, we show how BlastFunction leads to higher performance metrics (utilization and throughput) compared with native execution, with minimal latency and overhead differences. Moreover, the scaling scheme we propose outperforms the main serverless autoscaling algorithms in both workload performance and the number of scaling operations.
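
To make the proactive, performance-driven scaling idea concrete, here is a toy sketch of a latency-proportional replica controller. BlastFunction's actual controller and metrics are not described in the abstract; the rule and thresholds below are assumptions in the spirit of common serverless/HPA-style autoscalers.

```python
# Toy sketch of performance-driven scaling for an accelerated function:
# scale replicas in proportion to how far observed latency drifts from the target.
def desired_replicas(current_replicas, observed_latency_ms, target_latency_ms,
                     min_replicas=1, max_replicas=8):
    """Return the new replica count, clamped to [min_replicas, max_replicas]."""
    if observed_latency_ms <= 0:
        return current_replicas
    ratio = observed_latency_ms / target_latency_ms
    proposal = round(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, proposal))

# Latency at 2x the target with 2 replicas -> scale out to 4 replicas.
print(desired_replicas(2, observed_latency_ms=40.0, target_latency_ms=20.0))
```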


Author(s):  
Ravi Mahadevan ◽  
Neelamegam Anbazhagan

Nowadays, enterprises and individuals are contributing their workloads to cloud service providers, and this trend is increasing on a daily basis. A large number of CSPs are available to offer virtualized, dynamic resources on a pay-per-use basis. However, most CSPs fail to maintain quality of service (QoS) and minimal resource optimization. Some existing approaches focus heavily on the scheduling policy but do not consider reliable services with optimized QoS. To address this problem, the framework proposes an Enhanced Minimal Resource Optimization based Scheduling Algorithm to minimize resource usage while maintaining QoS. The method avoids delay in the request-response model in the cloud environment. To avoid overload during resource allocation, the proposed design utilizes an optimized scheduling policy. The proposed mechanism utilizes an optimized service brokering policy to reduce response delay in the cloud environment. The framework also helps cloud users select the best CSP according to its prior services. The method offers a resource-based structure that substantially reduces placement churn. The proposed system uses an efficient scheduling policy to route data requests to the CSP with minimal data processing time. The overall aim is to improve the QoS of the cloud service provider across multi-dimensional resources. Based on experimental evaluations, the proposed technique improves CPT (Computation Processing Time) by 301.72 milliseconds, BU (Bandwidth Utilization) by 20 Mbps, CPUU (CPU Utilization) by 5%, and MRU (Memory Resource Utilization) by 3% on the given input parameters compared to the existing methodology.
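
The service-brokering idea of routing each request to the CSP expected to process it fastest can be sketched as follows. The actual Enhanced Minimal Resource Optimization policy is not specified in the abstract, so the latency and queue model here is an illustrative assumption.

```python
# Hedged sketch: pick the CSP/datacenter with the minimal estimated processing
# time, accounting for network latency and current queue length.
from dataclasses import dataclass

@dataclass
class Csp:
    name: str
    network_latency_ms: float
    service_rate: float                 # requests handled per second
    queued_requests: int = 0

    def estimated_processing_ms(self, request_size: float) -> float:
        queue_delay = 1000.0 * self.queued_requests / self.service_rate
        service_time = 1000.0 * request_size / self.service_rate
        return self.network_latency_ms + queue_delay + service_time

def route_request(csps, request_size=1.0):
    """Send the request to the CSP with the lowest estimated processing time."""
    best = min(csps, key=lambda c: c.estimated_processing_ms(request_size))
    best.queued_requests += 1
    return best.name

csps = [Csp("csp-a", 20.0, 50.0), Csp("csp-b", 35.0, 120.0)]
print([route_request(csps) for _ in range(5)])
```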


Cloud computing is well known today for its enormous data storage capacity and quick access to information over the network. It gives an individual client virtually unlimited storage space and makes information available and accessible anytime, anywhere. A cloud service provider can increase effective storage capacity by incorporating data deduplication into cloud storage, since deduplication removes the redundant and replicated data that commonly occurs in cloud environments. This paper presents a literature survey of the different deduplication techniques that have been applied to cloud data storage. To better ensure secure deduplication in the cloud, this paper examines both file-level and block-level data deduplication.
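
The contrast between file-level and block-level deduplication can be illustrated with a short content-hashing sketch. Fixed-size 4 KiB blocks and SHA-256 fingerprints are common choices but are assumptions here, not taken from the surveyed papers.

```python
# Hedged sketch: file-level dedup keys on one hash of the whole file, while
# block-level dedup stores each unique fixed-size block once and keeps a
# per-file "recipe" of block fingerprints.
import hashlib

BLOCK_SIZE = 4096
block_store = {}      # fingerprint -> block bytes, stored only once

def file_fingerprint(data: bytes) -> str:
    """File-level dedup: identical files share one fingerprint."""
    return hashlib.sha256(data).hexdigest()

def dedup_blocks(data: bytes):
    """Block-level dedup: split, store unique blocks, return the recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        block_store.setdefault(fp, block)   # skip storage if already present
        recipe.append(fp)
    return recipe

def restore(recipe):
    return b"".join(block_store[fp] for fp in recipe)

original = b"A" * 10000 + b"B" * 2000
recipe = dedup_blocks(original)
assert restore(recipe) == original
print(len(recipe), "blocks referenced,", len(block_store), "unique blocks stored")
```

Block-level deduplication saves more space when files share partial content, at the cost of managing many more fingerprints than the file-level approach.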

