Towards Efficient Bounds on Completion Time and Resource Provisioning for Scheduling Workflows on Heterogeneous Processing Systems

Author(s):  
D. Sirisha ◽  
G. Vijayakumari

Compute-intensive applications structured as workflows require Heterogeneous Processing Systems (HPS) to attain high performance and minimize turnaround time. Efficient scheduling of workflow tasks is paramount to realizing the full potential of HPS and is a challenging NP-complete problem. In the present work, a Branch and Bound (BnB) strategy is applied to schedule workflow tasks optimally. The proposed bounds are tighter, simpler, and less complex than existing bounds, and the upper bound lies closer to the exact solution. Moreover, bounds on resource provisioning are devised to execute workflows in the minimum possible time while utilizing resources optimally. The performance of the proposed BnB strategy is evaluated on a suite of benchmark workflows. The experimental results reveal that the proposed BnB strategy improved on the solutions of existing heuristic scheduling algorithms in more than 20 percent of cases and generated schedules better by over 7 percent in 82.6 percent of cases.
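The abstract gives no algorithmic detail, so the following is only a minimal sketch of how a branch-and-bound search over task-to-processor assignments can prune partial schedules with a makespan bound. The cost matrix, the DAG, and the simple pruning rule are all invented for illustration; communication costs and the paper's tighter bounds are omitted.

```python
# Hypothetical toy instance: exec_time[t][p] = run time of task t on processor p;
# preds[t] lists the predecessors of task t. Tasks are indexed in topological order.
# Communication costs are omitted for simplicity.
exec_time = [[3, 5], [4, 2], [2, 6], [5, 3]]
preds = [[], [0], [0], [1, 2]]

def bnb_schedule(exec_time, preds):
    """Exact makespan minimization by branch and bound (illustrative only)."""
    n, m = len(exec_time), len(exec_time[0])
    best = [float("inf"), None]  # incumbent makespan and assignment

    def solve(task, proc_free, finish_of, assign):
        if task == n:
            span = max(finish_of)
            if span < best[0]:
                best[0], best[1] = span, assign[:]
            return
        # Earliest start: all predecessors must have finished.
        ready = max((finish_of[q] for q in preds[task]), default=0)
        for p in range(m):
            start = max(ready, proc_free[p])
            fin = start + exec_time[task][p]
            # Bound: the final makespan is at least this task's finish time,
            # so prune any branch that cannot beat the incumbent.
            if fin >= best[0]:
                continue
            saved = proc_free[p]
            proc_free[p] = fin
            solve(task + 1, proc_free, finish_of + [fin], assign + [p])
            proc_free[p] = saved

    solve(0, [0] * m, [], [])
    return best[0], best[1]
```

On this instance the search returns the optimal makespan of 8 (task 0 on processor 0, then tasks 1 and 2 in parallel, then task 3).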

2008 ◽  
Vol 17 (03) ◽  
pp. 349-371 ◽  
Author(s):  
TAO HUANG ◽  
LEI LI ◽  
JUN WEI

With the increasing number of Web Services offering similar or identical functionality, the non-functional properties of a Web Service become more and more important. Hence, a choice must be made as to which services participate in a given composite service. In general, multi-QoS-constrained Web Services composition, with or without optimization, is an NP-complete problem that cannot be solved exactly in polynomial time. Many heuristics and approximation algorithms with polynomial- and pseudo-polynomial-time complexities have been designed for this problem; however, their computational cost remains too high for service composition at runtime. In this paper, we propose an efficient approach for multi-QoS-constrained Web Services selection. First, a user preference model is proposed to capture the user's preferences. Then, a correlation model of candidate services is established to reduce the search space. Based on these two models, a heuristic algorithm is proposed to find a feasible solution for multi-QoS-constrained Web Services selection with high performance and high precision. The experimental results show that the proposed approach achieves the expected goals.
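As a rough illustration of utility-guided, QoS-constrained selection (not the authors' algorithm), one can score each candidate by a preference-weighted utility and greedily keep the composition feasible. The candidate lists, weights, and latency bound below are all hypothetical.

```python
# Hypothetical candidates: per abstract task, a list of (name, latency, cost).
candidates = [
    [("s11", 20, 5), ("s12", 35, 2)],
    [("s21", 10, 8), ("s22", 25, 3)],
    [("s31", 15, 4), ("s32", 30, 1)],
]

def select_services(candidates, max_latency, w_latency=0.5, w_cost=0.5):
    """Greedy utility-guided selection under an end-to-end latency bound.

    A stand-in for the paper's heuristic (w_* model user preference weights);
    not the authors' algorithm.
    """
    chosen, total_latency, total_cost = [], 0, 0
    for options in candidates:
        # Normalize QoS values within each candidate set, then rank by utility.
        lat_max = max(o[1] for o in options)
        cost_max = max(o[2] for o in options)
        def utility(o):
            return w_latency * (1 - o[1] / lat_max) + w_cost * (1 - o[2] / cost_max)
        for o in sorted(options, key=utility, reverse=True):
            if total_latency + o[1] <= max_latency:  # keep the QoS constraint feasible
                chosen.append(o[0]); total_latency += o[1]; total_cost += o[2]
                break
        else:
            return None  # no feasible completion from this greedy prefix
    return chosen, total_latency, total_cost
```

A real approach would also handle QoS correlation between services and global (not per-step) constraint checks, as the abstract describes.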


2021 ◽  
Author(s):  
◽  
Vahid Arabnejad

<p>Basic science is becoming ever more computationally intensive, increasing the need for large-scale compute and storage resources, whether within a high-performance computing cluster or, more recently, within the cloud. Commercial clouds have increasingly become a viable platform for hosting scientific analyses and computation due to their elasticity, the recent introduction of specialist hardware, and their pay-as-you-go cost model. This computing paradigm therefore presents a low-capital, low-barrier alternative to operating dedicated eScience infrastructure. Indeed, commercial clouds now enable universal access to capabilities previously available only to large, well-funded research groups. While the potential benefits of cloud computing are clear, there are still significant technical hurdles to obtaining the best execution efficiency while trading off cost. In most cases, large-scale scientific computation is represented as a workflow for scheduling and runtime provisioning. Such scheduling becomes an even more challenging problem on cloud systems due to the dynamic nature of the cloud, in particular its elasticity, its pricing models (both static and dynamic), its non-homogeneous resource types, and its vast array of services. This mapping of workflow tasks onto a set of provisioned instances is an instance of the general scheduling problem and is NP-complete. In addition, certain runtime constraints must be met, the most typical being the cost of the computation and the time the computation requires to complete. This thesis addresses the scientific workflow scheduling problem in the cloud: scheduling workflow tasks on cloud resources such that users meet their defined constraints, such as budget and deadline, and providers maximize profits and resource utilization. Moreover, it explores different mechanisms and strategies for distributing user-defined constraints over a workflow and investigates their impact on the overall cost of the resulting schedule.</p>
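One simple example of distributing a user-defined constraint over a workflow, of the general kind the thesis compares rather than a specific algorithm from it, is to split a deadline across workflow levels in proportion to each level's minimum execution time. The level times below are hypothetical.

```python
# Hypothetical minimum execution time per workflow level (topological depth).
level_min_times = [4, 10, 6]

def distribute_deadline(level_min_times, deadline):
    """Spread a user deadline over workflow levels in proportion to each
    level's minimum execution time -- one illustrative distribution strategy,
    not a result from the thesis."""
    total = sum(level_min_times)
    return [deadline * t / total for t in level_min_times]
```

Each level's sub-deadline can then drive per-level instance provisioning, e.g. choosing the cheapest instance type that still meets the level's share of the deadline.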




2019 ◽  
Vol 8 (4) ◽  
pp. 10906-10909

In grid computing, scheduling algorithms play the vital role of mapping a set of tasks to the available heterogeneous resources. The extant literature has shown that this task-mapping problem is NP-complete. Heuristic scheduling algorithms aim to obtain the minimum overall execution time for the set of tasks. In this paper, we address the problem of scheduling a set of n tasks on a set of m resources such that the makespan is minimized. The proposed task scheduling algorithm (RTS) is based on the well-known optimization method called the Hungarian algorithm. RTS considers an equal number of tasks and resources, maps the tasks to the resources, and makes an effective scheduling decision. We simulate RTS and compare it with the Min-min heuristic scheduling algorithm. The performance evaluation shows that RTS produces a smaller makespan and better resource utilization than Min-min.
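To make the comparison concrete, here is a sketch contrasting the Min-min heuristic with an exhaustive one-to-one assignment that stands in for the Hungarian-based RTS; the actual Hungarian algorithm runs in polynomial time, and brute force is used here only because the toy instance is tiny. The cost matrix is invented.

```python
from itertools import permutations

# Hypothetical cost matrix: times[i][j] = run time of task i on resource j.
times = [
    [4, 7, 3],
    [8, 2, 6],
    [5, 9, 4],
]

def min_min(times):
    """Classic Min-min: repeatedly schedule the task whose earliest
    completion time (given current resource loads) is smallest."""
    n, m = len(times), len(times[0])
    load = [0] * m
    unscheduled = set(range(n))
    while unscheduled:
        t, r, ct = min(
            ((i, j, load[j] + times[i][j]) for i in unscheduled for j in range(m)),
            key=lambda x: x[2],
        )
        load[r] = ct
        unscheduled.remove(t)
    return max(load)  # makespan

def one_to_one_optimal(times):
    """Exhaustive one-to-one assignment minimizing makespan -- a brute-force
    stand-in for the Hungarian-based RTS, feasible only for small n."""
    n = len(times)
    return min(max(times[i][p[i]] for i in range(n)) for p in permutations(range(n)))
```

On this instance Min-min yields a makespan of 5, while the optimal one-to-one assignment achieves 4, illustrating the kind of improvement the abstract reports.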


2011 ◽  
Vol 39 (3) ◽  
pp. 193-209 ◽  
Author(s):  
H. Surendranath ◽  
M. Dunbar

Abstract Over the last few decades, finite element analysis has become an integral part of the overall tire design process. Engineers need to perform a number of different simulations to evaluate new designs and study the effect of proposed design changes. However, tires pose formidable simulation challenges due to the presence of highly nonlinear rubber compounds, embedded reinforcements, complex tread geometries, rolling contact, and large deformations. Accurate simulation requires careful consideration of these factors, resulting in extensive turnaround times that often prolong the design cycle. It is therefore critical to explore means of reducing turnaround time while producing reliable results. Compute clusters have recently become a cost-effective means of performing high-performance computing (HPC). Distributed-memory parallel solvers designed to take advantage of compute clusters have become increasingly popular. In this paper, we examine the use of HPC for various tire simulations and demonstrate how it can significantly reduce simulation turnaround time. Abaqus/Standard is used for routine tire simulations such as footprint and steady-state rolling analyses; Abaqus/Explicit is used for transient rolling and hydroplaning simulations. Run times and scaling data corresponding to models of various sizes and complexity are presented.


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Tony Böhle ◽  
Ulrike Georgi ◽  
Dewi Fôn Hughes ◽  
Oliver Hauser ◽  
Gudrun Stamminger ◽  
...  

Abstract
Objectives: For a long time, therapeutic drug monitoring of anti-infectives (ATDM) was recommended only to avoid the toxic side effects of overdosing. During the last decade, however, this attitude has undergone a significant change. Insufficient antibiotic therapy may promote the emergence of drug resistance; therefore, the "one-dose-fits-all" principle can no longer be considered up to date. Patients in intensive care units (ICUs), in particular, can benefit from individualized antibiotic therapies.
Methods: Presented here is a rapid and reliable LC-MS/MS-based assay for the analysis of eight antibiotics (ampicillin, cefepime, cefotaxime, ceftazidime, cefuroxime, linezolid, meropenem, and piperacillin) administered by continuous infusion, as well as voriconazole. In addition, a dose-adjustment procedure for individualized antibiotic therapy has been established.
Results: The dose adjustments suggested after the initial dosing of 121 patient samples from ICUs were evaluated over a period of three months. Only a minor percentage of the serum levels were found to be within the target range; overdosing was often observed for β-lactam antibiotics, while linezolid tended to be underdosed. The results demonstrate an appreciable potential for β-lactam savings while enabling optimal therapy.
Conclusions: The presented monitoring method provides high specificity and is very robust against various interferences. Fast and straightforward, the developed routine ensures a rapid turnaround time. Its application has been well received by the participating ICUs and has led to a growing number of hospital wards participating in ATDM.
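The dose-adjustment procedure itself is not specified in the abstract. A common textbook rule for continuous infusions under linear (first-order) pharmacokinetics is that steady-state concentration scales with infusion rate, so the rate is adjusted by the ratio of target to measured concentration. The sketch below assumes that rule and uses illustrative numbers only; it is not the paper's protocol and is not clinical guidance.

```python
def adjust_infusion_rate(current_rate, measured_css, target_css):
    """Proportional dose adjustment for a continuous infusion, assuming
    linear pharmacokinetics (steady-state concentration proportional to
    infusion rate). Illustrative only -- not the paper's procedure.
    """
    if measured_css <= 0:
        raise ValueError("measured concentration must be positive")
    return current_rate * target_css / measured_css

# Hypothetical example: an infusion of 12 g/day with a measured steady-state
# level of 120 mg/L and a target of 60 mg/L would be halved to 6 g/day.
```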


Author(s):  
Chin-Chia Wu ◽  
Ameni Azzouz ◽  
Jia-Yang Chen ◽  
Jianyou Xu ◽  
Wei-Lun Shen ◽  
...  

Abstract
This paper studies a single-machine multitasking scheduling problem with two competing agents. The objective is to find an optimal schedule that minimizes the total tardiness of one agent subject to an upper bound on the total completion time of the other agent. For this problem, a branch-and-bound method equipped with several dominance properties and a lower bound is developed to find optimal solutions for small-sized instances. Three metaheuristics, a cloud-theory-based simulated annealing algorithm, a genetic algorithm, and a simulated annealing algorithm, each with three improvement mechanisms, are proposed to find near-optimal solutions for large-sized instances. Computational experiments are provided to evaluate the capabilities of the proposed algorithms. Finally, statistical analysis methods are applied to compare their performance.
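A minimal way to state the constrained objective is to evaluate a job sequence and exhaustively search the feasible permutations, which is the baseline a branch-and-bound method prunes with dominance properties and lower bounds. The job data, agent labels, and bound below are hypothetical, and multitasking interruptions are omitted for simplicity.

```python
from itertools import permutations

# Hypothetical jobs: (processing_time, due_date, agent); due dates of agent-B
# jobs are irrelevant since B's measure is total completion time.
jobs = [(3, 4, "A"), (2, 9, "A"), (4, 0, "B"), (1, 0, "B")]

def evaluate(seq, jobs):
    """Return (total tardiness of agent A, total completion time of agent B)
    for a given processing sequence on a single machine."""
    t, tardiness_a, completion_b = 0, 0, 0
    for i in seq:
        p, d, agent = jobs[i]
        t += p
        if agent == "A":
            tardiness_a += max(0, t - d)
        else:
            completion_b += t
    return tardiness_a, completion_b

def best_schedule(jobs, b_bound):
    """Exhaustive search: minimize agent A's total tardiness subject to
    agent B's total completion time not exceeding b_bound."""
    best = None
    for seq in permutations(range(len(jobs))):
        ta, cb = evaluate(seq, jobs)
        if cb <= b_bound and (best is None or ta < best[0]):
            best = (ta, seq)
    return best
```

On this instance, with B's bound set to 9, the constrained optimum has a total tardiness of 1 for agent A.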


2001 ◽  
Vol 34 (44) ◽  
pp. 9555-9567 ◽  
Author(s):  
Tomohiro Sasamoto ◽  
Taro Toyoizumi ◽  
Hidetoshi Nishimori
