Starting workflow tasks before they're ready

Author(s):  
Wladislaw Gusew ◽  
Bjorn Scheuermann
2003 ◽  
Vol 12 (04) ◽  
pp. 455-485
Author(s):  
Jacques Wainer ◽  
Paulo Barthelmess ◽  
Akhil Kumar

This paper presents a pair of role-based access control models for workflow systems, collectively known as the W-RBAC models. The first of these models, W0-RBAC, is based on a framework that couples a powerful RBAC-based permission service with a workflow component, with a clear separation of concerns for ease of administration of authorizations. The permission service is the focus of the work, providing an expressive logic-based language for selecting the users authorized to perform workflow tasks, with preference ranking. W1-RBAC extends the basic model by incorporating exception-handling capabilities through controlled and systematic overriding of constraints.
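The core idea of an RBAC-based permission service for workflow tasks can be illustrated with a minimal sketch. All class and method names below are hypothetical; the actual W-RBAC models use an expressive logic-based language rather than this simple set-intersection check, and the preference ranking here is a plain numeric stand-in.

```python
# Minimal sketch of an RBAC-style permission service for workflow tasks.
# Hypothetical names; W-RBAC itself uses a logic-based constraint language.

class PermissionService:
    def __init__(self):
        self.user_roles = {}   # user -> set of roles held
        self.task_roles = {}   # task -> set of roles allowed to perform it
        self.preference = {}   # (task, user) -> rank (lower is preferred)

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def permit(self, task, role):
        self.task_roles.setdefault(task, set()).add(role)

    def authorized_users(self, task):
        """Return the users allowed to perform `task`, best-ranked first."""
        allowed = self.task_roles.get(task, set())
        users = [u for u, roles in self.user_roles.items() if roles & allowed]
        return sorted(users, key=lambda u: self.preference.get((task, u), 0))

svc = PermissionService()
svc.assign_role("alice", "clerk")
svc.assign_role("bob", "manager")
svc.permit("approve_order", "manager")
print(svc.authorized_users("approve_order"))  # only bob holds an allowed role
```

Keeping the authorization check in a separate service, as above, mirrors the paper's separation of concerns: the workflow component only asks "who may do this task?" and never manipulates roles directly.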


Author(s):  
D. Sirisha ◽  
G. Vijayakumari

Compute-intensive applications structured as workflows necessitate Heterogeneous Processing Systems (HPS) to attain high performance and minimize turnaround time. Efficient scheduling of the workflow tasks is paramount to realizing the full potential of HPS and is a challenging NP-complete problem. In the present work, a Branch and Bound (BnB) strategy is applied to optimally schedule the workflow tasks. The proposed bounds are tighter, simpler and less complex than the existing bounds, and the upper bound is closer to the exact solution. Moreover, bounds on resource provisioning are devised to execute the workflows in the minimum possible time and optimally utilize the resources. The performance of the proposed BnB strategy is evaluated on a suite of benchmark workflows. The experimental results reveal that the proposed BnB strategy improved on the solutions of existing heuristic scheduling algorithms in more than 20 percent of the cases, and generated schedules better by over 7 percent in 82.6 percent of the cases.
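The branch-and-bound idea can be sketched in a few lines: enumerate task-to-processor assignments depth-first, and prune any partial assignment whose lower bound already exceeds the best complete schedule found so far. This is a generic sketch, not the paper's bounds: it ignores task precedence constraints, and its lower bound (current maximum load, or average load assuming each remaining task runs on its fastest processor) is deliberately simple.

```python
# Hedged sketch of branch-and-bound assignment of tasks to heterogeneous
# processors, minimizing makespan. exec_times[t][p] is task t's execution
# time on processor p. Precedence constraints are omitted for brevity.

def bnb_schedule(exec_times, n_procs):
    n_tasks = len(exec_times)
    best = [float("inf"), None]  # best makespan found, best assignment

    def search(t, loads, assign):
        if t == n_tasks:
            makespan = max(loads)
            if makespan < best[0]:
                best[0], best[1] = makespan, assign[:]
            return
        # Lower bound: loads only grow, and remaining work needs at least
        # its fastest-processor time spread over all processors.
        remaining_min = sum(min(exec_times[i]) for i in range(t, n_tasks))
        lb = max(max(loads), (sum(loads) + remaining_min) / n_procs)
        if lb >= best[0]:
            return  # prune: cannot beat the incumbent schedule
        for p in range(n_procs):
            loads[p] += exec_times[t][p]
            assign.append(p)
            search(t + 1, loads, assign)
            assign.pop()
            loads[p] -= exec_times[t][p]

    search(0, [0.0] * n_procs, [])
    return best[0], best[1]

# Three tasks, two heterogeneous processors.
times = [[3, 5], [2, 4], [4, 2]]
print(bnb_schedule(times, 2))  # → (5, [0, 0, 1])
```

Tighter bounds, as the paper proposes, prune more of the search tree and are what make BnB tractable on realistic workflows.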


2020 ◽  
Vol 29 (16) ◽  
pp. 2050255
Author(s):  
Heng Li ◽  
Yaoqin Zhu ◽  
Meng Zhou ◽  
Yun Dong

In mobile cloud computing, the computing resources of mobile devices can be pooled to execute complicated applications, tackling the problem of insufficient resources on individual devices. Such applications are generally characterized as workflows. Scheduling workflow tasks on a mobile cloud system consisting of heterogeneous mobile devices is an NP-hard problem. Intelligent algorithms, e.g., particle swarm optimization (PSO) and simulated annealing (SA), are widely used to solve this problem. However, both PSO and SA suffer from the limitation of being easily trapped in local optima: since these methods rely on their evolutionary mechanisms to explore new solutions in the solution space, the search procedure converges once it gets stuck in a local optimum. To address this limitation, in this paper we propose two effective metaheuristic algorithms that incorporate the iterated local search (ILS) strategy into the PSO and SA algorithms, respectively. When the intelligent algorithm converges to a local optimum, the proposed algorithms use a perturbation operator to explore new solutions and start a new round of evolution in the solution space from them. This procedure is iterated until no better solutions can be found. Experimental results show that, by incorporating the ILS strategy, our proposed algorithms outperform PSO and SA in reducing workflow makespans. In addition, the perturbation operator is beneficial for improving the effectiveness of the scheduling algorithms in exploring high-quality scheduling solutions.
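The perturb-and-restart mechanism at the heart of ILS can be shown with a standalone sketch. This is not the paper's PSO/SA hybrid: the inner search here is a plain hill-climb over task-to-device assignments, and the perturbation simply reassigns two random tasks, but the structure (escape a local optimum, re-run the search, keep the better result) is the same.

```python
import random

# Hedged sketch of iterated local search (ILS) for workflow-task assignment:
# when local search stalls in a local optimum, perturb the incumbent and
# restart the search from the perturbed solution.

def makespan(assign, exec_times, n_procs):
    loads = [0.0] * n_procs
    for t, p in enumerate(assign):
        loads[p] += exec_times[t][p]
    return max(loads)

def local_search(assign, exec_times, n_procs):
    """Hill-climb: move single tasks between devices while it helps."""
    improved = True
    while improved:
        improved = False
        for t in range(len(assign)):
            for p in range(n_procs):
                cand = assign[:]
                cand[t] = p
                if makespan(cand, exec_times, n_procs) < makespan(assign, exec_times, n_procs):
                    assign, improved = cand, True
    return assign

def ils(exec_times, n_procs, rounds=30, seed=0):
    rng = random.Random(seed)
    best = local_search([0] * len(exec_times), exec_times, n_procs)
    for _ in range(rounds):
        # Perturbation: randomly reassign two tasks, then search again.
        cand = best[:]
        for t in rng.sample(range(len(cand)), 2):
            cand[t] = rng.randrange(n_procs)
        cand = local_search(cand, exec_times, n_procs)
        if makespan(cand, exec_times, n_procs) < makespan(best, exec_times, n_procs):
            best = cand
    return best, makespan(best, exec_times, n_procs)

times = [[3, 5], [2, 4], [4, 2], [1, 6]]  # 4 tasks, 2 heterogeneous devices
print(ils(times, 2))
```

In the paper's variants, the inner search is PSO or SA rather than hill-climbing, but the perturbation operator plays the same role: it supplies fresh starting points once the population or temperature schedule has converged.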


2021 ◽  
Author(s):  
Vahid Arabnejad

<p>Basic science is becoming ever more computationally intensive, increasing the need for large-scale compute and storage resources, whether within a High-Performance Computing cluster or, more recently, within the cloud. Commercial clouds have increasingly become a viable platform for hosting scientific analyses and computation due to their elasticity, the recent introduction of specialist hardware, and their pay-as-you-go cost model. This computing paradigm therefore presents a low-capital, low-barrier alternative to operating dedicated eScience infrastructure. Indeed, commercial clouds now enable universal access to capabilities previously available only to large, well-funded research groups. While the potential benefits of cloud computing are clear, there are still significant technical hurdles to obtaining the best execution efficiency while trading off cost. In most cases, large-scale scientific computation is represented as a workflow for scheduling and runtime provisioning. Such scheduling becomes an even more challenging problem on cloud systems due to the dynamic nature of the cloud: in particular, the elasticity, the pricing models (both static and dynamic), the non-homogeneous resource types, and the vast array of services. This mapping of workflow tasks onto a set of provisioned instances is an instance of the general scheduling problem and is NP-complete. In addition, certain runtime constraints must be met, the most typical being the cost of the computation and the time that computation requires to complete. This thesis addresses the scientific workflow scheduling problem in the cloud, which is to schedule workflow tasks on cloud resources in a way that users meet their defined constraints, such as budget and deadline, while providers maximize profits and resource utilization. Moreover, it explores different mechanisms and strategies for distributing the defined constraints over a workflow and investigates their impact on the overall cost of the resulting schedule.</p>
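One common family of constraint-distribution strategies splits a user's overall deadline across the levels of a workflow in proportion to each level's estimated runtime, giving each level a sub-deadline the scheduler can target. The sketch below shows that proportional strategy only; it is one of several a scheduler might compare, not necessarily the thesis's exact method, and the level runtimes are assumed to be precomputed estimates.

```python
# Hedged sketch of proportional deadline distribution over workflow levels:
# each level gets a cumulative sub-deadline proportional to its share of
# the total estimated runtime. (One common strategy among several.)

def distribute_deadline(level_runtimes, deadline):
    total = sum(level_runtimes)
    sub_deadlines, elapsed = [], 0.0
    for rt in level_runtimes:
        elapsed += deadline * rt / total  # this level's proportional share
        sub_deadlines.append(round(elapsed, 6))
    return sub_deadlines

# A 3-level workflow with estimated runtimes 10, 30, 10 and deadline 100.
print(distribute_deadline([10, 30, 10], 100))  # → [20.0, 80.0, 100.0]
```

How the overall constraint is carved up matters: a level given too tight a sub-deadline forces expensive fast instances, while slack elsewhere goes unused, which is exactly the cost impact the thesis investigates.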


Author(s):  
F. Yu ◽  
H. Chen ◽  
K. Tu ◽  
Q. Wen ◽  
J. He ◽  
...  

To meet the monitoring needs of emergency response to major disasters, an overall plan for the coordinated planning of spaceborne, airborne and ground observation resources has been designed, combining the disaster information acquired immediately after the disaster with the dynamic simulation results of the disaster chain's evolution. Based on an analysis of the characteristics of major disaster observation tasks, the key technologies of spaceborne, airborne and ground collaborative observation are studied. For different disaster response levels, the corresponding workflow tasks are designed. While satisfying different types of disaster monitoring demands, the existing multi-satellite collaborative observation planning algorithms are compared, analyzed and optimized.
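The mapping from response level to observation workflow tasks can be expressed as a simple lookup. The levels and task names below are entirely hypothetical placeholders, chosen only to illustrate the pattern of escalating from spaceborne to airborne to ground resources as the response level rises; the paper's actual level definitions and task designs are not reproduced here.

```python
# Hedged illustration (hypothetical levels and task names): dispatching
# observation workflow tasks by disaster response level, adding airborne
# and ground resources as the level rises.

RESPONSE_WORKFLOWS = {
    1: ["satellite_wide_area_survey"],
    2: ["satellite_wide_area_survey", "uav_detail_mapping"],
    3: ["satellite_wide_area_survey", "uav_detail_mapping",
        "ground_sensor_monitoring"],
}

def plan_observation(level):
    """Return the observation workflow tasks for a response level (1 = lowest)."""
    if level not in RESPONSE_WORKFLOWS:
        raise ValueError(f"unknown response level: {level}")
    return RESPONSE_WORKFLOWS[level]

print(plan_observation(3))
```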

