A Design of Integrated Scientific Workflow Execution Environment for a Computational Scientific Application

2012 ◽  
Vol 13 (1) ◽  
pp. 37-44 ◽  
Author(s):  
Seo-Young Kim ◽  
Kyoung-A Yoon ◽  
Yoon-Hee Kim

Author(s):  
Dukyun Nam ◽  
Junehawk Lee ◽  
Kum Won Cho

Using scientific application services efficiently on a shared computing environment requires technology that integrates each application service into a workflow, so that the workflow as a whole can be executed cooperatively. There have been a number of attempts to automate research activities as scientific workflows. In practice, however, full automation breaks down for many simulation programs and researchers. In the cyber environment for Collaborative and Distributed E-Research (CDER), the different types of workflows therefore need to be studied and supported separately, with different methodologies. In this chapter, the authors analyze scientific research and education processes and categorize them into four types: simulation, experiment, collaborative work, and educational activity. They then describe the applications needed for each category. To justify this categorization of CDER workflows, they examine the workflow of e-AIRS (e-Science Aerospace Integrated Research System), a problem-solving environment for aerospace research.
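To make the four-way categorization concrete, the sketch below models CDER workflow categories and a trivial step executor in Python. It is a minimal illustration only: the class names, service names, and the executor are assumptions for this sketch, not part of e-AIRS or the chapter.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class CderCategory(Enum):
    """The four workflow categories identified for the CDER environment."""
    SIMULATION = auto()
    EXPERIMENT = auto()
    COLLABORATIVE_WORK = auto()
    EDUCATIONAL_ACTIVITY = auto()


@dataclass
class WorkflowStep:
    name: str
    service: str  # the application service invoked at this step


@dataclass
class CderWorkflow:
    category: CderCategory
    steps: list[WorkflowStep] = field(default_factory=list)

    def run(self) -> None:
        # Placeholder executor: a real engine would dispatch each step
        # to its application service and coordinate the results.
        for step in self.steps:
            print(f"[{self.category.name}] {step.name} -> {step.service}")


# Example: a two-step aerospace simulation workflow in the spirit of e-AIRS
# (step and service names are invented for illustration).
wf = CderWorkflow(
    category=CderCategory.SIMULATION,
    steps=[
        WorkflowStep("mesh generation", "mesh-service"),
        WorkflowStep("CFD solve", "cfd-solver-service"),
    ],
)
wf.run()
```

The point of the enum is that each category can then be bound to its own execution methodology, which is the separation the chapter argues for.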


2011 ◽  
Vol 23 (16) ◽  
pp. 1951-1968 ◽  
Author(s):  
Lianyong Qi ◽  
Wenmin Lin ◽  
Wanchun Dou ◽  
Jian Jiang ◽  
Jinjun Chen

Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 169 ◽  
Author(s):  
Na Wu ◽  
Decheng Zuo ◽  
Zhan Zhang

Improving reliability is one of the major concerns of scientific workflow scheduling in clouds. The ever-growing computational complexity and data size of workflows present challenges to fault-tolerant workflow scheduling, so a cost-effective fault-tolerant scheduling approach for large-scale workflows is essential. In this paper, we propose a dynamic fault-tolerant workflow scheduling (DFTWS) approach with hybrid spatial and temporal re-execution schemes. First, DFTWS calculates the time attributes of tasks and identifies the critical path of the workflow in advance. Then, in the initial resource-allocation phase, DFTWS assigns an appropriate virtual machine (VM) to each task according to the task's urgency and budget quota. Finally, DFTWS performs online scheduling, making real-time fault-tolerant decisions based on failure type and task criticality throughout workflow execution. The proposed algorithm is evaluated on real-world workflows, and the factors that affect its performance are analyzed. The experimental results demonstrate that DFTWS achieves a trade-off between the objectives of high reliability and low cost in cloud computing environments.
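The abstract does not include pseudocode, so the following is a minimal sketch of two ingredients it describes: computing the critical path of a workflow DAG from estimated task runtimes, and choosing between temporal re-execution (retry on the same VM) and spatial re-execution (re-dispatch to another VM) based on failure type and task criticality. The exact policy and all names are illustrative assumptions, not the DFTWS algorithm itself.

```python
from collections import defaultdict, deque


def critical_path(deps, runtime):
    """Return (length, path): the longest-runtime path through the DAG.

    deps maps each task to its predecessors; runtime maps each task to
    an estimated execution time. Both stand in for the "time attributes"
    DFTWS computes in advance.
    """
    succ = defaultdict(list)
    indeg = {t: 0 for t in runtime}
    for t, preds in deps.items():
        for p in preds:
            succ[p].append(t)
            indeg[t] += 1

    # Kahn's algorithm: visit tasks in topological order, tracking the
    # latest finish time and the predecessor that achieves it.
    queue = deque(t for t, d in indeg.items() if d == 0)
    finish = {t: runtime[t] for t in runtime}
    best_pred = {}
    while queue:
        t = queue.popleft()
        for s in succ[t]:
            if finish[t] + runtime[s] > finish[s]:
                finish[s] = finish[t] + runtime[s]
                best_pred[s] = t
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)

    # Walk back from the latest-finishing task to recover the path.
    end = max(finish, key=finish.get)
    path = [end]
    while path[-1] in best_pred:
        path.append(best_pred[path[-1]])
    return finish[end], path[::-1]


def choose_recovery(failure_is_transient, on_critical_path):
    """One plausible hybrid policy: temporal re-execution (retry in place)
    for transient faults off the critical path; spatial re-execution
    (move to a fresh VM) otherwise, so critical tasks never wait on a
    suspect machine."""
    if failure_is_transient and not on_critical_path:
        return "temporal: retry on current VM"
    return "spatial: re-dispatch to a fresh VM"


# Tiny example workflow: t1 -> {t2, t3} -> t4.
deps = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
runtime = {"t1": 3.0, "t2": 5.0, "t3": 2.0, "t4": 4.0}
length, path = critical_path(deps, runtime)
print(length, path)                         # 12.0 ['t1', 't2', 't4']
print(choose_recovery(True, "t3" in path))  # temporal: retry on current VM
```

Keeping the critical path explicit is what lets an online scheduler treat failures of critical and non-critical tasks differently, which is the distinction the abstract draws.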


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 124 ◽  
Author(s):  
Satrajit S. Ghosh ◽  
Jean-Baptiste Poline ◽  
David B. Keator ◽  
Yaroslav O. Halchenko ◽  
Adam G. Thomas ◽  
...  

Reproducible research is a key element of the scientific process. The re-executability of the neuroimaging workflows behind published conclusions has not yet been sufficiently addressed or adopted by the neuroimaging community. In this paper, we document a set of procedures, including supplemental additions to a manuscript, that unambiguously define the data, workflow, execution environment, and results of a neuroimaging analysis, in order to generate a verifiable, re-executable publication. Re-executability provides a starting point for examining the generalizability and reproducibility of a given finding.
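As one illustration of what "unambiguously define the data, workflow, execution environment and results" could look like in machine-readable form, here is a minimal, hypothetical manifest builder. The field names and structure are assumptions made for this sketch; they are not the procedures proposed in the paper.

```python
import hashlib
import json
import platform
import sys
import tempfile


def sha256(path):
    """Checksum a file so the exact bytes a result depends on are pinned."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_manifest(data_files, workflow_cmd, result_files):
    """Assemble a record of the four elements the paper names:
    data, workflow, execution environment, and results.
    (Field names are illustrative, not a published schema.)"""
    return {
        "data": {p: sha256(p) for p in data_files},
        "workflow": {"command": workflow_cmd},
        "environment": {
            "python": sys.version,
            "platform": platform.platform(),
        },
        "results": {p: sha256(p) for p in result_files},
    }


# Demo with a throwaway file standing in for a real dataset.
with tempfile.NamedTemporaryFile(delete=False, suffix=".nii.gz") as f:
    f.write(b"stand-in imaging data")
manifest = build_manifest([f.name], "python run_pipeline.py", [f.name])
print(json.dumps(manifest, indent=2))
```

A reader (or a re-execution service) can rerun the recorded command and compare checksums of the regenerated results against the manifest, which is the verification step re-executability enables.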

