scientific workflow
Recently Published Documents


TOTAL DOCUMENTS: 608 (FIVE YEARS: 141)

H-INDEX: 35 (FIVE YEARS: 4)

2021 ◽  
Vol 5 (6) ◽  
pp. 56-60
Author(s):  
Guimei Zhang ◽  
Yingzi Yuan

Objective: To strengthen personnel management in a clean operating room and ensure an automatic, intelligent, and scientific workflow. Methods: A medical behavior management system was implemented to monitor and manage medical personnel entering and exiting the operating room, so as to meet the standard requirements of the operating room. Results: The flow of personnel was controlled effectively, flow in and out of the operating room was optimized, the management level of the operating room improved, and costs were reduced. Conclusion: With the advent of the information age and the continuous improvement of the management system, the management of operating rooms has become more reasonable and humanized; the management mode, working environment, and overall quality of nursing work in operating rooms have all improved.


2021 ◽  
Author(s):  
Vahid Arabnejad

Basic science is becoming ever more computationally intensive, increasing the need for large-scale compute and storage resources, whether within a High-Performance Computing cluster or, more recently, within the cloud. Commercial clouds have increasingly become a viable platform for hosting scientific analyses and computation due to their elasticity, recent introduction of specialist hardware, and pay-as-you-go cost model. This computing paradigm therefore presents a low-capital, low-barrier alternative to operating dedicated eScience infrastructure. Indeed, commercial clouds now enable universal access to capabilities previously available only to large, well-funded research groups. While the potential benefits of cloud computing are clear, there are still significant technical hurdles in obtaining the best execution efficiency while trading off cost. In most cases, large-scale scientific computation is represented as a workflow for scheduling and runtime provisioning. Such scheduling becomes an even more challenging problem on cloud systems due to the dynamic nature of the cloud, in particular its elasticity, its pricing models (both static and dynamic), its non-homogeneous resource types, and its vast array of services. The mapping of workflow tasks onto a set of provisioned instances is an instance of the general scheduling problem and is NP-complete. In addition, certain runtime constraints, most typically the cost of the computation and the time that computation requires to complete, must be met. This thesis addresses the scientific workflow scheduling problem in the cloud: scheduling workflow tasks on cloud resources so that users meet their defined constraints, such as budget and deadline, while providers maximize profit and resource utilization. Moreover, it explores different mechanisms and strategies for distributing the defined constraints over a workflow and investigates their impact on the overall cost of the resulting schedule.
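
As a concrete illustration of one such constraint-distribution strategy, the minimal sketch below spreads a user-defined deadline over workflow levels in proportion to each level's estimated runtime. The function name and the proportional rule are illustrative assumptions, not the thesis's actual algorithms.

```python
# Hypothetical sketch: distribute a workflow deadline over its levels,
# proportional to each level's estimated execution time.

def partition_deadline(levels, deadline):
    """levels: dict mapping level index -> estimated runtime of that level.
    Returns a cumulative sub-deadline for each level."""
    total = sum(levels.values())
    sub_deadlines, elapsed = {}, 0.0
    for level in sorted(levels):
        # Each level receives a share of the deadline equal to its
        # share of the total estimated runtime.
        elapsed += deadline * (levels[level] / total)
        sub_deadlines[level] = elapsed
    return sub_deadlines

# Example: a three-level workflow with estimated runtimes 10, 30, 20
print(partition_deadline({0: 10.0, 1: 30.0, 2: 20.0}, deadline=120.0))
# -> {0: 20.0, 1: 80.0, 2: 120.0}
```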


2021 ◽  
Author(s):  
KARPAGAM M

Abstract An inevitable part of the cloud computing environment is virtualization, which multiplexes many virtual machines onto a single physical machine while providing each virtual machine with an isolated environment. An important issue in cloud computing is workflow scheduling, which maps workflow tasks to VMs based on various functional and non-functional requisites. Workflow scheduling is an NP-hard optimization problem, and it is quite hard to achieve an optimal schedule. Metaheuristic algorithms have helped solve the cloud task scheduling problem and have compared favorably with other heuristics. Reactive Search Optimization (RSO) combines a local heuristic based on a certain neighborhood with a memory-based mechanism. The Shuffled Frog Leaping Algorithm (SFLA) is a swarm-based evolutionary method that imitates the information exchange of frogs divided into memeplexes while searching for food. This paper proposes a new set of optimization heuristics, along with a hybrid optimization (RSO-SFLA), to solve combinatorial optimization problems.
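
To make the SFLA component concrete, here is a minimal Python sketch of one shuffle-and-leap iteration, minimizing a generic cost function. The partitioning into memeplexes and the leap rule follow the standard SFLA description; the parameter choices and the example objective are illustrative assumptions, not the paper's implementation.

```python
# Minimal SFLA sketch: rank frogs, deal them into memeplexes, and let
# each memeplex's worst frog leap toward its best frog.
import random

def sfla_step(population, cost, n_memeplexes=4):
    """One shuffle-and-leap iteration over a list of solution vectors."""
    population.sort(key=cost)                      # rank frogs by fitness
    memeplexes = [population[i::n_memeplexes]      # deal frogs round-robin
                  for i in range(n_memeplexes)]    # into memeplexes
    for mem in memeplexes:
        best, worst = mem[0], mem[-1]
        # Worst frog leaps toward the memeplex best by a random step
        step = [random.random() * (b - w) for b, w in zip(best, worst)]
        candidate = [w + s for w, s in zip(worst, step)]
        if cost(candidate) < cost(worst):          # accept improving leaps
            mem[-1] = candidate
        else:                                      # else re-seed randomly
            mem[-1] = [random.uniform(-5, 5) for _ in worst]
    # Shuffle: merge memeplexes back so information is exchanged
    population[:] = [frog for mem in memeplexes for frog in mem]

# Example: minimize the 2-D sphere function
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
sphere = lambda x: sum(v * v for v in x)
for _ in range(100):
    sfla_step(pop, sphere)
print(min(pop, key=sphere))
```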


Author(s):  
Mirsaeid Hosseini Shirvani ◽  
Reza Noorian Talouki

Abstract Scheduling of scientific workflows on a hybrid cloud architecture, which contains private and public clouds, is a challenging task because schedulers must be aware of task inter-dependencies, underlying heterogeneity, cost diversity, and variable virtual machine (VM) configurations during the scheduling process. On the one side, reaching a minimum total execution time, or makespan, is favorable for users, whereas on the other side the cost of utilizing quicker VMs may conflict with their budget. Existing works in the literature scarcely consider a VM's monetary cost in the scheduling process, focusing mainly on makespan. Therefore, in this paper, the problem of scheduling scientific workflows on a hybrid cloud architecture is formulated as a bi-objective optimization problem minimizing both makespan and monetary cost. To address this combinatorial discrete problem, this paper presents a hybrid bi-objective optimization based on simulated annealing and task duplication algorithms (BOSA-TDA) that exploits two important heuristics, heterogeneous earliest finish time (HEFT) and task duplication, to improve canonical SA. Extensive simulation results from running well-known scientific workflows such as LIGO, SIPHT, CyberShake, Montage, and Epigenomics demonstrate that the proposed BOSA-TDA achieves average improvements of 12.5%, 14.5%, 17%, 13.5%, and 18.5% over existing approaches in terms of makespan, monetary cost, speed-up, SLR, and efficiency, respectively.
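
Since BOSA-TDA builds on HEFT, a brief sketch of HEFT's upward-rank computation may help: rank_u(t) = w(t) + max over successors s of (c(t,s) + rank_u(s)), where w is the average compute cost and c the average communication cost, with tasks scheduled in decreasing rank order. The tiny DAG and cost values below are illustrative assumptions, not data from the paper.

```python
# Sketch of HEFT's upward-rank heuristic on a toy workflow DAG.
from functools import lru_cache

def upward_ranks(succ, w, c):
    """succ: task -> list of successor tasks; w: task -> avg compute cost;
    c: (task, succ) -> avg communication cost. Returns rank_u per task."""
    @lru_cache(maxsize=None)
    def rank(t):
        if not succ[t]:                 # exit task: rank is its own cost
            return w[t]
        return w[t] + max(c[(t, s)] + rank(s) for s in succ[t])
    return {t: rank(t) for t in succ}

# Example: diamond-shaped workflow A -> {B, C} -> D
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
w = {"A": 10, "B": 20, "C": 5, "D": 8}
c = {("A", "B"): 3, ("A", "C"): 2, ("B", "D"): 4, ("C", "D"): 6}
ranks = upward_ranks(succ, w, c)
# Schedule tasks in decreasing rank order, as HEFT prescribes
print(sorted(ranks, key=ranks.get, reverse=True))  # ['A', 'B', 'C', 'D']
```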


2021 ◽  
Vol 7 ◽  
pp. e747
Author(s):  
Mazen Farid ◽  
Rohaya Latip ◽  
Masnida Hussin ◽  
Nor Asilah Wati Abdul Hamid

Background: Recent technological developments have enabled the execution of more scientific solutions on cloud platforms. Cloud-based scientific workflows are subject to various risks, such as security breaches and unauthorized access to resources. By attacking side channels or virtual machines, attackers may compromise servers, causing interruption, delay, or incorrect output. Although cloud-based scientific workflows are often used for vital computation-intensive tasks, their failure can come at a great cost. Methodology: To increase workflow reliability, we propose the Fault- and Intrusion-tolerant Workflow Scheduling algorithm (FITSW). The proposed workflow system uses task executors consisting of many virtual machines to carry out workflow tasks. FITSW duplicates each sub-task three times, uses an intermediate-data decision-making mechanism, and then employs a deadline-partitioning method to determine sub-deadlines for each sub-task. This way, dynamism is achieved in task scheduling using the resource flow. The proposed technique generates or recycles task executors, keeps the workflow clean, and improves efficiency. Experiments were conducted on WorkflowSim to evaluate the effectiveness of FITSW using metrics such as task completion rate, success rate, and completion time. Results: The results show that FITSW not only raises the success rate by about 12% but also improves the task completion rate by 6.2% and reduces the completion time by about 15.6% in comparison with the intrusion-tolerant scientific workflow (ITSW) system.
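
The following sketch illustrates the triplication and intermediate-data decision idea as described in the abstract: each sub-task runs on three executors, and a result is kept only when at least two replicas agree. The helper names and the majority-vote rule are assumptions inferred from the abstract, not the authors' code.

```python
# Illustrative sketch of triple-replicated execution with a
# majority-vote decision over intermediate results.
from collections import Counter

def run_replicated(task, executors):
    """Run `task` on three executors and majority-vote on the outputs."""
    outputs = [ex(task) for ex in executors[:3]]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes >= 2:                  # at least two replicas agree
        return value
    raise RuntimeError("no majority: reschedule task on fresh executors")

# Example: one executor is faulty (e.g., compromised by an intrusion)
good = lambda task: task * 2
faulty = lambda task: -1
print(run_replicated(21, [good, faulty, good]))  # -> 42
```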


2021 ◽  
Author(s):  
Alaa Albtoush ◽  
Noor Maizura Mohamad Noor ◽  
Farizah Yunus

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7238
Author(s):  
Zulfiqar Ahmad ◽  
Ali Imran Jehangiri ◽  
Mohammed Alaa Ala’anzy ◽  
Mohamed Othman ◽  
Arif Iqbal Umar

Cloud computing is a fully fledged, mature, and flexible computing paradigm that provides services to scientific and business applications in a subscription-based environment. Scientific applications such as Montage and CyberShake are organized as scientific workflows with data- and compute-intensive tasks, and they have some special characteristics: their tasks are executed in patterns of integration, disintegration, pipelining, and parallelism, and thus require special attention to task management and to data-oriented resource scheduling and management. Tasks executed in a pipeline are bottleneck executions, the failure of which renders the whole execution futile, so fault-tolerance-aware execution is required. Tasks executed in parallel require similar instances of cloud resources, so cluster-based execution may improve system performance in terms of makespan and execution cost. Therefore, this research work presents a cluster-based, fault-tolerant, and data-intensive (CFD) scheduling strategy for scientific applications in cloud environments. The CFD strategy addresses the data intensiveness of scientific workflow tasks with cluster-based, fault-tolerant mechanisms. The Montage scientific workflow was used as the simulation case, and the results of the CFD strategy were compared with three well-known heuristic scheduling policies: (a) MCT, (b) Max-min, and (c) Min-min. The simulation results show that the CFD strategy reduces the makespan by 14.28%, 20.37%, and 11.77%, respectively, compared with these three policies. Similarly, CFD reduces the execution cost by 1.27%, 5.3%, and 2.21%, respectively. With the CFD strategy, the SLA is not violated with regard to time and cost constraints, whereas the existing policies violate it numerous times.
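
For reference, the Min-min baseline that CFD is compared against can be sketched as follows: repeatedly pick the task whose earliest possible completion time across all VMs is smallest and assign it to that VM. This sketch treats tasks as independent, and the ETC (expected time to compute) values are illustrative assumptions.

```python
# Sketch of the Min-min scheduling heuristic over an ETC matrix.

def min_min(etc):
    """etc[t][m]: estimated time to compute task t on machine m.
    Returns (task -> machine assignments, per-machine ready times)."""
    n_machines = len(next(iter(etc.values())))
    ready = [0.0] * n_machines           # when each machine becomes free
    unscheduled, schedule = set(etc), {}
    while unscheduled:
        # For each task, find its best machine by completion time ...
        best = {t: min(range(n_machines),
                       key=lambda m: ready[m] + etc[t][m])
                for t in unscheduled}
        # ... then pick the task whose best completion time is smallest.
        task = min(unscheduled,
                   key=lambda t: ready[best[t]] + etc[t][best[t]])
        m = best[task]
        ready[m] += etc[task][m]
        schedule[task] = m
        unscheduled.remove(task)
    return schedule, ready

# Example: three independent tasks, two VMs
etc = {"t1": [4.0, 6.0], "t2": [3.0, 2.0], "t3": [9.0, 7.0]}
print(min_min(etc))   # makespan = max of the returned ready times
```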


Author(s):  
Hindol Bhattacharya ◽  
Matangini Chattopadhyay ◽  
Samiran Chattopadhay
