Task Balanced Workflow Scheduling Technique considering Task Processing Rate in Spot Market

2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Daeyong Jung ◽  
JongBeom Lim ◽  
JoonMin Gil ◽  
Eunyoung Lee ◽  
Heonchang Yu

Cloud computing is a computing paradigm that constitutes an advanced computing environment evolved from distributed computing, and it provides computing resources in a pay-as-you-go manner. For example, Amazon EC2 offers Infrastructure-as-a-Service (IaaS) instances in three purchasing options that differ in price, reliability, and performance. Our study is based on an environment using spot instances. Spot instances can significantly decrease costs compared to reserved and on-demand instances; however, they provide a less reliable environment than the other instance types. In this paper, we propose a workflow scheduling scheme that reduces out-of-bid situations and, consequently, decreases the total task completion time. The simulation results reveal that, across various instance types, our scheme achieves an average improvement of 12.76% in a combined metric over a workflow scheme that does not consider the processing rate. However, the cost of our scheme is higher than that of a low-performance instance and lower than that of a high-performance instance.
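
As a minimal illustration of the idea above (weighing an instance type's processing rate against its price and its risk of going out of bid), the following sketch scores candidate spot instance types with a combined time/cost metric. The InstanceType fields, the failure model, and the weights are hypothetical, not the authors' algorithm.

```python
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    processing_rate: float   # work units per hour
    spot_price: float        # dollars per hour at the current spot price
    out_of_bid_prob: float   # assumed chance per hour of losing the instance

def expected_completion_time(task_size: float, inst: InstanceType) -> float:
    """Expected hours to finish a task, inflating the base time by the
    expected rework caused by out-of-bid interruptions (hypothetical model)."""
    base_time = task_size / inst.processing_rate
    return base_time * (1.0 + inst.out_of_bid_prob)

def pick_instance(task_size: float, candidates, time_weight=0.7, cost_weight=0.3):
    """Choose the instance type minimizing a weighted combination of
    expected completion time and expected cost."""
    def score(inst):
        t = expected_completion_time(task_size, inst)
        return time_weight * t + cost_weight * (t * inst.spot_price)
    return min(candidates, key=score)

candidates = [
    InstanceType("small",  processing_rate=10, spot_price=0.02, out_of_bid_prob=0.05),
    InstanceType("medium", processing_rate=25, spot_price=0.06, out_of_bid_prob=0.10),
    InstanceType("large",  processing_rate=60, spot_price=0.15, out_of_bid_prob=0.20),
]
print(pick_instance(task_size=120, candidates=candidates).name)
```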

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Fanghai Gong

In recent years, cloud workflow task scheduling has been an important research topic in the business world. Cloud workflow task scheduling means that the workflow tasks submitted by users are allocated to appropriate computing resources for execution, and the corresponding fees are paid in real time according to resource usage. Most ordinary users are mainly concerned with two service quality indicators: workflow task completion time and execution cost. Therefore, how cloud service providers design a scheduling algorithm that optimizes task completion time and cost is a very important issue. This paper studies workflow scheduling based on mobile cloud computing and machine learning, using literature research, experimental analysis, and other methods. It examines mobile cloud computing, machine learning, task scheduling, and related theory in depth, and establishes a workflow task scheduling system model based on mobile cloud computing and machine learning. The effects of different algorithms on task completion time, task service cost, task scheduling, and resource usage, as well as the influence of different tasks on the experimental results, are analyzed from several aspects. Compared with other algorithms, the proposed algorithm speeds up scheduling time by about 7% under different numbers of tasks and reduces scheduling cost by about 2%; it is clearly optimized in terms of time scheduling and task scheduling.
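
As a rough illustration of the time/cost trade-off discussed above (not the paper's machine-learning model), the sketch below greedily assigns each task to the VM that minimizes a weighted sum of finish time and execution cost; the weights and inputs are hypothetical.

```python
def greedy_schedule(task_lengths, vm_speeds, vm_prices, w_time=0.5, w_cost=0.5):
    """Assign each task to the VM minimizing a weighted sum of its finish
    time and cost. Returns (assignment, makespan, total_cost)."""
    vm_ready = [0.0] * len(vm_speeds)   # when each VM becomes free
    assignment, total_cost = [], 0.0
    for length in task_lengths:
        def score(i):
            run_time = length / vm_speeds[i]
            return w_time * (vm_ready[i] + run_time) + w_cost * run_time * vm_prices[i]
        best = min(range(len(vm_speeds)), key=score)
        run_time = length / vm_speeds[best]
        vm_ready[best] += run_time
        total_cost += run_time * vm_prices[best]
        assignment.append(best)
    return assignment, max(vm_ready), total_cost

print(greedy_schedule([40, 10, 25, 60], vm_speeds=[1.0, 2.0], vm_prices=[0.05, 0.12]))
```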


Author(s):  
Mais Haj Qasem ◽  
Alaa Abu-Srhan ◽  
Hutaf Natoureah ◽  
Esra Alzaghoul

Fog computing is a new network architecture and computing paradigm that uses user or near-user devices (the network edge) to carry out some processing tasks. Accordingly, it extends cloud computing with more flexibility than that found in ubiquitous networks. A smart city based on the concept of fog computing with a flexible hierarchy is proposed in this paper. The aim of the proposed design is to overcome the limitations of previous approaches, which depend on various network architectures such as cloud computing, autonomic network architecture, and ubiquitous network architecture. The proposed approach reduces the latency of data processing and transmission, enables real-time applications, distributes processing tasks over edge devices to reduce the cost of data processing, and allows collaborative data exchange among the applications of the smart city. The design is made up of five major layers, which can be increased or merged according to the amount of data processing and transmission in each application: the connection layer, real-time processing layer, neighborhood linking layer, main-processing layer, and data server layer. A case study of a novel smart public car parking, traveling, and direction advisor is implemented using IFogSim, and the results show that the design significantly reduces the delay of real-time applications as well as the cost and network usage compared to the cloud-computing paradigm. Moreover, although the proposed approach increases the scalability and reliability of user access, it does not sacrifice much time, cost, or network usage compared to a fixed fog-computing design.
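
A minimal sketch of the escalation idea behind such a layered design follows: a task is handled at the lowest layer with sufficient capacity and otherwise pushed up the hierarchy. The layer names come from the abstract; the capacities and placement rule are hypothetical, not the paper's design.

```python
# (layer name from the abstract, assumed processing capacity in work units)
LAYERS = [
    ("connection",            5),
    ("real-time processing",  20),
    ("neighborhood linking",  50),
    ("main-processing",       200),
    ("data server",           10_000),
]

def place_task(demand: int) -> str:
    """Return the lowest layer able to process a task of the given demand."""
    for name, capacity in LAYERS:
        if demand <= capacity:
            return name
    return "rejected"

for demand in (3, 40, 5000):
    print(demand, "->", place_task(demand))
```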


Author(s):  
Dazhong Wu ◽  
Xi Liu ◽  
Steve Hebert ◽  
Wolfgang Gentzsch ◽  
Janis Terpenny

Cloud computing is an innovative computing paradigm that can potentially bridge the gap between increasing computing demands in computer aided engineering (CAE) applications and limited scalability, flexibility, and agility in traditional computing paradigms. In light of the benefits of cloud computing, high performance computing (HPC) in the cloud has the potential to enable users to not only accelerate computationally expensive CAE simulations (e.g., finite element analysis), but also to reduce costs by utilizing on-demand and scalable cloud computing resources. The objective of this research is to evaluate the performance of running a large finite element simulation in a public cloud. Specifically, an experiment is performed to identify individual and interactive effects of several factors (e.g., CPU core count, memory size, solver computational rate, and input/output rate) on run time using statistical methods. Our experimental results have shown that the performance of HPC in the cloud is sufficient for the application of a large finite element analysis, and that run time can be optimized by properly selecting a configuration of CPU, memory, and interconnect.
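
As an illustration of how such individual and interactive effects can be estimated (a simplified stand-in for the paper's statistical analysis, assuming measured run times are available), the sketch below fits a log-linear model of run time on core count and memory size with an interaction term; the real study also considers solver computational rate and I/O rate.

```python
import numpy as np

def fit_runtime_model(cores, memory_gb, run_time):
    """Fit log(T) = b0 + b1*log(cores) + b2*log(mem) + b3*log(cores)*log(mem)
    by ordinary least squares. Negative b1 indicates speedup from added cores;
    the interaction term b3 captures how memory changes that slope."""
    cores = np.asarray(cores, dtype=float)
    mem = np.asarray(memory_gb, dtype=float)
    t = np.asarray(run_time, dtype=float)
    X = np.column_stack([np.ones_like(t), np.log(cores), np.log(mem),
                         np.log(cores) * np.log(mem)])
    coeffs, *_ = np.linalg.lstsq(X, np.log(t), rcond=None)
    return coeffs  # [intercept, core effect, memory effect, interaction]
```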


2012 ◽  
Vol 8 (4) ◽  
pp. 102 ◽  
Author(s):  
Claudia Canali ◽  
Riccardo Lancellotti

The recent growth in demand for modern applications combined with the shift to the Cloud computing paradigm has led to the establishment of large-scale cloud data centers. The increasing size of these infrastructures represents a major challenge in terms of monitoring and management of the system resources. Available solutions typically consider every Virtual Machine (VM) as a black box with independent characteristics, and face scalability issues by reducing the number of monitored resource samples, in most cases considering only average CPU usage sampled at a coarse time granularity. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs depending on their usage of multiple resources, both system- and network-related, assuming no knowledge of the services executed on them. This innovative methodology exploits the correlation between resource usage metrics to cluster similar VMs together. We evaluate the methodology through a case study with data coming from an enterprise data center, and we show that high performance may be achieved in automatic VM clustering. Furthermore, we estimate the reduction in the amount of data collected, showing that our proposal may simplify monitoring requirements and help administrators take decisions on the resource management of cloud computing data centers.
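
A minimal sketch of the correlation-based clustering idea follows; the feature construction (upper triangle of each VM's inter-metric correlation matrix) and the use of k-means are illustrative assumptions, not necessarily the authors' exact methodology.

```python
import numpy as np
from sklearn.cluster import KMeans

def vm_feature_vector(usage: np.ndarray) -> np.ndarray:
    """usage: (num_samples, num_metrics) time series for one VM, e.g. columns
    for CPU, memory, disk I/O, and network traffic. The feature vector is the
    upper triangle of the pairwise correlation matrix between metrics."""
    corr = np.corrcoef(usage, rowvar=False)
    upper = np.triu_indices_from(corr, k=1)
    return corr[upper]

def cluster_vms(vm_usage_list, n_clusters=3):
    """Group VMs whose resources are correlated in similar ways,
    without any knowledge of the services they run."""
    features = np.vstack([vm_feature_vector(u) for u in vm_usage_list])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
```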


Author(s):  
Toan Phan Thanh ◽  
Loc Nguyen The ◽  
Said Elnaffar ◽  
Cuong Nguyen Doan ◽  
Huu Dang Quoc

The Cloud is a computing platform that provides on-demand access to a shared pool of configurable resources such as networks, servers, and storage that can be rapidly provisioned and released with minimal management effort from clients. At its core, Cloud computing focuses on maximizing the effectiveness of the shared resources. Therefore, workflow scheduling is one of the challenges that the Cloud must tackle, especially when a large number of tasks are executed on geographically distributed servers. This entails the need for an effective scheduling algorithm that minimizes task completion time (makespan). Although workflow scheduling has been the focus of many researchers, only a handful of efficient solutions have been proposed for Cloud computing. In this paper, we propose LPSO, a novel algorithm for the workflow scheduling problem based on the Particle Swarm Optimization method. Our proposed algorithm not only ensures fast convergence but also prevents getting trapped in local extrema. We ran realistic scenarios using CloudSim and found that LPSO is superior to previously proposed algorithms, with a negligible deviation between the solution found by LPSO and the optimal solution.
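
As a simplified illustration of PSO applied to task-to-VM mapping (ignoring task precedence constraints and the local-extremum-avoidance mechanism that distinguishes LPSO), a sketch follows; all parameter values are hypothetical.

```python
import random

def makespan(mapping, task_lengths, vm_speeds):
    """Completion time of the most loaded VM for a task-to-VM mapping."""
    load = [0.0] * len(vm_speeds)
    for task, vm in enumerate(mapping):
        load[vm] += task_lengths[task] / vm_speeds[vm]
    return max(load)

def pso_schedule(task_lengths, vm_speeds, particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    decode = lambda pos: [min(n_vms - 1, max(0, int(x))) for x in pos]
    X = [[random.uniform(0, n_vms) for _ in range(n_tasks)] for _ in range(particles)]
    V = [[0.0] * n_tasks for _ in range(particles)]
    pbest = [x[:] for x in X]
    pbest_val = [makespan(decode(x), task_lengths, vm_speeds) for x in X]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for p in range(particles):
            for d in range(n_tasks):
                r1, r2 = random.random(), random.random()
                V[p][d] = (w * V[p][d]
                           + c1 * r1 * (pbest[p][d] - X[p][d])
                           + c2 * r2 * (gbest[d] - X[p][d]))
                X[p][d] = min(n_vms - 1e-9, max(0.0, X[p][d] + V[p][d]))
            val = makespan(decode(X[p]), task_lengths, vm_speeds)
            if val < pbest_val[p]:
                pbest[p], pbest_val[p] = X[p][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[p][:], val
    return decode(gbest), gbest_val

print(pso_schedule([30, 10, 25, 60, 15], vm_speeds=[1.0, 2.0, 4.0]))
```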


Author(s):  
Jennifer S. Raj

Edge computing is a new computing paradigm that is rapidly emerging in various fields. In several conventional applications, task completion is performed by various edge devices together with distributed cloud computing. Resource limitations, transmission efficiency, functionality, and other edge-network-based circumstantial factors make this system more complex than cloud computing. During cooperation between the edge devices, an instability occurs that cannot be ignored. The edge cooperative network is optimized with a novel framework proposed in this paper, which helps in improving the efficiency of edge computing tasks. The cooperation evaluation metrics are defined in the initial stage. Further, the performance of specific tasks is improved by optimizing edge network cooperation. Real datasets obtained from elderly people and their wearable sensors are used to demonstrate the performance of the proposed framework. The extensive experimentation also helps in validating the efficiency of the proposed optimization algorithm.


2013 ◽  
Vol 303-306 ◽  
pp. 1391-1394
Author(s):  
Jing Liu ◽  
Xing Guo Luo ◽  
Bai Nan Li

Cloud computing is a new computing and business paradigm with a flexible and powerful computational architecture that offers universal services to users via the Internet. The performance of the scheduling system influences the cost benefit of this computing paradigm, so jobs should be scheduled efficiently to reduce execution cost and time. In this paper, we present an intelligent scheduling system that considers both the requirements of different service requests and the circumstances of the computing infrastructure, which consists of various resources. The main components of the system are then introduced in detail; finally, conclusions are drawn and further research directions for scheduling systems are pointed out.
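
A minimal, hypothetical sketch of the core matching step such a system implies (pairing a request's stated requirements with a resource that satisfies them) is shown below; it is not the paper's system, and the field names are illustrative.

```python
def match_request(request, machines):
    """request: dict with 'cpu' and 'mem' requirements.
    machines: list of dicts with 'name', 'cpu', 'mem', 'price_per_hour'.
    Returns the cheapest machine that satisfies the request, or None."""
    feasible = [m for m in machines
                if m["cpu"] >= request["cpu"] and m["mem"] >= request["mem"]]
    if not feasible:
        return None  # the request must wait or be rejected
    return min(feasible, key=lambda m: m["price_per_hour"])

machines = [
    {"name": "vm-a", "cpu": 2, "mem": 4,  "price_per_hour": 0.05},
    {"name": "vm-b", "cpu": 8, "mem": 32, "price_per_hour": 0.40},
]
print(match_request({"cpu": 4, "mem": 8}, machines))
```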


Kybernetes ◽  
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ambika Aggarwal ◽  
Priti Dimri ◽  
Amit Agarwal ◽  
Ashutosh Bhatt

Purpose In general, cloud computing is a model of on-demand business computing that grants convenient access to shared configurable resources over the internet. With the increasing workload and difficulty of the tasks submitted by cloud consumers, how to complete these tasks effectively and rapidly with limited cloud resources is becoming a challenging question. The major point of a task scheduling approach is to identify a trade-off between user needs and resource utilization; however, tasks submitted by different users may have diverse needs in terms of computing time, memory space, data traffic, response time, etc. This paper proposes a new way of task scheduling. Design/methodology/approach To complete workflows efficiently and to reduce cost and flow time, this paper proposes a new way of task scheduling. Here, a self-adaptive fruit fly optimization algorithm (SA-FFOA) is used for scheduling the workflow. The proposed multiple workflow scheduling model is compared with conventional methods in terms of performance analysis, convergence analysis, and statistical analysis; the outcomes of these analyses demonstrate the advantage of the proposed approach for effective workflow scheduling. Findings The proposed algorithm is superior regarding flow time, achieving the minimum value; it improves over FFOA by 0.23%, differential evolution (DE) by 2.48%, artificial bee colony (ABC) by 2.85%, particle swarm optimization (PSO) by 2.46%, genetic algorithm (GA) by 2.33%, and expected time to compute (ETC) by 2.56%. For makespan, the proposed algorithm is 0.28%, 0.15%, 0.38%, 0.20%, 0.21%, and 0.29% better than the conventional methods FFOA, DE, ABC, PSO, GA, and ETC, respectively. Moreover, the proposed model attains a lower cost, which is 2.14% better than FFOA, 2.32% better than DE, 3.53% better than ABC, 2.43% better than PSO, 2.07% better than GA, and 2.90% better than ETC. Originality/value This paper presents a new way of task scheduling that completes workflows efficiently while reducing cost and flow time. This is the first paper to use SA-FFOA for workflow scheduling.
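
The sketch below illustrates the general flavor of a population search with a self-adaptive step size, in the spirit of, but not identical to, the SA-FFOA described above; the adaptation rule, bounds, and parameters are hypothetical.

```python
import random

def self_adaptive_search(fitness, dim, iters=200, population=25,
                         step=1.0, shrink=0.9, grow=1.1):
    """Scatter candidates around the current best solution; shrink the step
    when the best improves (refine locally) and grow it when the search
    stalls (explore more widely). Lower fitness is better."""
    best = [random.uniform(0, 1) for _ in range(dim)]
    best_val = fitness(best)
    for _ in range(iters):
        candidates = [[b + random.uniform(-step, step) for b in best]
                      for _ in range(population)]
        cand_vals = [fitness(c) for c in candidates]
        i = cand_vals.index(min(cand_vals))
        if cand_vals[i] < best_val:
            best, best_val = candidates[i], cand_vals[i]
            step *= shrink
        else:
            step = min(step * grow, 5.0)
    return best, best_val

# Toy usage: minimize a sphere function as a stand-in for a makespan/cost fitness.
print(self_adaptive_search(lambda v: sum(x * x for x in v), dim=5))
```

For workflow scheduling, the candidate vector would encode a task-to-VM assignment and the fitness would be the makespan, flow time, or cost of that assignment.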


Cloud computing can be defined as a computing paradigm in which various systems and large resource pools are connected to each other in private or public networks. The aim is to provide a dynamically scalable infrastructure for applications, data, and file storage. Cloud computing reduces the cost of computation and application hosting, so that content storage and delivery services are handled faster and more flexibly. Load balancing is one of the challenges that affect the performance of cloud computing, and overcoming it leads to better resource utilization and response time. The service broker policy plays an important role in accelerating the response time of customer requests by locating data centers or optimizing the pattern of access to them. This paper investigates the effectiveness of different algorithms and approaches for improving the performance of cloud computing, and shows that performance can be increased by relying on certain criteria described in this paper. The results presented in this paper were obtained using the Cloud Analyst simulator, which allows configuring the time duration, load balancing algorithms, service broker algorithms, etc.
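
As a minimal illustration of the service broker idea mentioned above (routing each request to the data center that serves its region fastest), a sketch follows; the region names and latency values are placeholders, not measured results.

```python
# user region -> {data center -> assumed network latency in ms}
REGION_LATENCY_MS = {
    "eu":   {"dc-eu": 20,  "dc-us": 110, "dc-asia": 220},
    "us":   {"dc-eu": 110, "dc-us": 25,  "dc-asia": 180},
    "asia": {"dc-eu": 220, "dc-us": 180, "dc-asia": 30},
}

def route_request(user_region: str) -> str:
    """Closest-data-center broker: pick the data center with lowest latency."""
    latencies = REGION_LATENCY_MS[user_region]
    return min(latencies, key=latencies.get)

for region in ("eu", "us", "asia"):
    print(region, "->", route_request(region))
```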

