Self-adaptive fruit fly algorithm for multiple workflow scheduling in cloud computing environment

Kybernetes ◽  
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ambika Aggarwal ◽  
Priti Dimri ◽  
Amit Agarwal ◽  
Ashutosh Bhatt

Purpose In general, cloud computing is a model of on-demand business computing that grants convenient access to shared configurable resources over the internet. As the workload and the difficulty of the tasks submitted by cloud consumers grow, how to complete these tasks effectively and rapidly with limited cloud resources becomes a challenging question. The central aim of a task scheduling approach is to find a trade-off between user needs and resource utilization. However, tasks submitted by different users may have diverse requirements for computing time, memory space, data traffic, response time, etc. This paper aims to propose a new way of task scheduling. Design/methodology/approach To complete workflows efficiently and to reduce cost and flow time, this paper proposes a new task scheduling method in which a self-adaptive fruit fly optimization algorithm (SA-FFOA) is used for scheduling the workflow. The efficiency of the proposed multiple workflow scheduling model is compared against conventional methods through performance analysis, convergence analysis and statistical analysis. The outcome of these analyses demonstrates the superiority of the proposed approach in effective workflow scheduling. Findings The proposed algorithm is superior regarding flow time, achieving the minimum value: it improves over FFOA by 0.23%, differential evolution (DE) by 2.48%, artificial bee colony (ABC) by 2.85%, particle swarm optimization (PSO) by 2.46%, genetic algorithm (GA) by 2.33% and expected time to compute (ETC) by 2.56%. For makespan, the proposed algorithm is 0.28%, 0.15%, 0.38%, 0.20%, 0.21% and 0.29% better than FFOA, DE, ABC, PSO, GA and ETC, respectively. Moreover, the proposed model attains lower cost: 2.14% better than FFOA, 2.32% better than DE, 3.53% better than ABC, 2.43% better than PSO, 2.07% better than GA and 2.90% better than ETC. Originality/value This paper presents a new way of task scheduling that completes workflows efficiently while reducing cost and flow time. It is the first paper to use SA-FFOA for workflow scheduling.
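The abstract does not give pseudocode, so as a rough illustration only, here is a generic fruit-fly-style search skeleton with a self-adaptive step size; the function name, swarm size, and step-adaptation rule (grow on improvement, shrink on stall) are assumptions, not the authors' SA-FFOA:

```python
import random

def sa_ffoa_sketch(cost, dim, iters=200, swarm=20, seed=0):
    """Minimal fruit-fly-style search: random 'smell' moves around the
    best-known point; the step size adapts to the search progress."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dim)]
    best_cost = cost(best)
    step = 1.0
    for _ in range(iters):
        improved = False
        for _ in range(swarm):
            cand = [x + rng.uniform(-step, step) for x in best]
            c = cost(cand)
            if c < best_cost:  # "vision" phase: keep the best-smelling point
                best, best_cost, improved = cand, c, True
        # self-adaptive step: widen while improving, narrow when stalled
        step = step * 1.05 if improved else step * 0.9
    return best, best_cost
```

For workflow scheduling, `cost` would map a candidate schedule encoding to flow time, makespan or monetary cost.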

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Redwan A. Al-dilami ◽  
Ammar T. Zahary ◽  
Adnan Z. Al-Saqqaf

Issues of task scheduling in cloud computing centres are becoming more important, and cost is one of the most important parameters used for scheduling tasks. This study investigates the problem of online task scheduling of identified MapReduce jobs on cloud computing infrastructure. The virtualized cloud computing setup is assumed to comprise machines that host multiple identical virtual machines (VMs), which need to be activated early and run continuously; booting a VM requires a constant setup time. A VM that remains running even though it is no longer used is considered an idle VM. This study aims to distribute the idle cost of the VMs, rather than the cost of setting them up, among tasks in a fair manner. It extends previous studies that addressed the problems arising when distributing the idle cost and the setup cost of VMs among tasks. It classifies the tasks into three groups (long, mid and short) and distributes the idle cost among the groups and then among the tasks within each group. The main contribution of this paper is the development of a clairvoyant algorithm that addresses important factors such as the delay and the cost incurred while waiting for a VM to be set up (activated). Also, when VMs run continually and some of them become idle, the idle cost is distributed among the current tasks in a fair manner. In comparison with previous studies, the results show that the idle cost and the setup cost distributed among tasks were better than those distributed in the earlier studies.
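The abstract does not state the exact fairness rule, so the following is only a sketch of the group-then-task idea: the runtime thresholds for short/mid/long and the runtime-proportional shares are illustrative assumptions:

```python
def distribute_idle_cost(tasks, idle_cost):
    """Split an idle-VM cost across tasks. Assumed rule: classify tasks by
    runtime into short/mid/long groups, give each group a share proportional
    to its total runtime, then split each group's share proportionally
    among its own tasks. Thresholds (10, 100) are hypothetical."""
    groups = {"short": [], "mid": [], "long": []}
    for name, runtime in tasks:
        if runtime < 10:
            groups["short"].append((name, runtime))
        elif runtime < 100:
            groups["mid"].append((name, runtime))
        else:
            groups["long"].append((name, runtime))
    total = sum(r for _, r in tasks)
    shares = {}
    for members in groups.values():
        group_total = sum(r for _, r in members)
        group_cost = idle_cost * group_total / total if total else 0.0
        for name, runtime in members:
            shares[name] = group_cost * runtime / group_total
    return shares
```

The shares always sum to the full idle cost, so no cost is lost or double-charged.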


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Fanghai Gong

In recent years, cloud workflow task scheduling has been an important research topic in the business world. Cloud workflow task scheduling means that the workflow tasks submitted by users are allocated to appropriate computing resources for execution, and the corresponding fees are paid in real time according to resource usage. Most ordinary users are mainly concerned with two service quality indicators: workflow task completion time and execution cost. Therefore, how cloud service providers design a scheduling algorithm to optimize task completion time and cost is a very important issue. This paper studies workflow scheduling based on mobile cloud computing and machine learning, using literature research, experimental analysis and other methods. It examines mobile cloud computing, machine learning, task scheduling and related theories in depth, and establishes a workflow task scheduling system model based on mobile cloud computing and machine learning. The effects of different algorithms on task completion time, task service cost, task scheduling and resource usage, as well as the influence of different tasks on the experimental results, are analyzed from many aspects. The algorithm in this paper speeds up scheduling time by about 7% under different numbers of tasks and reduces scheduling cost by about 2% compared with other algorithms, showing clear optimization in both time scheduling and task scheduling.


2019 ◽  
Vol 37 (6/7) ◽  
pp. 1087-1111 ◽  
Author(s):  
Avinash Kumar Shrivastava ◽  
Nitin Sachdeva

Purpose Almost everything around us is the output of software-driven machines or works with software. Software firms work hard to meet users' requirements, but developing fault-free software is not possible, and due to market competition firms do not want to delay their software release. Early-released software, however, comes with the problem of users reporting more failures during operation because of the larger number of faults remaining in it. To overcome this situation, software firms nowadays release software after an adequate amount of testing, instead of delaying the release to develop highly reliable software, and then issue software patches post-release to make the software more reliable. The paper aims to discuss these issues. Design/methodology/approach The authors have developed a generalized framework, assuming that testing continues beyond software release, to determine the time to release and the time to stop testing the software. Because the testing team is not always skilled, the rate of fault detection and correction during testing may change over time; the team may also commit errors during development, increasing the number of faults. The proposed model therefore incorporates both factors. Further, the authors perform a sensitivity analysis based on the cost-modeling parameters to analyze their impact on the software testing and release policy. Findings From the proposed model, the authors found that it is better to release early and continue testing in the post-release phase. Using this model, firms can get the benefits of early release, while users get the benefit of post-release software reliability assurance. Originality/value The authors propose a generalized model for software release scheduling.
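The release-time trade-off can be sketched with a standard reliability-growth cost model; this is not the authors' generalized framework, and every parameter value below is hypothetical. It uses the classic exponential mean-value function m(t) = a(1 - e^(-bt)), with operational-phase fixes costed higher than testing-phase fixes:

```python
import math

def total_cost(T, a=100, b=0.1, c_test=1.0, c_op=5.0, c_time=0.5, T_end=100):
    """Illustrative release-time cost: faults fixed during testing cost
    c_test each, faults surfacing after release cost c_op each, and
    testing itself costs c_time per unit time. All values are assumed."""
    m = lambda t: a * (1 - math.exp(-b * t))   # expected faults found by t
    return c_test * m(T) + c_op * (m(T_end) - m(T)) + c_time * T

# scan candidate release times for the cheapest
best_T = min(range(0, 101), key=total_cost)
```

With these numbers the minimum lies at an interior release time: releasing too early leaves expensive operational faults, while testing too long accrues testing cost faster than it removes faults.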


2013 ◽  
Vol 66 (4) ◽  
pp. 513-519 ◽  
Author(s):  
Halim Cevizci

Drill cuttings are generally used in open pits and quarries as the most common stemming material, since they are readily available at blast sites. The plaster stemming method has been found to be better than the drill cuttings stemming method, owing to increased confinement inside the hole and better utilization of the explosive energy in the rock. The main advantage of the new stemming method is the reduction in the cost of blasting: at a limestone quarry, blasting costs per unit volume of rock were reduced by 7%, obtained by increasing burden and spacing distances. In addition, better fragmentation was obtained with the plaster stemming method. Blast trials showed that plaster stemming produced finer material than the conventional method: in the same blast tests, the proportion of +20 cm fragments was reduced to 42.6% of the total, compared with 48.7% for the conventional drill cuttings stemming method.


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mehmet Emin Yildiz ◽  
Yaman Omer Erzurumlu ◽  
Bora Kurtulus

Purpose The beta coefficient used for the cost of equity calculation is at the heart of the valuation process. This study conducts comparative analyses of the classical capital asset pricing model (CAPM) and downside CAPM risk parameters to gain further insight into which risk parameter leads to better-performing risk measures at explaining stock returns. Design/methodology/approach The study compares 16 risk measures at explaining the stock returns of 4,531 companies in 20 developed and 25 emerging market indices for 2000–2018. The analyses are conducted using both global and local indices and both USD and local currency returns. Calculated risk measures are analyzed in a panel data setup using a univariate model, and results are investigated in country-specific and model-specific subsets. Findings The results show that (1) downside betas are better than CAPM betas at explaining stock returns, (2) both risk measure groups perform better for emerging markets, (3) the global downside beta model performs better than the global beta model, implying the existence of a contagion effect, (4) high significance levels of total risk and unsystematic risk measures further support the shortfall of CAPM betas and (5) the higher correlation of markets after negative shocks such as pandemics puts the global CAPM-based downside beta in a more reliable position. Research limitations/implications The data are limited to index securities, as beta could be time-varying. Practical implications The results provide insight into the cost of equity calculation and the valuation of emerging market assets. Originality/value The framework and methodology enable comparison of CAPM and downside-CAPM risk measures at the firm level, at the global/local level and in terms of the level of market development.
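The study compares 16 risk measures whose exact definitions are not in the abstract; for concreteness, one widely used downside-beta definition (Estrada's, based on below-mean co-movements) can be computed like this, which may differ in detail from the variants tested in the paper:

```python
import statistics

def downside_beta(asset, market):
    """Estrada-style downside beta: co-movement of below-mean asset and
    market returns, scaled by the market's downside semi-variance."""
    mu_a = statistics.fmean(asset)
    mu_m = statistics.fmean(market)
    num = sum(min(a - mu_a, 0.0) * min(m - mu_m, 0.0)
              for a, m in zip(asset, market))
    den = sum(min(m - mu_m, 0.0) ** 2 for m in market)
    return num / den
```

By construction, an asset regressed on itself has a downside beta of exactly 1, just as with the classical CAPM beta.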


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Daeyong Jung ◽  
JongBeom Lim ◽  
JoonMin Gil ◽  
Eunyoung Lee ◽  
Heonchang Yu

Cloud computing is a computing paradigm that constitutes an advanced computing environment evolved from distributed computing, and it provides acquired computing resources in a pay-as-you-go manner. For example, Amazon EC2 offers Infrastructure-as-a-Service (IaaS) instances in three different ways, with different prices, reliability and performance. Our study is based on an environment using spot instances. Spot instances can significantly decrease costs compared to reserved and on-demand instances; however, they provide a less reliable environment than the other instance types. In this paper, we propose a workflow scheduling scheme that reduces the out-of-bid situation and consequently decreases the total task completion time. The simulation results reveal that, compared to various instance types, our scheme achieves a performance improvement of 12.76% in terms of an average combined metric over a workflow scheme that does not consider the processing rate. The cost of our scheme is higher than that of a low-performance instance and lower than that of a high-performance instance.
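The reliability/cost trade-off behind spot instances can be illustrated with a toy expected-makespan model; this is not the authors' scheme, and the field names, probabilities and penalty term below are all assumptions:

```python
def expected_makespan(work, rate, out_of_bid_prob, restart_penalty):
    """Toy model: a spot instance may be reclaimed (out-of-bid) mid-task,
    so its expected makespan is the base time plus the reclaim probability
    times an assumed restart penalty."""
    return work / rate + out_of_bid_prob * restart_penalty

def pick_instance(work, instances):
    """Pick the instance type with the lowest expected makespan."""
    return min(instances, key=lambda i: expected_makespan(
        work, i["rate"], i["p_out"], i["penalty"]))
```

The model captures the abstract's point: for small jobs the reliable instance wins, while for large jobs the faster but riskier spot instance pays off despite occasional restarts.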


2016 ◽  
Vol 116 (6) ◽  
pp. 1160-1177 ◽  
Author(s):  
Sen Liu ◽  
Yang Yang ◽  
Wen Guang Qu ◽  
Yuan Liu

Purpose – The purpose of this paper is to focus on the value creation potential of cloud computing in inter-firm partnerships. It examines how cloud-based IT infrastructure capabilities in flexibility and integration contribute to partnering agility and, consequently, firm performance. This study also introduces business lifecycle and market turbulence as internal and external context variables, respectively, to investigate the different roles of cloud computing in value creation. Design/methodology/approach – A questionnaire was used to collect data from 184 client firms of the largest cloud computing services provider in China (Alibaba Cloud). The theoretical model was tested using PLS analysis. Findings – Cloud infrastructure (CI) flexibility has a positive effect on partnering agility, while the effect of CI integration on partnering agility is moderated by business lifecycle and market turbulence. Research limitations/implications – The surveyed firms are all Alibaba Cloud clients, which may limit the generalization of the findings. Practical implications – The study suggests that besides the cost benefits, the value creation aspect of cloud computing should also be emphasized in research and practice. The study provides a new perspective to understand the business value of cloud computing in inter-firm partnerships. Originality/value – The study suggests that the flexibility-related and integration-related features of cloud computing can create value for firms by facilitating inter-firm collaboration in exploiting business opportunities.


Nowadays, with the huge development of information and computing technologies, cloud computing is becoming a highly scalable and widely used computing technology. It is based on pay-per-use, remote-access, Internet-based and on-demand concepts, providing customers with a shared pool of configurable resources. With the high volume of incoming user requests, task scheduling and resource allocation become major requirements for efficient and effective load balancing of a workload among cloud resources to enhance overall cloud system performance. For these reasons, various types of task scheduling algorithms have been introduced: traditional, heuristic and meta-heuristic. Heuristic task scheduling algorithms such as MET, MCT, Min-Min and Max-Min play an important role in solving the task scheduling problem. This paper proposes a new hybrid algorithm for the cloud computing environment based on two heuristic algorithms, Min-Min and Max-Min. To evaluate this algorithm, the CloudSim simulator has been used with different optimization parameters: makespan, average resource utilization, load balancing, average waiting time and concurrent execution of small-length and long-length tasks. The results show that the proposed algorithm outperforms both Min-Min and Max-Min on those parameters.
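The two base heuristics the hybrid builds on can be sketched in one routine (the hybridization rule itself is not described in the abstract, so only the standard Min-Min and Max-Min are shown; the task/machine representation is an assumption):

```python
def heuristic_schedule(tasks, machines, variant="min-min"):
    """Min-Min / Max-Min list scheduling.
    tasks: list of task lengths; machines: list of machine speeds.
    Each round, compute every pending task's earliest completion time on
    its best machine; Min-Min schedules the task with the smallest such
    minimum, Max-Min the task with the largest."""
    ready = [0.0] * len(machines)          # machine-available times
    schedule = []
    pending = list(enumerate(tasks))
    while pending:
        options = []
        for ti, length in pending:
            mi = min(range(len(machines)),
                     key=lambda m: ready[m] + length / machines[m])
            options.append((ready[mi] + length / machines[mi], ti, length, mi))
        ct, ti, length, mi = min(options) if variant == "min-min" else max(options)
        ready[mi] = ct
        schedule.append((ti, mi))
        pending = [(i, l) for i, l in pending if i != ti]
    return schedule, max(ready)            # assignments and makespan
```

On workloads mixing short and long tasks, Max-Min often yields a shorter makespan because the long tasks are placed first and the short ones fill the gaps; Min-Min tends to win when tasks are uniformly small. This complementarity is what motivates hybridizing the two.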


Author(s):  
S. Sharon Priya ◽  
K. M. Mehata ◽  
W. Aisha Banu

This paper proposes a fuzzy Manhattan distance-based similarity for gang formation of resources (FMDSGR) method with priority task scheduling in cloud computing. The proposed work decides which processor is to execute the current task in order to achieve efficient resource utilization and effective task scheduling. FMDSGR groups the resources into gangs based on the similarity of their resource characteristics so that the resources are used effectively. The tasks are then scheduled by priority within the gang of processors using gang-based priority scheduling (GPS), which mainly reduces the cost of deciding which processor executes the current task. Performance has been evaluated in terms of makespan, scheduling length ratio, speedup, efficiency and load balancing, using the CloudSim simulator toolkit to demonstrate experimental results in cloud computing environments.
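The abstract does not spell out the fuzzy membership details, so here is only a crisp sketch of the gang-formation idea: a Manhattan-distance similarity over normalized resource-characteristic vectors, with a greedy grouping rule and a threshold that are both illustrative assumptions:

```python
def manhattan_similarity(a, b):
    """Similarity in (0, 1] derived from Manhattan distance between
    normalized resource-characteristic vectors (e.g. MIPS, RAM, bandwidth)."""
    d = sum(abs(x - y) for x, y in zip(a, b))
    return 1.0 / (1.0 + d)

def form_gangs(resources, threshold=0.5):
    """Greedy grouping: a resource joins the first gang whose seed member
    it is sufficiently similar to, otherwise it starts a new gang."""
    gangs = []
    for name, vec in resources:
        for gang in gangs:
            if manhattan_similarity(vec, gang[0][1]) >= threshold:
                gang.append((name, vec))
                break
        else:
            gangs.append([(name, vec)])
    return gangs
```

Once gangs exist, a scheduler only has to pick a gang and then a processor within it, which is the cost reduction the paper targets.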

