minimum completion time
Recently Published Documents

TOTAL DOCUMENTS: 27 (five years: 11)
H-INDEX: 6 (five years: 2)

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1320
Author(s):  
Vijay Prakash ◽  
Seema Bawa ◽  
Lalit Garg

Workflow scheduling is one of the significant issues for scientific applications, alongside virtual machine migration, database management, security, performance, fault tolerance, and server consolidation. In this paper, existing time-based scheduling algorithms, such as first come first serve (FCFS), min–min, max–min, and minimum completion time (MCT), are considered along with the dependency-based scheduling algorithm MaxChild. Time-based scheduling algorithms compare only the burst times of tasks: based on burst time, these schedulers map the sub-tasks of an application onto suitable virtual machines according to their scheduling criteria, paying little attention to proper utilization of the resources. A novel dependency- and time-based scheduling algorithm is proposed that considers parent-to-child (P2C) node dependencies, child-to-parent node dependencies, and the execution times of the tasks in the workflows. The proposed P2C algorithm emphasizes proper utilization of the resources and overcomes the limitations of the time-based schedulers. Scientific applications such as CyberShake, Montage, Epigenomics, Inspiral, and SIPHT are represented as workflows, in which nodes represent tasks and edges represent the dependencies between them. All results were validated in a simulation environment built with the WorkflowSim simulator for the cloud. The proposed approach outperforms the mentioned time- and dependency-based scheduling algorithms in terms of total execution time by utilizing resources efficiently.
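As a point of reference for the time-based schedulers compared above, here is a minimal sketch of the minimum completion time (MCT) heuristic. The task and VM model (burst lengths, relative VM speeds) is an illustrative assumption, not the paper's implementation:

```python
# Sketch of the MCT heuristic: each task is assigned, in arrival order,
# to the virtual machine on which it would finish earliest.
# Task/VM model is illustrative, not taken from the paper.
def mct_schedule(burst_times, vm_speeds):
    """Assign tasks to VMs; returns a list of (task_index, vm_index)."""
    ready_time = [0.0] * len(vm_speeds)      # when each VM becomes free
    assignment = []
    for task, burst in enumerate(burst_times):
        # Completion time on each VM = VM ready time + run time on that VM.
        completions = [ready_time[vm] + burst / speed
                       for vm, speed in enumerate(vm_speeds)]
        best_vm = min(range(len(vm_speeds)), key=completions.__getitem__)
        ready_time[best_vm] = completions[best_vm]
        assignment.append((task, best_vm))
    return assignment

# Example: five tasks (burst lengths) on two VMs (relative speeds).
print(mct_schedule([40, 10, 30, 20, 50], [1.0, 2.0]))
```

Note that MCT, like the other time-based schedulers, never consults task dependencies, which is exactly the gap the P2C algorithm targets.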


2021 ◽  
Vol 50 (1) ◽  
pp. 5-12
Author(s):  
Hani Alquhayz ◽  
Mahdi Jemmali

This paper focuses on maximizing the minimum completion time on identical parallel processors: given a set of jobs to be assigned to several identical parallel processors, the objective of this maximization is to ensure a fair distribution of load. The problem is NP-hard. The work rests essentially on comparing the proposed heuristics with others cited in the literature. Our heuristics are built chiefly on a randomization method and on iterative use of the knapsack problem to solve the studied problem. The heuristics are assessed on numerous instances presented in the experimental results, which show that the knapsack-based heuristic matches the performance of the best heuristic from the literature while running faster.
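The paper's heuristics combine randomization with iterative use of the knapsack problem; as a simpler illustration of the max-min (machine covering) objective itself, here is a hedged greedy sketch that gives the longest remaining job to the currently least-loaded processor:

```python
import heapq

def greedy_max_min_load(jobs, m):
    """Greedy sketch for max-min fairness on m identical processors:
    give each job (longest first) to the least-loaded processor, which
    tends to raise the minimum completion time. Illustrative only; the
    paper's knapsack-based heuristics are more elaborate."""
    loads = [(0.0, p) for p in range(m)]     # (load, processor) min-heap
    heapq.heapify(loads)
    assignment = {p: [] for p in range(m)}
    for job in sorted(jobs, reverse=True):
        load, p = heapq.heappop(loads)
        assignment[p].append(job)
        heapq.heappush(loads, (load + job, p))
    return min(load for load, _ in loads), assignment

min_load, plan = greedy_max_min_load([7, 5, 4, 3, 3, 2], m=2)
print(min_load, plan)   # minimum load 12.0; both processors finish at 12
```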


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahfooz Alam ◽  
Mahak ◽  
Raza Abbas Haidri ◽  
Dileep Kumar Yadav

Purpose: Cloud users can access services at any time from anywhere in the world. On average, Google now processes more than 40,000 searches every second, approximately 3.5 billion searches per day. Diverse and vast amounts of data are generated by next-generation information technologies such as cryptocurrency, the internet of things, and big data. Executing such applications requires an efficient scheduling algorithm that considers quality-of-service parameters such as utilization, makespan, and response time. This paper therefore proposes a novel Efficient Static Task Allocation (ESTA) algorithm, which optimizes average utilization.

Design/methodology/approach: Cloud computing provides resources such as virtual machines, network, and storage over the internet under a pay-per-use billing model. To achieve efficient task allocation, the scheduling problem is tackled through efficient distribution of tasks over the resources. The ESTA algorithm is based on the minimum completion time approach: it intelligently maps a batch of independent tasks (cloudlets) onto heterogeneous virtual machines and optimizes their utilization in infrastructure-as-a-service cloud computing.

Findings: To evaluate the performance of ESTA, a simulation study compares it with Min-Min, load balancing strategy with migration cost, longest job in the fastest resource-shortest job in the fastest resource, sufferage, minimum completion time (MCT), minimum execution time, and opportunistic load balancing in terms of makespan, utilization, and response time.

Originality/value: The simulation results reveal that the ESTA algorithm consistently performs better under varying numbers of independent cloudlets and virtual machines in the test conditions.
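For concreteness, the small helper below computes the three quality-of-service metrics the study reports (makespan, average utilization, response time) from per-VM and per-task finish times. The formulas are the standard textbook ones and are not claimed to be ESTA's exact definitions:

```python
def qos_metrics(vm_finish_times, task_finish_times):
    """Standard scheduling metrics (definitions may differ from ESTA's).

    makespan            -- time at which the last VM finishes
    average utilization -- mean of VM busy time / makespan, assuming each
                           VM works without idle gaps until it finishes
    average response    -- mean task finish time (zero release times assumed)
    """
    makespan = max(vm_finish_times)
    avg_util = sum(t / makespan for t in vm_finish_times) / len(vm_finish_times)
    avg_response = sum(task_finish_times) / len(task_finish_times)
    return makespan, avg_util, avg_response

print(qos_metrics(vm_finish_times=[90, 70, 85],
                  task_finish_times=[20, 45, 70, 85, 90]))
```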


2020 ◽  
Vol 37 ◽  
pp. 59-68
Author(s):  
Maheta Ashish ◽  
Samrat V.O. Khanna

Cloud computing provides resource allocation, whereby the cloud resource provider is responsible to the cloud consumers. The main objective of the resource manager is to assign resources dynamically to the tasks in execution while measuring response time, execution cost, resource utilization, and system performance. The resource manager optimizes the resources, measures the completion time of each assigned resource, and aims to execute tasks in a way that achieves minimum completion time. Virtualization techniques are mandatory for allocating resources dynamically according to users' needs, and green computing techniques are involved to make better use of the servers. Skewness is used to enhance quality of service through various parameters. The proposed algorithms allocate cloud resources according to the users' requirements; their advantage is improved analysis of CPU utilization and reduced memory usage.
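The abstract leans on a skewness measure of resource usage without defining it; one common formulation from the dynamic resource allocation literature (offered here as an assumption, since the paper's exact definition is not given) quantifies how unevenly a server's resources are utilized:

```python
from math import sqrt

def skewness(utilizations):
    """Unevenness of a server's per-resource utilizations (CPU, memory,
    network, ...): sqrt(sum((u_i / u_mean - 1)^2)). Lower is more even.
    A common formulation; this paper's definition may differ."""
    mean = sum(utilizations) / len(utilizations)
    return sqrt(sum((u / mean - 1) ** 2 for u in utilizations))

print(skewness([0.8, 0.2, 0.5]))   # uneven server -> approx. 0.849
print(skewness([0.5, 0.5, 0.5]))   # perfectly even -> 0.0
```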


Author(s):  
Mekonnen Redi ◽  
Mohammad Ikram

The traditional dimensionality reduction methods can generally be classified into Feature Extraction (FE) and Feature Selection (FS) approaches. Classical FE algorithms are in turn classified into linear and nonlinear algorithms. Linear algorithms such as Principal Component Analysis (PCA) project high-dimensional data onto a lower-dimensional space by linear transformations according to certain criteria; the central idea of PCA is to reduce the dimensionality of a data set consisting of a large number of variables. In this paper, PCA was used to reduce the dimension of flow shop scheduling problems. This mathematical procedure transforms a number of (possibly) correlated jobs into a smaller number of uncorrelated jobs, called principal components, which are linear combinations of the original jobs. These jobs are carefully determined so that, from the solution of the reduced problem, multiple solutions of the original high-dimensional problem can readily be obtained, or completely characterized, without actually listing the optimal solution(s). The results show that by fixing only some critical jobs at the beginning and end of the sequence using Johnson's method, the remaining jobs can be arranged in an arbitrary order in the remaining gap without violating the optimality condition that guarantees minimum completion time. In this regard, Johnson's method was relaxed by terminating the listing of jobs at the first/last available positions once the job with minimum processing time on either machine attains, for the first time, the highest processing time on the other machine. By terminating Johnson's algorithm at this early stage, the method saves the time needed for sequencing those jobs that can be left arbitrary, and by allowing these jobs to be arranged in arbitrary order it gives sequencing freedom to job operators without affecting the minimum completion time. The results of the study were originally obtained for deterministic scheduling problems, but they are shown to carry over to test problems randomly generated from a uniform distribution with given lower and upper bounds and from a normal distribution with given mean and standard deviation.
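For reference, a compact sketch of the classical (unrelaxed) Johnson's rule that the relaxation above builds on; the early-termination variant described in the abstract is not implemented here:

```python
def johnsons_rule(jobs):
    """Johnson's rule for the two-machine flow shop (F2 || Cmax).

    jobs: list of (p1, p2) processing times on machines M1 and M2.
    Returns job indices in an order minimizing total completion time.
    """
    front, back = [], []
    for i, (p1, p2) in enumerate(jobs):
        if p1 < p2:
            front.append((p1, i))    # schedule as early as possible
        else:
            back.append((p2, i))     # schedule as late as possible
    front.sort()                      # ascending in p1
    back.sort(reverse=True)           # descending in p2
    return [i for _, i in front] + [i for _, i in back]

print(johnsons_rule([(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]))  # [2, 0, 3, 4, 1]
```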


Author(s):  
D Suresh Kumar ◽  
R Jagadeesh Kannan

Multi-tenancy is an essential feature of cloud computing and a major component for achieving scalability and energy-efficient operation with a high level of economic benefit. As cloud computing gains a larger audience and user base, scheduling computational workflows for multi-tenant clouds is becoming a difficult task. In this study, we present a learning-based scheduler for catering to heterogeneous software and hardware resources in the context of multi-tenant cloud computing. The experimentation was carried out with the help of the green cloud simulator, and the results are compared with state-of-the-art techniques such as minimum completion time, first come first serve, and backfilling. The experimental results show that the presented algorithm provides an effective means of utilizing cloud resources along with a drastic reduction in the cost of operation.
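The abstract does not disclose which learner the scheduler uses, so the sketch below is purely a hypothetical illustration of the idea: an epsilon-greedy bandit that learns, per virtual machine, which choice tends to yield the lowest completion time. All names and parameters are assumptions:

```python
import random

class EpsilonGreedyScheduler:
    """Toy learning-based scheduler (hypothetical; not the paper's method):
    explore a random VM with probability epsilon, otherwise exploit the VM
    with the lowest average observed completion time."""

    def __init__(self, n_vms, epsilon=0.1):
        self.epsilon = epsilon
        self.avg_time = [0.0] * n_vms    # running mean completion time per VM
        self.count = [0] * n_vms

    def pick_vm(self):
        if random.random() < self.epsilon or 0 in self.count:
            return random.randrange(len(self.count))    # explore / warm up
        return min(range(len(self.count)), key=self.avg_time.__getitem__)

    def feedback(self, vm, completion_time):
        # Incremental update of the running mean for the chosen VM.
        self.count[vm] += 1
        self.avg_time[vm] += (completion_time - self.avg_time[vm]) / self.count[vm]

# Usage: pick a VM, run the task, report the observed completion time.
sched = EpsilonGreedyScheduler(n_vms=3)
vm = sched.pick_vm()
sched.feedback(vm, completion_time=4.2)
```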


Author(s):  
Sudha Narang ◽  
Puneet Goswami ◽  
Anurag Jain

Background: The field of cloud computing has been evolving for over a decade now. Load balancing, an important component of cloud computing, implies scheduling of cloudlets (tasks) on virtual machines. Since this is an NP-hard problem, various heuristics for load balancing have been proposed in the research literature. These heuristics have been categorized, simulated, and benchmarked in various ways; however, the information is scattered across many review articles.

Objective: This review aims to bring a broad range of load balancing heuristics found in the research literature under one umbrella. It includes a comprehensive list of heuristics, a holistic set of criteria for their classification, and some key performance metrics and simulation tools used for their benchmarking. An illustration of fair and comprehensive comparison of heuristics is provided using CloudSim Plus, a recent and advanced simulation tool.

Method: The simulations performed with CloudSim Plus employ a generic model of task and machine heterogeneity, with Poisson arrival of cloudlets and exponentially distributed cloudlet lengths, to emulate actual cloud-computing scenarios. The simulation results in terms of key performance metrics are used to compare four centralized load balancing heuristics: Join Shortest Queue (JSQ), Join Idle Queue (JIQ), Round Robin, and Minimum Completion Time (MCT).
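A condensed sketch of that simulation set-up follows: Poisson arrivals, exponentially distributed cloudlet lengths, and two of the four dispatchers (JSQ approximated by least work left, plus Round Robin). All parameter values are illustrative, not the article's:

```python
import random

def simulate(dispatch, n_vms=4, n_cloudlets=10_000,
             arrival_rate=3.0, mean_length=1.0, seed=1):
    """Each VM serves its cloudlets in FIFO order; returns mean response time."""
    rng = random.Random(seed)
    free_at = [0.0] * n_vms                 # when each VM drains its queue
    now, total_response = 0.0, 0.0
    for i in range(n_cloudlets):
        now += rng.expovariate(arrival_rate)          # Poisson arrivals
        length = rng.expovariate(1.0 / mean_length)   # exponential length
        vm = dispatch(i, free_at, now)
        start = max(now, free_at[vm])
        free_at[vm] = start + length
        total_response += free_at[vm] - now
    return total_response / n_cloudlets

# JSQ approximated by least work left; Round Robin cycles through VMs.
jsq = lambda i, free_at, now: min(range(len(free_at)),
                                  key=lambda v: max(free_at[v] - now, 0.0))
rr = lambda i, free_at, now: i % len(free_at)
print("JSQ:", simulate(jsq), "RR:", simulate(rr))
```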


Author(s):  
Xiaojin Ma ◽  
Honghao Gao ◽  
Huahu Xu ◽  
Minjie Bian

Large-scale applications of the Internet of Things (IoT), which require considerable computing and storage resources, are increasingly deployed in cloud environments. Compared with the traditional computing model, characteristics of the cloud such as pay-as-you-go pricing, unlimited expansion, and dynamic acquisition offer distinct conveniences to applications built on the IoT architecture. One of the major challenges is to satisfy quality-of-service requirements while assigning resources to tasks. In this paper, we propose a deadline- and cost-aware scheduling algorithm that minimizes the execution cost of a workflow under deadline constraints in the infrastructure-as-a-service (IaaS) model. Considering virtual machine (VM) performance variation and acquisition delay, we first divide tasks into levels according to the topological structure so that no dependency exists between tasks at the same level. Three strings are used to encode the genes in the proposed algorithm to better reflect the heterogeneous and resilient characteristics of cloud environments. HEFT is then used to generate individuals with the minimum completion time and cost, and novel crossover and mutation schemes are developed to increase the diversity of the solutions. On this basis, a task scheduling method that considers both cost and deadlines is proposed. Experiments on workflows that simulate structured IoT tasks demonstrate that our algorithm achieves a high success rate and performs well compared with state-of-the-art algorithms.
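The levelling step mentioned above has a standard realization: compute each task's level as the longest dependency path from an entry task, so that tasks sharing a level are independent. A minimal sketch (data layout assumed, not the paper's code):

```python
from collections import defaultdict, deque

def level_tasks(dependencies, n_tasks):
    """Divide workflow tasks into levels so that no dependency exists
    between tasks at the same level; level = longest path from an entry."""
    indeg = [0] * n_tasks
    children = defaultdict(list)
    for parent, child in dependencies:
        children[parent].append(child)
        indeg[child] += 1
    level = [0] * n_tasks
    queue = deque(t for t in range(n_tasks) if indeg[t] == 0)
    while queue:                       # Kahn's topological traversal
        t = queue.popleft()
        for c in children[t]:
            level[c] = max(level[c], level[t] + 1)
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return level

# Diamond workflow 0 -> {1, 2} -> 3 yields levels [0, 1, 1, 2].
print(level_tasks([(0, 1), (0, 2), (1, 3), (2, 3)], n_tasks=4))
```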


2019 ◽  
Vol 8 (3) ◽  
pp. 1863-1870 ◽  

Resource allocation (RA) is a significant aspect of cloud computing. The cloud resource manager is responsible for assigning available resources to tasks for execution in a way that improves system performance, reduces response time, lessens makespan, and utilizes resources efficiently. Fulfilling these objectives requires an effective task scheduling algorithm. The standard Max-Min and Min-Min task scheduling algorithms do not produce good makespan or effective resource utilization. In this paper, a Resource-Aware Min-Min (RAMM) algorithm is proposed based on the basic Min-Min algorithm. The RAMM algorithm selects the task with the shortest execution time and assigns it to the resource with the shortest completion time; if the minimum completion time resource is busy, the algorithm selects the next minimum completion time resource to reduce task waiting time and improve resource utilization. Experimental results show that the RAMM algorithm produces better makespan and load balance than the Max-Min, Min-Min, and improved Max-Min algorithms.
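A single selection step of that idea can be sketched as follows (the data layout and helper are assumptions for illustration, not the paper's code):

```python
def ramm_pick(exec_time, ready, busy):
    """One Resource-Aware Min-Min step, as described in the abstract.

    exec_time[t][r] -- execution time of unscheduled task t on resource r
    ready[r]        -- when resource r finishes its current load
    busy[r]         -- whether resource r is busy right now
    Returns (task, resource) for the next assignment.
    """
    n_res = len(ready)
    # Min-Min step: the task whose best completion time is smallest.
    t = min(range(len(exec_time)),
            key=lambda t: min(ready[r] + exec_time[t][r] for r in range(n_res)))
    # Resources ordered by completion time for that task; prefer a free one.
    order = sorted(range(n_res), key=lambda r: ready[r] + exec_time[t][r])
    r = next((r for r in order if not busy[r]), order[0])
    return t, r

# Resource 0 is busy, so the short task is diverted to resource 1.
print(ramm_pick(exec_time=[[2, 3], [6, 5]], ready=[0.0, 1.0], busy=[True, False]))
```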

