Preemptive Scheduling for Two-Processor Systems

1988 ◽  
Vol 11 (1) ◽  
pp. 1-19
Author(s):  
Andrzej Rowicki

The purpose of the paper is to consider an algorithm for preemptive scheduling on two-processor systems with identical processors. Computations submitted to the system are composed of dependent tasks with arbitrary execution times; the task graphs contain no loops and have only one output. We assume that preemption times are completely unconstrained and that preemptions consume no time. Moreover, the algorithm determines the total execution time of the computation. It has been proved that this algorithm is optimal, that is, that the total execution time of the computation (schedule length) is minimized.
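The optimality claim is easiest to see in the special case of independent tasks, where McNaughton's wrap-around rule achieves the minimum schedule length max(longest task, total work / 2) on two identical processors, with at most one preemption at each processor boundary. A minimal sketch, assuming independent tasks (a simplification for illustration; the paper itself handles dependent task graphs):

```python
def mcnaughton_two_proc(times):
    """McNaughton's wrap-around rule for m = 2 identical processors.

    Returns (makespan, schedule), where schedule maps each processor
    to (task, start, end) triples; a task split across the wrap
    boundary is preempted exactly once and costs no extra time.
    """
    m = 2
    makespan = max(max(times), sum(times) / m)
    schedule = {0: [], 1: []}
    proc, t = 0, 0.0
    for task, p in enumerate(times):
        remaining = p
        while remaining > 1e-12:
            run = min(remaining, makespan - t)
            schedule[proc].append((task, t, t + run))
            remaining -= run
            t += run
            if t >= makespan - 1e-12:  # wrap to the next processor
                proc, t = proc + 1, 0.0
    return makespan, schedule
```

For tasks of lengths 3, 3, and 2, total work is 8, so the schedule length is max(3, 8/2) = 4, with the second task preempted once at the boundary.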

2021 ◽  
Vol 11 (3) ◽  
pp. 72-91
Author(s):  
Priyanka H. ◽  
Mary Cherian

Cloud computing has become more prominent, and it is used in large data centers. Efficient distribution of resources (bandwidth, CPU, and memory) is the major problem in data centers. The genetically enhanced shuffling frog leaping algorithm (GESFLA) framework is proposed to select the optimal virtual machines to schedule the tasks and to allocate them to physical machines (PMs). The proposed GESFLA-based resource allocation technique is useful in minimizing resource wastage and also minimizes the power consumption of the data center. The proposed GESFL algorithm is compared with task-based particle swarm optimization (TBPSO) for efficiency. The experimental results show the superiority of GESFLA over TBPSO in terms of resource usage ratio, migration time, and total execution time. The proposed GESFLA framework reduces the energy consumption of the data center by up to 79%, reduces migration time by 67%, and improves CPU utilization by 9% for PlanetLab workload traces. For the random workload, the execution time is minimized by 71%, transfer time is reduced by up to 99%, and CPU consumption is improved by 17% when compared to TBPSO.
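The abstract does not give GESFLA's internals, but the underlying shuffling frog leaping heuristic follows a standard pattern: sort candidate solutions ("frogs") by fitness, deal them into memeplexes, and let each memeplex's worst frog leap toward its local best. A generic minimisation sketch under those assumptions (function names and parameters are illustrative, not from the paper, and the genetic enhancement is omitted):

```python
import random

def sfla(fitness, dim, n_frogs=30, n_memeplexes=5, iters=50, lo=0.0, hi=1.0):
    """Minimal shuffling frog leaping sketch (continuous minimisation).

    Frogs are sorted by fitness and dealt round-robin into memeplexes;
    within each memeplex the worst frog leaps a random fraction of the
    way toward the local best, falling back to a random reset if the
    leap does not improve it.
    """
    frogs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=fitness)
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for mem in memeplexes:
            best, worst = mem[0], mem[-1]
            step = [random.random() * (b - w) for b, w in zip(best, worst)]
            cand = [min(hi, max(lo, w + s)) for w, s in zip(worst, step)]
            if fitness(cand) < fitness(worst):
                worst[:] = cand
            else:  # no improvement: replace with a fresh random frog
                worst[:] = [random.uniform(lo, hi) for _ in range(dim)]
        frogs = [f for mem in memeplexes for f in mem]
    return min(frogs, key=fitness)
```

In a VM-placement setting the fitness function would score a candidate placement by resource wastage and power draw; here any objective, such as a simple sum of squares, exercises the same loop.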


2011 ◽  
Vol 3 (1) ◽  
pp. 89-97 ◽  
Author(s):  
Amrit Agrawal ◽  
Pranay Chaudhuri

Task scheduling in a heterogeneous parallel and distributed computing environment is a challenging problem. Applications composed of parallel tasks can be represented by directed acyclic graphs (DAGs). Scheduling refers to the assignment of these parallel tasks to a bounded set of heterogeneous processors connected by high-speed networks. Since task assignment is an NP-complete problem, instead of finding an exact solution, scheduling algorithms are developed based on heuristics, with the primary goal of minimizing the overall execution time of the application, or schedule length. In this paper, the overall execution time (schedule length) of the tasks is reduced by using task duplication on top of the Critical-Path-On-a-Processor (CPOP) algorithm.
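The critical path that CPOP pins to a single processor is identified from task ranks. A sketch of the upward-rank computation on a DAG, ignoring communication costs for brevity (the full CPOP recurrence adds average communication weights, and all names here are illustrative):

```python
from functools import lru_cache

def upward_ranks(tasks, succ):
    """Upward rank of each task: its own cost plus the longest chain of
    costs through its successors down to the exit task.

    In CPOP, tasks maximising upward + downward rank form the critical
    path and are kept on one processor; task duplication then re-runs a
    predecessor on a second processor so a waiting task need not pay
    the inter-processor communication delay.
    """
    @lru_cache(maxsize=None)
    def rank_u(t):
        return tasks[t] + max((rank_u(s) for s in succ.get(t, ())), default=0)

    return {t: rank_u(t) for t in tasks}
```

For a diamond-shaped DAG with costs a=2, b=3, c=1, d=2 and edges a→b, a→c, b→d, c→d, the entry task's rank is 2 + 3 + 2 = 7, the length of the critical path a→b→d.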


2014 ◽  
Vol 607 ◽  
pp. 872-876 ◽  
Author(s):  
Xiao Guang Ren

Computational Fluid Dynamics (CFD) is widely applied to the simulation of fluid flows, and the performance of the simulation process is critical for simulation efficiency. In this paper, we analyze the performance of a CFD simulation application with profiling technology, which measures the share of execution time taken by the application's main parts. Through the experiment, we find that the PISO algorithm has a significant impact on CFD simulation performance, accounting for more than 90% of the total execution time. Matrix operations alone account for more than 60% of the total execution time, which provides an opportunity for performance optimization.
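The profiling step described here can be reproduced in miniature with Python's standard profiler: run the code under cProfile and rank functions by cumulative time to see which part dominates the total execution time. (The paper's CFD solver is not Python code; this is a generic illustration of the technique, and `profile_top` is a hypothetical helper name.)

```python
import cProfile
import io
import pstats

def profile_top(func, *args, n=5):
    """Run func under cProfile and return a report of the n entries
    with the highest cumulative time -- the same 'which part dominates
    the total execution time' question the paper asks of its solver."""
    pr = cProfile.Profile()
    pr.enable()
    func(*args)
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(n)
    return buf.getvalue()
```

Wrapping a hot inner routine this way surfaces it at the top of the report, which is exactly how a >60% matrix-operation share would show up.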


2014 ◽  
Vol 571-572 ◽  
pp. 17-21
Author(s):  
Rong Huang ◽  
An Ping Xiong ◽  
Yang Zou

MapReduce is one of the core frameworks of Hadoop, and its computing performance has been widely studied. In a heterogeneous environment, unreasonable map task assignments and inefficient resource utilization lead to multiple backup tasks and a poor total job execution time. To address these problems, this paper proposes a new map task assignment strategy: a dynamic map task balancing strategy based on file labels. The strategy labels jobs according to their types, estimates each node's computing capability and historical processing efficiency for tasks of each label, and ensures that an assigned map task can execute successfully. Experiments show that the strategy can effectively reduce the number of backup tasks in the map phase and, to some extent, optimize the total execution time of the job.
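The strategy's core idea, routing each map task to the node with the best track record for that task's label, can be sketched as a greedy assignment. The data layout and names below are assumptions for illustration, not the paper's implementation:

```python
def assign_map_tasks(tasks, nodes):
    """Greedy sketch of label-aware map task assignment.

    tasks: list of (task_id, label) pairs.
    nodes: {node: {'capacity': free slots,
                   'eff': {label: historical tasks/sec}}} (hypothetical).
    Each task goes to the free node with the best historical efficiency
    for its label, so slow nodes are not handed work they are likely to
    straggle on (which is what triggers backup tasks).
    """
    load = {n: 0 for n in nodes}
    plan = {}
    for task_id, label in tasks:
        candidates = [n for n in nodes if load[n] < nodes[n]['capacity']]
        best = max(candidates, key=lambda n: nodes[n]['eff'].get(label, 0.0))
        plan[task_id] = best
        load[best] += 1
    return plan
```

With two nodes of capacity 2, where one historically processes "text" tasks five times faster, the first two "text" tasks land on the fast node and only the overflow goes to the slow one.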


2019 ◽  
Vol 19 (4) ◽  
pp. 186-192
Author(s):  
A. P. Demichkovskyi

The purpose of the study was to define informative indicators of technical and tactical actions of qualified rifle shooting athletes. Materials and methods. The study involved MSU athletes (n = 10) and CMSU athletes (n = 9). To solve the tasks set, the following research methods were used: analysis and generalization of scientific and methodological literature, and pedagogical observation. Pedagogical observation was used to study the peculiarities of technical and tactical indicators of qualified athletes, as well as their motor abilities; methods of mathematical statistics were used to process the experimental data. Results. A detailed analysis of competitive activity made it possible to determine that the shot phases “Aiming”, “Shot execution – active shot”, and “Preparation for the shot” are informative indicators of technical and tactical actions of qualified rifle shooting athletes. The study determined the time parameters of these phases during competitive activity. The difference between the average indicators of athletes with different sports qualifications is at the limit of 2.55 seconds, which suggests that the duration of the restorative processes of the shooter’s body affects the performance of each shot. Conclusions. A detailed analysis of air rifle shooting among men during competitive activity made it possible to determine the difference in technical and tactical fitness between athletes with different sports qualifications of MSU and CMSU levels: “Aiming” – MSU 950.56 seconds, CMSU 1017.91 seconds; “Shot execution – active shot” – MSU 964.45 seconds, CMSU 952.36 seconds; “Preparation for the shot” – MSU 1678.66 seconds, CMSU 1855.19 seconds; “Total execution time” – MSU 3593.68 seconds, CMSU 3825.47 seconds.


2018 ◽  
Vol 10 (1) ◽  
Author(s):  
Aaron Kite-Powell ◽  
Michael Coletta ◽  
Jamie Smimble

Objective: The objective of this work is to describe the use and performance of the NSSP ESSENCE system by analyzing the structured query language (SQL) logs generated by users of the National Syndromic Surveillance Program’s (NSSP) Electronic Surveillance System for the Early Notification of Community-based Epidemics (ESSENCE).

Introduction: As system users develop queries within ESSENCE, they step through the user interface to select the data sources and parameters needed for their query. They then select from the available output options (e.g., time series, table builder, data details). These activities execute a SQL query on the database, the majority of which are saved in a log so that system developers can troubleshoot problems. Secondarily, these data can be used as a form of web analytics to describe user query choices, query volume, and query execution time, and to develop an understanding of ESSENCE query patterns.

Methods: ESSENCE SQL query logs were extracted from April 1, 2016 to August 23rd, 2017. Overall query volume was assessed by summarizing the volume of queries over time (e.g., by hour, day, and week) and by Site. To better understand system performance, the mean, median, and maximum query execution times were summarized over time and by Site. SQL query text was parsed so that we could isolate: 1) syndromes queried, 2) sub-syndromes queried, 3) keyword categories queried, and 4) free-text query terms used. Syndromes, sub-syndromes, and keyword categories were tabulated in total and by Site. Frequencies of free-text query terms were analyzed using n-grams, wordclouds, and term co-occurrence relationships. Term co-occurrence network graphs were used to visualize the structure of and relationships among terms.

Results: There were a total of 354,101 SQL queries generated by users of ESSENCE between April 1, 2016 and August 23rd, 2017. Over this entire time period there was a weekly mean of 4,785 SQL queries performed by users. When looking at 2017 data through August 23rd, this figure increases to a mean of 7,618 SQL queries per week, and since May 2017 the mean number of SQL queries has increased to 10,485 per week. The maximum number of user-generated SQL queries in a week was 29,173. The mean, median, and maximum query execution times for all data were 0.61 minutes, 0 minutes, and 365 minutes, respectively. When looking at only queries with a free-text component, the mean query execution time increases slightly to 0.94 minutes, though the median is still 0 minutes. The peak usage period, based on the number of SQL queries performed, is between 12:00pm and 3:00pm EST.

Conclusions: The use of NSSP ESSENCE has grown since implementation. This is the first time the ESSENCE system has been used at a national level with this volume of data and number of users. Our focus to date has been on successfully onboarding new Sites so that they can benefit from use of the available tools, providing trainings to new users, and optimizing ESSENCE performance. Routine analysis of the ESSENCE SQL logs can assist us in understanding how the system is being used and how well it is performing, and in evaluating our system optimization efforts.
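The free-text analysis the authors describe (n-gram frequencies and term co-occurrence counts) can be sketched with Python's standard library; the input format and function name are illustrative:

```python
from collections import Counter
from itertools import combinations

def term_stats(queries):
    """Bigram frequencies and term co-occurrence counts across a list
    of free-text query strings.

    Bigrams count adjacent term pairs in order; co-occurrence counts
    unordered pairs appearing anywhere in the same query, which is the
    relation a co-occurrence network graph is built from.
    """
    bigrams, cooc = Counter(), Counter()
    for q in queries:
        terms = q.lower().split()
        bigrams.update(zip(terms, terms[1:]))
        cooc.update(frozenset(p) for p in combinations(set(terms), 2))
    return bigrams, cooc
```

Feeding the co-occurrence counter's most common pairs into a graph library as weighted edges yields the kind of term network graph described in the Methods.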


2021 ◽  
Author(s):  
Mahboubeh Shamsi ◽  
Abdolreza Rasouli Kenari ◽  
Roghayeh Aghamohammadi

Abstract On a graph with a negative cost cycle, the shortest path is undefined, but the number of edges of the shortest negative cost cycle can be computed. This number is called the Negative Cost Girth (NCG). The NCG problem arises in many optimization settings, such as scheduling and model verification. Existing polynomial algorithms suffer from high computation and memory consumption. In this paper, a Map-Reduce framework is implemented to find the NCG of a graph. The proposed algorithm runs in O(log k) parallel time with O(n³) work on each Hadoop node, where n and k are the size of the graph and the value of the NCG, respectively. The Hadoop implementation of the algorithm shows that the total execution time is reduced by 50% compared with polynomial algorithms, especially on large networks as the number of Hadoop nodes increases. The result demonstrates the efficiency of the approach for solving the NCG problem on big data in a parallel and distributed way.
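The NCG has a clean serial characterisation that a parallel algorithm can build on: the min-plus k-th power of the weight matrix holds the cheapest walk using exactly k edges, so the smallest k whose power has a negative diagonal entry is the NCG. A linear-scan sketch of that characterisation (the abstract's O(log k) rounds would come from repeated squaring of these powers across Hadoop nodes; this serial version is for illustration only):

```python
INF = float('inf')

def min_plus(A, B):
    """Min-plus (tropical) product of two n-by-n matrices."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def ncg(W, max_k=None):
    """Negative cost girth: fewest edges in a negative-cost cycle.

    W[i][j] is the weight of edge i->j, or INF if absent. D starts as
    W (walks of exactly 1 edge); multiplying by W extends every walk
    by one edge, and the first length k with a negative diagonal entry
    (a negative walk from some vertex back to itself) is the NCG.
    """
    n = len(W)
    max_k = max_k or n
    D = W
    for k in range(1, max_k + 1):
        if any(D[i][i] < 0 for i in range(n)):
            return k
        D = min_plus(D, W)
    return None  # no negative cycle of length <= max_k
```

For a 3-cycle with edge weights 1, 1, and -3 (total cost -1), the diagonal first goes negative at the third power, so the NCG is 3.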

