Priority Algorithms
Recently Published Documents

TOTAL DOCUMENTS: 24 (five years: 4)
H-INDEX: 8 (five years: 2)
In a soft real-time system, scheduling processes on the processor is a critical task. The system schedules processes on a processor in time intervals, so each process gets a chance to execute on the processor. Priority-driven scheduling algorithms fall into two main categories: static priority and dynamic priority schedulers. This paper presents a critical analysis of several static and dynamic priority scheduling algorithms. It covers the static priority algorithms Rate Monotonic (RM) and Shortest Job First (SJF) and the dynamic priority algorithms Earliest Deadline First (EDF) and Least Slack Time First (LST). All algorithms have been analyzed with preemptive process sets, and all process sets are assumed to be periodic. The paper also proposes a hybrid approach for efficient scheduling. The analysis shows that in underload situations dynamic priority algorithms perform well, and EDF even guarantees that every process meets its deadline. In overload situations, however, the performance of dynamic priority algorithms degrades quickly and most tasks miss their deadlines, whereas static priority scheduling algorithms miss only a few deadlines and can even schedule all processes in underload situations; under overload, the static algorithms perform better than the dynamic schedulers. The paper proposes a hybrid algorithm called S_LST that combines the concepts of the LST and SJF scheduling algorithms. The algorithm has been applied to periodic task sets and the observations recorded. We measured the Success Ratio (SR) and Effective CPU Utilization (ECU) and compared all algorithms under the same conditions. Instead of using LST and SJF as independent algorithms, the hybrid algorithm S_LST performs well in both underload and overload scenarios. Practical investigations were conducted on a large dataset consisting of more than 7000 process sets, each with one to nine processes and a load varying between 0.5 and 5. Each set was simulated for 500 time units to validate the correctness of the results.
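As a minimal illustration of how such an LST/SJF hybrid might be structured (the abstract does not give the exact combination rule, so the load-based switch and the Task fields below are assumptions for the sketch):

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        remaining: int   # remaining execution time
        deadline: int    # absolute deadline

    def slack(task: Task, now: int) -> int:
        # Slack: time left until the deadline minus remaining work.
        return task.deadline - now - task.remaining

    def pick_next(ready: list[Task], now: int, load: float) -> Task:
        # Assumed hybrid rule: behave like LST while the system is underloaded,
        # and like SJF once load exceeds 1, finishing short jobs to keep SR high.
        if load <= 1.0:
            return min(ready, key=lambda t: slack(t, now))
        return min(ready, key=lambda t: t.remaining)

    # Example: under load 1.4 the short job A wins even though B is more urgent.
    ready = [Task("A", remaining=2, deadline=20), Task("B", remaining=6, deadline=8)]
    print(pick_next(ready, now=0, load=1.4).name)  # -> A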


2019 · Vol 64 (4) · pp. 593-625 · Author(s): Allan Borodin, Joan Boyar, Kim S. Larsen, Denis Pankratov

2019 · Vol 24 (6) · pp. 1835-1847 · Author(s): George Fernandez Savari, Vijayakumar Krishnasamy, Vidyasagar Sugavanam, Kalyanasundaram Vakesan

Author(s): Dadmehr Rahbari, Mohsen Nickray

In today's world, the Internet of Things (IoT) is developing rapidly. Wireless sensor networks (WSNs), as an infrastructure of IoT, have limitations in processing power, storage, and the delay of data transfer to the cloud. The large volume of generated data and its transmission between WSNs and the cloud are serious challenges. Fog computing (FC), as an extension of the cloud to the edge of the network, reduces latency and traffic; it is therefore very useful in IoT applications such as healthcare, wearables, intelligent transportation systems, and smart cities. Resource allocation and task scheduling are NP-hard problems in FC. Each application comprises several modules that require resources to run. Fog devices (FDs) can run resource management algorithms because of their proximity to both sensors and the cloud, as well as their adequate processing power. In this paper, we review scheduling strategies and parameters and provide a greedy knapsack-based scheduling (GKS) algorithm for allocating resources to modules in a fog network. The proposed method was simulated in iFogSim, a standard simulator for FC. The results show that energy consumption, execution cost, and sensor lifetime under GKS are better than those of the first-come-first-served (FCFS), concurrent, and delay-priority algorithms.
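A minimal sketch of a greedy knapsack-style placement in the spirit of GKS follows; the device and module fields and the utility-per-MIPS ordering are assumptions for illustration, not details taken from the paper:

    from dataclasses import dataclass

    @dataclass
    class Module:
        name: str
        mips: int       # processing demand
        utility: float  # assumed benefit of running this module at the edge

    @dataclass
    class FogDevice:
        name: str
        capacity: int   # remaining MIPS

    def gks_place(modules: list[Module], devices: list[FogDevice]) -> dict[str, str]:
        placement = {}
        # Classic knapsack greedy: best utility per unit of demand first.
        for m in sorted(modules, key=lambda m: m.utility / m.mips, reverse=True):
            for d in devices:
                if d.capacity >= m.mips:
                    d.capacity -= m.mips
                    placement[m.name] = d.name
                    break
            else:
                placement[m.name] = "cloud"  # no fog capacity left: fall back to cloud
        return placement

    mods = [Module("camera", 300, 9.0), Module("filter", 200, 4.0), Module("ui", 100, 1.0)]
    devs = [FogDevice("fog-1", 400), FogDevice("fog-2", 150)]
    print(gks_place(mods, devs))  # -> {'camera': 'fog-1', 'filter': 'cloud', 'ui': 'fog-1'}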


2018 · Vol 62 · pp. 459-488 · Author(s): Dimitris Fotakis, Piotr Krysta, Carmine Ventre

Greedy algorithms are known to provide, in polynomial time, near-optimal approximation guarantees for Combinatorial Auctions (CAs) with multidimensional bidders. It is known that truthful greedy-like mechanisms for CAs with multi-minded bidders do not achieve good approximation guarantees. In this work, we seek a deeper understanding of greedy mechanism design and investigate under which general assumptions we can have efficient and truthful greedy mechanisms for CAs. Towards this goal, we use the framework of priority algorithms and of weak and strong verification, where the bidders are not allowed to overbid on their winning set or on any subset of this set, respectively. We provide a complete characterization of the power of weak verification, showing that it is sufficient and necessary for any greedy fixed priority algorithm to become truthful with or without the use of money, depending on the ordering of the bids. Moreover, we show that strong verification is sufficient and necessary to obtain a 2-approximate truthful mechanism with money, based on a known greedy algorithm, for the problem of submodular CAs in finite bidding domains. Our proof is based on an interesting structural analysis of the strongly connected components of the declaration graph.
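For illustration, a minimal sketch of a greedy fixed-priority allocation of the kind studied here, assuming single-minded bids and a value-based priority order (the ordering function is illustrative; the paper's results concern which orderings admit truthful payments under verification):

    def greedy_allocate(bids: list[tuple[str, frozenset, float]]):
        allocated_items: set = set()
        winners = []
        # Fixed priority: consider bids in non-increasing declared value.
        for bidder, items, value in sorted(bids, key=lambda b: b[2], reverse=True):
            if items.isdisjoint(allocated_items):
                winners.append(bidder)   # award the full requested set
                allocated_items |= items
        return winners

    bids = [("a", frozenset({1, 2}), 10.0),
            ("b", frozenset({2, 3}), 8.0),
            ("c", frozenset({3}), 5.0)]
    print(greedy_allocate(bids))  # -> ['a', 'c']; b loses since item 2 is taken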


Author(s): Allan Borodin, Joan Boyar, Kim S. Larsen, Denis Pankratov

2014 · Vol 536-537 · pp. 566-569 · Author(s): Feng Xiang Zhang

This paper focuses on two-level hierarchical scheduling, where several real-time applications are scheduled by fixed priority algorithms. Each application, with its real-time tasks, is bound to a server that can be modeled as a sporadic task, with special care required in the schedulability analysis. Different scheduling policies and servers can be applied in hierarchical fixed priority systems; this paper gives a closer review of schedulability analysis for applications and tasks when both the global and local schedulers of a system are fixed priority.
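As background, a minimal sketch of the classical fixed-priority response-time iteration that such hierarchical analyses build on; the server supply function the paper adds on top of this is not modeled here:

    from math import ceil

    def response_time(c, higher, bound=10_000):
        # Iterate R = C + sum(ceil(R / T_j) * C_j) over higher-priority tasks
        # until a fixpoint (the worst-case response time) is reached, or R
        # exceeds the bound, in which case the task is deemed unschedulable.
        r = c
        while r <= bound:
            nxt = c + sum(ceil(r / t_j) * c_j for c_j, t_j in higher)
            if nxt == r:
                return r
            r = nxt
        return None

    # Example: task with C=2 under higher-priority tasks (C=1, T=4) and (C=2, T=6).
    print(response_time(2, [(1, 4), (2, 6)]))  # -> 6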


2014 · Vol 32 (3_suppl) · pp. 66-66 · Author(s): Daniel Virgil Thomas Catenacci, Blase N. Polite, Les Henderson, Peng Xu, Brittany Rambo, ...

Background: GEC is the second highest cause of cancer mortality worldwide. The promise of 'personalized' cancer care, with therapies directed at specific molecular aberrations, has the potential to improve outcomes. However, there is recognized molecular heterogeneity within GEC between patients (inter-patient heterogeneity) and within an individual (intra-patient heterogeneity), through space (primary tumor to metastasis) and time (resistance to treatment), which is a hurdle to advancing GEC treatment. Current trial design paradigms are challenged by this heterogeneity, as they are unable to test targeted therapeutics against low-frequency genomic aberrations with adequate power. Accrual difficulties in GEC trials are exacerbated by the low frequencies of molecular 'oncogenic drivers': drivers of GEC including MET and others show even less frequent genomic activation than HER2. To address this challenge, novel clinical trial designs and strategies implementing new technologies are needed to account for inter-patient molecular diversity and scarce tissue for analysis. Importantly, predefined treatment priority algorithms are also needed, given the multiple aberrations observed within any one individual. Finally, multiple therapeutic agents must be available for treatment. Intra-patient heterogeneity may be addressed by post-treatment biopsy. Methods: We present a novel trial design, 'Personalized Anti-Neoplastics for Gastro-Esophageal Adenocarcinoma' (PANGEA), for metastatic GEC, integrating medium-throughput proteomic/genomic assays with a practical biomarker assessment and treatment algorithm. Analysis of 50 GEC patients was performed to determine the feasibility and timing of testing and of treatment assignment into 5 major molecular categories. Results: 50 GEC tumors had biomarker assessment and mock treatment assignment within 60 days, revealing HER2 (26%), MET (30%), FGFR2 (8%), EGFR (20%), and KRAS/PI3K (26%). Conclusions: Comprehensive molecular profiling of FFPE tissue was feasible and timely. Tumors were classified into major molecular subgroups. PANGEA is a compromise between the number of potential treatment categories and the feasibility of conducting such a trial.
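A hypothetical sketch of what such a predefined treatment priority algorithm could look like, with a fixed category order resolving tumors that show multiple positive markers; the order and fallback below are purely illustrative, not the PANGEA algorithm:

    # Highest-priority category first; order is an illustrative assumption.
    PRIORITY = ["HER2", "MET", "FGFR2", "EGFR", "KRAS/PI3K"]

    def assign_category(positive_markers: set[str]) -> str:
        # Walk the fixed priority order; the first positive marker wins.
        for category in PRIORITY:
            if category in positive_markers:
                return category
        return "chemotherapy-only"  # hypothetical fallback arm, no marker positive

    # A tumor positive for both MET and EGFR goes to the higher-priority arm.
    print(assign_category({"MET", "EGFR"}))  # -> MET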


2012 · Vol 22 (4) · pp. 417-425 · Author(s): Jolanta Krystek, Marek Kozik

This paper presents a generalized job-shop problem that takes into consideration transport times between workstations and machine setups under deadlock-free operating conditions. An automated transportation system employing a number of automated guided vehicles is considered. The completion time of all jobs was applied as the optimization criterion. A computational application was created to solve this problem, in which the chosen priority algorithms (FIFO, LIFO, LPT, SPT, EDD and LWR) were implemented. Various criteria were used to assess the quality of the created schedules. Numerical results of the comparative research are presented for the various criteria and priority rules.
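A minimal sketch of these six dispatching rules, assuming each queued operation carries its arrival time, processing time, job due date, and the job's remaining work (field names are illustrative, not from the paper):

    from dataclasses import dataclass

    @dataclass
    class Operation:
        arrival: int         # time the operation joined the queue
        proc_time: int       # processing time on this machine
        due_date: int        # due date of the parent job
        work_remaining: int  # total remaining work of the parent job (for LWR)

    RULES = {
        "FIFO": lambda op: op.arrival,         # first in, first out
        "LIFO": lambda op: -op.arrival,        # last in, first out
        "LPT":  lambda op: -op.proc_time,      # longest processing time first
        "SPT":  lambda op: op.proc_time,       # shortest processing time first
        "EDD":  lambda op: op.due_date,        # earliest due date first
        "LWR":  lambda op: op.work_remaining,  # least work remaining first
    }

    def dispatch(queue, rule):
        # Pick the next operation for a freed machine under the chosen rule.
        return min(queue, key=RULES[rule])

    q = [Operation(0, 5, 20, 12), Operation(2, 3, 15, 3)]
    print(dispatch(q, "SPT").proc_time)  # -> 3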


2010 · Vol 411 (26-28) · pp. 2542-2558 · Author(s): Spyros Angelopoulos, Allan Borodin
