fair scheduler
Recently Published Documents

TOTAL DOCUMENTS: 87 (FIVE YEARS: 16)
H-INDEX: 8 (FIVE YEARS: 2)

2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-31
Author(s):  
Lennard Gäher ◽  
Michael Sammler ◽  
Simon Spies ◽  
Ralf Jung ◽  
Hoang-Hai Dang ◽  
...  

Today’s compilers employ a variety of non-trivial optimizations to achieve good performance. One key trick compilers use to justify transformations of concurrent programs is to assume that the source program has no data races: if it does, they cause the program to have undefined behavior (UB) and give the compiler free rein. However, verifying correctness of optimizations that exploit this assumption is a non-trivial problem. In particular, prior work either has not proven that such optimizations preserve program termination (particularly non-obvious when considering optimizations that move instructions out of loop bodies), or has treated all synchronization operations as external functions (losing the ability to reorder instructions around them). In this work we present Simuliris, the first simulation technique to establish termination preservation (under a fair scheduler) for a range of concurrent program transformations that exploit UB in the source language. Simuliris is based on the idea of using ownership to reason modularly about the assumptions the compiler makes about programs with well-defined behavior. This brings the benefits of concurrent separation logics to the space of verifying program transformations: we can combine powerful reasoning techniques such as framing and coinduction to perform thread-local proofs of non-trivial concurrent program optimizations. Simuliris is built on a (non-step-indexed) variant of the Coq-based Iris framework, and is thus not tied to a particular language. In addition to demonstrating the effectiveness of Simuliris on standard compiler optimizations involving data race UB, we also instantiate it with Jung et al.’s Stacked Borrows semantics for Rust and generalize their proofs of interesting type-based aliasing optimizations to account for concurrency.
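The termination-preservation point hinges on scheduler fairness: a busy-waiting loop exits only if the scheduler eventually runs the thread that breaks it. A minimal sketch in Python (which, unlike a C-like source language, gives data races defined behavior, so this illustrates only the fairness aspect, not the UB-based reasoning):

```python
import threading

# Illustrative sketch only: all names here are ours, not the paper's.
flag = False

def writer():
    global flag
    flag = True  # once this runs, the waiter's loop can exit

def waiter():
    # Terminates only if the scheduler eventually runs `writer`, i.e.
    # under a fair scheduler. In a language with data-race UB, a
    # compiler could hoist the read of `flag` out of this loop, and
    # proving that such a transformation preserves termination is the
    # kind of obligation the paper's technique addresses.
    while not flag:
        pass

w = threading.Thread(target=writer)
r = threading.Thread(target=waiter)
r.start()
w.start()
r.join(timeout=10)
terminated = not r.is_alive()
w.join()
print("waiter terminated:", terminated)
```

Under an unfair scheduler that never runs `writer`, `waiter` would spin forever; fairness is exactly what rules that out.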


2021 ◽  
Vol 3 (3) ◽  
Author(s):  
D. Srinivasa Rao ◽  
V. Berlin Hency

In the past two decades, 802.11-based Wireless Local Area Networks (WLANs) have gained popularity in offering Internet services and increasingly support data-driven video streaming applications. Such services demand enhanced viewing quality for all end-users in the network. However, due to the sharing of limited resources and the lack of proper scheduling mechanisms, WLANs are unable to meet the requirements of numerous video users and provide them with better quality. Therefore, an effective resource scheduling approach is needed that can achieve high user satisfaction for ever-changing user needs in quality video viewing. In this paper, a Quality of Experience based Cross-Layer Scheduling (QoECLS) scheme is proposed to enhance the user experience for video applications in WLANs. The Mean Opinion Score is used as the measure of user experience. To provide good average user throughput while guaranteeing fairness among users, the QoECLS scheme uses cross-layer information from the application and physical layers. The QoECLS algorithm allocates transmission resources by considering application feedback along with channel state information and buffer status. The performance of the QoECLS approach was extensively studied through simulations, and the results showed an improvement in user experience and throughput while maintaining fairness among users. In terms of throughput, the proposed scheme achieves 25%, 45%, and 50% improvement compared to the QoE Aware Scheduling scheme, the Modified-Largest Weighted Delay First scheduler, and the Exponential-Proportional Fair scheduler, respectively.
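The abstract does not give the scheduling rule itself; as a hedged illustration, a cross-layer scheduler of this general shape can be sketched as a per-slot utility maximization. The field names, the `score` formula, and all values below are hypothetical stand-ins for the paper's application feedback, channel state information, and buffer status:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    mos: float            # application-layer feedback: Mean Opinion Score estimate (1-5)
    channel_rate: float   # physical-layer channel state: achievable rate (Mbit/s)
    buffer_fill: float    # fraction of the playout buffer filled (0-1)
    avg_throughput: float # moving average, for proportional fairness

def score(u: User) -> float:
    # Hypothetical utility: favour good channels (efficiency), starved
    # buffers and low MOS (QoE), and low past throughput (fairness).
    urgency = (1.0 - u.buffer_fill) * (5.0 - u.mos)
    return u.channel_rate / max(u.avg_throughput, 1e-9) * (1.0 + urgency)

def schedule_slot(users):
    # Allocate the next transmission slot to the highest-scoring user.
    return max(users, key=score)

users = [
    User("a", mos=4.5, channel_rate=20.0, buffer_fill=0.9, avg_throughput=10.0),
    User("b", mos=2.0, channel_rate=15.0, buffer_fill=0.1, avg_throughput=10.0),
]
print(schedule_slot(users).name)  # user "b": worse QoE and emptier buffer wins the slot
```

The point of the cross-layer design is visible here: neither channel state nor QoE feedback alone determines the winner; the scheduler trades them off every slot.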


Author(s):  
Yan Huang ◽  
Shaoran Li ◽  
Y. Thomas Hou ◽  
Wenjing Lou

2020 ◽  
pp. 101847 ◽  
Author(s):  
Sanjay Moulik ◽  
Arnab Sarkar ◽  
Hemangee K. Kapoor

Author(s):  
Patrick Metzler ◽  
Neeraj Suri ◽  
Georg Weissenbacher

Model checkers frequently fail to completely verify a concurrent program, even if partial-order reduction is applied. The verification engineer is left in doubt whether the program is safe and the effort toward verifying the program is wasted. We present a technique that uses the results of such incomplete verification attempts to construct a (fair) scheduler that allows the safe execution of the partially verified concurrent program. This scheduler restricts the execution to schedules that have been proven safe (and prevents executions that were found to be erroneous). We evaluate the performance of our technique and show how it can be improved using partial-order reduction. While constraining the scheduler results in a considerable performance penalty in general, we show that in some cases our approach—somewhat surprisingly—even leads to faster executions.
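The core idea can be sketched in a few lines, with hypothetical thread names and a toy set of verified interleavings: the runtime scheduler only permits a thread to step if the resulting schedule prefix still lies inside the set proven safe by the (incomplete) model-checking run:

```python
# Toy set of interleavings proven safe; in the real technique this
# comes from the model checker's incomplete verification attempt.
VERIFIED = {("t1", "t2", "t1"), ("t2", "t1", "t1")}

def allowed_next(prefix):
    """Threads that may run next without leaving the verified set."""
    n = len(prefix)
    return {s[n] for s in VERIFIED
            if len(s) > n and s[:n] == tuple(prefix)}

# Drive an execution that never leaves the proven-safe schedules.
prefix = []
while True:
    choices = allowed_next(prefix)
    if not choices:
        break
    # Pick any permitted thread; a fair scheduler would rotate here.
    prefix.append(sorted(choices)[0])
print(prefix)  # one complete schedule from the verified set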


2020 ◽  
Vol 9 (1) ◽  
pp. 43-50
Author(s):  
Sidik Prabowo ◽  
Maman Abdurohman

Hadoop is an open-source, Java-based software framework. It consists of two main components: MapReduce and the Hadoop Distributed File System (HDFS). MapReduce comprises the Map and Reduce phases used for data processing, while HDFS is the directory in which Hadoop data is stored. Because submitted jobs often vary in their execution characteristics, an appropriate job scheduler is required, and many job schedulers are available to match job characteristics. The Fair Scheduler is one such scheduler; its principle is to ensure that every job receives the same resources as the other jobs, with the goal of improving average completion time. The Hadoop Fair Sojourn Protocol Scheduler is a scheduling algorithm for Hadoop that schedules jobs based on their size. This study compares the performance of the two schedulers on Twitter data. The results show that the Hadoop Fair Sojourn Protocol Scheduler outperforms the Fair Scheduler, with a 9.31% better average completion time and a 23.46% better job throughput, while the Fair Scheduler is 23.98% better on the task failure rate.
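The fair-sharing principle the Fair Scheduler is built on can be sketched as a max-min fair allocation. This is a simplified model for illustration; Hadoop's actual implementation additionally tracks pools, weights, and minimum shares:

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation: every job gets an equal share of the
    cluster, and any share a job cannot use is redistributed among
    the remaining jobs."""
    alloc = {j: 0.0 for j in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        satisfied = {j for j in active if demands[j] - alloc[j] <= share}
        if satisfied:
            # These jobs need less than an equal share: give them
            # exactly their demand and recycle the leftover.
            for j in satisfied:
                remaining -= demands[j] - alloc[j]
                alloc[j] = demands[j]
            active -= satisfied
        else:
            # Everyone can absorb a full share: split evenly and stop.
            for j in active:
                alloc[j] += share
            remaining = 0.0
    return alloc

# 10 slots, three jobs: the small job is fully served, the rest split evenly.
print(max_min_fair(10, {"job1": 2, "job2": 5, "job3": 8}))
# → {'job1': 2, 'job2': 4.0, 'job3': 4.0}
```

This is the sense in which the Fair Scheduler "ensures each job receives the same resources as the others": equality holds among jobs that can use their share, not as a rigid equal split.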


Author(s):  
Augusto Souza ◽  
Islene Garcia

Disco is an open-source MapReduce framework and an alternative to Hadoop. Task preemption is an important feature that helps organizations relying on the MapReduce paradigm to handle heterogeneous workloads, typically composed of research applications (long-running, low priority) and production applications (short, high priority). The lack of preemption in Disco hurts production jobs when these two kinds of jobs must execute in parallel: the high-priority response is delayed because no resources are free to compute it. In this paper we describe the implementation of the Preemptive Fair Scheduler Policy, which largely improved production job execution times in our experiments, with only a small impact on the research jobs.
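A preemptive fair-scheduler policy of this kind can be sketched as follows. All function and job names are illustrative, not Disco's actual API: when a production job needs slots on a full cluster, the scheduler reclaims tasks from jobs running above their fair share, most over-share first:

```python
def tasks_to_preempt(running, fair_share, needed):
    """Pick `needed` tasks to kill, taken from jobs that currently hold
    more slots than their fair share, most over-share first."""
    over = sorted(running, key=lambda j: running[j] - fair_share[j], reverse=True)
    victims = []
    for job in over:
        # Never preempt a job below its fair share.
        while running[job] > fair_share[job] and len(victims) < needed:
            running[job] -= 1
            victims.append(job)
        if len(victims) == needed:
            break
    return victims

# A research job holds the whole cluster; a production job arrives
# needing 3 slots, so 3 research tasks are preempted.
running = {"research": 8, "production": 0}
fair = {"research": 4, "production": 4}
print(tasks_to_preempt(running, fair, needed=3))  # → ['research', 'research', 'research']
```

Because victims are only taken from over-share jobs, the research job keeps at least its fair share, which is why the impact on research jobs stays small.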


2020 ◽  
Vol 13 (6) ◽  
pp. 2135-2144
Author(s):  
B. Satheesh Monikandan ◽  
A. Sivasubramanian ◽  
S. P. K. Babu ◽  
G. K. D. Prasanna Venkatesan ◽  
C. Arunachalaperumal
