data races
Recently Published Documents

TOTAL DOCUMENTS: 100 (five years: 16)
H-INDEX: 15 (five years: 1)

2022 · Vol 6 (POPL) · pp. 1-31
Author(s): Lennard Gäher, Michael Sammler, Simon Spies, Ralf Jung, Hoang-Hai Dang, ...

Today’s compilers employ a variety of non-trivial optimizations to achieve good performance. One key trick compilers use to justify transformations of concurrent programs is to assume that the source program has no data races: if it does, the races cause the program to have undefined behavior (UB) and give the compiler free rein. However, verifying correctness of optimizations that exploit this assumption is a non-trivial problem. In particular, prior work either has not proven that such optimizations preserve program termination (particularly non-obvious when considering optimizations that move instructions out of loop bodies), or has treated all synchronization operations as external functions (losing the ability to reorder instructions around them). In this work we present Simuliris, the first simulation technique to establish termination preservation (under a fair scheduler) for a range of concurrent program transformations that exploit UB in the source language. Simuliris is based on the idea of using ownership to reason modularly about the assumptions the compiler makes about programs with well-defined behavior. This brings the benefits of concurrent separation logics to the space of verifying program transformations: we can combine powerful reasoning techniques such as framing and coinduction to perform thread-local proofs of non-trivial concurrent program optimizations. Simuliris is built on a (non-step-indexed) variant of the Coq-based Iris framework, and is thus not tied to a particular language. In addition to demonstrating the effectiveness of Simuliris on standard compiler optimizations involving data race UB, we also instantiate it with Jung et al.’s Stacked Borrows semantics for Rust and generalize their proofs of interesting type-based aliasing optimizations to account for concurrency.


Author(s): Bradley Swain, Bozhen Liu, Peiming Liu, Yanze Li, Addison Crump, ...

Author(s): Jia-Ju Bai, Qiu-Liang Chen, Zu-Ming Jiang, Julia Lawall, Shi-Min Hu

2020 · Vol 76 (4) · pp. 3155-3155
Author(s): Misun Yu, Yu-Seung Ma, Doo-Hwan Bae

Author(s): D. A. Morgunov

The article presents a new set-theoretic model and procedures that reduce the time required to detect hidden vulnerabilities in the source code of multi-threaded computer programs, as well as the results of mathematical modeling. Hidden vulnerabilities are understood here as vulnerabilities leading to data races and deadlocks: their manifestation during testing is stochastic, which greatly complicates their identification. The presented model describes the state of each thread of a multi-threaded computer program currently executing a function, together with the contents of the function call stack. At the same time, the model remains usable for verification by the Model Checking method and eliminates the need to solve the problem of searching for the model invariant. The presented procedures make it possible to formulate specifications for the model-checking verification method whose implementation identifies vulnerabilities leading to data races and deadlocks in the source code of multithreaded programs.


A data race is a bug like any other in a software application, but it makes the execution of the program unpredictable. There are 46 documented races in the Linux kernel. OpenMP is an application programming interface for the shared-memory programming model. It is a construct-based model built on fork-join parallelism. OpenMP achieves node-level parallelism and can manage data in both single-instruction-multiple-data and single-program-multiple-data styles through constructs such as work-sharing and parallel constructs. In any shared programming model, variables are shared by the multiple threads that execute the program's different tasks. OpenMP achieves parallelism by creating a shared-variable environment, but this opens the door to data races in OpenMP programs. In this paper we discuss different algorithms to detect data races in OpenMP programs.


2019 · Vol 9 (23) · pp. 5100
Author(s): Congxi Song, Xu Zhou, Qidi Yin, Xinglu He, Hangwei Zhang, ...

Fuzzing is an effective technology in software testing and security vulnerability detection. Unfortunately, fuzzing is an extremely compute-intensive job, which may consume thousands of computing hours to find a single bug. Recent work generally improves fuzzing efficiency by developing delicate algorithms. In this paper, we propose another direction of improvement in this field, i.e., leveraging parallel computing. To this end, we develop P-fuzz, a parallel fuzzing framework that can utilize massive, distributed computing resources to fuzz. P-fuzz uses a database to share fuzzing status such as seeds, coverage information, etc. All fuzzing nodes get tasks from the database and update their fuzzing status to it. P-fuzz also handles certain data races and exceptions that arise in parallel fuzzing. We compare P-fuzz with AFL and the parallel fuzzing framework Roving in our experiment. The result shows that P-fuzz speeds up AFL by about 2.59× and Roving by about 1.66× on average using 4 nodes.


2019
Author(s): M Schordan, C Liao, P Lin, I Karlin
