uniform generation
Recently Published Documents

Total documents: 46 (last five years: 7)
H-index: 10 (last five years: 3)
2020 · Vol 49 (1) · pp. 52-59
Author(s): Marcelo Arenas, Luis Alberto Croquevielle, Rajesh Jayaram, Cristian Riveros

10.37236/8251 · 2019 · Vol 26 (4)
Author(s): Pu Gao, Catherine Greenhill

Let $H_n$ be a graph on $n$ vertices and let $\overline{H_n}$ denote the complement of $H_n$. Suppose that $\Delta = \Delta(n)$ is the maximum degree of $\overline{H_n}$. We analyse three algorithms for sampling $d$-regular subgraphs ($d$-factors) of $H_n$. This is equivalent to uniformly sampling $d$-regular graphs which avoid a set $E(\overline{H_n})$ of forbidden edges. Here $d=d(n)$ is a positive integer which may depend on $n$. Two of these algorithms produce a uniformly random $d$-factor of $H_n$ in expected runtime which is linear in $n$ and low-degree polynomial in $d$ and $\Delta$. The first algorithm applies when $(d+\Delta)d\Delta = o(n)$. This improves on an earlier algorithm by the first author, which required constant $d$ and at most a linear number of edges in $\overline{H_n}$. The second algorithm applies when $H_n$ is regular and $d^2+\Delta^2 = o(n)$, adapting an approach developed by the first author together with Wormald. The third algorithm is a simplification of the second, and produces an approximately uniform $d$-factor of $H_n$ in time $O(dn)$.  Here the output distribution differs from uniform by $o(1)$ in total variation distance, provided that $d^2+\Delta^2 = o(n)$.
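The abstract does not spell out the switching-based machinery of these algorithms, and none of it is reproduced here. As a baseline for comparison, the sketch below shows the naive alternative: a configuration-model rejection sampler, which is exactly uniform over simple $d$-regular graphs avoiding the forbidden edge set $E(\overline{H_n})$ but whose acceptance probability can be vanishingly small. Function names and parameters are illustrative, not taken from the paper.

```python
import random

def naive_uniform_d_factor(n, d, forbidden, max_tries=100_000):
    """Rejection sampling of a uniformly random simple d-regular graph on
    vertices 0..n-1 that avoids every edge in `forbidden` (a set of
    frozensets). Exactly uniform over the target class, but the number of
    rejections can be enormous; avoiding that blow-up is the point of the
    paper's algorithms."""
    assert (n * d) % 2 == 0, "n*d must be even for a d-regular graph to exist"
    for _ in range(max_tries):
        # Configuration model: d half-edges per vertex, paired uniformly at random.
        points = [v for v in range(n) for _ in range(d)]
        random.shuffle(points)
        edges = set()
        ok = True
        for i in range(0, len(points), 2):
            u, w = points[i], points[i + 1]
            e = frozenset((u, w))
            # Reject loops, multi-edges, and forbidden edges.
            if u == w or e in edges or e in forbidden:
                ok = False
                break
            edges.add(e)
        if ok:
            return edges
    raise RuntimeError("no sample accepted; rejection rate too high")

# Example: 2-factors on 8 vertices where the complement forbids the edge {0, 1}.
sample = naive_uniform_d_factor(n=8, d=2, forbidden={frozenset((0, 1))})
print(sorted(tuple(sorted(e)) for e in sample))
```

Conditioning a uniform pairing of half-edges on producing a simple, non-forbidden graph yields the uniform distribution over that class, which is why rejection is correct; the algorithms in the paper achieve (near-)uniformity without paying the rejection cost.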


2019 · Vol 199 · pp. 439-450
Author(s): Yuexiao Song, Feng Xin, Gezi Guangyong, Shuo Lou, Chen Cao, ...

10.29007/h4p9 · 2018
Author(s): Shubham Sharma, Rahul Gupta, Subhajit Roy, Kuldeep S. Meel

Uniform sampling has found diverse applications in programming languages and software engineering, such as constrained-random verification (CRV), constrained fuzzing and bug synthesis. The effectiveness of these applications depends on the uniformity of the test stimuli generated from a given set of constraints. Despite significant progress over the past few years, the performance of state-of-the-art techniques still falls short of that of the heuristic methods employed in industry, which sacrifice either uniformity or scalability when generating stimuli.

In this paper, we propose a new approach to uniform generation that builds on recent progress in knowledge compilation. The primary contribution of this paper is marrying knowledge compilation with uniform sampling: our algorithm, KUS, employs state-of-the-art knowledge compilers to first compile constraints into d-DNNF form, and then generates samples by making two passes over the compiled representation.

We show that KUS significantly outperforms the existing state-of-the-art algorithms SPUR and UniGen2 by up to 3 orders of magnitude in runtime, achieving geometric speedups of 1.7× and 8.3× over SPUR and UniGen2 respectively. KUS also achieves a lower PAR-2 score, around 0.82× that of SPUR and 0.38× that of UniGen2. Furthermore, KUS achieves speedups of up to 3 orders of magnitude for incremental sampling. The distribution generated by KUS is statistically indistinguishable from that generated by an ideal uniform sampler. Moreover, KUS is almost oblivious to the number of samples requested.
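The "two passes over the compiled representation" correspond to the standard count-then-sample scheme for d-DNNF: a bottom-up pass annotates each node with its model count, and a top-down pass draws a sample by choosing OR-branches with probability proportional to those counts and descending into every child of an AND node. The Python sketch below illustrates that generic scheme on a toy d-DNNF; the node classes and function names are assumptions made for illustration and do not reproduce the KUS implementation.

```python
import random
from dataclasses import dataclass

# Toy d-DNNF node types (illustrative only, not the KUS data structures).
@dataclass
class Lit:            # a literal: positive variable index, neg=True for negation
    var: int
    neg: bool = False

@dataclass
class And:            # decomposable AND: children mention disjoint variables
    children: list

@dataclass
class Or:             # deterministic OR: children have disjoint model sets
    children: list

def variables(node):
    if isinstance(node, Lit):
        return {node.var}
    return set().union(*(variables(c) for c in node.children))

def count(node):
    """Pass 1: number of models over exactly the variables the node mentions.
    A real implementation would memoize per node instead of recomputing."""
    if isinstance(node, Lit):
        return 1
    if isinstance(node, And):
        r = 1
        for c in node.children:
            r *= count(c)
        return r
    # OR node: pad each child's count up to the variable set of the whole node.
    v = variables(node)
    return sum(count(c) * 2 ** (len(v) - len(variables(c))) for c in node.children)

def sample(node, assignment):
    """Pass 2: draw one model uniformly at random, writing it into `assignment`."""
    if isinstance(node, Lit):
        assignment[node.var] = not node.neg
        return
    if isinstance(node, And):
        for c in node.children:          # disjoint variables: sample every child
            sample(c, assignment)
        return
    v = variables(node)
    weights = [count(c) * 2 ** (len(v) - len(variables(c))) for c in node.children]
    child = random.choices(node.children, weights=weights)[0]
    sample(child, assignment)
    # Variables of the OR node not mentioned by the chosen child are unconstrained.
    for x in v - variables(child):
        assignment[x] = random.random() < 0.5

# Example: (x1 AND x2) OR (NOT x1), a decision on x1 with 3 models over {x1, x2}.
root = Or([And([Lit(1), Lit(2)]), Lit(1, neg=True)])
print("model count:", count(root))
a = {}
sample(root, a)
print("uniform model:", a)
```

Because every model of the root is reachable by exactly one sequence of OR-branch choices (determinism) and AND-children never clash (decomposability), weighting branches by model counts makes each model equally likely, which is the property the abstract's uniformity claim rests on.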


2018 · Vol 10 (34) · pp. 28702-28708
Author(s): Mohamad El-Roz, Igor Telegeiev, Natalia E. Mordvinova, Oleg I. Lebedev, Nicolas Barrier, ...
