Random Generators: Recently Published Documents

Total documents: 87 (last five years: 17)
H-index: 11 (last five years: 2)

Author(s): Lutz Kämmerer, Felix Krahmer, Toni Volkmer

Abstract: In this paper, a sublinear-time algorithm is presented for the reconstruction of functions that can be represented by just a few out of a potentially large candidate set of Fourier basis functions in high spatial dimensions, a so-called high-dimensional sparse fast Fourier transform. In contrast to many other such algorithms, our method works for arbitrary candidate sets and makes no additional structural assumptions on them. Our transform significantly improves on the other approaches available for such a general framework in terms of the scaling of the sample complexity. Our algorithm is based on sampling the function along multiple rank-1 lattices with random generators. Combined with a dimension-incremental approach, our method yields a sparse Fourier transform whose computational complexity grows only mildly in the dimension and can hence be computed efficiently even in high dimensions. Our theoretical analysis establishes that any Fourier s-sparse function can be accurately reconstructed with high probability. This guarantee is complemented by several numerical tests demonstrating high efficiency and versatile applicability, both in the exactly sparse case and in the compressible case.
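The core primitive here, sampling along a rank-1 lattice with a random generator vector, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the function and variable names are hypothetical, and the dimension-incremental machinery and the use of multiple lattices are omitted. On a rank-1 lattice of size M with generator z, every candidate frequency k aliases onto the 1-D FFT bin (k · z) mod M, so a single length-M FFT recovers the coefficients whenever the candidate frequencies fall into distinct bins, which a random z achieves with high probability.

```python
import numpy as np

def rank1_lattice_fft(f, z, M, freqs):
    """Estimate the Fourier coefficients of f at the candidate frequencies
    `freqs` by sampling f along the rank-1 lattice x_j = (j * z mod M) / M."""
    j = np.arange(M)
    nodes = (j[:, None] * np.asarray(z)[None, :] % M) / M  # M x d lattice nodes
    fhat = np.fft.fft(f(nodes)) / M                        # one 1-D FFT of length M
    # On this lattice, frequency k aliases onto FFT bin (k . z) mod M.
    return {tuple(k): fhat[int(np.dot(k, z)) % M] for k in freqs}

# Example: a 1-sparse function with active frequency k0 = (3, 5) in d = 2.
rng = np.random.default_rng(0)
z = rng.integers(1, 97, size=2)        # random lattice generator vector
k0 = np.array([3, 5])
f = lambda x: np.exp(2j * np.pi * (x @ k0))
coeffs = rank1_lattice_fft(f, z, 97, [k0, np.array([1, 1])])
# coeffs[(3, 5)] is close to 1; other candidates are ~0 unless bins collide.
```

Sampling the lattice costs M function evaluations and one FFT regardless of the spatial dimension d, which is the reason the complexity grows only mildly with d.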


2021, Vol 11 (1)
Author(s): Alice Wong, Garance Merholz, Uri Maoz

Abstract: The human ability for random-sequence generation (RSG) is limited but improves in a competitive game environment with feedback. However, it remains unclear how random people can be during games, and whether RSG during games can improve when people are explicitly told that they must be as random as possible to win. Nor is it known whether any such improvement in RSG transfers outside the game environment. To investigate this, we designed a pre/post intervention paradigm around a Rock-Paper-Scissors game followed by a questionnaire. During the game, we manipulated participants’ level of awareness of the computer’s strategy: they were either (a) not informed of the computer’s algorithm or (b) explicitly informed that the computer used patterns in their choice history against them, so they must be maximally random to win. Using a compressibility metric of randomness, our results demonstrate that human RSG can reach levels statistically indistinguishable from computer pseudo-random generators in a competitive-game setting. However, our results also suggest that human RSG cannot be further improved by explicitly informing participants that they need to be random to win. In addition, the higher RSG in the game setting does not transfer outside the game environment. Furthermore, we found that the underrepresentation of long repetitions of the same entry in the series explains up to 29% of the variability in human RSG, and we discuss what might make up the variance left unexplained.
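One common way to operationalize a compressibility metric of randomness, sketched here under the assumption that a general-purpose compressor such as zlib is used (the study's exact metric may differ), is the ratio of compressed size to raw size: the more predictable a sequence, the smaller that ratio.

```python
import random
import zlib

def compressibility(seq):
    """Crude randomness score: zlib-compressed size over raw size.
    A more predictable sequence compresses better, giving a lower score."""
    data = bytes(seq)
    return len(zlib.compress(data, 9)) / len(data)

random.seed(1)
patterned = [0, 1, 2] * 100                           # fully predictable choices
shuffled = [random.randrange(3) for _ in range(300)]  # pseudo-random baseline
score_patterned = compressibility(patterned)
score_random = compressibility(shuffled)
```

Comparing a participant's score against that of a pseudo-random generator of the same length and alphabet gives the kind of statistical comparison the abstract describes.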


2021, Vol 104 (3)
Author(s): W. Tarnowski, I. Yusipov, T. Laptyeva, S. Denisov, D. Chruściński, ...

Author(s): Sean Eberhard, Urban Jezernik

Abstract: Let $G = \mathrm{SCl}_n(q)$ be a quasisimple classical group with $n$ large, and let $x_1, \ldots, x_k \in G$ be random, where $k \ge q^C$. We show that the diameter of the resulting Cayley graph is bounded by $q^2 n^{O(1)}$ with probability $1 - o(1)$. In the particular case $G = \mathrm{SL}_n(p)$ with $p$ a prime of bounded size, we show that the same holds for $k = 3$.


2021, Vol 104 (1), pp. 727-737
Author(s): Aleksandra V. Tutueva, Timur I. Karimov, Lazaros Moysis, Erivelton G. Nepomuceno, Christos Volos, ...

2021
Author(s): Sook Mun Wong, Garance Merholz, Uri Maoz

Abstract: The human ability for random-sequence generation (RSG) is limited but improves in a competitive game environment with feedback. However, it remains unclear whether RSG during games can improve when people are explicitly told that they must be as random as possible to win the game, nor is it known whether any such improvement transfers outside the game environment. To investigate this, we designed a pre/post intervention paradigm around a rock-paper-scissors game followed by a questionnaire. During the game we manipulated participants’ level of awareness of the computer’s strategy: they were either (a) not informed of the computer’s algorithm or (b) explicitly informed that the computer used patterns in their choice history to beat them, so they must be maximally random to win. Using a novel comparison metric, our results demonstrate that human RSG can reach levels statistically indistinguishable from computer pseudo-random generators in a competitive-game setting. However, our results also suggest that human RSG cannot be further improved by explicitly informing participants that they need to be random to win. In addition, the higher RSG in the game setting does not transfer outside the game environment. Furthermore, we found that the underrepresentation of long repetitions of short patterns explains about a third of the variability in human RSG, and we discuss what might make up the remaining two thirds of RSG variability. Finally, we discuss our results in the context of the “Network-Modulation Model” and ponder their potential relation to findings in the neuroscience of volition.
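The "underrepresentation of long repetitions" finding suggests a simple diagnostic: tabulate the lengths of maximal runs of identical entries and compare against the uniform baseline. A hypothetical sketch, not the study's analysis code:

```python
import random
from itertools import groupby

def run_length_counts(seq):
    """Histogram of the lengths of maximal runs of identical entries."""
    counts = {}
    for _, grp in groupby(seq):
        n = len(list(grp))
        counts[n] = counts.get(n, 0) + 1
    return counts

random.seed(7)
pseudo = [random.randrange(3) for _ in range(3000)]   # uniform 3-symbol source
counts = run_length_counts(pseudo)
# For a uniform 3-symbol source a run extends with probability 1/3, so runs
# of length >= 3 make up about 1/9 of all runs; human sequences tend to
# undershoot this baseline.
long_runs = sum(v for k, v in counts.items() if k >= 3)
```

A participant's run-length histogram falling short of the pseudo-random one on long runs is exactly the kind of deficit the abstract quantifies.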


Author(s): Harrison Goldstein, John Hughes, Leonidas Lampropoulos, Benjamin C. Pierce

Abstract: Property-based testing uses randomly generated inputs to validate high-level program specifications. It can be shockingly effective at finding bugs, but it often requires generating a very large number of inputs to do so. In this paper, we apply ideas from combinatorial testing, a powerful and widely studied testing methodology, to modify the distributions of our random generators so as to find bugs with fewer tests. The key concept is combinatorial coverage, which measures the degree to which a given set of tests exercises every possible choice of values for every small combination of input features. In its “classical” form, combinatorial coverage only applies to programs whose inputs have a very particular shape—essentially, a Cartesian product of finite sets. We generalize combinatorial coverage to the richer world of algebraic data types by formalizing a class of sparse test descriptions based on regular tree expressions. This new definition of coverage inspires a novel combinatorial thinning algorithm for improving the coverage of random test generators, requiring many fewer tests to catch bugs. We evaluate this algorithm on two case studies, a typed evaluator for System F terms and a Haskell compiler, showing significant improvements in both.
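In its classical, Cartesian-product form, combinatorial coverage can be computed directly. The sketch below (hypothetical names, 2-way coverage only, not the paper's generalization to algebraic data types) counts the fraction of pairwise feature-value combinations a test suite exercises:

```python
from itertools import combinations, product

def pairwise_coverage(tests, domains):
    """Fraction of all 2-way (pairwise) feature-value combinations that
    the test suite `tests` covers, over per-feature value `domains`."""
    feats = range(len(domains))
    wanted = set()
    for i, j in combinations(feats, 2):
        for vi, vj in product(domains[i], domains[j]):
            wanted.add((i, j, vi, vj))
    hit = set()
    for t in tests:
        for i, j in combinations(feats, 2):
            hit.add((i, j, t[i], t[j]))
    return len(hit & wanted) / len(wanted)

# Three boolean features; two tests out of the eight possible inputs.
domains = [[0, 1]] * 3
cov = pairwise_coverage([(0, 0, 0), (1, 1, 1)], domains)  # covers 6 of 12 pairs
```

Note that a well-chosen suite of just four tests already achieves full pairwise coverage here, even though exhaustive testing needs eight; driving generators toward uncovered combinations is the intuition behind the thinning algorithm.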


Nanophotonics, 2020, Vol 10 (1), pp. 457-464
Author(s): Andrea Fratalocchi, Adam Fleming, Claudio Conti, Andrea Di Falco

Abstract: Physical unclonable functions (PUFs) are complex physical objects that aim at overcoming the vulnerabilities of traditional cryptographic keys, promising a robust class of security primitives for different applications. Optical PUFs present advantages over traditional electronic realizations, namely a stronger unclonability, but suffer from problems of reliability and weak unpredictability of the key. We here develop a two-step PUF generation strategy based on deep learning, which produces reliable keys verified against the National Institute of Standards and Technology (NIST) certification standards for true random generators in cryptography. The idea explored in this work is to decouple the design of the PUFs from the key generation and to train a neural architecture to learn the mapping algorithm between the key and the PUF. We report experimental results with all-optical PUFs realized in silica aerogels and analyze a population of 100 generated keys, each 10,000 bits long. The generated keys passed all tests required by the NIST standard, with proportion outcomes well beyond NIST’s recommended threshold. The two-step key generation strategy studied in this work can be generalized to any PUF based on either optical or electronic implementations. It can help the design of robust PUFs for both secure authentication and encrypted communications.
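The NIST SP 800-22 suite referenced here comprises many statistical tests. The simplest of them, the frequency (monobit) test, can be sketched as below; this is an illustrative implementation, not the certified NIST code:

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the hypothesis
    that ones and zeros are equally likely in the bit sequence."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each one, -1 for each zero
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))  # complementary error function

balanced = [0, 1] * 5000           # perfectly balanced sequence
biased = [1] * 9000 + [0] * 1000   # heavily biased sequence, tiny p-value
```

A sequence passes at the customary significance level when its p-value is at least 0.01; the "proportion" criterion in the abstract then asks that the fraction of passing sequences across the key population stay above a threshold derived from that level.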


Author(s): Charles Bouillaguet, Florette Martinez, Julia Sauvage

The Permuted Congruential Generators (PCG) are popular conventional (non-cryptographic) pseudo-random generators designed in 2014. They are used by default in the NumPy scientific computing package. Even though they are not of cryptographic strength, their designer stated that predicting their output should nevertheless be "challenging". In this article, we present a practical algorithm that recovers all the hidden parameters and reconstructs the successive internal states of the generator. This enables us to predict the next "random" numbers and to output the seeds of the generator. We have successfully executed the reconstruction algorithm using 512 bytes of challenge input; in the worst case, the process takes 20,000 CPU hours. This reconstruction algorithm makes use of cryptanalytic techniques, both symmetric and lattice-based. In particular, the most computationally expensive part is a guess-and-determine procedure that solves about 2^52 instances of the Closest Vector Problem on a very small lattice.
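PCG combines a plain linear congruential state update with a permutation-based output function, and that structure is what a reconstruction attack must invert. Below is a minimal sketch of the small XSH-RR 64/32 variant from the PCG reference design; NumPy's default generator is the larger 128-bit PCG64, so this is only meant to show the shape of the construction:

```python
MASK64 = (1 << 64) - 1
MULT = 6364136223846793005  # LCG multiplier from the PCG reference design

class PCG32:
    """PCG XSH-RR 64/32: a 64-bit LCG state with a 32-bit permuted output."""

    def __init__(self, seed, seq):
        self.inc = ((seq << 1) | 1) & MASK64  # per-stream increment, must be odd
        self.state = 0
        self.next()
        self.state = (self.state + seed) & MASK64
        self.next()

    def next(self):
        old = self.state
        self.state = (old * MULT + self.inc) & MASK64  # plain LCG step
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF  # xorshift, low 32 bits
        rot = old >> 59                                # top 5 bits pick the rotation
        return ((xorshifted >> rot) | (xorshifted << (32 - rot))) & 0xFFFFFFFF
```

Each output reveals only a rotated xorshift of the high state bits; recovering the hidden state and increment from such truncated, permuted outputs is what motivates the guess-and-determine and lattice (CVP) techniques the article describes.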

