VLSI Architecture for Designing a True Random Number Generator with Modified Parallel Run Length Encoding

Author(s):  
Jayaram. S ◽  
G. Manavaalan ◽  
S. Gunasekaran

Secured communication is a means of providing privacy and security for data in transit. Cryptographic systems have thus become a vital and inevitable platform for achieving data security in day-to-day life, from the generation of one-time passwords and session keys to signature parameters and ephemeral keys. The strength of the encryption depends entirely on the unpredictability of the digital bit streams. This paper focuses on generating true random number sequences in hardware, so as to safeguard the encryption key patterns used in digital communications. The sequences are generated using purely digital components supported by an efficient VLSI architecture. The proposed model is implemented on a Mojo-V3 board using the Xilinx ISE software platform. The generated random sequences then undergo two post-processing operations, viz. Von Neumann correction (VNC) and Parallel Run Length Encoding (PRLE), to eliminate bias in the bit stream and to reduce power dissipation, respectively.
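The Von Neumann correction mentioned in the abstract is a classical debiasing step; a minimal software sketch (the function name is illustrative, not from the paper) looks like this:

```python
def von_neumann_correct(bits):
    """Von Neumann correction: map bit pairs 01 -> 0 and 10 -> 1,
    discarding 00 and 11. The output is unbiased provided the input
    bits are independent, even if each bit is biased toward 0 or 1."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:           # keep only unequal pairs
            out.append(a)    # 01 -> 0, 10 -> 1
    return out

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]
print(von_neumann_correct(raw))  # -> [0, 1, 0]
```

Note the cost this sketch makes visible: each output bit consumes at least two input bits, and equal pairs are thrown away entirely, which is why further stages such as PRLE are attractive in a hardware design.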

2019 ◽  
Vol 2019 ◽  
pp. 1-11
Author(s):  
Hojoong Park ◽  
Yongjin Yeom ◽  
Ju-Sung Kang

We propose a new lightweight BCH code corrector for random number generators in which the bitwise dependence of the output is controllable. The proposed corrector is suitable for lightweight environments, and the degree of dependence among its output bits can be adjusted according to the bias of the input bits. Hitherto, most correctors based on linear codes have been studied with the aim of reducing the bias among the output bits, under the assumption that the biased input bits are independent. However, the output bits of a linear code corrector are inherently dependent even when the input bits are independent, and no previous results address this dependence. The well-known von Neumann corrector has an inefficient compression rate, and the length of its output is nondeterministic; the conditioning components in NIST's standards rely on heavyweight cryptographic algorithms to reduce input bias, making them inappropriate for lightweight environments. We therefore concentrate on linear code correctors and obtain a lightweight BCH code corrector with measurable dependence among the output bits as well as measurable bias. Moreover, we provide simulations to examine our results.
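The idea behind any linear code corrector, including the BCH corrector above, is to compress biased input bits by XORing them according to a binary matrix. A toy sketch with a hypothetical 2x4 matrix (not the paper's BCH code) illustrates the mechanism:

```python
def linear_corrector(bits, G):
    """Linear code corrector over GF(2): output bit j is the XOR
    (inner product mod 2) of the input bits selected by row j of G.
    By the piling-up lemma, XORing t independent bits each of bias e
    yields a bit of bias 2**(t-1) * e**t, so bias shrinks rapidly."""
    return [sum(g * b for g, b in zip(row, bits)) % 2 for row in G]

# Toy 2x4 matrix: each output bit XORs two input bits (4 bits -> 2 bits).
G = [[1, 1, 0, 0],
     [0, 0, 1, 1]]
print(linear_corrector([1, 0, 1, 1], G))  # -> [1, 0]
```

The paper's point is visible even in this toy: the two output bits are computed from overlapping linear combinations of the same input stream in a real code, so they are generally not independent, and the choice of matrix controls that dependence.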


2015 ◽  
Vol 61 (2) ◽  
pp. 199-204 ◽  
Author(s):  
Szymon Łoza ◽  
Łukasz Matuszewski ◽  
Mieczysław Jessa

Abstract: Today, cryptographic security depends primarily on having strong keys and keeping them secret. The keys should be produced by generators of random numbers that are reliable and robust against external manipulation. To hamper various attacks, the generators should be implemented in the same chip as the cryptographic system that consumes the random numbers. This forces a designer to build the random number generator purely digitally. Unfortunately, the sequences obtained are biased and fail many statistical tests, so the output of the random number generator has to be subjected to a transformation called post-processing. In this paper, the hash function SHA-256 is proposed as post-processing for bits produced by a combined random bit generator exploiting jitter observed in ring oscillators (ROs). All components, the random number generator and the SHA-256 core, are implemented in a single Field Programmable Gate Array (FPGA). We expect that the proposed solution, implemented in the same FPGA together with a cryptographic system, is more attack-resistant owing to its many sources of randomness with significantly different nominal frequencies.
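The post-processing step described above can be sketched in software, with Python's `hashlib` standing in for the hardware SHA-256 core (the function name and bit packing are illustrative, not the paper's design):

```python
import hashlib

def sha256_postprocess(raw_bits):
    """Condition a list of raw 0/1 bits (e.g. sampled from ring-oscillator
    jitter) by packing them into bytes and hashing with SHA-256.
    Returns one 256-bit (32-byte) conditioned output block."""
    packed = bytes(
        int("".join(map(str, raw_bits[i:i + 8])), 2)   # 8 bits -> 1 byte
        for i in range(0, len(raw_bits) - 7, 8)
    )
    return hashlib.sha256(packed).digest()

digest = sha256_postprocess([1, 0, 1, 1, 0, 0, 1, 0] * 64)  # 512 raw bits in
print(len(digest))  # -> 32 (bytes, i.e. 256 bits out)
```

Feeding in more raw bits than are emitted (here 512 in, 256 out) is what lets the hash absorb the bias and correlations of the raw stream.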


1992 ◽  
Vol 2 (2) ◽  
pp. 203-212 ◽  
Author(s):  
F. Warren Burton ◽  
Rex L. Page

Abstract: In a functional program, a simple random number generator may generate a lazy list of random numbers. This is fine when the random numbers are consumed sequentially at a single point in the program. However, things are more complicated in a program where random numbers are used at many locations, such as in a large simulation. The programmer should not need to worry about providing separate generators with a unique seed at each point where random numbers are used. At the same time, the programmer should not need to coordinate the use of a single stream of random numbers in many parts of the program, which can be particularly difficult with lazy evaluation or parallel processing.

We discuss several techniques for distributing random numbers to various parts of a program, and some methods of allowing different program components to evaluate random numbers locally. We then propose a new approach in which a random number sequence can be split at a random point to produce a pair of random number sequences that can be used independently at different points in the computation.

The approach can also be used in distributed procedural programs, where it is desirable to avoid dealing with a single source of random numbers. It has the added advantage of producing repeatable results, as might be needed in debugging, for example.
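The splitting idea above can be sketched with a tiny hash-based generator; the class and its state-update scheme are illustrative stand-ins for the authors' construction, chosen only to show the interface (`next` for sequential use, `split` for independent streams) and the repeatability property:

```python
import hashlib

class SplittableGen:
    """Toy splittable generator: deterministic, hash-chained state."""

    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def next(self) -> int:
        """Return the next 64-bit value and advance the state."""
        self.state = hashlib.sha256(self.state + b"next").digest()
        return int.from_bytes(self.state[:8], "big")

    def split(self) -> "SplittableGen":
        """Produce a child generator usable independently of the parent.
        Both resulting streams are deterministic, so runs are repeatable."""
        child = SplittableGen(self.state + b"child")
        self.state = hashlib.sha256(self.state + b"parent").digest()
        return child

g = SplittableGen(b"seed")
h = g.split()                 # h and g can now feed different components
print(g.next() != h.next())   # -> True (the two streams diverge)
```

Because splitting is deterministic, a simulation distributed across many components reproduces exactly the same numbers on every run, which is the debugging advantage the abstract mentions.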


The fourth chapter deals with the use of asynchronous cellular automata for constructing high-quality pseudo-random number generators, and a model of such a generator is proposed. The asynchronous cellular automata are constructed using the von Neumann and Moore neighborhoods. Each cell of such an automaton can be in one of two states: an information state and an active state. At any time step there is exactly one active cell in the automaton; a cell performs its local function only while it is active, and at each time step the active cell transfers its active state to one of the cells in its neighborhood. An algorithm for the operation of a pseudo-random number generator based on an asynchronous cellular automaton is described, as well as an algorithm for the operation of a single cell. A hardware implementation of such a generator is proposed, and several variants of cell construction are considered.
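The single-active-cell mechanism described above can be sketched on a one-dimensional ring (a simplification of the chapter's von Neumann/Moore neighborhoods; the update rule and activity-passing policy here are illustrative, not the chapter's):

```python
def step(cells, active):
    """One step of a toy asynchronous CA: only the active cell updates,
    XORing its information bit with its two ring neighbours; activity
    then passes to the right neighbour."""
    n = len(cells)
    left, right = cells[(active - 1) % n], cells[(active + 1) % n]
    cells[active] ^= left ^ right        # local function, active cell only
    return (active + 1) % n              # hand the active state along

def generate(cells, steps):
    """Collect one pseudo-random bit per step: the value of whichever
    cell is active after the step."""
    active, out = 0, []
    for _ in range(steps):
        active = step(cells, active)
        out.append(cells[active])
    return out

print(generate([1, 0, 0, 1, 1], 8))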


2015 ◽  
Vol 25 (13) ◽  
pp. 1550188 ◽  
Author(s):  
Yuansheng Liu ◽  
Hua Fan ◽  
Eric Yong Xie ◽  
Ge Cheng ◽  
Chengqing Li

Since John von Neumann suggested utilizing the Logistic map as a random number generator in 1947, a great number of encryption schemes based on the Logistic map and/or its variants have been proposed. This paper re-evaluates the security of an image cipher based on transformed Logistic maps and proves that the image cipher can be deciphered efficiently under two different conditions: (1) two pairs of known plain-images and the corresponding cipher-images, with computational complexity of [Formula: see text]; (2) two pairs of chosen plain-images and the corresponding cipher-images, with computational complexity of [Formula: see text], where [Formula: see text] is the number of pixels in the plain-image. In contrast, the previous deciphering method requires 87 pairs of chosen plain-images and the corresponding cipher-images, with computational complexity of [Formula: see text]. In addition, three other security flaws present in most Logistic-map-based ciphers are also reported.
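The generator underlying this family of ciphers is the Logistic map x_{n+1} = r·x_n·(1 − x_n), chaotic at r = 4. A minimal sketch of bit extraction by thresholding each iterate (illustrative only; this is not the attacked cipher's exact keystream derivation):

```python
def logistic_bits(x0, n, r=4.0):
    """Iterate the Logistic map x <- r*x*(1-x) from seed x0 and emit
    one bit per iterate: 1 if x >= 0.5, else 0."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

print(logistic_bits(0.3, 8))
```

The sketch also hints at why such ciphers keep falling to known/chosen-plaintext attacks: the keystream is a deterministic function of a single low-entropy seed, so recovering a short stretch of it constrains the whole sequence.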


Author(s):  
ALASTAIR A. ABBOTT ◽  
CRISTIAN S. CALUDE ◽  
KARL SVOZIL

In this paper we propose a quantum random number generator (QRNG) that uses an entangled photon pair in a Bell singlet state and is certified explicitly by value indefiniteness. While 'true randomness' is a mathematical impossibility, certification by value indefiniteness ensures that the quantum random bits are incomputable in the strongest sense. This is the first QRNG setup in which a physical principle (Kochen–Specker value indefiniteness) guarantees that no single quantum bit produced can be classically computed (reproduced and validated), which is the mathematical form of bitwise physical unpredictability.

We discuss the effects of various experimental imperfections in detail, in particular those related to detector efficiencies, context alignment, and temporal correlations between bits. The analysis is very relevant for the construction of any QRNG based on beam splitters. By measuring the two entangled photons in maximally misaligned contexts and exploiting the fact that two bitstrings, rather than just one, are obtained, more efficient and robust unbiasing techniques can be applied. We propose a robust and efficient procedure based on XORing the bitstrings together, essentially using one as a one-time pad for the other, to extract random bits in the presence of experimental imperfections, as well as a more efficient modification of the von Neumann procedure for the same task. We also discuss some open problems.
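The XOR-based extraction step described above is simple enough to sketch directly (the function name is illustrative):

```python
def xor_extract(a, b):
    """Bitwise XOR of two equal-length bitstrings, using one as a
    one-time pad for the other. If the strings are independent and
    either one is uniform, the result is uniform; in general XOR
    reduces bias (piling-up lemma)."""
    return [x ^ y for x, y in zip(a, b)]

print(xor_extract([1, 0, 1, 1], [1, 1, 0, 1]))  # -> [0, 1, 1, 0]
```

Unlike the von Neumann procedure, this keeps one output bit per input pair rather than discarding equal pairs, which is the efficiency advantage the authors exploit when two bitstrings are available.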

