Hardware Acceleration of Bayesian Neural Networks Using RAM Based Linear Feedback Gaussian Random Number Generators

Author(s):  
Ruizhe Cai ◽  
Ao Ren ◽  
Luhao Wang ◽  
Massoud Pedram ◽  
Yanzhi Wang
2019 ◽  
Vol 24 (12) ◽  
pp. 9243-9256
Author(s):  
Jordan J. Bird ◽  
Anikó Ekárt ◽  
Diego R. Faria

Abstract: In this work, we argue that pseudorandom and quantum-random number generators (PRNG and QRNG) inexplicably affect the performance and behaviour of various machine learning models that require a random input, implications that had not been explored in soft computing before this work. We use a CPU and a QPU to generate random numbers for multiple machine learning techniques. Random numbers are employed in the random initial weight distributions of dense and convolutional neural networks, where the results show a profound difference in learning patterns between the two sources. In 50 dense neural networks (25 PRNG/25 QRNG), QRNG improves over PRNG by +0.1% for accent classification and by +2.82% for mental state EEG classification. In 50 convolutional neural networks (25 PRNG/25 QRNG), the MNIST and CIFAR-10 problems are benchmarked; on MNIST the QRNG starts at a higher accuracy than the PRNG but ultimately exceeds it by only 0.02%, while on CIFAR-10 the QRNG outperforms the PRNG by +0.92%. The n-random split of a Random Tree is extended towards a new Quantum Random Tree (QRT) model, whose classification abilities differ from those of its classical counterpart; 200 trees are trained and compared (100 PRNG/100 QRNG). On the accent classification data set, a QRT appears inferior to an RT, performing worse on average by −0.12%. This pattern is also seen in the EEG classification problem, where a QRT performs worse than an RT by −0.28%. Finally, the QRT is ensembled into a Quantum Random Forest (QRF), which also shows a noticeable effect when compared to the standard Random Forest (RF). Ensembles of ten to 100 trees are benchmarked for the accent and EEG classification problems. In accent classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.14% accuracy. In EEG classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.08% but is considerably more complex, requiring twice as many trees in the committee. All differences are observed to be situationally positive or negative, and are thus likely data-dependent in their observed functional behaviour.
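As a rough illustration of the experimental setup described in the abstract above (not the authors' published code), the Python sketch below initialises a dense layer's weight matrix from an externally supplied stream of uniform random numbers, so the same model can be seeded from either a classical PRNG or a quantum source. The function names and the Glorot-style scaling are assumptions made for illustration, and `quantum_uniform` is a hypothetical stub standing in for a QRNG/QPU service.

```python
import numpy as np

def init_dense_weights(shape, random_uniform):
    """Initialise a (fan_in, fan_out) weight matrix from an external uniform source.

    `random_uniform(n)` must return n floats in [0, 1); it can be backed by a
    classical PRNG or by numbers fetched from a quantum random number service.
    """
    fan_in, fan_out = shape
    u = np.asarray(random_uniform(fan_in * fan_out))   # raw uniform draws
    limit = np.sqrt(6.0 / (fan_in + fan_out))          # Glorot-style symmetric range
    return (2.0 * u - 1.0).reshape(shape) * limit

# Classical pseudo-random source.
prng = np.random.default_rng(seed=42)
w_prng = init_dense_weights((784, 128), lambda n: prng.random(n))

# Hypothetical quantum source: replace with calls to a QRNG API / QPU backend.
def quantum_uniform(n):
    raise NotImplementedError("fetch n uniform samples from a QRNG service")
```

The point of routing all draws through a single `random_uniform` callback is that the two experimental arms differ only in the entropy source, keeping the initialisation scheme itself identical.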


2021 ◽  
Author(s):  
Kayvan Tirdad

Pseudo-random number generators (PRNGs) are among the most important components in security and cryptography applications. We propose an application of Hopfield Neural Networks (HNN) as a pseudo-random number generator. This research is based on a unique property of HNN, namely its unpredictable behaviour under certain conditions. We also propose an application of Fuzzy Hopfield Neural Networks (FHNN) as a pseudo-random number generator. We compare the main features of ideal random number generators with those of our proposed PRNGs, and we use the battery of statistical tests developed by the National Institute of Standards and Technology (NIST) to measure the performance of the proposed HNN and FHNN generators. We also measure the performance of other standard PRNGs and compare the results with the HNN and FHNN PRNGs. We show that the proposed HNN and FHNN generators perform well compared to other PRNGs.
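The abstract does not spell out the network construction, but a minimal Python sketch of the general idea, using the state trajectory of a Hopfield-style network with asymmetric weights (which need not converge) as a bit source, might look like the following. The weight matrix, update order, and bit-extraction rule here are illustrative assumptions, not the proposed HNN/FHNN design, and no claim is made about the statistical quality of the output.

```python
import numpy as np

def hopfield_prng_bits(n_bits, n_neurons=64, seed=1):
    """Illustrative Hopfield-style bit generator (not the paper's construction)."""
    rng = np.random.default_rng(seed)                # used only to build the weights and initial state
    w = rng.standard_normal((n_neurons, n_neurons))  # asymmetric weights -> no guaranteed fixed point
    np.fill_diagonal(w, 0.0)
    state = np.sign(rng.standard_normal(n_neurons))  # bipolar (+1/-1) initial state

    bits = []
    for _ in range(n_bits):
        for i in range(n_neurons):                   # one asynchronous update sweep in fixed order
            state[i] = 1.0 if w[i] @ state >= 0.0 else -1.0
        bits.append(int((state > 0).sum() % 2))      # emit the parity of active neurons as one bit
    return bits

print(hopfield_prng_bits(32))
```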


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
David H. K. Hoe ◽  
Jonathan M. Comer ◽  
Juan C. Cerda ◽  
Chris D. Martinez ◽  
Mukul V. Shirvaikar

Cellular computing represents a new paradigm for implementing high-speed, massively parallel machines. Cellular automata (CA), which consist of an array of locally connected processing elements, are a basic form of cellular-based architecture. The use of field-programmable gate arrays (FPGAs) for implementing CA accelerators has shown promising results. This paper investigates the design of CA-based pseudo-random number generators (PRNGs) using an FPGA platform. To improve the quality of the random numbers that are generated, the basic CA structure is enhanced in two ways. First, the addition of a super-rule to each CA cell is considered. The resulting self-programmable CA (SPCA) uses the super-rule to determine when to make a dynamic rule change in each CA cell. The super-rule takes its inputs from neighboring cells and can itself be considered a second CA working in parallel with the main CA. When implemented on an FPGA, the use of lookup tables in each logic cell removes any restrictions on how the super-rules should be defined. Second, a hybrid configuration is formed by combining a CA with a linear feedback shift register (LFSR). This is advantageous for FPGA designs due to the compactness of LFSR implementations. A standard software package for statistically evaluating the quality of random number sequences, known as Diehard, is used to validate the results. Both the SPCA and the hybrid CA/LFSR were found to pass all of the Diehard tests.
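As a rough software model of the hybrid configuration described above (the paper targets an FPGA implementation; the specific CA rule, LFSR tap positions, and XOR combination used here are assumptions made for illustration), a one-dimensional CA stage can be combined with an LFSR stage as follows in Python:

```python
def lfsr_step(state, taps=(0, 2, 3, 5), n=16):
    """One Fibonacci LFSR step over n bits; tap positions are illustrative."""
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    return ((state >> 1) | (fb << (n - 1))) & ((1 << n) - 1)

def ca_step(cells, rule=30):
    """One update of a 1-D, radius-1 CA with periodic boundaries (rule 30 shown)."""
    n = len(cells)
    out = []
    for i in range(n):
        neigh = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neigh) & 1)
    return out

def hybrid_prng(n_words, ca_width=16, seed=0xACE1):
    """XOR the CA row with the LFSR state each cycle to form one output word."""
    cells = [(seed >> i) & 1 for i in range(ca_width)]
    lfsr = seed
    words = []
    for _ in range(n_words):
        cells = ca_step(cells)
        lfsr = lfsr_step(lfsr)
        ca_word = sum(b << i for i, b in enumerate(cells))
        words.append(ca_word ^ lfsr)
    return words

print([hex(w) for w in hybrid_prng(4)])
```

In hardware, the appeal of this arrangement is that the CA row and the LFSR advance in parallel each clock cycle and the combining stage is a single XOR per bit; the software loop above only mimics that behaviour sequentially.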

