Fuzzy Flip-Flop Based Neural Network as a Function Approximator

Author(s):  
Rita Lovassy ◽  
Laszlo T. Koczy ◽  
Laszlo Gal
2021 ◽  
Vol 2021 (4) ◽  
Author(s):  
Sayantan Choudhury ◽  
Ankan Dutta ◽  
Debisree Ray

Abstract In this work, our prime objective is to study the phenomena of quantum chaos and complexity in the machine learning dynamics of Quantum Neural Networks (QNNs). A Parameterized Quantum Circuit (PQC) in the hybrid quantum-classical framework is introduced as a universal function approximator to perform optimization with Stochastic Gradient Descent (SGD). We employ a statistical and differential geometric approach to study the learning theory of the QNN. The evolution of the parametrized unitary operators is correlated with the trajectory of the parameters in the Diffusion metric. We establish parametrized versions of Quantum Complexity and Quantum Chaos in terms of physically relevant quantities, which are essential not only for determining stability but also for providing a significant lower bound on the generalization capability of the QNN. We explicitly prove that the generalization capability of the QNN is maximized when the system executes limit cycles or oscillations in phase space. Finally, we determine a bound on the generalization capability in terms of the variance of the QNN's parameters in the steady-state condition using the Cauchy-Schwarz inequality.
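The central setup of this abstract, a PQC acting as a universal function approximator trained with SGD, can be illustrated with a toy simulation. The sketch below is an assumption-laden reconstruction, not the authors' circuit: a single qubit with data re-uploading rotations Ry(theta_l + omega_l * x), whose <Z> expectation is fitted to sin(x) by stochastic gradient descent with finite-difference gradients.

```python
import numpy as np

# Minimal sketch (an illustrative assumption, not the paper's model) of a
# parameterized quantum circuit (PQC) used as a function approximator and
# trained with SGD. A single qubit passes through layers Ry(theta_l + omega_l*x);
# the model output is the expectation value <Z> of the final state. Since Ry
# rotations about one axis compose additively, this toy realizes
# cos(sum(theta_l) + sum(omega_l)*x), which suffices for sinusoidal targets.

rng = np.random.default_rng(0)

def ry(phi):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(phi / 2.0), np.sin(phi / 2.0)
    return np.array([[c, -s], [s, c]])

def pqc_output(x, thetas, omegas):
    """Run the circuit on |0> and return <Z> = P(0) - P(1)."""
    state = np.array([1.0, 0.0])
    for th, om in zip(thetas, omegas):
        state = ry(th + om * x) @ state
    return state[0] ** 2 - state[1] ** 2

# Target function and training data.
xs = np.linspace(-np.pi, np.pi, 40)
ys = np.sin(xs)

L = 4                       # number of re-uploading layers
thetas = rng.normal(size=L)
omegas = rng.normal(size=L)
lr, eps = 0.1, 1e-4         # learning rate, finite-difference step

for epoch in range(300):
    i = rng.integers(len(xs))             # stochastic sample for SGD
    x, y = xs[i], ys[i]
    base = pqc_output(x, thetas, omegas)
    err = base - y
    grad_t, grad_o = np.zeros(L), np.zeros(L)
    for l in range(L):
        e = np.eye(L)[l]
        # Gradient of 0.5 * err^2 via forward differences
        # (a parameter-shift rule would also work here).
        grad_t[l] = err * (pqc_output(x, thetas + eps * e, omegas) - base) / eps
        grad_o[l] = err * (pqc_output(x, thetas, omegas + eps * e) - base) / eps
    thetas -= lr * grad_t
    omegas -= lr * grad_o

mse = np.mean([(pqc_output(x, thetas, omegas) - y) ** 2
               for x, y in zip(xs, ys)])
print(f"final MSE: {mse:.4f}")
```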


2008 ◽  
Vol 9 (S1) ◽  
Author(s):  
Ikuko Nishikawa ◽  
Masayoshi Nakaumura ◽  
Yoshiki Igarashi ◽  
Tomoki Kazawa ◽  
Hidetoshi Ikeno ◽  
...  

2012 ◽  
Vol 433-440 ◽  
pp. 5647-5653 ◽  
Author(s):  
Xiao Jun Li ◽  
Lin Li

There are many models derived from the bio-inspired artificial neural network (ANN). Among them, the multi-layer perceptron (MLP) is widely used as a universal function approximator. With the development of EDA tools and recent research work, rapid and convenient methods are available to generate hardware implementations of MLPs on FPGAs through pre-designed IP cores. At the same time, we focus on exploiting the inherent parallelism of neural networks. In this paper, we first propose the hardware architecture of the modular IP cores. Then, a parallel MLP is devised as an example. Finally, conclusions are drawn.
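As a software-level companion to the abstract above, the following minimal NumPy sketch shows an MLP acting as a universal function approximator; the FPGA IP-core architecture itself is not reproduced, and the network size, target function, and training loop are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an MLP as a universal function approximator (software
# reference model only, not the paper's hardware design). One hidden tanh
# layer, trained by plain gradient descent on mean squared error.

rng = np.random.default_rng(1)

# Training data: approximate f(x) = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
Y = np.sin(X)

H = 16                                   # hidden units
W1 = rng.normal(scale=0.5, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1))
b2 = np.zeros(1)
lr = 0.05

for epoch in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)             # (N, H)
    out = h @ W2 + b2                    # (N, 1)
    err = out - Y
    # Backward pass (gradients of the MSE loss).
    g_out = 2.0 * err / len(X)
    g_W2 = h.T @ g_out
    g_b2 = g_out.sum(axis=0)
    g_h = g_out @ W2.T * (1.0 - h ** 2)  # tanh derivative
    g_W1 = X.T @ g_h
    g_b1 = g_h.sum(axis=0)
    # Parameter update.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final MSE: {np.mean(err ** 2):.5f}")
```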


2009 ◽  
Vol 3 ◽  
Author(s):  
Ikuko Nishikawa ◽  
Akira Takashima ◽  
Shigehiro Namiki ◽  
Tomoki Kazawa ◽  
Stephan Shuichi Haupt ◽  
...  

Author(s):  
Takaaki Kobayashi ◽  
Takeshi Shibuya ◽  
Masahiko Morita

When applying reinforcement learning (RL) algorithms such as Q-learning to real-world applications, we must consider the influence of sensor noise. The simplest way to reduce such noise is to additionally use other types of sensors, but this may enlarge the state space and probably increase redundancy. Conventional value-function approximators used in RL over continuous state-action spaces do not deal appropriately with such situations. The selective desensitization neural network (SDNN) has high generalization ability and robustness against noise and redundant input. We therefore propose an SDNN-based value-function approximator for Q-learning in continuous state-action space and evaluate its performance in terms of robustness against redundant input and sensor noise. Results show that our proposal is strongly robust against noise and redundant input, and enables the agent to take better actions by using additional inputs without degrading learning efficiency. These properties are highly advantageous in real-world applications such as robotic systems.
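To make the setting concrete, here is a minimal sketch of Q-learning with a value-function approximator on a continuous state space with noisy observations. It substitutes generic radial-basis-function features with a linear Q-model for the authors' SDNN, and the 1-D goal-reaching environment, noise level, and hyperparameters are invented for illustration.

```python
import numpy as np

# Minimal sketch of Q-learning with a value-function approximator over a
# continuous state space. NOTE: generic Gaussian RBF features with a linear
# Q-model stand in for the authors' SDNN; the toy 1-D "reach the goal" task
# below is an illustrative assumption.

rng = np.random.default_rng(2)

CENTERS = np.linspace(0.0, 1.0, 11)   # RBF centers over the state interval
WIDTH = 0.1
ACTIONS = np.array([-0.05, 0.05])     # move left / move right
NOISE = 0.02                          # sensor noise on observations

def features(s):
    """Gaussian RBF features of a scalar state."""
    return np.exp(-((s - CENTERS) ** 2) / (2 * WIDTH ** 2))

w = np.zeros((len(ACTIONS), len(CENTERS)))  # one weight vector per action
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    s = rng.uniform(0.0, 0.2)                # start near the left end
    for step in range(200):
        obs = s + rng.normal(scale=NOISE)    # noisy sensor reading
        phi = features(obs)
        q = w @ phi
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q))
        s_next = np.clip(s + ACTIONS[a], 0.0, 1.0)
        done = s_next >= 0.95                # goal at the right end
        r = 1.0 if done else -0.01
        phi_next = features(s_next + rng.normal(scale=NOISE))
        target = r + (0.0 if done else gamma * np.max(w @ phi_next))
        w[a] += alpha * (target - q[a]) * phi   # TD(0) update
        s = s_next
        if done:
            break

print("Q(s=0.5, actions):", w @ features(0.5))
```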

