Vector Field Approximation by Model Inclusive Learning of Neural Networks

Author(s):  
Yasuaki Kuroe ◽  
Hajimu Kawakami
2011 ◽  
Vol 20 (05) ◽  
pp. 745-756 ◽  
Author(s):  
FRANCISCO DIEGO MAZZITELLI

We discuss the renormalization procedure for quantum scalar fields with modified dispersion relations in curved spacetimes. We consider two different ways of introducing modified dispersion relations: through the interaction with a dynamical timelike vector field, as in the context of the Einstein–Aether theory, and by explicitly breaking the covariance of the theory, as in Hořava–Lifshitz gravity. Working in the weak field approximation, we show that the general structure of the counterterms depends on the UV behavior of the dispersion relations and on the mechanism chosen to introduce them.
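
For orientation, modified dispersion relations of the kind considered in Einstein–Aether and Hořava–Lifshitz settings are typically of the generalized form below; the coefficients and truncation order are illustrative assumptions, not necessarily those used in the paper:

\[
\omega^{2} = m^{2} + k^{2} + \alpha_{2}\,\frac{k^{4}}{M^{2}} + \alpha_{3}\,\frac{k^{6}}{M^{4}} + \cdots,
\]

where M is the Lorentz-violation scale. It is the highest power of k retained in the UV that governs the behavior on which, according to the abstract, the structure of the counterterms depends.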


2011 ◽  
Vol 2 (1) ◽  
pp. 1-16 ◽  
Author(s):  
H. E. Psillakis ◽  
M. A. Christodoulou ◽  
T. Giotis ◽  
Y. Boutalis

In this paper, a new methodology is proposed for deterministic learning with neural networks. Using an observer that employs the integral of the sign of the error term, asymptotic estimation of the respective nonlinear vector field is achieved. Patchy Neural Networks (PNNs) are introduced to identify the unknown nonlinearity from the observer’s output and the state measurements. The proposed scheme achieves learning with a single pass over the respective patches and does not require the standard persistency-of-excitation conditions. Furthermore, the PNN weights are updated algebraically, significantly reducing the computational load of learning. Simulation results for a Duffing oscillator and a fuzzy cognitive network illustrate the effectiveness of the proposed approach.
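
As a rough illustration of the two ingredients named in the abstract — an observer driven by the integral of the sign of the estimation error, and patch weights obtained algebraically in a single pass — the following minimal Python sketch treats a scalar plant with an unknown Duffing-like stiffness term. The plant, the gains k1 and k2, the patch layout, and the mean-per-patch weight rule are all assumptions made for illustration, not the authors' formulation.

# Minimal sketch: observer-based estimation of an unknown scalar vector field,
# followed by a "patchy" piecewise-constant fit with algebraic weights.
import numpy as np

f_true = lambda x: -x - x**3          # unknown nonlinearity (Duffing-like, assumed)
u = lambda t: 2.0 * np.sin(t)         # known exciting input (assumed)

dt, T = 1e-3, 60.0
k1, k2 = 20.0, 5.0                    # hypothetical observer gains
x, x_hat, nu_int = 0.5, 0.0, 0.0

samples_x, samples_f = [], []
for i in range(int(T / dt)):
    t = i * dt
    e = x - x_hat                     # estimation error
    nu_int += k2 * np.sign(e) * dt    # integral-of-sign-of-error term
    nu = k1 * e + nu_int              # tracks f_true(x(t)) along the trajectory
    if t > 10.0:                      # discard the observer transient
        samples_x.append(x)
        samples_f.append(nu)
    x     += (f_true(x) + u(t)) * dt  # plant (forward Euler)
    x_hat += (nu        + u(t)) * dt  # observer

# Patch weights: one pass over the data, each weight set algebraically as the
# mean of the field estimates whose state falls in that patch.
samples_x, samples_f = np.array(samples_x), np.array(samples_f)
edges = np.linspace(samples_x.min(), samples_x.max(), 21)   # 20 patches
idx = np.clip(np.digitize(samples_x, edges) - 1, 0, 19)
weights = np.array([samples_f[idx == j].mean() if np.any(idx == j) else 0.0
                    for j in range(20)])

centres = 0.5 * (edges[:-1] + edges[1:])
for c, w in zip(centres, weights):
    print(f"x = {c:+.2f}   f_hat = {w:+.3f}   f_true = {f_true(c):+.3f}")

In the paper the approach is demonstrated on a Duffing oscillator and a fuzzy cognitive network; the scalar example above is only meant to show the data flow from observer output to algebraically computed patch weights.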


2001 ◽  
Vol 11 (01) ◽  
pp. 169-177 ◽  
Author(s):  
CHIH-WEN SHIH

This work investigates a class of lattice dynamical systems originating from cellular neural networks. In the vector field of this class, each component of the state vector and the output vector is related through a sigmoidal nonlinear output function. For two types of sigmoidal output functions, Liapunov functions have been constructed in the literature. Complete stability has been studied for these systems using LaSalle's invariance principle on the Liapunov functions. The purpose of this presentation is twofold. The first is to construct Liapunov functions for more general sigmoidal output functions. The second is to extend the interaction parameters to a more general class, using an approach of Fiedler and Gedeon. The presentation also emphasizes complete stability in the case where the equilibria are not isolated, as occurs for the standard cellular neural networks.
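
For reference, for a cellular neural network written in the standard form \(\dot x_i = -x_i + \sum_j a_{ij}\,y_j + b_i\) with output \(y_i = f(x_i)\), the Liapunov functions alluded to are of the familiar energy type. The version below assumes a strictly increasing (invertible) sigmoid and a symmetric interconnection \(a_{ij} = a_{ji}\); it illustrates the construction but is not the more general one developed in the paper:

\[
E(x) = -\frac{1}{2}\sum_{i,j} a_{ij}\, y_i y_j - \sum_i b_i\, y_i + \sum_i \int_{0}^{y_i} f^{-1}(s)\, ds,
\qquad
\dot E = -\sum_i f'(x_i)\,\dot x_i^{2} \le 0 .
\]

Applying LaSalle's invariance principle to E confines every omega-limit set to the set of equilibria, which is the route to complete stability even when the equilibria are not isolated.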


1992 ◽  
Vol 67 (6) ◽  
pp. 491-500 ◽  
Author(s):  
Ferdinando A. Mussa-Ivaldi ◽  
Simon F. Giszter
