On the locality of the natural gradient for learning in deep Bayesian networks

Author(s): Nihat Ay

Abstract: We study the natural gradient method for learning in deep Bayesian networks, including neural networks. There are two natural geometries associated with such learning systems consisting of visible and hidden units. One geometry is related to the full system, the other one to the visible sub-system. These two geometries imply different natural gradients. In a first step, we demonstrate a great simplification of the natural gradient with respect to the first geometry, due to locality properties of the Fisher information matrix. This simplification does not directly translate to a corresponding simplification with respect to the second geometry. We develop the theory for studying the relation between the two versions of the natural gradient and outline a method for simplifying the natural gradient with respect to the second geometry based on the first one. This method suggests incorporating a recognition model as an auxiliary model for the efficient application of the natural gradient method in deep networks.
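For reference, both geometries lead to an update of the generic natural-gradient form theta <- theta - eta * F(theta)^{-1} * grad L(theta), where F is the Fisher information matrix of the chosen geometry. The following minimal numpy sketch shows such a step with a damped empirical Fisher estimate; the function names are illustrative, and the sketch does not implement the locality-based simplification developed in the paper.

    import numpy as np

    def fisher_from_scores(scores, damping=1e-3):
        # Empirical Fisher estimate: average outer product of per-sample score
        # vectors (gradients of the log-likelihood), plus a small damping term.
        n, d = scores.shape
        return scores.T @ scores / n + damping * np.eye(d)

    def natural_gradient_step(theta, grad, scores, lr=0.1):
        # One damped natural-gradient update: theta <- theta - lr * F^{-1} grad.
        fisher = fisher_from_scores(scores)
        return theta - lr * np.linalg.solve(fisher, grad)

    # Toy usage with random score vectors (purely illustrative).
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(64, 5))   # 64 samples, 5 parameters
    theta = natural_gradient_step(np.zeros(5), rng.normal(size=5), scores)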

Author(s): Saket Tiwari, Philip S. Thomas

The recently proposed option-critic architecture (Bacon, Harb, and Precup 2017) provides a stochastic policy gradient approach to hierarchical reinforcement learning. Specifically, it provides a way to estimate the gradient of the expected discounted return with respect to parameters that define a finite number of temporally extended actions, called options. In this paper we show how the option-critic architecture can be extended to estimate the natural gradient (Amari 1998) of the expected discounted return. To this end, the central questions that we consider in this paper are: 1) what is the definition of the natural gradient in this context, 2) what is the Fisher information matrix associated with an option's parameterized policy, 3) what is the Fisher information matrix associated with an option's parameterized termination function, and 4) how can a compatible function approximation approach be leveraged to obtain natural gradient estimates for both the parameterized policy and parameterized termination functions of an option, with per-time-step time and space complexity linear in the total number of parameters. Based on the answers to these questions, we introduce the natural option-critic algorithm. Experimental results show improvement over the vanilla gradient approach.
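A standard ingredient behind question 4 is the compatible-function-approximation result for a single parameterized policy: if the advantage is fit by least squares onto the compatible features psi(s, a) = grad_theta log pi_theta(a|s), the fitted weight vector is itself an estimate of the natural policy gradient. The numpy sketch below shows that batch computation; it is not the natural option-critic algorithm of the paper, which extends the idea to option policies and termination functions with per-time-step complexity linear in the number of parameters.

    import numpy as np

    def compatible_natural_gradient(log_prob_grads, advantages, ridge=1e-6):
        # Fit advantages with compatible features psi = grad_theta log pi(a|s);
        # the least-squares weights are an estimate of the natural policy
        # gradient (ridge regularization keeps the Gram matrix invertible).
        psi = np.asarray(log_prob_grads)   # shape (T, d): one feature row per step
        adv = np.asarray(advantages)       # shape (T,)
        gram = psi.T @ psi + ridge * np.eye(psi.shape[1])
        return np.linalg.solve(gram, psi.T @ adv)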


2011, Vol 2011, pp. 1-9
Author(s): Michael R. Bastian, Jacob H. Gunther, Todd K. Moon

Adaptive natural gradient learning avoids singularities in the parameter space of multilayer perceptrons. However, it requires many more parameters than ordinary backpropagation, in the form of the Fisher information matrix. This paper describes a new approach to natural gradient learning that uses a smaller Fisher information matrix. It also uses a prior distribution on the neural network parameters and an annealed learning rate. While this new approach is computationally simpler, its performance is comparable to that of adaptive natural gradient learning.
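A generic numpy sketch of the three ingredients named above (a smaller, here diagonal, Fisher estimate, a Gaussian prior on the weights, and an annealed learning rate) is given below; the exact reduced Fisher matrix and prior used in the paper are not reproduced.

    import numpy as np

    def annealed_lr(step, lr0=0.5, tau=100.0):
        # Annealed learning rate: decays as 1 / (1 + step / tau).
        return lr0 / (1.0 + step / tau)

    def map_natural_step(theta, grad_loss, diag_fisher, step,
                         prior_var=10.0, damping=1e-3):
        # One update combining a Gaussian prior on the weights (MAP term),
        # a diagonal Fisher estimate in place of the full matrix, and an
        # annealed learning rate.
        grad_map = grad_loss + theta / prior_var
        return theta - annealed_lr(step) * grad_map / (diag_fisher + damping)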


2000, Vol 12 (6), pp. 1399-1409
Author(s): Shun-ichi Amari, Hyeyoung Park, Kenji Fukumizu

The natural gradient learning method is known to have ideal performance for on-line training of multilayer perceptrons. It avoids plateaus, which give rise to slow convergence of the backpropagation method. It is Fisher efficient, whereas the conventional method is not. However, implementing the method requires calculating the Fisher information matrix and its inverse, which is practically very difficult. This article proposes an adaptive method of directly obtaining the inverse of the Fisher information matrix. It generalizes the adaptive Gauss-Newton algorithms and provides a solid theoretical justification for them. Simulations show that the proposed adaptive method works very well for realizing natural gradient learning.
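The adaptive scheme maintains a running estimate of the inverse Fisher matrix and refines it with a rank-one update at each sample, so no explicit inversion is ever performed. A minimal numpy sketch of an update of this form (the paper additionally schedules the adaptation rate over time) is:

    import numpy as np

    def update_inverse_fisher(inv_fisher, score, eps=1e-3):
        # Rank-one adaptive update of the running inverse-Fisher estimate:
        #   G_inv <- (1 + eps) * G_inv - eps * (G_inv s) (G_inv s)^T,
        # where s is the current score vector and eps is a small adaptation rate.
        g = inv_fisher @ score
        return (1.0 + eps) * inv_fisher - eps * np.outer(g, g)

    def natural_gradient(inv_fisher, grad):
        # Natural gradient computed with the running inverse-Fisher estimate.
        return inv_fisher @ grad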


2015, Vol 27 (2), pp. 481-505
Author(s): Junsheng Zhao, Haikun Wei, Chi Zhang, Weiling Li, Weili Guo, ...

Radial basis function (RBF) networks are among the most widely used models for function approximation and classification. The learning process of RBF networks exhibits several problematic behaviors, such as slow learning speed and the existence of plateaus. The natural gradient learning method can overcome these disadvantages effectively: it accelerates the learning dynamics and avoids plateaus. In this letter, we assume that the probability density function (pdf) of the input and the activation function are Gaussian. First, we introduce natural gradient learning to RBF networks and give explicit forms of the Fisher information matrix and its inverse. Second, since it is difficult to calculate the Fisher information matrix and its inverse when the number of hidden units and the dimension of the input are large, we introduce the adaptive method to the natural gradient learning algorithms. Finally, we give an explicit form of the adaptive natural gradient learning algorithm and compare it to the conventional gradient descent method. Simulations show that the proposed adaptive natural gradient method, which avoids the plateaus effectively, performs well when RBF networks are used for nonlinear function approximation.
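The underlying model is a Gaussian RBF network y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2)). The numpy sketch below gives the forward pass and the ordinary squared-error gradients with respect to the weights and centers; these are the quantities that the (adaptive) natural gradient method preconditions with the inverse Fisher information matrix. Per-unit widths and the omission of width gradients are simplifications for illustration.

    import numpy as np

    def rbf_forward(x, centers, widths, weights):
        # Gaussian RBF network: y = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2)).
        diffs = x[None, :] - centers                              # shape (m, d)
        acts = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * widths ** 2))
        return weights @ acts, acts

    def squared_error_grads(x, target, centers, widths, weights):
        # Ordinary gradients of 0.5 * (y - target)^2 w.r.t. weights and centers.
        y, acts = rbf_forward(x, centers, widths, weights)
        err = y - target
        grad_w = err * acts
        grad_c = (err * (weights * acts)[:, None]
                  * (x[None, :] - centers) / widths[:, None] ** 2)
        return grad_w, grad_c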


Author(s): Yasutaka Furusho, Kazushi Ikeda

Abstract: Deep neural networks (DNNs) have the same structure as the neocognitron proposed in 1979 but achieve much better performance, largely because DNNs incorporate many heuristic techniques such as pre-training, dropout, skip connections, batch normalization (BN), and stochastic depth. However, the reasons why these techniques improve performance are not fully understood. Recently, two tools for theoretical analysis have been proposed. One evaluates the generalization gap, defined as the difference between the expected loss and the empirical loss, by calculating the algorithmic stability; the other evaluates the convergence rate by calculating the eigenvalues of the Fisher information matrix of DNNs. This overview paper briefly introduces these tools and demonstrates their usefulness by showing why skip connections and BN improve performance.
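As a concrete reference for the two techniques analyzed, here is a minimal numpy sketch of a residual block with batch normalization; learnable BN scale and shift, convolutions, and other practical details are omitted.

    import numpy as np

    def batch_norm(h, eps=1e-5):
        # Normalize each feature over the batch (scale/shift omitted for brevity).
        mean = h.mean(axis=0, keepdims=True)
        var = h.var(axis=0, keepdims=True)
        return (h - mean) / np.sqrt(var + eps)

    def residual_block(x, w1, w2):
        # output = x + relu(BN(x @ w1)) @ w2; the identity skip path is what the
        # cited analyses credit with keeping gradients (and Fisher eigenvalues)
        # well behaved as depth grows.
        h = np.maximum(batch_norm(x @ w1), 0.0)
        return x + h @ w2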


1998, Vol 10 (8), pp. 2137-2157
Author(s): Howard Hua Yang, Shun-ichi Amari

The natural gradient descent method is applied to train an n-m-1 multilayer perceptron. Based on an efficient scheme to represent the Fisher information matrix for an n-m-1 stochastic multilayer perceptron, a new algorithm is proposed to calculate the natural gradient without inverting the Fisher information matrix explicitly. When the input dimension n is much larger than the number of hidden neurons m, the time complexity of computing the natural gradient is O(n).
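Yang and Amari's O(n) cost comes from exploiting the block structure of the Fisher matrix of an n-m-1 perceptron; that scheme is not reproduced here. As a generic point of comparison, the natural gradient can also be obtained without forming or inverting F explicitly, by solving F x = grad with conjugate gradients using only Fisher-vector products, as in the numpy sketch below.

    import numpy as np

    def fisher_vector_product(scores, v, damping=1e-3):
        # Compute (F + damping * I) v without forming F, where F is the empirical
        # Fisher built from per-sample score vectors: F v = mean_i s_i (s_i . v).
        return scores.T @ (scores @ v) / scores.shape[0] + damping * v

    def natural_gradient_cg(scores, grad, iters=50, tol=1e-10):
        # Solve (F + damping * I) x = grad by conjugate gradients, so the natural
        # gradient is obtained with matrix-vector products only.
        x = np.zeros_like(grad)
        r = grad - fisher_vector_product(scores, x)
        p = r.copy()
        rs = r @ r
        for _ in range(iters):
            fp = fisher_vector_product(scores, p)
            alpha = rs / (p @ fp)
            x += alpha * p
            r -= alpha * fp
            rs_new = r @ r
            if rs_new < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x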


2014, Vol 2014, pp. 1-10
Author(s): Jing Chen, Ruifeng Ding

This paper presents two identification methods for dual-rate sampled-data nonlinear output-error systems. One is a missing-output-estimation-based stochastic gradient identification algorithm, and the other is an auxiliary-model-based stochastic gradient identification algorithm. Unlike polynomial-transformation-based identification methods, the two methods in this paper estimate the unknown parameters directly. A numerical example is provided to confirm the effectiveness of the proposed methods.
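For orientation, a generic numpy sketch of one auxiliary-model-based stochastic-gradient identification step for a linear-in-parameters output-error model is shown below; the paper's algorithms handle nonlinear dual-rate systems and also include a missing-output-estimation variant.

    import numpy as np

    def auxiliary_model_sg_step(theta, phi_a, y, r_prev):
        # One normalized stochastic-gradient identification step.
        #   phi_a : regressor built from auxiliary-model outputs (standing in
        #           for the unmeasurable noise-free system outputs)
        #   y     : measured output at the current sampling instant
        #   r     : running normalization r(t) = r(t-1) + ||phi_a||^2
        r = r_prev + phi_a @ phi_a
        theta_new = theta + (phi_a / r) * (y - phi_a @ theta)
        return theta_new, r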

