Important Milestones in the Study of Neural Networks with Random Weights

2021 ◽  
Author(s):  
Juergen Brauer

Neural networks with partially random weights do not currently form an independent field of research. However, the first works on random neural networks date back to the 1990s, and over the last three decades there have been important new works in which random weights are used and which give surprisingly good results compared to approaches in which all weights are trained. These works come from very different subareas of neural networks: Random Feedforward Neural Networks, Random Recurrent Neural Networks, and Random ConvNets. In this paper, I analyze the most important works from these three areas in chronological order and work out the core result of each. As a result, the reader can get a quick overview of this field of research.
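To make the idea concrete, here is a minimal sketch (my own illustration, not code from the paper) of the simplest random-weights scheme the survey covers: hidden-layer weights are drawn randomly and frozen, and only the linear readout is trained by least squares, as in an extreme learning machine. All sizes and data are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-3, 3].
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X)

# Hidden layer with fixed random weights (never trained).
n_hidden = 100
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)          # random features

# Only the linear readout is fit, via least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ beta
print("train MSE:", np.mean((y_hat - y) ** 2))
```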



Author(s):  
PETER STUBBERUD

Unlike feedforward neural networks (FFNNs), which can act as universal function approximators, recursive, or recurrent, neural networks can act as universal approximators for multi-valued functions. In this paper, a real-time recursive backpropagation (RTRBP) algorithm in vector-matrix form is developed for a two-layer globally recursive neural network (GRNN) that has multiple delays in its feedback path. The algorithm has been evaluated on two GRNNs that approximate an analytic and a nonanalytic periodic multi-valued function that a feedforward neural network is not capable of approximating.
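As a rough illustration of the architecture described (not the paper's exact model), the sketch below implements the forward pass of a small globally recursive network whose scalar output is fed back to the hidden layer through several unit delays; the RTRBP training itself is omitted, and all sizes and weight scales are assumptions.

```python
import numpy as np

def grnn_forward(x_seq, W_in, W_out, W_fb, n_delays):
    """Forward pass of a toy globally recursive network: the scalar
    output is fed back through n_delays unit delays to the hidden
    layer. An assumed formulation for illustration only."""
    y_hist = np.zeros(n_delays)      # delayed outputs y[t-1] ... y[t-n_delays]
    outputs = []
    for x in x_seq:
        h = np.tanh(x * W_in[0] + y_hist @ W_fb)  # hidden layer sees input + fed-back outputs
        y = float(h @ W_out)                      # linear output layer
        y_hist = np.roll(y_hist, 1)               # shift the delay line
        y_hist[0] = y
        outputs.append(y)
    return np.array(outputs)

rng = np.random.default_rng(1)
n_hidden, n_delays = 8, 3
out = grnn_forward(np.sin(np.linspace(0, 6, 50)),
                   rng.normal(size=(1, n_hidden)) * 0.5,
                   rng.normal(size=n_hidden) * 0.5,
                   rng.normal(size=(n_delays, n_hidden)) * 0.5,
                   n_delays)
print(out[:5])
```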


Artificial intelligence has shown monumental growth in closing the gap between the capabilities of humans and machines, and computer vision is one of the areas driving this growth. To let a system "see", neural networks are used; well-known architectures include convolutional neural networks (CNNs), feedforward neural networks (FNNs), and recurrent neural networks (RNNs). Among them, CNNs are the natural choice for computer vision because they learn relevant features from an image or video in a way loosely analogous to the human visual system. In this paper, the dataset used is CIFAR-10 (Canadian Institute for Advanced Research), which contains 60,000 images of size 32×32 divided into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. Of these, 50,000 are training images and 10,000 are testing images. This paper mainly concentrates on improving performance using normalization layers and on comparing the accuracy achieved with different activation functions such as ReLU and Tanh.
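A minimal sketch of the kind of model being compared (my own illustration; the layer sizes are assumptions, not the paper's exact network): a small CNN for 32×32 inputs with batch normalization, where the activation is injected so that ReLU and Tanh variants can be built from the same definition.

```python
import torch
import torch.nn as nn

def make_cnn(activation: nn.Module) -> nn.Sequential:
    """Small CNN for 32x32 RGB images (e.g. CIFAR-10) with BatchNorm.
    The activation is a parameter so ReLU and Tanh can be compared."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.BatchNorm2d(32), activation,
        nn.MaxPool2d(2),                      # 32x32 -> 16x16
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.BatchNorm2d(64), activation,
        nn.MaxPool2d(2),                      # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 10),            # 10 CIFAR-10 classes
    )

relu_net = make_cnn(nn.ReLU())
tanh_net = make_cnn(nn.Tanh())
x = torch.randn(4, 3, 32, 32)                 # a dummy batch
print(relu_net(x).shape)                      # torch.Size([4, 10])
```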


1999 ◽  
Vol 121 (4) ◽  
pp. 724-729 ◽  
Author(s):  
C. James Li ◽  
Yimin Fan

This paper describes a method to diagnose the most frequent faults of a screw compressor and to assess the magnitude of these faults by tracking changes in the compressor's dynamics. To determine the condition of the compressor, a feedforward neural network model is first employed to identify the dynamics of the compressor. A recurrent neural network then classifies the model into one of three conditions: baseline, gaterotor wear, or excessive friction. Finally, another recurrent neural network estimates the magnitude of the fault from the model. The method's ability to generalize was evaluated, and experimental validation was performed. The results show significant improvement over a previous method that used only feedforward neural networks.
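The three-stage structure described above can be sketched as follows (an illustration only; the layer sizes and the choice to summarize the identified model by its flattened parameter vector are my assumptions, not the paper's):

```python
import torch
import torch.nn as nn

# Stage 1: feedforward model of the compressor dynamics,
# mapping recent measurements to a predicted sensor reading.
dyn_model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))

# Summarize the identified model by its flattened parameter vector,
# consumed by the recurrent stages as a sequence (one value per step).
theta = torch.cat([p.detach().flatten() for p in dyn_model.parameters()])
seq = theta.reshape(1, -1, 1)                # (batch, time, features)

# Stages 2 and 3: recurrent heads over the model summary.
class RNNHead(nn.Module):
    def __init__(self, n_out):
        super().__init__()
        self.rnn = nn.RNN(1, 8, batch_first=True)
        self.fc = nn.Linear(8, n_out)
    def forward(self, s):
        _, h = self.rnn(s)
        return self.fc(h[-1])

classifier = RNNHead(3)   # baseline / gaterotor wear / excessive friction
magnitude = RNNHead(1)    # scalar fault magnitude

print(classifier(seq).shape, magnitude(seq).shape)  # (1, 3) and (1, 1)
```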


Author(s):  
Xiaopeng Li ◽  
Zhourong Chen ◽  
Nevin L. Zhang

Sparse connectivity is an important factor behind the success of convolutional neural networks and recurrent neural networks. In this paper, we consider the problem of learning sparse connectivity for feedforward neural networks (FNNs). The key idea is that a unit should be connected to a small number of strongly correlated units at the level below. We use the Chow-Liu algorithm to learn a tree-structured probabilistic model for the units at the current level, use the tree to identify subsets of units that are strongly correlated, and introduce a new unit with a receptive field over each subset. The procedure is repeated on the new units to build multiple layers of hidden units. The resulting model is called a TRF-net. Empirical results show that, compared to dense FNNs, TRF-nets achieve better or comparable classification performance with far fewer parameters and sparser structures. They are also more interpretable.
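A toy sketch of the connectivity-learning step (an illustration, not the authors' code): here absolute correlation stands in for the mutual-information edge weights of the Chow-Liu algorithm, and a maximum spanning tree over the units reveals the strongly correlated neighborhoods that would each receive a new hidden unit in a TRF-net.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)

# Toy "unit activations": 500 samples of 6 units, where units {0,1,2}
# and {3,4,5} form two internally correlated groups.
z = rng.normal(size=(500, 2))
A = np.column_stack([z[:, 0] + 0.1 * rng.normal(size=500) for _ in range(3)] +
                    [z[:, 1] + 0.1 * rng.normal(size=500) for _ in range(3)])

# Edge weights: |correlation| as a stand-in for mutual information.
# A maximum spanning tree equals a minimum tree on negated weights.
C = np.abs(np.corrcoef(A, rowvar=False))
np.fill_diagonal(C, 0.0)
tree = minimum_spanning_tree(-C).toarray()

# Strongly correlated units end up adjacent in the learned tree.
for i, j in zip(*np.nonzero(tree)):
    print(f"unit {i} -- unit {j} (|corr| = {-tree[i, j]:.2f})")
```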


1998 ◽  
Vol 10 (1) ◽  
pp. 165-188 ◽  
Author(s):  
Andrew D. Back ◽  
Ah Chung Tsoi

The problem of high sensitivity in modeling is well known: small perturbations in the model parameters may result in large, undesired changes in the model's behavior. A number of authors have considered the issue of sensitivity in feedforward neural networks from a probabilistic perspective, but less attention has been given to such issues in recurrent neural networks. In this article, we present a new recurrent neural network architecture with significantly improved parameter sensitivity properties compared to existing recurrent neural networks. The new architecture generalizes previous ones by employing alternative discrete-time operators in place of the shift operator normally used. An analysis of the model demonstrates the existence of parameter sensitivity problems in recurrent neural networks and supports the proposed architecture, which performs significantly better than previous recurrent neural networks in a series of simple numerical experiments.
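A small numerical illustration of the underlying idea (mine, not the authors' experiment): with the delta operator δ = (q − 1)/Δ in place of the shift operator q, the same absolute perturbation of a model coefficient has a much smaller effect on the trajectory at fast sampling rates.

```python
import numpy as np

# A continuous-time pole at s = -1, sampled with period dt.
dt = 0.01
a_shift = np.exp(-dt)         # shift form:  x[t+1] = a * x[t]
a_delta = (a_shift - 1) / dt  # delta form:  x[t+1] = x[t] + dt * a * x[t]

def simulate(a, delta_form, n=500):
    x, xs = 1.0, []
    for _ in range(n):
        x = x + dt * a * x if delta_form else a * x
        xs.append(x)
    return np.array(xs)

# Perturb each coefficient by the same amount and compare the effect.
eps = 1e-3
err_shift = np.max(np.abs(simulate(a_shift + eps, False) - simulate(a_shift, False)))
err_delta = np.max(np.abs(simulate(a_delta + eps, True) - simulate(a_delta, True)))
print(f"shift-operator sensitivity: {err_shift:.6f}")
print(f"delta-operator sensitivity: {err_delta:.6f}")   # roughly dt times smaller
```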


2021 ◽  
Vol 118 (15) ◽  
pp. e2021852118
Author(s):  
Gokce Sarar ◽  
Bhaskar Rao ◽  
Thomas Liu

Although individual subjects can be identified with high accuracy using correlation matrices computed from resting-state functional MRI (rsfMRI) data, the performance significantly degrades as the scan duration is decreased. Recurrent neural networks can achieve high accuracy with short-duration (72 s) data segments but are designed to use temporal features not present in the correlation matrices. Here we show that shallow feedforward neural networks that rely solely on the information in rsfMRI correlation matrices can achieve state-of-the-art identification accuracies (≥99.5%) with data segments as short as 20 s and across a range of input data size combinations when the total number of data points (number of regions × number of time points) is on the order of 10,000.
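A minimal sketch of the pipeline implied by the abstract (an illustration only; the region count, layer sizes, and random data are my assumptions): vectorize the upper triangle of the region-by-region correlation matrix of a short segment and feed it to a shallow feedforward classifier over subjects.

```python
import numpy as np
import torch
import torch.nn as nn

# Toy stand-in for an rsfMRI segment: n_regions x n_timepoints, chosen
# so that regions x time points is on the order of 10,000.
n_regions, n_timepoints, n_subjects = 100, 100, 50
rng = np.random.default_rng(3)
segment = rng.normal(size=(n_regions, n_timepoints))

# Features: upper triangle of the region-by-region correlation matrix.
C = np.corrcoef(segment)
iu = np.triu_indices(n_regions, k=1)
features = torch.tensor(C[iu], dtype=torch.float32)

# A shallow feedforward classifier over subjects (sizes are assumptions).
clf = nn.Sequential(
    nn.Linear(features.numel(), 256),
    nn.ReLU(),
    nn.Linear(256, n_subjects),
)
print(clf(features).shape)   # torch.Size([50]): one score per subject
```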

