Self-modeling in Hopfield Neural Networks with Continuous Activation Function

2018
Vol 123
pp. 573-578
Author(s): Mario Zarco, Tom Froese
2020
Vol 32 (11)
pp. 2237-2248
Author(s): Masaki Kobayashi

A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory. Its weight parameters, however, require a large amount of memory. Twin-multistate activation functions were introduced to quaternion- and bicomplex-valued Hopfield neural networks, but since their architectures are much more complicated than that of a CHNN, simplification is desirable. In this work, the number of weight parameters is reduced by a bicomplex projection rule for CHNNs, obtained from the decomposition of bicomplex-valued Hopfield neural networks. Computer simulations show that the noise tolerance of a CHNN with the bicomplex projection rule is equal to or even better than that of quaternion- and bicomplex-valued Hopfield neural networks. Computer simulations also show that the projection rule for hyperbolic-valued Hopfield neural networks maintains high noise tolerance in synchronous mode.
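As context for the abstract above, the standard (non-bicomplex) ingredients it builds on are easy to sketch. The following is a minimal NumPy illustration of a multistate activation (quantizing onto K points of the unit circle) and the classical projection rule W = X(X^H X)^(-1) X^H; the paper's bicomplex decomposition is not reproduced, and the resolution K, network size, and noise level are assumptions.

```python
import numpy as np

K = 8  # number of states on the unit circle (an assumed resolution)

def multistate_activation(z, K=K):
    """Quantize each complex value to the nearest K-th root of unity."""
    k = np.round(np.angle(z) * K / (2 * np.pi)) % K
    return np.exp(2j * np.pi * k / K)

def projection_rule(patterns):
    """Classical projection (pseudo-inverse) rule: W = X (X^H X)^-1 X^H."""
    X = np.column_stack(patterns)                 # N x P stored patterns
    return X @ np.linalg.inv(X.conj().T @ X) @ X.conj().T

rng = np.random.default_rng(0)
N = 16
patterns = [np.exp(2j * np.pi * rng.integers(0, K, N) / K) for _ in range(2)]
W = projection_rule(patterns)

# one synchronous recall step from a phase-noised copy of pattern 0
noisy = patterns[0] * np.exp(1j * rng.normal(0, 0.3, N))
recalled = multistate_activation(W @ noisy)
print("components recovered:", int(np.sum(np.isclose(recalled, patterns[0]))),
      "of", N)
```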


2021
pp. 1-15
Author(s): Masaki Kobayashi

Abstract A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model. A quaternion-valued Hopfield neural network (QHNN) with a twin-multistate activation function was proposed to reduce the number of weight parameters of a CHNN. Dual connections (DCs) were introduced to QHNNs to improve noise tolerance. The DCs take advantage of the noncommutativity of quaternions and consist of two weights between neurons. A QHNN with DCs provides much better noise tolerance than a CHNN. Although a CHNN and a QHNN with DCs have the same number of weight parameters, the storage capacity of the projection rule for QHNNs with DCs is half that for CHNNs and equals that of conventional QHNNs. The small storage capacity of QHNNs with DCs is caused by the projection rule, not by the architecture. In this work, the Hebbian rule is introduced, and it is proved by stochastic analysis that the storage capacity of a QHNN with DCs is 0.8 times that of a CHNN.
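For reference, the Hebbian (outer-product) rule that the paper adapts is simplest to state in the plain complex-valued case. The sketch below is a hedged baseline, not the quaternionic dual-connection version whose 0.8 capacity ratio the paper proves; the state resolution and network size are assumptions.

```python
import numpy as np

def hebbian_rule(patterns):
    """Hebbian (outer-product) rule for a complex-valued Hopfield network:
    W = (1/N) * sum_p x_p x_p^H, with self-connections zeroed."""
    N = patterns[0].size
    W = sum(np.outer(x, x.conj()) for x in patterns) / N
    np.fill_diagonal(W, 0)
    return W

# two random 4-state patterns on the unit circle, stored in a 32-neuron net
rng = np.random.default_rng(0)
K, N = 4, 32
patterns = [np.exp(2j * np.pi * rng.integers(0, K, N) / K) for _ in range(2)]
W = hebbian_rule(patterns)

# a stored pattern should be (nearly) a fixed point of one update step
x = patterns[0]
k = np.round(np.angle(W @ x) * K / (2 * np.pi)) % K
print("fixed point:", np.allclose(np.exp(2j * np.pi * k / K), x))
```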


2021
Author(s): Yang Liu, Zhen Wang

Abstract This paper studies the multistability of state-dependent switched Hopfield neural networks (SSHNNs) with a Gaussian-wavelet-type activation function. The coexistence and stability of multiple equilibria of SSHNNs are proved. Using Brouwer's fixed-point theorem, it is shown that SSHNNs can have at least 7^n or 6^n equilibria under a specified set of conditions. Using the strictly diagonally dominant matrix (SDDM) theorem and the Lyapunov stability theorem, 4^n or 5^n locally stable (LS) equilibria are obtained, respectively. Compared with conventional Hopfield neural networks (HNNs) without state-dependent switching, or with SSHNNs with other kinds of activation functions, SSHNNs with this type of activation function can have more LS equilibria, which implies that SSHNNs with Gaussian-wavelet-type activation functions can have an even larger storage capacity and are better suited to associative memory applications. Finally, some simulation results are given to verify the correctness of the theoretical results.
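The multistability argument rests on an activation whose graph crosses the line x = w f(x) + I many times, so the network accumulates equilibria exponentially in the dimension n. As a hedged one-neuron illustration only (the paper's exact Gaussian-wavelet form, switching rule, and conditions are not reproduced), the sketch below assumes the common wavelet shape f(u) = u exp(-u^2/2) and counts equilibria on a grid.

```python
import numpy as np

def gaussian_wavelet(x, a=1.0, b=0.0, c=1.0):
    """One common Gaussian-wavelet shape (derivative-of-Gaussian type);
    the paper's exact activation and parameters may differ."""
    u = (x - b) / c
    return a * u * np.exp(-u**2 / 2)

# For a single neuron dx/dt = -x + w*f(x) + I, equilibria solve
# x = w*f(x) + I; we count sign changes of the difference on a grid.
w, I = 3.0, 0.0
x = np.linspace(-5, 5, 1000)          # grid chosen to avoid hitting x = 0
diff = x - (w * gaussian_wavelet(x) + I)
equilibria = np.count_nonzero(np.diff(np.sign(diff)))
print("approximate number of equilibria:", equilibria)   # 3 for this w, I
```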


Author(s): K. Anitha, R. Dhanalakshmi, K. Naresh, D. Rukmani Devi

Neural networks play a significant role in data classification. The complex-valued Hopfield neural network (CHNN) is widely used in various fields, including image classification. Though the CHNN has proven its credibility in classification tasks, it has a few issues: the activation function of a complex-valued neuron maps onto the unit circle in the complex plane, which limits resolution, flexibility, and adaptability during retrieval. The proposed work demonstrates a content-based image retrieval (CBIR) system with hyperbolic Hopfield neural networks (HHNNs), an analogue of the CHNN, for classifying images. The activation function of a hyperbolic neuron is not cyclic in the hyperbolic plane. The images are mathematically represented and indexed using six basic features. The proposed HHNN classifier is trained, tested, and evaluated through extensive experiments considering the individual features and four combined features for indexing. The obtained results show that the HHNN guides the retrieval process, enhances system performance, and minimizes the cost of implementing a neural network classifier-based image retrieval system.
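The contrast the abstract draws between cyclic and non-cyclic activations can be made concrete with hyperbolic numbers z = x + uy, where u^2 = +1. The sketch below is a hedged illustration of hyperbolic arithmetic and one plausible non-cyclic activation (projection onto the unit hyperbola); the paper's exact HHNN activation and the six image features are not specified here.

```python
import numpy as np

# A hyperbolic number z = x + u*y (u^2 = +1) is stored here as a pair (x, y).

def hyperbolic_mul(z, w):
    """(x1 + u*y1)(x2 + u*y2) = (x1*x2 + y1*y2) + u*(x1*y2 + y1*x2)."""
    (x1, y1), (x2, y2) = z, w
    return (x1 * x2 + y1 * y2, x1 * y2 + y1 * x2)

def hyperbolic_activation(z):
    """Project onto the unit hyperbola x^2 - y^2 = 1 (right branch).
    Assumes x^2 > y^2; unlike the complex unit circle, this locus is
    unbounded and non-cyclic, which is the property the paper exploits."""
    x, y = z
    norm = np.sqrt(x**2 - y**2) if x**2 > y**2 else 1.0  # guard (assumption)
    return (abs(x) / norm, np.sign(x) * y / norm)

# a weighted input w*z followed by the non-cyclic activation
z, w = (2.0, 1.0), (1.5, -0.5)
a = hyperbolic_activation(hyperbolic_mul(w, z))
print(a, "on hyperbola:", np.isclose(a[0]**2 - a[1]**2, 1.0))
```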


2020
Vol 2020 (10)
pp. 54-62
Author(s): Oleksii VASYLIEV

The problem of applying neural networks to calculate borrower ratings used in banking when deciding whether to grant a loan is considered. The task is to determine the borrower's rating function from statistical data on the performance of loans the bank has issued. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter this expression. In contrast, when neural networks are used, there is no need to specify the general form of the rating function. Instead, a particular neural network architecture is chosen and its parameters are calculated from the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of neural networks include the need to calculate a large number of parameters, and the absence of a universal algorithm for determining the optimal architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two inner layers, containing three and two neurons, respectively, with a sigmoid activation function is used for the modeling. It is shown that the neural network restores the borrower's rating function with quite acceptable accuracy.
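The architecture the article describes, two inner layers of three and two sigmoid neurons, is small enough to sketch end to end. Below is a hedged NumPy illustration that fits such a network to a synthetic, non-analytic rating function by plain gradient descent; the borrower features, target function, learning rate, and epoch count are all assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Model system: two borrower features and a known but non-analytic rating
# function; the features, target, and hyperparameters are demo assumptions.
X = rng.uniform(0, 1, (500, 2))
y = (np.minimum(X[:, 0], 1 - X[:, 1]) > 0.4).astype(float)

# 2 -> 3 -> 2 -> 1 network: two inner layers of three and two sigmoid
# neurons, as described in the article.
sizes = [2, 3, 2, 1]
Ws = [rng.normal(0, 1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]

lr = 1.0
for epoch in range(5000):
    acts = [X]                                    # forward pass
    for W, b in zip(Ws, bs):
        acts.append(sigmoid(acts[-1] @ W + b))
    # backprop for squared-error loss; sigmoid' = a * (1 - a)
    delta = (acts[-1] - y[:, None]) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(Ws))):
        grad_W = acts[i].T @ delta / len(X)
        grad_b = delta.mean(axis=0)
        if i:
            delta = (delta @ Ws[i].T) * acts[i] * (1 - acts[i])
        Ws[i] -= lr * grad_W
        bs[i] -= lr * grad_b

out = X                                           # final forward pass
for W, b in zip(Ws, bs):
    out = sigmoid(out @ W + b)
print("training accuracy:", ((out[:, 0] > 0.5) == (y > 0.5)).mean())
```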


2019
Vol 12 (3)
pp. 156-161
Author(s): Aman Dureja, Payal Pahwa

Background: Activation functions play an important role in building deep neural networks, and the choice of activation function affects the network both in terms of optimization and in the quality of the results. Several activation functions have been introduced in machine learning for many practical applications, but it has not been established which activation function should be used at the hidden layers of deep neural networks. Objective: The primary objective of this analysis was to determine which activation function should be used at the hidden layers of deep neural networks to solve complex non-linear problems. Methods: The comparative model was configured on a dataset of two classes (Cat/Dog). The network used three convolutional layers, with a pooling layer introduced after each convolutional layer. The dataset was divided into two parts: the first 8000 images were used for training the network, and the remaining 2000 images were used for testing it. Results: The experimental comparison was carried out by analyzing the network with different activation functions at the hidden layers of the CNN. Validation error and accuracy on the Cat/Dog dataset were analyzed for the activation functions ReLU, Tanh, SELU, PReLU, and ELU. Overall, ReLU gave the best performance, with a validation loss of 0.3912 and a validation accuracy of 0.8320 at the 25th epoch. Conclusion: A CNN model with ReLU at the hidden layers (three hidden layers here) gives the best results and improves overall performance in terms of both accuracy and speed. These advantages of ReLU at the hidden layers enable fast and effective retrieval of images from databases.
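The abstract fixes the layer count (three convolutional layers, each followed by pooling) and the activations compared, but not the filter counts, kernel sizes, or input resolution, so those are assumptions in the Keras sketch below. Swapping the activation string reproduces the kind of comparison the study runs.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Binary cat/dog classifier: three conv layers, each followed by max
# pooling, ReLU throughout, and a sigmoid output. Filter counts, kernel
# sizes, and the 64x64 input resolution are assumptions.
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
# Replacing "relu" with "tanh", "selu", or "elu" at the hidden layers
# gives the kind of comparison analyzed over 25 epochs in the study.
```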

