Average number of fixed points and attractors in Hopfield neural networks

2018 ◽  
Vol 29 (08) ◽  
pp. 1850076
Author(s):  
Jiandu Liu ◽  
Bokui Chen ◽  
Dengcheng Yan ◽  
Lei Wang

Calculating the exact number of fixed points and attractors of an arbitrary Hopfield neural network is a non-deterministic polynomial (NP)-hard problem. In this paper, we first calculate the average number of fixed points in such networks as a function of their size and neuron threshold, using a statistical method that has previously been applied to calculating the average number of metastable states in spin glass systems. The same method is then extended to study the average number of attractors in such networks. The analytical results agree qualitatively with numerical calculations, and the discrepancies between them are explained.
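For small networks, the quantity being averaged here can be checked directly by exhaustive enumeration over all 2^n states. Below is a minimal sketch (not the paper's statistical method) that counts the fixed points of a random symmetric-weight Hopfield network; the weight scale and the tie-breaking rule for zero local fields are illustrative assumptions:

```python
import itertools
import numpy as np

def count_fixed_points(W, theta):
    """Count states s in {-1, +1}^n that are fixed under the
    synchronous Hopfield update s' = sign(W s - theta)."""
    n = W.shape[0]
    count = 0
    for bits in itertools.product([-1, 1], repeat=n):
        s = np.array(bits)
        h = W @ s - theta
        # Assumption: a zero local field leaves the neuron unchanged.
        s_next = np.where(h > 0, 1, np.where(h < 0, -1, s))
        if np.array_equal(s_next, s):
            count += 1
    return count

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
W = (A + A.T) / 2          # symmetric couplings, as in a Hopfield net
np.fill_diagonal(W, 0.0)   # no self-coupling
theta = np.zeros(n)        # zero threshold
print(count_fixed_points(W, theta))
```

With zero thresholds, fixed points come in sign-reversed pairs (s and -s), so the count is even; the global energy minimum guarantees at least one pair exists.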

2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Xia Huang ◽  
Zhen Wang ◽  
Yuxia Li

A fractional-order two-neuron Hopfield neural network with delay is proposed based on the classic, well-known Hopfield neural network, and the complex dynamical behaviors of such a network are investigated. A great variety of interesting dynamical phenomena, including single-periodic, multiple-periodic, and chaotic motions, are found to exist. The existence of chaotic attractors is further verified by bifurcation diagrams and phase portraits.


2021 ◽  
pp. 1-15
Author(s):  
Masaki Kobayashi

Abstract A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model. A quaternion-valued Hopfield neural network (QHNN) with a twin-multistate activation function was proposed to reduce the number of weight parameters of a CHNN. Dual connections (DCs) are introduced to QHNNs to improve the noise tolerance. The DCs take advantage of the noncommutativity of quaternions and consist of two weights between neurons. A QHNN with DCs provides much better noise tolerance than a CHNN. Although a CHNN and a QHNN with DCs have the same number of weight parameters, the storage capacity of the projection rule for QHNNs with DCs is half of that for CHNNs and equals that of conventional QHNNs. The small storage capacity of QHNNs with DCs is caused by the projection rule, not the architecture. In this work, the Hebbian rule is introduced, and it is proved by stochastic analysis that the storage capacity of a QHNN with DCs is 0.8 times that of a CHNN.


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Yanxia Sun ◽  
Zenghui Wang ◽  
Barend Jacobus van Wyk

A new neural-network-based optimization algorithm is proposed. The presented model is a discrete-time, continuous-state Hopfield neural network, and the states of the model are updated synchronously. The proposed algorithm combines the advantages of traditional particle swarm optimization (PSO), chaos, and Hopfield neural networks: particles learn from their own experience and the experiences of surrounding particles, their search behavior is ergodic, and convergence of the swarm is guaranteed. The effectiveness of the proposed approach is demonstrated using simulations and typical optimization problems.


2018 ◽  
Vol 2018 ◽  
pp. 1-5 ◽  
Author(s):  
Masaki Kobayashi

A twin-multistate quaternion Hopfield neural network (TMQHNN) is a multistate Hopfield model that can store multilevel information, such as image data. Storage capacity is an important problem for Hopfield neural networks. Jankowski et al. approximated the crosstalk terms of complex-valued Hopfield neural networks (CHNNs) by two-dimensional normal distributions and evaluated their storage capacities. In this work, we evaluate the storage capacities of TMQHNNs based on their idea.


Author(s):  
Hazem El-Bakry ◽  
Nikos Mastorakis

In this chapter, an algorithm for the automatic determination of nuclear magnetic resonance (NMR) spectra of metabolites in the living body by magnetic resonance spectroscopy (MRS), without human intervention or complicated calculations, is presented. In this method, the problem of NMR spectrum determination is transformed into the determination of the parameters of a mathematical model of the NMR signal. To calculate these parameters efficiently, a new model called the modified Hopfield neural network is designed. The main achievement of this chapter over the work in the literature (Morita, N. and Konishi, O., 2004) is that the speed of the modified Hopfield neural network is accelerated. This is done by applying cross-correlation in the frequency domain between the input values and the input weights. The modified Hopfield neural network can process complex signals without any additional computation steps. This is a valuable advantage, as NMR signals are complex-valued. In addition, a technique called "modified sequential extension of section (MSES)," which takes into account the damping rate of the NMR signal, is developed to be faster than that presented in (Morita, N. and Konishi, O., 2004). Simulation results show that the calculation precision of the spectrum improves when MSES is used along with the neural network. Furthermore, MSES is found to reduce the local-minimum problem in Hopfield neural networks. Moreover, the performance of the proposed method is evaluated, and using the modified Hopfield neural network has no adverse effect on the calculation performance.
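The speedup described, cross-correlation computed in the frequency domain, rests on the convolution theorem. A minimal sketch of the idea (not the chapter's implementation) compares a direct circular cross-correlation against its FFT form; the signal lengths are illustrative:

```python
import numpy as np

def cross_correlate_fft(x, w):
    """Circular cross-correlation via the frequency domain:
    corr(x, w) = IFFT( FFT(x) * conj(FFT(w)) )."""
    n = len(x)
    X = np.fft.fft(x, n)
    Wf = np.fft.fft(w, n)
    return np.real(np.fft.ifft(X * np.conj(Wf)))

def cross_correlate_direct(x, w):
    """Direct O(n^2) circular cross-correlation for comparison."""
    n = len(x)
    return np.array([sum(x[(k + m) % n] * w[m] for m in range(n))
                     for k in range(n)])

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
w = rng.standard_normal(64)
# Both routes give the same result; the FFT route costs O(n log n).
assert np.allclose(cross_correlate_fft(x, w), cross_correlate_direct(x, w))
```

For real weight vectors the FFT route reduces the per-neuron cost from O(n^2) to O(n log n), which is the source of the acceleration the chapter claims.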


2018 ◽  
Vol 7 (3.12) ◽  
pp. 652
Author(s):  
Monurajan P ◽  
Ruhanbevi A ◽  
Manjula J

Artificial neural networks (ANNs) are interconnections of neurons inspired by the biological neural network of the brain. ANNs are applied in many areas of interest, such as optimization, information technology, cryptography, image processing, and even medical diagnosis. Some devices possess synaptic behaviour; one such device is the memristor. Bridge circuits of memristors can be combined to form neurons, and neurons can be made into a network with appropriate parameters to store data or images. Hopfield neural networks are chosen here to store the data in an associative memory. Hopfield neural networks are a significant class of ANNs: they are recurrent in nature and are generally used as associative memories and for solving optimization problems such as the Travelling Salesman Problem. This paper deals with the construction of a memristive Hopfield neural network using the memristor bridge circuit and its application as an associative memory. It also illustrates the experiment with mathematical equations and demonstrates the associative memory behaviour of the network using Matlab.
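The associative-memory behaviour described can be sketched, independently of the memristive hardware, with the standard Hebbian rule and asynchronous recall; the pattern sizes and corruption level below are illustrative:

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian rule: W = (1/n) * sum_p xi_p xi_p^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, s, steps=10):
    """Asynchronous updates until the state stops changing."""
    s = s.copy()
    for _ in range(steps):
        prev = s.copy()
        for i in range(len(s)):
            h = W[i] @ s
            if h != 0:
                s[i] = 1 if h > 0 else -1
        if np.array_equal(s, prev):
            break
    return s

# Store two orthogonal 16-bit patterns, then recall from a corrupted probe.
patterns = np.array([[1, -1] * 8,
                     [1, 1, -1, -1] * 4])
W = train_hebbian(patterns)
probe = patterns[0].copy()
probe[:3] *= -1                     # flip 3 of 16 bits
print(np.array_equal(recall(W, probe), patterns[0]))  # True
```

The corrupted probe falls back into the basin of attraction of the stored pattern, which is the content-addressable behaviour the memristive network is built to reproduce.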


2012 ◽  
Vol 2012 ◽  
pp. 1-5 ◽  
Author(s):  
Nasser-eddine Tatar

For the Hopfield Neural Network problem we consider unbounded monotone nondecreasing activation functions. We prove convergence to zero in an exponential manner provided that we start with sufficiently small initial data.


2017 ◽  
Vol 2017 ◽  
pp. 1-6 ◽  
Author(s):  
Masaki Kobayashi

Many models of neural networks have been extended to complex-valued neural networks. A complex-valued Hopfield neural network (CHNN) is a complex-valued version of a Hopfield neural network. Complex-valued neurons can represent multistates, so CHNNs are suitable for storing multilevel data, such as gray-scale images. CHNNs are often trapped in local minima, and their noise tolerance is low. Lee improved the noise tolerance of CHNNs by detecting and exiting the local minima. In the present work, we propose a new recall algorithm that eliminates the local minima. Computer simulations show that the proposed recall algorithm not only accelerates the recall but also improves the noise tolerance.
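The multistate representation mentioned here, complex-valued neurons whose states are the K-th roots of unity, can be sketched as a quantized-phase activation; the function name and the choice K = 4 are illustrative:

```python
import numpy as np

K = 4  # number of states per neuron (e.g., gray levels)

def csign(z, K=K):
    """Multistate activation: map z to the nearest of the K
    unit-circle states exp(2*pi*i*k/K), k = 0..K-1."""
    k = np.round(np.angle(z) / (2 * np.pi / K)) % K
    return np.exp(2j * np.pi * k / K)

# A weighted sum with phase ~50.7 degrees snaps to the state 1j
# (i.e., exp(i*pi/2)), which encodes one of the K levels.
z = 0.9 + 1.1j
print(csign(z))
```

Each of the K phase states encodes one multilevel value, which is how a CHNN stores gray-scale pixels; noise rotates the phase, and recall succeeds as long as the quantization snaps it back to the correct state.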


Author(s):  
Silviani E Rumagit ◽  
Azhari SN

Abstract This research is motivated by the increasing demand for electricity in each tariff group, namely the social, household, business, industrial, and government tariff groups. Prediction is an important requirement for electricity providers when making decisions related to the availability of electric energy. Predictions can be made using statistical methods or artificial intelligence. ARIMA is a widely used statistical prediction method that follows an autoregressive (AR) moving average (MA) model; it requires stationary data, and non-stationary data must be made stationary by differencing. Predictions can also be made using artificial intelligence techniques, and in this study a backpropagation neural network was chosen for the prediction task. The tests performed show that the differences in MSE between ARIMA, the neural network, and the combined ARIMA-neural network model are not significant. Keywords: ARIMA, neural network, tariff groups


2020 ◽  
Vol 32 (11) ◽  
pp. 2237-2248
Author(s):  
Masaki Kobayashi

A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory. Its weight parameters require substantial memory resources. Twin-multistate activation functions were introduced to quaternion- and bicomplex-valued Hopfield neural networks, but since their architectures are much more complicated than that of a CHNN, the architecture should be simplified. In this work, the number of weight parameters is reduced by a bicomplex projection rule for CHNNs, which is given by the decomposition of bicomplex-valued Hopfield neural networks. Computer simulations support that the noise tolerance of a CHNN with the bicomplex projection rule is equal to or even better than that of quaternion- and bicomplex-valued Hopfield neural networks. By computer simulations, we also find that the projection rule for hyperbolic-valued Hopfield neural networks in synchronous mode maintains a high noise tolerance.

