Hopfield Network
Recently Published Documents

TOTAL DOCUMENTS: 442 (five years: 46)
H-INDEX: 25 (five years: 2)

Author(s): Awatif Karim, Chakir Loqman, Youssef Hami, Jaouad Boumhidi

In this paper, we propose a new approach to document clustering based on the K-Means algorithm, which is sensitive to the random selection of the k cluster centroids in its initialization phase. To improve the quality of the K-Means clustering, we model the text document clustering problem as the maximum stable set problem (MSSP) and use a continuous Hopfield network to solve the MSSP and obtain the initial centroids. The idea is inspired by the fact that the MSSP and clustering share the same principle: the MSSP consists in finding the largest set of completely disconnected nodes in a graph, while in clustering all objects are divided into disjoint clusters. Simulation results demonstrate that the proposed K-Means improved by MSSP (KM_MSSP) is efficient on large data sets, is much better optimized in terms of time, and provides better clustering quality than other methods.
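As an illustration of the pipeline described above, here is a minimal K-Means sketch (NumPy only) that accepts explicit initial centroids. In KM_MSSP these seeds would come from the CHN solution of the MSSP; here a hand-picked `init` array stands in for them:

```python
import numpy as np

def kmeans(X, centroids, n_iter=20):
    """Plain K-Means that starts from explicit centroids.

    In the KM_MSSP pipeline these seeds would come from the CHN
    solution of the MSSP; here they are simply passed in.
    """
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid (keep the old one if a cluster empties)
        for k in range(len(centroids)):
            pts = X[labels == k]
            if len(pts):
                centroids[k] = pts.mean(axis=0)
    return labels, centroids

# toy data: two well-separated groups of points
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
init = np.array([[0.0, 0.1], [5.0, 5.0]])  # stand-in for MSSP-derived seeds
labels, _ = kmeans(X, init.copy())
```

With good seeds, the assignment stabilizes in the first iteration, which is exactly the sensitivity to initialization that the MSSP step is meant to exploit.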


Algorithms, 2021, Vol 15 (1), pp. 11
Author(s): Fekhr Eddine Keddous, Amir Nakib

Convolutional neural networks (CNNs) have powerful representation learning capabilities, automatically learning and extracting features directly from their inputs. In classification applications, CNN models are typically composed of convolutional layers, pooling layers, and fully connected (FC) layer(s). In a chain-based deep neural network, the FC layers contain most of the parameters of the network, which affects memory occupancy and computational complexity. For many real-world problems, speeding up inference time is an important matter because of the hardware design implications. To deal with this problem, we propose replacing the FC layers with a Hopfield neural network (HNN). The proposed architecture combines a CNN and an HNN: a pretrained CNN model is used for feature extraction, followed by an HNN that acts as an associative memory storing all features created by the CNN. Then, to deal with the limited storage capacity of the HNN, the proposed work uses multiple HNNs. To optimize this step, a knapsack problem formulation is proposed, and a genetic algorithm (GA) is used to solve it. According to the results obtained on the Noisy MNIST dataset, our work outperforms the state-of-the-art algorithms.
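The associative-memory role the HNN plays here can be illustrated with a classical Hebbian Hopfield network (a generic sketch, not the paper's multi-HNN/knapsack architecture): features are stored as ±1 patterns and recalled from corrupted inputs:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule for ±1 patterns (rows of `patterns`)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, x, n_iter=10):
    """Synchronous sign updates until a fixed point (or n_iter sweeps)."""
    for _ in range(n_iter):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=(1, 64))  # one stored "feature vector"
W = train_hopfield(pattern)
noisy = pattern[0].copy()
noisy[:8] *= -1                                  # corrupt 8 of 64 entries
restored = recall(W, noisy)
```

The storage capacity of such a network is limited (roughly 0.14 patterns per neuron for the Hebbian rule), which is the motivation the abstract gives for partitioning features across multiple HNNs.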


Author(s): Youssef Hami, Chakir Loqman

This research concerns the optimal allocation of tasks to processors in order to minimize the total costs of execution and communication, a problem known as the Task Assignment Problem (TAP) with nonuniform communication costs. To solve it, the first step is to formulate the problem as an equivalent zero-one quadratic program with a convex objective function, using a convexification technique based on the smallest eigenvalue. The second step applies the Continuous Hopfield Network (CHN) to solve the resulting problem. Computational results are presented for instances from the literature, compared to solutions obtained both by the CPLEX solver and by a heuristic genetic algorithm; they show an improvement over the results obtained by applying the CHN algorithm alone. The proposed approach thus confirms the efficiency of the theoretical results and reaches optimal solutions in a short computation time.
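The smallest-eigenvalue convexification can be sketched as follows: for binary x, x_i² = x_i, so moving the smallest eigenvalue λ from the diagonal of Q into the linear term leaves the objective unchanged on {0,1}ⁿ while making the quadratic form positive semidefinite (a generic illustration, not the paper's exact TAP formulation):

```python
import numpy as np

def convexify(Q, c):
    """Shift the diagonal of Q by its smallest eigenvalue.

    For binary x (x_i^2 = x_i), moving lam from the quadratic part to the
    linear part leaves x^T Q x + c^T x unchanged on {0,1}^n, while
    Q - lam*I becomes positive semidefinite.
    """
    lam = min(0.0, np.linalg.eigvalsh(Q).min())
    n = len(Q)
    return Q - lam * np.eye(n), c + lam * np.ones(n)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Q = (A + A.T) / 2              # symmetric, generally indefinite
c = rng.standard_normal(4)
Q2, c2 = convexify(Q, c)

x = np.array([1.0, 0.0, 1.0, 1.0])  # an arbitrary binary point
f_before = x @ Q @ x + c @ x
f_after = x @ Q2 @ x + c2 @ x
```

Both objective values agree on any binary point, while the shifted matrix has no negative eigenvalues, which is what makes the continuous relaxation solvable by the CHN.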


Author(s): Elena Agliari, Linda Albanese, Francesco Alemanno, Alberto Fachechi

Abstract We consider a multi-layer Sherrington-Kirkpatrick spin-glass as a model for deep restricted Boltzmann machines with quenched random weights and solve for its free energy in the thermodynamic limit by means of Guerra's interpolating techniques under the RS and 1RSB ansatz. In particular, we recover the expression already known for the replica-symmetric case. Further, we drop the restriction constraint by introducing intra-layer connections among spins and we show that the resulting system can be mapped into a modular Hopfield network, which is also addressed via the same techniques up to the first step of replica symmetry breaking.
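For context, the "modular Hopfield network" mentioned above generalizes the classical Hopfield Hamiltonian with Hebbian couplings, which in standard notation (not the paper's modular variant) reads

```latex
H_N(\sigma) \;=\; -\frac{1}{2N}\sum_{\mu=1}^{P}\;\sum_{\substack{i,j=1 \\ i\neq j}}^{N} \xi_i^{\mu}\,\xi_j^{\mu}\,\sigma_i\,\sigma_j ,
```

where the \(\sigma_i = \pm 1\) are spins and the \(\xi^{\mu} \in \{-1,+1\}^N\) are the \(P\) stored patterns.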


2021, Vol 3 (2)
Author(s): Alexander Zlokapa, Abhishek Anand, Jean-Roch Vlimant, Javier M. Duarte, Joshua Job, ...

Abstract At the High Luminosity Large Hadron Collider (HL-LHC), traditional track reconstruction techniques that are critical for physics analysis will need to be upgraded to scale with track density. Quantum annealing has shown promise in its ability to solve combinatorial optimization problems, amidst an ongoing effort to establish evidence of a quantum speedup. As a step towards exploiting such a potential speedup, we investigate a track reconstruction approach by adapting the existing geometric Denby-Peterson (Hopfield) network method to the quantum annealing framework under HL-LHC conditions. We develop additional techniques to embed the problem onto existing and near-term quantum annealing hardware. Results using simulated annealing and quantum annealing with the D-Wave 2X system on the TrackML open dataset are presented, demonstrating the successful application of a quantum annealing algorithm to the track reconstruction challenge. We find that the combinatorial optimization formulation can effectively reconstruct tracks, suggesting possible applications for fast hardware-specific implementations at the HL-LHC while leaving open the possibility of a quantum speedup for tracking.
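The Denby-Peterson approach reduces tracking to minimizing a quadratic energy over binary segment-activation variables, i.e., a QUBO. A generic single-bit-flip simulated-annealing solver for such a problem (toy 3-variable instance, not the paper's HL-LHC formulation) looks like:

```python
import math
import random

def anneal_qubo(Q, n_steps=20000, t0=2.0, t1=0.01, seed=0):
    """Single-bit-flip Metropolis annealing on E(x) = x^T Q x, x in {0,1}^n.

    Q must be symmetric; a geometric cooling schedule takes the
    temperature from t0 down to t1.
    """
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]

    def flip_delta(i):
        # energy change caused by flipping bit i
        s = Q[i][i] + 2 * sum(Q[i][j] * x[j] for j in range(n) if j != i)
        return s if x[i] == 0 else -s

    for step in range(n_steps):
        t = t0 * (t1 / t0) ** (step / n_steps)  # geometric cooling
        i = rng.randrange(n)
        d = flip_delta(i)
        if d <= 0 or rng.random() < math.exp(-d / t):
            x[i] ^= 1
    return x

# toy QUBO: rewards activating x0 and x1 together, penalizes x2
Q = [[-1.0, -0.5, 0.0],
     [-0.5, -1.0, 0.0],
     [ 0.0,  0.0, 2.0]]
best = anneal_qubo(Q)
```

Quantum annealing replaces the thermal Metropolis dynamics with hardware-driven quantum fluctuations over the same energy landscape; the QUBO formulation is what gets embedded onto the annealer.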


2021, pp. 1-10
Author(s): Sneha Aenugu, David E. Huber

Abstract Rizzuto and Kahana (2001) applied an autoassociative Hopfield network to a paired-associate word learning experiment in which (1) participants studied word pairs (e.g., ABSENCE-HOLLOW), (2) were tested in one direction (ABSENCE-?) on a first test, and (3) were tested in the same direction again or in the reverse direction (?-HOLLOW) on a second test. The model contained a correlation parameter to capture the dependence between forward and backward learning of the two words in a pair, revealing correlation values close to 1.0 for all participants, consistent with neural network models that use the same weight for communication in both directions between nodes. We addressed several limitations of the model simulations and proposed two new models incorporating retrieval practice learning (e.g., the effect of the first test on the second) that fit the accuracy data more effectively, revealing substantially lower correlation values (an average of .45 across participants, with zero correlation for some participants). In addition, we analyzed recall latencies, finding that second-test recall in the same direction was faster after a correct first test. Only a model with stochastic retrieval practice learning predicted this effect. In conclusion, recall accuracy and recall latency suggest asymmetric learning, particularly in light of retrieval practice effects.


2021, Vol 11 (1)
Author(s): Z. Fahimi, M. R. Mahmoodi, H. Nili, Valentin Polishchuk, D. B. Strukov

Abstract The increasing utility of specialized circuits and the growing applications of optimization call for the development of efficient hardware accelerators for solving optimization problems. The Hopfield neural network is a promising approach for solving combinatorial optimization problems, owing to recent demonstrations of efficient mixed-signal implementations based on emerging non-volatile memory devices. Such mixed-signal accelerators also enable very efficient implementation of various annealing techniques, which are essential for finding optimal solutions. Here we propose a "weight annealing" approach whose main idea is to ease convergence to the global minima by keeping the network close to its ground state. This is achieved by initially setting all synaptic weights to zero, ensuring a quick transition of the Hopfield network to its trivial global minimum, and then gradually introducing the weights during the annealing process. Extensive numerical simulations show that our approach leads to better solutions, on average, for several representative combinatorial problems compared to prior Hopfield neural network solvers with chaotic or stochastic annealing. As a proof of concept, a 13-node graph partitioning problem and a 7-node maximum-weight independent set problem are solved experimentally using mixed-signal circuits based on, respectively, a 20 × 20 analog-grade TiO2 memristive crossbar and a 12 × 10 eFlash memory array.
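The weight-annealing idea (start from an all-zero weight matrix, then gradually introduce weights while the network relaxes) can be sketched as follows; the reveal order and batch schedule here are assumptions for illustration, not the authors' exact protocol:

```python
import numpy as np

def hopfield_descend(W, x, sweeps=10):
    """Asynchronous sign updates: greedy descent on the Hopfield energy."""
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

def weight_annealing(W, x, n_stages=10):
    """Reveal the true weights in batches (strongest couplings first here,
    as one possible schedule), relaxing the network after each batch so it
    stays close to the current ground state."""
    n = len(x)
    iu = np.triu_indices(n, k=1)
    order = np.argsort(-np.abs(W[iu]))       # reveal strong couplings first
    Wk = np.zeros_like(W)                    # start from all-zero weights
    for batch in np.array_split(order, n_stages):
        r, c = iu[0][batch], iu[1][batch]
        Wk[r, c] = W[r, c]
        Wk[c, r] = W[c, r]                   # keep Wk symmetric
        x = hopfield_descend(Wk, x)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)                     # symmetric, zero diagonal
x0 = rng.choice([-1.0, 1.0], size=8)
x_final = weight_annealing(W, x0.copy())
```

After the final stage all weights are in place, so the returned state is a local minimum (a fixed point of the sign updates) of the full network.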


Author(s): Jawad Oubaha, Noureddine Lakki, Ali Ouacha

The most complex problems in data science, and more specifically in artificial intelligence, can be modeled as instances of the maximum stable set problem (MSSP). This article describes a new approach to solving the MSSP by applying the continuous Hopfield network (CHN) to build clusters for the optimized link state routing (OLSR) protocol. Our approach proceeds in two stages: the first concerns the choice of the OLSR cluster head, quickly reaching a local minimum using the CHN by modeling the MSSP. The second stage improves precision by adding an efficient solution over the first-rank neighborhood as a linear constraint, and then solves the resulting model using the CHN. We show that this model determines a good solution to the MSSP. To test the theoretical results, we propose a comparison with classic OLSR.


Author(s): Tet Yeap

A trainable analog restricted Hopfield network is presented in this paper. It consists of two layers of nodes, visible and hidden, connected by weighted directional paths forming a bipartite graph with no intralayer connections. An energy (Lyapunov) function is derived to show that the proposed network converges to stable states. The network can be trained using either the modified SPSA or the BPTT algorithm to ensure that all the weights are symmetric. Simulation results show that the presence of hidden nodes increases the network's memory capacity. Using EXOR as an example, the network can be trained to act as a dynamic classifier. Using the characters A, U, T, and S for training, the network was trained as an associative memory. Simulation results show that the network can perfectly re-create noisy images, with higher noise tolerance than the standard Hopfield network and the restricted Boltzmann machine. Simulation results also illustrate the importance of feedback iteration in implementing associative memory to re-create images from noisy inputs.
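The feedback iteration between the two layers can be sketched as alternating visible-to-hidden and hidden-to-visible sign updates through a shared bipartite weight matrix (a minimal Hebbian-style sketch with one stored pattern, not the paper's trainable SPSA/BPTT network):

```python
import numpy as np

def rhn_recall(W, v, n_iter=20):
    """Bipartite recall: alternate visible->hidden and hidden->visible
    sign updates through the shared weight matrix W (hidden x visible)."""
    for _ in range(n_iter):
        h = np.sign(W @ v)
        h[h == 0] = 1.0
        v_new = np.sign(W.T @ h)
        v_new[v_new == 0] = 1.0
        if np.array_equal(v_new, v):   # fixed point reached
            break
        v = v_new
    return v

rng = np.random.default_rng(3)
visible = rng.choice([-1.0, 1.0], size=32)   # pattern to store
hidden = rng.choice([-1.0, 1.0], size=16)    # its hidden code
W = np.outer(hidden, visible) / 32           # Hebbian-style bipartite weights
noisy = visible.copy()
noisy[:5] *= -1                              # corrupt 5 of 32 entries
recovered = rhn_recall(W, noisy)
```

Because both update directions use the same matrix W (i.e., symmetric weights), each round trip cannot increase the energy, which is the essence of the Lyapunov argument mentioned in the abstract.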

