Compact analogue neural network: a new paradigm for neural based combinatorial optimisation

1999 ◽  
Vol 146 (3) ◽  
pp. 111 ◽  
Author(s):  
Jayadeva ◽  
S.C. Dutta Roy ◽  
A. Chaudhary
2016 ◽  
pp. 368-395
Author(s):  
Eliano Pessa

Artificial Neural Network (ANN) models gained wide popularity owing to a number of claimed advantages, such as biological plausibility, tolerance to errors or noise in the input data, and a learning ability that allows adaptation to environmental constraints. Notwithstanding the fact that most of these advantages are not unique to ANNs, engineers, psychologists, and neuroscientists have made extensive use of ANN models in a large number of scientific investigations. In most cases, however, these models have been introduced to provide optimization tools more useful than those of traditional Optimization Theory. Unfortunately, the very success of ANN models in optimization tasks produced a widespread neglect of the true, and important, objectives pursued by the first promoters of these models. These objectives can be briefly summarized by the manifesto of connectionist psychology, which states that mental processes are nothing but macroscopic phenomena, emergent from the cooperative interaction of a large number of microscopic knowledge units. This statement, wholly in line with the goal of statistical mechanics, can readily be extended to processes beyond the mental ones, including social, economic, and, in general, organizational processes. This chapter has therefore been designed to answer a number of related questions, such as: can ANN models guarantee the occurrence of this sort of emergence? How can the occurrence of this emergence be detected empirically? How can the emergence produced by ANN models be controlled? In what sense could ANN emergence offer a new paradigm for the explanation of macroscopic phenomena? Answering these questions leads the chapter to focus on less popular ANNs, such as recurrent networks, while neglecting more popular models, such as perceptrons, and on less used units, such as spiking neurons, rather than McCulloch-Pitts neurons.
Moreover, the chapter mentions a number of strategies for detecting emergence, useful for researchers performing computer simulations of ANN behaviours. Among these strategies are the reduction of ANN models to continuous models, such as neural field models or neural mass models, recourse to the methods of Network Theory, and the use of techniques borrowed from Statistical Physics, such as the one based on the Renormalization Group. Owing to space (and mathematical expertise) requirements, most mathematical details of the proposed arguments are omitted; for more information, the reader is referred to the cited literature.


2012 ◽  
Vol 8 (12) ◽  
pp. 711-716 ◽  
Author(s):  
John H. Zhang ◽  
Jerome Badaut ◽  
Jiping Tang ◽  
Andre Obenaus ◽  
Richard Hartman ◽  
...  

Mathematics ◽  
2019 ◽  
Vol 7 (11) ◽  
pp. 1133 ◽  
Author(s):  
Mohd Shareduwan Mohd Kasihmuddin ◽  
Mohd. Asyraf Mansor ◽  
Md Faisal Md Basir ◽  
Saratha Sathasivam

The dynamic behaviour of an artificial neural network (ANN) system depends strongly on its network structure. The output of ANNs has thus long suffered from a lack of interpretability and variation, which has severely limited the practical usability of logical rules in ANNs. This work presents an integrated representation of k-satisfiability (kSAT) in a mutation Hopfield neural network (MHNN). The neuron states of a Hopfield neural network converge to a minimum of the energy, but the solutions produced are confined to a limited number of solution spaces. The MHNN incorporates the global search capability of estimation of distribution algorithms (EDAs), which typically explore various solution spaces. The main purpose is to estimate other possible neuron states that lead to the global minimum energy through available output measurements. It is further shown that the MHNN can retrieve various neuron states with the lowest minimum energy. Subsequent simulations performed on the MHNN reveal that the approach surpasses the conventional hybrid HNN. Furthermore, this study provides a new paradigm in the field of neural networks by overcoming the overfitting issue.


Polymers ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 2628
Author(s):  
Aref Ghaderi ◽  
Vahid Morovati ◽  
Roozbeh Dargazany

In solid mechanics, data-driven approaches are widely considered the new paradigm that can overcome the classic problems of constitutive models, such as limiting hypotheses, complexity, and accuracy. However, the adoption of machine-learned approaches in material modeling has been modest, owing to the high dimensionality of the data space, the significant amount of missing data, and limited convergence. This work proposes a framework that borrows concepts from polymer science, statistical physics, and continuum mechanics to provide super-constrained, reduced-order machine-learning techniques that partly overcome these difficulties. Using a sequential order reduction, we have simplified the 3D stress–strain tensor mapping problem into a limited number of super-constrained 1D mapping problems. Next, we introduce an assembly of multiple replicated neural-network learning agents (L-agents) to systematically classify those mapping problems into a few categories, each described by a distinct agent type. By capturing all loading modes through a simplified set of dispersed experimental data, the proposed hybrid assembly of L-agents provides a new generation of machine-learned approaches that outperform most constitutive laws in training speed and accuracy, even in complicated loading scenarios. Interestingly, the physics-based nature of the proposed model avoids the low interpretability of conventional machine-learned models.
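The order-reduction idea, trading one 3D tensor map for many 1D maps along material directions, can be illustrated with a toy microsphere-style assembly. Everything below is my own simplified construction, not the authors' L-agent implementation: the 1D "agent" is replaced by a fixed neo-Hookean-like fibre law, and the direction sampling and assembly rule are assumptions.

```python
import numpy as np

def unit_directions(m, seed=0):
    """Quasi-uniform sample of unit vectors on the sphere."""
    v = np.random.default_rng(seed).normal(size=(m, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def fibre_stress(stretch):
    """Stand-in for a trained 1D agent: a simple neo-Hookean-like fibre law."""
    mu = 1.0
    return mu * (stretch**2 - 1.0 / stretch)

def macro_stress(F, dirs):
    """Assemble 1D responses along each direction into a 3x3 stress tensor."""
    S = np.zeros((3, 3))
    for d in dirs:
        t = F @ d                      # direction after deformation F
        lam = np.linalg.norm(t)        # 1D stretch felt along d
        n = t / lam
        S += fibre_stress(lam) * np.outer(n, n)
    return S / len(dirs)

# Incompressible uniaxial stretch of 1.2 along x
F = np.diag([1.2, 1.0 / np.sqrt(1.2), 1.0 / np.sqrt(1.2)])
dirs = unit_directions(200)
S = macro_stress(F, dirs)
```

Each direction poses exactly the kind of scalar stretch-to-stress problem a small 1D learning agent can handle, which is the point of the reduction: the assembled tensor response inherits symmetry and frame behaviour from the construction rather than from the learned map.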


1999 ◽  
Vol 09 (04) ◽  
pp. 351-370 ◽  
Author(s):  
M. SREENIVASA RAO ◽  
ARUN K. PUJARI

A new paradigm of neural network architecture is proposed that works as an associative memory with capabilities of pruning and order-sensitive learning. The network has a composite structure wherein each node of the network is itself a Hopfield network. The Hopfield network employs an order-sensitive learning technique and converges to user-specified stable states without any spurious states, based on the geometrical structure of the network and of the energy function. The network is designed to allow pruning in binary order as it progressively carries out associative memory retrieval. The capacity of the network is 2^n, where n is the number of basic nodes in the network. The capabilities of the network are demonstrated by experiments in three different application areas, namely a Library Database, a Protein Structure Database, and Natural Language Understanding.
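The claim that n basic nodes address 2^n states via pruning in binary order can be conveyed with a small sketch. This is my own schematic construction, not the authors' composite Hopfield architecture: each bit-level decision here stands in for one basic node, and fixing bits one at a time halves the candidate set at each step.

```python
# Associative lookup with binary-order pruning: each bit of the probe acts
# like one basic node's decision, roughly halving the surviving candidates,
# so n such decisions can distinguish up to 2**n stored states.

def binary_prune(patterns, probe):
    """Retrieve by fixing one bit at a time, in order, keeping exact matches."""
    candidates = list(patterns)
    for i, bit in enumerate(probe):
        kept = [p for p in candidates if p[i] == bit]
        if kept:            # if no candidate matches this bit, skip it (noise)
            candidates = kept
    return candidates

patterns = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
result = binary_prune(patterns, (1, 0, 1))
```

With a clean probe the candidate set collapses to a single stored pattern after n steps; with a noisy probe the skip rule lets retrieval degrade gracefully instead of failing, loosely mirroring the order-sensitive retrieval described above.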

