A NEW METHOD FOR MAPPING OPTIMIZATION PROBLEMS ONTO NEURAL NETWORKS

1989 ◽  
Vol 01 (01) ◽  
pp. 3-22 ◽  
Author(s):  
Carsten Peterson ◽  
Bo Söderberg

A novel modified method for obtaining approximate solutions to difficult optimization problems within the neural network paradigm is presented. We consider the graph partition and the travelling salesman problems. The key new ingredient is a reduction of the solution space by one dimension by using graded neurons, thereby avoiding the destructive redundancy that has plagued these problems when using straightforward neural network techniques. This approach maps the problems onto Potts glass rather than spin glass theories. A systematic prescription is given for estimating the phase transition temperatures in advance, which facilitates the choice of optimal parameters. This analysis, which is performed for both serial and synchronous updating of the mean field theory equations, makes it possible to consistently avoid chaotic behavior. When exploring this new technique numerically, we find the results very encouraging; the quality of the solutions is on par with those obtained by using optimally tuned simulated annealing heuristics. Our numerical study, which for the travelling salesman problem extends to 200-city problems, exhibits an impressive level of parameter insensitivity.
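The graded-neuron idea can be sketched as a small mean-field annealing loop for graph partitioning. This is an illustrative reconstruction, not the authors' code: the balance weight `alpha`, the annealing schedule, and the sweep count are invented parameters.

```python
import numpy as np

def potts_mean_field(W, K, T_start=1.0, T_end=0.01, anneal=0.9,
                     sweeps=10, alpha=1.0, seed=0):
    """Mean-field annealing with Potts (graded) neurons for K-way graph
    partitioning.  Each row of V is a probability distribution over the K
    parts, so the redundant extra dimension of K independent 0/1 units is
    gone.  The balance term alpha and the schedule are illustrative guesses."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    V = rng.random((N, K))
    V /= V.sum(axis=1, keepdims=True)          # start near the symmetric point
    T = T_start
    while T > T_end:
        for _ in range(sweeps):
            for i in range(N):                 # serial update (helps avoid oscillation)
                h = W[i] @ V - alpha * (V.sum(axis=0) - V[i])
                u = h / T
                u -= u.max()                   # numerical stability
                V[i] = np.exp(u) / np.exp(u).sum()
        T *= anneal
    return V.argmax(axis=1)                    # read off a discrete partition
```

On two disjoint 4-cliques, for example, the sweep should settle into a balanced partition that keeps each clique intact.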

1994 ◽  
Vol 6 (3) ◽  
pp. 341-356 ◽  
Author(s):  
A. L. Yuille ◽  
J. J. Kosowsky

In recent years there has been significant interest in adapting techniques from statistical physics, in particular mean field theory, to provide deterministic heuristic algorithms for obtaining approximate solutions to optimization problems. Although these algorithms have been shown experimentally to be successful, there has been little theoretical analysis of them. In this paper we demonstrate connections between mean field theory methods and other approaches, in particular barrier function and interior point methods. As an explicit example, we summarize our previous work on the linear assignment problem, in which we defined a number of algorithms, including deterministic annealing; we proved convergence, gave bounds on the convergence times, and showed relations to other optimization algorithms.
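The deterministic-annealing/barrier-function connection can be illustrated with a softassign-style sketch for the linear assignment problem. All parameter values here are invented, and the benefit matrix is assumed scaled to O(1) entries:

```python
import numpy as np

def softassign(A, T_start=1.0, T_end=0.01, anneal=0.9, sinkhorn_iters=100):
    """Deterministic annealing for linear assignment: at each temperature,
    Sinkhorn balancing projects the Gibbs weights exp(A/T) onto the doubly
    stochastic matrices.  Each such fixed point solves an entropy-regularised
    (barrier) version of the assignment LP, which is the interior-point
    connection.  Parameters are illustrative."""
    T = T_start
    M = np.ones_like(A, dtype=float)
    while T > T_end:
        # row-wise shift before exponentiating avoids overflow
        M = np.exp((A - A.max(axis=1, keepdims=True)) / T)
        for _ in range(sinkhorn_iters):        # alternate row/column normalisations
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
        T *= anneal
    return M.argmax(axis=1)                    # near-permutation at low T
```

Since M is rebuilt at each temperature, for a fixed benefit matrix only the lowest temperature matters; the annealing loop earns its keep when A itself depends on the current match, as in graph matching.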


2001 ◽  
Vol 11 (06) ◽  
pp. 561-572 ◽  
Author(s):  
ROSELI A. FRANCELIN ROMERO ◽  
JANUSZ KACPRZYK ◽  
FERNANDO GOMIDE

An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples, including shortest-path and fuzzy decision-making problems, are presented to show how the approach works.
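Bellman's Optimality Principle, on which the weight-assignment method rests, reduces in the shortest-path example to plain value iteration. The following minimal sketch (not the authors' network) makes the recursion concrete:

```python
import math

def shortest_path(cost, source, target):
    """Bellman value iteration.  cost[i][j] is the edge cost from i to j,
    math.inf if the edge is absent; the target is assumed reachable from
    the source.  V[i] is the cost-to-go from i to the target."""
    n = len(cost)
    V = [math.inf] * n
    V[target] = 0.0
    for _ in range(n - 1):                     # at most n-1 relaxation sweeps
        V = [min(V[i], min(cost[i][j] + V[j] for j in range(n) if j != i))
             for i in range(n)]
    # recover a path by greedy descent on the value function
    path, i = [source], source
    while i != target:
        i = min((j for j in range(n) if j != i),
                key=lambda j: cost[i][j] + V[j])
        path.append(i)
    return V[source], path
```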


2007 ◽  
Vol 19 (12) ◽  
pp. 3262-3292 ◽  
Author(s):  
Hédi Soula ◽  
Carson C. Chow

We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
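A minimal instance of the model class described above can be simulated directly. The gain and bias of the firing probability are invented, chosen so the mean-field fixed point k* = N f(k*) sits at exactly half the population:

```python
import numpy as np

def simulate(N=100, steps=20000, gain=0.02, bias=-1.0, seed=1):
    """Finite-size Markov chain: the number of neurons firing in the next
    temporal epoch is Binomial(N, f(k)), where k is the current count and
    f is a sigmoid of the summed (all-to-all) input.  With gain=0.02 and
    bias=-1.0 the mean-field fixed point is k* = N * f(k*) = N/2."""
    rng = np.random.default_rng(seed)
    f = lambda k: 1.0 / (1.0 + np.exp(-(gain * k + bias)))
    k = np.empty(steps, dtype=np.int64)
    k[0] = N // 2
    for t in range(steps - 1):
        k[t + 1] = rng.binomial(N, f(k[t]))
    return k

counts = simulate()
mean_rate = counts[1000:].mean() / 100   # empirical mean firing fraction
```

The empirical count variance exceeds the naive binomial value N f (1-f) = 25 because successive epochs are correlated; this is the kind of finite-size deviation from mean field theory that the closed-form solution accounts for.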


1988 ◽  
Vol 85 (6) ◽  
pp. 1973-1977 ◽  
Author(s):  
L. N. Cooper ◽  
C. L. Scofield

Author(s):  
Alejandro García ◽  
Isaac Chairez ◽  
Alexander Poznyak

This chapter tackles nonparametric identification and state estimation for uncertain chaotic systems via the dynamic neural network approach. The developed algorithms account for additive noise in the state, in the case of identification, and in the measurable output, in the case of state estimation. The mathematical model of the chaotic system is considered unknown; only the chaotic behavior and the maximal and minimal bounds of each state variable are taken into account by the algorithm. Mathematical analysis and simulation results are presented. An application to the so-called electronic Chua's circuit is carried out; in particular, a scheme of information encryption by the neural network observer over a noisy transmission channel is shown. Formal mathematical proofs and figures illustrate the robustness of the proposed algorithms, particularly in the presence of high-magnitude noise.
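As a toy analogue of the identification step, far simpler than the chapter's dynamic neural network observer, consider a scalar series-parallel identifier with one unknown nonlinear weight. The plant, learning rate, and input are all invented for illustration:

```python
import math

def identify(w_true=1.2, eta=0.1, steps=2000):
    """Toy series-parallel identification: the plant is
    x(t+1) = 0.5*x(t) + w_true*tanh(x(t)) + u(t), with w_true unknown.
    The identifier shares the structure but uses its estimate w, and
    adapts w by gradient descent on the squared prediction error
    (a scalar LMS analogue of a dynamic-neural-network learning law)."""
    x, w = 0.0, 0.0
    for t in range(steps):
        u = math.sin(0.3 * t)                        # persistently exciting input
        x_next = 0.5 * x + w_true * math.tanh(x) + u
        x_pred = 0.5 * x + w * math.tanh(x) + u      # prediction with estimate
        w += eta * (x_next - x_pred) * math.tanh(x)  # gradient update
        x = x_next
    return w
```

With a persistently exciting input the weight error contracts by a factor (1 - eta*tanh(x)**2) per step, so the estimate converges to the true weight.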


2003 ◽  
Vol 15 (4) ◽  
pp. 915-936 ◽  
Author(s):  
A. L. Yuille ◽  
Anand Rangarajan

The concave-convex procedure (CCCP) is a way to construct discrete-time iterative dynamical systems that are guaranteed to monotonically decrease global optimization or energy functions. This procedure can be applied to almost any optimization problem, and many existing algorithms can be interpreted in terms of it. In particular, we prove that all expectation-maximization algorithms and classes of Legendre minimization and variational bounding algorithms can be reexpressed in terms of CCCP. We show that many existing neural network and mean-field theory algorithms are also examples of CCCP. The generalized iterative scaling algorithm and Sinkhorn's algorithm can also be expressed as CCCP by changing variables. CCCP can be used both as a new way to understand, and prove the convergence of, existing optimization algorithms and as a procedure for generating new algorithms.
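A one-dimensional example makes the CCCP guarantee concrete. The energy and its convex/concave split below are invented for illustration:

```python
import math

def cccp_minimize(x0, steps=60):
    """CCCP on the toy energy E(x) = x**4/4 - x**2, split as a convex part
    x**4/4 plus a concave part -x**2.  Each iteration minimises the convex
    part plus the linearisation of the concave part at the current point:
    minimise x**4/4 - 2*x_old*x, i.e. solve x**3 = 2*x_old, which here has
    the closed form x_new = cbrt(2*x_old).  CCCP guarantees E never rises."""
    E = lambda x: x**4 / 4 - x**2
    xs = [x0]
    for _ in range(steps):
        x_old = xs[-1]
        x_new = math.copysign(abs(2 * x_old) ** (1.0 / 3.0), 2 * x_old)
        xs.append(x_new)
    return xs, [E(x) for x in xs]
```

Starting from x0 = 0.5 the iterates decrease the energy at every step and converge to the minimum at sqrt(2).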


Proceedings ◽  
2019 ◽  
Vol 46 (1) ◽  
pp. 4 ◽  
Author(s):  
Hung Diep ◽  
Miron Kaufman ◽  
Sanda Kaufman

Statistical physics models of social systems with a large number of members, each interacting with a subset of others, have been used in very diverse domains such as culture dynamics, crowd behavior, information dissemination and social conflicts. We observe that such models rely on the fact that large societal groups display surprising regularities despite individual agency. Unlike physics phenomena that obey Newton's third law, in the world of humans the magnitudes of action and reaction are not necessarily equal: the effect of the actions of group n on group m can differ from the effect of group m on group n. We thus use the spin language to describe humans with this observation in mind. Note that particular individual behaviors do not survive in statistical averages; only common characteristics remain. We have studied two-group as well as three-group conflicts, using time-dependent mean-field theory and Monte Carlo simulations. Each group is defined by two parameters, which express the strength of intra-group interaction among members and the group's attitude toward negotiation. The interaction with the other group is parameterized by a constant expressing an attraction or a repulsion to the other group's average attitude. The model includes a social temperature T which acts on each group and quantifies the social noise. One of the most striking features is the periodic oscillation of the attitudes toward negotiation or conflict for certain ranges of parameter values. Other striking results include chaotic behavior, namely intractable, unpredictable conflict outcomes.
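The periodic oscillations can be reproduced with a minimal time-dependent mean-field iteration. The couplings below are illustrative, not the paper's values:

```python
import math

def two_group_dynamics(J1=1.0, J2=1.0, K12=-1.5, K21=1.5, T=1.0, steps=400):
    """Time-dependent mean-field sketch of a two-group conflict.  m1, m2 in
    [-1, 1] are the mean attitudes; J1, J2 are intra-group couplings, and
    K12, K21 the inter-group ones.  K12 != K21 is the broken action-reaction
    symmetry (no Newton's third law) that sustains the oscillation."""
    m1, m2 = 0.5, -0.5
    traj = []
    for _ in range(steps):
        m1, m2 = (math.tanh((J1 * m1 + K12 * m2) / T),
                  math.tanh((J2 * m2 + K21 * m1) / T))
        traj.append((m1, m2))
    return traj
```

With these couplings the linearization around the origin has complex eigenvalues of modulus greater than one, so the attitudes neither settle at a fixed point nor diverge: they keep cycling between cooperation and conflict.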

