Deep Neural Network Approximated Dynamic Programming for Combinatorial Optimization

2020, Vol. 34 (02), pp. 1684-1691
Author(s): Shenghe Xu, Shivendra S. Panwar, Murali Kodialam, T.V. Lakshman

In this paper, we propose a general framework for combining deep neural networks (DNNs) with dynamic programming to solve combinatorial optimization problems. For problems that can be broken into smaller subproblems and solved by dynamic programming, we train a set of neural networks to replace the value or policy functions at each decision step. Two variants of the neural network approximated dynamic programming (NDP) method are proposed: in the value-based NDP method, the networks learn to estimate the value of each choice at the corresponding step, while in the policy-based NDP method the DNNs only estimate the best decision at each step. Training starts from the smallest problem size, and a new DNN for each next size is trained to cooperate with the previously trained DNNs. After all the DNNs are trained, the networks are fine-tuned together to further improve overall performance. We test NDP on the linear sum assignment problem, the traveling salesman problem, and the talent scheduling problem. Experimental results show that NDP can achieve considerable computation time reductions on hard problems with reasonable performance loss. In general, NDP can be applied to reducible combinatorial optimization problems to reduce computation time.
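
The step-by-step decoding the abstract describes is easy to picture in code. Below is a minimal sketch (PyTorch; names such as `StepValueNet` and the `transition` callback are illustrative assumptions, not the authors' code) of how a chain of trained per-step value networks could greedily decode a solution:

```python
# Minimal sketch of value-based NDP decoding, assuming per-step value
# networks have already been trained (all names here are hypothetical).
import torch
import torch.nn as nn

class StepValueNet(nn.Module):
    """Estimates the value of each remaining choice at one decision step."""
    def __init__(self, state_dim, num_choices):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, num_choices),
        )

    def forward(self, state):
        return self.net(state)

def ndp_greedy_decode(value_nets, initial_state, transition):
    """Walk through the decision steps; at each step take the choice
    with the best estimated value (minimization assumed)."""
    state = initial_state
    decisions = []
    for net in value_nets:              # one trained net per problem size/step
        with torch.no_grad():
            scores = net(state)         # estimated value of each choice
        choice = int(scores.argmin())
        decisions.append(choice)
        state = transition(state, choice)  # user-supplied state update
    return decisions
```

The policy-based variant would instead have each network output the decision directly, skipping the per-choice value estimates.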

Author(s): Enrique Mérida-Casermeiro, Domingo López-Rodríguez, Juan M. Ortiz-de-Lazcano-Lobato

Since McCulloch and Pitts' seminal work (McCulloch & Pitts, 1943), several models of discrete neural networks have been proposed, many of them able to assign a discrete value (other than unipolar or bipolar) to the output of a single neuron. These models have been applied to a wide variety of problems. One of the most important models was developed by J. Hopfield (Hopfield, 1982); it has been successfully applied in fields such as pattern and image recognition and reconstruction (Sun et al., 1995), design of analog-digital circuits (Tank & Hopfield, 1986), and, above all, combinatorial optimization (Hopfield & Tank, 1985; Takefuji, 1992; Takefuji & Wang, 1996), among others. The purpose of this work is to review some applications of multivalued neural models to combinatorial optimization problems, focusing specifically on the neural model MREM, since it includes many of the multivalued models found in the specialized literature.
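
As a concrete illustration of the multivalued setting, here is a minimal sketch (not the MREM model itself; the quadratic energy and the greedy update rule are simplifying assumptions) of discrete energy descent where each neuron may take any value from a finite set:

```python
# Toy multivalued Hopfield-style energy descent; W and the value set
# are assumed given. This is an illustration, not the MREM definition.
import numpy as np

def energy(W, s):
    """Standard quadratic Hopfield energy E(s) = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

def multivalued_update(W, s, values):
    """Asynchronously set each neuron to the value in `values`
    that yields the lowest network energy (greedy descent)."""
    s = s.copy()
    for i in np.random.permutation(len(s)):
        def energy_if(v):
            t = s.copy()
            t[i] = v
            return energy(W, t)
        s[i] = min(values, key=energy_if)
    return s
```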


1995, Vol. 115 (3), pp. 76-84
Author(s): Takahumi Oohori, Hiroaki Yamamoto, Nenso Setsu, Kazuhisa Watanabe

2015, Vol. 2015, pp. 1-14
Author(s): Oscar Montiel, Francisco Javier Díaz Delgadillo

Optimally solving combinatorial problems remains an open problem: determining the best arrangement of elements is a very complex task that becomes critical as the problem size increases. Researchers have proposed various algorithms for solving Combinatorial Optimization Problems (COPs) that take scalability into account; however, larger COPs still run into hardware limitations such as memory and CPU speed. It has been shown that the Reduce-Optimize-Expand (ROE) method can solve COPs faster with the same resources; in this methodology, the reduction step is the most important procedure, since inappropriate reductions will produce suboptimal results in the subsequent stages. In this work, an algorithm to improve the reduction step is proposed. It is based on a fuzzy inference system that classifies portions of the problem and removes them, together with metadata and adaptive heuristics, allowing COP-solving algorithms to make better use of hardware resources by dealing with smaller problem sizes. The Travelling Salesman Problem is used as a case study; instances ranging from 343 to 3056 cities show that the fuzzy logic approach produces a higher percentage of successful reductions.
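
To make the reduction step concrete, the following toy sketch prunes TSP edges by a fuzzy "promising" degree; the membership functions, the min t-norm, and the threshold are all illustrative assumptions, not the authors' inference system:

```python
# Illustrative fuzzy reduction for TSP: edges with a low fuzzy
# "promising" degree are pruned before the optimizer runs.
import numpy as np

def promising_degree(dist, nn_rank, max_rank):
    """Toy fuzzy inference: combine 'short edge' and 'near neighbor'
    memberships into one degree in [0, 1]."""
    short = np.exp(-dist)                        # shorter edge -> closer to 1
    near = 1.0 - min(nn_rank / max_rank, 1.0)    # low rank -> closer to 1
    return min(short, near)                      # fuzzy AND (min t-norm)

def reduce_edges(D, keep_threshold=0.2, max_rank=10):
    """Return a boolean mask of edges kept for the optimization stage."""
    n = D.shape[0]
    scale = D[D > 0].mean()                      # normalize distances
    keep = np.zeros_like(D, dtype=bool)
    for i in range(n):
        order = np.argsort(D[i])                 # neighbor ranks for city i
        for rank, j in enumerate(order):
            if i == j:
                continue
            mu = promising_degree(D[i, j] / scale, rank, max_rank)
            keep[i, j] = mu >= keep_threshold
    return keep
```

A reduction of this kind shrinks the search space handed to the optimize stage, which is the point of ROE; the expand stage then maps the reduced solution back to the full problem.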


2004, Vol. 18 (17n19), pp. 2579-2584
Author(s): Y. C. FENG, X. CAI

The transiently chaotic neural network (TCNN) is an approximation method for combinatorial optimization problems. The evolution function of the self-feedback connection weight, called the annealing function, influences the accuracy and search speed of the TCNN model. This paper analyzes two common annealing schemes. Furthermore, we propose a new piecewise (subsection) exponential annealing function. Finally, we compare these annealing schemes on the traveling salesman problem (TSP).
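
A brief sketch of what such annealing schedules look like: the exponential decay of the self-feedback weight is the classic TCNN scheme, while the piecewise variant below is only an illustrative guess at the proposed function, not its exact definition.

```python
# Self-feedback annealing schedules for a TCNN (illustrative sketch).
import numpy as np

def exponential(z0, beta, steps):
    """Classic TCNN annealing: z(t+1) = (1 - beta) * z(t)."""
    z = np.empty(steps)
    z[0] = z0
    for t in range(1, steps):
        z[t] = (1.0 - beta) * z[t - 1]
    return z

def piecewise_exponential(z0, betas, segment_lengths):
    """Apply a different decay rate on each segment, so the chaotic
    search phase and the convergence phase can be tuned separately."""
    z, cur = [z0], z0
    for beta, length in zip(betas, segment_lengths):
        for _ in range(length):
            cur *= (1.0 - beta)
            z.append(cur)
    return np.array(z)

# e.g. slow decay while searching chaotically, then fast decay to converge:
schedule = piecewise_exponential(0.08, betas=[0.001, 0.05],
                                 segment_lengths=[500, 200])
```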


2001, Vol. 11 (06), pp. 561-572
Author(s): ROSELI A. FRANCELIN ROMERO, JANUSZ KACPRZYK, FERNANDO GOMIDE

An artificial neural network with a two-layer feedback topology and generalized recurrent neurons is developed for solving nonlinear discrete dynamic optimization problems. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach to dynamic programming due to the inherent parallelism of neural networks; it further reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples, including shortest path and fuzzy decision-making problems, are presented to show how this approach works.
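
The Bellman recursion that the assigned weights encode can be stated as a plain dynamic program. The following toy shortest-path example (hypothetical stage graph, not from the paper) shows the backward value computation the network parallelizes:

```python
# Backward dynamic program for shortest paths, V(u) = min_v [c(u,v) + V(v)].
def shortest_path_dp(order, succ):
    """order: nodes in reverse topological order (terminal node first);
    succ[u]: list of (v, cost) arcs out of u."""
    V = {order[0]: 0.0}                      # terminal node has value 0
    for u in order[1:]:
        V[u] = min(c + V[v] for v, c in succ[u])
    return V

# Toy stage graph: a -> {b, c} -> d
succ = {"a": [("b", 2.0), ("c", 1.0)],
        "b": [("d", 2.0)],
        "c": [("d", 4.0)],
        "d": []}
print(shortest_path_dp(["d", "b", "c", "a"], succ))  # V["a"] == 4.0
```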


Author(s): Yuxin Ding

Traditional Hopfield networks have been widely used to solve combinatorial optimization problems. However, high-order Hopfield networks, an extension of traditional Hopfield networks, are seldom used for this purpose. In theory, compared with low-order networks, high-order networks have better properties, such as stronger approximation ability and faster convergence. In this chapter, the authors focus on how to use high-order networks to model combinatorial optimization problems. First, the high-order discrete Hopfield network is introduced; then the authors discuss how to find the high-order inputs of a neuron. Finally, the construction method of the energy function and the neural computing algorithm are presented. The N queens problem and the crossbar switch problem, which are NP-complete, are used as examples to illustrate how to model practical problems with high-order neural networks. The authors also discuss the performance of high-order networks on these two combinatorial optimization problems.
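
To illustrate the energy-function construction for N queens, here is a minimal sketch using standard quadratic penalty terms; the coefficients and the greedy descent loop are assumptions, and the chapter's high-order formulation would add product terms beyond these pairwise ones.

```python
# Penalty energy for N queens with a discrete Hopfield-style net
# (illustrative; second-order terms only, coefficients are assumptions).
import numpy as np

def energy(X, A=1.0, B=1.0):
    """Zero iff the 0/1 board X encodes a valid N-queens placement."""
    n = X.shape[0]
    e = A * ((X.sum(axis=1) - 1) ** 2).sum()       # one queen per row
    e += A * ((X.sum(axis=0) - 1) ** 2).sum()      # one queen per column
    for d in range(-(n - 1), n):                   # at most one per diagonal
        e += B * max(np.trace(X, offset=d) - 1, 0) ** 2
        e += B * max(np.trace(np.fliplr(X), offset=d) - 1, 0) ** 2
    return e

def descend(X, sweeps=100):
    """Asynchronous neural computing loop: flip a unit iff it lowers E."""
    n = X.shape[0]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                Y = X.copy()
                Y[i, j] = 1 - Y[i, j]
                if energy(Y) < energy(X):
                    X = Y
    return X
```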

