Binary Neural Network Approaches to Combinatorial Optimization Problems in Communication Networks

Author(s):  
Nobuo Funabiki
2004 ◽  
Vol 18 (17n19) ◽  
pp. 2579-2584 ◽  
Author(s):  
Y. C. FENG ◽  
X. CAI

A transiently chaotic neural network (TCNN) is an approximation method for combinatorial optimization problems. The evolution function of the self-feedback connection weight, called the annealing function, influences the accuracy and search speed of the TCNN model. This paper analyzes two common annealing schemes. Furthermore, we propose a new subsection exponential annealing function. Finally, we compare these annealing schemes on the traveling salesman problem (TSP).
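The schedules being compared can be made concrete with a short sketch. Below is a minimal Python illustration of the standard exponential decay of the TCNN self-feedback weight, z(t+1) = (1 − β)·z(t), alongside a hypothetical piecewise ("subsection") exponential variant that switches decay rates at chosen breakpoints. The parameter values and breakpoints are assumptions for illustration, not those used in the paper.

```python
# Sketch of two annealing schedules for the TCNN self-feedback weight z(t).
# Values of z0, beta, and the breakpoints below are illustrative assumptions.
import numpy as np

def exponential_annealing(z0, beta, steps):
    """Standard schedule: z(t+1) = (1 - beta) * z(t)."""
    z = np.empty(steps)
    z[0] = z0
    for t in range(1, steps):
        z[t] = (1.0 - beta) * z[t - 1]
    return z

def subsection_exponential_annealing(z0, betas, breakpoints, steps):
    """Hypothetical piecewise variant: a different decay rate on each segment."""
    z = np.empty(steps)
    z[0] = z0
    seg = 0
    for t in range(1, steps):
        if seg + 1 < len(breakpoints) and t >= breakpoints[seg + 1]:
            seg += 1  # move to the next segment and its decay rate
        z[t] = (1.0 - betas[seg]) * z[t - 1]
    return z

# Example: slow decay early (chaotic search), faster decay later (convergence).
z_std = exponential_annealing(z0=0.08, beta=0.002, steps=3000)
z_sub = subsection_exponential_annealing(
    z0=0.08, betas=[0.001, 0.005, 0.02], breakpoints=[0, 1000, 2000], steps=3000)
```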


2020 ◽  
Vol 34 (02) ◽  
pp. 1684-1691
Author(s):  
Shenghe Xu ◽  
Shivendra S. Panwar ◽  
Murali Kodialam ◽  
T.V. Lakshman

In this paper, we propose a general framework for combining deep neural networks (DNNs) with dynamic programming to solve combinatorial optimization problems. For problems that can be broken into smaller subproblems and solved by dynamic programming, we train a set of neural networks to replace the value or policy functions at each decision step. Two variants of the neural network approximated dynamic programming (NDP) method are proposed: in the value-based NDP method, the networks learn to estimate the value of each choice at the corresponding step, while in the policy-based NDP method the DNNs estimate only the best decision at each step. The training procedure of NDP starts from the smallest problem size, and a new DNN for the next size is trained to cooperate with the previous DNNs. After all the DNNs are trained, the networks are fine-tuned together to further improve overall performance. We test NDP on the linear sum assignment problem, the traveling salesman problem and the talent scheduling problem. Experimental results show that NDP can achieve considerable computation time reduction on hard problems with reasonable performance loss. In general, NDP can be applied to reducible combinatorial optimization problems to reduce computation time.
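To make the value-based variant concrete, the sketch below shows a greedy decoding loop in which a separate per-step network scores each remaining choice and the highest-scoring choice is taken. The state encoding, network interface, and transition function are assumptions for illustration, not the interfaces used in the paper.

```python
# Minimal sketch of value-based NDP decoding: at each step, the corresponding
# value network scores every remaining choice and the best one is taken greedily.
from typing import Callable, List, Sequence

def ndp_value_decode(state,
                     choices: Sequence,
                     value_nets: List[Callable],
                     transition: Callable):
    """Greedy decoding with one value network per decision step.

    value_nets[t](state, choice) -> estimated value of taking `choice` at step t.
    transition(state, choice)    -> (next_state, remaining_choices).
    """
    solution = []
    remaining = list(choices)
    for net in value_nets:
        if not remaining:
            break
        # Score every candidate with this step's network and pick the best.
        best = max(remaining, key=lambda c: net(state, c))
        solution.append(best)
        state, remaining = transition(state, best)
    return solution
```

In the training scheme described above, the network for the smallest problem size would be trained first, with each subsequent network trained to cooperate with the already-trained ones before a final joint fine-tuning pass.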


2019 ◽  
Vol 4 (2019) ◽  
pp. 3-12
Author(s):  
Fatma Mbarek ◽  
Volodymyr Mosorov

Combinatorial optimization challenges are rooted in real-life problems, continuous optimization problems, discrete optimization problems and other significant problems in telecommunications, including, for example, routing, the design of communication networks and load balancing. Load balancing applies to distributed systems and is used for managing web clusters. It distributes the load among web servers using several scheduling algorithms. The main motivation for the study is the fact that combinatorial optimization problems can be solved by applying optimization algorithms. These algorithms include ant colony optimization (ACO), honey bee (HB) and multi-objective optimization (MOO). The ACO and HB algorithms are inspired by the foraging behavior of ants and bees as they locate and gather food. However, these two algorithms were proposed for optimization problems with a single objective. In this context, ACO and HB have to be adapted to multi-objective optimization problems. This paper provides a summary of the surveyed optimization algorithms and discusses the adaptations of these three algorithms. This is followed by a detailed analysis and a comparison of the three major scheduling techniques mentioned above, as well as three new algorithms (resulting from combinations of the aforementioned techniques) used to efficiently handle load balancing issues (see the sketch below).
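As a rough illustration of how an ACO-style scheduler can be applied to load balancing, the following Python sketch assigns requests to servers using pheromone trails and a load-based heuristic, minimizing the maximum server load. The pheromone update, heuristic, and parameters are assumptions for illustration and do not reproduce any specific algorithm from the survey.

```python
# Illustrative ACO-style assignment of requests to servers for load balancing.
import random

def aco_assign(loads, n_servers, ants=20, iters=50, alpha=1.0, beta=2.0,
               rho=0.1, seed=0):
    """Assign each request (with given load) to a server, minimizing makespan."""
    rng = random.Random(seed)
    n = len(loads)
    tau = [[1.0] * n_servers for _ in range(n)]   # pheromone trails
    best_assign, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            server_load = [0.0] * n_servers
            assign = []
            for i in range(n):
                # Heuristic term favours lightly loaded servers.
                weights = [tau[i][s] ** alpha *
                           (1.0 / (1.0 + server_load[s])) ** beta
                           for s in range(n_servers)]
                s = rng.choices(range(n_servers), weights=weights)[0]
                assign.append(s)
                server_load[s] += loads[i]
            cost = max(server_load)
            if cost < best_cost:
                best_assign, best_cost = assign, cost
        # Evaporate all trails, then reinforce the best assignment found so far.
        for i in range(n):
            for s in range(n_servers):
                tau[i][s] *= (1.0 - rho)
            tau[i][best_assign[i]] += 1.0 / best_cost
    return best_assign, best_cost
```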


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Z. Fahimi ◽  
M. R. Mahmoodi ◽  
H. Nili ◽  
Valentin Polishchuk ◽  
D. B. Strukov

The increasing utility of specialized circuits and growing applications of optimization call for the development of efficient hardware accelerators for solving optimization problems. The Hopfield neural network is a promising approach for solving combinatorial optimization problems due to recent demonstrations of efficient mixed-signal implementations based on emerging non-volatile memory devices. Such mixed-signal accelerators also enable very efficient implementation of various annealing techniques, which are essential for finding optimal solutions. Here we propose a "weight annealing" approach, whose main idea is to ease convergence to the global minimum by keeping the network close to its ground state. This is achieved by initially setting all synaptic weights to zero, thus ensuring a quick transition of the Hopfield network to its trivial global-minimum state, and then gradually introducing the weights during the annealing process. Extensive numerical simulations show that our approach leads to better solutions, on average, for several representative combinatorial problems compared to prior Hopfield neural network solvers with chaotic or stochastic annealing. As a proof of concept, a 13-node graph partitioning problem and a 7-node maximum-weight independent set problem are solved experimentally using mixed-signal circuits based on, correspondingly, a 20 × 20 analog-grade TiO2 memristive crossbar and a 12 × 10 eFlash memory array.
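The weight-annealing idea can be illustrated with a short software sketch: a Hopfield (Ising-like) energy is minimized with asynchronous updates while the weight matrix is gradually scaled from zero to its full value. The linear ramp, update rule, and parameters below are assumptions for illustration, not the schedules used in the paper or its mixed-signal hardware.

```python
# Sketch of Hopfield energy minimization with "weight annealing":
# weights are ramped from 0 to their full value while neurons update.
import numpy as np

def hopfield_weight_annealing(W, b, steps=100, updates_per_step=50, seed=0):
    """Minimize E(x) = -0.5 x^T W x - b^T x over x in {-1, +1}^n."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = rng.choice([-1, 1], size=n)
    for step in range(1, steps + 1):
        scale = step / steps              # weights ramp from ~0 to 1 (assumed linear)
        W_eff = scale * W
        for _ in range(updates_per_step):
            i = rng.integers(n)
            # Asynchronous update: set neuron i to the sign of its local field.
            field = W_eff[i] @ x - W_eff[i, i] * x[i] + b[i]
            x[i] = 1 if field >= 0 else -1
    energy = -0.5 * x @ W @ x - b @ x
    return x, energy
```

Keeping the effective weights small early on keeps the network near the trivial ground state of the zero-weight problem, which is the intuition behind the annealing schedule described in the abstract.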


2005 ◽  
Vol 68 ◽  
pp. 297-305 ◽  
Author(s):  
Hiroki Tamura ◽  
Zongmei Zhang ◽  
Xinshun Xu ◽  
Masahiro Ishii ◽  
Zheng Tang
