A CONTINUATION APPROACH USING NCP FUNCTION FOR SOLVING MAX-CUT PROBLEM

2009 ◽  
Vol 26 (04) ◽  
pp. 445-456
Author(s):  
FENGMIN XU ◽  
CHENGXIAN XU ◽  
JIUQUAN REN

A continuous approach using an NCP function is proposed for approximating the solution of the max-cut problem. The max-cut problem is relaxed into an equivalent nonlinearly constrained continuous optimization problem, and a feasible direction method without line searches is presented for generating an optimal solution of the relaxed continuous optimization problem. The convergence of the algorithm is proved. Numerical experiments and comparisons on max-cut test problems show that satisfactory solutions can be obtained with less computation time. Furthermore, this is the first time that a feasible direction method has been combined with an NCP function for solving the max-cut problem, and a similar idea can be generalized to other combinatorial optimization problems.
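
The NCP-function reformulation and the specific feasible-direction step are not spelled out in the abstract, so the snippet below is only a minimal sketch of the underlying idea: the +/-1 spin variables are relaxed to a box-constrained continuous problem, improved by a simple ascent step that stays feasible, and rounded back to a cut. All names and constants are illustrative, not the authors' method.

```python
# Minimal sketch (not the authors' algorithm): the standard continuous
# relaxation behind such approaches, with x_i in {-1,+1} relaxed to
# x_i in [-1,1], solved by projected gradient ascent and rounded to a cut.
import numpy as np

def maxcut_relaxation(W, iters=500, step=0.01, seed=0):
    """W: symmetric nonnegative weight matrix of the graph."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = rng.uniform(-1, 1, n)               # continuous "spins"
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    for _ in range(iters):
        x += step * (L @ x)                 # ascent on cut(x) = x^T L x / 4
        x = np.clip(x, -1.0, 1.0)           # stay feasible in the box
    s = np.sign(x); s[s == 0] = 1           # round to a +/-1 assignment
    cut = 0.25 * s @ L @ s                  # weight of the induced cut
    return s, cut

# toy 4-cycle: the optimal cut value is 4
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(maxcut_relaxation(W))
```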

2011 ◽  
Vol 421 ◽  
pp. 559-563
Author(s):  
Yong Chao Gao ◽  
Li Mei Liu ◽  
Heng Qian ◽  
Ding Wang

The scale and complexity of the search space are important factors that determine how difficult an optimization problem is to solve, and information about the solution space can guide the search toward optimal solutions. Based on this observation, an algorithm for combinatorial optimization is proposed. The algorithm makes use of good solutions found by intelligent algorithms, contracts the search space, and partitions it into one or several optimal regions defined by the backbones of the combinatorial optimization solutions; optimization of the resulting small-scale problems is then carried out within these optimal regions. No statistical analysis is required before or during the solving process; instead, solution information is used to estimate the landscape of the search space, which improves both solving speed and solution quality. The algorithm opens a new path for solving combinatorial optimization problems, and experimental results also confirm its efficiency.
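
A minimal sketch of the backbone idea the abstract relies on, for binary-encoded solutions: variables that agree across all of the good solutions are fixed, and only the remaining free variables define the contracted subproblem. The paper's partitioning into optimal regions is more elaborate and is not reproduced here.

```python
# Backbone extraction for binary-encoded solutions: positions identical in
# every good solution form the backbone; fixing them contracts the search
# space to the remaining free variables (concept illustration only).
import numpy as np

def backbone(good_solutions):
    """good_solutions: array of shape (k, n) with entries in {0, 1}."""
    S = np.asarray(good_solutions)
    same = np.all(S == S[0], axis=0)        # positions identical in all solutions
    fixed = {i: int(S[0, i]) for i in np.flatnonzero(same)}
    free = [i for i in range(S.shape[1]) if i not in fixed]
    return fixed, free                      # fixed vars + reduced subproblem

pool = [[1, 0, 1, 1, 0],
        [1, 0, 0, 1, 0],
        [1, 0, 1, 1, 1]]
print(backbone(pool))   # -> ({0: 1, 1: 0, 3: 1}, [2, 4])
```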


2019 ◽  
Vol 5 (4) ◽  
pp. eaav2372 ◽  
Author(s):  
Hayato Goto ◽  
Kosuke Tatsumura ◽  
Alexander R. Dixon

Combinatorial optimization problems are ubiquitous but difficult to solve. Hardware devices for these problems have recently been developed by various approaches, including quantum computers. Inspired by recently proposed quantum adiabatic optimization using a nonlinear oscillator network, we propose a new optimization algorithm simulating adiabatic evolutions of classical nonlinear Hamiltonian systems exhibiting bifurcation phenomena, which we call simulated bifurcation (SB). SB is based on adiabatic and chaotic (ergodic) evolutions of nonlinear Hamiltonian systems. SB is also suitable for parallel computing because of its simultaneous updating. Implementing SB with a field-programmable gate array, we demonstrate that the SB machine can obtain good approximate solutions of an all-to-all connected 2000-node MAX-CUT problem in 0.5 ms, which is about 10 times faster than a state-of-the-art laser-based machine called a coherent Ising machine. SB will accelerate large-scale combinatorial optimization harnessing digital computer technologies and also offer a new application of computational and mathematical physics.
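
A compact sketch of an SB-style update for an Ising problem with coupling matrix J, assuming the adiabatic variant with a linear pump ramp and a symplectic Euler discretization; the constants (a0, c0, dt) are illustrative rather than the tuned values used on the FPGA.

```python
# Simulated-bifurcation-style sketch: each spin becomes a nonlinear
# oscillator (x_i, y_i), the pump a(t) is ramped slowly past the bifurcation
# point, and the final signs of x give the spin configuration.
import numpy as np

def simulated_bifurcation(J, steps=2000, dt=0.01, a0=1.0, c0=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 0.01 * rng.standard_normal(n)       # oscillator positions
    y = 0.01 * rng.standard_normal(n)       # oscillator momenta
    for k in range(steps):
        a = a0 * k / steps                  # slow (adiabatic) pump ramp
        y += dt * (-(x**2 + a0 - a) * x + c0 * (J @ x))
        x += dt * a0 * y                    # symplectic (semi-implicit) Euler
    return np.where(x >= 0, 1, -1)

# toy example: 3-node antiferromagnet (MAX-CUT weights w map to J = -w)
J = -np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
print(simulated_bifurcation(J))
```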


2013 ◽  
Vol 651 ◽  
pp. 879-884
Author(s):  
Qi Wang ◽  
Ying Min Wang ◽  
Yan Ni Gou

Matched field processing (MFP) for localization usually needs to match all the replica fields in the observed sea area with the received fields and then find the maximum peaks in the matched results, so finding that maximum effectively and quickly is a problem. Classical simulated annealing (CSA), which has global optimization capability, is widely used for combinatorial optimization problems. For passive localization, the position of the source can be treated as a combinatorial optimization problem over range and depth, so a new matched field processing method based on CSA is proposed. To evaluate the performance of this method, the normal mode model was used to calculate the replica fields, and the algorithm was tested on a dataset collected in the Mediterranean Sea in 1994. Compared with conventional matched field passive localization (CMFP), the new method successfully locates the optimum peak at which the CMFP output power is maximum, while being faster than CMFP.
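
A generic simulated-annealing sketch over a discrete (range, depth) grid, in the spirit of the method described; `ambiguity(r, d)` is a placeholder for the matched-field correlation surface and is not the paper's propagation model.

```python
# Simulated annealing over a (range, depth) grid: propose a neighboring
# grid cell, always accept improvements, occasionally accept worse cells,
# and cool the temperature geometrically.
import math, random

def anneal(ambiguity, ranges, depths, T0=1.0, alpha=0.99, steps=3000, seed=0):
    rng = random.Random(seed)
    i, j = rng.randrange(len(ranges)), rng.randrange(len(depths))
    cur = ambiguity(ranges[i], depths[j])
    best = (cur, ranges[i], depths[j])
    T = T0
    for _ in range(steps):
        i2 = min(max(i + rng.choice([-1, 0, 1]), 0), len(ranges) - 1)
        j2 = min(max(j + rng.choice([-1, 0, 1]), 0), len(depths) - 1)
        cand = ambiguity(ranges[i2], depths[j2])
        if cand > cur or rng.random() < math.exp((cand - cur) / T):
            i, j, cur = i2, j2, cand        # accept the move
            best = max(best, (cur, ranges[i], depths[j]))
        T *= alpha                          # geometric cooling schedule
    return best                             # (peak value, range, depth)

# toy ambiguity surface with a single peak at (5.0 km range, 60 m depth)
f = lambda r, d: -((r - 5.0)**2 + ((d - 60.0) / 20.0)**2)
print(anneal(f, [i * 0.5 for i in range(21)], list(range(0, 201, 5))))
```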


2013 ◽  
Vol 411-414 ◽  
pp. 1904-1910
Author(s):  
Kai Zhong Jiang ◽  
Tian Bo Wang ◽  
Zhong Tuan Zheng ◽  
Yu Zhou

An algorithm based on free search is proposed for combinatorial optimization problems. In this algorithm, a feasible solution is encoded as a full permutation of all the elements, and the transformation of one solution into another can be interpreted as the transformation of one permutation into another. The algorithm is then combined with intersection elimination. The discrete free search algorithm greatly improves the convergence rate of the search process and enhances the quality of the results. Experimental results on standard TSP data show that the performance of the proposed algorithm is about 2.7% better than that of the genetic algorithm.
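
A minimal sketch of the permutation encoding and an intersection-elimination pass for a Euclidean tour: two crossing edges can always be uncrossed by reversing the segment between them (a 2-opt move), which never lengthens the tour. The free search metaheuristic itself is not shown.

```python
# Permutation-encoded tour plus repeated 2-opt segment reversals, which
# remove edge crossings in a Euclidean TSP tour.
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def eliminate_intersections(tour, pts):
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                # reverse the segment between positions i+1 and j
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour

pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(eliminate_intersections([0, 2, 1, 3], pts))   # crossing tour -> square
```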


2020 ◽  
Author(s):  
Saavan Patel ◽  
Lili Chen ◽  
Philip Canoza ◽  
Sayeef Salahuddin

In this work we demonstrate the use of the Restricted Boltzmann Machine (RBM) as a stochastic neural network capable of solving NP-hard combinatorial optimization problems efficiently. By mapping the RBM onto a reconfigurable Field Programmable Gate Array (FPGA), we can effectively hardware-accelerate the RBM's stochastic sampling algorithm. We benchmark the RBM against the DWave 2000Q quantum adiabatic computer and the optical Coherent Ising Machine on two such optimization problems: the MAX-CUT problem and the Sherrington-Kirkpatrick (SK) spin glass. The hardware-accelerated RBM shows asymptotic scaling similar to or better than these other accelerators. This leads to 10^7x and 10^5x time-to-solution improvements compared to the DWave 2000Q on the MAX-CUT and SK problems, respectively, along with 150x and 1000x improvements compared to the Coherent Ising Machine annealer on those problems. By utilizing commodity hardware running at room temperature, the RBM shows potential for immediate and scalable use.
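
A sketch of the block Gibbs sampling loop that such accelerators speed up, assuming the problem has already been mapped onto RBM weights W and biases b, c (that mapping is the paper's contribution and is not reproduced here); visible and hidden units are resampled in alternating parallel blocks and the lowest-energy visible state seen is kept as the candidate solution.

```python
# Block Gibbs sampling in an RBM: all hidden units are resampled given the
# visible layer, then all visible units given the hidden layer, and the
# best (lowest-energy) joint state encountered is recorded.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gibbs_sample(W, b, c, sweeps=1000, seed=0):
    rng = np.random.default_rng(seed)
    nv, nh = W.shape
    v = rng.integers(0, 2, nv)                                  # random visible start
    best_v, best_e = v.copy(), np.inf
    for _ in range(sweeps):
        h = (rng.random(nh) < sigmoid(v @ W + c)).astype(int)   # all hidden at once
        v = (rng.random(nv) < sigmoid(W @ h + b)).astype(int)   # all visible at once
        e = -(v @ W @ h) - b @ v - c @ h                        # RBM energy
        if e < best_e:
            best_v, best_e = v.copy(), e
    return best_v, best_e

# toy usage with random (problem-agnostic) parameters
rng = np.random.default_rng(1)
W = rng.standard_normal((6, 4)); b = np.zeros(6); c = np.zeros(4)
print(gibbs_sample(W, b, c))
```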


Computation ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 75
Author(s):  
Angel E. Rodriguez-Fernandez ◽  
Bernardo Gonzalez-Torres ◽  
Ricardo Menchaca-Mendez ◽  
Peter F. Stadler

MAX-CUT is one of the well-studied NP-hard combinatorial optimization problems. It can be formulated as an integer quadratic programming problem and admits a simple relaxation obtained by replacing the integer "spin" variables x_i by unit vectors v_i. The Goemans–Williamson rounding algorithm assigns each solution vector of the relaxed quadratic program to an integer spin depending on the sign of the scalar product v_i·r with a random vector r. Here, we investigate whether better graph cuts can be obtained by instead using a more sophisticated clustering algorithm, and we answer this question affirmatively. Different initializations of k-means and k-medoids clustering produce better cuts for the graph instances of the best-known MAX-CUT benchmark. In particular, we found a strong correlation between cluster quality and cut weights during the evolution of the clustering algorithms. Finally, since in general the maximal cut weight of a graph is not known beforehand, we derived instance-specific lower bounds for the approximation ratio, which indicate how close a solution is to the global optimum for a particular instance. For the graphs in our benchmark, the instance-specific lower bounds significantly exceed the Goemans–Williamson guarantee.
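
A sketch comparing the two rounding schemes on precomputed relaxation vectors V (one unit vector per vertex, e.g. from an SDP solver, which is not shown): Goemans–Williamson rounds by the sign of a random projection, while the alternative clusters the same vectors into two groups. scikit-learn's KMeans stands in for the k-means/k-medoids variants studied, and the toy vectors below are made up for illustration.

```python
# Rounding relaxation vectors to a cut: random-hyperplane (GW) rounding
# versus 2-means clustering of the same vectors.
import numpy as np
from sklearn.cluster import KMeans

def cut_weight(W, labels):
    s = np.where(np.asarray(labels) > 0, 1, -1)
    return 0.25 * np.sum(W * (1 - np.outer(s, s)))   # sum of cut edge weights

def gw_rounding(V, rng):
    r = rng.standard_normal(V.shape[1])              # random hyperplane normal
    return (V @ r >= 0).astype(int)

def kmeans_rounding(V, seed=0):
    return KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(V)

# toy example: 5-cycle weights and some made-up unit "spin" vectors
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
rng = np.random.default_rng(0)
V = rng.standard_normal((5, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(cut_weight(W, gw_rounding(V, rng)), cut_weight(W, kmeans_rounding(V)))
```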


2011 ◽  
Vol 19 (4) ◽  
pp. 597-637 ◽  
Author(s):  
Francisco Chicano ◽  
L. Darrell Whitley ◽  
Enrique Alba

A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
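
For reference, the property the abstract appeals to can be stated compactly. In the notation common to this literature (a d-regular neighborhood N(x), mean objective value f-bar, neighborhood-graph Laplacian Delta), a landscape is elementary when f is a Laplacian eigenfunction up to an additive constant, which is equivalent to Grover's wave equation with eigenvalue lambda = -k; the notation here is assumed, not quoted from the paper.

```latex
% Elementary landscape condition (standard form): f is an eigenfunction of
% the neighborhood Laplacian up to its mean, equivalently Grover's wave
% equation holds with \lambda = -k.
\Delta\,(f - \bar{f}) = \lambda\,(f - \bar{f})
\qquad\Longleftrightarrow\qquad
\frac{1}{d}\sum_{y \in N(x)} f(y) = f(x) + \frac{k}{d}\,\bigl(\bar{f} - f(x)\bigr).
```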

