max cut problem
Recently Published Documents


TOTAL DOCUMENTS: 100 (FIVE YEARS: 19)

H-INDEX: 16 (FIVE YEARS: 2)

Author(s):  
Ivan Sergienko ◽  
Vladimir Shylo ◽  
Valentyna Roshchyn ◽  
Petro Shylo

Introduction. Solving large-scale discrete optimization problems requires processing large volumes of data in a reasonable time, which is feasible only on multiprocessor computer systems. However, adapting existing optimization algorithms to exploit the full benefits of these parallel computing systems is a daunting challenge. Without efficient and scalable parallel methods, the available computational resources are used ineffectively. In this connection, algorithm unions (portfolios and teams) play a crucial role in the parallel processing of discrete optimization problems. Purpose. The purpose of this paper is to study the efficiency of algorithm portfolios in solving the weighted max-cut problem. The research is carried out in two stages using stochastic local search algorithms. Results. In this paper, we investigate homogeneous and non-homogeneous algorithm portfolios. We develop homogeneous portfolios of two stochastic local optimization algorithms for the weighted max-cut problem, which has numerous applications. The results confirm the advantages of the proposed methods. Conclusions. Algorithm portfolios can be used to solve well-known discrete optimization problems of unprecedented scale and to significantly reduce their solution time. As further work, we propose introducing communication between algorithms, namely teams and portfolios of algorithm teams. The algorithms in a team communicate with each other to boost overall performance. It is expected that such communication will amplify the best features of the underlying algorithms and improve both computation times and solution quality. To gain any computational benefit from communication, the underlying algorithms must be able to make effective use of the data being exchanged. Keywords: Discrete optimization, algorithm portfolios, computational experiment.
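As a rough illustration of the portfolio idea described above, the sketch below runs several independent stochastic 1-flip local searches for weighted max-cut with different random seeds and keeps the best cut found. The graph, the gain computation, and the sequential driver loop are illustrative assumptions, not the authors' parallel implementation; a real portfolio would run the workers on separate processors.

```python
# Minimal sketch of a homogeneous algorithm portfolio for weighted max-cut:
# the same stochastic 1-flip local search is run with different seeds and the
# best cut is kept. Illustrative only; not the paper's implementation.
import random

def cut_value(edges, side):
    # an edge contributes its weight when its endpoints lie on different sides
    return sum(w for u, v, w in edges if side[u] != side[v])

def local_search_1flip(n, edges, seed, max_iters=10_000):
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    for _ in range(max_iters):
        v = rng.randrange(n)
        # gain of flipping v: uncut incident edges become cut and vice versa
        gain = sum(w if side[v] == side[u] else -w for u, w in adj[v])
        if gain > 0:
            side[v] ^= 1
    return cut_value(edges, side), side

def portfolio(n, edges, num_workers=4):
    # homogeneous portfolio: same algorithm, different random seeds
    # (run sequentially here; in practice each worker gets its own processor)
    results = [local_search_1flip(n, edges, seed) for seed in range(num_workers)]
    return max(results, key=lambda r: r[0])

if __name__ == "__main__":
    edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 1.5), (0, 2, 2.5)]
    best_cut, best_side = portfolio(4, edges)
    print(best_cut, best_side)
```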


Author(s):  
Heber F. Amaral ◽  
Sebastián Urrutia ◽  
Lars M. Hvattum

Abstract Local search is a fundamental tool in the development of heuristic algorithms. A neighborhood operator takes a current solution and returns a set of similar solutions, denoted as neighbors. In best improvement local search, the best of the neighboring solutions replaces the current solution in each iteration. In first improvement local search, on the other hand, the neighborhood is explored only until an improving solution is found, which then replaces the current solution. In this work, we propose a new strategy for local search that attempts to avoid low-quality local optima by selecting in each iteration the improving neighbor that has the fewest possible attributes in common with local optima. To this end, it uses inequalities previously employed as optimality cuts in the context of integer linear programming. The novel method, referred to as delayed improvement local search, is implemented and evaluated using the travelling salesman problem with the 2-opt neighborhood and the max-cut problem with the 1-flip neighborhood as test cases. Computational results show that the new strategy, while slower, obtains better local optima than the traditional local search strategies. The comparison favours the new strategy both in experiments with a fixed computation time and in experiments with a fixed target.
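To make the contrast between the two traditional strategies concrete, here is a minimal sketch of best-improvement and first-improvement local search with the 1-flip neighborhood on max-cut. The adjacency-list format and the example graph are assumptions for illustration; the paper's delayed-improvement selection rule, which scores improving flips by their overlap with known local optima via optimality cuts, is only noted in a comment because its details are not given above.

```python
# Minimal sketch: best-improvement vs first-improvement 1-flip local search
# for max-cut. The delayed-improvement strategy of the paper would instead
# pick, among improving flips, the one sharing fewest attributes with stored
# local optima; that scoring is omitted here.
def build_adj(n, edges):
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    return adj

def flip_gain(adj, side, v):
    # change in cut weight if vertex v switches sides
    return sum(w if side[v] == side[u] else -w for u, w in adj[v])

def best_improvement(adj, side):
    while True:
        g, v = max((flip_gain(adj, side, v), v) for v in range(len(side)))
        if g <= 0:
            return side              # local optimum: no improving flip left
        side[v] ^= 1                 # apply the single best improving flip

def first_improvement(adj, side):
    improved = True
    while improved:
        improved = False
        for v in range(len(side)):
            if flip_gain(adj, side, v) > 0:
                side[v] ^= 1         # apply the first improving flip found
                improved = True
                break
    return side

if __name__ == "__main__":
    edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0), (2, 3, 2.0)]
    adj = build_adj(4, edges)
    print(best_improvement(adj, [0, 0, 0, 0]))
    print(first_improvement(adj, [0, 0, 0, 0]))
```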


2021 ◽  
Author(s):  
Wenjia Zhang ◽  
Wencheng Sun ◽  
Yuanyuan Liu ◽  
Qingwen Liu ◽  
Jiangbing Du ◽  
...  

Abstract The search in physics and biology for ways to accelerate algorithms for non-deterministic polynomial-time (NP) hard problems has inspired a large number of special-purpose machine models. The Ising machine has become an efficient solver for various combinatorial optimization problems. As a computing accelerator, large-scale photonic spatial Ising machines have great advantages and potential owing to their excellent scalability and compact systems. However, a fundamental limitation of current photonic spatial Ising machines is the limited flexibility with which problems can be configured in the accelerator model. Arbitrary spin interactions are highly desired for solving various NP-hard problems. Moreover, the absence of an external magnetic field in existing photonic Ising machines further narrows the freedom to map optimization applications. In this paper, we propose a novel quadrature photonic spatial Ising machine that breaks through these limitations of photonic Ising accelerators by synchronous phase manipulation in two and three sections. We experimentally demonstrate the solution of max-cut problems with graph order 100 and density from 0.5 to 1 after roughly 100 iterations. In simulation, we derive and verify solutions for max-cut problems with more than 1600 nodes and analyze the system's tolerance to light misalignment. Moreover, the vertex cover problem, modeled as an Ising model with an external magnetic field, is successfully implemented and solved to optimality. Our work points toward flexible problem solving with large-scale photonic spatial Ising machines.
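Since the machine above solves problems cast as Ising models, the following sketch shows the standard max-cut-to-Ising mapping such a device would minimize, with a simple simulated-annealing loop standing in for the optical hardware. The cooling schedule, example graph, and parameters are illustrative assumptions and bear no relation to the experimental setup.

```python
# Minimal sketch of the Ising mapping behind photonic max-cut solvers:
# with spins s_i in {-1, +1}, cut(s) = (W - sum_{(i,j)} w_ij s_i s_j) / 2,
# where W is the total edge weight, so minimizing the Ising energy maximizes
# the cut. Simulated annealing stands in for the optical hardware here.
import math
import random

def ising_energy(edges, s):
    return sum(w * s[u] * s[v] for u, v, w in edges)

def cut_from_spins(edges, s):
    total = sum(w for _, _, w in edges)
    return 0.5 * (total - ising_energy(edges, s))

def anneal(n, edges, steps=20_000, t0=2.0, t1=0.01, seed=0):
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    s = [rng.choice((-1, 1)) for _ in range(n)]
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)                  # geometric cooling
        v = rng.randrange(n)
        dE = -2 * s[v] * sum(w * s[u] for u, w in adj[v])  # energy change of flipping v
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            s[v] = -s[v]
    return s

if __name__ == "__main__":
    edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
    spins = anneal(4, edges)
    print(cut_from_spins(edges, spins), spins)
```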


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Hirofumi Nishi ◽  
Taichi Kosugi ◽  
Yu-ichiro Matsushita

Abstract The imaginary-time evolution method is a well-known approach for obtaining the ground state in quantum many-body problems on a classical computer. The recently proposed quantum imaginary-time evolution method (QITE) suffers from deep circuit depth, which makes it difficult to implement on noisy intermediate-scale quantum (NISQ) devices. In this study, a nonlocal approximation is developed to tackle this difficulty. We found that by removing the locality condition, or local approximation (LA), which is imposed when the imaginary-time evolution operator is converted to a unitary operator, the quantum circuit depth is significantly reduced. We propose two-step approximation methods based on a nonlocality condition: extended LA (eLA) and nonlocal approximation (NLA). To confirm the validity of eLA and NLA, we apply them to the max-cut problem on an unweighted 3-regular graph and a weighted fully connected graph, and we comparatively evaluate the performances of LA, eLA, and NLA. The eLA and NLA methods require far shallower circuits than LA to maintain the same level of computational accuracy. Further, we develop a "compression" method of the quantum circuit for the imaginary-time steps to further reduce the circuit depth in the QITE method. The eLA, NLA, and compression methods introduced in this study allow us to significantly reduce the circuit depth and the accumulation of gate errors, and they pave the way for implementing the QITE method on NISQ devices.
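For intuition about what QITE approximates, the sketch below performs exact classical imaginary-time evolution toward the ground state of a small max-cut Ising Hamiltonian, which is diagonal in the computational basis. The example graph, time step, and iteration count are assumptions for illustration; the quantum-circuit constructions (LA, eLA, NLA, compression) discussed above are not reproduced.

```python
# Minimal sketch of classical imaginary-time evolution toward the ground
# state of a max-cut Ising Hamiltonian H = sum_{(i,j)} w_ij Z_i Z_j.
# Repeatedly applying e^{-dtau H} and renormalizing projects onto the ground
# state, which encodes a maximum cut. Illustrative only; QITE approximates
# this process with unitaries on a quantum device.
import numpy as np

def maxcut_diagonal(n, edges):
    # diagonal entries of H over the 2^n computational basis states
    diag = np.zeros(2 ** n)
    for state in range(2 ** n):
        s = [1 - 2 * ((state >> i) & 1) for i in range(n)]  # bits -> spins +-1
        diag[state] = sum(w * s[u] * s[v] for u, v, w in edges)
    return diag

def imaginary_time_evolution(diag, dtau=0.1, steps=200):
    psi = np.ones_like(diag) / np.sqrt(diag.size)           # uniform initial state
    for _ in range(steps):
        psi = np.exp(-dtau * diag) * psi                     # apply e^{-dtau H}
        psi /= np.linalg.norm(psi)                           # renormalize
    return psi

if __name__ == "__main__":
    edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0), (2, 3, 2.0)]
    diag = maxcut_diagonal(4, edges)
    psi = imaginary_time_evolution(diag)
    best = int(np.argmax(psi ** 2))
    print(f"dominant basis state: {best:04b}, energy {diag[best]}")
```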


Quantum ◽  
2021 ◽  
Vol 5 ◽  
pp. 454
Author(s):  
Benjamin Tan ◽  
Marc-Antoine Lemonde ◽  
Supanut Thanasilp ◽  
Jirawat Tangpanitanon ◽  
Dimitris G. Angelakis

We propose and analyze a set of variational quantum algorithms for solving quadratic unconstrained binary optimization problems in which a problem consisting of n_c classical variables can be implemented on O(log n_c) qubits. The underlying encoding scheme allows for a systematic increase in the correlations among the classical variables captured by a variational quantum state by progressively increasing the number of qubits involved. We first examine the simplest limit, where all correlations are neglected, i.e. when the quantum state can only describe statistically independent classical variables. We apply this minimal encoding to find approximate solutions of a general problem instance comprising 64 classical variables using 7 qubits. Next, we show how two-body correlations between the classical variables can be incorporated in the variational quantum state and how they can improve the quality of the approximate solutions. We give an example by solving a 42-variable Max-Cut problem using only 8 qubits, exploiting the specific topology of the problem. We analyze whether these cases can be optimized efficiently given the limited resources available in state-of-the-art quantum platforms. Lastly, we present the general framework for extending the expressibility of the probability distribution to any multi-body correlations.
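The following sketch shows only the classical picture behind the simplest limit described above: with all correlations neglected, the variational state supplies independent marginals p_i = P(x_i = 1) for the n_c classical variables, and the quantity to minimize is the expected QUBO cost under those marginals. The function names, the rounding rule, and the triangle max-cut QUBO are assumptions for illustration; the quantum circuit producing the marginals from O(log n_c) qubits is not reproduced.

```python
# Minimal sketch of the cost function optimized in the "minimal encoding"
# limit: expected QUBO value under independent Bernoulli marginals p_i.
# Illustrative only; the O(log n_c)-qubit encoding itself is not shown.
import numpy as np

def expected_qubo_cost(Q, p):
    # E[x^T Q x] for independent Bernoulli(p_i):
    # E[x_i x_j] = p_i p_j for i != j, and E[x_i^2] = p_i on the diagonal
    off = Q - np.diag(np.diag(Q))
    return p @ off @ p + np.diag(Q) @ p

def round_solution(p):
    return (p >= 0.5).astype(int)        # simple deterministic rounding

if __name__ == "__main__":
    # QUBO for max-cut on a unit-weight triangle: minimizing this cost
    # maximizes the cut, since cut(x) = sum_{(i,j)} (x_i + x_j - 2 x_i x_j)
    Q = np.array([[-2.0, 2.0, 2.0],
                  [ 0.0, -2.0, 2.0],
                  [ 0.0,  0.0, -2.0]])
    p = np.array([0.9, 0.1, 0.6])
    print(expected_qubo_cost(Q, p), round_solution(p))
```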


2021 ◽  
Author(s):  
Mohamad Mahdi Mohades ◽  
Mohammad Hossein Kahaei

The max-cut problem asks for a cut of a graph that splits its vertices into two subsets so that the number of edges between the two subsets is as large as possible. Because the problem is NP-hard, it is typically tackled with suboptimal algorithms. In this paper, we propose a fast and accurate Riemannian optimization algorithm for the max-cut problem. To this end, we develop a gradient descent algorithm and prove its convergence. Our simulation results show that the proposed method is highly efficient on several previously studied graphs. Specifically, it is on average 50 times faster than the best well-known techniques at a slight cost in quality, attaining on average 0.9729 of their max-cut value.
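For orientation, here is a minimal sketch of a generic Riemannian (manifold) relaxation for max-cut in the spirit of the low-rank SDP approach: each vertex is assigned a unit vector, projected gradient descent runs on the product of spheres, and a random hyperplane rounds the vectors to a cut. The step size, rank, and rounding are assumptions for illustration, not the authors' algorithm or its convergence guarantees.

```python
# Minimal sketch of a Riemannian relaxation for max-cut: minimize
# <W, V V^T> over matrices V whose rows are unit vectors, then round with a
# random hyperplane. Generic illustration; not the paper's exact method.
import numpy as np

def riemannian_maxcut(W, k=4, lr=0.1, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    V = rng.normal(size=(n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)           # rows on the unit sphere
    for _ in range(iters):
        egrad = 2.0 * W @ V                                  # Euclidean gradient of <W, V V^T>
        rgrad = egrad - np.sum(egrad * V, axis=1, keepdims=True) * V  # tangent projection
        V = V - lr * rgrad                                   # descent step
        V /= np.linalg.norm(V, axis=1, keepdims=True)        # retraction back to the spheres
    return V

def hyperplane_round(V, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.normal(size=V.shape[1])
    return np.sign(V @ r)                                    # +-1 side per vertex

def cut_value(W, s):
    return 0.25 * np.sum(W * (1.0 - np.outer(s, s)))

if __name__ == "__main__":
    W = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 1],
                  [0, 1, 1, 0]], dtype=float)
    V = riemannian_maxcut(W)
    s = hyperplane_round(V)
    print(cut_value(W, s), s)
```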


2021 ◽  
Vol 852 ◽  
pp. 172-184
Author(s):  
Christine Dahn ◽  
Nils M. Kriege ◽  
Petra Mutzel ◽  
Julian Schilling

2020 ◽  
Author(s):  
Saavan Patel ◽  
Lili Chen ◽  
Philip Canoza ◽  
Sayeef Salahuddin

Abstract In this work we demonstrate the use of the Restricted Boltzmann Machine (RBM) as a stochastic neural network capable of efficiently solving NP-hard combinatorial optimization problems. By mapping the RBM onto a reconfigurable Field Programmable Gate Array (FPGA), we can effectively hardware-accelerate the RBM's stochastic sampling algorithm. We benchmark the RBM against the D-Wave 2000Q quantum adiabatic computer and the optical Coherent Ising Machine on two such optimization problems: the MAX-CUT problem and the Sherrington-Kirkpatrick (SK) spin glass. The hardware-accelerated RBM shows asymptotic scaling similar to or better than these other accelerators. This leads to 10^7x and 10^5x time-to-solution improvements compared to the D-Wave 2000Q on the MAX-CUT and SK problems respectively, along with 150x and 1000x improvements compared to the Coherent Ising Machine annealer on those problems. By utilizing commodity hardware running at room temperature, the RBM shows potential for immediate and scalable use.
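The stochastic sampling routine being accelerated is block Gibbs sampling, which the sketch below illustrates: visible and hidden units are conditionally independent given the other layer, so each layer is resampled in one parallel step. The weights, biases, and sizes here are arbitrary assumptions; how the MAX-CUT or SK Hamiltonian is mapped onto the RBM parameters, and the FPGA implementation itself, follow the paper and are not reproduced.

```python
# Minimal sketch of block Gibbs sampling in an RBM, the kernel that an RBM
# accelerator repeats. Parameters here are random placeholders; the mapping
# of an optimization problem onto W, b, c is not shown.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    # sample the hidden layer given the visible layer, then the reverse
    h = (rng.random(c.size) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(b.size) < sigmoid(h @ W.T + b)).astype(float)
    return v, h

def sample(W, b, c, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    v = (rng.random(b.size) < 0.5).astype(float)   # random initial visible state
    for _ in range(steps):
        v, _ = gibbs_step(v, W, b, c, rng)
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(scale=0.5, size=(6, 4))         # 6 visible units, 4 hidden units
    b = np.zeros(6)                                # visible biases
    c = np.zeros(4)                                # hidden biases
    print(sample(W, b, c))
```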

