Graph Magnification in Rapid Mixing of Markov Chains Associated with the Local Search-Based Metaheuristics

Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 47
Author(s):  
Ajitha K. B. Shenoy ◽  
Smitha N. Pai

The structural properties of the search graph play an important role in the success of local search-based metaheuristic algorithms, and magnification is one such property. This study establishes the relationship between the magnification of a search graph and the mixing time of the Markov chain (MC) induced by a local search-based metaheuristic on that search space. The result shows that the mixing time of the ergodic reversible Markov chain induced by the local search-based metaheuristic is inversely proportional to the magnification. This indicates that, for the optimization problem at hand, it is desirable to use a search space with large magnification rather than an arbitrary search space: the performance of local search-based metaheuristics may be good on such search spaces, since the mixing time of the underlying Markov chain is inversely proportional to the magnification of the search space. Using these relations, this work shows that the MC induced by the Metropolis Algorithm (MA) mixes rapidly if the search graph has large magnification. Hence, for any combinatorial optimization problem, the Markov chains associated with the MA mix rapidly, i.e., in polynomial time, if the underlying search graph has large magnification. The usefulness of the obtained results is illustrated using the 0/1-Knapsack Problem, a well-studied, NP-complete combinatorial optimization problem. Using the theoretical results obtained, this work shows that the Markov chains (MCs) associated with local search-based metaheuristics such as the random walk and the MA for the 0/1-Knapsack Problem mix rapidly.
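The Metropolis Algorithm referred to above is a standard local search chain on a neighbourhood (search) graph. Below is a minimal sketch of how such a Metropolis-style chain might look for the 0/1-Knapsack Problem under a single-bit-flip neighbourhood; the instance numbers, temperature, and step count are illustrative assumptions, not values from the paper.

```python
import math
import random

def metropolis_knapsack(values, weights, capacity, temperature=2.0, steps=20000, seed=0):
    """Metropolis-style local search on the 0/1-Knapsack search graph.

    States are bit vectors and neighbours differ in one bit, so the search graph
    is the Boolean hypercube restricted to feasible packings."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n                                   # start from the empty (feasible) knapsack

    def total(vec, coeffs):
        return sum(c for c, b in zip(coeffs, vec) if b)

    best_x, best_v = x[:], 0
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n)] ^= 1                  # propose flipping one item in/out
        if total(y, weights) > capacity:
            continue                              # reject infeasible neighbours
        delta = total(y, values) - total(x, values)
        # accept improving moves always, worsening moves with Boltzmann probability
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            x = y
            if total(x, values) > best_v:
                best_x, best_v = x[:], total(x, values)
    return best_x, best_v

# toy instance with made-up values/weights (not from the paper)
print(metropolis_knapsack([6, 5, 8, 9, 6, 7, 3], [2, 3, 6, 7, 5, 9, 4], capacity=15))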

2013 ◽  
Vol 21 (4) ◽  
pp. 625-658 ◽  
Author(s):  
Leticia Hernando ◽  
Alexander Mendiburu ◽  
Jose A. Lozano

The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space, and their performance strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, estimating the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed come not only from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation on synthetic as well as real problems is given. We conclude by recommending methods for several scenarios.
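One family of estimators from the statistical literature treats local optima like species in a capture-recapture study. The sketch below illustrates the idea with the Chao1 species-richness estimator applied to the local optima reached by hill climbing on a toy pseudo-random bit-string landscape; the landscape, sample size, and estimator choice are illustrative assumptions and not necessarily among the specific methods evaluated in the paper.

```python
import random
from collections import Counter

def fitness(x):
    # toy black-box objective on bit strings (deterministic pseudo-random landscape)
    return random.Random(x * 2654435761 % (2 ** 32)).random()

def hill_climb(x, n):
    # best-improvement hill climbing under the Hamming-1 neighbourhood
    while True:
        best = max((x ^ (1 << i) for i in range(n)), key=fitness)
        if fitness(best) <= fitness(x):
            return x                                  # x is a local optimum
        x = best

def chao1_estimate(n_bits=12, samples=300, seed=1):
    """Estimate the number of local optima with the Chao1 species-richness estimator."""
    rng = random.Random(seed)
    counts = Counter(hill_climb(rng.randrange(2 ** n_bits), n_bits) for _ in range(samples))
    s_obs = len(counts)                               # distinct optima observed
    f1 = sum(1 for c in counts.values() if c == 1)    # optima seen exactly once
    f2 = sum(1 for c in counts.values() if c == 2)    # optima seen exactly twice
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))   # bias-corrected Chao1

print(round(chao1_estimate()))
```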


2006 ◽  
Vol. 8 ◽
Author(s):  
R. Balasubramanian ◽  
C.R. Subramanian

We study the problem of efficiently sampling k-colorings of bipartite graphs. We show that a class of Markov chains cannot be used as efficient samplers. Precisely, we show that, for any k, 6 ≤ k ≤ n^{1/3−ε}, ε > 0 fixed, almost every bipartite graph on n+n vertices is such that the mixing time of any Markov chain asymptotically uniform on its k-colorings is exponential in n/k^2 (if it is allowed to change the colors of only O(n/k) vertices in a single transition step). This kind of exponential-time mixing is called torpid mixing. As a corollary, we show that there are (for every n) bipartite graphs on 2n vertices with Δ(G) = Ω(ln n) such that for every k, 6 ≤ k ≤ Δ/(6 ln Δ), each member of a large class of chains mixes torpidly. While, for fixed k, such negative results are implied by the work of CDF, our results are more general in that they allow k to grow with n. We also show that these negative results hold true for H-colorings of bipartite graphs provided H contains a spanning complete bipartite subgraph. We also present explicit examples of colorings (k-colorings or H-colorings) which admit 1-cautious chains that are ergodic and are shown to have exponential mixing time. While, for fixed k or fixed H, such negative results are implied by the work of CDF, our results are more general in that they allow k or H to vary with n.
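The chains ruled out here include the familiar single-site (Glauber) recolouring dynamics, which changes the colour of one vertex per step. A minimal sketch of that dynamics is shown below on a tiny bipartite example; the graph K_{3,3}, the value of k, and the step count are illustrative assumptions.

```python
import random

def glauber_coloring_step(adj, coloring, k, rng):
    """One step of the single-site recolouring (Glauber) chain on proper k-colourings."""
    v = rng.randrange(len(adj))
    forbidden = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(k) if c not in forbidden]
    coloring[v] = rng.choice(allowed)      # non-empty whenever k exceeds the maximum degree
    return coloring

# toy bipartite graph K_{3,3}: vertices 0-2 on one side, 3-5 on the other (illustrative)
adj = {0: [3, 4, 5], 1: [3, 4, 5], 2: [3, 4, 5],
       3: [0, 1, 2], 4: [0, 1, 2], 5: [0, 1, 2]}
coloring = [0, 0, 0, 1, 1, 1]              # a proper colouring to start from
rng = random.Random(0)
for _ in range(1000):
    glauber_coloring_step(adj, coloring, k=6, rng=rng)
print(coloring)
```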


Author(s):  
Shaowei Cai ◽  
Chuan Luo ◽  
Haochen Zhang

Maximum Satisfiability (MaxSAT) is an important NP-hard combinatorial optimization problem with many applications, and MaxSAT solving has attracted much interest. This work proposes a new incomplete approach to MaxSAT. We propose a novel decimation algorithm for MaxSAT and then combine it with a local search algorithm. Our approach works by interleaving the decimation algorithm and the local search algorithm, with useful information passed between them. Experiments show that our solver DeciLS achieves state-of-the-art performance on all unweighted benchmarks from the MaxSAT Evaluation 2016. Moreover, compared to SAT-based MaxSAT solvers, which have dominated industrial benchmarks for years, it performs better on industrial benchmarks and significantly better on application formulas from the SAT Competition. We also extend this approach to (Weighted) Partial MaxSAT, and the resulting solvers significantly improve on local search solvers on crafted and industrial benchmarks and are complementary (better on WPMS crafted benchmarks) to SAT-based solvers.
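For orientation, the local-search half of such an approach can be as simple as a WalkSAT-style procedure that repeatedly flips a variable from a randomly chosen unsatisfied clause. The sketch below shows only that baseline component under a random/greedy flip rule; it is not the authors' DeciLS algorithm, and the decimation phase is not modelled. The toy formula and parameters are illustrative assumptions.

```python
import random

def maxsat_local_search(clauses, n_vars, flips=20000, noise=0.3, seed=0):
    """WalkSAT-style local search for unweighted MaxSAT: a minimal stand-in for the
    local-search component only; DeciLS's decimation phase is not modelled here."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]   # index 0 unused (1-indexed vars)

    def satisfied(clause):
        return any(assign[abs(lit)] == (lit > 0) for lit in clause)

    def num_unsat():
        return sum(not satisfied(c) for c in clauses)

    def cost_if_flipped(v):                    # unsatisfied clauses after tentatively flipping v
        assign[v] = not assign[v]
        cost = num_unsat()
        assign[v] = not assign[v]
        return cost

    best = num_unsat()
    for _ in range(flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            break
        clause = rng.choice(unsat)
        if rng.random() < noise:               # random-walk move
            var = abs(rng.choice(clause))
        else:                                  # greedy move within the chosen clause
            var = min((abs(lit) for lit in clause), key=cost_if_flipped)
        assign[var] = not assign[var]
        best = min(best, num_unsat())
    return best

# toy CNF over 3 variables (illustrative, not a benchmark instance)
print("unsatisfied clauses:",
      maxsat_local_search([[1, 2], [-1, 3], [-2, -3], [1, -3], [-1, -2]], n_vars=3))
```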


2017 ◽  
Vol. 18 no. 3 (Graph Theory) ◽
Author(s):  
Stefan Felsner ◽  
Daniel Heldt

We study Markov chains for $\alpha$-orientations of plane graphs; these are orientations where the outdegree of each vertex is prescribed by the value of a given function $\alpha$. The set of $\alpha$-orientations of a plane graph has a natural distributive lattice structure. The moves of the up-down Markov chain on this distributive lattice correspond to reversals of directed facial cycles in the $\alpha$-orientation. We have a positive and several negative results regarding the mixing time of such Markov chains. A 2-orientation of a plane quadrangulation is an orientation where every inner vertex has outdegree 2. We show that there is a class of plane quadrangulations such that the up-down Markov chain on the 2-orientations of these quadrangulations is slowly mixing. On the other hand, the chain is rapidly mixing on 2-orientations of quadrangulations with maximum degree at most 4. Regarding examples of slow mixing, we also revisit the case of 3-orientations of triangulations, which has been studied before by Miracle et al. Our examples of slow mixing are simpler and have a smaller maximum degree. Finally, we present the first example of a function $\alpha$ and a class of plane triangulations of constant maximum degree such that the up-down Markov chain on the $\alpha$-orientations of these graphs is slowly mixing.
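The up-down move can be described concretely: pick a facial cycle and a direction; if the face boundary is a directed cycle in that direction, reverse it, which leaves every out-degree (and hence $\alpha$) unchanged. A schematic sketch of one such step follows, applied to a deliberately tiny example with a single quadrilateral face; the data structures and the example are illustrative assumptions, not taken from the paper.

```python
import random

def up_down_step(orientation, facial_cycles, rng):
    """One move of the up-down chain on alpha-orientations (schematic sketch).

    `orientation` is a set of directed edges (u, v); `facial_cycles` lists each
    inner face as a cyclically ordered vertex tuple. Reversing a directed facial
    cycle changes no out-degree, so the prescribed alpha is preserved."""
    face = rng.choice(facial_cycles)
    cycle_cw = list(zip(face, face[1:] + face[:1]))      # face traversed one way
    cycle_ccw = [(v, u) for (u, v) in cycle_cw]          # and the other way
    direction = rng.choice([cycle_cw, cycle_ccw])        # an "up" or "down" attempt
    if all(e in orientation for e in direction):         # the face is a directed cycle
        orientation.difference_update(direction)
        orientation.update((v, u) for (u, v) in direction)
    return orientation

# toy plane graph: a single quadrilateral face, alpha(v) = 1 for every vertex,
# so the two alpha-orientations are the two directed 4-cycles (illustrative only)
rng = random.Random(0)
orientation = {(0, 1), (1, 2), (2, 3), (3, 0)}
for _ in range(5):
    up_down_step(orientation, [(0, 1, 2, 3)], rng)
print(orientation)
```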


2021 ◽  
Vol 6 (2) ◽  
pp. 62
Author(s):  
Made Suci Ariantini ◽  
Ayu Manik Dirgayusari

Scheduling subjects is one of the first steps in starting the teaching and learning process in educational institutions. The role of teachers and school staff in this task is important and not easy, because compiling a schedule takes a long time. SMK PGRI 4 Denpasar is a vocational high school in the city of Denpasar, located at Jalan Kebo Iwa No 8, Padangsambian Kaja, Denpasar, Bali, with study programs in tourism and computer engineering. Based on observations and interviews, the subject schedules at SMK PGRI 4 Denpasar are still compiled using Microsoft Excel, which results in frequent errors such as conflicting schedules that take a long time to correct. Tabu Search is an optimization method based on local search, in which the search moves from one solution to the next by selecting the best solution that is not classified as a prohibited (tabu) solution. It is a combinatorial optimization problem-solving method belonging to the family of local search methods, and it aims to streamline the search for good solutions to large-scale (NP-hard) combinatorial optimization problems. In this work, the Tabu Search method is used to optimize the process of making the subject schedule, combined with PIECES analysis (Performance, Information, Economic, Control, Efficiency, Services). From this analysis, several problems are identified clearly and specifically, leading to suggestions that help in designing a new and better system. The Tabu Search method can be used to optimize the process of making subject schedules at SMK PGRI 4 Denpasar, so that scheduling becomes easier than with Microsoft Excel.
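To make the tabu mechanism concrete, the sketch below shows a minimal tabu search for a toy timetabling instance: lessons are moved between timeslots, the cost counts same-teacher clashes within a slot, and recently reversed moves are kept on a short tabu list. This is an illustrative sketch only, not the system built for SMK PGRI 4 Denpasar; the instance, tabu tenure, and iteration budget are made-up assumptions.

```python
import random
from collections import deque

def tabu_schedule(lessons, teachers, n_slots, iters=2000, tabu_len=6, seed=0):
    """Minimal tabu-search sketch for timetabling.

    Each lesson i (taught by teachers[i]) is assigned a timeslot; the cost counts
    pairs of lessons with the same teacher placed in the same slot."""
    rng = random.Random(seed)
    slot = [rng.randrange(n_slots) for _ in lessons]

    def cost(s):
        return sum(1 for i in range(len(lessons)) for j in range(i + 1, len(lessons))
                   if teachers[i] == teachers[j] and s[i] == s[j])

    best, best_cost = slot[:], cost(slot)
    tabu = deque(maxlen=tabu_len)                 # recently reversed (lesson, old_slot) moves
    for _ in range(iters):
        # best non-tabu neighbour: move one lesson to another slot
        candidates = [(i, t) for i in range(len(lessons)) for t in range(n_slots)
                      if t != slot[i] and (i, t) not in tabu]
        i, t = min(candidates, key=lambda m: cost(slot[:m[0]] + [m[1]] + slot[m[0] + 1:]))
        tabu.append((i, slot[i]))                 # forbid moving this lesson straight back
        slot[i] = t
        if cost(slot) < best_cost:
            best, best_cost = slot[:], cost(slot)
        if best_cost == 0:
            break
    return best, best_cost

# toy instance: 6 lessons taught by 3 teachers, 3 timeslots (illustrative numbers)
print(tabu_schedule(range(6), ["A", "A", "B", "B", "C", "C"], n_slots=3))
```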


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 423 ◽  
Author(s):  
Umberto Bartoccini ◽  
Arturo Carpi ◽  
Valentina Poggioni ◽  
Valentino Santucci

In this work, a coevolving memetic particle swarm optimization (CoMPSO) algorithm is presented. CoMPSO introduces the memetic evolution of local search operators into particle swarm optimization (PSO) for continuous/discrete hybrid search spaces. The proposed solution allows one to overcome the rigidity of uniform local search strategies when applied to PSO. The key contribution is that memes provide each particle of a PSO scheme with the ability to adapt its exploration dynamics to the local characteristics of the search space landscape. This is obtained by an original hybrid continuous/discrete meme representation and a probabilistic co-evolving PSO scheme for discrete, continuous, or hybrid spaces. The coevolving memetic PSO evolves both the solutions and their associated memes, i.e., the local search operators. The proposed CoMPSO approach has been evaluated on a standard suite of numerical optimization benchmark problems. Preliminary experimental results show that CoMPSO is competitive with respect to standard PSO and other memetic PSO schemes in the literature, and it is a promising starting point for further research in adaptive PSO local search operators.
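The general idea of a particle carrying its own, coevolving local-search meme can be illustrated with a much simpler scheme than CoMPSO's hybrid meme representation: in the sketch below, each particle's meme is just a local-search step size that grows when it produces an improvement and shrinks otherwise. This is a hedged sketch of the concept, not the authors' algorithm; the PSO coefficients, meme update rule, and sphere benchmark are illustrative assumptions.

```python
import random

def memetic_pso_sketch(f, dim=5, particles=20, iters=200, seed=0):
    """Minimal memetic PSO: standard PSO updates plus a per-particle 'meme'
    (a local-search step size) that is adapted according to its success."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    meme = [rng.uniform(0.01, 1.0) for _ in range(particles)]   # per-particle step size
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]

    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # standard PSO velocity update (inertia + cognitive + social terms)
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # meme-driven local search: perturb one coordinate by at most meme[i]
            trial = pos[i][:]
            trial[rng.randrange(dim)] += rng.uniform(-meme[i], meme[i])
            if f(trial) < f(pos[i]):
                pos[i] = trial
                meme[i] *= 1.1          # reward a meme that produced an improvement
            else:
                meme[i] *= 0.9          # otherwise shrink its step size
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

# sphere function as a toy benchmark (illustrative)
print(memetic_pso_sketch(lambda x: sum(v * v for v in x)))
```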


2013 ◽  
Vol 30 (01) ◽  
pp. 1250045 ◽  
Author(s):  
JEFFREY J. HUNTER

The "mixing time", or "time to stationarity", of a discrete-time irreducible Markov chain, starting in state i, can be defined as the number of trials to reach a state sampled from the stationary distribution of the Markov chain. Expressions for the probability generating function, and hence the probability distribution of the mixing time starting in state i, are derived and special cases explored. This extends the results of the author regarding the expected time to mixing [Hunter, JJ (2006). Mixing times with applications to perturbed Markov chains. Linear Algebra and Its Applications, 417, 108–123] and the variance of the times to mixing [Hunter, JJ (2008). Variances of first passage times in a Markov chain with applications to mixing times. Linear Algebra and Its Applications, 429, 1135–1162]. Some new results for the distribution of the recurrence and the first passage times in a general irreducible three-state Markov chain are also presented.
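The definition lends itself directly to simulation: sample a target state from the stationary distribution, then count steps until the chain first hits it. The sketch below estimates the mean of this quantity for a made-up three-state chain; the transition matrix is an illustrative assumption, and the convention used here (at least one step is always counted, even if the target equals the start) is one of several possible.

```python
import random

def sample_mixing_time(P, pi, start, rng):
    """One draw of the 'time to mixing': sample a target state from the stationary
    distribution pi, then count steps of the chain (started in `start`) until the
    target is first reached."""
    target = rng.choices(range(len(pi)), weights=pi)[0]
    state, steps = start, 0
    while True:
        steps += 1
        state = rng.choices(range(len(P)), weights=P[state])[0]
        if state == target:
            return steps

# toy irreducible three-state chain (illustrative numbers, not from the paper)
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
# stationary distribution approximated by iterating pi <- pi P
pi = [1 / 3] * 3
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
rng = random.Random(0)
draws = [sample_mixing_time(P, pi, start=0, rng=rng) for _ in range(20000)]
print("estimated mean time to mixing from state 0:", sum(draws) / len(draws))
```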


Author(s):  
Topi Talvitie ◽  
Teppo Niinimäki ◽  
Mikko Koivisto

We investigate almost uniform sampling from the set of linear extensions of a given partial order. The most efficient schemes stem from Markov chains whose mixing time bounds are polynomial, yet impractically large. We show that, on instances one encounters in practice, the actual mixing times can be much smaller than the worst-case bounds, and particularly so for a novel Markov chain we put forward. We circumvent the inherent hardness of estimating standard mixing times by introducing a refined notion, which admits estimation for moderate-size partial orders. Our empirical results suggest that the Markov chain approach to sample linear extensions can be made to scale well in practice, provided that the actual mixing times can be realized by instance-sensitive upper bounds or termination rules. Examples of the latter include existing perfect simulation algorithms, whose running times in our experiments follow the actual mixing times of certain chains, albeit with significant overhead.
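A concrete example of the kind of chain whose mixing such bounds address is the adjacent-transposition chain on linear extensions: pick a random adjacent pair in the current extension and swap it if the partial order allows. The sketch below implements one step of such a chain on a toy poset; the poset and the step count are illustrative assumptions, and this is a textbook chain rather than the novel chain put forward in the paper.

```python
import random

def linear_extension_step(ext, less_than, rng):
    """One step of the adjacent-transposition chain on linear extensions.

    `less_than` must be the transitively closed order relation: swapping the pair
    (a, b) at positions i, i+1 is allowed unless a is required to precede b."""
    i = rng.randrange(len(ext) - 1)
    a, b = ext[i], ext[i + 1]
    if (a, b) not in less_than:
        ext[i], ext[i + 1] = b, a
    return ext

# toy poset on {0, 1, 2, 3}: 0 < 2, 0 < 3, 1 < 3 (already transitively closed)
less_than = {(0, 2), (0, 3), (1, 3)}
ext = [0, 1, 2, 3]                       # a linear extension to start from
rng = random.Random(0)
for _ in range(1000):
    linear_extension_step(ext, less_than, rng)
print(ext)
```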


2015 ◽  
Vol 25 (01n02) ◽  
pp. 169-231 ◽  
Author(s):  
Arvind Ayyer ◽  
Anne Schilling ◽  
Benjamin Steinberg ◽  
Nicolas M. Thiéry

We develop a general theory of Markov chains realizable as random walks on $\mathcal{R}$-trivial monoids. It provides explicit and simple formulas for the eigenvalues of the transition matrix, for multiplicities of the eigenvalues via Möbius inversion along a lattice, a condition for diagonalizability of the transition matrix and some techniques for bounding the mixing time. In addition, we discuss several examples, such as Toom–Tsetlin models, an exchange walk for finite Coxeter groups, as well as examples previously studied by the authors, such as nonabelian sandpile models and the promotion Markov chain on posets. Many of these examples can be viewed as random walks on quotients of free tree monoids, a new class of monoids whose combinatorics we develop.
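A classical instance of a random walk on an $\mathcal{R}$-trivial monoid is the Tsetlin library, in which a requested book is moved to the front of the shelf; the move-to-front maps generate a left regular band, which is $\mathcal{R}$-trivial. The sketch below simulates this chain for a small shelf as a concrete entry point to the class of walks treated in the paper; the shelf size and request weights are illustrative assumptions.

```python
import random

def tsetlin_library_step(shelf, weights, rng):
    """One step of the Tsetlin library (move-to-front) chain: pick a book with
    probability proportional to its weight and move it to the front of the shelf."""
    book = rng.choices(shelf, weights=[weights[b] for b in shelf])[0]
    shelf.remove(book)
    shelf.insert(0, book)
    return shelf

# toy shelf of 4 books with unequal request probabilities (illustrative numbers)
rng = random.Random(0)
shelf = [0, 1, 2, 3]
weights = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
for _ in range(1000):
    tsetlin_library_step(shelf, weights, rng)
print(shelf)
```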


Quantum ◽  
2018 ◽  
Vol 2 ◽  
pp. 105 ◽  
Author(s):  
Davide Orsucci ◽  
Hans J. Briegel ◽  
Vedran Dunjko

Markov chain methods are remarkably successful in computational physics, machine learning, and combinatorial optimization. The cost of such methods often reduces to the mixing time, i.e., the time required to reach the steady state of the Markov chain, which scales as $\delta^{-1}$, the inverse of the spectral gap. It has long been conjectured that quantum computers offer nearly generic quadratic improvements for mixing problems. However, except in special cases, quantum algorithms achieve a run-time of $O(\sqrt{\delta^{-1}}\sqrt{N})$, which introduces a costly dependence on the Markov chain size $N$ not present in the classical case. Here, we re-address the problem of mixing of Markov chains when these form a slowly evolving sequence. This setting is akin to the simulated annealing setting and is commonly encountered in physics, material sciences and machine learning. We provide a quantum memory-efficient algorithm with a run-time of $O(\sqrt{\delta^{-1}}\sqrt[4]{N})$, neglecting logarithmic terms, which is an important improvement for large state spaces. Moreover, our algorithms output quantum encodings of distributions, which has advantages over classical outputs. Finally, we discuss the run-time bounds of mixing algorithms and show that, under certain assumptions, our algorithms are optimal.
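The classical $\delta^{-1}$ scaling mentioned above can be observed numerically by computing the spectral gap of a concrete chain. The sketch below computes $\delta = 1 - |\lambda_2|$ for lazy random walks on cycles of growing size, where the gap shrinks roughly like $1/N^2$; the example chain and sizes are illustrative assumptions, and nothing quantum is modelled.

```python
import numpy as np

def spectral_gap(P):
    """Spectral gap delta = 1 - |lambda_2| of a transition matrix P; classically the
    mixing time scales roughly as 1/delta."""
    eigenvalues = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - eigenvalues[1]

# lazy random walk on a cycle of N states (illustrative): the gap shrinks ~ 1/N^2,
# so the classical 1/delta mixing cost grows quadratically with N
for N in (8, 16, 32):
    P = np.zeros((N, N))
    for i in range(N):
        P[i, i] = 0.5
        P[i, (i - 1) % N] = 0.25
        P[i, (i + 1) % N] = 0.25
    print(N, "delta =", round(spectral_gap(P), 5))
```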

