A LEARNING AUTOMATA-BASED ALGORITHM TO THE STOCHASTIC MIN-DEGREE CONSTRAINED MINIMUM SPANNING TREE PROBLEM

2013 ◽  
Vol 24 (03) ◽  
pp. 329-348 ◽  
Author(s):  
JAVAD AKBARI TORKESTANI

The min-degree constrained minimum spanning tree (md-MST) problem is an NP-hard combinatorial optimization problem that seeks the minimum-weight spanning tree in which every vertex either has degree one (is a leaf) or degree at least d ≥ 2. The md-MST problem is new to the literature, and very few studies have been conducted on it even in deterministic graphs, although it has several appealing real-world applications. In realistic applications the graph conditions and parameters are stochastic and vary with time, yet to the best of our knowledge no work has been done on solving the md-MST problem in a stochastic graph. This paper proposes a decentralized learning automata-based algorithm for finding a near-optimal solution to the md-MST problem in a stochastic graph. In this work, it is assumed that the weight associated with each graph edge is a random variable with an a priori unknown probability distribution, which makes the md-MST problem considerably harder to solve. The proposed algorithm exploits an intelligent sampling technique that avoids unnecessary samples by focusing on the edges of the min-degree spanning tree with the minimum expected weight. On the basis of the martingale theorem, the convergence of the proposed algorithm to the optimal solution is theoretically proven. Extensive simulation experiments are performed on stochastic graph instances to show the performance of the proposed algorithm. The obtained results are compared with those of the standard sampling method in terms of sampling rate and solution optimality, and they show that the proposed method outperforms the standard sampling method.
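As a toy illustration of the sampling idea only (not the paper's learning-automata scheme), the sketch below estimates each edge's expected weight from repeated noisy samples and then builds a spanning tree on the estimates with Kruskal's algorithm. The 5-vertex graph, its Gaussian weight distributions, and the sample budget are all hypothetical, and the min-degree constraint itself is not enforced here:

```python
import random
from statistics import mean

random.seed(7)

# Hypothetical stochastic graph: each edge weight is a random variable;
# the solver only sees noisy samples, never these true means.
true_means = {(0, 1): 2.0, (0, 2): 5.0, (1, 2): 1.0,
              (1, 3): 4.0, (2, 3): 3.0, (2, 4): 6.0, (3, 4): 2.5}

def sample_weight(edge):
    """One noisy observation of an edge weight (distribution unknown to the solver)."""
    return random.gauss(true_means[edge], 0.5)

# Estimate expected weights by repeated sampling, then run Kruskal on
# the estimates (plain MST here; the paper additionally enforces min-degree).
estimates = {e: mean(sample_weight(e) for _ in range(200)) for e in true_means}

parent = list(range(5))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

tree = []
for u, v in sorted(estimates, key=estimates.get):
    if find(u) != find(v):
        parent[find(u)] = find(v)
        tree.append((u, v))

print(sorted(tree))  # the 4 tree edges chosen from the weight estimates
```

With enough samples the estimated means order the edges as the true means do, so the tree on the estimates coincides with the tree of minimum expected weight.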

2015 ◽  
Vol 57 (2) ◽  
pp. 166-174 ◽  
Author(s):  
H. CHARKHGARD ◽  
M. SAVELSBERGH

We investigate two routing problems that arise when order pickers traverse an aisle in a warehouse. The routing problems can be viewed as Euclidean travelling salesman problems with points on two parallel lines. We show that if the order picker traverses only a section of the aisle and then returns, then an optimal solution can be found in linear time, and if the order picker traverses the entire aisle, then an optimal solution can be found in quadratic time. Moreover, we show how to approximate the routing cost in linear time by computing a minimum spanning tree for the points on the parallel lines.
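The MST-based cost approximation can be sketched as follows, with hypothetical pick locations on two parallel aisle lines y = 0 and y = 1. This generic O(n²) Prim's algorithm does not exploit the two-line structure that yields the paper's linear-time bound:

```python
import math

# Hypothetical pick locations on two parallel aisle lines (y = 0 and y = 1).
points = [(0.0, 0.0), (2.0, 0.0), (5.0, 0.0), (1.0, 1.0), (4.0, 1.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Prim's algorithm, O(n^2) for clarity: repeatedly attach the vertex
# whose cheapest connection to the growing tree is smallest.
cost = 0.0
best = {i: dist(points[0], points[i]) for i in range(1, len(points))}
while best:
    nxt = min(best, key=best.get)  # cheapest vertex to attach next
    cost += best.pop(nxt)
    for i in best:
        best[i] = min(best[i], dist(points[nxt], points[i]))

print(round(cost, 4))  # total MST weight, a proxy for the routing cost
```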


2014 ◽  
Vol 886 ◽  
pp. 593-597 ◽  
Author(s):  
Wei Gong ◽  
Mei Li

The Traveling Salesman Problem (Min TSP) is contained in the problem class NPO. It is NP-hard, which means no efficient algorithm for solving it is known. People have tried many kinds of algorithms with the help of information technology. In this paper we compare four heuristics: nearest neighbor, random insertion, minimum spanning tree, and the heuristic of Christofides. We do not try to find an optimal solution; instead, we find approximately short tours via these heuristics and compare them.
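Of the four heuristics, nearest neighbor is the simplest to sketch: start at a city and repeatedly visit the closest unvisited one. The coordinates below are hypothetical, not from the paper's experiments:

```python
import math

# Hypothetical city coordinates for the nearest-neighbor TSP heuristic.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_tour(start=0):
    """Greedy tour: always move to the closest city not yet visited."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(cities[tour[-1]], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = nearest_neighbor_tour()
length = sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(tour, round(length, 2))
```

The tour is usually not optimal (the final leg back to the start can be long), which is exactly the kind of gap the comparison in the paper quantifies.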


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Saeedeh Pourahmad ◽  
Atefeh Basirat ◽  
Amir Rahimi ◽  
Marziyeh Doostfatemeh

Random selection of initial centroids (centers) for clusters is a fundamental defect of the K-means clustering algorithm, as the algorithm's performance depends on the initial centroids and may end up in local optima. Various hybrid methods have been introduced to resolve this defect. Since there are no comparative studies of these methods across various aspects, the present paper compared three hybrid methods with the K-means clustering algorithm, based on the concepts of the genetic algorithm, the minimum spanning tree, and hierarchical clustering. Although these three hybrid methods have received considerable attention in previous research, few studies have compared their results. Hence, seven quantitative datasets with different characteristics in terms of sample size, number of features, and number of classes are utilized in the present study. Eleven external and internal evaluation indices were also considered for comparing the methods. The data indicated that the hybrid methods achieve a higher convergence rate in obtaining the final solution than the ordinary K-means method. Furthermore, the hybrid method with hierarchical clustering converges to the optimal solution with fewer iterations than the other two hybrid methods. However, the hybrid methods with the minimum spanning tree and the genetic algorithm are not always, or even often, more effective than the ordinary K-means method. Therefore, despite their computational complexity, these three hybrid methods have not led to much improvement over the K-means method. A simulation study is still required to compare the methods and complete the conclusion.
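The initialization sensitivity that motivates these hybrid methods shows up even on toy data. The minimal Lloyd's K-means sketch below (hypothetical 1-D values, k = 3) converges to the true group means from well-spread seeds but gets stuck in a local optimum when all seeds start inside one group:

```python
# Minimal Lloyd's K-means on hypothetical 1-D data with three natural groups.
data = [1.0, 1.2, 5.0, 5.2, 9.0, 9.2]

def kmeans(data, centroids, iters=20):
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for x in data:
            nearest = min(range(len(centroids)),
                          key=lambda c: abs(x - centroids[c]))
            clusters[nearest].append(x)
        # Update step: each centroid moves to its cluster mean
        # (an empty cluster keeps its old centroid).
        centroids = [sum(pts) / len(pts) if pts else centroids[c]
                     for c, pts in enumerate(clusters)]
    return sorted(centroids)

good = kmeans(data, [1.0, 5.0, 9.0])  # well-spread initial centroids
bad = kmeans(data, [1.0, 1.1, 1.2])   # all three seeds inside one group
print(good)  # recovers the three group means
print(bad)   # local optimum: two tiny clusters plus one merged cluster
```

From the bad seeding, two centroids split the first group between them while a single centroid absorbs the other two groups, and Lloyd's iterations never escape that configuration.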


10.37236/8092 ◽  
2019 ◽  
Vol 26 (4) ◽  
Author(s):  
Colin Cooper ◽  
Alan Frieze ◽  
Wesley Pegden

We study the rank of a random $n \times m$ matrix $\mathbf{A}_{n,m;k}$ with entries from $GF(2)$, and exactly $k$ unit entries in each column, the other entries being zero. The columns are chosen independently and uniformly at random from the set of all ${n \choose k}$ such columns. We obtain an asymptotically correct estimate for the rank as a function of the number of columns $m$ in terms of $c,n,k$, where $m=cn/k$. The matrix $\mathbf{A}_{n,m;k}$ forms the vertex-edge incidence matrix of a $k$-uniform random hypergraph $H$. The rank of $\mathbf{A}_{n,m;k}$ can be expressed as follows. Let $|C_2|$ be the number of vertices of the 2-core of $H$, and $|E(C_2)|$ the number of its edges. Let $m^*$ be the value of $m$ for which $|C_2|= |E(C_2)|$. Then w.h.p. for $m<m^*$ the rank of $\mathbf{A}_{n,m;k}$ is asymptotic to $m$, and for $m \ge m^*$ the rank is asymptotic to $m-|E(C_2)|+|C_2|$. In addition, assign i.i.d. $U[0,1]$ weights $X_i$, $i \in \{1,2,\ldots,m\}$, to the columns, and define the weight of a set of columns $S$ as $X(S)=\sum_{j \in S} X_j$. Define a basis as a set of $n-\mathbb{1}(k\text{ even})$ linearly independent columns. We obtain an asymptotically correct estimate for the minimum weight basis. This generalises the well-known result of Frieze [On the value of a random minimum spanning tree problem, Discrete Applied Mathematics, (1985)] that, for $k=2$, the expected length of a minimum weight spanning tree tends to $\zeta(3) \approx 1.202$.
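The rank computation itself is easy to sketch over $GF(2)$: encode each column as an integer bitmask and reduce with an XOR linear basis, which is Gaussian elimination in disguise. The sizes below are hypothetical; note that when $k$ is even every column has even weight, so the rank can never exceed $n-1$, matching the basis size $n-\mathbb{1}(k\text{ even})$ in the abstract:

```python
import random

def gf2_rank(columns):
    """Rank over GF(2) of columns given as int bitmasks (XOR linear basis)."""
    basis = {}  # leading-bit position -> reduced column bitmask
    for col in columns:
        while col:
            lead = col.bit_length() - 1
            if lead not in basis:
                basis[lead] = col
                break
            col ^= basis[lead]  # eliminate the current leading bit
    return len(basis)

random.seed(1)
n, m, k = 10, 100, 2  # hypothetical sizes; k = 2 gives graph incidence columns
cols = [sum(1 << r for r in random.sample(range(n), k)) for _ in range(m)]
rank = gf2_rank(cols)
print(rank)  # for even k the rank is at most n - 1
```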


2006 ◽  
Vol DMTCS Proceedings vol. AG,... (Proceedings) ◽  
Author(s):  
Louigi Addario-Berry ◽  
Nicolas Broutin ◽  
Bruce Reed

Let $X_1,\ldots,X_{n\choose 2}$ be independent identically distributed weights for the edges of $K_n$. If $X_i \neq X_j$ for $i \neq j$, then there exists a unique minimum weight spanning tree $T$ of $K_n$ with these edge weights. We show that the expected diameter of $T$ is $\Theta(n^{1/3})$. This settles a question of [Frieze97].
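The quantity studied can be simulated directly (with a hypothetical small $n$): build the MST of $K_n$ under i.i.d. $U[0,1]$ edge weights via Kruskal's algorithm and measure the tree's diameter in edges with a double BFS:

```python
import random
from itertools import combinations

# MST of K_n with i.i.d. U[0,1] edge weights, built with Kruskal's algorithm.
random.seed(42)
n = 30  # hypothetical size; the paper's result concerns n -> infinity
edges = sorted((random.random(), u, v) for u, v in combinations(range(n), 2))

parent = list(range(n))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

adj = {v: [] for v in range(n)}
picked = 0
for w, u, v in edges:
    if find(u) != find(v):
        parent[find(u)] = find(v)
        adj[u].append(v)
        adj[v].append(u)
        picked += 1
        if picked == n - 1:
            break

def farthest(s):
    """BFS over the tree: return (farthest vertex from s, its distance in edges)."""
    dist = {s: 0}
    queue = [s]
    for x in queue:
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    far = max(dist, key=dist.get)
    return far, dist[far]

a, _ = farthest(0)         # first BFS finds one endpoint of a longest path
_, diameter = farthest(a)  # second BFS measures the diameter
print(diameter)
```

Averaging the printed diameter over many trials and several values of $n$ would exhibit the $n^{1/3}$ growth empirically.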


2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Xuemei Sun ◽  
Cheng Chang ◽  
Hua Su ◽  
Chuitian Rong

The degree constrained minimum spanning tree (DCMST) problem asks for a spanning tree of minimum weight in a complete graph with edge weights such that the degree of each node in the spanning tree is no more than d (d ≥ 2). The paper proposes an improved multicolony ant algorithm for degree constrained minimum spanning tree search, which enables independent searches for optimal solutions in the various colonies and achieves information exchange between different colonies via information entropy. A local optimization algorithm is introduced to improve the constructed spanning tree. Meanwhile, algorithmic strategies from the dynamic ant, random-perturbation ant colony, and max-min ant systems are adapted to optimize the proposed algorithm. Finally, multiple groups of experimental data show the superiority of the improved algorithm in solving degree constrained minimum spanning tree problems.
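A minimal greedy baseline for the degree-constrained variant, far simpler than the paper's multicolony ant algorithm: run Kruskal's algorithm but skip any edge that would push a vertex past degree d. The weights below are hypothetical, and this greedy rule can fail to span some graphs or return a poor tree, which is one reason metaheuristics are applied to the NP-hard DCMST:

```python
# Kruskal's algorithm with a degree cap d on every vertex (greedy baseline).
weights = {(0, 1): 1, (0, 2): 2, (0, 3): 3, (0, 4): 4,
           (1, 2): 5, (1, 3): 6, (1, 4): 7,
           (2, 3): 8, (2, 4): 9, (3, 4): 10}
n, d = 5, 2

parent = list(range(n))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

degree = [0] * n
tree = []
for u, v in sorted(weights, key=weights.get):
    # Take the cheapest edge that neither closes a cycle nor violates the cap.
    if degree[u] < d and degree[v] < d and find(u) != find(v):
        parent[find(u)] = find(v)
        degree[u] += 1
        degree[v] += 1
        tree.append((u, v))

print(tree)  # every vertex has degree <= d in the resulting tree
```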


2018 ◽  
Vol 29 (04) ◽  
pp. 505-527
Author(s):  
Maria Paola Bianchi ◽  
Hans-Joachim Böckenhauer ◽  
Tatjana Brülisauer ◽  
Dennis Komm ◽  
Beatrice Palano

In the online minimum spanning tree problem, a graph is revealed vertex by vertex; together with every vertex, all edges to vertices that are already known are given, and an online algorithm must irrevocably choose a subset of them as part of its solution. The advice complexity of an online problem is a means to quantify the information that needs to be extracted from the input to achieve good results. For a graph of size [Formula: see text], we show an asymptotically tight bound of [Formula: see text] on the number of advice bits needed to produce an optimal solution for any given graph. For particular graph classes, e.g., with bounded degree or a restricted edge weight function, we prove that the upper bound can be drastically reduced; e.g., [Formula: see text] advice bits allow one to compute an optimal result if the weight function equals the Euclidean distance; if the graph is complete and has two different edge weights, even a logarithmic number suffices. Some of these results make use of the optimality of Kruskal’s algorithm for the offline setting. We also study the trade-off between the number of advice bits and the achievable competitive ratio. To this end, we perform a reduction from another online problem to obtain a linear lower bound on the advice complexity of any near-optimal solution. These results finally allow us to give a lower bound on the expected competitive ratio of any randomized online algorithm for the problem, even on graphs with three different edge weights.
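To make the online setting concrete, the sketch below (hypothetical weights, not an instance from the paper) contrasts a naive advice-free online rule, attaching each arriving vertex via its cheapest edge to an earlier vertex, with the offline optimum computed by Kruskal's algorithm. On this instance the online rule pays twice the optimum:

```python
# Hypothetical 4-vertex instance; vertices arrive in the order 0, 1, 2, 3.
w = {(0, 1): 4, (0, 2): 1, (0, 3): 1, (1, 2): 1, (1, 3): 5, (2, 3): 6}
n = 4

def weight(u, v):
    return w[(min(u, v), max(u, v))]

# Naive online rule: each arriving vertex v irrevocably takes its
# cheapest edge to an already-revealed vertex.
online_cost = sum(min(weight(u, v) for u in range(v)) for v in range(1, n))

# Offline optimum via Kruskal's algorithm.
parent = list(range(n))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

offline_cost = 0
for u, v in sorted(w, key=w.get):
    if find(u) != find(v):
        parent[find(u)] = find(v)
        offline_cost += w[(u, v)]

print(online_cost, offline_cost)
```

Vertex 1 arrives before the cheap edges appear and is forced to buy the expensive edge (0, 1); advice bits are precisely a way to avoid such irrevocable mistakes.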

