approximation ratio
Recently Published Documents


TOTAL DOCUMENTS: 116 (five years: 38)

H-INDEX: 11 (five years: 2)

2022 ◽  
Vol 17 (1) ◽  
Author(s):  
Luiz Augusto G. Silva ◽  
Luis Antonio B. Kowada ◽  
Noraí Romeu Rocco ◽  
Maria Emília M. T. Walter

Abstract Background: Sorting by transpositions (SBT) is a classical problem in genome rearrangements. In 2012, SBT was proven to be $$\mathcal{NP}$$-hard, and the best approximation algorithm, with a 1.375 ratio, was proposed in 2006 by Elias and Hartman (EH algorithm). Their algorithm employs simplification, a technique used to transform an input permutation $$\pi$$ into a simple permutation $${\hat{\pi}}$$, presumably easier to handle. The permutation $${\hat{\pi}}$$ is obtained by inserting new symbols into $$\pi$$ in a way that the lower bound on the transposition distance of $$\pi$$ is preserved in $${\hat{\pi}}$$. Simplification is guaranteed to preserve the lower bound, not the transposition distance itself. A sequence of operations sorting $${\hat{\pi}}$$ can then be mimicked to sort $$\pi$$. Results and conclusions: First, using an algebraic approach, we propose a new upper bound for the transposition distance, which holds for all of $$S_n$$. Next, motivated by a problem identified in the EH algorithm, which, in certain scenarios depending on how the input permutation is simplified, causes it to require one extra transposition beyond the 1.375-approximation ratio, we propose a new approximation algorithm for SBT that guarantees the 1.375-approximation ratio for all of $$S_n$$. We implemented both our algorithm and EH's. In the implementation of the EH algorithm, two further issues were identified and had to be fixed. We tested both algorithms against all permutations of size n, $$2 \le n \le 12$$. The results show that the EH algorithm exceeds the approximation ratio of 1.375 for permutations of size greater than 7. The percentages of computed distances that equal the exact transposition distance are also compared with others available in the literature. Finally, we investigate the performance of both implementations on longer permutations, of maximum length 500. From the experiments, we conclude that the maximum and average distances computed by our algorithm are slightly better than those computed by the EH algorithm, and the running times of both algorithms are similar, despite the higher time complexity of our algorithm.
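
Since the abstract reasons about transpositions acting on permutations, a minimal sketch may help fix the notation: a transposition ρ(i, j, k) exchanges two adjacent blocks of the permutation. The Python sketch below (the function name and 0-indexed convention are ours, not the paper's) only illustrates the operation, not the 1.375-approximation algorithm itself.

```python
def apply_transposition(perm, i, j, k):
    """Apply the transposition rho(i, j, k) to a 0-indexed permutation:
    the adjacent blocks perm[i:j] and perm[j:k] exchange places."""
    assert 0 <= i < j < k <= len(perm), "both blocks must be non-empty and in range"
    return perm[:i] + perm[j:k] + perm[i:j] + perm[k:]

if __name__ == "__main__":
    pi = [2, 3, 1]
    # Exchanging the blocks [2, 3] and [1] sorts pi with a single transposition.
    print(apply_transposition(pi, 0, 2, 3))  # -> [1, 2, 3]
```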


Author(s):  
Rohan Ghuge ◽  
Viswanath Nagarajan

We consider the following general network design problem. The input is an asymmetric metric (V, c), root [Formula: see text], monotone submodular function [Formula: see text], and budget B. The goal is to find an r-rooted arborescence T of cost at most B that maximizes f(T). Our main result is a simple quasi-polynomial time [Formula: see text]-approximation algorithm for this problem, in which [Formula: see text] is the number of vertices in an optimal solution. As a consequence, we obtain an [Formula: see text]-approximation algorithm for directed (polymatroid) Steiner tree in quasi-polynomial time. We also extend our main result to a setting with additional length bounds at vertices, which leads to improved [Formula: see text]-approximation algorithms for the single-source buy-at-bulk and priority Steiner tree problems. For the usual directed Steiner tree problem, our result matches the best previous approximation ratio but improves significantly on the running time. For polymatroid Steiner tree and single-source buy-at-bulk, our result improves prior approximation ratios by a logarithmic factor. For directed priority Steiner tree, our result seems to be the first nontrivial approximation ratio. Under certain complexity assumptions, our approximation ratios are the best possible (up to constant factors).
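
To make the problem statement concrete, here is a toy instance of the rooted, budgeted objective described above, sketched in Python. All names and data (the coverage sets, arc costs, and budget) are illustrative assumptions, not taken from the paper; counting covered demands is just one example of a monotone submodular f.

```python
from itertools import chain

# Toy instance: each vertex covers a set of "demands"; f(T) counts distinct
# demands covered by the vertices of T -- a monotone submodular function.
coverage = {"r": set(), "a": {1, 2}, "b": {2, 3}, "c": {4}}
arc_cost = {("r", "a"): 2, ("r", "b"): 3, ("a", "c"): 4}

def f(vertices):
    """Monotone submodular objective: number of distinct demands covered."""
    return len(set(chain.from_iterable(coverage[v] for v in vertices)))

def value_and_cost(arcs):
    """Value and cost of a candidate r-rooted arborescence given by its arc set."""
    vertices = {"r"} | {v for (_, v) in arcs}
    return f(vertices), sum(arc_cost[a] for a in arcs)

budget = 5
val, cost = value_and_cost([("r", "a"), ("r", "b")])
print(val, cost, cost <= budget)  # 3 demands covered at cost 5, within budget
```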


2021 ◽  
Author(s):  
Nick Arnosti

This paper studies the performance of greedy matching algorithms on bipartite graphs [Formula: see text]. We focus primarily on three classical algorithms: [Formula: see text], which sequentially selects random edges from [Formula: see text]; [Formula: see text], which sequentially matches random vertices in [Formula: see text] to random neighbors; and [Formula: see text], which generates a random priority order over vertices in [Formula: see text] and then sequentially matches random vertices in [Formula: see text] to their highest-priority remaining neighbor. Prior work has focused on identifying the worst-case approximation ratio for each algorithm. This guarantee is highest for [Formula: see text] and lowest for [Formula: see text]. Our work instead studies the average performance of these algorithms when the edge set [Formula: see text] is random. Our first result compares [Formula: see text] and [Formula: see text] and shows that, on average, [Formula: see text] produces more matches. This result holds for finite graphs (in contrast to previous asymptotic results) and also applies to “many to one” matching in which each vertex in [Formula: see text] can match with multiple vertices in [Formula: see text]. Our second result compares [Formula: see text] and [Formula: see text] and shows that the better worst-case guarantee of [Formula: see text] does not translate into better average performance. In “one to one” settings where each vertex in [Formula: see text] can match with only one vertex in [Formula: see text], the algorithms result in the same number of matches. When each vertex in [Formula: see text] can match with two vertices in [Formula: see text], [Formula: see text] produces more matches than [Formula: see text].
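
The algorithm names in this abstract were lost to "[Formula: see text]" placeholders during extraction. The Python sketch below shows one natural reading of the three greedy variants described (random-edge greedy, random-vertex greedy, and a ranking-based greedy); the function names and the toy graph are ours.

```python
import random

def random_edge_greedy(edges):
    """Sequentially pick random edges, keeping those whose endpoints are still free."""
    matched, M = set(), []
    for (s, t) in random.sample(edges, len(edges)):
        if s not in matched and t not in matched:
            M.append((s, t)); matched.update((s, t))
    return M

def random_vertex_greedy(adj):
    """Match random vertices of one side to a uniformly random unmatched neighbor."""
    matched_T, M = set(), []
    for s in random.sample(list(adj), len(adj)):
        free = [t for t in adj[s] if t not in matched_T]
        if free:
            t = random.choice(free); M.append((s, t)); matched_T.add(t)
    return M

def ranking_greedy(adj, T):
    """Draw a random priority order over T; each random s takes its best remaining neighbor."""
    rank = {t: i for i, t in enumerate(random.sample(list(T), len(T)))}
    matched_T, M = set(), []
    for s in random.sample(list(adj), len(adj)):
        free = [t for t in adj[s] if t not in matched_T]
        if free:
            t = min(free, key=rank.__getitem__); M.append((s, t)); matched_T.add(t)
    return M

adj = {"s1": ["t1", "t2"], "s2": ["t1"], "s3": ["t2", "t3"]}
edges = [(s, t) for s, nbrs in adj.items() for t in nbrs]
print(len(random_edge_greedy(edges)), len(ranking_greedy(adj, {"t1", "t2", "t3"})))
```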


Author(s):  
Christopher Harshaw ◽  
Ehsan Kazemi ◽  
Moran Feldman ◽  
Amin Karbasi

We propose subsampling as a unified algorithmic technique for submodular maximization in centralized and online settings. The idea is simple: independently sample elements from the ground set and use simple combinatorial techniques (such as greedy or local search) on these sampled elements. We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods. In the usual off-line setting, we present SampleGreedy, which obtains a [Formula: see text]-approximation for maximizing a submodular function subject to a p-extendible system using [Formula: see text] evaluation and feasibility queries, where k is the size of the largest feasible set. The approximation ratio improves to p + 1 and p for monotone submodular and linear objectives, respectively. In the streaming setting, we present Sample-Streaming, which obtains a [Formula: see text]-approximation for maximizing a submodular function subject to a p-matchoid using O(k) memory and [Formula: see text] evaluation and feasibility queries per element, where m is the number of matroids defining the p-matchoid. The approximation ratio improves to 4p for monotone submodular objectives. We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
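
A hedged sketch of the subsample-then-greedy idea for the simplest special case, a plain cardinality constraint |S| ≤ k. The sampling probability below is an illustrative placeholder, not the tuned value from the paper, and the function names are ours.

```python
import random

def sample_greedy(ground_set, f, k, sample_prob=0.5):
    """Subsample-then-greedy sketch: keep each element independently with
    probability sample_prob (illustrative, not the paper's exact parameter),
    then run plain greedy under a cardinality constraint |S| <= k."""
    sampled = [e for e in ground_set if random.random() < sample_prob]
    S = []
    while len(S) < k:
        best, best_gain = None, 0.0
        for e in sampled:
            if e in S:
                continue
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        S.append(best)
    return S

# Toy monotone submodular objective: coverage of "topics" by selected elements.
topics = {"a": {1, 2}, "b": {2, 3}, "c": {3}, "d": {4, 5}}
f = lambda S: len(set().union(*(topics[e] for e in S))) if S else 0
print(sample_greedy(list(topics), f, k=2))
```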


Author(s):  
Chunying Ren ◽  
Dachuan Xu ◽  
Donglei Du ◽  
Min Li

Abstract In the k-means problem with penalties, we are given a data set $\mathcal{D} \subseteq \mathbb{R}^\ell$ of n points, where each point $j \in \mathcal{D}$ is associated with a penalty cost $p_j$, and an integer k. The goal is to choose a set $CS \subseteq \mathbb{R}^\ell$ with $|CS| \le k$ and a penalized subset $\mathcal{D}_p \subseteq \mathcal{D}$ to minimize the sum of the total squared distance from the points in $\mathcal{D} \setminus \mathcal{D}_p$ to $CS$ and the total penalty cost of the points in $\mathcal{D}_p$, namely $\sum_{j \in \mathcal{D} \setminus \mathcal{D}_p} d^2(j, CS) + \sum_{j \in \mathcal{D}_p} p_j$. We employ the primal-dual technique to give a pseudo-polynomial time algorithm with an approximation ratio of $(6.357+\varepsilon)$ for the k-means problem with penalties, improving the previous best approximation ratio of $(19.849+\varepsilon)$ for this problem given by Feng et al. in Proceedings of FAW (2019).
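
One useful consequence of the objective above: once the center set CS is fixed, it is optimal to penalize exactly the points whose penalty is below their squared distance to the nearest center, so the cost can be evaluated point by point as min(d²(j, CS), p_j). A small Python sketch (all names ours):

```python
def penalized_kmeans_cost(points, centers, penalties):
    """Evaluate the k-means-with-penalties objective for a fixed center set CS:
    each point j contributes min(d^2(j, CS), p_j), since for fixed centers it is
    optimal to penalize exactly the points whose penalty is below their squared
    distance to the nearest center."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    total = 0.0
    for j, p in enumerate(points):
        d2 = min(sq_dist(p, c) for c in centers)
        total += min(d2, penalties[j])
    return total

points = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
centers = [(0.5, 0.0)]                  # |CS| <= k with k = 1
penalties = [5.0, 5.0, 3.0]             # the far outlier is cheaper to penalize
print(penalized_kmeans_cost(points, centers, penalties))  # 0.25 + 0.25 + 3.0 = 3.5
```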


Author(s):  
Jingyang Zhao ◽  
Mingyu Xiao

The Traveling Tournament Problem (TTP) is a well-known benchmark problem in tournament timetabling, which asks us to design a schedule of home/away games of n teams (n is even) under some feasibility requirements such that the total traveling distance of all n teams is minimized. In this paper, we study TTP-2, the variant in which at most two consecutive home games or away games are allowed, and give an effective algorithm for the case where n/2 is odd. Experiments on the well-known benchmark sets show that we can beat previously known solutions for all instances with n/2 odd, with an average improvement of 2.66%. Furthermore, we improve the theoretical approximation ratio from 3/2 + O(1/n) to 1 + O(1/n) for n/2 odd, answering a challenging open problem in this area.
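
To make the objective and the TTP-2 constraint concrete, here is a small Python sketch (encoding and names are ours, not from the paper): each team is assumed to start and finish at home and to travel directly between the venues of consecutive away games, and TTP-2 forbids three consecutive home games or three consecutive away games.

```python
def total_travel(schedule, dist):
    """Total traveling distance: each team starts and ends at home, stays put for
    home games, and travels directly between consecutive away venues."""
    total = 0
    for team, rounds in schedule.items():
        location = team
        for opponent, venue in rounds:
            nxt = team if venue == "H" else opponent
            total += dist[location][nxt]
            location = nxt
        total += dist[location][team]  # return home after the last round
    return total

def respects_ttp2(rounds):
    """TTP-2 feasibility for one team: at most two consecutive home or away games."""
    venues = [v for _, v in rounds]
    return all(not (venues[i] == venues[i + 1] == venues[i + 2])
               for i in range(len(venues) - 2))

dist = {"A": {"A": 0, "B": 7}, "B": {"A": 7, "B": 0}}
schedule = {"A": [("B", "H"), ("B", "A")], "B": [("A", "A"), ("A", "H")]}
print(total_travel(schedule, dist), respects_ttp2(schedule["A"]))  # 28 True
```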


Author(s):  
Xiaowei Wu ◽  
Bo Li ◽  
Jiarui Gan

The Nash social welfare (NSW) is a well-known social welfare measure that balances individual utilities and overall efficiency. In the context of fair allocation of indivisible goods, it has been shown by Caragiannis et al. (EC 2016 and TEAC 2019) that an allocation maximizing the NSW is envy-free up to one good (EF1). In this paper, we are interested in the fairness of the NSW in a budget-feasible allocation problem, in which each item has a cost that is incurred by the agent it is allocated to, and each agent has a budget constraint on the total cost of the items she receives. We show that a budget-feasible allocation that maximizes the NSW achieves a 1/4-approximation of EF1, and that this approximation ratio is tight. The ratio improves gracefully when the items have small costs compared with the agents' budgets; it converges to 1/2 as the budget-cost ratio approaches infinity.
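
For concreteness, a small Python sketch of the two quantities involved: the Nash social welfare of an allocation (taken here as the geometric mean of additive utilities) and the budget-feasibility check. The agents, items, valuations, costs, and budgets are illustrative assumptions.

```python
import math

def nash_social_welfare(alloc, value):
    """Geometric mean of agents' utilities under an additive valuation."""
    utils = [sum(value[agent][g] for g in goods) for agent, goods in alloc.items()]
    return math.prod(utils) ** (1 / len(utils))

def budget_feasible(alloc, cost, budget):
    """Each agent's received items must fit within her budget."""
    return all(sum(cost[g] for g in goods) <= budget[agent]
               for agent, goods in alloc.items())

value = {"alice": {"x": 3, "y": 1}, "bob": {"x": 1, "y": 2}}
cost = {"x": 2, "y": 1}
budget = {"alice": 2, "bob": 2}
alloc = {"alice": ["x"], "bob": ["y"]}
print(nash_social_welfare(alloc, value), budget_feasible(alloc, cost, budget))
```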


Author(s):  
Péter Madarasi

Abstract This paper introduces the d-distance matching problem, in which we are given a bipartite graph $$G=(S,T;E)$$ with $$S=\{s_1,\dots ,s_n\}$$, a weight function on the edges and an integer $$d\in \mathbb{Z}_+$$. The goal is to find a maximum-weight subset $$M\subseteq E$$ of the edges satisfying the following two conditions: (i) the degree of every node of S is at most one in M, (ii) if $$s_it,s_jt\in M$$, then $$|j-i|\ge d$$. This question arises naturally, for example, in various scheduling problems. We show that the problem is NP-complete in general and admits a simple 3-approximation. We give an FPT algorithm parameterized by d and also show that the case when the size of T is constant can be solved in polynomial time. From an approximability point of view, we show that the integrality gap of the natural integer programming model is at most $$2-\frac{1}{2d-1}$$, and give an LP-based approximation algorithm for the weighted case with the same guarantee. A combinatorial $$(2-\frac{1}{d})$$-approximation algorithm is also presented. Several greedy approaches are considered, and a local search algorithm is described that achieves an approximation ratio of $$3/2+\epsilon$$ for any constant $$\epsilon >0$$ in the unweighted case. The novel approaches used in the analysis of the integrality gap and the approximation ratio of locally optimal solutions might be of independent combinatorial interest.
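
A short Python sketch, grounded in the definition above, that checks the two feasibility conditions of a d-distance matching and computes its weight; the encoding of S as indices and the function names are ours.

```python
def is_d_distance_matching(M, d):
    """Check conditions (i) and (ii): every s_i is used at most once, and any two
    edges sharing the same t-node have index difference at least d."""
    s_nodes = [i for (i, _) in M]
    if len(s_nodes) != len(set(s_nodes)):   # (i) degree of each s_i at most one
        return False
    by_t = {}
    for i, t in M:
        by_t.setdefault(t, []).append(i)
    return all(abs(i - j) >= d               # (ii) indices at a common t are far apart
               for idxs in by_t.values()
               for a, i in enumerate(idxs)
               for j in idxs[a + 1:])

def weight(M, w):
    return sum(w[e] for e in M)

w = {(1, "t"): 5, (3, "t"): 4, (2, "u"): 2}
M = [(1, "t"), (3, "t"), (2, "u")]
print(is_d_distance_matching(M, d=2), weight(M, w))  # True 11
```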


2021 ◽  
Author(s):  
Junsi Zhang

In this thesis, we formulate a new problem based on Max-Cut called Generalized Max-Cut. This problem takes as input a graph and two real numbers (a, b), where a > 0 and −a < b < a, and outputs a number. The restriction on the pair (a, b) is to avoid trivializing the problem. We formulate a quadratic program for Generalized Max-Cut and relax it to a semi-definite program. Most algorithms in this thesis require solving this semi-definite program. The main algorithm in this thesis is the 2-Dimensional Rounding algorithm, designed by Avidor and Zwick, with the restriction that the semi-definite program of the input graph must have 2-dimensional solutions. This algorithm uses a factor of randomness, β ∈ [0, 1], that depends on the input to Generalized Max-Cut. We improve the performance of this algorithm by numerically finding better values of β.
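
For orientation, the Python sketch below shows only the basic hyperplane-rounding step restricted to 2-dimensional SDP solutions: draw a uniformly random direction in the plane and assign each vertex the sign of the inner product of its vector with that direction. The β-dependent perturbation that distinguishes the 2-Dimensional Rounding algorithm of Avidor and Zwick is deliberately omitted, and all names and data are ours.

```python
import math, random

def round_2d_vectors(vectors):
    """Basic hyperplane rounding in the plane: pick a uniformly random direction
    and assign each unit vector the sign of its inner product with it.
    (The beta-dependent perturbation of the 2-Dimensional Rounding algorithm
    is omitted -- this is only the underlying rounding step.)"""
    theta = random.uniform(0, 2 * math.pi)
    r = (math.cos(theta), math.sin(theta))
    return {v_id: 1 if vx * r[0] + vy * r[1] >= 0 else -1
            for v_id, (vx, vy) in vectors.items()}

# Three unit vectors in the plane standing in for a 2-dimensional SDP solution.
vecs = {1: (1.0, 0.0), 2: (-0.5, math.sqrt(3) / 2), 3: (-0.5, -math.sqrt(3) / 2)}
print(round_2d_vectors(vecs))
```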


