approximation guarantee
Recently Published Documents


TOTAL DOCUMENTS: 48 (five years: 34)
H-INDEX: 5 (five years: 3)

Author(s): Bruno Ordozgoiti, Ananth Mahadevan, Antonis Matakos, Aristides Gionis

Abstract: When searching for information in a data collection, we are often interested not only in finding relevant items, but also in assembling a diverse set, so as to explore different concepts that are present in the data. This problem has been researched extensively. However, finding a set of items with minimal pairwise similarities can be computationally challenging, and most existing works striving for quality guarantees assume that item relatedness is measured by a distance function. Given the widespread use of similarity functions in many domains, we believe this to be an important gap in the literature. In this paper we study the problem of finding a diverse set of items, when item relatedness is measured by a similarity function. We formulate the diversification task using a flexible, broadly applicable minimization objective, consisting of the sum of pairwise similarities of the selected items and a relevance penalty term. To find good solutions we adopt a randomized rounding strategy, which is challenging to analyze because of the cardinality constraint present in our formulation. Even though this obstacle can be overcome using dependent rounding, we show that it is possible to obtain provably good solutions using an independent approach, which is faster, simpler to implement and completely parallelizable. Our analysis relies on a novel bound for the ratio of Poisson-Binomial densities, which is of independent interest and has potential implications for other combinatorial-optimization problems. We leverage this result to design an efficient randomized algorithm that provides a lower-order additive approximation guarantee. We validate our method using several benchmark datasets, and show that it consistently outperforms the greedy approaches that are commonly used in the literature.
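A minimal sketch of the independent rounding step described in the abstract, under stated assumptions: the fractional solution x, the specific penalty form, and all function names are illustrative, not the authors' exact formulation.

```python
import numpy as np

def diversification_objective(S, sim, relevance, lam=1.0):
    """Sum of pairwise similarities of the selected items plus a relevance
    penalty (here lam * sum of (1 - relevance)); lower is better. The exact
    penalty form is an illustrative assumption, not the paper's."""
    idx = np.asarray(sorted(S), dtype=int)
    block = sim[np.ix_(idx, idx)]
    return (block.sum() - np.trace(block)) / 2.0 + lam * (1.0 - relevance[idx]).sum()

def independent_rounding(x, rng=None):
    """Independent rounding: include item i with probability x[i]. If the
    marginals x sum to k, the size of the selected set is Poisson-Binomial
    distributed and concentrates around k."""
    rng = np.random.default_rng(rng)
    return {i for i, xi in enumerate(x) if rng.random() < xi}

# toy usage: 6 items, target size k = 2, uniform marginals
rng = np.random.default_rng(0)
n, k = 6, 2
sim = rng.random((n, n)); sim = (sim + sim.T) / 2; np.fill_diagonal(sim, 1.0)
relevance = rng.random(n)
S = independent_rounding(np.full(n, k / n), rng=1)
print(S, diversification_objective(S, sim, relevance))
```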


2021
Author(s): Eric Balkanski, Aviad Rubinstein, Yaron Singer

An Exponentially Faster Algorithm for Submodular Maximization Under a Matroid Constraint

This paper studies the problem of submodular maximization under a matroid constraint. It is known since the 1970s that the greedy algorithm obtains a constant-factor approximation guarantee for this problem. Twelve years ago, a breakthrough result by Vondrák obtained the optimal 1 − 1/e approximation. Previous algorithms for this fundamental problem all have linear parallel runtime, which was considered impossible to accelerate until recently. The main contribution of this paper is a novel algorithm that provides an exponential speedup in the parallel runtime of submodular maximization under a matroid constraint, without loss in the approximation guarantee.
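For context, here is a minimal sketch of the classical greedy baseline the abstract refers to (the 1970s constant-factor algorithm), not the paper's exponentially faster parallel method; the coverage objective and function names are illustrative.

```python
def greedy_matroid(ground_set, f, is_independent):
    """Classical greedy for monotone submodular maximization under a matroid
    constraint: repeatedly add the feasible element with the largest positive
    marginal gain until none remains."""
    S, remaining = [], set(ground_set)
    while remaining:
        best, best_gain = None, 0.0
        for e in remaining:
            if not is_independent(S + [e]):
                continue
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        S.append(best)
        remaining.discard(best)
    return S

# toy usage: uniform matroid of rank 3 with a coverage-style objective
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_matroid(sets, f, lambda S: len(S) <= 3))
```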


Author(s): Manmohan Singh, Rajendra Pamula, Alok Kumar

Clustering has various applications in machine learning, data mining, data compression, and pattern recognition. Existing techniques such as Lloyd's algorithm (often called k-means) suffer from converging only to a local optimum and come with no approximation guarantee. To overcome these shortcomings, this paper offers an efficient k-means clustering approach for stream data mining. The coreset is a popular and fundamental concept for k-means clustering on stream data. In each step, the reduction determines a coreset of the inputs and controls the error, where P denotes the number of input points; by the nested property of coresets, a small reduction in the error of the final coreset makes the result n times more accurate. This motivated the authors to propose a new coreset-reduction algorithm. The proposed algorithm was evaluated on the Covertype, Spambase, Census 1990, Bigcross, and Tower datasets, and it outperforms competitive algorithms such as StreamKM++, BICO (BIRCH meets Coresets for k-means clustering), and BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies).
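The abstract does not spell out the new coreset-reduction algorithm, so the following is only a generic merge-and-reduce skeleton for streaming k-means coresets, with a placeholder sampling step standing in for a real coreset construction; every name here is an assumption made for illustration.

```python
import numpy as np

def coreset(points, weights, m, rng=None):
    """Placeholder coreset step: sample m weighted representatives and rescale
    the weights. A real construction (e.g. sensitivity sampling) would come
    with an error guarantee; this is only a stand-in."""
    rng = np.random.default_rng(rng)
    m = min(m, len(points))
    idx = rng.choice(len(points), size=m, replace=False, p=weights / weights.sum())
    return points[idx], weights[idx] * weights.sum() / weights[idx].sum()

def stream_kmeans_coreset(stream, m=200):
    """Merge-and-reduce over a stream of point blocks: coresets of equal level
    are repeatedly merged and reduced, so per-step errors compose along the
    nested structure the abstract refers to."""
    levels = {}  # level -> (points, weights)
    for block in stream:
        pts, w = coreset(block, np.ones(len(block)), m)
        lvl = 0
        while lvl in levels:
            op, ow = levels.pop(lvl)
            pts, w = coreset(np.vstack([op, pts]), np.concatenate([ow, w]), m)
            lvl += 1
        levels[lvl] = (pts, w)
    all_pts = np.vstack([p for p, _ in levels.values()])
    all_w = np.concatenate([w for _, w in levels.values()])
    return coreset(all_pts, all_w, m)

# toy usage: 5 blocks of 400 random 2-D points, final coreset of 200 points
blocks = (np.random.default_rng(s).random((400, 2)) for s in range(5))
pts, w = stream_kmeans_coreset(blocks, m=200)
print(pts.shape, w.sum())  # about (200, 2) and total weight 2000
```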


Author(s):  
Cristina Bazgan ◽  
Stefan Ruzika ◽  
Clemens Thielen ◽  
Daniel Vanderpooten

Abstract: We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. If an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions, both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee cannot, in general, be obtained in more than one of the objective functions simultaneously by supported solutions.
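As a rough illustration of the general approach (not the paper's algorithm or its multi-factor analysis), the sketch below sweeps weight vectors, calls an exact or approximate solver for each weighted sum scalarization, and keeps the nondominated outcomes; the function names and the toy biobjective instance are assumptions.

```python
def weighted_sum_sweep(solve_weighted, objectives, weight_vectors):
    """Solve the weighted sum scalarization for each weight vector and filter
    the outcomes to the nondominated ones (supported solutions when the
    scalarization is solved exactly, approximately supported otherwise)."""
    outcomes = []
    for w in weight_vectors:
        x = solve_weighted(w)                       # exact or approximate solver
        outcomes.append((tuple(f(x) for f in objectives), x))
    nondominated = []
    for val, x in outcomes:
        dominated = any(other != val and all(o <= v for o, v in zip(other, val))
                        for other, _ in outcomes)
        if not dominated and val not in [v for v, _ in nondominated]:
            nondominated.append((val, x))
    return nondominated

# toy usage: pick an integer t in [0, 10], minimize f1(t) = t and f2(t) = (10 - t)^2
objs = [lambda t: t, lambda t: (10 - t) ** 2]
solve = lambda w: min(range(11), key=lambda t: w[0] * objs[0](t) + w[1] * objs[1](t))
print(weighted_sum_sweep(solve, objs, [(1, i / 10) for i in range(1, 11)]))
```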


2021
Author(s): Alexander M. Stroh, Alan L. Erera, Alejandro Toriello

We study tactical models for the design of same-day delivery (SDD) systems. Same-day fulfillment in e-commerce has seen substantial growth in recent years, and the underlying management of such services is complex. Although the literature includes operational models to study SDD, they tend to be detailed, complex, and computationally difficult to solve, and thus may not provide any insight into tactical SDD design variables and their impact on the average performance of the system. We propose a simplified vehicle-dispatching model that captures the “average” behavior of an SDD system from a single stocking location by utilizing continuous approximation techniques. We analyze the structure of optimal vehicle-dispatching policies given our model for two important instance families—the single-vehicle case and the case in which the delivery fleet is large—and develop techniques to find these policies that require only simple computations. We also leverage these results to analyze the case of a finite fleet, proposing a heuristic policy with a worst-case approximation guarantee. We then demonstrate with several example problem settings how this model and these policies can help answer various tactical design questions, including how to select a fleet size, determine an order cutoff time, and combine SDD and overnight order delivery operations. We validate model predictions empirically against a detailed operational model in a computational case study using geographic and Census data for the northeastern metro Atlanta region, and we demonstrate that our model predicts the average number of orders served and dispatch time to within 1%. This paper was accepted by Jay Swaminathan, operations management.


2021, Vol. 29 (3), pp. 141-151
Author(s): Hiroshi Fujiwara, Ryota Adachi, Hiroaki Yamamoto

Summary. The bin packing problem is a fundamental and important optimization problem in theoretical computer science [4], [6]. An instance is a sequence of items, each of positive size at most one. The task is to place all items into bins so that the total size of the items in each bin is at most one and the number of bins containing at least one item is minimum. Approximation algorithms have been studied intensively. Algorithm NextFit is perhaps the simplest one. The algorithm repeatedly does the following: if the first unprocessed item in the sequence fits, in terms of size, into the bin into which the algorithm last placed an item, place the item into that bin; otherwise place the item into an empty bin. Johnson [5] proved that the number of bins produced by algorithm NextFit is less than twice the fewest number of bins needed to contain all items. In this article, we formalize the bin packing problem in Mizar [1], [2] as follows: an instance is a sequence of positive real numbers, each at most one. The task is to find a function that maps the indices of the sequence to positive integers such that the sum of the subsequence over each inverse image is at most one and the size of the image is minimum. We then formalize algorithm NextFit, its feasibility, its approximation guarantee, and the tightness of that guarantee.
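The NextFit procedure described above, as a short Python sketch for readers who prefer code to prose (the Mizar formalization itself is in the article):

```python
def next_fit(items):
    """NextFit bin packing: keep a single open bin; if the next item fits,
    place it there, otherwise close that bin and open a new one. Johnson's
    bound: the number of bins used is less than twice the optimum."""
    bins, current, load = [], [], 0.0
    for size in items:                 # each size is in (0, 1]
        if load + size <= 1.0:
            current.append(size)
            load += size
        else:
            bins.append(current)
            current, load = [size], size
    if current:
        bins.append(current)
    return bins

print(next_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.9]))
# -> [[0.5], [0.7], [0.5, 0.2], [0.4], [0.9]]  (5 bins; an optimal packing uses 4)
```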


Author(s): Chao Bian, Chao Qian, Frank Neumann, Yang Yu

Subset selection with cost constraints is a fundamental problem with various applications such as influence maximization and sensor placement. The goal is to select a subset from a ground set to maximize a monotone objective function such that a monotone cost function is upper bounded by a budget. Previous algorithms with bounded approximation guarantees include the generalized greedy algorithm, POMC and EAMC, all of which can achieve the best known approximation guarantee. In real-world scenarios, the resources often vary, i.e., the budget often changes over time, requiring the algorithms to adapt the solutions quickly. However, when the budget changes dynamically, all these three algorithms either achieve arbitrarily bad approximation guarantees, or require a long running time. In this paper, we propose a new algorithm FPOMC by combining the merits of the generalized greedy algorithm and POMC. That is, FPOMC introduces a greedy selection strategy into POMC. We prove that FPOMC can maintain the best known approximation guarantee efficiently.
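For reference, here is a simplified sketch of the generalized greedy baseline named in the abstract (not the proposed FPOMC algorithm): pick the item with the best marginal-gain-to-marginal-cost ratio that still fits the budget, then compare against the best single feasible item. Names and conventions are illustrative.

```python
def generalized_greedy(ground_set, f, cost, budget):
    """Generalized greedy for subset selection with a cost constraint:
    repeatedly add the item with the best ratio of marginal gain to marginal
    cost among items that still fit the budget, then return the better of the
    greedy set and the best single feasible item."""
    S, remaining = [], set(ground_set)
    while True:
        best, best_ratio = None, 0.0
        for e in remaining:
            dc = cost(S + [e]) - cost(S)
            if cost(S + [e]) > budget or dc <= 0:
                continue
            ratio = (f(S + [e]) - f(S)) / dc
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break
        S.append(best)
        remaining.discard(best)
    singles = [e for e in ground_set if cost([e]) <= budget]
    best_single = max(singles, key=lambda e: f([e]), default=None)
    return S if best_single is None or f(S) >= f([best_single]) else [best_single]

# toy usage: coverage-style objective with unit costs and budget 2
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(generalized_greedy(sets, f, cost=lambda S: len(S), budget=2))
```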


Author(s): Xinrui Jia, Kshiteej Sheth, Ola Svensson

Abstract: An instance of colorful k-center consists of points in a metric space that are colored red or blue, along with an integer k and a coverage requirement for each color. The goal is to find the smallest radius ρ such that there exist balls of radius ρ around k of the points that meet the coverage requirements. The motivation behind this problem is twofold: first, fairness considerations, since each color/group should receive a similar service guarantee; and second, the algorithmic challenges it poses, as this problem combines the difficulties of clustering with those of the subset-sum problem. In particular, we show that this combination results in strong integrality gap lower bounds for several natural linear programming relaxations. Our main result is an efficient approximation algorithm that overcomes these difficulties to achieve an approximation guarantee of 3, nearly matching the tight approximation guarantee of 2 for the classical k-center problem, which this problem generalizes. Previously known algorithms either opened more than k centers or only worked in the special case when the input points are in the plane.
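A small sketch of the feasibility check implied by the problem definition above (not the paper's 3-approximation algorithm): given a candidate radius and k chosen centers, verify the per-color coverage requirements. The Euclidean metric and all names are illustrative assumptions.

```python
import math

def meets_coverage(points, colors, centers, radius, requirement):
    """Check whether balls of the given radius around the chosen centers
    (indices into points) cover at least requirement[c] points of each
    color c; Euclidean distance is assumed for illustration."""
    covered = {c: 0 for c in requirement}
    for p, c in zip(points, colors):
        if any(math.dist(p, points[j]) <= radius for j in centers):
            covered[c] += 1
    return all(covered[c] >= requirement[c] for c in requirement)

# toy usage: k = 2 centers, cover at least 2 red and 1 blue point
pts = [(0, 0), (0.5, 0), (3, 3), (3.2, 3), (10, 10)]
cols = ["red", "red", "blue", "red", "blue"]
print(meets_coverage(pts, cols, centers=[0, 2], radius=1.0,
                     requirement={"red": 2, "blue": 1}))   # True
```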


Author(s): Felix Happach

Abstract: We consider a variant of the NP-hard problem of assigning jobs to machines to minimize the completion time of the last job. Usually, precedence constraints are given by a partial order on the set of jobs, and each job requires all its predecessors to be completed before it can start. In this paper, we consider a different type of precedence relation, called OR-precedence, that has not been discussed as extensively. In order for a job to start, we require that at least one of its predecessors is completed, in contrast to all of its predecessors. Additionally, we assume that each job has a release date before which it must not start. We prove that a simple List Scheduling algorithm due to Graham (Bell Syst Tech J 45(9):1563–1581, 1966) has an approximation guarantee of 2 and show that obtaining an approximation factor of 4/3 - ε is NP-hard. Further, we present a polynomial-time algorithm that solves the problem to optimality if preemptions are allowed. The latter result is in contrast to classical precedence constraints, where the preemptive variant is already NP-hard. Our algorithm generalizes previous results for unit processing time jobs subject to OR-precedence constraints, but without release dates. The running time of our algorithm is O(n^2) for arbitrary processing times, and it can be reduced to O(n) for unit processing times, where n is the number of jobs. The performance guarantees presented here match the best known ones for special cases in which classical precedence constraints and OR-precedence constraints coincide.
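A simplified simulation of list scheduling adapted to OR-precedence constraints and release dates, to make the scheduling rule concrete; the input conventions, names, and arbitrary job order are assumptions of this sketch, not the paper's pseudocode (the polynomial-time preemptive algorithm is not shown).

```python
def list_schedule_or_prec(jobs, m):
    """List scheduling with OR-precedence and release dates: whenever a
    machine is idle at time t, start any job that is released by t and has
    at least one completed OR-predecessor (jobs without predecessors only
    need to be released).
    jobs: {j: (processing_time, release_date, set_of_or_predecessors)}.
    Returns job completion times; the makespan is their maximum."""
    free = [0.0] * m                      # next time each machine becomes idle
    finish = {}                           # job -> completion time
    remaining, t = set(jobs), 0.0
    while remaining:
        done = {j for j, c in finish.items() if c <= t}
        ready = [j for j in remaining
                 if jobs[j][1] <= t and (not jobs[j][2] or jobs[j][2] & done)]
        idle = [i for i in range(m) if free[i] <= t]
        while ready and idle:             # fill idle machines (job order is arbitrary)
            j, i = ready.pop(), idle.pop()
            free[i] = t + jobs[j][0]
            finish[j] = free[i]
            remaining.discard(j)
        # advance to the next event: a machine finishing or a future release date
        events = [f for f in free if f > t] + \
                 [jobs[j][1] for j in remaining if jobs[j][1] > t]
        if not events:
            break                         # no progress possible (invalid instance)
        t = min(events)
    return finish

# toy usage: job 2 can start once job 0 OR job 1 has finished
jobs = {0: (3.0, 0.0, set()), 1: (1.0, 0.0, set()), 2: (2.0, 0.0, {0, 1})}
comp = list_schedule_or_prec(jobs, m=2)
print(comp, max(comp.values()))           # job 2 starts at time 1, makespan 3
```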

