A polynomial-time algorithm for solving NP-hard problems in practice

2003 ◽  
Vol 34 (1) ◽  
pp. 101-108 ◽  
Author(s):  
Xiaofei Huang


2009 ◽  
Vol 19 (1) ◽  
pp. 3-40 ◽  
Author(s):  
Vangelis Paschos

The fact that a polynomial-time algorithm is very unlikely to exist for solving NP-hard problems optimally strongly motivates both researchers and practitioners to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but a solution that is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. Polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial-time approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what are commonly called inapproximability results, which provide limits on the approximability of the problems tackled.
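As a concrete illustration of the kind of algorithm the survey studies, consider the textbook maximal-matching heuristic for Minimum Vertex Cover: it runs in linear time, and its output is guaranteed to be at most twice the optimum, since the picked edges form a matching and any cover must contain an endpoint of each of them. A minimal sketch (a standard example, not one taken from the survey itself):

```python
def vertex_cover_2approx(edges):
    """Maximal-matching 2-approximation for Minimum Vertex Cover.

    Greedily picks an uncovered edge and adds both of its endpoints;
    the picked edges form a matching, and any cover must contain at
    least one endpoint of each matched edge, so the output has at most
    twice the optimal size.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Path 1-2-3-4: the optimum is {2, 3}; the heuristic returns {1, 2, 3, 4}.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))
```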


10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (again under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
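For intuition, the plain (unranked) Pareto rule decides an example (a, b) only when the preferences are unanimous; the ranked variant studied in the paper uses the learned importance ranking to decide more examples. A minimal sketch of the unranked special case, with the profile encoding chosen for this illustration:

```python
def pareto_prefers(profile, a, b):
    """Plain Pareto rule: a is collectively preferred to b iff every
    preference ranks a at least as high as b and at least one ranks a
    strictly higher.  Each preference is a list of alternatives, best
    first.  (The paper's ranked variant additionally weighs the
    preferences by a learned importance ranking.)
    """
    strict = False
    for pref in profile:
        ra, rb = pref.index(a), pref.index(b)
        if ra > rb:          # this preference ranks b above a
            return False
        if ra < rb:
            strict = True
    return strict

profile = [["x", "y", "z"], ["x", "z", "y"], ["x", "y", "z"]]
print(pareto_prefers(profile, "x", "y"))  # True: unanimous
print(pareto_prefers(profile, "y", "z"))  # False: preference 2 disagrees
```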


2012 ◽  
Vol 21 (5) ◽  
pp. 643-660 ◽  
Author(s):  
YONATAN BILU ◽  
NATHAN LINIAL

We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP-hard problems are easier to solve, and in particular, whether there exist algorithms that solve in polynomial time all sufficiently stable instances of some NP-hard problem. The paper focuses on the Max-Cut problem, for which we show that this is indeed the case.
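To make the definition concrete: a Max-Cut instance is γ-stable if its optimal cut remains the unique optimum whenever each edge weight is multiplied by a factor in [1, γ]; equivalently, for every rival cut, the weight cut only by the optimum must exceed γ times the weight cut only by the rival. A brute-force checker for tiny instances (exponential time, purely to illustrate the notion, not an algorithm from the paper):

```python
from itertools import combinations

def is_gamma_stable(n, weights, gamma):
    """Check Bilu-Linial gamma-stability of a weighted Max-Cut instance
    by enumerating all cuts (feasible only for tiny n).

    weights: dict mapping frozenset({u, v}) -> positive edge weight.
    A cut is identified with the set of vertices on one side.
    """
    def cut_edges(side):
        return {e for e in weights if len(e & side) == 1}

    cuts = [set(c) for r in range(n // 2 + 1)
            for c in combinations(range(n), r)]
    best = max(cuts, key=lambda s: sum(weights[e] for e in cut_edges(s)))
    best_edges = cut_edges(best)
    for rival in cuts:
        edges = cut_edges(rival)
        if edges == best_edges:
            continue
        only_best = sum(weights[e] for e in best_edges - edges)
        only_rival = sum(weights[e] for e in edges - best_edges)
        # Worst perturbation: scale rival-only edges by gamma, rest by 1.
        if only_best <= gamma * only_rival:
            return False
    return True

# Triangle with weights 1, 2, 3: the optimal cut isolates vertex 2.
w = {frozenset({0, 1}): 1, frozenset({1, 2}): 2, frozenset({0, 2}): 3}
print(is_gamma_stable(3, w, 1.5))  # True
print(is_gamma_stable(3, w, 2.5))  # False: a perturbation can flip the optimum
```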


2014 ◽  
Vol 24 (03) ◽  
pp. 225-236 ◽  
Author(s):  
DAVID KIRKPATRICK ◽  
BOTING YANG ◽  
SANDRA ZILLES

Given an arrangement A of n sensors and two points s and t in the plane, the barrier resilience of A with respect to s and t is the minimum number of sensors whose removal permits a path from s to t that does not intersect the coverage region of any remaining sensor in A. When the surveillance domain is the entire plane and sensor coverage regions are unit line segments, even with restricted orientations, the problem of determining the barrier resilience is known to be NP-hard. On the other hand, if sensor coverage regions are arbitrary lines, the problem has a trivial linear-time solution. In this paper, we study the case where each sensor coverage region is an arbitrary ray, and give an O(n²m) time algorithm for computing the barrier resilience when there are m ⩾ 1 sensor intersections.
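Not the paper's O(n²m) algorithm, but a quick way to see what the quantity measures: the number of sensor rays crossed by the straight segment from s to t is always an upper bound on the barrier resilience, since removing exactly those sensors clears a straight path. A minimal sketch (degenerate collinear cases are ignored):

```python
def resilience_upper_bound(s, t, rays):
    """Count the rays that intersect the straight segment s-t; removing
    those sensors clears a straight path, so this upper-bounds the
    barrier resilience.  Each ray is (origin, direction) in the plane.
    """
    def xprod(a, b):
        return a[0] * b[1] - a[1] * b[0]

    def hits(a, b, origin, direction):
        r = (b[0] - a[0], b[1] - a[1])
        q = (origin[0] - a[0], origin[1] - a[1])
        denom = xprod(r, direction)
        if denom == 0:                   # parallel; degeneracies ignored
            return False
        u = xprod(q, direction) / denom  # parameter along the segment
        v = xprod(q, r) / denom          # parameter along the ray
        return 0.0 <= u <= 1.0 and v >= 0.0

    return sum(hits(s, t, o, d) for o, d in rays)

# One ray crosses the segment from (0,0) to (1,0); the other points away.
rays = [((0.5, -1.0), (0.0, 1.0)), ((0.5, -1.0), (0.0, -1.0))]
print(resilience_upper_bound((0.0, 0.0), (1.0, 0.0), rays))  # 1
```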


2020 ◽  
Vol 34 (02) ◽  
pp. 2070-2078
Author(s):  
Yasushi Kawase ◽  
Hanna Sumita

We study the problem of fairly allocating a set of indivisible goods to risk-neutral agents in a stochastic setting. We propose an (approximation) algorithm to find a stochastic allocation that maximizes the minimum utility among the agents. The algorithm runs by repeatedly finding an (approximate) allocation that maximizes the total virtual utility of the agents. This implies that the problem is solvable in polynomial time when the utilities are gross-substitutes (a subclass of submodular utilities). When the utilities are submodular, we can find a (1 − 1/e)-approximate solution, and this is the best possible unless P=NP. We also extend the problem to the setting where a stochastic allocation must satisfy (ex ante) envy-freeness. Under this condition, we demonstrate that the problem is NP-hard even when every agent has an additive utility with a matroid constraint (a subclass of gross-substitutes utilities). Furthermore, we propose a polynomial-time algorithm for the setting with the restriction that the matroid constraint is common to all agents.
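To make the egalitarian objective concrete, here is a brute-force sketch for the deterministic special case: enumerate all assignments of goods to agents and keep the one maximizing the minimum utility. The paper optimizes over stochastic allocations with far more efficient machinery; this only illustrates the objective.

```python
from itertools import product

def max_min_allocation(num_agents, goods, utility):
    """Brute-force egalitarian (max-min) allocation of indivisible goods.

    utility(agent, bundle) -> value.  Exponential in len(goods); only
    for tiny illustrative instances.
    """
    best_value, best_bundles = float("-inf"), None
    # An assignment maps each good to the agent who receives it.
    for assignment in product(range(num_agents), repeat=len(goods)):
        bundles = [frozenset(g for g, a in zip(goods, assignment) if a == i)
                   for i in range(num_agents)]
        value = min(utility(i, b) for i, b in enumerate(bundles))
        if value > best_value:
            best_value, best_bundles = value, bundles
    return best_value, best_bundles

# Two agents with additive utilities over three goods.
vals = {0: {"a": 3, "b": 1, "c": 1}, 1: {"a": 1, "b": 2, "c": 2}}
u = lambda i, bundle: sum(vals[i][g] for g in bundle)
print(max_min_allocation(2, ["a", "b", "c"], u))  # value 3: agent 0 gets a
```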


2007 ◽  
Vol. 9 no. 1 (Graph and Algorithms) ◽  
Author(s):  
Jan Kára ◽  
Jan Kratochvil ◽  
David R. Wood

We consider the problem of finding a balanced ordering of the vertices of a graph. More precisely, we want to minimise the sum, taken over all vertices v, of the difference between the number of neighbours to the left and right of v. This problem, which has applications in graph drawing, was recently introduced by Biedl et al. [Discrete Applied Math. 148:27-48, 2005]. They proved that the problem is solvable in polynomial time for graphs with maximum degree three, but NP-hard for graphs with maximum degree six. One of our main results closes the gap between these results by proving NP-hardness for graphs with maximum degree four. Furthermore, we prove that the problem remains NP-hard for planar graphs with maximum degree four and for 5-regular graphs. On the other hand, we introduce a polynomial-time algorithm that determines whether there is a vertex ordering with total imbalance smaller than a fixed constant, and a polynomial-time algorithm that determines whether a given multigraph with even degrees has an 'almost balanced' ordering.
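The objective is easy to state in code: for a fixed ordering, each vertex contributes the absolute difference between its numbers of left and right neighbours. A minimal sketch of evaluating this total imbalance (evaluation only; the hardness results above concern finding a good ordering):

```python
def total_imbalance(order, adj):
    """Sum over all vertices v of |left(v) - right(v)|, where left(v)
    and right(v) count the neighbours of v placed before and after it
    in the given ordering (the objective of Biedl et al.).
    """
    pos = {v: i for i, v in enumerate(order)}
    total = 0
    for v in order:
        left = sum(1 for u in adj[v] if pos[u] < pos[v])
        right = len(adj[v]) - left
        total += abs(left - right)
    return total

# A 4-cycle: under the ordering 0,1,2,3 the two endpoints are unbalanced.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(total_imbalance([0, 1, 2, 3], adj))  # 4
```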


2021 ◽  
Vol 13 (4) ◽  
pp. 1-24
Author(s):  
Jessica Chen ◽  
Henry Milner ◽  
Ion Stoica ◽  
Jibin Zhan

The HTTP adaptive streaming technique copes with fluctuating network conditions during the streaming process by dynamically adjusting the volume of the chunks yet to be downloaded. This bitrate selection inevitably involves predicting the future throughput of a video session, for which various heuristic solutions have been explored. The goal of the present work is to explore the theoretical upper bounds of the QoE that any ABR algorithm can possibly reach, thereby providing an essential step toward benchmarking the performance of ABR algorithms. In our setting, the QoE is defined as a linear combination of the average perceptual quality and the buffering ratio. The optimization problem is proven to be NP-hard when the perceptual quality is defined by chunk size, and conditions are given under which the problem becomes polynomially solvable. Enriched by a global lower bound, a pseudo-polynomial-time algorithm based on dynamic programming is presented. When minimum buffering is given higher priority than perceptual quality, the problem is shown to be NP-hard as well, and the above algorithm is simplified and enhanced by a sequence of lower bounds on the completion time of chunk downloading, which, in our experiments, brings a 36.0% improvement in computation time. To handle large amounts of data more efficiently, a polynomial-time algorithm is also introduced to approximate the optimal values when minimum buffering is prioritized. Besides its performance guarantee, this algorithm is shown to attain 99.938% of the optimal values while taking only 0.024% of the computation time of the exact dynamic-programming algorithm.
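To pin down the objective, the sketch below evaluates the QoE of a fixed download schedule under a constant-throughput model. The weights, the use of chunk bitrate as perceptual quality, and the playback model are assumptions of this illustration; the paper's dynamic program and lower bounds are not reproduced here.

```python
def qoe(chunk_sizes, chunk_seconds, throughput, alpha=1.0, beta=4.0):
    """Evaluate QoE = alpha * (average perceptual quality)
                    - beta  * (buffering ratio)
    for a fixed sequence of chunk downloads played back in order.

    Illustrative assumptions, not from the paper: perceptual quality of
    a chunk is its bitrate in Mbit/s, throughput is constant, chunks
    are downloaded back to back, and playback starts immediately.
    """
    buffered = 0.0   # seconds of video currently in the buffer
    stall = 0.0      # accumulated rebuffering time
    for size in chunk_sizes:
        dl = size / throughput      # seconds to download this chunk
        played = min(buffered, dl)  # video consumed while downloading
        stall += dl - played        # the player starves for the rest
        buffered += chunk_seconds - played
    play_time = chunk_seconds * len(chunk_sizes)
    avg_quality = sum(s / chunk_seconds / 1e6 for s in chunk_sizes) / len(chunk_sizes)
    return alpha * avg_quality - beta * stall / (stall + play_time)

# Three 4-second chunks (sizes in bits) over a 1 Mbit/s link.
print(qoe([4e6, 2e6, 4e6], 4.0, 1e6))  # 0.8333 - 4 * (4/16) = -0.1667
```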


1996 ◽  
Vol 07 (01) ◽  
pp. 23-41
Author(s):  
MARTIN FÜRER ◽  
WEBB MILLER

An alignment of k given sequences is a k-rowed matrix frequently used by molecular biologists to display correspondences between entries from each sequence. Under one approach, an alignment is represented by a matrix of 'x' and '-' characters, where each x in row r indicates the position of an entry of sequence r. It is sometimes efficient to store only the run-length encoding of each row of this bit-matrix. A natural class of commands for editing one such row into another consists of operations of the form: "Move the d dashes that begin at position i of row r to position j of that row," for relevant values of r, d, i and j. We show that the problem of determining a shortest sequence of such operations that converts one given alignment to another is NP-hard, and we give a polynomial-time algorithm that always comes within a factor of 5/4 of optimality. An application of these ideas to alignments of long DNA sequences is discussed.
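A minimal sketch of one editing operation on a single alignment row, using the plain string representation for clarity (the paper works on the run-length encoding; 0-based positions are an assumption of this sketch):

```python
def move_dashes(row, d, i, j):
    """Apply "move the d dashes beginning at position i to position j"
    to one alignment row, a string over 'x' and '-'.
    """
    assert row[i:i + d] == "-" * d, "no run of d dashes at position i"
    rest = row[:i] + row[i + d:]          # delete the run of dashes ...
    return rest[:j] + "-" * d + rest[j:]  # ... and reinsert it at j

row = "xx--xxx-x"
print(move_dashes(row, 2, 2, 5))  # xxxxx---x (the x entries keep their order)
```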

