Complexity, computational

Author(s):  
Alasdair Urquhart

The theory of computational complexity is concerned with estimating the resources a computer needs to solve a given problem. The basic resources are time (number of steps executed) and space (amount of memory used). There are problems in logic, algebra and combinatorial games that are solvable in principle by a computer, but computationally intractable because the resources required by relatively small instances are practically infeasible. The theory of NP-completeness concerns a common type of problem in which a solution is easy to check but may be hard to find. Such problems belong to the class NP; the hardest ones of this type are the NP-complete problems. The problem of determining whether a formula of propositional logic is satisfiable or not is NP-complete. The class of problems with feasible solutions is commonly identified with the class P of problems solvable in polynomial time. Assuming this identification, the conjecture that some NP problems require infeasibly long times for their solution is equivalent to the conjecture that P≠NP. Although the conjecture remains open, it is widely believed that NP-complete problems are computationally intractable.
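
The "easy to check but may be hard to find" asymmetry can be made concrete with a small sketch. The following Python fragment (an illustration added here, not part of the entry; the helper names `check` and `brute_force_sat` are mine) verifies a candidate assignment for a CNF formula in time linear in the formula size, while the obvious way to find one tries all 2^n assignments:

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of nonzero
# ints, where k means variable k and -k its negation (DIMACS-style).
# Example: (x1 or not x2) and (x2 or x3).
EXAMPLE_CNF = [[1, -2], [2, 3]]

def check(cnf, assignment):
    """Verify a candidate assignment in one linear pass over the clauses.

    This is the 'easy to check' half of NP: `assignment` maps a
    variable index to a bool, and verification is polynomial time.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

def brute_force_sat(cnf):
    """Find a satisfying assignment by trying all 2^n candidates.

    This is the 'may be hard to find' half: absent a better idea than
    exhaustive search, the running time grows exponentially in the
    number of variables n.
    """
    variables = sorted({abs(lit) for clause in cnf for lit in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if check(cnf, assignment):
            return assignment
    return None

print(brute_force_sat(EXAMPLE_CNF))  # e.g. {1: False, 2: False, 3: True}
```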

Author(s):  
Rodolfo A. Pazos R. ◽  
Ernesto Ong C. ◽  
Héctor Fraire H. ◽  
Laura Cruz R. ◽  
José A. Martínez F.

The theory of NP-completeness provides a method for telling whether a decision/optimization problem is “easy” (i.e., it belongs to the class P) or “difficult” (i.e., it belongs to the NP-complete class). Many problems related to logistics, such as Bin Packing, Job Scheduling, and Timetabling, have been proven to belong to the NP-complete class. The theory predicts that for any pair of NP-complete problems A and B there must exist a polynomial-time transformation from A to B and also a reverse transformation (from B to A). However, for many pairs of NP-complete problems no reverse transformation has been reported in the literature; thus the following question arises: do reverse transformations exist for every pair of NP-complete problems? This chapter presents results of an ongoing investigation aimed at clarifying this issue.
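
For a concrete sense of what a polynomial-time transformation looks like, here is a minimal Python sketch (added for illustration, not taken from the chapter) of the classic forward reduction from PARTITION to BIN PACKING; the chapter's open question concerns whether such maps are always available in the reverse direction as well:

```python
def partition_to_bin_packing(weights):
    """Map a PARTITION instance to a BIN PACKING decision instance.

    PARTITION asks: can `weights` be split into two halves of equal
    sum? The standard reduction: such a split exists iff all items fit
    into 2 bins of capacity sum(weights) // 2. The map itself runs in
    linear time, which is what 'polynomial-time transformation' means.
    """
    total = sum(weights)
    if total % 2 != 0:
        # Odd total: PARTITION is trivially unsatisfiable, so emit an
        # unsatisfiable packing instance (one item exceeding the bin).
        return {"items": [1], "bin_capacity": 0, "num_bins": 1}
    return {"items": list(weights),
            "bin_capacity": total // 2,
            "num_bins": 2}

# The PARTITION instance {3, 1, 1, 2, 2, 1} has total 10, so it maps to
# packing the same items into 2 bins of capacity 5; both answers agree.
print(partition_to_bin_packing([3, 1, 1, 2, 2, 1]))
```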


2018 ◽  
Vol 27 (5) ◽  
pp. 808-828 ◽  
Author(s):  
LEONID A. LEVIN ◽  
RAMARATHNAM VENKATESAN

NP-complete problems should be hard on some instances, but those may be extremely rare. On generic instances many such problems, especially those related to random graphs, have been proved to be easy. We show the intractability of random instances of a graph colouring problem: this graph problem is hard on average unless all NP problems under all samplable (i.e. generatable in polynomial time) distributions are easy. Worst-case reductions use special gadgets and typically map instances into a negligible fraction of possible outputs. Ours must output nearly random graphs and avoid any super-polynomial distortion of probabilities. This poses significant technical difficulties.


1980 ◽  
Vol 3 (3) ◽  
pp. 397-400
Author(s):  
Martti Penttonen

Most NP-complete problems remain NP-complete even when the notation for integers is changed to unary. The knapsack problem is an exception: it becomes provably recognizable in polynomial time. However, we present a modified knapsack problem that remains NP-complete even in unary notation.
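
The knapsack exception rests on the standard dynamic program, shown below in a minimal Python sketch of the subset-sum form (an illustration added here, not from the paper). Its O(n · target) running time is only pseudo-polynomial for binary inputs, since `target` can be exponential in the input length; written in unary, the input itself has length proportional to the target, so the same algorithm becomes genuinely polynomial:

```python
def subset_sum_reachable(weights, target):
    """Decide the subset-sum form of knapsack by dynamic programming.

    Runs in O(n * target) time: pseudo-polynomial for binary-encoded
    inputs, but truly polynomial when the integers are given in unary,
    which is the phenomenon behind the knapsack exception.
    """
    reachable = [True] + [False] * target
    for w in weights:
        for s in range(target, w - 1, -1):  # descending: each item used once
            if reachable[s - w]:
                reachable[s] = True
    return reachable[target]

print(subset_sum_reachable([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
```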


2005 ◽  
Vol 03 (02) ◽  
pp. 207-223
Author(s):  
MARK CIELIEBAK ◽  
STEPHAN EIDENBENZ ◽  
GERHARD J. WOEGINGER

We revisit the DOUBLE DIGEST problem, which occurs in the sequencing of large DNA strings and consists of reconstructing the relative positions of cut sites from two different enzymes. We first show that DOUBLE DIGEST is strongly NP-complete, improving upon previous results that only showed weak NP-completeness. Even the (experimentally more meaningful) variation in which we disallow coincident cut sites turns out to be strongly NP-complete. In the second part, we model errors in data as they occur in real-life experiments: we propose several optimization variations of DOUBLE DIGEST that model partial cleavage errors. We then show that most of these variations are hard to approximate. In the third part, we investigate variations with the additional restriction that coincident cut sites are disallowed, and we show that it is NP-hard even to find feasible solutions in this case, thus making it impossible to guarantee any approximation ratio at all.
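
To make the combinatorics concrete, here is a brute-force Python sketch of the basic, error-free DOUBLE DIGEST decision problem (added for illustration; it is not the paper's construction and ignores the paper's error models). The factorial search over fragment orderings reflects why no efficient algorithm is expected:

```python
from itertools import permutations
from collections import Counter

def double_digest(a, b, c):
    """Brute-force search for a DOUBLE DIGEST solution.

    a, b: fragment lengths from digesting with each enzyme alone;
    c: fragment lengths from digesting with both enzymes together.
    We try every ordering of a and of b, overlay the implied cut
    sites, and check whether the combined fragments match c.
    """
    if sum(a) != sum(b) or sum(a) != sum(c):
        return None
    target = Counter(c)
    for pa in set(permutations(a)):
        cuts_a = {sum(pa[:i]) for i in range(1, len(pa))}
        for pb in set(permutations(b)):
            cuts_b = {sum(pb[:i]) for i in range(1, len(pb))}
            sites = sorted({0, sum(a)} | cuts_a | cuts_b)
            frags = Counter(y - x for x, y in zip(sites, sites[1:]))
            if frags == target:
                return list(pa), list(pb)  # consistent orderings found
    return None

# Toy instance: enzyme A cuts a length-5 string at 2, enzyme B at 3.
print(double_digest([2, 3], [3, 2], [2, 1, 2]))  # ([2, 3], [3, 2])
```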


2003 ◽  
Vol 10 (17) ◽  
Author(s):  
Luca Aceto ◽  
Jens Alsted Hansen ◽  
Anna Ingólfsdóttir ◽  
Jacob Johnsen ◽  
John Knudsen

Consistency checking is a fundamental computational problem in genetics. Given a pedigree and information on the genotypes of (some of) the individuals in it, the aim of consistency checking is to determine whether these data are consistent with the classic Mendelian laws of inheritance. This problem arose originally from geneticists' need to filter erroneous information from their input data, and is well motivated from both a biological and a sociological viewpoint. This paper shows that consistency checking is NP-complete, even when focusing on a single gene and in the presence of only three alleles. Several other results on the computational complexity of problems from genetics that are related to consistency checking are also offered. In particular, it is shown that checking the consistency of pedigrees over two alleles, and of pedigrees without loops, can be done in polynomial time.
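
A minimal Python sketch of the easy direction (added for illustration, not from the paper): when every individual in the pedigree is genotyped, Mendelian consistency reduces to a linear scan over parent-child trios. The paper's hardness result concerns the setting where some genotypes are missing and must be inferred, with at least three alleles:

```python
def trio_consistent(child, mother, father):
    """Check one Mendelian trio. Each genotype is a pair of alleles,
    e.g. ('a', 'b'); the child must receive one allele from each parent."""
    x, y = child
    return ((x in mother and y in father) or
            (y in mother and x in father))

def pedigree_consistent(trios):
    """Check a fully genotyped pedigree, given as (child, mother, father)
    genotype triples. With no missing genotypes this is a linear scan;
    the NP-completeness shown in the paper arises when some genotypes
    are unknown and must be inferred."""
    return all(trio_consistent(c, m, f) for c, m, f in trios)

# An ('a','b') child is consistent with ('a','a') x ('b','c') parents,
# but an ('a','a') child of ('b','c') x ('b','c') parents is not.
print(pedigree_consistent([(('a', 'b'), ('a', 'a'), ('b', 'c'))]))  # True
print(pedigree_consistent([(('a', 'a'), ('b', 'c'), ('b', 'c'))]))  # False
```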


Author(s):  
Thomas Bläsius ◽  
Philipp Fischbeck ◽  
Tobias Friedrich ◽  
Maximilian Katzmann

The computational complexity of the VertexCover problem has been studied extensively. Most notably, it is NP-complete to find an optimal solution and typically NP-hard to find an approximation with reasonable factors. In contrast, recent experiments suggest that on many real-world networks the run time to solve VertexCover is far smaller than even the best known FPT approaches can explain. We link these observations to two properties that are observed in many real-world networks, namely a heterogeneous degree distribution and high clustering. To formalize these properties and explain the observed behavior, we analyze how a branch-and-reduce algorithm performs on hyperbolic random graphs, which have become increasingly popular for modeling real-world networks. In fact, we are able to show that the VertexCover problem on hyperbolic random graphs can be solved in polynomial time, with high probability. The proof relies on interesting structural properties of hyperbolic random graphs. Since these predictions of the model are interesting in their own right, we conducted experiments on real-world networks showing that these properties are also observed in practice.
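
For readers unfamiliar with branch-and-reduce, the following Python sketch (a textbook-style illustration, not the authors' solver) shows the flavor of such algorithms: a reduction rule handles degree-one vertices without branching, and the branching rule exploits high-degree vertices, which is precisely where heterogeneous degree distributions help:

```python
def min_vertex_cover(edges):
    """Minimum vertex cover size via branch-and-reduce on an edge set.

    Reduction: if some vertex v has degree 1, its neighbor u belongs
    to some optimal cover, so take u without branching. Otherwise
    branch on a maximum-degree vertex v: either v is in the cover, or
    all of v's neighbors are.
    """
    edges = frozenset(frozenset(e) for e in edges)
    if not edges:
        return 0

    degree = {}
    for e in edges:
        for v in e:
            degree[v] = degree.get(v, 0) + 1

    def take(vs):
        # Put vertices vs in the cover and delete the edges they hit.
        remaining = frozenset(e for e in edges if not (e & vs))
        return len(vs) + min_vertex_cover(remaining)

    # Reduction rule: a degree-1 vertex forces its unique neighbor in.
    for v, d in degree.items():
        if d == 1:
            (u,) = next(e for e in edges if v in e) - {v}
            return take({u})

    # Branch rule on a maximum-degree vertex.
    v = max(degree, key=degree.get)
    neighbors = set().union(*(e - {v} for e in edges if v in e))
    return min(take({v}), take(neighbors))

# A 4-cycle 0-1-2-3 needs two vertices to cover all its edges.
print(min_vertex_cover([(0, 1), (1, 2), (2, 3), (3, 0)]))  # 2
```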


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
David Orellana-Martín ◽  
Luis Valencia-Cabrera ◽  
Bosheng Song ◽  
Linqiang Pan ◽  
Mario J. Pérez-Jiménez

Over the last few years, a new methodology to address the P versus NP problem has been developed, based on searching for borderlines between nonefficient computing models (those able to solve in polynomial time only problems in class P) and presumably efficient ones (those able to solve NP-complete problems in polynomial time). These borderlines can be seen as frontiers of efficiency, which are crucial in this methodology. “Translating,” in some sense, an efficient solution in a presumably efficient model into an efficient solution in a nonefficient model would give an affirmative answer to the P versus NP problem. In the framework of Membrane Computing, the key of this approach is to detect the syntactic or semantic ingredients that are needed to pass from a nonefficient class of membrane systems to a presumably efficient one. This paper deals with tissue P systems with communication rules of type symport/antiport allowing the evolution of the objects triggering the rules. In previous works, frontiers of efficiency were found in these kinds of membrane systems, both with division rules and with separation rules. However, since they were not optimal, it is interesting to refine these frontiers. In this work, optimal frontiers of efficiency are obtained in terms of the total number of objects involved in the communication rules used in these kinds of membrane systems. These optimizations could be easier to translate, if possible, into efficient solutions in a nonefficient model.


2015 ◽  
Vol 25 (04) ◽  
pp. 283-298
Author(s):  
Oswin Aichholzer ◽  
Franz Aurenhammer ◽  
Thomas Hackl ◽  
Clemens Huemer ◽  
Alexander Pilz ◽  
...  

Deciding 3-colorability for general plane graphs is known to be an NP-complete problem. However, for certain families of graphs, like triangulations, polynomial-time algorithms exist. We consider the family of pseudo-triangulations, which are a generalization of triangulations, and prove NP-completeness for this class. This result also holds if we bound the face degree to four, or exclusively consider pointed pseudo-triangulations with maximum face degree five. In contrast to these completeness results, we show that pointed pseudo-triangulations with maximum face degree four are always 3-colorable. A corresponding 3-coloring can be found in linear time. Some complexity results relating to the rank of pseudo-triangulations are also given.
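
As a point of reference for the hardness side, a straightforward backtracking decision procedure for 3-colorability (a generic illustration added here, not the paper's algorithm) runs in exponential time in the worst case; the paper's positive results replace this by polynomial or linear-time methods for special subfamilies:

```python
def three_colorable(adj):
    """Decide 3-colorability of a graph by backtracking.

    adj: dict mapping vertex -> set of neighbors. Exhaustive search
    takes O(3^n) time in the worst case, in line with the problem's
    NP-completeness on general plane graphs.
    """
    vertices = list(adj)
    color = {}

    def extend(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(3):
            if all(color.get(u) != c for u in adj[v]):
                color[v] = c
                if extend(i + 1):
                    return True
                del color[v]  # backtrack
        return False

    return extend(0)

# K4 (complete graph on 4 vertices) is not 3-colorable; a 4-cycle is.
k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(three_colorable(k4), three_colorable(c4))  # False True
```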


2008 ◽  
Vol 15 (02) ◽  
pp. 173-187 ◽  
Author(s):  
Satoshi Iriyama ◽  
Masanori Ohya

For the SAT problem, which is known to be NP-complete, Ohya and Masuda found a quantum algorithm that computes a given Boolean function for all truth assignments. They showed that one can decide in polynomial time whether this Boolean function is satisfiable if a certain superposed state can be detected physically, which turns out to be related to an amplification process. Ohya and Volovich then realized this amplification by means of chaos dynamics [4, 5]. In this paper, we study the complexity of the SAT algorithm by rigorously counting the steps of the OMV algorithm discussed previously in [1, 2, 4, 5].

