COMPLEXITY AND APPROXIMABILITY OF DOUBLE DIGEST

2005 ◽  
Vol 03 (02) ◽  
pp. 207-223
Author(s):  
MARK CIELIEBAK ◽  
STEPHAN EIDENBENZ ◽  
GERHARD J. WOEGINGER

We revisit the DOUBLE DIGEST problem, which occurs in sequencing of large DNA strings and consists of reconstructing the relative positions of cut sites from two different enzymes. We first show that DOUBLE DIGEST is strongly NP-complete, improving upon previous results that only showed weak NP-completeness. Even the (experimentally more meaningful) variation in which we disallow coincident cut sites turns out to be strongly NP-complete. In the second part, we model errors in data as they occur in real-life experiments: we propose several optimization variations of DOUBLE DIGEST that model partial cleavage errors. We then show that most of these variations are hard to approximate. In the third part, we investigate variations with the additional restriction that coincident cut sites are disallowed, and we show that it is NP-hard to even find feasible solutions in this case, thus making it impossible to guarantee any approximation ratio at all.
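
For intuition, here is a tiny brute-force feasibility check for a DOUBLE DIGEST instance (a sketch for illustration only, exponential in the number of fragments and unrelated to the hardness constructions above). The multisets dA, dB, dC hold the fragment lengths obtained from digesting with enzyme A alone, enzyme B alone, and both enzymes together:

from collections import Counter
from itertools import permutations

def cut_sites(fragments):
    """Prefix sums of an ordered fragment list, excluding both endpoints."""
    sites, pos = set(), 0
    for f in fragments[:-1]:
        pos += f
        sites.add(pos)
    return sites

def double_digest_feasible(dA, dB, dC):
    """Do some orderings of dA and dB produce exactly the fragments dC?"""
    total = sum(dA)
    if sum(dB) != total or sum(dC) != total:
        return False
    target = Counter(dC)
    for ordA in set(permutations(dA)):
        for ordB in set(permutations(dB)):
            sites = sorted(cut_sites(ordA) | cut_sites(ordB))
            points = [0] + sites + [total]
            if Counter(b - a for a, b in zip(points, points[1:])) == target:
                return True
    return False

# Toy instance: A cuts at {3}, B cuts at {2}, combined fragments {2, 1, 5}.
print(double_digest_feasible([3, 5], [2, 6], [2, 1, 5]))  # True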

10.37236/1799 ◽  
2004 ◽  
Vol 11 (1) ◽  
Author(s):  
Alastair Farrugia

Can the vertices of an arbitrary graph $G$ be partitioned into $A \cup B$, so that $G[A]$ is a line-graph and $G[B]$ is a forest? Can $G$ be partitioned into a planar graph and a perfect graph? The NP-completeness of these problems follows as a special case of our result: if ${\cal P}$ and ${\cal Q}$ are additive induced-hereditary graph properties, then $({\cal P}, {\cal Q})$-colouring is NP-hard, with the sole exception of graph $2$-colouring (the case where both ${\cal P}$ and ${\cal Q}$ are the set ${\cal O}$ of finite edgeless graphs). Moreover, $({\cal P}, {\cal Q})$-colouring is NP-complete iff ${\cal P}$- and ${\cal Q}$-recognition are both in NP. This completes the proof of a conjecture of Kratochvíl and Schiermeyer, various authors having already settled many sub-cases.
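
As an illustration of the general statement, the sketch below decides one concrete $({\cal P}, {\cal Q})$-colouring question by exhaustive search, with ${\cal P}$ the forests and ${\cal Q}$ the edgeless graphs (both additive and induced-hereditary); the hardness result above says no polynomial-time algorithm is expected. The function name and the use of networkx are choices made here for the example, not part of the paper:

from itertools import combinations
import networkx as nx

def forest_plus_edgeless_partition(G):
    """Return (A, B) with G[A] a forest and G[B] edgeless, or None."""
    nodes = list(G.nodes)
    for k in range(len(nodes) + 1):
        for A in combinations(nodes, k):
            B = [v for v in nodes if v not in A]
            GA, GB = G.subgraph(A), G.subgraph(B)
            ok_A = GA.number_of_nodes() == 0 or nx.is_forest(GA)
            if ok_A and GB.number_of_edges() == 0:
                return set(A), set(B)
    return None

# A 5-cycle splits into an induced path and an independent set.
print(forest_plus_edgeless_partition(nx.cycle_graph(5)))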


Author(s):  
Alasdair Urquhart

The theory of computational complexity is concerned with estimating the resources a computer needs to solve a given problem. The basic resources are time (number of steps executed) and space (amount of memory used). There are problems in logic, algebra and combinatorial games that are solvable in principle by a computer, but computationally intractable because the resources required by relatively small instances are practically infeasible. The theory of NP-completeness concerns a common type of problem in which a solution is easy to check but may be hard to find. Such problems belong to the class NP; the hardest ones of this type are the NP-complete problems. The problem of determining whether a formula of propositional logic is satisfiable or not is NP-complete. The class of problems with feasible solutions is commonly identified with the class P of problems solvable in polynomial time. Assuming this identification, the conjecture that some NP problems require infeasibly long times for their solution is equivalent to the conjecture that P≠NP. Although the conjecture remains open, it is widely believed that NP-complete problems are computationally intractable.
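
The check-versus-find asymmetry can be made concrete with a toy satisfiability sketch (illustrative only): verifying that a given truth assignment satisfies a CNF formula takes time linear in the formula, while the naive search below may examine all 2^n assignments:

from itertools import product

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3), as lists of
# signed variable indices (positive = variable, negative = its negation).
cnf = [[1, -2], [2, 3], [-1, -3]]

def satisfies(assignment, cnf):
    """Polynomial-time check: does the assignment make every clause true?"""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in cnf)

def brute_force_sat(cnf, n):
    """Exponential-time search over all 2^n truth assignments."""
    for values in product([False, True], repeat=n):
        assignment = {i + 1: v for i, v in enumerate(values)}
        if satisfies(assignment, cnf):
            return assignment
    return None

print(brute_force_sat(cnf, 3))  # e.g. {1: False, 2: False, 3: True}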


2021 ◽  
Vol 13 (2) ◽  
pp. 1-20
Author(s):  
Sushmita Gupta ◽  
Pranabendu Misra ◽  
Saket Saurabh ◽  
Meirav Zehavi

An input to the POPULAR MATCHING problem, in the roommates setting (as opposed to the marriage setting), consists of a graph G (not necessarily bipartite) where each vertex ranks its neighbors in strict order, known as its preference. In the POPULAR MATCHING problem the objective is to test whether there exists a matching M* such that there is no matching M where more vertices prefer their matched status in M (in terms of their preferences) over their matched status in M*. In this article, we settle the computational complexity of the POPULAR MATCHING problem in the roommates setting by showing that the problem is NP-complete. Thus, we resolve an open question that has been repeatedly and explicitly asked over the last decade.
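
The notion can be made concrete with a small sketch (an illustration only, not the article's NP-completeness construction): count head-to-head votes between two matchings and brute-force the popularity test on tiny instances. The representation below, with prefs[v] a strictly ordered list of v's neighbours and a matching stored as a partner dictionary, is an assumption made for the example, and being matched is taken to be preferred over being unmatched:

def votes_for_first(M1, M2, prefs):
    """Number of vertices that strictly prefer their situation in M1 to M2."""
    count = 0
    for v in prefs:
        p1, p2 = M1.get(v), M2.get(v)
        if p1 == p2 or p1 is None:        # same partner, or unmatched in M1
            continue
        if p2 is None:                     # matched in M1, unmatched in M2
            count += 1
        elif prefs[v].index(p1) < prefs[v].index(p2):
            count += 1
    return count

def all_matchings(edges, used=frozenset()):
    """Enumerate every matching of the edge list as a partner dictionary."""
    yield {}
    for i, (u, v) in enumerate(edges):
        if u not in used and v not in used:
            for rest in all_matchings(edges[i + 1:], used | {u, v}):
                yield {u: v, v: u, **rest}

def is_popular(Mstar, edges, prefs):
    return all(votes_for_first(M, Mstar, prefs) <= votes_for_first(Mstar, M, prefs)
               for M in all_matchings(edges))

# A 3-cycle with cyclic preferences: no matching at all is popular.
edges = [("a", "b"), ("b", "c"), ("c", "a")]
prefs = {"a": ["b", "c"], "b": ["c", "a"], "c": ["a", "b"]}
print(any(is_popular(M, edges, prefs) for M in all_matchings(edges)))  # False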


Author(s):  
Marlene Arangú ◽  
Miguel Salido

A fine-grained arc-consistency algorithm for non-normalized constraint satisfaction problems

Constraint programming is a powerful software technology for solving numerous real-life problems. Many of these problems can be modeled as Constraint Satisfaction Problems (CSPs) and solved using constraint programming techniques. However, solving a CSP is NP-complete, so filtering techniques that reduce the search space are still necessary. Arc-consistency algorithms are widely used to prune the search space. The concept of arc-consistency is bidirectional, i.e., it must be ensured in both directions of the constraint (direct and inverse constraints). Two of the most well-known and frequently used arc-consistency algorithms for filtering CSPs are AC3 and AC4. These algorithms repeatedly carry out revisions and require support checks for identifying and deleting all unsupported values from the domains. Nevertheless, many revisions are ineffective, i.e., they cannot delete any value yet still consume many constraint checks and much time. In this paper, we present AC4-OP, an optimized version of AC4 that processes binary, non-normalized constraints in only one direction, storing the supports found in the inverse direction for later evaluation. It thus shortens the propagation phase by avoiding unnecessary or ineffective checks. AC4-OP reduces the number of constraint checks by 50% while pruning the same search space as AC4. The evaluation section shows the improvement of AC4-OP over AC4, AC6 and AC7 on random and non-normalized instances.
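
For reference, here is a minimal sketch of the classic AC3 baseline mentioned above (not the paper's AC4-OP). The data representation is an assumption made for the example: domains maps variables to sets of values, and constraints maps each directed arc (x, y) to a predicate over (value of x, value of y):

from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x with no support in y; return True if any removed."""
    check = constraints[(x, y)]
    removed = {vx for vx in domains[x]
               if not any(check(vx, vy) for vy in domains[y])}
    domains[x] -= removed
    return bool(removed)

def ac3(domains, constraints):
    """Enforce arc consistency; return False if some domain is wiped out."""
    queue = deque(constraints)                  # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# x < y over {1, 2, 3}: AC3 prunes 3 from x and 1 from y.
domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
constraints = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: b < a}
print(ac3(domains, constraints), domains)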


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1457
Author(s):  
Dieyan Liang ◽  
Hong Shen

As an important application of wireless sensor networks (WSNs), deployment of mobile sensors to periodically monitor (sweep cover) a set of points of interest (PoIs) arises in various applications, such as environmental monitoring and data collection. For a set of PoIs in an Eulerian graph, the point sweep coverage problem of deploying the fewest sensors to periodically cover the PoIs is known to be NP-hard, even if all sensors have the same velocity. In this paper, we consider the problem of finding a maximum-weight set of PoIs on a line that can be periodically covered by a given set of mobile sensors. We first prove that this problem is NP-hard when sensors have different velocities. Optimal and approximate solutions are then presented for sensors with the same and different velocities, respectively. For M sensors and N PoIs, the optimal algorithm for the case when sensors have the same velocity runs in O(MN) time; our polynomial-time approximation algorithm for the case when sensors have a constant number of velocities achieves approximation ratio 1/2; for the general case of arbitrary velocities, 1/(2α)- and (1/2)(1 − 1/e)-approximation algorithms are presented, respectively, where the integer α ≥ 2 is a tradeoff factor between time complexity and approximation ratio.
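
As a toy illustration of the setting (not the paper's algorithms), the sketch below checks a candidate deployment under the common back-and-forth patrol assumption that a sensor with velocity v and required sweep period T can cover one segment of length at most vT/2, and reports the total weight of the PoIs covered; the function and data layout are assumptions made for the example:

def covered_weight(pois, assignment, period):
    """pois: list of (position, weight); assignment: list of
    (velocity, seg_start, seg_end) triples, one per deployed sensor."""
    for v, lo, hi in assignment:
        if hi - lo > v * period / 2:
            raise ValueError("segment too long for this sensor's velocity")
    return sum(w for pos, w in pois
               if any(lo <= pos <= hi for _, lo, hi in assignment))

pois = [(0.0, 2), (1.0, 5), (4.0, 1), (9.0, 3)]
assignment = [(2.0, 0.0, 1.0), (1.0, 8.5, 9.0)]    # velocities 2 and 1, T = 1
print(covered_weight(pois, assignment, period=1.0))  # 2 + 5 + 3 = 10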


Author(s):  
Jin-Fan Liu ◽  
Karim A. Abdel-Malek

A formulation of a graph problem for scheduling parallel computations of multibody dynamic analysis is presented. The complexity of scheduling parallel computations for a multibody dynamic analysis is studied. The problem of finding a shortest critical branch spanning tree is described and transformed to a minimum radius spanning tree, which is solved by an algorithm of polynomial complexity. The problems of the shortest critical branch minimum weight spanning tree (SCBMWST) and the minimum weight shortest critical branch spanning tree (MWSCBST) are also presented. Both problems are shown to be NP-hard by proving that the bounded critical branch bounded weight spanning tree (BCBBWST) problem is NP-complete. It is also shown that the minimum computational cost spanning tree (MCCST) problem is at least as hard as the SCBMWST and MWSCBST problems, hence itself NP-hard. A heuristic approach to solving these problems is developed and implemented, and simulation results are discussed.
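
For the polynomially solvable subproblem mentioned above, a standard route to a minimum radius spanning tree is a shortest-path tree rooted at a vertex of minimum eccentricity: distances from the root are preserved in such a tree, so its radius matches the graph radius, which no spanning tree can beat. The sketch below uses networkx and is not necessarily the authors' transformation:

import networkx as nx

def min_radius_spanning_tree(G, weight="weight"):
    # Eccentricity of each vertex under weighted shortest-path distances.
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    center = min(G.nodes, key=lambda v: max(dist[v].values()))
    # Shortest-path tree from the center via Dijkstra predecessors.
    pred, _ = nx.dijkstra_predecessor_and_distance(G, center, weight=weight)
    T = nx.Graph()
    T.add_nodes_from(G.nodes)
    T.add_edges_from((pred[v][0], v) for v in G if v != center)
    return center, T

G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2), (1, 2, 2), (2, 3, 2), (0, 3, 5)])
center, T = min_radius_spanning_tree(G)
print(center, sorted(T.edges))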


2010 ◽  
Vol 10 (1&2) ◽  
pp. 141-151
Author(s):  
S. Beigi

Although it is believed unlikely that NP-hard problems admit efficient quantum algorithms, it has been shown that a quantum verifier can solve NP-complete problems given a "short" quantum proof; more precisely, $NP \subseteq QMA_{\log}(2)$, where $QMA_{\log}(2)$ denotes the class of quantum Merlin-Arthur games in which there are two unentangled provers who send two logarithmic-size quantum witnesses to the verifier. The inclusion $NP \subseteq QMA_{\log}(2)$ was proved by Blier and Tapp by giving a quantum Merlin-Arthur protocol for 3-coloring with perfect completeness and gap $1/24n^6$. Moreover, Aaronson et al. have shown the above inclusion with a constant gap by considering $\widetilde{O}(\sqrt{n})$ witnesses of logarithmic size. However, we still do not know whether $QMA_{\log}(2)$ with a constant gap contains NP. In this paper, we show that 3-SAT admits a $QMA_{\log}(2)$ protocol with gap $1/n^{3+\epsilon}$ for every constant $\epsilon > 0$.


2001 ◽  
Vol 12 (04) ◽  
pp. 533-550 ◽  
Author(s):  
WING-KAI HON ◽  
TAK-WAH LAM

The nearest neighbor interchange (nni) distance is a classical metric for measuring the distance (dissimilarity) between evolutionary trees. Computing the nni distance is known to be NP-complete. Existing approximation algorithms can attain an approximation ratio of log n for unweighted trees and 4 log n for weighted trees; yet these algorithms are limited to degree-3 trees. This paper extends the study of nni distance to trees with non-uniform degrees. We formulate the necessary and sufficient conditions for nni transformation and devise more topology-sensitive approximation algorithms to handle trees with non-uniform degrees. The approximation ratios are respectively [Formula: see text] and [Formula: see text] for unweighted and weighted trees, where d ≥ 4 is the maximum degree of the input trees.
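
For readers unfamiliar with the metric, the sketch below performs a single nni move, the elementary operation whose minimum count defines the nni distance (an illustration only, not the paper's approximation algorithms); representing the unrooted tree as a networkx graph is an assumption made for the example:

import networkx as nx

def nni_move(tree, u, v, a, b):
    """Across internal edge (u, v), swap the subtree hanging from u at
    neighbour a with the subtree hanging from v at neighbour b."""
    assert tree.has_edge(u, v) and tree.has_edge(u, a) and tree.has_edge(v, b)
    assert a != v and b != u
    new_tree = tree.copy()
    new_tree.remove_edges_from([(u, a), (v, b)])
    new_tree.add_edges_from([(u, b), (v, a)])
    return new_tree

# Quartet tree ((1,2),(3,4)) with internal edge (u, v); one nni move
# produces the alternative topology ((1,3),(2,4)).
T = nx.Graph([("u", "v"), ("u", 1), ("u", 2), ("v", 3), ("v", 4)])
T2 = nni_move(T, "u", "v", 2, 3)
print(sorted(T2.edges, key=str))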


Author(s):  
Oles Fedoruk

The paper analyzes different sources of anthroponyms in the original and final texts of P. Kulish’s novel “Chorna Rada: Khronika 1663 Roku” (“The Black Council: A Chronicle of the Year 1663”). Three types of sources have been identified: historical prototypes, names and surnames of Kulish’s friends, and archival (documentary) records. In addition, numerous notes in the early editions of the Russian novel contain references to the works of various people (M. Markevych, D. Bantysh-Kamenskyi, V. Kokhovskyi, etc.). The last group of anthroponyms stands outside the plot, and the paper does not focus on it. The historical and autobiographical sources of anthroponyms are generally known. Among the first are the prototypes of two hetmans, Yakym Somko and Ivan Briukhovetskyi, military secretary M. Vukhaievych, and regimental osaul M. Hvyntovka. The second group comprises the occasional characters Hordii Kostomara (the historian M. Kostomarov), Ivan Yusko (the teacher I. Yuskevych-Kraskovskyi), Hulak (M. Hulak, the founder of The Brotherhood of Saints Cyril and Methodius), Bilozerets (Kulish’s brother-in-law V. Bilozerskyi), Petro Serdiuk (Kulish’s close friend Petro Serdiukov), and Oleksa Senchylo (the teacher Oleksa Senchylo-Stefanovskyi). In the novel, Kulish drew the love line as a projection of his relationship with Oleksandra Bilozerska and her mother Motrona. The characters of Petro Shramenko, Lesia Cherevanivna and her mother Melaniia have an autobiographical basis. Accordingly, Lesia’s name was also taken from real life. The third group of sources supplying the anthroponyms is archival records. The paper analyzes Kulish’s extracts from the roster of Cossack regiments of the Hetmanate (1741). This source had not been used previously. It contains the anthroponyms Vasyl Nevolnyk (‘Slave’), Puhach, Petro Serdiuk, Taranukha, Chepurnyi, Cherevan, Tur, Shramko and Shramchenko, and Shkoda, which the author used in various editions of the novel.


2020 ◽  
Vol 1 (2) ◽  
pp. 17-27
Author(s):  
Damir R. Salikhov

“Regulatory sandboxes” are regarded as a special mechanism for setting up experimental regulation in the area of digital innovation (especially in financial technologies), creating a special regime for a limited number of participants and for a limited time. Russia has its own method of experimental regulation, which is not typical but may be helpful for other jurisdictions. There are three approaches to legal experiments (including digital innovations) in Russia. The first approach is to adopt special regulation on particular issues. There are recent examples of special laws (e.g. the Federal Law on the experiment with artificial intelligence technologies in Moscow). An alternative to this option is establishing experimental regulation by an act of the Government if legislation does not prohibit it (e.g. labeling with means of identification). The second approach deals only with Fintech innovations and provides a special mechanism to pilot models of innovative financial technologies. The participants of such a “sandbox” may create a close-to-life model in order to estimate the effects and risks. If the model works well, the regulation may be amended. The third approach creates a universal mechanism for real-life experiments in the sphere of digital innovations, based on a special Federal Law and a specific decision of the Government of the Russian Federation (or of the Bank of Russia in the financial sphere). The author compares the three approaches and their implementation within the framework of Russian legislation and practice and concludes that this experience may be used by developing countries with inflexible regulation in order to facilitate the development of digital innovations.

