An efficient exact algorithm for computing all pairwise distances between reconciliations in the duplication-transfer-loss model

2019 ◽  
Vol 20 (S20) ◽  
Author(s):  
Santi Santichaivekin ◽  
Ross Mawhorter ◽  
Ran Libeskind-Hadas

Abstract
Background: Maximum parsimony reconciliation in the duplication-transfer-loss model is widely used in studying the evolutionary histories of genes and species and in studying the coevolution of parasites and their hosts and of pairs of symbionts. While efficient algorithms are known for finding maximum parsimony reconciliations, the number of reconciliations can grow exponentially in the size of the trees. An understanding of the space of maximum parsimony reconciliations is necessary to determine whether a single reconciliation can adequately represent the space or whether multiple representative reconciliations are needed.
Results: We show that for any instance of the reconciliation problem, the distribution of pairwise distances can be computed exactly by an efficient polynomial-time algorithm with respect to several different distance metrics. We describe the algorithm, analyze its asymptotic worst-case running time, and demonstrate its utility and viability on a large biological dataset.
Conclusions: This result provides new insights into the structure of the space of maximum parsimony reconciliations. These insights are likely to be useful in the wide range of applications that employ reconciliation methods.
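As a minimal illustration of the objects involved (not the paper's algorithm or data structures), the sketch below models a reconciliation as a set of (gene node, event) pairs, uses symmetric set difference as one plausible distance metric, and shows the naive quadratic-in-the-number-of-reconciliations baseline that the paper's polynomial-time algorithm avoids:

```python
from itertools import combinations
from collections import Counter

# Simplifying assumption (not the paper's representation): a reconciliation
# is a frozenset of (gene_node, event) pairs, where event is one of
# "speciation", "duplication", "transfer", "loss".
def event_distance(r1, r2):
    """Symmetric set difference of event sets: one natural distance metric
    between two reconciliations."""
    return len(r1 ^ r2)

def pairwise_distance_distribution(reconciliations):
    """Naive baseline: enumerate all pairs explicitly. This is quadratic in
    the number of reconciliations, which can itself be exponential in the
    tree sizes; the paper's contribution is computing the same histogram in
    polynomial time without enumeration."""
    histogram = Counter()
    for r1, r2 in combinations(reconciliations, 2):
        histogram[event_distance(r1, r2)] += 1
    return histogram

# Toy usage with two hypothetical reconciliations of a three-node gene tree.
r_a = frozenset([("g1", "speciation"), ("g2", "duplication"), ("g3", "loss")])
r_b = frozenset([("g1", "speciation"), ("g2", "transfer"), ("g3", "loss")])
print(pairwise_distance_distribution([r_a, r_b]))  # Counter({2: 1})
```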

2005 ◽  
Vol 16 (05) ◽  
pp. 913-928 ◽  
Author(s):  
PIOTR FALISZEWSKI ◽  
LANE A. HEMASPAANDRA

Informally put, the semifeasible sets are the class of sets having a polynomial-time algorithm that, given as input any two strings of which at least one belongs to the set, will choose one that does belong to the set. We provide a tutorial overview of the advice complexity of the semifeasible sets. No previous familiarity with either the semifeasible sets or advice complexity is assumed, and where we include proofs we try to make the material as accessible as possible by providing intuitive, informal presentations. Karp and Lipton introduced advice complexity about a quarter of a century ago [18]. Advice complexity asks, for a given power of interpreter, how many bits of "help" suffice to accept a given set. Thus, this is a notion that contains aspects both of descriptional/informational complexity and of computational complexity. We will see that for some powers of interpreter the (worst-case) complexity of the semifeasible sets is known right down to the bit (and beyond), but that for the most central power of interpreter, deterministic polynomial time, the complexity is currently known only to be at least linear and at most quadratic. While overviewing the advice complexity of the semifeasible sets, we also stress the issue of whether the functions at the core of semifeasibility, the so-called selector functions, can without cost be chosen to possess such algebraic properties as commutativity and associativity. We will see that this is relevant, in ways both potential and actual, to the study of the advice complexity of the semifeasible sets.
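A standard toy example of a selector function, under the definition quoted in the abstract (the set L_w and threshold string below are hypothetical, chosen only for illustration): for the set of strings lexicographically at most some fixed w, the lexicographic minimum is a valid polynomial-time selector, and it happens to be both commutative and associative, the algebraic properties the survey discusses.

```python
# Toy P-selector illustrating semifeasibility. Fix the hypothetical set
# L_w = { x : x <= w lexicographically } for a fixed threshold string w.
# If at least one of two strings is in L_w, their lexicographic minimum is
# guaranteed to be in L_w, so min is a valid selector. Note that min is
# commutative and associative.
W = "m"  # hypothetical threshold string defining L_w

def in_L(x: str) -> bool:
    return x <= W

def selector(x: str, y: str) -> str:
    """Polynomial-time selector for L_w: returns min(x, y)."""
    return min(x, y)

# Whenever at least one input is in L_w, the output is in L_w:
assert in_L(selector("apple", "zebra"))  # "apple" <= "m", selector picks it
```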


2015 ◽  
Vol 14 (05) ◽  
pp. 1111-1128 ◽  
Author(s):  
Özgür Özpeynirci ◽  
Cansu Kandemir

In this study, we work on the order picking problem (OPP) in a specially designed warehouse with a single picker. Ratliff and Rosenthal [Operations Research 31(3) (1983) 507–521] show that the special design of the warehouse and the use of one picker lead to a polynomially solvable case. We address the multiobjective version of this special case and investigate the properties of the nondominated points. We develop an exact algorithm that finds any nondominated point and present an illustrative example. Finally, we conduct a computational test and report the results.
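For readers unfamiliar with the multiobjective vocabulary, the following small sketch (an illustration of the concept, not the paper's exact algorithm) filters a finite set of objective vectors down to the nondominated ones, assuming all objectives are minimized:

```python
def is_dominated(p, q):
    """q dominates p (for minimization) if q is no worse in every objective
    and strictly better in at least one."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def nondominated(points):
    """Keep only the nondominated objective vectors of a finite set."""
    return [p for p in points
            if not any(is_dominated(p, q) for q in points if q != p)]

# Toy bi-objective example, e.g., (total travel distance, total tardiness):
pts = [(10, 5), (8, 7), (9, 4), (12, 3), (8, 8)]
print(nondominated(pts))  # [(8, 7), (9, 4), (12, 3)]
```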


Entropy ◽  
2018 ◽  
Vol 20 (4) ◽  
pp. 274 ◽  
Author(s):  

Dynamic Bayesian networks (DBNs) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It has been shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. The community has therefore searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm, named bcDBN, for learning optimal DBNs consistent with a breadth-first search (BFS) order. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach enlarges the search space of the state-of-the-art tDBN algorithm exponentially in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T, and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well across different experiments.
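The exponential dependence on p and k comes from enumerating bounded parent sets. The hedged sketch below shows that core step for a single variable, with a black-box score function assumed (e.g., MDL or BDeu); the BFS-order restriction on intra-slice candidates, central to bcDBN, is deliberately omitted here:

```python
from itertools import combinations

def best_parent_set(child, intra_candidates, inter_candidates, k, p, score):
    """Enumerate every parent set with at most k intra-slice and p
    inter-slice parents and keep the highest-scoring one. `score` is an
    assumed black-box decomposable scoring function; bcDBN additionally
    restricts intra-slice candidates to those consistent with a BFS order,
    which this toy sketch omits."""
    best, best_val = (), float("-inf")
    for i in range(k + 1):
        for j in range(p + 1):
            for intra in combinations(intra_candidates, i):
                for inter in combinations(inter_candidates, j):
                    parents = intra + inter
                    val = score(child, parents)
                    if val > best_val:
                        best, best_val = parents, val
    return best, best_val

# Toy usage: a fake score that likes "A[t]" as a parent and penalizes size.
fake_score = lambda child, parents: ("A[t]" in parents) - 0.1 * len(parents)
print(best_parent_set("B[t+1]", ["A[t+1]"], ["A[t]", "B[t]"],
                      k=1, p=2, score=fake_score))  # (('A[t]',), 0.9)
```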


2012 ◽  
Vol 29 (04) ◽  
pp. 1250019 ◽  
Author(s):  
SHISHENG LI ◽  
BAOQIANG FAN

We address the nonresumable version of the scheduling problem with proportionally deteriorating jobs on a single machine subject to availability constraints. The objective is to minimize the total weighted completion time. We show that there exists no polynomial-time algorithm with a constant worst-case ratio for the problem with two nonavailability intervals unless P = NP. Furthermore, we propose a pseudo-polynomial-time algorithm and a fully polynomial-time approximation scheme for the problem with a single nonavailability interval.
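To make the objective concrete, here is a small evaluator (an assumption-laden sketch, not the paper's FPTAS) under the common convention for proportional deterioration that a job with rate a_j started at time s takes a_j * s time, with a strictly positive schedule start time t0:

```python
def total_weighted_completion(sequence, t0, interval):
    """Evaluate sum of w_j * C_j under proportional deterioration: a job
    with rate a_j started at time s runs for a_j * s, finishing at
    s * (1 + a_j). `interval = (B, F)` is the machine's nonavailability
    window; since jobs are nonresumable, any job that would overlap it is
    restarted from scratch at time F."""
    B, F = interval
    t, total = t0, 0.0
    for a, w in sequence:            # each job is (deterioration rate, weight)
        finish = t * (1 + a)
        if t < F and finish > B:     # job would overlap the window
            t = F                    # postpone the whole job to time F
            finish = t * (1 + a)
        t = finish
        total += w * finish
    return total

# Toy instance: two jobs, machine down on [3, 5], processing starts at t0 = 1.
print(total_weighted_completion([(1.0, 2.0), (0.5, 1.0)],
                                t0=1.0, interval=(3.0, 5.0)))  # 7.0
```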


2021 ◽  
Author(s):  
Xuanxiang Huang ◽  
Yacine Izza ◽  
Alexey Ignatiev ◽  
Joao Marques-Silva

Recent work has not only shown that decision trees (DTs) may fail to be interpretable but has also proposed a polynomial-time algorithm for computing one PI-explanation of a DT. This paper shows that for a wide range of classifiers, globally referred to as decision graphs, which include decision trees and binary decision diagrams as well as their multi-valued variants, there exist polynomial-time algorithms for computing one PI-explanation. In addition, the paper proposes a polynomial-time algorithm for computing one contrastive explanation. These novel algorithms build on explanation graphs (XpGs), a graph representation that enables both theoretically and practically efficient computation of explanations for decision graphs. Furthermore, the paper proposes a practically efficient solution for the enumeration of explanations and studies the complexity of deciding whether a given feature is included in some explanation. For the concrete case of decision trees, the paper shows that the set of all contrastive explanations can be enumerated in polynomial time. Finally, experimental results validate the practical applicability of the proposed algorithms on a wide range of publicly available benchmarks.
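The XpG machinery is beyond a short sketch, but the classical deletion-based computation of one PI-explanation for a DT, which also runs in polynomial time, conveys the idea: start from all features of the instance and drop any feature whose removal still forces the prediction. The tree encoding below is a hypothetical toy format, not the paper's.

```python
def possible_classes(tree, fixed):
    """Classes reachable when only the features in `fixed` are constrained.
    A tree is ("leaf", cls) or ("node", feature, {value: subtree})."""
    if tree[0] == "leaf":
        return {tree[1]}
    _, feat, branches = tree
    if feat in fixed:
        return possible_classes(branches[fixed[feat]], fixed)
    out = set()
    for sub in branches.values():  # feature is free: explore every branch
        out |= possible_classes(sub, fixed)
    return out

def one_pi_explanation(tree, instance):
    """Deletion-based computation of one PI-explanation: greedily drop
    features whose removal still forces the tree's prediction. Polynomial
    time, since each check is a single tree traversal."""
    target = possible_classes(tree, instance)  # singleton: the prediction
    expl = dict(instance)
    for feat in list(expl):
        trial = {f: v for f, v in expl.items() if f != feat}
        if possible_classes(tree, trial) == target:
            expl = trial  # `feat` is redundant for this prediction
    return expl

# Toy DT: predicts 1 iff x1 = 1; x2 is irrelevant on this instance's path.
dt = ("node", "x1", {0: ("leaf", 0),
                     1: ("node", "x2", {0: ("leaf", 1), 1: ("leaf", 1)})})
print(one_pi_explanation(dt, {"x1": 1, "x2": 0}))  # {'x1': 1}
```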


Algorithmica ◽  
2021 ◽  
Author(s):  
Eleni C. Akrida ◽  
Argyrios Deligkas ◽  
Themistoklis Melissourgos ◽  
Paul G. Spirakis

Abstract
We study a security game over a network played between a defender and k attackers. Every attacker chooses, probabilistically, a node of the network to damage. The defender chooses, probabilistically as well, a connected induced subgraph of the network of λ nodes to scan and clean. Each attacker wishes to maximize the probability of escaping the defender's cleaning; the defender, in turn, wishes to maximize the expected number of attackers that she catches. This game generalizes the model from the seminal paper of Mavronicolas et al. (in: International Symposium on Mathematical Foundations of Computer Science, MFCS, pp. 717–728, 2006). We are interested in Nash equilibria of this game, as well as in characterizing defense-optimal networks, which allow for the best equilibrium defense ratio: the ratio of k over the expected number of attackers that the defender catches in equilibrium. We provide a characterization of the Nash equilibria of this game and of defense-optimal networks. The equilibrium characterizations allow us to show that the equilibria of the game remain the same even if the attackers are centrally controlled. In addition, we give an algorithm for computing Nash equilibria. Our algorithm requires exponential time in the worst case, but it is polynomial-time for λ constantly close to 1 or n. For the special case of tree networks, we further refine our characterization, which allows us to derive a polynomial-time algorithm for deciding whether a tree is defense-optimal and, if so, for computing a defense-optimal Nash equilibrium. On the other hand, we prove that it is NP-hard to find a best-defense strategy if the tree is not defense-optimal. We complement this negative result with a polynomial-time constant-approximation algorithm that computes solutions close to optimal for general graphs. Finally, we provide asymptotically (almost) tight bounds for the Price of Defense for any λ; this is the worst equilibrium defense ratio over all graphs.
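The payoff quantities follow directly from the abstract's definitions; computing equilibria is the hard part the paper addresses. As a hedged companion, the sketch below evaluates the expected number of caught attackers and the defense ratio for explicitly given mixed strategies:

```python
def expected_catches(defender_dist, attacker_dists):
    """Expected number of attackers caught: attacker i is caught iff the
    node she hits lies in the scanned subgraph. `defender_dist` maps each
    connected λ-node subgraph (a frozenset of nodes) to its probability;
    each entry of `attacker_dists` maps nodes to attack probabilities."""
    total = 0.0
    for subgraph, p_scan in defender_dist.items():
        for attack in attacker_dists:
            total += p_scan * sum(p for v, p in attack.items() if v in subgraph)
    return total

def defense_ratio(defender_dist, attacker_dists):
    """k over the expected number of caught attackers (lower is better)."""
    return len(attacker_dists) / expected_catches(defender_dist, attacker_dists)

# Toy: path 1-2-3, λ = 2, defender mixes over both edges, two uniform attackers.
d = {frozenset({1, 2}): 0.5, frozenset({2, 3}): 0.5}
attackers = [{1: 1/3, 2: 1/3, 3: 1/3}] * 2
print(defense_ratio(d, attackers))  # 2 / (2 * 2/3) = 1.5
```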


2003 ◽  
Vol 13 (03) ◽  
pp. 189-229 ◽  
Author(s):  
Jean-Daniel Boissonnat ◽  
Sylvain Lazard

In this paper, we consider the problem of computing shortest paths of bounded curvature amidst obstacles in the plane. More precisely, given prescribed initial and final configurations (each specifying a location and a direction of travel) and a set of obstacles in the plane, we want to compute a shortest C1 path joining those two configurations, avoiding the obstacles, with the further constraint that, on each C2 piece, the radius of curvature is at least 1. We consider the case of moderate obstacles and present a polynomial-time exact algorithm for this problem.
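As a small numeric companion (a sanity check on sampled paths, not the paper's exact algorithm), the radius-of-curvature constraint can be probed on a polygonal approximation via the circumradius of consecutive point triples:

```python
import math

def circumradius(p, q, r):
    """Radius of the circle through three points; infinite if collinear."""
    a = math.dist(q, r); b = math.dist(p, r); c = math.dist(p, q)
    area2 = abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1]))
    return float("inf") if area2 == 0 else a * b * c / (2 * area2)

def respects_curvature_bound(points, r_min=1.0):
    """Discrete proxy for the C2 constraint 'radius of curvature >= r_min':
    every consecutive triple of samples must have circumradius >= r_min."""
    return all(circumradius(points[i], points[i+1], points[i+2]) >= r_min
               for i in range(len(points) - 2))

# Samples from a unit circle sit right at the bound; allow a little
# floating-point slack when testing against r_min = 1.
unit_arc = [(math.cos(t), math.sin(t)) for t in (0.0, 0.3, 0.6, 0.9)]
print(respects_curvature_bound(unit_arc, r_min=1.0 - 1e-9))  # True
```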


2021 ◽  
Vol 14 (10) ◽  
pp. 1756-1768
Author(s):  
Tianyuan Jin ◽  
Yu Yang ◽  
Renchi Yang ◽  
Jieming Shi ◽  
Keke Huang ◽  
...  

Given a set V, the problem of unconstrained submodular maximization with modular costs (USM-MC) asks for a subset S ⊆ V that maximizes f(S) − c(S), where f is a non-negative, monotone, and submodular function that gauges the utility of S, and c is a non-negative and modular function that measures the cost of S. This problem finds applications in numerous practical scenarios, such as profit maximization in viral marketing on social media. This paper presents ROI-Greedy, a polynomial-time algorithm for USM-MC that returns a solution S satisfying [EQUATION], where S* is the optimal solution to USM-MC. To our knowledge, ROI-Greedy is the first algorithm that provides such a strong approximation guarantee. In addition, we show that this worst-case guarantee is tight, in the sense that no polynomial-time algorithm can ensure [EQUATION] for any ϵ > 0. Further, we devise a non-trivial extension of ROI-Greedy to solve the profit maximization problem, where the precise value of f(S) for any set S is unknown and can only be approximated via sampling. Extensive experiments on benchmark datasets demonstrate that ROI-Greedy significantly outperforms competing methods in terms of the tradeoff between efficiency and solution quality.
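To ground the objective f(S) − c(S), here is a simplified marginal-gain greedy over a toy coverage utility. It is a hedged sketch in the spirit of, but not identical to, ROI-Greedy, and it carries none of the paper's approximation guarantee:

```python
def greedy_profit(items, coverage, cost):
    """Maximize f(S) - c(S) greedily, where f(S) = |union of covered
    elements| (monotone submodular) and c is modular: repeatedly add the
    item with the largest marginal profit while it stays positive."""
    S, covered = set(), set()
    while True:
        best, best_gain = None, 0.0
        for v in items - S:
            gain = len(coverage[v] - covered) - cost[v]  # marginal f minus cost
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:
            return S
        S.add(best)
        covered |= coverage[best]

# Toy viral-marketing flavor: items are seed users, sets are audiences reached.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
cost = {"a": 1.0, "b": 0.5, "c": 2.0}
print(greedy_profit({"a", "b", "c"}, coverage, cost))  # {'a', 'b'}
```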


10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (again under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
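For the second setting, the aggregation step itself is simple to state in code. The sketch below applies a weighted positional scoring rule (Borda's score vector, with hypothetical learned weights); learning those weights from examples is the paper's actual problem:

```python
def positional_scores(profile, weights, score_vector):
    """Aggregate a weighted preference profile with a positional scoring
    rule: the alternative at position i of a ranking earns score_vector[i],
    multiplied by that preference's weight."""
    totals = {}
    for ranking, w in zip(profile, weights):
        for pos, alt in enumerate(ranking):
            totals[alt] = totals.get(alt, 0.0) + w * score_vector[pos]
    return totals

# Three preferences over {x, y, z}, Borda score vector (2, 1, 0), and
# hypothetical learned weights emphasizing the first preference.
profile = [("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")]
weights = [2.0, 1.0, 1.0]
print(max(positional_scores(profile, weights, (2, 1, 0)).items(),
          key=lambda kv: kv[1]))  # ('x', 5.0)
```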

