A polynomial time algorithm for big data in a special case of minimum constraint removal problem

2019, Vol. 13 (2), pp. 247-254. Author(s): Bahram Sadeghi Bigham, Fariba Noorizadeh, Salman Khodayifar
DOI: 10.37236/3388. 2014, Vol. 21 (2). Author(s): Katharina T. Huber, Mike Steel

It is a classical result that any finite tree with positively weighted edges, and without vertices of degree 2, is uniquely determined by the weighted path distance between each pair of leaves. Moreover, it is possible for a (small) strict subset $\mathcal{L}$ of leaf pairs to suffice for reconstructing the tree and its edge weights, given just the distances between the leaf pairs in $\mathcal{L}$. It is known that any set ${\mathcal L}$ with this property for a tree in which all interior vertices have degree 3 must form a cover for $T$; that is, for each interior vertex $v$ of $T$ and each pair of the three components of $T-v$, ${\mathcal L}$ must contain a pair of leaves with one leaf in each of those two components. Here we provide a partial converse of this result by showing that if a set ${\mathcal L}$ of leaf pairs forms a cover of a certain type for such a tree $T$, then $T$ and its edge weights can be uniquely determined from the distances between the pairs of leaves in ${\mathcal L}$. Moreover, there is a polynomial-time algorithm for achieving this reconstruction. The result establishes a special case of a recent question concerning 'triplet covers', and is relevant to a problem arising in evolutionary genomics.
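To make the distance-based reconstruction idea concrete (this is only the elementary building block, not the triplet-cover algorithm of the paper), the sketch below recovers the three edge lengths around an interior vertex from the pairwise distances of three leaves via the standard three-point formulas; the function name and numbers are illustrative.

```python
def edge_lengths_around_vertex(d_ab, d_ac, d_bc):
    """Given the pairwise path distances between three leaves a, b, c whose
    paths meet at a common interior vertex v, recover the distance from v to
    each leaf using the standard three-point formulas."""
    x_a = (d_ab + d_ac - d_bc) / 2.0  # distance from leaf a to v
    x_b = (d_ab + d_bc - d_ac) / 2.0  # distance from leaf b to v
    x_c = (d_ac + d_bc - d_ab) / 2.0  # distance from leaf c to v
    return x_a, x_b, x_c

# A star tree with pendant edge weights 1.5, 2.0, 0.5 has
# d(a,b) = 3.5, d(a,c) = 2.0, d(b,c) = 2.5:
print(edge_lengths_around_vertex(3.5, 2.0, 2.5))  # (1.5, 2.0, 0.5)
```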


1994, Vol. 03 (03), pp. 395-405. Author(s): J. Haralambides, S. Tragoudas

The problem of partitioning the elements of a graph G=(V, E) into two equal-size sets A and B that share at most d elements, such that the total number of edges (u, v) with u∈A−B and v∈B−A is minimized, arises in the areas of Hypermedia Organization, Network Integrity, and VLSI Layout. We formulate the problem in terms of element duplication, where each element c∈A∩B is substituted by two copies c′∈A and c″∈B. As a result, edges incident to c′ or c″ do not count toward the cost of the partition. We show that this partitioning problem is NP-hard in general, and we present a solution which utilizes an optimal polynomial-time algorithm for the special case where G is a series-parallel graph. We also discuss other special cases where the partitioning problem or its variations are polynomially solvable.
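As a small illustration of the duplication model (a cost checker on assumed inputs, not the authors' series-parallel algorithm), the sketch below charges only those edges whose endpoints lie exclusively in A and exclusively in B:

```python
def duplication_cut_cost(edges, A, B):
    """Cost of a partition (A, B) in the duplication model: an edge (u, v) is
    charged only when one endpoint lies exclusively in A and the other
    exclusively in B; edges incident to a duplicated element in A ∩ B are free."""
    only_a, only_b = set(A) - set(B), set(B) - set(A)
    return sum(1 for u, v in edges
               if (u in only_a and v in only_b) or (u in only_b and v in only_a))

# A 4-cycle 1-2-3-4-1: the best split without duplication costs 2, while
# duplicating elements 2 and 4 (d = 2) makes every edge touch a shared copy.
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(duplication_cut_cost(edges, {1, 2}, {3, 4}))        # 2
print(duplication_cut_cost(edges, {1, 2, 4}, {2, 3, 4}))  # 0
```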


2015, Vol. 14 (05), pp. 1111-1128. Author(s): Özgür Özpeynirci, Cansu Kandemir

In this study, we work on the order picking problem (OPP) in a specially designed warehouse with a single picker. Ratliff and Rosenthal [Operations Research 31(3) (1983) 507–521] show that the special design of the warehouse and the use of one picker lead to a polynomially solvable case. We address the multiobjective version of this special case and investigate the properties of the nondominated points. We develop an exact algorithm that finds any nondominated point and present an illustrative example. Finally, we conduct a computational test and report the results.
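The sketch below is not the authors' algorithm; it only illustrates the notion of nondominated points with a simple quadratic Pareto filter over hypothetical two-objective values for candidate picking tours, both objectives to be minimized:

```python
def nondominated(points):
    """Return the nondominated points among a list of objective vectors,
    assuming every objective is to be minimized (simple O(n^2) filter)."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Hypothetical objective vectors for six candidate picking tours:
tours = [(42, 7), (40, 9), (45, 5), (50, 4), (46, 8), (43, 10)]
print(nondominated(tours))  # [(42, 7), (40, 9), (45, 5), (50, 4)]
```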


Algorithms, 2021, Vol. 14 (4), pp. 110. Author(s): David Schaller, Manuela Geiß, Marc Hellmuth, Peter F. Stadler

Best match graphs (BMGs) are vertex-colored digraphs that naturally arise in mathematical phylogenetics to formalize the notion of evolutionarily closest genes with respect to an a priori unknown phylogenetic tree. BMGs are explained by unique least resolved trees. We prove that the property of a rooted, leaf-colored tree to be least resolved for some BMG is preserved by the contraction of inner edges. For the special case of two-colored BMGs, this leads to a characterization of the least resolved trees (LRTs) of binary-explainable trees and a simple, polynomial-time algorithm for the minimum-cardinality completion of the arc set of a BMG to reach a BMG that can be explained by a binary tree.
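To make the best-match notion tangible, the following sketch computes the arcs of a BMG directly from a given rooted, leaf-colored tree (represented by a parent dictionary): y is a best match of x if y has a different color and no leaf of that color has a deeper last common ancestor with x. This is a naive illustration of the definition, not the completion algorithm of the paper; the tree and colors in the example are made up.

```python
def ancestors(parent, v):
    """Path from v up to the root (inclusive), using a parent dictionary."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca_depth(parent, x, y):
    """Depth (number of ancestors including itself) of the last common
    ancestor of x and y; larger means deeper, i.e. closer to the leaves."""
    common = set(ancestors(parent, x)) & set(ancestors(parent, y))
    return max(len(ancestors(parent, a)) for a in common)

def best_match_arcs(parent, colors):
    """Arcs (x, y) of the best match graph for the leaf coloring `colors`."""
    leaves = list(colors)
    arcs = []
    for x in leaves:
        for c in {colors[y] for y in leaves} - {colors[x]}:
            same = [y for y in leaves if colors[y] == c]
            depth = {y: lca_depth(parent, x, y) for y in same}
            best = max(depth.values())
            arcs += [(x, y) for y in same if depth[y] == best]
    return arcs

# Tree ((a1, b1), (a2, b2)) with two colors: each a_i's best match is b_i.
parent = {'a1': 'u', 'b1': 'u', 'a2': 'w', 'b2': 'w', 'u': 'r', 'w': 'r'}
colors = {'a1': 'red', 'b1': 'blue', 'a2': 'red', 'b2': 'blue'}
print(sorted(best_match_arcs(parent, colors)))
# [('a1', 'b1'), ('a2', 'b2'), ('b1', 'a1'), ('b2', 'a2')]
```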


2009, Vol. 01 (02), pp. 253-265. Author(s): Toni R. Farley, Charles J. Colbourn

Network operation may require that a specified number k of nodes be able to communicate via paths consisting of operating edges and nodes. In an environment of node and edge failure, this leads to associated reliability measures. When the k nodes are known in advance, this has been widely studied as k-terminal reliability; when the k nodes are chosen uniformly at random, this has been studied as k-resilience. A third notion, when it suffices to have any k nodes communicate, is related to the expected size of the largest component in the network. We generalize these three measures to the probability that, given h nodes chosen in advance and i nodes chosen at random, they appear in a component of size at least k = h + i + j. As expected, for general networks and for most choices of (h, i, j), the computation is #P-complete and hence unlikely to admit a polynomial-time algorithm. We develop polynomial-time algorithms in the special case that the network is series-parallel, which subsume and generalize earlier methods for k-terminal reliability and k-resilience.
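The series-parallel algorithms themselves are not reproduced here; the sketch below only estimates the generalized measure by Monte Carlo simulation under an assumed model in which edges fail independently (node failures are omitted for brevity), with all names and parameters chosen for illustration.

```python
import random

def component_of(start, nodes, edges):
    """Connected component containing `start`, via DFS over surviving edges."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def estimate_measure(nodes, edges, p_edge, fixed, i, j, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that the h fixed nodes plus i
    uniformly chosen nodes all lie in one component of size at least
    k = h + i + j, when each edge operates independently with probability
    p_edge (assumes `fixed` is nonempty)."""
    rng = random.Random(seed)
    others = [v for v in nodes if v not in fixed]
    hits = 0
    for _ in range(trials):
        chosen = set(fixed) | set(rng.sample(others, i))
        alive = [e for e in edges if rng.random() < p_edge]
        comp = component_of(next(iter(chosen)), nodes, alive)
        if chosen <= comp and len(comp) >= len(fixed) + i + j:
            hits += 1
    return hits / trials

# Path network 1-2-3-4: node 1 fixed (h = 1), one random node (i = 1), and the
# component must contain at least one further node (j = 1).
nodes, edges = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]
print(estimate_measure(nodes, edges, 0.9, fixed={1}, i=1, j=1))
```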


DOI: 10.29007/v68w. 2018. Author(s): Ying Zhu, Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (also under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
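For the positional-scoring side, here is a minimal sketch of weighted aggregation and of checking whether a pairwise example is decided correctly; the scoring vector, weights, and data are hypothetical, and this is not the learning algorithm itself.

```python
def weighted_positional_order(profile, weights, scores):
    """Aggregate a weighted profile of rankings (best alternative first) with a
    positional scoring rule; scores[i] is the score awarded for position i."""
    total = {}
    for ranking, w in zip(profile, weights):
        for pos, alt in enumerate(ranking):
            total[alt] = total.get(alt, 0.0) + w * scores[pos]
    return sorted(total, key=total.get, reverse=True)

def decides_example(profile, weights, scores, preferred, other):
    """True if the aggregated order ranks `preferred` above `other`."""
    order = weighted_positional_order(profile, weights, scores)
    return order.index(preferred) < order.index(other)

# Borda scores for three alternatives; the second voter counts twice as much,
# so the example "b should beat a" is decided correctly.
profile = [['a', 'b', 'c'], ['b', 'c', 'a']]
print(decides_example(profile, [1.0, 2.0], [2, 1, 0], 'b', 'a'))  # True
```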

