2-Approximation Polynomial-Time Algorithm for a Cardinality-Weighted 2-Partitioning Problem of a Sequence

Author(s):  
Alexander Kel’manov ◽  
Sergey Khamidullin ◽  
Anna Panasenko


1994 ◽  
Vol 03 (03) ◽  
pp. 395-405
Author(s):  
J. HARALAMBIDES ◽  
S. TRAGOUDAS

The problem of partitioning the elements of a graph G=(V, E) into two equal-size sets A and B that share at most d elements, such that the total number of edges (u, v), u∈A−B, v∈B−A, is minimized, arises in the areas of Hypermedia Organization, Network Integrity, and VLSI Layout. We formulate the problem in terms of element duplication, where each element c∈A∩B is substituted by two copies c′∈A and c″∈B. As a result, edges incident to c′ or c″ need not count in the cost of the partition. We show that this partitioning problem is NP-hard in general, and we present a solution that utilizes an optimal polynomial-time algorithm for the special case where G is a series-parallel graph. We also discuss other special cases where the partitioning problem or its variations are polynomially solvable.
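To make the cut cost concrete, here is a minimal Python sketch (not the authors' algorithm): given a candidate 2-partition (A, B), it checks the equal-size and at-most-d-overlap constraints and counts only the edges whose endpoints are both non-duplicated elements; the function name and the edge-list representation are illustrative assumptions.

    def partition_cost(edges, A, B, d):
        """Count edges (u, v) with u in A - B and v in B - A, or return None
        if |A| != |B| or the overlap |A ∩ B| exceeds d."""
        A, B = set(A), set(B)
        if len(A) != len(B) or len(A & B) > d:
            return None                   # size or overlap constraint violated
        only_A, only_B = A - B, B - A     # non-duplicated elements
        cost = 0
        for u, v in edges:
            if (u in only_A and v in only_B) or (u in only_B and v in only_A):
                cost += 1                 # edge crosses the partition
        return cost

    # Example: on the path 1-2-3-4-5 with A = {1, 2, 3}, B = {3, 4, 5}, d = 1,
    # vertex 3 is duplicated, so edges 2-3 and 3-4 are not charged; cost is 0.
    print(partition_cost([(1, 2), (2, 3), (3, 4), (4, 5)], {1, 2, 3}, {3, 4, 5}, 1))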


2004 ◽  
Vol 14 (01n02) ◽  
pp. 85-104 ◽  
Author(s):  
XIAODONG WU ◽  
DANNY Z. CHEN ◽  
JAMES J. MASON ◽  
STEVEN R. SCHMID

Data clustering is an important theoretical topic and a sharp tool for various applications. It is a task frequently arising in geometric computing. The main objective of data clustering is to partition a given data set into clusters such that the data items within the same cluster are "more" similar to each other with respect to certain measures. In this paper, we study the pairwise data clustering problem with pairwise similarity/dissimilarity measures that need not satisfy the triangle inequality. Using a criterion called the minimum normalized cut, we model the general pairwise data clustering problem as a graph partition problem. The graph partition problem based on minimizing the normalized cut is known to be NP-hard. For an undirected weighted graph of n vertices, we present a ((4+o(1)) ln n)-approximation polynomial-time algorithm for the minimum normalized cut problem; this is the first provably good polynomial-time approximation algorithm for the problem. We also give a more efficient algorithm for this problem by sacrificing the approximation ratio slightly. Further, our scheme achieves a ((2+o(1)) ln n)-approximation polynomial-time algorithm for computing the sparsest cuts in edge-weighted and vertex-weighted undirected graphs, improving the previously best known approximation ratio by a constant factor. Some applications and implementation work of our approximate normalized-cut algorithms are also discussed.
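As an illustration of the objective (not the paper's approximation algorithm), the sketch below evaluates the normalized-cut value of a given 2-partition of a weighted undirected graph, using the common definition Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V); the function name and the edge-list representation are assumptions.

    from collections import defaultdict

    def normalized_cut(weighted_edges, A, B):
        """weighted_edges: iterable of (u, v, w); A, B: the two vertex sets."""
        A, B = set(A), set(B)
        cut = 0.0
        assoc = defaultdict(float)        # total incident edge weight per vertex
        for u, v, w in weighted_edges:
            assoc[u] += w
            assoc[v] += w
            if (u in A) != (v in A):      # edge crosses the partition
                cut += w
        assoc_A = sum(assoc[v] for v in A)
        assoc_B = sum(assoc[v] for v in B)
        return cut / assoc_A + cut / assoc_B

    # Example: two triangles joined by one light edge; splitting between the
    # triangles gives 0.1/6.1 + 0.1/6.1, roughly 0.033.
    edges = [(1, 2, 1.0), (2, 3, 1.0), (1, 3, 1.0),
             (4, 5, 1.0), (5, 6, 1.0), (4, 6, 1.0),
             (3, 4, 0.1)]
    print(normalized_cut(edges, {1, 2, 3}, {4, 5, 6}))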


10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (also under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
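To fix ideas about the aggregation step, here is a minimal Python sketch of evaluating a weighted profile under a positional scoring rule (Borda scores in the example); it illustrates how learned preference weights enter the aggregation, not the authors' learning procedure, and the function name and data layout are assumptions.

    def scoring_rule_winner(profile, weights, scores):
        """profile: list of rankings (best candidate first); weights: one weight
        per ranking; scores: positional scoring vector, e.g. Borda [m-1, ..., 0]."""
        total = {}
        for ranking, w in zip(profile, weights):
            for position, candidate in enumerate(ranking):
                total[candidate] = total.get(candidate, 0.0) + w * scores[position]
        return max(total, key=total.get)

    # Example with three candidates and Borda scores [2, 1, 0]: the second
    # preference carries more weight, so its top choice "b" wins.
    profile = [["a", "b", "c"], ["b", "c", "a"]]
    print(scoring_rule_winner(profile, weights=[1.0, 2.0], scores=[2, 1, 0]))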

