Individual Testing of Independent Items in Optimal Group Testing

1988 ◽  
Vol 2 (1) ◽  
pp. 23-29 ◽  
Author(s):  
Y. C. Yao ◽  
F. K. Hwang

We consider the group testing problem for a set of independent items I = {I1, …, In}, where Ii has probability pi of being defective and probability qi = 1 − pi of being good. The problem is to classify all items as good or defective with a minimum expected number of group tests, where a group test is a test on a subset S of I with two possible outcomes: either S is pure (contains no defective item) or S is contaminated (contains at least one defective item, with no information about which or how many). No polynomial-time algorithm is known for the group testing problem, even for the special case pi = p for all i. Hence, any method that reduces the size of the problem is very helpful. In this paper, we give such a method by providing a simple condition that screens out items that should be tested (only) individually. This condition leads to a necessary and sufficient condition for the individual testing algorithm to be optimal, generalizing a result of Unger [1] for the special case of identical pi.
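As a quick numerical illustration of the identical-pi special case, the sketch below compares individual testing of a pair with one simple adaptive alternative that first tests the pair as a group (this particular scheme is chosen only for illustration; it is not the paper's screening condition). For these two strategies the expected costs cross at p = (3 − √5)/2 ≈ 0.382; above that value the group test on the pair no longer pays off.

```python
import math

def pair_scheme_cost(p):
    """Expected number of tests for a pair under a simple adaptive scheme:
    test the pair together; if contaminated, test the first item, and test
    the second item only if the first turned out defective (if the first
    is good, the second must be the defective one)."""
    q = 1.0 - p
    return 1.0 + (1.0 - q * q) + p   # group test + maybe first item + maybe second item

threshold = (3 - math.sqrt(5)) / 2   # ~0.382: crossover point of the two strategies
for p in (0.25, 0.35, threshold, 0.45):
    print(f"p={p:.3f}  pair scheme={pair_scheme_cost(p):.3f}  individual=2.000")
```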

2018 ◽  
Vol 28 (01) ◽  
pp. 39-56 ◽  
Author(s):  
Jude Buot ◽  
Mikio Kano

Let [Formula: see text] and [Formula: see text] be two disjoint sets of red points and blue points, respectively, in the plane in general position. Assign a weight [Formula: see text] to each red point and a weight [Formula: see text] to each blue point, where [Formula: see text] and [Formula: see text] are positive integers. Define the weight of a region in the plane as the sum of the weights of the red and blue points in it. We give necessary and sufficient conditions for the existence of a line that bisects the weight of the plane whenever the total weight [Formula: see text] is [Formula: see text], for some integer [Formula: see text]. Moreover, we look closely into the special case where [Formula: see text] and [Formula: see text], since this case is important for generating a weight-equitable subdivision of the plane. Among other results, we show that for any configuration of [Formula: see text] with total weight [Formula: see text], for some integer [Formula: see text] and odd integer [Formula: see text], the plane can be subdivided into [Formula: see text] convex regions of weight [Formula: see text] if and only if [Formula: see text]. Using the proofs of the main result, we also give a polynomial-time algorithm for finding a weight-equitable subdivision of the plane.
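A line bisects the weight exactly when the points strictly on one of its sides carry half of the total weight. The checker below is an illustrative Python sketch: the weights w_red and w_blue, the line coefficients, and the sample configuration are assumptions, and no point is allowed to lie on the line.

```python
def halfplane_weight(points, weight, a, b, c):
    """Total weight of the given points strictly on the positive side of
    the line a*x + b*y + c = 0 (assumes no point lies on the line)."""
    return weight * sum(1 for (x, y) in points if a * x + b * y + c > 0)

def bisects(red, blue, w_red, w_blue, a, b, c):
    """True if the line splits the total weight into two equal halves."""
    total = w_red * len(red) + w_blue * len(blue)
    positive = halfplane_weight(red, w_red, a, b, c) + \
               halfplane_weight(blue, w_blue, a, b, c)
    return 2 * positive == total

red = [(0, 0), (2, 0)]           # weight 1 each
blue = [(3, 1)]                  # weight 2
print(bisects(red, blue, w_red=1, w_blue=2, a=1, b=0, c=-2.5))   # line x = 2.5 -> True
```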


Author(s):  
Soh Kumabe ◽  
Takanori Maehara

The b-matching game is a cooperative game defined on a graph. It generalizes the matching game by allowing each individual to have more than one partner. The game has several applications, such as roommate assignment, the multi-item version of the seller-buyer assignment, and international kidney exchange. Compared with the standard matching game, the b-matching game is computationally hard; in particular, the core non-emptiness problem and the core membership problem are co-NP-hard. We therefore focus on the convexity of the game, which is a sufficient condition for core non-emptiness, is often a more tractable concept, and has several additional benefits. In this study, we give a necessary and sufficient condition for the convexity of the b-matching game. This condition also yields an O(n log n + m α(n)) time algorithm to decide whether a given game is convex, where n and m are the numbers of vertices and edges of the given graph, respectively, and α(·) is the inverse Ackermann function. Using our characterization, we also give a polynomial-time algorithm to compute the Shapley value of a convex b-matching game.
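For intuition about the quantity computed in the last step, here is a brute-force Shapley value computation on a tiny matching-style game. It is exponential in the number of players and is not the authors' polynomial-time method for convex b-matching games; the toy characteristic function (size of a maximum matching on a 3-vertex path) is an assumption chosen for illustration.

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values by averaging marginal contributions over all
    player orderings; exponential time, suitable only for tiny games."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: phi[p] / len(orders) for p in players}

def v(S):
    # toy characteristic function: maximum matching size on the path a-b-c
    edges = [("a", "b"), ("b", "c")]
    return 1 if any(u in S and w in S for u, w in edges) else 0

print(shapley_values(["a", "b", "c"], v))   # b gets the largest share
```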


1981 ◽  
Vol 4 (3) ◽  
pp. 531-549 ◽  
Author(s):  
Miklós Szijártó

The correspondence between sequential program schemes and formal languages is well known (Blikle and Mazurkiewicz (1972), Engelfriet (1974)). The situation is more complicated in the case of parallel program schemes, and trace languages (Mazurkiewicz (1977)) have been introduced to describe them. We introduce the concept of the closure of a language with respect to a so-called independence relation on its alphabet, and formulate several theorems about such closures and about trace languages. We investigate the closure properties of the Chomsky classes under closure with respect to independence relations, and as a special case we derive a new necessary and sufficient condition for the regularity of the commutative closure of a language.
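For a finite language the closure under an independence relation can be computed directly by repeatedly swapping adjacent independent letters. The sketch below is a toy version of that construction; the alphabet, relation, and language are illustrative assumptions.

```python
from collections import deque

def independence_closure(words, independent):
    """Closure of a finite language under swaps of adjacent independent
    letters (a finite toy version of the trace-language closure)."""
    seen = set(words)
    queue = deque(words)
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 1):
            a, b = w[i], w[i + 1]
            if (a, b) in independent or (b, a) in independent:
                swapped = w[:i] + b + a + w[i + 2:]
                if swapped not in seen:
                    seen.add(swapped)
                    queue.append(swapped)
    return seen

# 'a' and 'b' are independent; both depend on 'c'
print(sorted(independence_closure({"abc", "cab"}, {("a", "b")})))
```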


1990 ◽  
Vol 4 (4) ◽  
pp. 447-460 ◽  
Author(s):  
Costas Courcoubetis ◽  
Richard Weber

Items of various types arrive at a bin-packing facility according to random processes and are to be combined with other readily available items of different types and packed into bins using one of a number of possible packings. One might think of a manufacturing context in which randomly arriving subassemblies are to be combined with subassemblies from an existing inventory to assemble a variety of finished products. Packing must be done on-line; that is, as each item arrives, it must be allocated to a bin whose packing configuration is fixed. Moreover, the packing must be managed in such a way that the readily available items are consumed at prescribed rates, corresponding perhaps to optimal rates for manufacturing these items. At any moment, some number of bins will be partially full. In practice, it is important that the packing be managed so that the expected number of partially full bins remains uniformly bounded in time. We present a necessary and sufficient condition for this goal to be achievable and describe an algorithm that achieves it.
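The sketch below is a deliberately simplified simulation (two item types, no stock items or prescribed consumption rates, both of which matter in the paper's model). It only illustrates the bookkeeping of partially full bins and why the choice of packing configuration matters: a policy restricted to mixed {a,b} bins lets the number of open bins drift, while one that may also open same-type bins keeps it small.

```python
import random

def simulate(policy, T=20000, seed=0):
    """Toy on-line packing: items of type 'a' or 'b' arrive i.i.d. and must be
    placed immediately into a bin whose configuration is fixed when opened.
    Configurations here are {a,b}, {a,a}, {b,b}.  We track the number of
    partially full (open) bins, the quantity the paper keeps bounded."""
    random.seed(seed)
    open_bins = {"a": 0, "b": 0}      # open bins still waiting for an 'a' / a 'b'
    worst = 0
    for _ in range(T):
        item = random.choice("ab")
        if open_bins[item] > 0:
            open_bins[item] -= 1      # the item completes an open bin
        else:
            other = "b" if item == "a" else "a"
            open_bins[policy(item, other)] += 1   # open a new bin; policy picks its config
        worst = max(worst, open_bins["a"] + open_bins["b"])
    return worst

naive = lambda item, other: other     # always open a mixed {a,b} bin
smart = lambda item, other: item      # open an {a,a} or {b,b} bin instead
print("naive policy, max open bins:", simulate(naive))
print("smart policy, max open bins:", simulate(smart))
```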


2020 ◽  
Vol 92 (1) ◽  
pp. 107-132 ◽  
Author(s):  
Britta Schulze ◽  
Michael Stiglmayr ◽  
Luís Paquete ◽  
Carlos M. Fonseca ◽  
David Willems ◽  
...  

Abstract In this article, we introduce the rectangular knapsack problem as a special case of the quadratic knapsack problem: maximize the product of two separate knapsack profits subject to a cardinality constraint. We propose a polynomial-time algorithm for this problem with a constant approximation ratio of 4.5. Our experimental results on a large number of artificially generated problem instances show that the average ratio is far better than this theoretical guarantee. In addition, we suggest refined versions of the approximation algorithm with the same time complexity and approximation ratio that lead to even better experimental results.
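As a point of reference for very small instances, the exhaustive baseline below maximizes the product of the two profit sums over all subsets of at most k items (assuming the cardinality constraint bounds the number of selected items by k). It is exponential in n and is not the 4.5-approximation algorithm proposed in the article.

```python
from itertools import combinations

def rectangular_knapsack_bruteforce(a, b, k):
    """Exhaustive search over all item subsets of size at most k,
    maximizing (sum of a-profits) * (sum of b-profits)."""
    n = len(a)
    best_val, best_set = 0, ()
    for r in range(1, k + 1):
        for S in combinations(range(n), r):
            val = sum(a[i] for i in S) * sum(b[i] for i in S)
            if val > best_val:
                best_val, best_set = val, S
    return best_val, best_set

a = [4, 1, 3, 2]
b = [1, 5, 2, 4]
print(rectangular_knapsack_bruteforce(a, b, 2))   # -> (30, (0, 1))
```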


2002 ◽  
Vol 30 (12) ◽  
pp. 761-770 ◽  
Author(s):  
Xiao-Xiong Gan ◽  
Nathaniel Knox

Given a formal power series g(x) = b0 + b1x + b2x^2 + ⋯ and a nonunit f(x) = a1x + a2x^2 + ⋯, it is well known that the composition of g with f, g(f(x)), is a formal power series. If the formal power series f above is not a nonunit, that is, if the constant term of f is not zero, the existence of the composition g(f(x)) has been an open problem for many years. Recent developments have investigated the radius of convergence of a composed formal power series such as the one above and obtained some very good results. This note gives a necessary and sufficient condition for the existence of the composition of some formal power series. By means of the theorems established in this note, the existence of the composition with a nonunit formal power series is recovered as a special case.
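The reason the nonunit case is unproblematic is that a zero constant term makes every coefficient of g(f(x)) a finite sum: f(x)^k contributes nothing below x^k. The truncated-composition sketch below (plain coefficient lists, not the note's formal framework) makes that concrete; with g(x) = 1/(1 − x) and f(x) = x + x^2 the printed coefficients 1, 1, 2, 3, 5, 8 are those of 1/(1 − x − x^2).

```python
def compose_truncated(g, f, n_terms):
    """Coefficients of g(f(x)) up to (but not including) x**n_terms,
    assuming f is a nonunit (f[0] == 0), so each coefficient is a finite sum."""
    assert f[0] == 0, "f must have zero constant term (a nonunit)"
    result = [0.0] * n_terms
    result[0] = g[0]
    power = [0.0] * n_terms      # current truncation of f(x)**k
    power[0] = 1.0
    for k in range(1, min(len(g), n_terms)):
        new_power = [0.0] * n_terms
        for i, pi in enumerate(power):
            if pi == 0.0:
                continue
            for j, fj in enumerate(f):
                if i + j < n_terms:
                    new_power[i + j] += pi * fj
        power = new_power
        for i in range(n_terms):
            result[i] += g[k] * power[i]
    return result

g = [1.0] * 8                 # 1/(1-x) = 1 + x + x^2 + ...
f = [0.0, 1.0, 1.0]           # f(x) = x + x^2
print(compose_truncated(g, f, 6))   # -> [1, 1, 2, 3, 5, 8]
```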


1988 ◽  
Vol 25 (3) ◽  
pp. 553-564 ◽  
Author(s):  
Jian Liu ◽  
Peter J. Brockwell

A sufficient condition is derived for the existence of a strictly stationary solution of the general bilinear time series equations. The condition is shown to reduce to the conditions of Pham and Tran (1981) and Bhaskara Rao et al. (1983) in the special cases which they consider. Under the condition specified, a solution is constructed which is shown to be causal, stationary and ergodic. It is moreover the unique causal solution and the unique stationary solution of the defining equations. In the special case when the defining equations contain no non-linear terms, our condition reduces to the well-known necessary and sufficient condition for existence of a causal stationary solution.
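A minimal simulation of one superdiagonal member of the bilinear family, X_t = aX_{t-1} + bX_{t-1}e_{t-1} + e_t, shows the two regimes that an existence condition of this kind separates. The parameter values below are illustrative only, and the simulation is not a substitute for the paper's criterion.

```python
import random

def simulate_bilinear(a, b, sigma=1.0, T=2000, seed=1):
    """Simulate X_t = a*X_{t-1} + b*X_{t-1}*e_{t-1} + e_t with e_t ~ N(0, sigma^2),
    a simple superdiagonal member of the general bilinear family.
    Returns the largest |X_t| observed (inf if the path explodes numerically)."""
    random.seed(seed)
    x, e_prev, biggest = 0.0, 0.0, 0.0
    for _ in range(T):
        e = random.gauss(0.0, sigma)
        x = a * x + b * x * e_prev + e
        e_prev = e
        if abs(x) > 1e12:            # treat as numerically exploded
            return float("inf")
        biggest = max(biggest, abs(x))
    return biggest

print(simulate_bilinear(a=0.5, b=0.3))   # stays moderate in simulation
print(simulate_bilinear(a=1.5, b=0.5))   # typically explodes in simulation
```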


10.37236/3388 ◽  
2014 ◽  
Vol 21 (2) ◽  
Author(s):  
Katharina T. Huber ◽  
Mike Steel

It is a classical result that any finite tree with positively weighted edges, and without vertices of degree 2, is uniquely determined by the weighted path distances between its pairs of leaves. Moreover, it is possible for a (small) strict subset $\mathcal{L}$ of leaf pairs to suffice for reconstructing the tree and its edge weights, given just the distances between the leaf pairs in $\mathcal{L}$. It is known that any set ${\mathcal L}$ with this property for a tree in which all interior vertices have degree 3 must form a cover for $T$; that is, for each interior vertex $v$ of $T$, ${\mathcal L}$ must contain a pair of leaves from each pair of the three components of $T-v$. Here we provide a partial converse of this result by showing that if a set ${\mathcal L}$ of leaf pairs forms a cover of a certain type for such a tree $T$, then $T$ and its edge weights can be uniquely determined from the distances between the pairs of leaves in ${\mathcal L}$. Moreover, there is a polynomial-time algorithm for achieving this reconstruction. The result establishes a special case of a recent question concerning 'triplet covers', and is relevant to a problem arising in evolutionary genomics.
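The local step that such covers make available is the classical three-point rule: for leaves a, b, c of a positively edge-weighted tree, the distances from the interior vertex where their three paths meet are determined by the pairwise leaf distances. A minimal sketch (illustrative Python, not the paper's full reconstruction algorithm):

```python
def median_distances(dab, dac, dbc):
    """Classical three-point formula: distances from the median vertex m
    (where the paths between leaves a, b, c meet) to each of the leaves,
    recovered from the three pairwise leaf distances."""
    da = (dab + dac - dbc) / 2
    db = (dab + dbc - dac) / 2
    dc = (dac + dbc - dab) / 2
    return da, db, dc

# three leaves hanging off one internal vertex with edge weights 2, 3, 4
print(median_distances(dab=5, dac=6, dbc=7))   # -> (2.0, 3.0, 4.0)
```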


1994 ◽  
Vol 03 (03) ◽  
pp. 395-405
Author(s):  
J. HARALAMBIDES ◽  
S. TRAGOUDAS

The problem of partitioning the elements of a graph G=(V, E) into two equal-size sets A and B that share at most d elements, such that the total number of edges (u, v) with u∈A−B and v∈B−A is minimized, arises in the areas of hypermedia organization, network integrity, and VLSI layout. We formulate the problem in terms of element duplication, where each element c∈A∩B is replaced by two copies c′∈A and c″∈B. As a result, edges incident to c′ or c″ need not count in the cost of the partition. We show that this partitioning problem is NP-hard in general, and we present a solution that utilizes an optimal polynomial-time algorithm for the special case where G is a series-parallel graph. We also discuss other special cases where the partitioning problem or its variations are polynomially solvable.
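For very small graphs the duplication formulation can be checked exhaustively, which helps to see exactly what the cost measures. The sketch below (3^n labelings, illustrative only, not the series-parallel algorithm of the paper) labels each vertex A-only, B-only, or duplicated, and counts only the edges joining an A-only vertex to a B-only vertex.

```python
from itertools import product

def min_cut_with_duplication(n, edges, d):
    """Exhaustive search for tiny instances: require |A| = |B| and at most d
    duplicated vertices, and count only edges between exclusive vertices."""
    best = None
    for labels in product("AB*", repeat=n):        # '*' = duplicated (in both sets)
        size_a = sum(l in ("A", "*") for l in labels)
        size_b = sum(l in ("B", "*") for l in labels)
        if size_a != size_b or labels.count("*") > d:
            continue
        cost = sum(1 for u, v in edges
                   if {labels[u], labels[v]} == {"A", "B"})
        if best is None or cost < best:
            best = cost
    return best

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(min_cut_with_duplication(4, cycle4, d=0))   # -> 2 (no duplication allowed)
print(min_cut_with_duplication(4, cycle4, d=2))   # -> 0 (duplicating two vertices)
```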


1992 ◽  
Vol 02 (04) ◽  
pp. 383-416 ◽  
Author(s):  
GORDON WILFONG

Suppose E is a set of labeled points (examples) in some metric space. A subset C of E is said to be a consistent subset of E if it has the property that for any example e∈E, the label of the closest example in C to e is the same as the label of e. We consider the problem of computing a minimum cardinality consistent subset. Consistent subsets have applications in pattern classification schemes that are based on the nearest neighbor rule. The idea is to replace the training set of examples with as small a consistent subset as possible so as to improve the efficiency of the system while not significantly affecting its accuracy. The problem of finding a minimum size consistent subset of a set of examples is shown to be NP-complete. A special case is described and is shown to be equivalent to an optimal disc cover problem. A polynomial time algorithm for this optimal disc cover problem is then given.
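A consistency check is straightforward, and Hart's classical condensing heuristic produces a (generally non-minimum) consistent subset. The sketch below shows both; it is a standard heuristic for the planar Euclidean case, not the NP-completeness reduction or the disc-cover algorithm of the paper.

```python
def nearest(i, subset, points):
    """Index in subset of the example closest to points[i] (Euclidean, 2-D)."""
    return min(subset, key=lambda j: (points[i][0] - points[j][0]) ** 2
                                   + (points[i][1] - points[j][1]) ** 2)

def is_consistent(C, points, labels):
    """C is consistent iff every example's nearest neighbour in C has its label."""
    return all(labels[nearest(i, C, points)] == labels[i]
               for i in range(len(points)))

def condensed_nn(points, labels):
    """Hart's condensing heuristic: repeatedly add any example that the
    current subset misclassifies, until the subset is consistent."""
    C = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(points)):
            if labels[nearest(i, C, points)] != labels[i]:
                C.append(i)
                changed = True
    return C

pts = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labs = ["red", "red", "red", "blue", "blue", "blue"]
C = condensed_nn(pts, labs)
print(C, is_consistent(C, pts, labs))   # -> [0, 3] True
```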

