A Near-optimal Protocol for the Subset Selection Problem in RFID Systems

Author(s):  
Xiujun Wang ◽  
Zhi Liu ◽  
Susumu Ishihara ◽  
Zhe Dang ◽  
Jie Li

2017 ◽  
Vol 27 (04) ◽  
pp. 277-296 ◽  
Author(s):  
Vincent Froese ◽  
Iyad Kanj ◽  
André Nichterlein ◽  
Rolf Niedermeier

We study the General Position Subset Selection problem: given a set of points in the plane, find a maximum-cardinality subset of points in general position. We prove that General Position Subset Selection is NP-hard and APX-hard, and we present several fixed-parameter tractability results for the problem as well as a subexponential running-time lower bound based on the Exponential Time Hypothesis.
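As an illustration of the problem statement only (not of the algorithms analyzed in the paper), the following Python sketch checks the general-position condition and finds a maximum-cardinality subset by brute force; its exponential running time is in line with the NP-hardness result.

from itertools import combinations

def collinear(p, q, r):
    # Cross-product test: zero exactly when the three points lie on one line.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def in_general_position(points):
    # A point set is in general position if no three of its points are collinear.
    return not any(collinear(p, q, r) for p, q, r in combinations(points, 3))

def max_general_position_subset(points):
    # Brute force: try subsets from largest to smallest and return the first
    # one found in general position (exponential time).
    for size in range(len(points), 0, -1):
        for subset in combinations(points, size):
            if in_general_position(subset):
                return list(subset)
    return []

# Example: four grid points, three of which are collinear.
print(max_general_position_subset([(0, 0), (1, 0), (2, 0), (1, 1)]))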


1994 ◽  
Vol 44 (1-2) ◽  
pp. 41-48
Author(s):  
Tong-An Hsu

Let A_1, A_2, …, A_k be k alternatives for a decision problem. Saaty uses a ratio scale (π_1, π_2, …, π_k) for the priorities of the alternatives. In this subset selection problem, we derive a selection procedure that chooses, from the k alternatives, a subset containing the alternative with the largest priority.
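The author's selection procedure is not reproduced in this abstract; purely as a hypothetical illustration of the setting, the Python sketch below keeps every alternative whose priority estimate is within a factor c of the largest estimate, so that the selected subset is likely to contain the alternative with the largest true priority. The threshold rule and the constant c are assumptions, not the procedure derived in the paper.

def select_subset(priorities, c=0.8):
    # Hypothetical rule: retain every alternative whose estimated priority
    # is at least c times the largest estimate.
    best = max(priorities)
    return [i for i, p in enumerate(priorities) if p >= c * best]

# Example with k = 4 alternatives and ratio-scale priorities summing to 1.
print(select_subset([0.40, 0.35, 0.15, 0.10]))  # indices 0 and 1 are selected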


2006 ◽  
Vol 169 (2) ◽  
pp. 477-489 ◽  
Author(s):  
Félix García López ◽  
Miguel García Torres ◽  
Belén Melián Batista ◽  
José A. Moreno Pérez ◽  
J. Marcos Moreno-Vega

2019 ◽  
Vol 27 (4) ◽  
pp. 611-637
Author(s):  
Benoît Groz ◽  
Silviu Maniu

The hypervolume subset selection problem (HSSP) aims at approximating a set of n multidimensional points in ℝ^d with an optimal subset of a given size. The size k of the subset is a parameter of the problem, and an approximation is considered best when it maximizes the hypervolume indicator. This problem has proved popular in recent years as a procedure for multiobjective evolutionary algorithms. Efficient algorithms are known for planar points (d = 2), but there are hardly any results on HSSP in larger dimensions (d ≥ 3). So far, most algorithms in higher dimensions essentially enumerate all possible subsets to determine the optimal one, and most of the effort has been directed toward improving the efficiency of hypervolume computation. We propose efficient algorithms for the selection problem in dimension 3 when either k or n − k is small, and extend our techniques to arbitrary dimensions for [Formula: see text].
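For intuition only, the sketch below covers the planar case (d = 2): it evaluates the hypervolume indicator with respect to a reference point and picks the best size-k subset by exhaustive enumeration, i.e., the naive baseline mentioned in the abstract rather than the algorithms proposed in the paper.

from itertools import combinations

def hypervolume_2d(points, ref=(0.0, 0.0)):
    # Area dominated by the points (maximization) relative to the reference
    # point: the area of the union of the rectangles [ref_x, x] x [ref_y, y].
    hv, y_covered = 0.0, ref[1]
    for x, y in sorted(points, key=lambda p: p[0], reverse=True):
        if y > y_covered:
            hv += (x - ref[0]) * (y - y_covered)
            y_covered = y
    return hv

def hssp_bruteforce(points, k):
    # Enumerate all size-k subsets and keep the one with the largest hypervolume.
    return max(combinations(points, k), key=hypervolume_2d)

# Example: a Pareto front of four points; select the best subset of size 2.
front = [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (4.0, 1.0)]
print(hssp_bruteforce(front, 2))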


2020 ◽  
Author(s):  
Mohsen Joneidi ◽  
Saeed Vahidian ◽  
Ashkan Esmaeili ◽  
Siavash Khodadadeh

We propose a novel technique for finding representatives from a large, unsupervised dataset. The approach is based on the concept of self-rank, defined as the minimum number of samples needed to reconstruct all samples with an accuracy proportional to the rank-K approximation. The proposed algorithm has linear complexity with respect to the size of the original dataset and simultaneously provides an adaptive upper bound on the approximation ratio. These favorable characteristics help close a long-standing gap between practical and theoretical methods for finding representatives.
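The authors' self-rank-based algorithm is not spelled out in this abstract; as a loose illustration of representative selection, the sketch below uses a simple greedy projection-residual rule (an assumption, not the paper's method) to pick samples that reconstruct the remaining ones.

import numpy as np

def greedy_representatives(X, k):
    # Greedily pick k rows of X as representatives: at each step take the
    # sample with the largest residual after projecting onto the span of the
    # representatives chosen so far, then deflate that direction.
    X = np.asarray(X, dtype=float)
    selected, residual = [], X.copy()
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(residual, axis=1)))
        direction = residual[i]
        norm = np.linalg.norm(direction)
        if norm == 0.0:
            break  # remaining samples are already perfectly reconstructed
        selected.append(i)
        direction = direction / norm
        # Remove the component along the new representative from every sample.
        residual -= np.outer(residual @ direction, direction)
    return selected

# Example: 100 samples of intrinsic rank 5; five representatives suffice.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))
print(greedy_representatives(data, 5))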

