REVERSE NEAREST NEIGHBOR QUERIES IN FIXED DIMENSION

2011 ◽  
Vol 21 (02) ◽  
pp. 179-188 ◽  
Author(s):  
OTFRIED CHEONG ◽  
ANTOINE VIGNERON ◽  
JUYOUNG YON

Reverse nearest neighbor queries are defined as follows: given an input point set P and a query point q, find all points p in P whose nearest point in P ∪ {q} \ {p} is q. We give a data structure to answer reverse nearest neighbor queries in fixed-dimensional Euclidean space. Our data structure uses O(n) space, its preprocessing time is O(n log n), and its query time is O(log n).
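The definition above can be illustrated with a brute-force sketch (the paper's own structure achieves O(log n) queries; this hypothetical helper is O(n²) per query and only demonstrates the definition, treating ties as nearest):

```python
import math

def reverse_nearest_neighbors(P, q):
    """Brute-force RNN query: return all p in P whose nearest
    point in (P ∪ {q}) \\ {p} is q.

    Illustrative sketch only -- not the paper's O(log n) structure.
    Ties in distance are counted in q's favor.
    """
    result = []
    for p in P:
        d_q = math.dist(p, q)
        # p is a reverse nearest neighbor of q iff no other point
        # of P is strictly closer to p than q is.
        if all(math.dist(p, r) >= d_q for r in P if r != p):
            result.append(p)
    return result
```

For example, with P = {(0,0), (5,0), (6,0)} and q = (1,0), only (0,0) has q as its nearest point, since (5,0) and (6,0) are each other's nearest neighbors.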

2010 ◽  
Vol 22 (4) ◽  
pp. 550-564 ◽  
Author(s):  
Muhammad Aamir Cheema ◽  
Xuemin Lin ◽  
Wei Wang ◽  
Wenjie Zhang ◽  
Jian Pei

2006 ◽  
Vol 15 (3) ◽  
pp. 229-249 ◽  
Author(s):  
Rimantas Benetis ◽  
Christian S. Jensen ◽  
Gytis Karĉiauskas ◽  
Simonas Ŝaltenis

2013 ◽  
Vol 10 (7) ◽  
pp. 1858-1861 ◽  
Author(s):  
Y. Jagruthi ◽  
Dr. Y. Ramadevi ◽  
A. Sangeeta

Classification is one of the most important data mining techniques and belongs to supervised learning. The objective of classification is to assign a class label to unlabelled data. As data grows rapidly, handling it has become a major concern, so preprocessing should be done before classification, and hence data reduction is essential. Data reduction extracts a subset of features from the set of features of a data set; it decreases the storage requirement and increases the efficiency of classification. One way to measure data reduction is the reduction rate. The key issue is choosing representative samples for the final data set. Many instance selection algorithms are based on the nearest neighbor decision rule (NN). These algorithms select samples using an incremental or a decremental strategy; both kinds take much processing time, as they iteratively scan the data set. Another instance selection algorithm, reverse nearest neighbor reduction (RNNR), is based on the concept of the reverse nearest neighbor (RNN) and does not iteratively scan the data set. In this paper, we extend RNN to RkNN and apply the concept of RNNR to RkNN. An RkNN query finds the objects that have the query point among their k nearest neighbors. Our approach retains the advantage of RNNR while using RkNN. We have taken data sets of theatres, hospitals, and restaurants and extracted the sample set; classification has then been performed on the resultant sample data set. We observe two parameters: classification accuracy and reduction rate.
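The RkNN notion used by this abstract (q is among p's k nearest neighbors) can be sketched with a brute-force helper. This is a hypothetical illustration of the query definition, not the RNNR reduction algorithm itself:

```python
import math

def reverse_k_nearest_neighbors(P, q, k):
    """Brute-force RkNN query: return all p in P that have q among
    their k nearest neighbors in (P ∪ {q}) \\ {p}.

    Illustrative sketch only; the RNNR-based method in the paper
    avoids this per-point scan.
    """
    result = []
    for p in P:
        # Candidate neighbors of p: every other point of P, plus q.
        others = [r for r in P if r != p] + [q]
        others.sort(key=lambda r: math.dist(p, r))
        if q in others[:k]:
            result.append(p)
    return result
```

With k = 1 this reduces to the ordinary reverse nearest neighbor query.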


2008 ◽  
Vol 18 (01n02) ◽  
pp. 131-160 ◽  
Author(s):  
DAVID EPPSTEIN ◽  
MICHAEL T. GOODRICH ◽  
JONATHAN Z. SUN

We present a new multi-dimensional data structure, which we call the skip quadtree (for point data in R^2) or the skip octree (for point data in R^d, with constant d > 2). Our data structure combines the best features of two well-known data structures, in that it has the well-defined “box”-shaped regions of region quadtrees and the logarithmic-height search and update hierarchical structure of skip lists. Indeed, the bottom level of our structure is exactly a region quadtree (or octree for higher dimensional data). We describe efficient algorithms for inserting and deleting points in a skip quadtree, as well as fast methods for performing point location, approximate range, and approximate nearest neighbor queries.
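The "bottom level" the abstract refers to, a region quadtree with box-shaped regions, can be sketched minimally as follows. This is a hypothetical illustration over the unit square assuming distinct input points; the skip-list levels that make the skip quadtree efficient are not shown:

```python
class QuadtreeNode:
    """Minimal region quadtree over a square region: an occupied leaf
    splits into four equal quadrants on collision. Sketch of the
    quadtree bottom level only; assumes all inserted points are distinct.
    """
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # lower-left corner, side length
        self.point = None       # point stored at a leaf, if any
        self.children = None    # four sub-quadrants once this node splits

    def insert(self, p):
        if self.children is not None:
            self._child_for(p).insert(p)
        elif self.point is None:
            self.point = p
        else:
            # Occupied leaf: split into quadrants, reinsert both points.
            old, self.point = self.point, None
            half = self.size / 2
            # Child order: SW, SE, NW, NE.
            self.children = [
                QuadtreeNode(self.x + dx * half, self.y + dy * half, half)
                for dy in (0, 1) for dx in (0, 1)
            ]
            self._child_for(old).insert(old)
            self._child_for(p).insert(p)

    def _child_for(self, p):
        half = self.size / 2
        i = (2 if p[1] >= self.y + half else 0) + (1 if p[0] >= self.x + half else 0)
        return self.children[i]
```

A skip quadtree keeps a sequence of progressively sparser copies of this structure, linked like skip-list levels, which is what yields logarithmic-height search and update.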
