Visiting a Polygon on the Optimal Way to a Query Point

Author(s):  
Ramtin Khosravi ◽  
Mohammad Ghodsi
2013 ◽  
Vol 23 (04n05) ◽  
pp. 335-355 ◽  
Author(s):  
Haim Kaplan ◽  
Micha Sharir

Let P be a set of n points in the plane. We present an efficient algorithm for preprocessing P so that, for a given query point q, we can quickly report the largest disk that contains q and whose interior is disjoint from P. The data structure requires O(n log n) storage, the preprocessing cost is O(n log² n), and a query takes O(log² n) time. We also present an alternative solution with improved query cost at slightly worse storage and preprocessing requirements.
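To make the query concrete, here is a brute-force sketch, not the paper's data structure: it scans candidate centers on a grid, using the fact that a disk centered at c with radius min_p |c − p| has interior disjoint from P and contains q iff |c − q| ≤ radius. The grid bounds and resolution are arbitrary illustration choices.

```python
import math

def largest_disk_containing(P, q, lo=-3.0, hi=3.0, steps=120):
    """Grid-search stand-in for the paper's O(log^2 n)-query structure.

    For each candidate center c, the largest P-empty disk centered at c
    has radius r = min_p |c - p|; it contains q iff |c - q| <= r.
    """
    best = (q, 0.0)
    step = (hi - lo) / steps
    for i in range(steps + 1):
        for j in range(steps + 1):
            c = (lo + i * step, lo + j * step)
            r = min(math.dist(c, p) for p in P)
            if math.dist(c, q) <= r and r > best[1]:
                best = (c, r)
    return best

# four points at the corners of a square, query at its center
P = [(0, 0), (2, 0), (0, 2), (2, 2)]
center, radius = largest_disk_containing(P, q=(1, 1))
```

By symmetry the optimum here is the disk centered at (1, 1) with radius √2, which the grid search recovers since (1, 1) lies on the grid.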


Author(s):  
Maytham Safar ◽  
Dariush Ebrahimi

The continuous K nearest neighbor (CKNN) query is an important type of query that continuously finds the KNNs of a query point moving along a given path. We focus on moving queries issued on stationary objects in a Spatial Network Database (SNDB). The result of this type of query is a set of intervals (defined by split points) and their corresponding KNNs: the KNNs of an object traveling on one interval of the path remain the same throughout that interval, until it reaches a split point where its KNNs change. Existing methods for CKNN are based on Euclidean distances. In this paper we propose a new algorithm for answering CKNN queries in SNDB, where the relevant measure for the shortest path is network distance rather than Euclidean distance. We propose the DAR and eDAR algorithms to address CKNN queries based on the progressive incremental network expansion (PINE) technique. Our experiments show that the eDAR approach has better response time and requires fewer shortest-distance computations and KNN queries than approaches based on VN3 using IE.
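A minimal sketch of the split-point idea, assuming a toy road network and objects placed at nodes (this evaluates KNNs by network distance at each path node and emits an interval whenever the KNN set changes; it is a schematic stand-in for DAR/eDAR, not their algorithm):

```python
import heapq

def dijkstra(graph, src):
    """Network (shortest-path) distances from src to every reachable node."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def cknn_split_points(graph, path, objects, k):
    """Emit (start_node, KNN set) each time the k nearest objects
    (by network distance) change along the path."""
    intervals, prev = [], None
    for node in path:
        d = dijkstra(graph, node)
        knn = frozenset(sorted(objects, key=lambda o: d[o])[:k])
        if knn != prev:
            intervals.append((node, knn))
            prev = knn
    return intervals

# hypothetical 5-node line network; objects of interest sit at "a" and "e"
graph = {
    "a": {"b": 1.0}, "b": {"a": 1.0, "c": 1.0},
    "c": {"b": 1.0, "d": 1.0}, "d": {"c": 1.0, "e": 1.0},
    "e": {"d": 1.0},
}
splits = cknn_split_points(graph, ["a", "b", "c", "d", "e"], ["a", "e"], k=1)
```

Traveling a→e, the nearest object stays "a" until the walk passes the midpoint, so only two intervals are reported.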


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yanduo Ren ◽  
Jiangbo Qian ◽  
Yihong Dong ◽  
Yu Xin ◽  
Huahui Chen

Nearest neighbour search (NNS) is at the core of large-scale data retrieval. Learning to hash is an effective way to address this problem by representing high-dimensional data as compact binary codes. However, existing learning-to-hash methods need long bit encodings to ensure query accuracy, and long encodings incur a large storage cost, which severely restricts their use in big-data applications. An asymmetric learning-to-hash algorithm with variable bit encoding (AVBH) is proposed to solve this problem. The AVBH algorithm uses two types of hash mapping functions to encode the dataset and the query set into bits of different lengths. For the dataset, the frequencies of hash codes after random Fourier feature encoding are statistically analysed: codes with high frequency are compressed into longer representations, and codes with low frequency into shorter ones. The query point is quantized to a long bit hash code and compared with each data point's code cascade-concatenated to the same length. Experiments on public datasets show that the proposed algorithm effectively reduces storage cost and improves query accuracy.
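The asymmetric comparison can be sketched as follows. This is a heavy simplification: the frequency-based code compression is omitted, and the tiling scheme below is only one plausible reading of "cascade concatenated", used here for illustration.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def tile(code, length):
    """Cascade-concatenate a short code until it reaches `length` bits."""
    reps = -(-length // len(code))  # ceiling division
    return (code * reps)[:length]

def avbh_search(query_code, db):
    """Asymmetric lookup: the long query code is compared against each
    data point's variable-length code tiled out to the query length."""
    return min(db, key=lambda item: hamming(query_code, tile(item[1], len(query_code))))

db = [("x", "1010"), ("y", "11110000"), ("z", "00")]  # variable-length codes
best = avbh_search("10101010", db)
```

Point "x" matches the 8-bit query exactly once its 4-bit code is tiled, so it wins despite having the shortest storage footprint of the close candidates.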


2016 ◽  
Vol 43 (4) ◽  
pp. 440-457
Author(s):  
Youngki Park ◽  
Heasoo Hwang ◽  
Sang-goo Lee

Finding k-nearest neighbours (k-NN) is one of the most important primitives of many applications such as search engines and recommendation systems. However, its computational cost is extremely high when searching for k-NN points in a huge collection of high-dimensional points. Locality-sensitive hashing (LSH) has been introduced for efficient k-NN approximation, but none of the existing LSH approaches clearly outperforms the others. We propose a novel LSH approach, Signature Selection LSH (S2LSH), which finds approximate k-NN points very efficiently across various datasets. It first constructs a large pool of highly diversified signature regions of various sizes. Given a query point, it dynamically generates a query-specific signature region by merging highly effective signature regions selected from the signature pool. We also suggest S2LSH-M, a variant of S2LSH that processes multiple queries more efficiently using query-specific features and optimization techniques. Extensive experiments show the performance superiority of our approaches in diverse settings.
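For background, a minimal random-hyperplane LSH sketch (generic LSH, not S2LSH itself; the dimension, bit count, and table count are arbitrary toy choices):

```python
import random

random.seed(7)
DIM, BITS, TABLES = 8, 6, 4
# one set of random hyperplanes per hash table
planes = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]
          for _ in range(TABLES)]

def signature(x, table):
    """One sign bit per hyperplane: which side of the plane x falls on."""
    return tuple(int(sum(p_i * x_i for p_i, x_i in zip(p, x)) >= 0)
                 for p in planes[table])

def build_index(points):
    index = [dict() for _ in range(TABLES)]
    for pid, x in enumerate(points):
        for t in range(TABLES):
            index[t].setdefault(signature(x, t), []).append(pid)
    return index

def candidates(index, q):
    """Union of the buckets q falls into across tables.  S2LSH would
    instead merge selected signature regions from a diversified pool."""
    out = set()
    for t in range(TABLES):
        out.update(index[t].get(signature(q, t), []))
    return out

points = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(20)]
points.append(points[0][:])   # point 20: exact duplicate of point 0
index = build_index(points)
cand = candidates(index, points[0])
```

Identical points share every signature, so the duplicate is always retrieved as a candidate; genuinely near points collide only with high probability, which is the LSH trade-off S2LSH tunes per query.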


2008 ◽  
Vol 39 (2) ◽  
pp. 78-90 ◽  
Author(s):  
Alireza Zarei ◽  
Mohammad Ghodsi

2011 ◽  
Vol 21 (02) ◽  
pp. 179-188 ◽  
Author(s):  
Otfried Cheong ◽  
Antoine Vigneron ◽  
Juyoung Yon

Reverse nearest neighbor queries are defined as follows: given an input point set P and a query point q, find all points p in P whose nearest point in P ∪ {q} \ {p} is q. We give a data structure to answer reverse nearest neighbor queries in fixed-dimensional Euclidean space. Our data structure uses O(n) space, its preprocessing time is O(n log n), and its query time is O(log n).
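The definition translates directly into a quadratic-time brute force, useful as a correctness baseline for the paper's O(n)-space, O(log n)-query structure:

```python
import math

def reverse_nn(P, q):
    """All p in P whose nearest point in (P ∪ {q}) \\ {p} is q."""
    result = []
    for p in P:
        others = [x for x in P if x != p] + [q]
        if min(others, key=lambda x: math.dist(p, x)) == q:
            result.append(p)
    return result

P = [(0, 0), (1, 0), (5, 0)]
rnn = reverse_nn(P, q=(4, 0))
```

Here only (5, 0) has q as its nearest point; (1, 0) is closer to (0, 0) than to q, so q's nearest neighbor and q's reverse nearest neighbors need not coincide.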


2014 ◽  
Vol 490-491 ◽  
pp. 1293-1297
Author(s):  
Liang Zhu ◽  
Fei Fei Liu ◽  
Wu Chen ◽  
Qing Ma

Top-N queries are employed in a wide range of applications to obtain a ranked list of data objects with the highest aggregate scores over certain attributes. The threshold algorithm (TA) is an important method in many scenarios. However, TA is effective only when the ranking function is monotone and the query point is fixed. In this paper, we propose an approach that alleviates these limitations of TA-like methods for processing top-N queries. Using p-norm distances as ranking functions, our method exploits a fundamental principle of functional analysis so that the candidate tuples of a top-N query under a p-norm distance can be obtained from the maximum distance. We conduct extensive experiments to demonstrate the effectiveness and efficiency of our method for both low-dimensional (2, 3, and 4) and high-dimensional (25, 50, and 104) data.
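As a reference point, the query itself is simple to state as an exhaustive scan; the paper's contribution is pruning this scan via a maximum-distance bound rather than ranking every tuple:

```python
def top_n(tuples, query, n, p=2):
    """Rank tuples by p-norm distance to the query point, keep the n closest."""
    def dist(t):
        return sum(abs(a - b) ** p for a, b in zip(t, query)) ** (1 / p)
    return sorted(tuples, key=dist)[:n]

data = [(1, 2), (4, 4), (0, 0), (5, 1)]
best = top_n(data, query=(1, 1), n=2, p=2)
```

With p = 2 this is ordinary Euclidean ranking; changing p changes the geometry of the "closest" region, which is why a fixed monotone scoring function (as TA assumes) does not cover this setting.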


2021 ◽  
Author(s):  
Bo Shen ◽  
Raghav Gnanasambandam ◽  
Rongxuan Wang ◽  
Zhenyu Kong

In many scientific and engineering applications, Bayesian optimization (BO) is a powerful tool for hyperparameter tuning of machine learning models, materials design and discovery, etc. BO guides the choice of experiments sequentially to find a good combination of design points in as few experiments as possible. It can be formulated as the problem of optimizing a "black-box" function. Unlike single-task Bayesian optimization, multi-task Bayesian optimization is a general method for efficiently optimizing multiple different but correlated "black-box" functions. Previous multi-task Bayesian optimization algorithms query a point to be evaluated for all tasks in each round of search, which is inefficient: when tasks are correlated, it is not necessary to evaluate every task at a given query point. The objective of this work is therefore to develop an algorithm for multi-task Bayesian optimization with automatic task selection, so that only one task evaluation is needed per query round. Specifically, a new algorithm, multi-task Gaussian process upper confidence bound (MT-GPUCB), is proposed to achieve this objective. MT-GPUCB is a two-step algorithm: the first step chooses which query point to evaluate, and the second step automatically selects the most informative task to evaluate. Under the bandit setting, a theoretical analysis shows that MT-GPUCB is no-regret under some mild conditions. The proposed algorithm is verified experimentally on a range of synthetic functions as well as real-world problems. The results clearly show the advantages of our query strategy for both design point and task selection.
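The two-step structure can be sketched schematically. The posterior means and standard deviations below are hypothetical placeholders standing in for a fitted Gaussian process, and the aggregation and task-selection rules are illustrative choices, not the exact MT-GPUCB acquisition:

```python
import math

def mt_gpucb_step(mu, sigma, beta):
    """Schematic two-step selection in the spirit of MT-GPUCB.

    mu[t][x], sigma[t][x]: toy GP posterior mean / std of task t at point x.
    Step 1: pick the point with the best UCB score summed over tasks.
    Step 2: evaluate only the most uncertain task at that point.
    """
    n_tasks, n_points = len(mu), len(mu[0])
    def ucb(t, x):
        return mu[t][x] + math.sqrt(beta) * sigma[t][x]
    x_star = max(range(n_points),
                 key=lambda x: sum(ucb(t, x) for t in range(n_tasks)))
    t_star = max(range(n_tasks), key=lambda t: sigma[t][x_star])
    return x_star, t_star

mu = [[0.1, 0.9, 0.4], [0.2, 0.8, 0.3]]      # toy posterior means (2 tasks)
sigma = [[0.5, 0.1, 0.2], [0.1, 0.6, 0.2]]   # toy posterior stds
x_star, t_star = mt_gpucb_step(mu, sigma, beta=1.0)
```

Only one (point, task) pair is returned per round, which is exactly the cost saving over evaluating every task at the chosen point.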


2020 ◽  
Vol 13 (1) ◽  
pp. 17
Author(s):  
Syam Budi Iryanto ◽  
Furqon Hensan Muttaqien ◽  
Rifki Sadikin

Irregular grid interpolation is a numerical procedure often used to approximate values at arbitrary locations inside a region enclosed by non-regular grid pivot points. In this paper, we propose a method for achieving efficient computation time for radial basis function (RBF)-based non-regular grid interpolation in cylindrical coordinates. Our method consists of two stages. The first stage computes weights by solving linear RBF systems constructed from known pivot points; we divide the volume into many subvolumes. In the second stage, interpolation at an arbitrary point is done using the weights calculated in the first stage: we first find the pivot point nearest to the query point by organizing the pivot points in a K-D tree, and then, using that closest pivot point, compute the interpolated value with RBF functions. We report the performance of our method in terms of computation time for the two stages and its precision, measured by the mean square error between interpolated values and analytic functions. Based on this evaluation, the performance of our method is acceptable.
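The two stages can be illustrated with a one-dimensional Gaussian-RBF toy (a stand-in for the cylindrical-coordinate, subvolume-partitioned scheme; the K-D-tree nearest-pivot search is elided, and the shape parameter eps is an arbitrary choice):

```python
import math

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (stage-1 solver)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(r, eps=1.0):
    return math.exp(-(eps * r) ** 2)   # Gaussian basis function

pivots = [0.0, 1.0, 2.0, 3.0]
values = [x * x for x in pivots]       # samples of f(x) = x^2

# stage 1: solve the RBF system A w = f for the weights
A = [[rbf(abs(a - b)) for b in pivots] for a in pivots]
weights = solve(A, values)

# stage 2: evaluate the interpolant at an arbitrary query point
def interpolate(xq):
    return sum(w * rbf(abs(xq - p)) for w, p in zip(weights, pivots))
```

By construction the interpolant reproduces the samples exactly at the pivot points; accuracy between pivots depends on the kernel and its shape parameter.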

