Finding Visible kNN Objects in the Presence of Obstacles within the User’s View Field

2019 ◽  
Vol 8 (3) ◽  
pp. 151
Author(s):  
I-Fang Su ◽  
Ding-Li Chen ◽  
Chiang Lee ◽  
Yu-Chi Chung

In many spatial applications, users are only interested in data objects that are visible to them. Hence, finding visible data objects is an important operation in these real-world spatial applications. This study addresses a new type of spatial query, the View field-aware Visible k Nearest Neighbor (V2-kNN) query. Given the location of a user and his/her view field, a V2-kNN query finds the data object p that is the nearest neighbor of, and visible to, the user, where visible means the data object is (1) not hidden by obstacles and (2) inside the view field of the user. Previous work on visible NN queries considered only one of these two factors, not both. To the best of our knowledge, this work is the first to consider both the effect of obstacles and the restriction of the view field when computing the result. To support efficient processing of V2-kNN queries, a grid structure is used to index data objects and obstacles. Pruning heuristics are also designed so that only data objects and obstacles relevant to the final query result are accessed. A comprehensive experimental evaluation using both real and synthetic datasets is performed to verify the effectiveness of the proposed algorithms.
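
A minimal sketch (not the authors' grid-indexed algorithm) of the visibility test behind a V2-kNN query: an object is a candidate only if it lies inside the user's angular view field and its line of sight is not blocked by an obstacle segment. The sector-based view-field model, segment obstacles, and all names are illustrative assumptions.

```python
import math

def in_view_field(user, obj, heading, fov):
    """True if obj falls inside the angular sector (heading +/- fov/2) at user."""
    angle = math.atan2(obj[1] - user[1], obj[0] - user[0])
    diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

def segments_intersect(p, q, a, b):
    """Standard orientation test: does segment pq properly cross segment ab?"""
    def orient(o, s, t):
        return (s[0] - o[0]) * (t[1] - o[1]) - (s[1] - o[1]) * (t[0] - o[0])
    d1, d2 = orient(a, b, p), orient(a, b, q)
    d3, d4 = orient(p, q, a), orient(p, q, b)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible_knn(user, heading, fov, objects, obstacles, k):
    """Brute-force V2-kNN: filter by view field and occlusion, then sort by distance."""
    visible = [
        o for o in objects
        if in_view_field(user, o, heading, fov)
        and not any(segments_intersect(user, o, a, b) for a, b in obstacles)
    ]
    return sorted(visible, key=lambda o: math.dist(user, o))[:k]

# Example: the nearest object is hidden behind a wall, a second one is outside
# the view field, so only (3.0, 1.0) is returned.
user = (0.0, 0.0)
objects = [(2.0, 0.1), (3.0, 1.0), (1.0, -3.0)]
obstacles = [((1.5, -0.5), (1.5, 0.3))]        # short wall in front of the user
print(visible_knn(user, heading=0.0, fov=math.pi / 2,
                  objects=objects, obstacles=obstacles, k=2))
```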

2014 ◽  
Vol 10 (4) ◽  
pp. 385-405 ◽  
Author(s):  
Yuka Komai ◽  
Yuya Sasaki ◽  
Takahiro Hara ◽  
Shojiro Nishio

In a kNN query processing method, it is important to appropriately estimate the range that includes the k nearest neighbors. While this range could be estimated from the node density of the entire network, doing so is not always appropriate because the density of nodes in the network is not uniform. In this paper, we propose two kNN query processing methods for MANETs in which the node density is non-uniform: the One-Hop (OH) method and the Query Log (QL) method. In the OH method, the node nearest to the point specified by the query acquires its neighbors' locations and then determines the size of a circular region (the estimated kNN circle) that includes the k nearest neighbors with high probability. In the QL method, a node that relays the reply to a kNN query stores information about the query result for use in future queries.
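
A rough sketch of the idea behind the One-Hop (OH) method: the node nearest to the query point uses the density of its one-hop neighbors to estimate a circle radius expected to contain k nodes with high probability. The disk-based density model and the safety margin below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def estimate_knn_radius(num_neighbors, radio_range, k, margin=1.2):
    """Estimate the radius of a circle expected to contain k nodes."""
    # Local density from the nodes heard within one hop (a disk of radius radio_range).
    density = num_neighbors / (math.pi * radio_range ** 2)
    if density == 0:
        return float("inf")          # no neighbors heard: the range cannot be bounded
    # Solve pi * r^2 * density = k for r, then inflate by a safety margin.
    return margin * math.sqrt(k / (math.pi * density))

# Example: 12 neighbors heard within a 100 m radio range, looking for k = 5.
print(round(estimate_knn_radius(num_neighbors=12, radio_range=100.0, k=5), 1))
```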


Author(s):  
Sikha Bagui ◽  
Arup Kumar Mondal ◽  
Subhash Bagui

In this work, the authors present a parallel k nearest neighbor (kNN) algorithm that uses locality sensitive hashing (LSH) to preprocess the data before it is classified with kNN in Hadoop's MapReduce framework, and compare it with the sequential (conventional) implementation. By combining LSH's similarity measure with kNN, the procedure to classify a data object is performed within a single hash bucket rather than over the whole data set, greatly reducing the computation time needed for classification. Several experiments showed that the parallel implementation performed better than the sequential implementation on very large datasets. The study also experimented with several map-side and reduce-side optimization features for the parallel implementation and presents optimum map-side and reduce-side parameters. Among the map-side parameters, the block size and input split size were varied; among the reduce-side parameters, the number of planes was varied, and their effects were studied.
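
A compact, single-machine sketch of the core idea (not the Hadoop/MapReduce implementation): random-hyperplane LSH assigns each training point to a hash bucket, and a query is classified by running kNN only against the points that fall into its own bucket. All names, the toy data, and the parameters are illustrative.

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(0)

def lsh_signature(x, planes):
    """Bucket key: the sign pattern of x projected onto the random hyperplanes."""
    return tuple((planes @ x > 0).astype(int))

def build_buckets(X, y, planes):
    buckets = defaultdict(list)
    for xi, yi in zip(X, y):
        buckets[lsh_signature(xi, planes)].append((xi, yi))
    return buckets

def knn_in_bucket(q, buckets, planes, k=3):
    """Classify q using only the training points hashed to the same bucket."""
    candidates = buckets.get(lsh_signature(q, planes), [])
    if not candidates:
        return None                  # in practice: probe neighboring buckets instead
    candidates.sort(key=lambda pair: np.linalg.norm(pair[0] - q))
    votes = Counter(label for _, label in candidates[:k])
    return votes.most_common(1)[0][0]

# Toy data: two Gaussian blobs; 4 random hyperplanes stand in for the
# "number of planes" parameter varied in the experiments.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
planes = rng.normal(size=(4, 2))
buckets = build_buckets(X, y, planes)
print(knn_in_bucket(np.array([4.8, 5.2]), buckets, planes, k=3))
```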


2008 ◽  
Vol 09 (04) ◽  
pp. 455-470 ◽  
Author(s):  
GENG ZHAO ◽  
KEFENG XUAN ◽  
DAVID TANIAR ◽  
BALA SRINIVASAN

Most query searches on road networks either find objects within a certain range (range search) or find the K nearest neighbors (KNN) on the actual road network. In this paper, we propose a novel query, the incremental k nearest neighbor (iKNN) query. It can be defined as follows: given a set of candidate interest objects, a query point, and the number of objects k, find the path that starts at the query point, passes through k interest objects, and has the shortest distance among all possible paths. This is a new type of query, useful when we want to visit k interest objects one by one starting from the query point. Our approach is based on expanding the network from the query point, keeping intermediate results in a query set, and updating the query set whenever a network intersection or an interest object is reached. The approach builds on Dijkstra's algorithm and the Incremental Network Expansion (INE) algorithm. Our experiments verified the applicability of the proposed approach to solving incremental k nearest neighbor queries.
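
A hedged sketch of the incremental-expansion idea: starting at the query point, repeatedly run a Dijkstra expansion (the core of INE) to the nearest unvisited interest object, append it to the path, and continue from there. This greedy chaining only illustrates the iKNN notion; it is not the authors' exact algorithm and carries no optimality guarantee.

```python
import heapq

def dijkstra(graph, source):
    """Shortest network distance from source to every node in an adjacency dict."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def greedy_iknn(graph, query_node, interest_objects, k):
    """Visit k interest objects one by one, always moving to the nearest unvisited one."""
    current, remaining, path, total = query_node, set(interest_objects), [], 0.0
    for _ in range(k):
        dist = dijkstra(graph, current)
        nxt = min(remaining, key=lambda o: dist.get(o, float("inf")))
        path.append(nxt)
        total += dist[nxt]
        remaining.discard(nxt)
        current = nxt
    return path, total

# Tiny road network as an adjacency dict: node -> [(neighbor, edge_length), ...]
graph = {
    "q": [("a", 1.0), ("b", 4.0)],
    "a": [("q", 1.0), ("b", 2.0), ("c", 5.0)],
    "b": [("q", 4.0), ("a", 2.0), ("c", 1.0)],
    "c": [("a", 5.0), ("b", 1.0)],
}
print(greedy_iknn(graph, "q", interest_objects={"b", "c"}, k=2))
```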


Author(s):  
Kasturi Chatterjee ◽  
Shu-Ching Chen

This paper proposes a hybrid query refinement model for distance-based index structures supporting content-based image retrieval. The framework refines a query by considering the low-level feature space and the high-level semantic interpretations separately. Thus, it successfully handles queries where the gap between the feature components and the semantics is large. It refines the low-level feature space, indexed by the distance-based index structure, over multiple iterations by introducing the concept of a multipoint query in a metric space. It refines the high-level semantic space by dynamically adjusting the constructs of a framework, called the Markov Model Mediator (MMM), that introduces the semantic relationships into the index structure. A k-nearest neighbor (k-NN) algorithm is designed to handle similarity searches that refine a query over multiple iterations using the proposed hybrid query refinement model. Extensive experiments demonstrate increased relevance of query results in subsequent iterations at a low computational overhead. Further, an evaluation metric, called the Model_Score, is proposed to compare the performance of different retrieval frameworks in terms of both computation overhead and query result relevance. This metric enables users to choose the retrieval framework appropriate for their requirements.
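
A minimal sketch of the multipoint-query idea used in the low-level refinement step: after the user marks relevant results, the query is no longer a single point but a set of representatives, and an object's distance to the query is an aggregate of its distances to those representatives. The weighted-average aggregation and equal weights below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def multipoint_distance(obj, representatives, weights):
    """Aggregate distance of obj to a multipoint query in a (Euclidean) metric space."""
    dists = [np.linalg.norm(obj - r) for r in representatives]
    return float(np.average(dists, weights=weights))

def refine_and_rank(database, relevant_feedback, k=5):
    """Re-rank the database against a multipoint query built from relevant results."""
    reps = np.asarray(relevant_feedback, dtype=float)
    weights = np.ones(len(reps)) / len(reps)     # equal weights for the sketch
    return sorted(
        database,
        key=lambda o: multipoint_distance(np.asarray(o, dtype=float), reps, weights),
    )[:k]

# Toy 2-D feature vectors: the user liked two images near (1, 1) and (1.5, 0.8).
database = [(0.2, 0.1), (1.1, 0.9), (1.4, 1.0), (3.0, 3.0), (0.9, 1.2)]
print(refine_and_rank(database, relevant_feedback=[(1.0, 1.0), (1.5, 0.8)], k=3))
```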


Electronics ◽  
2019 ◽  
Vol 8 (5) ◽  
pp. 592
Author(s):  
Lianmeng Jiao ◽  
Xiaojiao Geng ◽  
Quan Pan

The k-nearest neighbor (kNN) rule is one of the most popular classification algorithms, applied in many fields because it is simple to understand and easy to implement. However, a major problem in using the kNN rule is that all of the training samples are considered equally important when assigning the class label to the query pattern. In this paper, an evidential editing version of the kNN rule is developed within the framework of belief function theory. The proposal is composed of two procedures. First, an evidential editing procedure reassigns the original training samples new labels represented by an evidential membership structure, which provides a general model of the class membership of the training samples. After editing, a classification procedure specifically designed for evidentially edited training samples is developed in the belief function framework to handle the more general situation in which the edited training samples carry dependent evidential labels. Three synthetic datasets and six real datasets collected from various fields were used to evaluate the performance of the proposed method. The reported results show that the proposal achieves better performance than the other kNN-based methods considered, especially for datasets with high imprecision ratios.
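
To make the belief-function machinery concrete, here is a simplified sketch of an evidential kNN classifier in the spirit of Denoeux's rule: each neighbor contributes a mass function that supports its class with strength decaying in distance, the masses are pooled with Dempster's rule, and the best-supported class is returned. This is not the paper's evidential editing procedure; alpha, gamma, and the toy data are illustrative assumptions.

```python
import math
from collections import defaultdict

def neighbor_mass(label, distance, classes, alpha=0.95, gamma=1.0):
    """Simple mass function: some mass on the neighbor's class, the rest on ignorance."""
    support = alpha * math.exp(-gamma * distance ** 2)
    return {frozenset([label]): support, frozenset(classes): 1.0 - support}

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination followed by conflict normalization."""
    combined, conflict = defaultdict(float), 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            inter = A & B
            if inter:
                combined[inter] += mA * mB
            else:
                conflict += mA * mB
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

def evidential_knn(query, train, classes, k=3):
    """Pool the k nearest neighbors' mass functions and pick the best-supported class."""
    neighbors = sorted(train, key=lambda t: math.dist(query, t[0]))[:k]
    pooled = {frozenset(classes): 1.0}           # start from total ignorance
    for x, label in neighbors:
        pooled = dempster_combine(pooled, neighbor_mass(label, math.dist(query, x), classes))
    singletons = {c: pooled.get(frozenset([c]), 0.0) for c in classes}
    return max(singletons, key=singletons.get), singletons

train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"), ((1.0, 1.0), "b"), ((1.1, 0.9), "b")]
print(evidential_knn((0.1, 0.1), train, classes=["a", "b"], k=3))
```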


2012 ◽  
Vol 8 (2) ◽  
pp. 107-126 ◽  
Author(s):  
Hyung-Ju Cho ◽  
Seung-Kwon Choe ◽  
Tae-Sun Chung

Given two positive parameters k and r, a constrained k-nearest neighbor (CkNN) query returns the k closest objects within network distance r of the query location in a road network. In terms of scalability, existing solutions for monitoring CkNN queries that rely on central processing at a server suffer from a sharp rise in server load and messaging cost as the number of queries increases. In this paper, we propose a distributed and scalable scheme called DAEMON for the continuous monitoring of CkNN queries in road networks. Query processing is distributed between the clients (query objects) and the server. Specifically, the server evaluates CkNN queries issued at intersections of road segments, retrieves the objects on the road segments between neighboring intersections, and sends responses to the query objects. Each client then computes its own query result from this server response. As a result, our distributed scheme achieves close-to-optimal communication cost and scales well to large numbers of monitoring queries. Exhaustive experimental results demonstrate that our scheme substantially outperforms its competitor in terms of query processing time and messaging cost.
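
A minimal, centralized sketch of what a CkNN query computes: a bounded Dijkstra expansion from the query node collects the objects whose network distance is at most r, and the k closest of those are returned. The distributed client/server split of DAEMON is not modeled here; the node-resident objects and all names are illustrative.

```python
import heapq

def constrained_knn(graph, objects_at, query_node, k, r):
    """Return up to k (distance, object) pairs within network distance r of query_node."""
    dist, heap, found = {query_node: 0.0}, [(0.0, query_node)], []
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")) or d > r:
            continue                             # prune expansion beyond the range r
        for obj in objects_at.get(u, []):
            found.append((d, obj))
        for v, w in graph[u]:
            nd = d + w
            if nd <= r and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return sorted(found)[:k]

graph = {
    "q": [("a", 2.0), ("b", 3.0)],
    "a": [("q", 2.0), ("c", 2.0)],
    "b": [("q", 3.0), ("c", 4.0)],
    "c": [("a", 2.0), ("b", 4.0)],
}
objects_at = {"a": ["o1"], "b": ["o2"], "c": ["o3"]}
print(constrained_knn(graph, objects_at, "q", k=2, r=4.0))
```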


2012 ◽  
Vol 628 ◽  
pp. 427-432 ◽  
Author(s):  
Ali Khalili Mobarakeh ◽  
Sayedmehran Mirsafaie Rizi ◽  
Saba Nazari ◽  
Jiang Ping Gou ◽  
Bakhtiar Affendi Rosdi

Finger vein recognition is one of the newest identification methods; it identifies a person based on the physical characteristics of finger vein patterns. In this paper, a new type of classifier, the Local Mean-based K-Nearest Centroid Neighbor (LMKNCN) classifier, is applied to classify finger vein patterns. The significance of the proposed method is demonstrated by comparing the results of the LMKNCN classifier with the traditionally used k-nearest neighbor (KNN) classifier. The experimental results indicate that the proposed method improves finger vein recognition performance, as the accuracy obtained with LMKNCN is higher than that of the traditional KNN. The maximum accuracy obtained by LMKNCN on a test set of 2040 finger vein images is 100%, compared with 98.53% for KNN.
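
A sketch of the LMKNCN rule itself: for each class, k "centroid neighbors" are picked greedily so that the centroid of the picked points stays as close to the query as possible, and the query is assigned to the class whose local mean of those neighbors is nearest. This only illustrates the classification rule, not the finger-vein feature pipeline; the toy data and class names are assumptions.

```python
import numpy as np

def nearest_centroid_neighbors(query, points, k):
    """Greedy NCN selection: each new neighbor keeps the running centroid nearest to the query."""
    chosen, remaining = [], list(range(len(points)))
    for _ in range(min(k, len(points))):
        best = min(
            remaining,
            key=lambda i: np.linalg.norm(np.mean(points[chosen + [i]], axis=0) - query),
        )
        chosen.append(best)
        remaining.remove(best)
    return points[chosen]

def lmkncn_classify(query, X, y, k=3):
    """Assign the class whose local mean of k nearest centroid neighbors is closest."""
    query, X = np.asarray(query, float), np.asarray(X, float)
    best_class, best_dist = None, float("inf")
    for c in np.unique(y):
        neighbors = nearest_centroid_neighbors(query, X[y == c], k)
        d = np.linalg.norm(neighbors.mean(axis=0) - query)
        if d < best_dist:
            best_class, best_dist = c, d
    return best_class

X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [1.0, 1.0], [1.2, 0.9], [0.9, 1.1]]
y = np.array(["vein_A"] * 3 + ["vein_B"] * 3)
print(lmkncn_classify([0.15, 0.15], X, y, k=2))
```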


2021 ◽  
Author(s):  
Maryam Zand ◽  
Jianhua Ruan

Single-cell RNA sequencing (scRNAseq) offers an unprecedented potential for scrutinizing complex biological systems at single-cell resolution. One of the most important applications of scRNAseq is to cluster cells into groups of similar expression profiles, which allows unsupervised identification of novel cell subtypes. While many clustering algorithms have been tested towards this goal, graph-based algorithms appear to be the most effective, due to their ability to accommodate the sparsity of the data, as well as the complex topology of the cell population. An integral part of almost all such clustering methods is the construction of a k-nearest-neighbor (KNN) graph, and the choice of k, implicitly or explicitly, can have a profound impact on the density distribution of the graph and the structure of the resulting clusters, as well as the resolution of clusters that one can successfully identify from the data. In this work, we propose a fairly simple but robust approach to estimate the best k for constructing the KNN graph while simultaneously identifying the optimal clustering structure from the graph. Our method, named scQcut, employs a topology-based criterion to guide the construction of the KNN graph, and then applies an efficient modularity-based community discovery algorithm to predict robust cell clusters. The results obtained from applying scQcut to a large number of real and synthetic datasets demonstrated that scQcut, which does not require any user-tuned parameters, outperformed several popular state-of-the-art clustering methods in terms of clustering accuracy and the ability to correctly identify rare cell types. The promising results indicate that an accurate approximation of the parameter k, which determines the topology of the network, is a crucial element of a successful graph-based clustering method to recover the final community structure of the cell population.
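
A small sketch of the step shared by scQcut and other graph-based clustering methods: building a k-nearest-neighbor graph over cells from an expression matrix. scQcut's contribution is to score such graphs over a range of k with a topology-based criterion and run modularity-based community detection on the best one; only the graph construction is shown here, and the Euclidean metric and toy matrix are assumptions.

```python
import numpy as np

def knn_graph(expr, k):
    """Return a symmetric adjacency matrix linking each cell to its k nearest cells."""
    n = expr.shape[0]
    # Pairwise Euclidean distances between cells (rows of the expression matrix).
    dists = np.linalg.norm(expr[:, None, :] - expr[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)              # a cell is not its own neighbor
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        adj[i, np.argsort(dists[i])[:k]] = 1
    return np.maximum(adj, adj.T)                # symmetrize: an edge if either end picks it

# Toy expression matrix: 6 cells x 4 genes forming two obvious groups.
rng = np.random.default_rng(1)
expr = np.vstack([rng.normal(0, 0.1, (3, 4)), rng.normal(5, 0.1, (3, 4))])
print(knn_graph(expr, k=2))
```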


Author(s):  
M. Jeyanthi ◽  
C. Velayutham

Brain-computer interfaces (BCI) play a vital role in science and technology research. Classification is a data mining technique used to predict group membership for data instances. Analyses of BCI data are challenging because feature extraction and classification are more difficult for these data than for raw data. In this paper, we extract statistical Haralick features from the raw EEG data. The features are then normalized, and binning is used to improve the accuracy of the predictive models by reducing noise and eliminating some irrelevant attributes. Classification is then performed on a BCI dataset using different techniques, namely the Naïve Bayes, k-nearest neighbor, and SVM classifiers. Finally, we propose the SVM classification algorithm for the BCI dataset.
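
A hedged sketch of the classification stage described above, using scikit-learn: features (here random stand-ins for the Haralick features extracted from EEG) are normalized, discretized by binning, and then fed to Naïve Bayes, kNN, and SVM classifiers for comparison. The dataset, bin count, and hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, KBinsDiscretizer
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))        # stand-in for 13 Haralick features per trial
y = rng.integers(0, 2, size=200)      # stand-in for two mental-task labels

classifiers = {
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "svm": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    model = Pipeline([
        ("scale", MinMaxScaler()),                                           # normalization
        ("bin", KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="uniform")),
        (name, clf),
    ])
    scores = cross_val_score(model, X, y, cv=5)
    print(name, scores.mean().round(3))
```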

