A similarity metric designed to speed up, using hardware, the recommender systems k-nearest neighbors algorithm

2013 ◽  
Vol 51 ◽  
pp. 27-34 ◽  
Author(s):  
Jesús Bobadilla ◽  
Fernando Ortega ◽  
Antonio Hernando ◽  
Guillermo Glez-de-Rivera


2018 ◽  
Vol 7 (4.38) ◽  
pp. 213
Author(s):  
Rajesh Kumar Ojha ◽  
Dr. Bhagirathi Nayak

Recommender systems are among the most important methodologies in machine learning and are widely used in current business scenarios. This article proposes a book recommender system that combines a deep learning technique with k-Nearest Neighbors (k-NN) classification; deep learning is one of the most effective techniques in the field of recommender systems. We use the k-NN classification algorithm to classify users for the book recommender system, and we compare our methodology with traditional collaborative filtering. Our results show that the proposed algorithm is more precise than the existing algorithms, and that it is also faster and more reliable than the existing methods.
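The neighborhood step behind such a system can be sketched in a few lines of Python. The ratings, book names, similarity choice (cosine), and neighborhood size below are illustrative assumptions for this sketch, not data or details from the article.

```python
import math

# Toy user-book ratings (hypothetical data, for illustration only).
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 4, "book_b": 3, "book_c": 5, "book_d": 4},
    "carol": {"book_b": 1, "book_d": 5},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[b] * v[b] for b in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(target, k=2, n=1):
    """Score unseen books via the target's k most similar users."""
    others = [u for u in ratings if u != target]
    neighbors = sorted(others, reverse=True,
                       key=lambda u: cosine(ratings[target], ratings[u]))[:k]
    scores = {}
    for u in neighbors:
        w = cosine(ratings[target], ratings[u])
        for book, r in ratings[u].items():
            if book not in ratings[target]:
                scores[book] = scores.get(book, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

Here `recommend("alice")` yields `["book_d"]`, the only book rated by her neighbors that she has not read.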


2013 ◽  
Vol 3 (2) ◽  
pp. 58-77
Author(s):  
Marlene Goncalves ◽  
Maria-Esther Vidal

Criteria that induce a Skyline naturally represent users' preference conditions, useful to discard irrelevant data in large datasets. However, in the presence of high-dimensional Skyline spaces, the size of the Skyline can still be very large, making it infeasible for users to process this set of points. To identify the best points among the Skyline, the Top-k Skyline approach has been proposed. Top-k Skyline uses discriminatory criteria to induce a total order on the points that comprise the Skyline, and recognizes the best or top-k points based on these criteria. In this article the authors model queries as multi-dimensional points that represent bounds of VPT (Vertically Partitioned Table) property values, and datasets as sets of multi-dimensional points; the problem is to locate the k best tuples in the dataset whose distance to the query is minimized. A tuple is among the k best tuples whenever there is no other tuple that is better in all dimensions and closer to the query, i.e., the k best tuples correspond to the k nearest points to the query that are incomparable, or belong to the skyline. The authors name these tuples the k nearest neighbors in the skyline. The authors propose a hybrid approach that combines Skyline and Top-k solutions and develop two algorithms: TKSI and k-NNSkyline. The proposed algorithms identify, among the skyline tuples, the k ones with the lowest values of the distance metric, i.e., the k nearest neighbors to the multi-dimensional query that are incomparable. Empirically, the authors study the performance and quality of TKSI and k-NNSkyline. Their experimental results show that TKSI is able to speed up the computation of the Top-k Skyline by at least 50% with respect to the state-of-the-art solutions, whenever k is smaller than the size of the Skyline. Additionally, the results suggest that k-NNSkyline outperforms existing solutions by up to three orders of magnitude.
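The notion of "k nearest neighbors in the skyline" can be illustrated with a naive sketch: compute each tuple's per-dimension distance vector to the query, keep the non-dominated tuples, and return the k closest of those. This is a minimal illustration of the concept, not the TKSI or k-NNSkyline algorithms themselves, which are engineered to avoid computing the full skyline.

```python
import math

def knn_skyline(points, query, k):
    """Among tuples whose per-dimension distances to the query are
    not dominated by any other tuple, return the k with the smallest
    Euclidean distance to the query (naive illustrative version)."""
    diffs = [tuple(abs(p[i] - query[i]) for i in range(len(query)))
             for p in points]

    def dominates(a, b):
        # a is at least as close in every dimension, strictly closer in one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    skyline = [p for p, d in zip(points, diffs)
               if not any(dominates(d2, d) for d2 in diffs)]
    skyline.sort(key=lambda p: math.dist(p, query))
    return skyline[:k]
```

For example, with points `[(1, 1), (2, 2), (0, 3), (5, 5)]` and query `(0, 0)`, the skyline is `{(1, 1), (0, 3)}` (the other two are dominated), and `k = 1` returns `[(1, 1)]`.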


2008 ◽  
pp. 3212-3221 ◽  
Author(s):  
Alexandros Nanopoulos ◽  
Apostolos N. Papadopoulos ◽  
Yannis Manolopoulos ◽  
Tatjana Welzer-Druzovec

The existence of noise in the data significantly impacts the accuracy of classification. In this article, we are concerned with the development of novel classification algorithms that can efficiently handle noise. To attain this, we recognize an analogy between k nearest neighbors (kNN) classification and user-based collaborative filtering algorithms, as both find a neighborhood of similar past data and process its contents to make a prediction about new data. The recent development of item-based collaborative filtering algorithms, which are based on similarities between items instead of transactions, addresses the sensitivity of user-based methods to noise in recommender systems. For this reason, we focus on the item-based paradigm, as compared to kNN algorithms, to provide improved robustness against noise for the problem of classification. We propose two new item-based algorithms, which are experimentally evaluated against kNN. Our results show that, in terms of precision, the proposed methods outperform kNN classification by up to 15%, whereas compared to other methods, like the C4.5 system, the improvement exceeds 30%.
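The item-based paradigm can be conveyed with a toy sketch: treat each (feature, value) pair as an "item" and score each class by how strongly the test instance's items co-occur with that class in the training data. This is an illustration of the general idea only; the article's two algorithms, the similarity measure, and the data below are not reproduced here.

```python
from collections import Counter, defaultdict

def item_based_classify(train, test_x):
    """Toy item-based classifier: each (feature_index, value) pair is an
    'item'; a class is scored by the relative frequency with which the
    test instance's items co-occur with that class in the training set."""
    cooc = defaultdict(Counter)   # item -> per-class co-occurrence counts
    item_total = Counter()        # item -> total occurrences
    classes = set()
    for x, y in train:
        classes.add(y)
        for item in enumerate(x):
            cooc[item][y] += 1
            item_total[item] += 1
    scores = {c: 0.0 for c in classes}
    for item in enumerate(test_x):
        if item_total[item]:
            for c in classes:
                scores[c] += cooc[item][c] / item_total[item]
    return max(scores, key=scores.get)

# Hypothetical training data: (features, class) pairs.
train = [(("red", "round"), "apple"),
         (("yellow", "long"), "banana"),
         (("red", "round"), "apple")]
```

Because class scores aggregate evidence per item rather than per whole instance, a single noisy training instance perturbs only the items it contains, which loosely mirrors why item-based methods are more robust than instance (user) neighborhoods.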


2015 ◽  
Vol 14 (05) ◽  
pp. 947-970 ◽  
Author(s):  
Jiajin Huang ◽  
Xi Yuan ◽  
Ning Zhong ◽  
Yiyu Yao

A recommender system aims at recommending items that users might be interested in. With the increasing popularity of social tagging systems, it becomes urgent to model recommendations on users, items, and tags in a unified way. In this paper, we propose a framework for studying recommender systems by modeling user preferences as a relation on (user, item, tag) triples. We discuss tag-aware recommender systems from two aspects. On the one hand, we compute associations between users and items related to tags by using an adaptive method, and recommend tags to users or predict item properties for users. On the other hand, taking similarity-based recommendation as a case study, we discuss similarity measures from both qualitative and quantitative perspectives, and apply k-nearest neighbors and reverse k-nearest neighbors for recommendations.
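The distinction between k-nearest neighbors and reverse k-nearest neighbors can be shown concretely: the kNN of a user are those most similar to it, while the reverse kNN are the users that count it among their own k nearest neighbors. The similarity table below is a hypothetical example; the paper's similarity measures are not reproduced here.

```python
def knn(sim, u, k):
    """The k users most similar to u under the similarity table sim."""
    others = [v for v in sim if v != u]
    return set(sorted(others, key=lambda v: sim[u][v], reverse=True)[:k])

def reverse_knn(sim, u, k):
    """The users that count u among their own k nearest neighbors."""
    return {v for v in sim if v != u and u in knn(sim, v, k)}

# Hypothetical (possibly asymmetric) pairwise similarities.
sim = {
    "a": {"b": 0.9, "c": 0.1},
    "b": {"a": 0.2, "c": 0.8},
    "c": {"a": 0.7, "b": 0.3},
}
```

With `k = 1`, user `a`'s nearest neighbor is `b`, yet `a`'s reverse nearest neighbors are `{"c"}`: the two sets need not coincide, which is exactly why the two notions give different recommendation semantics.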


2014 ◽  
Vol 18 (4) ◽  
pp. 997-1017 ◽  
Author(s):  
Vreixo Formoso ◽  
Diego Fernández ◽  
Fidel Cacheda ◽  
Victor Carneiro

Mathematics ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 779
Author(s):  
Ruriko Yoshida

A tropical ball is a ball defined by the tropical metric over the tropical projective torus. In this paper we show several properties of tropical balls over the tropical projective torus and also over the space of phylogenetic trees with a given set of leaf labels. Then we discuss its application to the K nearest neighbors (KNN) algorithm, a supervised learning method used to classify a high-dimensional vector into given categories by looking at a ball centered at the vector, which contains K vectors in the space.
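The tropical metric on the tropical projective torus is d(x, y) = max_i (x_i − y_i) − min_i (x_i − y_i); it is invariant under adding a constant to all coordinates, which is what makes it well defined on the quotient space. A minimal sketch of KNN classification under this metric (the training data below is hypothetical, and majority voting is an assumed tie-free setup, not a detail from the paper):

```python
from collections import Counter

def tropical_distance(x, y):
    """Tropical metric on the tropical projective torus:
    d(x, y) = max_i (x_i - y_i) - min_i (x_i - y_i)."""
    diffs = [xi - yi for xi, yi in zip(x, y)]
    return max(diffs) - min(diffs)

def tropical_knn(train, query, k):
    """Majority vote among the K tropically nearest labeled points.
    train is a list of (point, label) pairs."""
    nearest = sorted(train, key=lambda t: tropical_distance(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Note that `tropical_distance((0, 0, 0), (1, 1, 1))` is 0, since the two points represent the same element of the projective torus.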

