Collaborative Recommendation Systems and Link Analysis

Author(s):  
François Fouss

Link analysis is a framework usually associated with fields such as graph mining, relational learning, Web mining, text mining, hypertext mining, and the visualization of link structures. It uncovers and analyzes relationships and associations between many objects of various types that are not apparent from isolated pieces of information. This chapter shows how to apply various link-analysis algorithms, which exploit the graph structure of databases, to collaborative-recommendation tasks. More precisely, two kinds of link-analysis algorithms are applied to recommend items to users: random-walk-based models and kernel-based models. These link-analysis-based algorithms do not use any features of the items to compute recommendations; instead, they first compute a matrix containing the links between persons and items, and then derive recommendations from this matrix or part of it.
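The last point of the abstract above (compute a person-item link matrix, then derive recommendations from it) can be illustrated with a minimal random-walk sketch on a toy bipartite graph. The ratings matrix, the three-hop walk, and the function names below are illustrative assumptions, not the chapter's actual models:

```python
import numpy as np

# Toy interaction matrix: rows = users, columns = items; 1 = interaction.
A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
], dtype=float)

# Row-normalize for user->item transitions; same for item->user on A^T.
P_ui = A / A.sum(axis=1, keepdims=True)
P_iu = A.T / A.T.sum(axis=1, keepdims=True)

# A three-hop walk user -> item -> user -> item scores items reached
# through users with similar interaction histories.
P3 = P_ui @ P_iu @ P_ui

def recommend(user, k=2):
    scores = P3[user].copy()
    scores[A[user] > 0] = -np.inf   # mask items the user already has
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # -> [2 3]
```

User 0 has interacted with items 0 and 1; the walk ranks item 2 above item 3 because item 2 is reachable through two co-rating users rather than one.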

2013 ◽  
pp. 446-464 ◽  
Author(s):  
Ana Paula Appel ◽  
Christos Faloutsos ◽  
Caetano Traina Junior

Graphs appear in several settings, such as social networks, recommendation systems, computer communication networks, and gene/protein biological networks, among others. A large number of graph patterns, as well as graph generator models that mimic such patterns, have been proposed over the last years. However, a deep and recurring question remains: “What is a good pattern?” The answer lies in finding a pattern or a tool able to help distinguish between actual real-world graphs and fake ones. Here we explore the ability of ShatterPlots, a simple and powerful algorithm, to tease out patterns of real graphs, helping us to spot fake/masked graphs. The idea is to force a graph to reach a critical (“Shattering”) point by randomly deleting edges, and to study its properties at that point.
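The edge-deletion idea can be sketched as follows. This is a simplified illustration, not the actual ShatterPlots algorithm: the deletion step size, the choice of giant-component size as the tracked statistic, and all function names are assumptions.

```python
import random
from collections import defaultdict, deque

def largest_component(nodes, edges):
    """Size of the largest connected component, via BFS."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        q, size = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best

def shatter(nodes, edges, step=0.05, seed=0):
    """Randomly delete a growing fraction of edges and record the
    giant-component size; the 'shattering' point is where it collapses."""
    rng = random.Random(seed)
    edges = edges[:]
    rng.shuffle(edges)
    n, trace = len(edges), []
    for i in range(round(1 / step) + 1):
        frac = i * step
        kept = edges[:n - round(frac * n)]
        trace.append((round(frac, 2), largest_component(nodes, kept)))
    return trace

# Usage on a 20-node ring: intact at fraction 0, fully shattered at 1.
nodes = list(range(20))
ring = [(i, (i + 1) % 20) for i in range(20)]
trace = shatter(nodes, ring)
```

Plotting `trace` (deleted fraction vs. giant-component size) shows the critical point where the graph falls apart, which is the kind of property the paper studies.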



2021 ◽  
pp. 1-17
Author(s):  
Fátima Leal ◽  
Bruno Veloso ◽  
Benedita Malheiro ◽  
Juan Carlos Burguillo ◽  
Adriana E. Chis ◽  
...  

Explainable recommendations enable users to understand why certain items are suggested and, ultimately, nurture system transparency, trustworthiness, and confidence. Large crowdsourcing recommendation systems ought to crucially promote the authenticity and transparency of recommendations. To address this challenge, this paper proposes the use of stream-based explainable recommendations via blockchain profiling. Our contribution relies on chained historical data to improve the quality and transparency of online collaborative recommendation filters – memory-based and model-based – using, as use cases, data streamed from two large tourism crowdsourcing platforms, namely Expedia and TripAdvisor. Our method builds historical trust-based models of raters; it is implemented as an external module and integrated with the collaborative filter through a post-recommendation component. Blockchain ensures the history, traceability, and authenticity of the inter-user trust profiles, since these profiles are stored as a smart contract in a private Ethereum network. Our empirical evaluation with HotelExpedia and TripAdvisor has consistently shown the positive impact of blockchain-based profiling on the quality (measured as recall) and transparency (determined via explanations) of recommendations.
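The post-recommendation idea can be sketched as re-weighting a collaborative filter's candidate scores by the trust of the raters behind them. The trust values, item names, and weighting scheme below are invented for illustration; in the paper, the trust profiles live in an Ethereum smart contract rather than a plain dictionary:

```python
# Hypothetical per-rater trust scores (in the paper: blockchain-stored profiles).
trust = {"alice": 0.9, "bob": 0.4, "eve": 0.1}

# item -> (collaborative-filter score, raters behind the supporting ratings)
candidates = {
    "hotel_a": (0.80, ["alice", "bob"]),
    "hotel_b": (0.85, ["eve"]),
    "hotel_c": (0.70, ["alice"]),
}

def rerank(candidates, trust, default=0.5):
    """Scale each CF score by the mean trust of its raters, then sort."""
    ranked = []
    for item, (score, raters) in candidates.items():
        avg_trust = sum(trust.get(r, default) for r in raters) / len(raters)
        ranked.append((item, score * avg_trust))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

for item, s in rerank(candidates, trust):
    print(item, round(s, 3))
```

Note how `hotel_b`, despite the highest raw CF score, drops to last place because its only supporting rater has low trust; the trust values themselves also serve as the explanation shown to the user.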


Author(s):  
Seyyed Mohammadreza Rahimi ◽  
Rodrigo Augusto de Oliveira e Silva ◽  
Behrouz Far ◽  
Xin Wang

2013 ◽  
Vol 49 (3) ◽  
pp. 688-697 ◽  
Author(s):  
Ismail Sengor Altingovde ◽  
Özlem Nurcan Subakan ◽  
Özgür Ulusoy

2020 ◽  
Vol 5 (4) ◽  
pp. 43-55
Author(s):  
Gianpiero Bianchi ◽  
Renato Bruni ◽  
Cinzia Daraio ◽  
Antonio Laureti Palma ◽  
Giulio Perani ◽  
...  

Purpose: The main objective of this work is to show the potential of recently developed approaches for automatic knowledge extraction directly from universities' websites. The information automatically extracted can potentially be updated with a frequency higher than once per year and is safe from manipulation or misinterpretation. Moreover, this approach gives us flexibility in collecting indicators about the efficiency of universities' websites and their effectiveness in disseminating key contents. These new indicators can complement traditional indicators of scientific research (e.g. number of articles and number of citations) and teaching (e.g. number of students and graduates) by introducing further dimensions that allow new insights for “profiling” the analyzed universities.

Design/methodology/approach: Webometrics relies on web mining methods and techniques to perform quantitative analyses of the web. This study implements an advanced application of the webometric approach, exploiting all three categories of web mining: web content mining, web structure mining, and web usage mining. The information to compute our indicators has been extracted from the universities' websites using web scraping and text mining techniques. The scraped information has been stored in a NoSQL database in a semi-structured form to allow efficient retrieval by text mining techniques. This provides increased flexibility in the design of new indicators, opening the door to new types of analyses. Some data have also been collected by means of batch interrogations of search engines (Bing, www.bing.com) or from a leading provider of web analytics (SimilarWeb, http://www.similarweb.com). The information extracted from the web has been combined with university structural information taken from the European Tertiary Education Register (https://eter.joanneum.at/#/home), a database collecting information on Higher Education Institutions (HEIs) at the European level. All the above was used to cluster 79 Italian universities based on structural and digital indicators.

Findings: The main findings of this study concern the evaluation of the digitalization potential of universities, in particular by presenting techniques for the automatic extraction of information from the web to build indicators of the quality and impact of universities' websites. These indicators can complement traditional indicators and can be used to identify groups of universities with common features by applying clustering techniques to the above indicators.

Research limitations: The results reported in this study refer to Italian universities only, but the approach could be extended to other university systems abroad.

Practical implications: The approach proposed in this study, and its illustration on Italian universities, shows the usefulness of recently introduced automatic data extraction and web scraping approaches and their practical relevance for characterizing and profiling the activities of universities on the basis of their websites. The approach could be applied to other university systems.

Originality/value: This work applies, for the first time, to university websites some recently introduced techniques for automatic knowledge extraction based on web scraping, optical character recognition, and nontrivial text mining operations (Bruni & Bianchi, 2020).
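The clustering step can be illustrated with a small sketch: each university is represented by a vector of standardized indicators and grouped with k-means. The indicator values, the feature set, and the farthest-point initialization below are assumptions for illustration; the paper's actual indicators and clustering method may differ:

```python
import numpy as np

# Hypothetical structural + digital indicators per university
# (columns: e.g. students, staff, web traffic, indexed pages).
rng = np.random.default_rng(0)
big = rng.normal([50, 5, 80, 90], 2, size=(5, 4))    # 5 large universities
small = rng.normal([5, 1, 10, 15], 2, size=(5, 4))   # 5 small universities
X = np.vstack([big, small])

def kmeans(X, k=2, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # next center: the point farthest from all chosen centers
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Standardize features first so no single indicator dominates the distance.
labels = kmeans((X - X.mean(axis=0)) / X.std(axis=0))
```

On this toy data the two groups separate cleanly; on real indicator data one would also choose k by inspecting cluster quality rather than fixing it in advance.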


Author(s):  
Andreas Aresti ◽  
Penelope Markellou ◽  
Ioanna Mousourouli ◽  
Spiros Sirmakessis ◽  
Athanasios Tsakalidis

Recommendation systems are special personalization tools that help users find interesting information and services in complex online shops. Even though today's e-commerce environments have drastically evolved and now incorporate techniques from other domains and application areas, such as Web mining, semantics, artificial intelligence, user modeling, and profiling, setting up a successful recommendation system is not a trivial or straightforward task. This chapter argues that by monitoring, analyzing, and understanding the behavior of customers, their demographics, opinions, preferences, and history, as well as taking into consideration the specific e-shop ontology and applying Web mining techniques, the effectiveness of the produced recommendations can be significantly improved. In this way, the e-shop may upgrade users' interaction, increase its usability, convert users to buyers, retain current customers, and establish long-term and loyal one-to-one relationships.

