PageRank: A modified random surfer model

Author(s):  
Boo Vooi Keong ◽  
Patricia Anthony
2002 ◽  
Vol 109 (8) ◽  
pp. 741-745 ◽  
Author(s):  
Mark Levene ◽  
George Loizou

2017 ◽  
Vol 1 (1) ◽  
Author(s):  
Rima Aprilia ◽  
Rina Filia Sari

The PageRank algorithm is implemented to rank web pages, which generally consist of static and dynamic pages; with the rapid growth in URLs and users, an algorithm for computing web rankings is needed. In determining the ranking of a web page, incoming and outgoing links, which the random surfer model takes into account, are a decisive factor. The PageRank implementation is built in MATLAB as a program in an m-file.
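
As a point of reference for the abstract above, the following is a minimal sketch of the standard random-surfer PageRank computation via power iteration with a damping factor of 0.85. It is written in Python rather than as the MATLAB m-file the paper describes, and the small example graph is invented purely for illustration.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-10, max_iter=100):
    """Standard PageRank via power iteration on the random-surfer Markov chain.

    adjacency[i, j] = 1 if page i links to page j.
    Dangling pages (no outgoing links) are treated as linking to every page.
    """
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1)
    # Row-stochastic transition matrix of the random surfer.
    transition = np.where(
        out_degree[:, None] > 0,
        adjacency / np.maximum(out_degree, 1)[:, None],
        1.0 / n,
    )
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # Teleport with probability (1 - damping), follow a link otherwise.
        new_rank = (1 - damping) / n + damping * (rank @ transition)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Example: a small invented 4-page web.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
print(pagerank(A))
```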


Author(s):  
Thomas Largillier ◽  
Sylvain Peyronnet

Search engines use several criteria to rank webpages and to choose which pages to display when answering a request. Those criteria can be separated into two notions: relevance and popularity. Popularity is computed by the search engine and is related to the links made to a webpage. Malicious webmasters want to artificially increase their popularity; the techniques they use are often referred to as Webspam. Webspam takes many forms and evolves constantly, but it usually consists of building a dedicated structure of spam pages around a given target page. It is important for a search engine to address Webspam; otherwise it cannot provide users with fair and reliable results. In this paper, the authors propose a technique to identify Webspam through the frequency language associated with random walks within those dedicated structures. They identify this language by computing the frequency of appearance of k-grams on random walks launched from every node.
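
The abstract describes counting k-grams along random walks launched from every node. The sketch below is one plausible reading of that step, not the authors' implementation: it assumes each page carries a categorical label (the paper's actual labelling scheme is not given here) and estimates the relative frequency of label k-grams over a fixed number of fixed-length walks per node.

```python
import random
from collections import Counter

def kgram_frequencies(successors, labels, k=3, walk_length=10,
                      walks_per_node=20, seed=0):
    """Estimate the k-gram frequency profile of label sequences on random walks.

    successors: dict node -> list of successor nodes (directed link graph)
    labels:     dict node -> categorical label for every reachable node
                (an assumption made for this sketch)
    Returns a Counter mapping k-grams (tuples of labels) to relative frequency.
    """
    rng = random.Random(seed)
    counts = Counter()
    for start in successors:
        for _ in range(walks_per_node):
            walk, node = [start], start
            # Follow outgoing links uniformly at random until the walk ends
            # or a page with no outgoing links is reached.
            while len(walk) < walk_length and successors.get(node):
                node = rng.choice(successors[node])
                walk.append(node)
            seq = [labels[n] for n in walk]
            counts.update(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return Counter({gram: c / total for gram, c in counts.items()})
```

Under this reading, a spam structure would be flagged when its k-gram frequency profile deviates markedly from that of ordinary pages; how that comparison is done is left to the paper itself.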


Author(s):  
Florian Geigl ◽  
Simon Walk ◽  
Markus Strohmaier ◽  
Denis Helic


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2437
Author(s):  
Kausthub Keshava ◽  
Alain Jean-Marie ◽  
Sara Alouf

We propose and analyze a model for optimizing the prefetching of documents in the situation where the connections between documents are discovered progressively. A random surfer moves along the edges of a random tree representing possible sequences of documents, which is known to a controller only up to depth d. A quantity k of documents can be prefetched between two movements. The question is to determine which nodes of the known tree should be prefetched so as to minimize the probability of the surfer moving to a node that has not been prefetched. We analyze the model with the tools of Markov decision process theory. We formally identify the optimal policy in several situations and identify it numerically in others.
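
To make the setting concrete, here is a small heuristic sketch under stated assumptions: the known tree is given as a child map, the surfer picks a child uniformly at random at each move, and the controller greedily prefetches the k not-yet-fetched nodes most likely to be visited within the known horizon. This greedy rule is only an illustration; the optimal policy the authors derive via Markov decision process theory need not coincide with it.

```python
def reach_probabilities(children, root, depth):
    """Probability that a uniform random surfer starting at `root` visits each
    node of the known tree within `depth` downward moves."""
    probs = {root: 1.0}
    frontier = {root: 1.0}
    for _ in range(depth):
        nxt = {}
        for node, p in frontier.items():
            kids = children.get(node, [])
            for kid in kids:
                q = p / len(kids)          # child chosen uniformly at random
                nxt[kid] = nxt.get(kid, 0.0) + q
                probs[kid] = probs.get(kid, 0.0) + q
        frontier = nxt
    return probs

def greedy_prefetch(children, root, depth, k):
    """Prefetch the k nodes most likely to be visited (heuristic sketch only)."""
    probs = reach_probabilities(children, root, depth)
    probs.pop(root, None)                  # the current document is already loaded
    return sorted(probs, key=probs.get, reverse=True)[:k]

# Example: a tree known to depth 2, with a budget of k = 2 prefetches.
tree = {"r": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
print(greedy_prefetch(tree, "r", depth=2, k=2))
```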

