Modelling and simulation of the web graph: evaluating an exponential growth copying model

Author(s):
Antonios Kogias,
Mara Nikolaidou,
Dimosthenis Anagnostopoulos
2009, Vol. 6 (3), pp. 255-256
Author(s):
Konstantin Avrachenkov,
Debora Donato,
Nelly Litvak
Author(s):
K.G. Srinivasa,
Anil Kumar Muppalla,
Varun A. Bharghava,
M. Amulya

In this paper, the authors discuss MapReduce implementations of the crawler, indexer, and ranking algorithms used in search engines to retrieve results from the World Wide Web. Running the crawler and the indexer in a MapReduce environment improves the speed of crawling and indexing. The proposed ranking algorithm is an iterative method that exploits the link structure of the Web and is implemented on the MapReduce framework to speed up the convergence of ranking the web pages. Categorization is used to retrieve and order the results according to the user's choice, personalizing the search. The paper also introduces a new score, associated with each web page, computed from the user's query and the number of occurrences of the query terms in the document corpus. The experiments are conducted on Web graph datasets, and the results are compared with serial versions of the crawler, indexer, and ranking algorithms.
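The iterative, link-based ranking described above can be sketched as a single map/reduce round of a PageRank-style computation. This is a minimal single-machine illustration, not the authors' implementation; the tiny graph, damping factor, and function names are assumptions for the example:

```python
# One PageRank-style iteration expressed as map and reduce phases,
# run locally over a tiny hypothetical web graph.
from collections import defaultdict

DAMPING = 0.85  # standard damping factor; illustrative choice

def map_phase(page, rank, out_links):
    """Emit the page's link structure plus a rank share per out-link."""
    pairs = [(page, ("links", out_links))]
    if out_links:
        share = rank / len(out_links)
        for target in out_links:
            pairs.append((target, ("share", share)))
    return pairs

def reduce_phase(page, values, n_pages):
    """Combine incoming rank shares into the page's new rank."""
    total = sum(v for kind, v in values if kind == "share")
    return (1 - DAMPING) / n_pages + DAMPING * total

def pagerank_iteration(graph, ranks):
    """Shuffle mapped pairs by key, then reduce each key's value list."""
    intermediate = defaultdict(list)
    for page, out_links in graph.items():
        for key, value in map_phase(page, ranks[page], out_links):
            intermediate[key].append(value)
    n = len(graph)
    return {page: reduce_phase(page, values, n)
            for page, values in intermediate.items()}

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {p: 1 / len(graph) for p in graph}
ranks = pagerank_iteration(graph, ranks)
```

In a real MapReduce job, `map_phase` and `reduce_phase` would run as distributed tasks and the iteration would repeat until the ranks converge; the sketch ignores dangling pages for brevity.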


Author(s):  
Amine Rahmani

The phenomenon of big data (massive data mining) refers to the exponential growth of the volume of data available on the web. The concept has become widely used in recent years, as it enables scalable, efficient, and fast access to data anytime and anywhere, helping the scientific community and companies identify even the most subtle behaviors of users. However, big data also raises ethical issues and risks that cannot be ignored. Indeed, new privacy risks are only beginning to be perceived. Sometimes merely annoying, these risks can be genuinely harmful. In the medium term, privacy could become one of the biggest obstacles to the growth of big data solutions. It is in this context that a great deal of research is under way to enhance security and to develop mechanisms for protecting users' privacy. Although this area is still in its infancy, the list of possibilities continues to grow.




Author(s):  
Vadim Zverovich

This chapter gives a brief overview of selected applications of graph theory, many of which gave rise to the development of graph theory itself. Such applications range from puzzles and games to serious scientific and real-life problems, illustrating their diversity. The first section is devoted to the six earliest applications of graph theory. The next section introduces so-called scale-free networks, which include the web graph and social and biological networks. The last section describes several graph-theoretic algorithms that can be used to tackle interesting applications and problems of graph theory.
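Scale-free networks such as the web graph are commonly generated by preferential attachment, where new nodes link to existing nodes with probability proportional to their degree. The following is a minimal sketch of that idea (a Barabási-Albert-style process; the function name and parameters are illustrative, not taken from the chapter):

```python
# Grow a scale-free graph by preferential attachment: each new node
# attaches to m existing nodes chosen proportionally to their degree.
import random

def preferential_attachment(n_nodes, m=2, seed=0):
    rng = random.Random(seed)
    edges = [(0, 1)]        # start from a single edge between nodes 0 and 1
    degree_bag = [0, 1]     # each node appears once per unit of degree, so
                            # uniform sampling from the bag is degree-biased
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(degree_bag))
        for t in targets:
            edges.append((new, t))
            degree_bag.extend([new, t])
    return edges

edges = preferential_attachment(100)
```

The "rich get richer" bias of the degree bag is what produces the power-law degree distribution characteristic of scale-free networks.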


Author(s):
Leon Sterling,
Kuldar Taveter

Logic programming emerged from the realization that expressing knowledge in an appropriate clausal form in logic was akin to programming. The basic construct of a logic program can be viewed as a rule. This chapter will review rules from a logic programming perspective with an eye to developments within modern rule languages. It mentions rule interpreters, hybrid computing, interaction with the Web, and agents. An extended example is given concerning rule-based modelling and simulation of traffic at airports.
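The view of a rule as the basic construct of a logic program can be illustrated with a tiny forward-chaining rule interpreter. This is a generic sketch, not the chapter's own interpreter; the rules and facts below are hypothetical ground examples:

```python
# A minimal forward-chaining interpreter: a rule is a pair
# (conditions, conclusion), and rules fire until no new facts appear.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions all hold, to a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Ground rules: "conclusion if all conditions hold".
rules = [
    (("parent(tom, bob)", "parent(bob, ann)"), "grandparent(tom, ann)"),
    (("grandparent(tom, ann)",), "ancestor(tom, ann)"),
]
facts = forward_chain({"parent(tom, bob)", "parent(bob, ann)"}, rules)
```

A Prolog-style system would instead use backward chaining with variables and unification; the fixpoint loop above shows only the rule-as-program idea in its simplest form.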

