A Novel Heuristic PageRank Algorithm in Web Search

2011 ◽  
Vol 216 ◽  
pp. 747-751
Author(s):  
Yan Li He

With the rapid growth of the Internet, web search engines have become the most important tools for retrieving information online. PageRank computes the principal eigenvector of the matrix describing the hyperlinks in the web using the well-known power method. Based on empirical distributions of Web page degrees, we derived analytically the probability distribution of the PageRank metric and found that it follows the same inverse polynomial law reported for Web page degrees.
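
As context for the power-method computation mentioned above, the following is a minimal sketch (not the paper's implementation) of PageRank iteration on a small hand-built link matrix; the damping factor, tolerance, and toy graph are illustrative assumptions.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=100):
    """Power-method PageRank on a 0/1 adjacency matrix (rows = source pages)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling pages link to every page.
    M = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1), 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * M @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

# Toy web of four pages: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
print(pagerank(links))
```

The iteration converges to the principal eigenvector of the damped transition matrix, which is the PageRank vector the abstract refers to.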

2014 ◽  
Vol 1042 ◽  
pp. 203-206
Author(s):  
Cong Liu ◽  
Jing Chang Pan ◽  
Guo Zhou Ge

This paper implements a topology generator based on BRITE with several extensions and optimizations. Experiments show that TANG outperforms BA in the second stage of BRITE. A new placement rule also restricts the positions where nodes may be placed. The generator adds a GEXF output format, and both dense and sparse matrix outputs are supported. The paper also compares web page link topology with Internet topology; experiments show that the two types of topology share similar characteristics from the perspective of complex network theory.
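
The sketch below is not the paper's BRITE-based generator; it is only an assumed analogue using networkx that generates a preferential-attachment (BA) topology and writes the kinds of outputs the abstract mentions (GEXF and an adjacency matrix).

```python
import networkx as nx

# Generate a 100-node preferential-attachment (BA) topology; BRITE itself is
# not used here, this only mirrors the output formats described in the paper.
G = nx.barabasi_albert_graph(n=100, m=2, seed=42)

# GEXF output, e.g. for visualisation in Gephi.
nx.write_gexf(G, "topology.gexf")

# Dense adjacency-matrix output.
A = nx.to_numpy_array(G)
print(A.shape, int(A.sum()) // 2, "edges")
```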


Author(s):  
John DiMarco

Web authoring is the process of developing Web pages. The Web development process requires you to use software to create functional pages that will work on the Internet. Adding Web functionality means creating specific components within a Web page that do something. Adding links, rollover graphics, and interactive multimedia items to a Web page are examples of enhanced functionality. This chapter demonstrates Web-based authoring techniques using Macromedia Dreamweaver. The focus is on adding Web functions to pages generated from Macromedia Fireworks and on giving an overview of creating Web pages from scratch using Dreamweaver. Dreamweaver and Fireworks are professional Web applications, and using professional Web software will benefit you tremendously. There are other ways to create Web pages using applications not specifically made for the task, such as Microsoft Word and Microsoft PowerPoint. The use of Microsoft applications for Web page development is not covered in this chapter; however, I do provide steps on how to use these applications for Web page authoring in the appendix of this text. If you feel more comfortable using the Microsoft applications, or the Macromedia applications simply aren't available to you yet, follow the same process for Web page conceptualization and content creation and use the programs available to you. You should try to gain Web page development skills using Macromedia Dreamweaver because it helps you expand your software skills beyond basic office applications. The ability to create a Web page using professional Web development software is important to building a high-end computer skill set. The main objectives of this chapter are to get you involved in some technical processes that you'll need to create the Web portfolio. The focus will be on guiding you through opening your sliced pages, adding links, using tables, creating pop-up windows for content, and using layers and timelines for dynamic HTML. The coverage will not attempt to provide a complete tutorial for Macromedia Dreamweaver, but will highlight essential techniques. Along the way you will get pieces of hand-coded ActionScript and JavaScript, and you can decide which pieces you want to use in your own Web portfolio pages. The techniques provided form a concentrated workflow for creating Web pages. Let us begin to explore Web page authoring.


2001 ◽  
Vol 6 (2) ◽  
pp. 107-110 ◽  
Author(s):  
John P. Young

This paper describes an exploration of utilising the World Wide Web for interactive music. The origin of this investigation was the intermedia work Telemusic #1, by Randall Packer, which combined live performers with live public participation via the Web. During the event, visitors to the site navigated through a virtual interface, and while manipulating elements, projected their actions in the form of triggered sounds into the physical space. Simultaneously, the live audio performance was streamed back out to the Internet participants. Thus, anyone could take part in the collective realisation of the work and hear the musical results in real time. The underlying technology is, to our knowledge, the first standards-based implementation linking the Web with Cycling '74 MAX. Using only ECMAScript/JavaScript, Java, and the OTUDP external from UC Berkeley CNMAT, virtually any conceivable interaction with a Web page can send data to a MAX patch for processing. The code can also be readily adapted to work with Pd, jMAX and other network-enabled applications.
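
The original implementation used ECMAScript/JavaScript, Java, and CNMAT's OTUDP external; the following is only an assumed Python analogue of the same idea, serialising Web-page interaction events and sending them as UDP datagrams to a port on which a MAX patch is presumed to listen. The host, port, and message format are illustrative, not taken from the paper.

```python
import json
import socket

# Hypothetical analogue of the paper's browser-to-MAX bridge: interaction
# events are serialised and sent as UDP datagrams to a port where a MAX patch
# (e.g. via a UDP-receiving external such as OTUDP) is assumed to listen.
MAX_HOST, MAX_PORT = "127.0.0.1", 7400  # illustrative values, not from the paper

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_interaction(element_id: str, value: float) -> None:
    """Send one Web-page interaction event to the listening patch."""
    payload = json.dumps({"element": element_id, "value": value}).encode("utf-8")
    sock.sendto(payload, (MAX_HOST, MAX_PORT))

send_interaction("slider-1", 0.73)
```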


Author(s):  
GAURAV AGARWAL ◽  
SACHI GUPTA ◽  
SAURABH MUKHERJEE

Today, web servers are the key repositories of information and the Internet is the primary means of retrieving it. The amount of data on the Internet is enormous, so finding relevant information is difficult, and search engines play a vital role in that task. A search engine works in three stages: web crawling by the crawler, indexing by the indexer, and searching by the searcher. The web crawler retrieves information about web pages by following every link on a site; the search engine stores this information, and the indexer then indexes the page content. The main role of the indexer is to make the data quickly retrievable according to user requirements. When a client issues a query, the search engine searches for results corresponding to that query in order to provide good output. The ambition here is to develop a search engine algorithm that returns the most desirable results for the user's requirement. A ranking method is used by the search engine to rank the web pages. Various ranking approaches are discussed in the literature, but in this paper a ranking algorithm based on the parent-child relationship is proposed. The proposed ranking algorithm is based on the priority-assignment phase of the Heterogeneous Earliest Finish Time (HEFT) algorithm, which was designed for multiprocessor task scheduling. The proposed algorithm works on three variables: the density of keywords, the number of successors of a node, and the age of the web page. Density measures the occurrence of the keyword on a particular web page; the number of successors represents the outgoing links of a page; age is the freshness value of the page, so the most recently modified page has the smallest age or the largest freshness value. The proposed technique requires that the priority of each page be set using downward rank values and that pages be arranged in ascending or descending order of their rank values. Experiments show that the algorithm is valuable: in a comparison with Google, the proposed algorithm performs better on 70% of the test problems.
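
The abstract does not give the exact scoring formula, so the sketch below is only an assumed illustration of the three-variable idea (keyword density, successor count, freshness) combined with a simplified HEFT-style recursive rank; all weights and the rank recursion are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    url: str
    keyword_hits: int      # occurrences of the query keyword on the page
    word_count: int        # total words on the page (for density)
    successors: list = field(default_factory=list)  # outgoing links (child pages)
    age_days: float = 0.0  # days since last modification

def score(page: Page) -> float:
    density = page.keyword_hits / max(page.word_count, 1)
    freshness = 1.0 / (1.0 + page.age_days)          # newer page -> larger value
    return density + 0.1 * len(page.successors) + freshness

def heft_like_rank(page: Page, memo=None) -> float:
    """Simplified HEFT-style priority: the page's own score plus the best rank
    among its successors (a stand-in for the paper's downward rank)."""
    memo = {} if memo is None else memo
    if page.url not in memo:
        child_ranks = [heft_like_rank(c, memo) for c in page.successors]
        memo[page.url] = score(page) + (max(child_ranks) if child_ranks else 0.0)
    return memo[page.url]

child = Page("example.org/a", keyword_hits=4, word_count=200, age_days=2)
root = Page("example.org", keyword_hits=6, word_count=500, successors=[child], age_days=30)
ranked = sorted([root, child], key=heft_like_rank, reverse=True)
print([p.url for p in ranked])
```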


2019 ◽  
Vol 8 (2S11) ◽  
pp. 2011-2016

With the explosion in the number of Internet pages, it is very hard to find the desired records quickly and easily among the masses of web pages retrieved by a search engine, and there is a growing requirement for automatic classification techniques with higher accuracy. There are situations today in which it is essential to have an efficient and reliable classification of a web page from the information contained in the URL (Uniform Resource Locator) alone, without the need to visit the page itself. We want to know whether the URL can be used without having to look at and visit the page, for several reasons: fetching the page content and sorting it to determine the genre of a web page is very time consuming and requires the user to know the structure of the page to be classified. To avoid this time-consuming process, we propose an alternative method that derives the genre of an entered URL from the URL itself and from the page metadata, i.e., the description, the keywords used in the website, and the title of the site. The approach therefore does not rely only on the URL but also on content from the web application. The proposed system can be evaluated using several available datasets.
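
As a rough illustration of classifying a page's genre from its URL plus metadata, here is a minimal sketch using a bag-of-words model; the example pages, genre labels, and choice of classifier are assumptions, not the paper's actual setup or datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def combine(url, title, description, keywords):
    # Treat URL tokens and metadata fields as one bag of text.
    return " ".join([url.replace("/", " ").replace("-", " "), title, description, keywords])

train_texts = [
    combine("bbc.com/news/world", "World news", "Latest world headlines", "news, politics"),
    combine("espn.com/nba/scores", "NBA scores", "Basketball results", "sports, basketball"),
    combine("allrecipes.com/recipe/pasta", "Pasta recipe", "Easy dinner recipe", "cooking, food"),
]
train_labels = ["news", "sports", "food"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

test = combine("cnn.com/politics/election", "Election coverage", "Campaign updates", "news")
print(model.predict([test])[0])
```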


Author(s):  
Rahul Pradhan ◽  
Dilip Kumar Sharma

Users issuing a query on a search engine expect results that are relevant to the query topic rather than mere textual matches with the query text. Studies conducted by several researchers show that users want the search engine to understand the implicit intent of a query rather than simply looking for textual matches in the hypertext structure of a document or web page. In this paper, the authors address queries that carry temporal intent and help web search engines classify them into certain categories. These classes or categories help the search engine understand and cater to the needs of the query. The authors consider temporal expressions (e.g. 1943) in documents and categorize queries on the basis of their temporal boundaries. Their experiment classifies queries and suggests a further course of action for search engines. Results show that classifying queries into these classes helps users reach the information they are seeking faster.
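
The abstract does not define the paper's temporal categories, so the following sketch is only an assumed illustration of the general idea: extract year expressions from a query and bucket it relative to a reference year. The category names and boundaries are assumptions.

```python
import re
from datetime import date

YEAR_PATTERN = re.compile(r"\b(1[0-9]{3}|20[0-9]{2})\b")

def temporal_class(query: str, reference_year: int = date.today().year) -> str:
    """Bucket a query by the temporal expressions it contains (hypothetical classes)."""
    years = [int(y) for y in YEAR_PATTERN.findall(query)]
    if not years:
        return "atemporal"
    offset = max(years) - reference_year
    if offset > 0:
        return "future"
    if offset >= -5:
        return "recent past"
    return "distant past"

for q in ["battle of stalingrad 1943", "olympics 2032 host city", "best phones"]:
    print(q, "->", temporal_class(q))
```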


2012 ◽  
pp. 239-273
Author(s):  
Sarah Vert

This chapter focuses on the Internet working environment of Knowledge Workers through the customization of the Web browser on their computer. Given that a Web browser is designed to be used by anyone browsing the Internet, its initial configuration must meet generic needs such as reading a Web page, searching for information, and bookmarking. In the absence of a universal solution that meets the specific needs of each user, browser developers offer additional programs known as extensions, or add-ons. Among the various browsers that can be modified with add-ons, Mozilla’s Firefox is perhaps the one that first springs to mind; indeed, Mozilla has built the Firefox brand around these extensions. Using this example, and also considering the browsers Google Chrome, Internet Explorer, Opera and Safari, the author will attempt to demonstrate the potential of Web browsers in terms of the resources they can offer when they are customizable and available within the working environment of a Knowledge Worker.


2005 ◽  
Vol 5 (3) ◽  
pp. 255-268 ◽  
Author(s):  
Russell Williams ◽  
Rulzion Rattray

Organisations increasingly use the internet and web to communicate with the marketplace. Indeed, the hotel industry seems particularly suited to the use of these technologies. Many sites are not accessible to large segments of the disabled community, however, or to individuals using particular hardware and software. Identifying the competitive and legal mandates for website accessibility, the study looks at the accessibility of UK-based hotel websites. Utilising the accessibility software Bobby, as well as making some additional manual accessibility checks, the study finds disappointingly low levels of website accessibility. If organisations want to make more effective use of the web, they need to ensure that their web pages are designed from the outside in, that is, from the user's perspective.


2018 ◽  
Vol 7 (1.7) ◽  
pp. 91
Author(s):  
L LeemaPriyadharshini ◽  
S Florence ◽  
K Prema ◽  
C Shyamala Kumari

Search engines provide ranked information based on the query given by the user. Understanding user search behavior is important for satisfying users with the information they need, and recommending more information or more sites to the user based on that behavior is an emerging task. The work is based on the queries given by the user, the amount of time the user spends on a particular page, and the number of clicks the user makes on a particular URL. These details are available in a web search log, which records user searching activities and other details such as machine ID, browser ID, timestamp, the query given by the user, and the URL accessed. Four tasks are considered important: 1) extracting tasks from the sequence of queries given by the user, 2) suggesting similar queries to the user, 3) ranking URLs based on implicit user behaviors, and 4) increasing web page utility based on implicit behaviors. Predicting implicit user behavior is a prerequisite for increasing web page utility and ranking URLs, and each of these four tasks requires the design and implementation of algorithms and techniques to increase efficiency and effectiveness.
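
To make the third task concrete, here is a minimal sketch of ranking URLs by implicit feedback aggregated from a search log; the log fields and the scoring formula (clicks plus dwell time) are assumptions, not the paper's method.

```python
from collections import defaultdict

# Toy search-log entries: query, clicked URL, click count, and dwell time.
log = [
    {"query": "python tutorial", "url": "docs.python.org/3/tutorial", "clicks": 3, "dwell_seconds": 240},
    {"query": "python tutorial", "url": "example.com/python-basics",  "clicks": 1, "dwell_seconds": 15},
    {"query": "python tutorial", "url": "docs.python.org/3/tutorial", "clicks": 2, "dwell_seconds": 180},
]

def implicit_score(entry) -> float:
    # A long dwell time is treated as a stronger satisfaction signal than a bare click.
    return entry["clicks"] + entry["dwell_seconds"] / 60.0

scores = defaultdict(float)
for entry in log:
    scores[entry["url"]] += implicit_score(entry)

for url, s in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{s:6.2f}  {url}")
```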


2013 ◽  
Vol 10 (9) ◽  
pp. 1969-1976
Author(s):  
Sathya Bama ◽  
M.S.Irfan Ahmed ◽  
A. Saravanan

The Internet continues to grow rapidly, which increases the need to improve the quality of its services. Web mining is a research area that applies data mining techniques to address this need. With billions of pages on the web, it is a very intricate task for search engines to provide relevant information to users. Web structure mining plays a vital role by ranking web pages based on the user query, which is the most essential task of web search engines. PageRank, Weighted PageRank, and HITS are the algorithms commonly used in web structure mining for ranking web pages, but all of them treat all links equally when distributing initial rank scores. In this paper, an improved PageRank algorithm is introduced. The results show that the algorithm performs better than the PageRank algorithm.
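
Since the abstract contrasts equal-weight link distribution with weighted alternatives, the sketch below shows the standard Weighted PageRank idea (links weighted by the in-degree and out-degree of their targets); it is an illustration of link weighting, not the improved algorithm this paper proposes, and the toy graph and iteration count are assumptions.

```python
graph = {            # page -> pages it links to (no dangling pages in this toy example)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C", "A"],
}

in_links = {u: [v for v, outs in graph.items() if u in outs] for u in graph}
in_deg = {u: len(in_links[u]) for u in graph}
out_deg = {u: len(graph[u]) for u in graph}

def w_in(v, u):
    # Share of v's link weight given to u, based on in-degrees of v's targets.
    return in_deg[u] / sum(in_deg[p] for p in graph[v])

def w_out(v, u):
    # Share based on out-degrees of v's targets.
    return out_deg[u] / sum(out_deg[p] for p in graph[v])

d = 0.85
rank = {u: 1.0 for u in graph}
for _ in range(50):
    rank = {u: (1 - d) + d * sum(rank[v] * w_in(v, u) * w_out(v, u) for v in in_links[u])
            for u in graph}

print({u: round(r, 3) for u, r in sorted(rank.items())})
```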

