Optimizing Performance of Search Engines Based on User Behavior

2018 ◽  
Vol 7 (2.7) ◽  
pp. 359
Author(s):  
Dr Jkr Sastry ◽  
M Sri Harsha Vamsi ◽  
R Srinivas ◽  
G Yeshwanth

Web users search for content by submitting keywords or snippets to a search engine, which collects matching content and returns it as a list of URL links. One can observe that only about 20% of the returned URLs are of use to the end user; the remaining 80% are surfed unnecessarily, wasting time and money. Users exhibit surfing characteristics that can be collected as they browse, and the search process can be made more efficient by making these characteristics and behaviors an integral part of it. This paper aims to improve the search process by integrating user behavior into the indexing and ranking of web pages.
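A minimal sketch (not the authors' implementation) of one way to fold user behavior into ranking: each URL's base relevance score is blended with its historical click-through rate, so results users actually choose rise in the list. All names, weights, and data structures here are illustrative assumptions.

```python
def rerank(results, click_counts, impression_counts, alpha=0.7):
    """Blend base relevance with observed click-through rate (CTR).

    results: list of (url, base_score) pairs from the search engine.
    click_counts / impression_counts: per-URL behavior logs.
    alpha: weight on the base score; (1 - alpha) goes to CTR.
    """
    def ctr(url):
        impressions = impression_counts.get(url, 0)
        return click_counts.get(url, 0) / impressions if impressions else 0.0

    scored = [(url, alpha * score + (1 - alpha) * ctr(url))
              for url, score in results]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Example: a URL users actually click rises above a nominally better match.
results = [("a.com/page", 0.9), ("b.com/page", 0.8)]
clicks = {"b.com/page": 90}
impressions = {"a.com/page": 100, "b.com/page": 100}
print(rerank(results, clicks, impressions))
```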

2018 ◽  
Vol 7 (2.7) ◽  
pp. 372
Author(s):  
Dr Jkr Sastry ◽  
Chandu Sai Chittibomma ◽  
Thulasi Manohara Reddy Alla

Web users search for content by submitting keywords or snippets to a search engine, which collects matching content and returns it as a list of URL links. Fetching the content can take an enormous amount of time, especially when the results run into many display pages, and locating the desired content among those pages is complex. A proper indexing method helps reduce the number of display pages, speeds up processing, and reduces the size of the index space. This paper presents a non-clustered indexing method, based on hash indexing with the data stored as a heap file, that makes the entire search process fast while requiring very little storage.
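A minimal sketch, under assumptions, of the general idea of a non-clustered hash index over a heap file: records are appended in arrival order, and a separate hash table maps each keyword to the byte offsets of matching records, so a lookup never scans the file. The file format and names are illustrative, not the paper's actual scheme.

```python
HEAP_FILE = "pages.heap"
index = {}  # keyword -> list of byte offsets into the heap file

open(HEAP_FILE, "wb").close()  # start with an empty heap file for the demo

def add_page(url, text):
    # Append the record to the heap file and index its offset per keyword.
    with open(HEAP_FILE, "ab") as heap:
        offset = heap.tell()
        heap.write(f"{url}\t{text}\n".encode("utf-8"))
    for word in set(text.lower().split()):
        index.setdefault(word, []).append(offset)

def lookup(keyword):
    # Jump straight to each matching record; no sequential scan needed.
    hits = []
    with open(HEAP_FILE, "rb") as heap:
        for offset in index.get(keyword.lower(), []):
            heap.seek(offset)
            url, _, _text = heap.readline().decode("utf-8").partition("\t")
            hits.append(url)
    return hits

add_page("example.com/a", "fast hash based indexing")
add_page("example.com/b", "heap files and indexing")
print(lookup("indexing"))  # ['example.com/a', 'example.com/b']
```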


2015 ◽  
Vol 12 (1) ◽  
pp. 91-114 ◽  
Author(s):  
Víctor Prieto ◽  
Manuel Álvarez ◽  
Víctor Carneiro ◽  
Fidel Cacheda

Search engines use crawlers to traverse the Web in order to download web pages and build their indexes. Keeping these indexes up to date is an essential task to ensure the quality of search results. However, changes in web pages are unpredictable, and identifying the moment a web page changes as soon as possible and with minimal computational cost is a major challenge. In this article we present the Web Change Detection system, which, in the best case, is capable of detecting almost in real time when a web page changes. In the worst case it requires, on average, 12 minutes to detect a change on a low-PageRank web site and about one minute on a web site with high PageRank. Meanwhile, current search engines require more than a day, on average, to detect a modification to a web page (in both cases).
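A minimal sketch, assuming simple periodic polling, of the basic idea of change detection: hash the page content on each fetch and compare against the previous hash. The system described in the article works near real time; this only illustrates the underlying comparison step.

```python
import hashlib
import time
import urllib.request

def content_hash(url):
    # Fetch the page and return a digest of its raw bytes.
    with urllib.request.urlopen(url) as response:
        return hashlib.sha256(response.read()).hexdigest()

def watch(url, interval_seconds=60):
    # Poll the page and report whenever its content hash changes.
    last = content_hash(url)
    while True:
        time.sleep(interval_seconds)
        current = content_hash(url)
        if current != last:
            print(f"{url} changed at {time.ctime()}")
            last = current

# watch("https://example.com")  # poll once a minute for changes
```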


2019 ◽  
Author(s):  
Lucas van der Deijl ◽  
Antal van den Bosch ◽  
Roel Smeets

Literary history is no longer written in books alone. As literary reception thrives in blogs, Wikipedia entries, Amazon reviews, and Goodreads profiles, the Web has become a key platform for the exchange of information on literature. Although conventional printed media in the field—academic monographs, literary supplements, and magazines—may still claim the highest authority, online media presumably provide the first (and possibly the only) source for many readers casually interested in literary history. Wikipedia offers quick and free answers to readers’ questions, and the range of topics described in its entries dramatically exceeds the volume any printed encyclopedia could possibly cover. While an important share of this expanding knowledge base about literature is produced bottom-up (user based and crowd-sourced), search engines such as Google have become brokers in this online economy of knowledge, organizing information on the Web for its users. Similar to printed literary histories, search engines prioritize certain information sources over others when ranking and sorting Web pages; as such, their search algorithms create hierarchies of books, authors, and periods.


2020 ◽  
Vol 28 (3) ◽  
pp. 81-91
Author(s):  
Tetyana S. Dronova ◽  
Yana Y. Trygub

Purpose – to study the website operation and content of a travel agency, using the "Laspi" travel agency as an example, to identify its technical properties, and to propose methods for raising the web resource's position in the Yandex and Google search engines by performing SEO analysis. Design/Method/Research approach. SEO analysis of Internet resources. Findings. The promotion of a travel product depends directly on the effectiveness of the advertising tools used by travel market participants, primarily travel agents. One of the newer technologies that increases advertising effectiveness, particularly through travel agencies' web resources, is SEO technology. The authors identified technical shortcomings in the site's operation, mainly related to search query statistics, visits to the subject site, the operation of the semantic core, site improvement, the site's citation, and the number of persistent links to it on the network. It is shown that updating the site, changing its environment, and analyzing user behavior, namely adding Og Properties micro-markup, updating HTML tags, installing analytics programs, selecting iframe objects, and other activities, increase the uniqueness of the content. As a result, search engines re-crawled the site and its search results reached first place for the positions essential to the web resource. Originality/Value. Applying the proposed mechanism for raising a site's position and optimizing website operation allows search engines to bring it into the TOP of the most popular travel sites. Theoretical implications. To optimize the web resource's operation, a three-step mechanism for improving its position is proposed: a general marketing characterization of the website, SEO analysis, and the provision of recommendations. Practical implications. The research has practical value for improving the site's technical operation and raising its position in the Yandex and Google search engines. Research limitations/Future research. Further research will analyze the site after the proposed changes to its operation have been made. Paper type – empirical.
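A minimal sketch (illustrative, not the authors' audit tool) of one concrete check mentioned in the findings: scanning a page's HTML for Open Graph ("og:") meta tags, whose absence is a common shortcoming flagged in SEO audits. The sample HTML string is an assumption.

```python
from html.parser import HTMLParser

class OgTagCollector(HTMLParser):
    """Collect Open Graph meta tags (property="og:...") from HTML."""

    def __init__(self):
        super().__init__()
        self.og_tags = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        if tag == "meta" and prop.startswith("og:"):
            self.og_tags[prop] = attrs.get("content", "")

html = '<meta property="og:title" content="Laspi Travel Agency">'
collector = OgTagCollector()
collector.feed(html)
print(collector.og_tags)  # {'og:title': 'Laspi Travel Agency'}
```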


2013 ◽  
Vol 303-306 ◽  
pp. 2311-2316
Author(s):  
Hong Shen Liu ◽  
Peng Fei Wang

The structure and content of a research search engine are presented; its core technology is the analysis of web pages. The characteristics of analyzing the web pages within one website are studied: relations between the web pages that the crawler fetched at two different times can be obtained, and the information that changed between them is found easily. A new method of analyzing the web pages of one website is introduced, which analyzes pages using this changed information. The results of applying the method show that it is effective for the analysis of web pages.
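A minimal sketch, under assumptions, of the comparison step described above: given the copy of a page fetched on two different crawls, extract only the changed lines so that later analysis can focus on the new information. The snapshot strings are illustrative.

```python
import difflib

def changed_lines(old_snapshot, new_snapshot):
    """Return the lines added or removed between two crawl snapshots."""
    diff = difflib.unified_diff(old_snapshot.splitlines(),
                                new_snapshot.splitlines(),
                                lineterm="")
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

old = "headline: rain today\nstory: unchanged text"
new = "headline: sun today\nstory: unchanged text"
print(changed_lines(old, new))
# ['-headline: rain today', '+headline: sun today']
```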


A web crawler, also called a spider, automatically traverses the WWW for the purpose of web indexing. As the Web grows day by day, the number of web pages worldwide has grown massively. Search engines are essential to make this content searchable for users, and they are operated to discover particular data on the WWW. Without search engines it would be almost impossible for a person to find anything on the Web unless they already knew a specific URL address. Every search engine maintains a central repository of HTML documents in indexed form; every time a user submits a query, the search is performed against this database of indexed web pages. The size of each search engine's database depends on the pages existing on the Internet, so to increase a search engine's efficiency it is preferable to store only the most relevant and significant pages in the database.
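A minimal sketch, with illustrative relevance logic, of the filtering step described above: a crawler stores a fetched page in its repository only if the page scores above a relevance threshold, keeping the database small. The topic terms, threshold, and scoring are assumptions, not a published method.

```python
import re
import urllib.request

TOPIC_TERMS = {"search", "index", "crawler"}  # assumed topic of interest

def relevance(text):
    # Fraction of words on the page that belong to the topic vocabulary.
    words = re.findall(r"[a-z]+", text.lower())
    return sum(word in TOPIC_TERMS for word in words) / max(len(words), 1)

def crawl(urls, threshold=0.01):
    indexed = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                text = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages
        if relevance(text) >= threshold:
            indexed[url] = text  # store only sufficiently relevant pages
    return indexed

# indexed = crawl(["https://example.com"])
```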


2012 ◽  
pp. 50-65 ◽  
Author(s):  
K. Selvakuberan ◽  
M. Indra Devi ◽  
R. Rajaram

The explosive growth of the Web makes it a very useful information resource for all types of users. Today everyone accesses the Internet for various purposes, and retrieving the required information within a stipulated time is the users' main demand. The Internet also returns millions of Web pages for each and every search term, so getting interesting and relevant results from the Web becomes very difficult, and classifying Web pages into relevant categories is a current research topic. Web page classification focuses on assigning documents to different categories, which search engines use to produce their results. In this chapter we focus on different machine learning techniques and how Web pages can be classified using them. Automatic classification of Web pages using machine learning techniques is the most efficient way for search engines to provide accurate results to users. Machine learning classifiers may also be trained to protect personal details from unauthenticated users and for privacy-preserving data mining.
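A minimal sketch, assuming scikit-learn is available, of one such machine learning technique: classifying pages into categories from their text with a Naive Bayes model over TF-IDF features. The chapter surveys several techniques; the tiny training set here is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pages = [
    "latest football scores and match reports",
    "stock markets rally as rates fall",
    "playoff schedule and team standings",
    "central bank policy and bond yields",
]
labels = ["sports", "finance", "sports", "finance"]

# Vectorize page text and fit a Naive Bayes classifier in one pipeline.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(pages, labels)
print(classifier.predict(["bank rates and bond markets"]))  # ['finance']
```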


Author(s):  
Aki Vainio ◽  
Kimmo Salmenjoki

The information content of the Web has, over the last 10 years, changed from informative to communicative. Web pages, especially homepages, were the foremost places where companies, organizations, and individuals alike expressed their existence online and provided some information about themselves, such as their products, services, or artefacts they related to. In the common Web environment, search engines harvested this information and made it available and meaningful for the masses of Web users. In the early days of the Web, this factor alone justified the use of the Web as a marketing tool and as an easy way to share important information between collaborating partners.


2007 ◽  
Vol 17 (07) ◽  
pp. 2355-2361 ◽  
Author(s):  
MASSIMO MARCHIORI

The web landscape has undergone massive changes in the past years. Search engine technology, on the other hand, has not quite kept the same pace. In this article we look at the current scenario and argue how social flows can be used to build a better generation of search engines. We consider how society and technological progress have somewhat changed the rules of the game, introducing good but also bad components, and see how this situation could be modeled by search engines. Along this line of thinking, we show how the real components of interest are not just web pages but flows of information of any kind, which need to be merged: this opens up a wide range of improvements and far-looking developments, towards a new horizon of social search.
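A minimal sketch of one possible reading of "merging flows": fusing a conventional web ranking with a ranking derived from a social flow via reciprocal rank fusion. The article argues conceptually and prescribes no specific algorithm; the fusion method and inputs here are assumptions.

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked URL lists into one, rewarding items
    that rank highly in any of the input flows."""
    scores = {}
    for ranking in rankings:
        for rank, url in enumerate(ranking, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

web_flow = ["a.com", "b.com", "c.com"]       # conventional ranking
social_flow = ["c.com", "a.com", "d.com"]    # ranking from a social flow
print(reciprocal_rank_fusion([web_flow, social_flow]))
```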

