Automatically Mining Facets for Queries from Their Search Results

We propose a systematic solution, which we refer to as QDMiner, to automatically mine query facets by aggregating frequent lists contained in the top search results. More specifically, QDMiner extracts lists from free text, HTML tags, and repeat regions within the top search results, groups them into clusters according to the items they contain, and then ranks the clusters and items based on how the lists and items appear in the top results. We further evaluate the support of lists and find better query facets by modeling fine-grained similarities between lists and penalizing duplicated lists. Experimental results show that a large number of lists exist in top search results and that QDMiner can effectively mine useful query facets from them. The proposed approach is general and does not rely on any domain knowledge, so it can handle open-domain queries. Note that mining query facets differs from query recommendation: rather than suggesting alternative queries from a static resource, we extract the facets of a query from the top documents retrieved for it.
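
As a rough illustration of the list-grouping step, the sketch below clusters extracted lists into facets by item overlap and ranks facets by how many lists support them. It is a minimal approximation, not QDMiner itself: the greedy Jaccard grouping, the 0.5 threshold, and the toy lists are all assumptions, and the paper's fine-grained similarity model and duplicate-list penalty are not reproduced here.

```python
def group_lists_into_facets(lists, sim_threshold=0.5):
    """Greedily group extracted item lists into facets by overlap.

    Each input list is one list mined from the top search results
    (free text, HTML tags, or a repeat region). A list joins the
    facet whose item pool it overlaps most (Jaccard similarity
    >= sim_threshold); otherwise it seeds a new facet.
    """
    facets = []  # each facet: {"items": set_of_items, "support": n_lists}
    for lst in lists:
        items = {i.lower().strip() for i in lst}
        best, best_sim = None, 0.0
        for facet in facets:
            sim = len(items & facet["items"]) / len(items | facet["items"])
            if sim > best_sim:
                best, best_sim = facet, sim
        if best is not None and best_sim >= sim_threshold:
            best["items"] |= items
            best["support"] += 1
        else:
            facets.append({"items": set(items), "support": 1})
    # Facets supported by more source lists rank higher.
    return sorted(facets, key=lambda f: f["support"], reverse=True)

# Toy lists, as if mined from results for the query "watch brands".
mined = [
    ["Rolex", "Omega", "Seiko"],
    ["seiko", "omega", "rolex", "citizen"],
    ["home", "contact", "about us"],  # navigation noise, low support
]
for facet in group_lists_into_facets(mined):
    print(facet["support"], sorted(facet["items"]))
```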

Author(s):  
Adan Ortiz-Cordova ◽  
Bernard J. Jansen

In this research study, the authors investigate the association between external searching, which is searching on a web search engine, and internal searching, which is searching on a website. They classify 295,571 external-to-internal searching episodes, where each episode is composed of a query submitted to a web search engine followed by one or more queries submitted to a commercial website by the same user. The authors examine 891,453 queries from all episodes, of which 295,571 were external search queries and 595,882 were internal search queries. They algorithmically classify all queries into states, cluster the searching episodes into major searching configurations, and identify the most commonly occurring search patterns for external, internal, and external-to-internal searching episodes. The research implications of this study are that external sessions and internal sessions must be considered parts of a continuous search episode, and that online businesses can leverage external search information to target potential consumers more effectively.
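
The abstract does not spell out the state definitions, so the snippet below is only a plausible stand-in: a term-overlap heuristic that labels the transition between consecutive queries in an episode. The state names and rules are assumptions for illustration, not the authors' classification scheme.

```python
def classify_transition(prev_query, query):
    """Label the relation between consecutive queries in an episode.

    A crude term-overlap heuristic; the study's actual state
    definitions are richer than these four illustrative labels.
    """
    a, b = set(prev_query.lower().split()), set(query.lower().split())
    if a == b:
        return "repeat"
    if a < b:   # all previous terms kept, new terms added
        return "specialization"
    if b < a:   # terms dropped
        return "generalization"
    if a & b:   # partial term overlap
        return "reformulation"
    return "new_topic"

print(classify_transition("hotels", "cheap hotels miami"))          # specialization
print(classify_transition("cheap hotels miami", "miami flights"))   # reformulation
```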


2014 ◽  
Vol 16 (1) ◽  
Author(s):  
Eugene B. Visser ◽  
Melius Weideman

Background: Most websites, especially those with a commercial orientation, need a high search engine ranking for one or more keywords or phrases. The search engine optimisation process attempts to achieve this. Furthermore, website users expect easy navigation, interaction and transactional ability. The application of website usability principles attempts to achieve this. Ideally, designers should achieve both goals when they design websites.
Objectives: This research intended to establish a relationship between search engine optimisation and website usability in order to guide the industry. The authors found a discrepancy between the perceived roles of search engines and website usability.
Method: The authors designed three test websites, each with a different combination of usability, visibility and other attributes. They recorded and analysed the conversions and financial spending on these experimental websites. Finally, they designed a model that fuses search engine optimisation and website usability.
Results: Initially, it seemed that website usability and search engine optimisation complemented each other. However, some contradictions between the two, based on content, keywords and their presentation, emerged. Industry experts do not acknowledge these contradictions, although they agree on the existence of the individual elements. The new model highlights the complementary and the contradictory aspects.
Conclusion: The authors found no previous empirical results that could confirm or refute the role of the model. In the fast-paced competition between commercial websites, the model adds value and originality for organisations whose websites play an important role in their business.


Author(s):  
Shanfeng Zhu ◽  
Xiaotie Deng ◽  
Qizhi Fang ◽  
Weimin Zhang

Web search engines are among the most popular services for helping users find useful information on the Web. Although many studies have estimated the size and overlap of general web search engines, such estimates may not benefit ordinary searchers, who care more about the overlap of the top N (N = 10, 20 or 50) results for concrete queries than about the overlap of the engines' total index databases. In this study, we present experimental results comparing the overlap of the top N (N = 10, 20 or 50) search results from AlltheWeb, Google, AltaVista and WiseNut for the 58 most popular queries, as well as the distance of the overlapped results. These 58 queries were chosen from the WordTracker service, which records the most popular queries submitted to well-known metasearch engines such as MetaCrawler and Dogpile, and we divide them into three categories for further investigation. Through this in-depth study, we observe a number of interesting results: the overlap of the top N results retrieved by different search engines is very small; queries in different categories behave in dramatically different ways; Google, on average, has the highest overlap among these four search engines; and each search engine appears to adopt its own ranking algorithm independently.
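
A minimal sketch of the overlap measurement itself: for each pair of engines, take the top-n URLs and compute the fraction shared. The engine names match the study, but the ranked URL lists here are placeholders, and the paper's additional distance measure over overlapped results is not shown.

```python
from itertools import combinations

def top_n_overlap(results_a, results_b, n=10):
    """Fraction of the top-n URLs two engines share (0..1)."""
    a, b = set(results_a[:n]), set(results_b[:n])
    return len(a & b) / n

# Hypothetical ranked result lists for one query, keyed by engine.
engines = {
    "AlltheWeb": ["u1", "u2", "u3", "u4"],
    "Google":    ["u2", "u1", "u5", "u6"],
    "AltaVista": ["u7", "u2", "u8", "u9"],
    "WiseNut":   ["u9", "u10", "u11", "u12"],
}
for (name_a, res_a), (name_b, res_b) in combinations(engines.items(), 2):
    print(name_a, name_b, top_n_overlap(res_a, res_b, n=4))
```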


2017 ◽  
Vol 26 (06) ◽  
pp. 1730002 ◽  
Author(s):  
T. Dhiliphan Rajkumar ◽  
S. P. Raja ◽  
A. Suruliandi

Short and ambiguous queries are a major problem for search engines, leading to the retrieval of information irrelevant to the user's input. The ever-growing volume of information on the web also makes it difficult for search engines to provide the results users need. Web search engines suffer the ill effects of ambiguity because queries are matched at a literal level rather than the semantic level. In this paper, to improve search engine performance with respect to users' interests, personalization based on users' clicks and bookmarking is proposed. Modified agglomerative clustering is used in this work to cluster the results. The experimental results show that the proposed work achieves better precision, recall and F-score.
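
As a sketch of the clustering step, the snippet below clusters result snippets with ordinary agglomerative clustering over TF-IDF vectors. Standard scikit-learn clustering stands in for the paper's modified variant, and the snippets, cluster count and distance settings are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical result snippets for the ambiguous query "jaguar".
snippets = [
    "jaguar big cat habitat and diet",
    "jaguar animal facts for kids",
    "jaguar xf luxury sedan review",
    "jaguar dealership used cars",
]
vectors = TfidfVectorizer().fit_transform(snippets).toarray()
labels = AgglomerativeClustering(
    n_clusters=2, metric="cosine", linkage="average"
).fit_predict(vectors)
for label, snippet in zip(labels, snippets):
    print(label, snippet)  # animal snippets vs. car snippets
```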


2019 ◽  
Vol 1 (1) ◽  
Author(s):  
Falah Hassan Ali Al-akashi

A Shopping Search Engine (SSE) poses a unique challenge: validating the distinct items available in online marketplaces. For sellers, having a listing appear first in search results is crucial, since buyers tend to click on and buy from the listings that appear first; search engine optimization is devoted to influencing that outcome. Current shopping search platforms retrieve many irrelevant items from their indices, e.g. retrieving accessories for an item rather than the item itself, whether or not the item's price is taken into account. In our proposal, we address these drawbacks of current shopping search engines. Users also tend to move from one shop to another in search of appropriate items, and time is crucial for consumers. The main goal of this research is to combine and merge search results retrieved from several popular shopping sellers into a single list of relevant items. Experimental results showed that our approach is efficient and robust at retrieving a complete list of desired items with respect to all of a user's query keywords.
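
One common way to merge per-seller rankings into a single relevance-ordered list is positional (Borda-style) rank aggregation. The sketch below is such a baseline, not necessarily the paper's method, and the seller result lists are made up.

```python
from collections import defaultdict

def merge_listings(ranked_lists):
    """Merge per-seller rankings into one list by Borda count.

    Each input list is ordered best-first; an item earns more
    points the higher it ranks, and points accumulate across
    sellers, so items ranked well by several sellers rise to
    the top of the merged list.
    """
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking):
            scores[item] += len(ranking) - rank
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results for one query from three shopping sites.
seller_a = ["phone-x 128gb", "phone-x case", "phone-x 64gb"]
seller_b = ["phone-x 64gb", "phone-x 128gb"]
seller_c = ["phone-x 128gb", "phone-x charger"]
print(merge_listings([seller_a, seller_b, seller_c]))
```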


Author(s):  
Thomas Nicolai ◽  
Lars Kirchhof ◽  
Axel Bruns ◽  
Jason Wilson ◽  
Barry Saunders

This paper investigates self-Googling through the monitoring of users' search engine activity and adds to the few quantitative studies on this topic already in existence. We explore the phenomenon by answering the following questions: To what extent is self-Googling visible in search engine usage? Is there a measurable difference between self-Googling queries and generic search queries? To what extent do self-Googling search requests match the selected personalised Web pages? To address these questions we draw on the theory of narcissism to help define self-Googling, and present results from a 14-month online experiment using Google search engine usage data.


Author(s):  
Jennifer A. Bandos ◽  
Marc L. Resnick

Internet search engine use is notoriously challenging and frustrating to typical users. Most users are inefficient at finding what they are looking for and often give up before achieving their goals. Most commercial search engines have advanced search interfaces that are designed to facilitate increased precision and recall, but users generally avoid these due to poor usability and high perceived difficulty of use. This paper outlines three studies that investigated the strategies and effectiveness of user-generated search queries. Scenarios were formulated to encourage users to construct the single best query for each task. In general, users were unable to construct accurate queries for tasks that required compound logic. Study 1 verified the reluctance of users to use advanced search features. Study 2 investigated the use of Boolean and proximity search commands and capitalization strategies. Participants generally did not use them properly. Study 3 compared the use of basic search with advanced search when users were forced to use each one. Performance was similar but preference measures showed a significant advantage for basic search while confidence was higher for advanced search.
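
To make "compound logic" concrete, consider a task like finding pages about the jaguar as an animal, not the car, which requires nested AND/OR/NOT that participants rarely expressed correctly. The toy evaluation below is illustrative only; the query form and documents are invented, not taken from the studies.

```python
# Compound-logic need, roughly: jaguar AND (habitat OR wildlife) AND NOT car
docs = [
    "jaguar habitat in south american rainforests",
    "jaguar wildlife conservation fund",
    "new jaguar car models and prices",
]

def matches(doc: str) -> bool:
    words = set(doc.lower().split())
    return ("jaguar" in words
            and bool({"habitat", "wildlife"} & words)  # OR clause
            and "car" not in words)                    # NOT clause

print([d for d in docs if matches(d)])  # keeps the two animal pages
```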


2008 ◽  
pp. 1926-1937
Author(s):  
Shanfeng Zhu ◽  
Xiaotie Deng ◽  
Qizhi Fang ◽  
Weimin Zhang

Web search engines are among the most popular services for helping users find useful information on the Web. Although many studies have estimated the size and overlap of general web search engines, such estimates may not benefit ordinary searchers, who care more about the overlap of the top N (N = 10, 20 or 50) results for concrete queries than about the overlap of the engines' total index databases. In this study, we present experimental results comparing the overlap of the top N (N = 10, 20 or 50) search results from AlltheWeb, Google, AltaVista and WiseNut for the 58 most popular queries, as well as the distance of the overlapped results. These 58 queries were chosen from the WordTracker service, which records the most popular queries submitted to well-known metasearch engines such as MetaCrawler and Dogpile, and we divide them into three categories for further investigation. Through this in-depth study, we observe a number of interesting results: the overlap of the top N results retrieved by different search engines is very small; queries in different categories behave in dramatically different ways; Google, on average, has the highest overlap among these four search engines; and each search engine appears to adopt its own ranking algorithm independently.


2014 ◽  
Vol 2 (2) ◽  
pp. 103-112 ◽  
Author(s):  
Taposh Kumar Neogy ◽  
Harish Paruchuri

The essence of a web page is an inherently subjective matter, one built on behaviors, interests, and intelligence. There are many reasons why web pages are critical to the modern world; the matter cannot be overemphasized. The meteoric growth of the internet is one of the most potent factors making it hard for search engines to provide actionable results. Search engines store web pages in classified directories. To classify these pages, some engines rely on the expertise of real people; most pages, however, are classified using automated means, although the human factor remains dominant in the engines' success. From experimental results, we can deduce that the most effective way to automate web page classification for search engines is via the integration of machine learning.
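
A minimal sketch of what automating page classification with machine learning can look like: a TF-IDF plus logistic-regression text classifier assigning pages to directory categories. The categories, training texts, and model choice are illustrative assumptions, not the authors' setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled pages: page text -> directory category.
pages = [
    ("live scores and match highlights", "sports"),
    ("league standings and transfer news", "sports"),
    ("stock markets close higher today", "finance"),
    ("central bank raises interest rates", "finance"),
]
texts, labels = zip(*pages)

# Vectorize the text and fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["quarterly earnings beat forecasts"]))  # -> finance
```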


2019 ◽  
Vol 71 (3) ◽  
pp. 310-324
Author(s):  
Dirk Lewandowski ◽  
Sebastian Sünkler

Purpose: The purpose of this paper is to describe a new method to improve the analysis of search engine results by considering the provider level as well as the domain level. This approach is tested by conducting a study using queries on the topic of insurance comparisons.
Design/methodology/approach: The authors conducted an empirical study that analyses the results of search queries aimed at comparing insurance companies. The authors used a self-developed software system that automatically queries commercial search engines and automatically extracts the content of the returned result pages for further data analysis. The data analysis was carried out using the KNIME Analytics Platform.
Findings: Google's top search results are served by only a few providers that frequently appear in these results. The authors show that some providers operate several domains on the same topic and that these domains appear for the same queries in the result lists.
Research limitations/implications: The authors demonstrate the feasibility of this approach and draw conclusions for further investigations from the empirical study. However, the study is a limited use case based on a limited number of search queries.
Originality/value: The proposed method allows large-scale analysis of the composition of the top results from commercial search engines. It allows using valid empirical data to determine what users actually see on the search engine result pages.
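
The core of provider-level analysis can be sketched as mapping each result URL's domain to its operating provider before counting. The snippet below does this with a hand-made provider map; the domain-to-provider entries and URLs are hypothetical, the eTLD+1 extraction is deliberately naive, and the authors' actual system (with its KNIME-based analysis) is far more complete.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical mapping: several domains operated by one provider.
PROVIDERS = {
    "check24.de": "Check24",
    "check24.net": "Check24",
    "verivox.de": "Verivox",
}

def provider_counts(result_urls):
    """Aggregate result URLs at the provider level, not the domain level."""
    counts = Counter()
    for url in result_urls:
        host = urlparse(url).netloc.lower()
        domain = ".".join(host.split(".")[-2:])  # naive eTLD+1
        counts[PROVIDERS.get(domain, domain)] += 1
    return counts

urls = [
    "https://www.check24.de/kfz-versicherung/",
    "https://www.check24.net/versicherungen/",
    "https://www.verivox.de/kfz-versicherung/",
]
print(provider_counts(urls))  # Check24 appears twice via two domains
```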

