The Perspectives of Improving Web Search Engine Quality

Author(s):  
Jengchung V. Chen ◽  
Wen-Hsiang Lu ◽  
Kuan-Yu He ◽  
Yao-Sheng Chang

With the rapid growth of the Web, users often suffer from information overload: many existing search engines respond to queries with large numbers of nonrelevant documents that merely contain the query terms, a consequence of the conventional search mechanism of keyword matching. Both users and search engine developers have long expected search engines to reduce information overload by understanding user goals more clearly. In this chapter, we introduce past research in Web search and current trends focusing on how to improve search quality from the different perspectives of “what”, “how”, “where”, “when”, and “why”. We also briefly introduce effective improvements to search quality based on link-structure-based search algorithms, such as PageRank and HITS. At the end of this chapter, we present our proposed approach to improving search quality, which employs syntactic structures (verb-object pairs) to automatically identify potential user goals from search-result snippets. We believe that understanding user goals more clearly, and thereby reducing information overload, will become one of the major developments in commercial search engines, since the amount of information and resources continues to increase rapidly and user needs are becoming more and more diverse.
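The link-structure-based algorithms mentioned in this abstract can be illustrated with a minimal power-iteration sketch of PageRank. The link graph below is a hypothetical toy example, not data from the chapter; the damping factor of 0.85 follows the original PageRank paper.

```python
# Minimal PageRank via power iteration on a toy link graph.
# `links` maps each page to the pages it links to (hypothetical example).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform distribution
    for _ in range(iterations):
        # Every page keeps a baseline (1 - d) / n, then receives shares
        # of rank from the pages that link to it.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

ranks = pagerank(links)
best = max(ranks, key=ranks.get)  # the page with the most incoming rank
```

Here page "C", which is linked to by three of the four pages, accumulates the highest score; HITS differs in that it iterates two mutually reinforcing scores (hubs and authorities) instead of one.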

Author(s):  
Aslihan Nasir ◽  
Süphan Nasir

Today, as business becomes ever more challenging, brands have become the main assets of many companies. Fierce competition forces companies to differentiate their products from those of competitors in the market. However, it is extremely difficult to create this differentiation based on the functional attributes of products, since advanced technology makes it possible for companies to imitate those attributes. Hence, marketers have begun to create personalities for their brands in order to be more appealing to consumers. Brand personality is defined as “the set of human characteristics associated with a brand”, and it is asserted that brand personality leads to differentiation in terms of consumer perceptions and preferences. Today, millions of people use search engines in order to reach the most relevant information. Since search engines, as the senior actors of the online world, provide similar services, it is crucial for them to create differentiation. Google is the dominant search engine brand, and in this paper its success is examined using the brand personality scale of Aaker (1997). This study seeks to identify the brand personality dimensions that search engine companies create in the minds of Internet users, using past research on brand personality scales as a guide. It also aims to determine the distinct brand personality dimensions of Google as the most preferred and used search engine. Google is found to be perceived as the most “competent” search engine brand. Furthermore, based on the MANOVA results, the three search engines show statistically significant differences only on the “competence” dimension. “Sincerity” and “excitement” are the other two dimensions that significantly differentiate Google from both MSN and Yahoo.


2021 ◽  
pp. 089443932110068
Author(s):  
Aleksandra Urman ◽  
Mykola Makhortykh ◽  
Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries about the 2020 U.S. presidential primary elections under default—that is, nonpersonalized—conditions. To do so, we utilize an algorithmic auditing methodology that uses virtual agents to conduct large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries “us elections,” “donald trump,” “joe biden,” and “bernie sanders” on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines, as well as multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is partly decided by chance, due to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling considering that search results are highly trusted by the public and, as previous research has demonstrated, can shift the opinions of undecided voters.
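Discrepancies of the kind reported here are commonly quantified by comparing the top-N result lists that different agents receive for the same query. A minimal sketch, using the Jaccard coefficient over hypothetical result domains (not the study's actual data or metric), looks like this:

```python
def jaccard(results_a, results_b):
    """Jaccard similarity of two top-N result lists (order ignored)."""
    set_a, set_b = set(results_a), set(results_b)
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical top-5 results seen by two virtual agents issuing the same query
# at the same time to the same search engine.
agent_1 = ["nytimes.com", "cnn.com", "foxnews.com", "wikipedia.org", "reuters.com"]
agent_2 = ["cnn.com", "wikipedia.org", "breitbart.com", "nytimes.com", "apnews.com"]

overlap = jaccard(agent_1, agent_2)  # 3 shared domains out of 7 distinct
```

An overlap well below 1.0 for identically configured agents is exactly the chance-driven variation the abstract describes; rank-aware measures (e.g., rank-biased overlap) refine the same idea.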


Author(s):  
Adan Ortiz-Cordova ◽  
Bernard J. Jansen

In this research study, the authors investigate the association between external searching, which is searching on a web search engine, and internal searching, which is searching on a website. They classify 295,571 external-to-internal searches, where each search is composed of a query submitted to a web search engine followed by one or more queries submitted to a commercial website by the same user. The authors examine 891,453 queries from all searches, of which 295,571 were external search queries and 595,882 were internal search queries. They algorithmically classify all queries into states, cluster the searching episodes into major searching configurations, and identify the most commonly occurring search patterns for external, internal, and external-to-internal searching episodes. The research implications of this study are that external sessions and internal sessions must be considered as part of a continuous search episode, and that online businesses can leverage external search information to target potential consumers more effectively.
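The grouping of searching episodes into common configurations can be sketched by encoding each episode as a sequence of query states and counting the most frequent sequences. The state labels below ("E" for an external query, "I" for an internal query) and the episodes are illustrative stand-ins, not the authors' actual classification scheme:

```python
from collections import Counter

# Hypothetical episodes: each is a sequence of query states, where "E" marks
# an external (web search engine) query and "I" an internal (site search) query.
episodes = [
    ("E", "I"),
    ("E", "I", "I"),
    ("E", "I"),
    ("E", "E", "I"),
    ("E", "I"),
]

# Count how often each state sequence occurs to surface the dominant
# searching configurations.
pattern_counts = Counter(episodes)
most_common_pattern, count = pattern_counts.most_common(1)[0]
```

In this toy data the single external query followed by a single internal query dominates; at the study's scale, such counts are what distinguish major configurations from rare ones.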


Author(s):  
Xiannong Meng

This chapter surveys the technologies involved in a Web search engine, with an emphasis on performance analysis. The aspects of a general-purpose search engine covered in this survey include system architectures, information retrieval theories as the basis of Web search, indexing and ranking of Web documents, relevance feedback and machine learning, personalization, and performance measurements. The objectives of the chapter are to review the theories and technologies pertaining to Web search, and to help readers understand how Web search engines work and how to use them more effectively and efficiently.


2011 ◽  
Vol 10 (05) ◽  
pp. 913-931 ◽  
Author(s):  
XIANYONG FANG ◽  
CHRISTIAN JACQUEMIN ◽  
FRÉDÉRIC VERNIER

Since the results from Semantic Web search engines are highly structured XML documents, they cannot be efficiently visualized with traditional explorers. Therefore, the Semantic Web calls for a new generation of search query visualizers that can rely on document metadata. This paper introduces such a visualization system called WebContent Visualizer that is used to display and browse search engine results. The visualization is organized into three levels: (1) Carousels contain documents with the same ranking, (2) carousels are piled into stacks, one for each date, and (3) these stacks are organized along a meta-carousel to display the results for several dates. Carousel stacks are piles of local carousels with increasing radii to visualize the ranks of classes. For document comparison, colored links connect documents between neighboring classes on the basis of shared entities. Based on these techniques, the interface is made of three collaborative components: an inspector window, a visualization panel, and a detailed dialog component. With this architecture, the system is intended to offer an efficient way to explore the results returned by Semantic Web search engines.
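The three-level organization described above (documents grouped by rank into carousels, carousels piled into one stack per date, stacks arranged along a meta-carousel) can be sketched as a simple containment hierarchy. The class and field names are illustrative, not the WebContent Visualizer's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Carousel:
    rank: int                                   # all documents here share this ranking
    documents: list = field(default_factory=list)

@dataclass
class Stack:
    date: str                                   # one stack per result date
    carousels: list = field(default_factory=list)  # radius grows with rank

@dataclass
class MetaCarousel:
    stacks: list = field(default_factory=list)  # one stack per date

# A tiny hypothetical result set covering two dates.
meta = MetaCarousel(stacks=[
    Stack(date="2011-05-01",
          carousels=[Carousel(rank=1, documents=["doc1", "doc2"]),
                     Carousel(rank=2, documents=["doc3"])]),
    Stack(date="2011-05-02",
          carousels=[Carousel(rank=1, documents=["doc4"])]),
])
```

Shared-entity links between documents in neighboring carousels would then be drawn by comparing the `documents` lists of adjacent `Carousel` objects.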


2019 ◽  
Vol 49 (5) ◽  
pp. 707-731 ◽  
Author(s):  
Malte Ziewitz

When measures come to matter, those measured find themselves in a precarious situation. On the one hand, they have a strong incentive to respond to measurement so as to score a favourable rating. On the other hand, too much of an adjustment runs the risk of being flagged and penalized by system operators as an attempt to ‘game the system’. Measures, the story goes, are most useful when they depict those measured as they usually are and not how they intend to be. In this article, I explore the practices and politics of optimization in the case of web search engines. Drawing on materials from ethnographic fieldwork with search engine optimization (SEO) consultants in the United Kingdom, I show how maximizing a website’s visibility in search results involves navigating the shifting boundaries between ‘good’ and ‘bad’ optimization. Specifically, I am interested in the ethical work performed as SEO consultants artfully arrange themselves to cope with moral ambiguities provoked and delegated by the operators of the search engine. Building on studies of ethics as a practical accomplishment, I suggest that the ethicality of optimization has itself become a site of governance and contestation. Studying such practices of ‘being ethical’ not only offers opportunities for rethinking popular tropes like ‘gaming the system’, but also draws attention to often-overlooked struggles for authority at the margins of contemporary ranking schemes.


Author(s):  
Michael Zimmer

Web search engines have emerged as a ubiquitous and vital tool for successful navigation of the growing online informational sphere. As Google puts it, the goal is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. Meanwhile, the so-called Web 2.0 phenomenon has blossomed based largely on faith in the power of the networked masses to capture, process, and mash up one's personal information flows in order to make them more useful, social, and meaningful. The (inevitable) combining of Google's suite of information-seeking products with Web 2.0 infrastructures -- what I call Search 2.0 -- intends to capture the best of both technical systems for the touted benefit of users. By capturing the information flowing across Web 2.0, search engines can better predict users' needs and wants, and deliver more relevant and meaningful results. While intended to enhance mobility in the online sphere, this paper argues that the drive for Search 2.0 necessarily requires widespread monitoring and aggregation of users' online personal and intellectual activities, bringing with it particular externalities, such as threats to informational privacy while online.


2022 ◽  
Vol 12 (1) ◽  
pp. 0-0

In today's fast-growing world of technology, where everything is available in a single click, user expectations have increased over time. In the era of search engines, where Google and Yahoo provide search through text, voice, and images, handling all of these operations has become complex, demands a great deal of data storage, and is time consuming. In the proposed image-retrieval search engine, the user submits a query image, which is then matched against the stored template images. The proposed approach accepts input images ranging from 15% to 100% accuracy when retrieving the image intended by the user, and it is found that, owing to the efficiency of the applied algorithm, the retrieved images have the same accuracy in all cases, irrespective of the accuracy of the input query image. This implementation is useful in fields such as forensics, defense, and medical diagnostic systems.
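The template-matching retrieval described above can be sketched with a sum-of-squared-differences comparison between the query image and each stored template. The tiny grayscale arrays below are stand-ins for real images, and SSD is one common matching criterion; the abstract does not specify the paper's exact algorithm:

```python
def ssd(image_a, image_b):
    """Sum of squared pixel differences between two same-sized grayscale images."""
    return sum((pa - pb) ** 2
               for row_a, row_b in zip(image_a, image_b)
               for pa, pb in zip(row_a, row_b))

# Hypothetical 2x2 grayscale templates stored by the engine.
templates = {
    "cat":  [[10, 20], [30, 40]],
    "dog":  [[200, 180], [160, 140]],
    "bird": [[90, 90], [90, 90]],
}

# A degraded query image: close to the "dog" template, with noise added.
query = [[195, 185], [155, 145]]

# Retrieve the template with the smallest difference from the query.
best_match = min(templates, key=lambda name: ssd(query, templates[name]))
```

Because the match is decided by the *smallest* difference rather than an absolute threshold, a noisy query can still retrieve the intended template, which is consistent with the abstract's claim that retrieval accuracy holds across degraded query images.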


2017 ◽  
Author(s):  
Xi Zhu ◽  
Xiangmiao Qiu ◽  
Dingwang Wu ◽  
Shidong Chen ◽  
Jiwen Xiong ◽  
...  

BACKGROUND Electronic health practices such as apps and software rely on web search engines because of their convenience for obtaining information, so the success of electronic health is linked to the success of web search engines in the field of health. Yet the reliability of information in search engine results remains to be evaluated, and a detailed analysis can reveal shortcomings and offer guidance. OBJECTIVE To assess the reliability of information related to women with epilepsy in the results of the main search engines in China. METHODS Six physicians conducted searches every week. The search keywords were one of four antiepileptic drugs (valproate acid/oxcarbazepine/levetiracetam/lamotrigine) plus "huaiyun" or "renshen", both of which mean pregnancy in Chinese. The searches were conducted on different devices (computer/cellphone) and different engines (Baidu/Sogou/360). The top ten results of every search result page were included. Two physicians classified every result into one of nine categories according to its content and also evaluated its reliability. RESULTS A total of 16,411 search results were included. 85.1% of web pages contained advertisements, and 55% were categorized as question-and-answer pages according to their content. Only 9% of the search results were reliable, 50.7% were partly reliable, and 40.3% were unreliable. As the rank of the search results increased, the amount of advertisement and the proportion of unreliable results increased. All content from hospital websites was unreliable, while all content from academic publishing was reliable. CONCLUSIONS Several first principles must be emphasized to further the use of web search engines in the field of healthcare. First, identifying registered physicians and developing an efficient system to guide patients to physicians would guarantee the quality of the information provided. Second, the relevant authorities should restrict excessive advertisement sales in the healthcare area through specific regulations, to avoid negative impacts on patients. Third, information from hospital websites should be judged carefully before being embraced wholeheartedly.


2016 ◽  
Vol 11 (3) ◽  
pp. 108
Author(s):  
Simon Briscoe

A Review of: Eysenbach, G., Tuische, J., & Diepgen, T.L. (2001). Evaluation of the usefulness of Internet searches to identify unpublished clinical trials for systematic reviews. Medical Informatics and the Internet in Medicine, 26(3), 203-218. http://dx.doi.org/10.1080/14639230110075459

Objective – To consider whether web searching is a useful method for identifying unpublished studies for inclusion in systematic reviews.

Design – Retrospective web searches using the AltaVista search engine were conducted to identify unpublished studies – specifically, clinical trials – for systematic reviews which did not use a web search engine.

Setting – The Department of Clinical Social Medicine, University of Heidelberg, Germany.

Subjects – n/a

Methods – Pilot testing of 11 web search engines was carried out to determine which could handle complex search queries. Pre-specified search requirements included the ability to handle Boolean and proximity operators, and truncation searching. A total of seven Cochrane systematic reviews were randomly selected from the Cochrane Library Issue 2, 1998, and their bibliographic database search strategies were adapted for the web search engine AltaVista. Each adaptation combined search terms for the intervention, problem, and study type in the systematic review. Hints to planned, ongoing, or unpublished studies retrieved by the search engine, which were not cited in the systematic reviews, were followed up by visiting websites and contacting authors for further details when required. The authors of the systematic reviews were then contacted and asked to comment on the potential relevance of the identified studies.

Main Results – Hints to 14 unpublished and potentially relevant studies, corresponding to 4 of the 7 randomly selected Cochrane systematic reviews, were identified. Of these 14 studies, 2 were considered irrelevant to the corresponding systematic review by the systematic review authors, and the relevance of a further 3 could not be clearly ascertained, leaving 9 studies considered relevant to a systematic review. In addition to this main finding, the pilot study found that AltaVista was the only search engine able to handle the complex searches required to search for unpublished studies.

Conclusion – Web searches using a search engine have the potential to identify studies for systematic reviews, but web search engines have considerable limitations which impede the identification of studies.

