The self-Googling phenomenon: Investigating the performance of personalized information resources

Author(s):  
Thomas Nicolai ◽  
Lars Kirchhof ◽  
Axel Bruns ◽  
Jason Wilson ◽  
Barry Saunders

This paper investigates self-Googling through the monitoring of users' search engine activities and adds to the few existing quantitative studies on the topic. We explore the phenomenon by answering the following questions: To what extent is self-Googling visible in search engine usage? Is any significant difference measurable between self-Googling queries and generic search queries? To what extent do self-Googling search requests match the selected personalised Web pages? To address these questions we draw on the theory of narcissism to help define self-Googling, and we present the results of a 14-month online experiment using Google search engine usage data.

2019 ◽  
Author(s):  
Jingchun Fan ◽  
Jean Craig ◽  
Na Zhao ◽  
Fujian Song

BACKGROUND Increasingly, people seek health information on the Internet, in particular information on diseases that require intensive self-management, such as diabetes. However, the Internet is largely unregulated, and online health information may not be credible. OBJECTIVE To assess the quality of online information on diabetes identified from the Internet. METHODS We used the single term "diabetes", or the equivalent Chinese characters, to search Google and Baidu respectively. The first 50 websites retrieved from each of the two search engines were screened for eligibility using pre-determined inclusion and exclusion criteria. Included websites were assessed on four domains: accessibility, content coverage, validity and readability. RESULTS We included 26 websites from the Google search engine and 34 from the Baidu search engine. There were significant differences in website provider (P<0.0001), but not in targeted population (P=0.832) or publication type (P=0.378), between the two search engines. Website accessibility did not differ statistically significantly between the two search engines, although there were significant differences in items concerning website content coverage. There was no statistically significant difference in website validity between the Google and Baidu search engines (mean DISCERN score 3.3 vs 2.9, P=0.156). Readability appraisal of the English websites showed that Flesch Reading Ease scores ranged from 23.1 to 73.0 and Flesch-Kincaid Grade Level scores ranged from 5.7 to 19.6. CONCLUSIONS The content coverage of health information for patients with diabetes from the English search engine tended to be more comprehensive than that from the Chinese search engine. There was a lack of websites provided by health organisations in China. The quality of online health information for people with diabetes needs to be improved to bridge the knowledge gap between website provision and public demand.
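The two readability measures reported above are simple formulas over sentence, word, and syllable counts. The sketch below computes both; the syllable counter is a rough vowel-group heuristic (an assumption — published readability tools use dictionary-based syllable counts and will give somewhat different scores).

```python
import re

def count_syllables(word: str) -> int:
    # crude heuristic: one syllable per contiguous vowel group
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text: str) -> tuple:
    # Flesch Reading Ease and Flesch-Kincaid Grade Level
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    ease = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    grade = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    return round(ease, 1), round(grade, 1)

ease, grade = flesch_scores(
    "Diabetes is a chronic disease. It requires daily self-management.")
```

Lower Reading Ease scores and higher Grade Level scores both indicate harder text, which is why the reported range (Ease down to 23.1, Grade up to 19.6) suggests some diabetes websites demand college-level reading skills.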


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mahdi Zeynali Tazehkandi ◽  
Mohsen Nowkarizi

Purpose The purpose was to evaluate the effectiveness of Google (an international search engine) and of Parsijoo, Rismoon, and Yooz (Persian search engines). Design/methodology/approach In this research, the Google search engine, as an international search engine, and three local ones, Parsijoo, Rismoon, and Yooz, were selected for evaluation. Likewise, 32 subject headings were selected from the Persian Subject Headings List, and simulated work tasks were assigned based on them. A total of 192 students from Ferdowsi University of Mashhad were asked to search for the information needed for the simulated work tasks in the selected search engines and to copy the relevant website URLs into the search form. Findings The findings indicated that Google, Parsijoo, Rismoon, and Yooz differed significantly in precision, recall, and normalized discounted cumulative gain (NDCG). There was also a significant difference in the effectiveness (the average of precision, recall, and NDCG) of the four search engines in retrieving Persian resources. Practical implications Users of a more effective search engine will retrieve more relevant documents. Google was the most effective at retrieving Persian resources and is therefore recommended. Originality/value In this research, for the first time, Google has been compared with local Persian search engines using a new approach (simulated work tasks).
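NDCG, one of the three effectiveness measures averaged above, rewards placing relevant results near the top of the ranking by applying a logarithmic discount per rank position. A minimal sketch, with hypothetical relevance judgments standing in for the URLs participants copied:

```python
import math

def dcg(relevances):
    # discounted cumulative gain: log2 discount, rank positions from 1
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    # normalize against the ideal (descending-relevance) ordering
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# hypothetical judgments for the top-5 results of one simulated work
# task (1 = a participant copied this URL as relevant, 0 = not)
ranked = [1, 0, 1, 1, 0]
score = ndcg(ranked)
```

An engine that returned the same three relevant pages in positions 1-3 would score a perfect 1.0; the measure therefore separates engines that merely find relevant Persian pages from engines that also rank them well.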


Author(s):  
Adan Ortiz-Cordova ◽  
Bernard J. Jansen

In this research study, the authors investigate the association between external searching, which is searching on a web search engine, and internal searching, which is searching on a website. They classify 295,571 external-to-internal searches, where each search is composed of a query submitted to a web search engine followed by one or more queries submitted to a commercial website by the same user. The authors examine 891,453 queries from all searches, of which 295,571 were external search queries and 595,882 were internal search queries. They algorithmically classify all queries into states, cluster the searching episodes into major searching configurations, and identify the most commonly occurring search patterns for external, internal, and external-to-internal searching episodes. The research implications of this study are that external sessions and internal sessions must be considered as parts of a continuous search episode, and that online businesses can leverage external search information to target potential consumers more effectively.
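Classifying query transitions into states can be done with simple term-overlap rules. The sketch below labels the transition from an external query to the first internal query of the same episode; the state names follow common query-reformulation taxonomies and are an assumption — the exact scheme used in the study may differ.

```python
def transition_state(external_query: str, internal_query: str) -> str:
    # compare the term sets of the two queries
    ext = set(external_query.lower().split())
    inte = set(internal_query.lower().split())
    if ext == inte:
        return "identical"
    if inte < ext:
        return "generalization"   # internal query drops terms
    if ext < inte:
        return "specialization"   # internal query adds terms
    if ext & inte:
        return "reformulation"    # partial term overlap
    return "new"                  # no terms in common

state = transition_state("red running shoes", "running shoes")
```

Aggregating such per-transition states over hundreds of thousands of episodes is what allows the dominant external-to-internal search patterns to emerge.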


Author(s):  
Oğuzhan Menemencioğlu ◽  
İlhami Muharrem Orak

The Semantic Web aims to produce machine-readable data and to cope with large amounts of data. The most important tool for accessing data on the Web is the search engine. Traditional search engines are insufficient in the face of the volume of data contained in existing web pages; semantic search engines extend traditional engines and overcome these difficulties. This paper summarizes the Semantic Web, the concepts and infrastructure of traditional and semantic search engines, and details semantic search approaches. A summary of the literature is provided, touching on trends in the types of applications and the application areas addressed. Based on data for two different years, the trends on these points are analyzed and the impact of the changes is discussed. The analysis shows that the Semantic Web continues to evolve and that new applications and areas keep emerging. Multimedia retrieval is a new area within semantic search; hence, multimedia retrieval approaches are discussed, and both text and multimedia retrieval are analyzed within semantic search.


2019 ◽  
Vol 16 (9) ◽  
pp. 3712-3716
Author(s):  
Kailash Kumar ◽  
Abdulaziz Al-Besher

This paper examines the overlap of the results retrieved by three major search engines, namely Google, Yahoo and Bing. A rigorous analysis of the overlap among these search engines was conducted on 100 random queries. The overlap of the first ten web page results (i.e., one hundred results from each search engine) was considered, taking only non-sponsored results into account. Search engines have their own update frequencies and rank results by their own relevance criteria; moreover, sponsored-search advertisers differ across search engines, and no single search engine can index all Web pages. In this research paper, the overlap analysis was carried out between October 1, 2018 and October 31, 2018 among Google, Yahoo and Bing. A framework was built in Java to analyze the overlap among these search engines. The framework eliminates the common results and merges them into a unified list; it also uses a ranking algorithm to re-rank the search engine results and display them back to the user.
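The core of such an overlap analysis is a set comparison of the URLs each engine returns, followed by a de-duplicating merge. A minimal sketch (in Python rather than the Java of the original framework), with hypothetical result lists:

```python
# hypothetical top results for one query from each engine
google = ["a.com", "b.com", "c.com", "d.com"]
yahoo  = ["b.com", "c.com", "e.com", "f.com"]
bing   = ["a.com", "c.com", "f.com", "g.com"]

def overlap(results1, results2):
    # fraction of results1's URLs that results2 also returned
    return len(set(results1) & set(results2)) / len(results1)

def merge(*result_lists):
    # unified list: keep the first occurrence of each URL, drop duplicates
    seen, unified = set(), []
    for results in result_lists:
        for url in results:
            if url not in seen:
                seen.add(url)
                unified.append(url)
    return unified

unified = merge(google, yahoo, bing)
```

Averaging such pairwise overlap fractions over all 100 queries gives the per-engine-pair overlap figures, and the merged list is the input to the re-ranking step.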


2021 ◽  
Author(s):  
Srihari Vemuru ◽  
Eric John ◽  
Shrisha Rao

Humans can easily parse and find answers to complex queries such as "What was the capital of the country of the discoverer of the element which has atomic number 1?" by breaking them up into small pieces, querying these appropriately, and assembling a final answer. However, contemporary search engines lack such capability and fail to handle even slightly complex queries. Search engines process queries by identifying keywords and searching against them in knowledge bases or indexed web pages. The results are, therefore, dependent on the keywords and how well the search engine handles them. In our work, we propose a three-step approach called parsing, tree generation, and querying (PTGQ) for effective searching of larger and more expressive queries of potentially unbounded complexity. PTGQ parses a complex query and constructs a query tree where each node represents a simple query. It then processes the complex query by recursively querying a back-end search engine, going over the corresponding query tree in postorder. Using PTGQ makes sure that the search engine always handles a simpler query containing very few keywords. Results demonstrate that PTGQ can handle queries of much higher complexity than standalone search engines.
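The postorder evaluation at the heart of PTGQ can be sketched as a query tree whose leaves are simple queries and whose inner nodes substitute their child's answer into a template. The lookup table below stands in for the back-end search engine and is purely hypothetical:

```python
# hypothetical back-end: maps a simple query to its answer
FACTS = {
    "element with atomic number 1": "hydrogen",
    "discoverer of hydrogen": "Henry Cavendish",
    "country of Henry Cavendish": "England",
    "capital of England": "London",
}

class QueryNode:
    def __init__(self, template, child=None):
        self.template = template  # e.g. "capital of {}"
        self.child = child        # leaf nodes (simple queries) have no child

    def answer(self):
        # postorder: resolve the child first, then substitute its answer
        # into this node's template and issue that simple query
        if self.child is None:
            return FACTS[self.template]
        return FACTS[self.template.format(self.child.answer())]

tree = QueryNode("capital of {}",
        QueryNode("country of {}",
         QueryNode("discoverer of {}",
          QueryNode("element with atomic number 1"))))
```

Each call to `FACTS` corresponds to one simple back-end query, so the search engine never sees the full complex question — only a chain of few-keyword lookups.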


Author(s):  
Pavel Šimek ◽  
Jiří Vaněk ◽  
Jan Jarolímek

The majority of Internet users use the global network to search for information using fulltext search engines such as Google, Yahoo!, or Seznam. Web presentation operators try, with the help of different optimization techniques, to reach the top places in fulltext search engine results. Herein lies the great importance of Search Engine Optimization and Search Engine Marketing, because typical users usually try only the links on the first few pages of fulltext search results for given keywords, and in catalogs they primarily use hierarchically higher-placed links in each category. The key to success is the application of optimization methods that deal with keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backlinks. The process is demanding and long-lasting, with no guaranteed outcome. A website operator without advanced analytical tools cannot identify the contribution of the individual documents of which the entire website consists. If web presentation operators want an overview of their documents and of the website as a whole, it is appropriate to quantify these positions in a specific way for specific keywords. This is the purpose of quantifying the competitive value of documents, which in turn yields the global competitive value of a website. Quantification of competitive values is performed on a specific fulltext search engine, and the results can differ, and often do, between engines. According to published reports by the ClickZ agency and Market Share, Google is, by number of searches among English-speaking users, the most widely used search engine, with a market share of more than 80%. The overall procedure for quantifying competitive values is common to all engines; however, the initial step, the analysis of keywords, depends on the choice of fulltext search engine.


Author(s):  
Pratik C. Jambhale

Search engine optimization is a technique for placing a web document among the top search results of a search engine. A web presence is not only an easy way for a company to reach its target users; it can also be profitable, precisely because users mostly search with keywords describing their needs rather than searching for the company name, so if the company's website appears in the top positions, the page can turn a profit. This work describes the tweaks for bringing a page to the top positions in Google by increasing its PageRank, which can result in improved visibility and profitable deals for a business. Google has proven the most user-friendly search engine for all users, giving user-oriented results. In addition, many other search engines follow Google's search patterns, so we have concentrated on it: if a page is registered with Google, it is displayed on most other search engines as well.


Compiler ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 71
Author(s):  
Aris Wahyu Murdiyanto ◽  
Adri Priadana

Keyword research is one of the essential activities in Search Engine Optimization (SEO). One technique in keyword research is to find out how many article titles on websites indexed by the Google search engine contain a particular keyword, the so-called "allintitle" count. Moreover, search engines are also able to provide keyword suggestions. Getting keyword suggestions and allintitle counts is not effective, efficient, or economical if done manually for relatively extensive keyword research; it takes a long time to decide whether a keyword needs to be optimized. Based on these problems, this study aimed to analyze the implementation of the web scraping technique to obtain relevant keyword suggestions from the Google search engine, together with the corresponding allintitle counts, automatically. The data used in this experiment consist of ten keywords, each of which generates at most ten keyword suggestions; ten keywords therefore produce at most 100 keyword suggestions and allintitle counts. Based on the evaluation, the technique achieved an accuracy of 100%, indicating that it can be applied to obtain keyword suggestions and allintitle counts from the Google search engine with outstanding accuracy.
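The scraping step boils down to issuing an `allintitle:` query and extracting the result-count string from the returned page. The sketch below parses a simplified stand-in for a downloaded results page; Google's real markup changes often and is not publicly specified, so the `result-stats` element and the "About N results" wording are assumptions that any scraper must keep up to date.

```python
import re

def allintitle_query(keyword: str) -> str:
    # the query string sent to the search engine for a given keyword
    return f'allintitle:"{keyword}"'

def parse_allintitle_count(html: str) -> int:
    # assumed markup: <div id="result-stats">About N results</div>
    match = re.search(r'id="result-stats"[^>]*>About ([\d,]+) results', html)
    return int(match.group(1).replace(",", "")) if match else 0

# simplified stand-in for a fetched Google results page
sample = '<div id="result-stats">About 12,300 results</div>'
count = parse_allintitle_count(sample)
```

A low allintitle count relative to a keyword's search volume is the usual signal that the keyword is worth optimizing for, which is why automating this lookup across 100 suggestions matters.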


2019 ◽  
Author(s):  
Muhammad Ilham Verardi Pradana

Thanks to the existence of search engines, all information and data can easily be found on the Internet, and the search engine users rely on most is Google. Google remains the most popular search engine for finding information available on the Internet. However, the results Google provides do not always give us what we want: Google simply displays results based on the keywords we type, so it sometimes shows negative content, such as pornography and porn sites, that appears related to the keyword, whether through the title or otherwise. In this paper, we implement "DNS SEHAT" to pass along clients' request queries so that the Google search engine on the client side provides more relevant search results without any negative content.

