What Do You Need to Know to Use a Search Engine? Why We Still Need to Teach Research Skills

AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 61-70 ◽  
Author(s):  
Daniel M. Russell

For the vast majority of queries (for example, navigation, simple fact lookup, and others), search engines do extremely well. Their ability to quickly provide answers to queries is a remarkable testament to the power of many of the fundamental methods of AI. They also highlight many of the issues that are common to sophisticated AI question-answering systems. It has become clear that people think of search programs in ways that are very different from traditional information sources. Rapid and ready-at-hand access, depth of processing, and the way they enable people to offload some ordinary memory tasks suggest that search engines have become more of a cognitive amplifier than a simple repository or front-end to the Internet. Like all sophisticated tools, they still require people to learn how to use them. Although search engines are superb at finding and presenting information—up to and including extracting complex relations and making simple inferences—knowing how to frame questions and evaluate their results for accuracy and credibility remains an ongoing challenge. Some questions are still deep and complex, and still require knowledge on the part of the search user to work through to a successful answer. And the fact that the underlying information content, user interfaces, and capabilities are all in a continual state of change means that searchers need to continually update their knowledge of what these programs can (and cannot) do.

Author(s):  
Novario Jaya Perdana

The accuracy of a search engine's results depends on the keywords used. Keywords that convey too little information reduce the accuracy of the results, which makes finding information on the internet hard work. In this research, software was built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document. The information is expressed as sequences of specific words that can be used as keyword recommendations in search engines. The results show that the implemented method for creating document keyword recommendations achieved high accuracy and found the most relevant information in the top search results.
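The abstract does not reproduce the Google Latent Semantic Distance formulation itself, so the following is only a minimal sketch of the general idea: ranking candidate keywords from a document by a hit-count-based semantic distance to a topic term, in the spirit of the Normalized Google Distance. The `hit_count` callable, the function names, and the top-k cutoff are illustrative assumptions, not the paper's method.

```python
import math

def normalized_distance(hits_x, hits_y, hits_xy, total_pages):
    """Normalized-Google-Distance-style score between two terms,
    computed from search-engine hit counts (lower = more related)."""
    fx = math.log(max(hits_x, 1))
    fy = math.log(max(hits_y, 1))
    fxy = math.log(max(hits_xy, 1))
    n = math.log(total_pages)
    return (max(fx, fy) - fxy) / (n - min(fx, fy))

def recommend_keywords(doc_terms, topic_term, hit_count, total_pages, k=5):
    """Rank candidate terms from a document by semantic closeness to a
    topic term and return the top-k as keyword recommendations.
    `hit_count` is a user-supplied callable returning the number of
    search results for a query string (hypothetical interface)."""
    scored = []
    for term in doc_terms:
        d = normalized_distance(
            hit_count(term),
            hit_count(topic_term),
            hit_count(f"{term} {topic_term}"),
            total_pages,
        )
        scored.append((d, term))
    return [term for _, term in sorted(scored)[:k]]
```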


Author(s):  
Cecil Eng Huang Chua ◽  
Roger H. Chiang ◽  
Veda C. Storey

Search engines are ubiquitous tools for seeking information from the Internet and, as such, have become an integral part of our information society. New search engines that combine ideas from separate search engines generally outperform the search engines from which they took ideas. Designers, however, may not be aware of the work of other search engine developers or such work may not be available in modules that can be incorporated into another search engine. This research presents an interoperability architecture for building customized search engines. Existing search engines are analyzed and decomposed into self-contained components that are classified into six categories. A prototype, called the Automated Software Development Environment for Information Retrieval, was developed to implement the interoperability architecture, and an assessment of its feasibility was carried out. The prototype resolves conflicts between components of separate search engines and demonstrates how design features across search engines can be integrated.
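The abstract does not list the six component categories, so the sketch below only illustrates the general interoperability idea: components taken from different search engines are hidden behind fixed interfaces so they can be recombined freely. The interface names (`QueryExpander`, `Ranker`) and the toy inverted index are hypothetical, not the paper's taxonomy.

```python
from typing import Iterable, Protocol

class QueryExpander(Protocol):
    """Any query-rewriting component: returns an expanded query."""
    def expand(self, query: str) -> str: ...

class Ranker(Protocol):
    """Any ranking component: orders candidate documents for a query."""
    def rank(self, query: str, doc_ids: Iterable[str]) -> list[str]: ...

class ComposedSearchEngine:
    """Wires independently developed components behind fixed interfaces,
    so parts originating in different engines can be swapped in and out."""

    def __init__(self, expander: QueryExpander, ranker: Ranker,
                 index: dict[str, set[str]]):
        self.expander = expander
        self.ranker = ranker
        self.index = index  # toy inverted index: term -> document ids

    def search(self, query: str) -> list[str]:
        expanded = self.expander.expand(query)
        candidates: set[str] = set()
        for term in expanded.split():
            candidates |= self.index.get(term, set())
        return self.ranker.rank(expanded, candidates)
```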


Author(s):  
Suely Fragoso

This chapter proposes that search engines apply a verticalizing pressure on the WWW many-to-many information distribution model, forcing it to revert to a distributive model similar to that of the mass media. The argument starts with a critical descriptive examination of the history of search mechanisms for the Internet, alongside a discussion of the increasing ties between search engines and the advertising market. The chapter then raises questions concerning the concentration of Web traffic around a small number of search engines, which are in the hands of an equally limited number of enterprises. This concentration is accentuated by the confidence that users place in search engines and by the large search engines' ongoing acquisition of collaborative systems and smaller players. This scenario demonstrates the verticalizing pressure that search engines apply to the majority of WWW users, pushing the Web back toward a mass-distribution mode.


2011 ◽  
Vol 1 (4) ◽  
pp. 64-74
Author(s):  
Anastasios A. Economides ◽  
Antonia Kontaratou

Web 2.0 applications have been increasingly recognized as important information sources for consumers, including in the domain of tourism. At the center of travelers' interest is the use of these applications to compare and choose hotels for their accommodation at various tourism destinations. It is important to investigate the presence of hotels on some of the most dominant tourism search engines and the prices that they present. This paper compares the search engines to determine whether a cheapest and most complete one can be identified. It focuses on analyzing the hotel prices presented on the hotels' official websites and on the following eight tourism search engines: Booking.com, Expedia.com, Hotelclub.com, Hotels.com, Orbitz.com, Priceline.com, Travelocity.com, and Venere.com. Descriptive statistical analysis showed that only 23% of the hotels examined are found on all the search engines. Furthermore, the price analysis showed that there are differences among the search engines. Although some search engines give statistically lower prices, no single search engine always gives the lowest price for every hotel.
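A minimal sketch of the kind of descriptive analysis described above, assuming a long-format price table with one row per hotel-engine observation; the column names and sample values are placeholders, not the paper's data.

```python
import pandas as pd

# Hypothetical long-format data: one row per (hotel, engine) price observation.
prices = pd.DataFrame({
    "hotel":  ["A", "A", "A", "B", "B", "C"],
    "engine": ["Booking.com", "Expedia.com", "Hotels.com",
               "Booking.com", "Hotels.com", "Expedia.com"],
    "price":  [120.0, 118.0, 121.0, 95.0, 99.0, 210.0],
})

n_engines = prices["engine"].nunique()

# Share of hotels that appear on every engine in the sample.
per_hotel = prices.groupby("hotel")["engine"].nunique()
coverage = (per_hotel == n_engines).mean()

# Which engine offers the lowest price for each hotel.
cheapest = prices.loc[prices.groupby("hotel")["price"].idxmin(),
                      ["hotel", "engine", "price"]]

print(f"hotels listed on all engines: {coverage:.0%}")
print(cheapest.to_string(index=False))
```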


Author(s):  
Antonius Antonius ◽  
Bernard Renaldy Suteja

The internet has been growing rapidly, especially in the field of websites. People use search engines to find the news or information they need on a website. One of the many indications of a website's success is traffic. Traffic is driven by various factors, one of which is the website's rank on the Search Engine Result Page (SERP). To improve SERP rank, SEO methods are required. This research implements SEO on a website, particularly on its images, and then analyzes the site using tester tools such as SEOptimer, Pingdom Tools, and SEO Site Checkup. After the website has been optimized, it is tested again with the same tools. From the results it can be seen whether image optimization affects the SERP.
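The abstract does not specify which image attributes were optimized, so the following is only a sketch of a simple image-SEO audit in the spirit of tools such as SEOptimer: it flags `img` tags with missing alt text or non-descriptive file names. The URL and the heuristics are assumptions, not the paper's procedure.

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse

def audit_images(page_url: str) -> list[dict]:
    """Report common image-SEO problems on a page: missing alt text and
    non-descriptive file names (e.g. 'IMG_1234.jpg')."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    findings = []
    for img in soup.find_all("img"):
        src = img.get("src", "")
        name = urlparse(src).path.rsplit("/", 1)[-1]
        stem = name.rsplit(".", 1)[0].lower()
        issues = []
        if not img.get("alt", "").strip():
            issues.append("missing alt text")
        if stem.startswith(("img_", "dsc_")) or stem.isdigit():
            issues.append("non-descriptive file name")
        if issues:
            findings.append({"src": src, "issues": issues})
    return findings

# Example (placeholder URL):
# print(audit_images("https://example.com/"))
```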


2019 ◽  
Author(s):  
Muhammad Ilham Verardi Pradana

Thanks to the existence of search engines, information and data can easily be found on the internet; the search engine that users use the most is Google. Google remains the most popular search engine for finding information available on the internet. However, the search results that Google provides do not always give the results we want. Google displays results based on the keywords we type, so it sometimes shows negative content such as pornography and porn sites that appear related to the keyword, whether through the title or other elements. In this paper, we implement "DNS SEHAT" to pass along clients' request queries so that the Google search engine on the client's side provides more relevant search results without negative content.
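The abstract does not give DNS SEHAT's resolver details, so the sketch below only shows the general mechanism assumed here: a client sends its DNS queries through a filtering resolver, which can answer lookups for search-engine hostnames with a SafeSearch-enforcing address. The resolver IP is a placeholder, not an actual DNS SEHAT address.

```python
import dns.resolver  # dnspython

# Placeholder address: substitute the actual filtering resolver (e.g. DNS SEHAT).
FILTERING_RESOLVER = "203.0.113.53"  # hypothetical, from the TEST-NET-3 range

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [FILTERING_RESOLVER]

# Queries for www.google.com sent through a filtering resolver can be answered
# with a SafeSearch-enforcing address, so the client receives filtered results.
answer = resolver.resolve("www.google.com", "A")
for record in answer:
    print(record.address)
```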


2018 ◽  
Author(s):  
James Grimmelmann

98 Minnesota Law Review 868 (2014). Academic and regulatory debates about Google are dominated by two opposing theories of what search engines are and how law should treat them. Some describe search engines as passive, neutral conduits for websites' speech; others describe them as active, opinionated editors: speakers in their own right. The conduit and editor theories give dramatically different policy prescriptions in areas ranging from antitrust to copyright. But they both systematically discount search users' agency, regarding users merely as passive audiences. A better theory is that search engines are not primarily conduits or editors, but advisors. They help users achieve their diverse and individualized information goals by sorting through the unimaginable scale and chaos of the Internet. Search users are active listeners, affirmatively seeking out the speech they wish to receive. Search engine law can help them by ensuring two things: access to high-quality search engines, and loyalty from those search engines. The advisor theory yields fresh insights into long-running disputes about Google. It suggests, for example, a new approach to deciding when Google should be liable for giving a website the "wrong" ranking. Users' goals are too subjective for there to be an absolute standard of correct and incorrect rankings; different search engines necessarily assess relevance differently. But users are also entitled to complain when a search engine deliberately misleads them about its own relevance assessments. The result is a sensible, workable compromise between the conduit and editor theories.


2019 ◽  
Author(s):  
Jingchun Fan ◽  
Jean Craig ◽  
Na Zhao ◽  
Fujian Song

BACKGROUND Increasingly, people seek health information from the Internet, in particular information on diseases that require intensive self-management, such as diabetes. However, the Internet is largely unregulated and online health information may not be credible. OBJECTIVE To assess the quality of online information on diabetes identified from the Internet. METHODS We used the single term "diabetes" or the equivalent Chinese characters to search Google and Baidu respectively. The first 50 websites retrieved from each of the two search engines were screened for eligibility using pre-determined inclusion and exclusion criteria. Included websites were assessed on four domains: accessibility, content coverage, validity and readability. RESULTS We included 26 websites from the Google search engine and 34 from the Baidu search engine. There were significant differences in website provider (P<0.0001), but not in targeted population (P=0.832) or publication type (P=0.378), between the two search engines. Website accessibility was not statistically significantly different between the two search engines, although there were significant differences in items regarding website content coverage. There was no statistically significant difference in website validity between the Google and Baidu search engines (mean DISCERN score 3.3 vs 2.9, P=0.156). The readability appraisal of the English-language websites showed that Flesch Reading Ease scores ranged from 23.1 to 73.0 and Flesch-Kincaid Grade Level scores ranged from 5.7 to 19.6. CONCLUSIONS The content coverage of health information for patients with diabetes from the English-language search engine tended to be more comprehensive than that from the Chinese-language search engine. There was a lack of websites provided by health organisations in China. The quality of online health information for people with diabetes needs to be improved to bridge the gap between website provision and public demand.
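For reference, the two readability measures reported above can be computed with the standard Flesch formulas; the syllable counter in this sketch is a rough heuristic, not the exact procedure used in the study.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of vowels (sufficient for a sketch)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

print(readability("Diabetes requires careful self-management. Check blood sugar daily."))
```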


2018 ◽  
pp. 742-748
Author(s):  
Viveka Vardhan Jumpala

The Internet, an information superhighway, has practically compressed the world into a cyber colony through various interconnected networks. The development of the Internet and the emergence of the World Wide Web (WWW) provide a common vehicle for communication and instantaneous access to search engines and databases. A search engine is designed to facilitate the search for information on the WWW. Search engines are essentially tools that help find required information on the web quickly and in an organized manner. Different search engines do the same job in different ways, thus giving different results for the same query. Search strategies are the new trend on the Web.

