Search Engines and Critical Thinking, Evaluation and Investigation

2018 ◽  
Vol 5 (2) ◽  
pp. 107
Author(s):  
Michael F. Shaughnessy ◽  
Mark Viner ◽  
Cynthia Kleyn Kennedy

Teachers are increasingly assigning work that involves internet searches, and students increasingly turn to the Internet for at least preliminary information. However, the vast majority of teachers fail to teach critical analysis of material posted on the Web. This paper explores this issue and examines the need for instruction in critical thinking about sources on the Internet.

Author(s):  
Daniele Besomi

This paper surveys the economic dictionaries available on the internet, both free and by subscription, addressed to various kinds of audiences, from schoolchildren to research students and academics. The focus is not so much on content as on whether and how the possibilities opened by electronic editing, and by the modes of distribution and interaction the internet enables, are exploited in the organization and presentation of the materials. The upshot is that although a number of web dictionaries have taken advantage of some of the innovations the internet offers (in particular regular updating, turning cross-references into hyperlinks, linking to external materials, and adding more or less complex search engines), the observation that internet lexicography has mostly produced more efficient dictionaries without fundamentally altering the traditional paper structure is confirmed for this particular subset of reference works. In particular, what remains scarcely explored is the possibility of visualizing the relationships between entries, thus abandoning the project of the early encyclopedists just when the technology provides the means of accomplishing it.
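The under-explored possibility the survey points to, treating cross-references between entries as a navigable graph, is easy to prototype. The sketch below, in Python with invented entry names, models a hypothetical cross-reference table and walks it breadth-first to surface related entries; an actual web dictionary would populate the table from its hyperlinks.

```python
# Minimal sketch: dictionary cross-references as a graph. Entry names
# and links are invented examples, not taken from any real dictionary.
from collections import deque

# Hypothetical cross-reference table: entry -> entries it links to.
cross_refs = {
    "inflation": ["money supply", "price level"],
    "money supply": ["central bank"],
    "price level": ["inflation"],
    "central bank": ["monetary policy"],
    "monetary policy": ["inflation", "money supply"],
}

def related_entries(start, max_depth=2):
    """Breadth-first walk over cross-references, collecting entries
    reachable from `start` within `max_depth` hops."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        entry, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in cross_refs.get(entry, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return sorted(seen - {start})

print(related_entries("inflation"))
# -> ['central bank', 'money supply', 'price level']
```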


Author(s):  
Suely Fragoso

This chapter proposes that search engines apply a verticalizing pressure on the WWW's many-to-many information distribution model, forcing it to revert to a distributive model similar to that of the mass media. The argument starts with a critical, descriptive examination of the history of search mechanisms for the Internet, paralleled by a discussion of the increasing ties between search engines and the advertising market. The chapter then raises questions about the concentration of Web traffic around a small number of search engines, which are in the hands of an equally limited number of enterprises. This reality is accentuated by the confidence users place in search engines and by the large search engines' ongoing acquisition of collaborative systems and smaller players. This scenario demonstrates the verticalizing pressure that search engines apply to the majority of WWW users, pulling the Web back toward a mass-distribution model.


2018 ◽  
pp. 742-748
Author(s):  
Viveka Vardhan Jumpala

The Internet, an information superhighway, has practically compressed the world into a cyber colony through various interconnected networks. The development of the Internet and the emergence of the World Wide Web (WWW) have provided a common vehicle for communication and instantaneous access to search engines and databases. A search engine is designed to facilitate the search for information on the WWW: search engines are essentially tools that help users find the required information on the web quickly and in an organized manner. Different search engines do the same job in different ways, giving different results for the same query. Search strategies are the new trend on the Web.
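The claim that different engines return different results for the same query can be made concrete by measuring result overlap. The following minimal Python sketch uses hard-coded placeholder URL lists rather than real engine APIs and computes their Jaccard similarity.

```python
# Minimal sketch: overlap between two engines' top results for one query.
# The URL lists are invented placeholders, not real search output.

def jaccard(a, b):
    """Jaccard similarity of two result sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

engine_a = ["example.org/1", "example.org/2", "example.org/3"]
engine_b = ["example.org/2", "example.org/4", "example.org/5"]

print(f"overlap: {jaccard(engine_a, engine_b):.2f}")  # overlap: 0.20
```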


2016 ◽  
Vol 2 ◽  
pp. 205521731665241 ◽  
Author(s):  
Antoine Guéguen ◽  
Elisabeth Maillart ◽  
Thibault Gallice ◽  
Bashar Allaf

Background: Information available on the internet has changed patient–neurologist relationships. Its evaluation for multiple sclerosis is only partial, regardless of the language used. Objective: We aim to evaluate the content quality and ranking indexes of French-language sites dealing with multiple sclerosis. Methods: Two French terms and three search engines were used to identify the sites, whose ranking indexes were calculated according to their positions on each page returned by the search engines. Three evaluators used the DISCERN questionnaire to assess the content quality of the 25 selected sites. The sites were classified according to the mean of the evaluators' grades, and grading agreement between evaluators was calculated. Ranking indexes were computed as a rank/100. Results: Content level was deemed mediocre, with poor referencing of the information provided. The naïve and the two expert evaluators' grades differed. Content-quality disparity was found within the different website categories, except for institutional sites. No correlation was found between content quality and ranking index. Conclusion: The information available was heterogeneous. Physicians should guide patients in their internet searches for information so that they can benefit from good-quality input that is potentially able to improve their management.
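The abstract does not give the exact formula behind the rank/100 index, so the following Python sketch is only one plausible reading: average a site's positions across the three engines, penalize absence, and rescale to a 0-100 scale. The engine names, positions, and penalty value are all assumptions for illustration.

```python
# One plausible reading of a rank/100 index, NOT the study's actual formula.
MAX_RANK = 30          # first results page(s) considered, assumed cutoff
ABSENT_PENALTY = 31    # assumed penalty rank when a site is not retrieved

# Hypothetical positions of one site in each engine (1 = top result).
positions = {"google": 3, "bing": 7, "qwant": None}

ranks = [p if p is not None else ABSENT_PENALTY for p in positions.values()]
mean_rank = sum(ranks) / len(ranks)

# Map mean rank 1..ABSENT_PENALTY onto a 0..100 scale (1 -> 100, worst -> 0).
index = 100 * (ABSENT_PENALTY - mean_rank) / (ABSENT_PENALTY - 1)
print(f"ranking index: {index:.1f}/100")
```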


A web crawler, also called a spider, automatically traverses the WWW for the purpose of web indexing. As the Web grows day by day, the number of web pages worldwide has grown massively, and search engines are essential to make searching manageable for users. Search engines are therefore operated to discover particular data on the WWW; without them, it would be almost impossible to find anything on the web unless one already knew a specific URL address. Every search engine maintains a central repository of HTML documents in indexed form. Whenever a user submits a query, the search is performed against this database of indexed web pages. The size of each search engine's database depends on the pages existing on the internet, so to increase the efficiency of search engines, only the most relevant and significant pages are stored in the database.
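A minimal Python sketch of the repository-and-index design described above: each "crawled" page (hard-coded here instead of fetched from the web) is tokenized into an inverted index, and queries are answered from that index rather than from the live web. The URLs and page texts are invented placeholders.

```python
# Minimal sketch: a central repository of pages held in indexed form,
# queried via an inverted index. Pages and URLs are invented placeholders.
import re
from collections import defaultdict

pages = {
    "http://example.org/a": "Search engines index web pages for fast lookup.",
    "http://example.org/b": "A web crawler visits pages and stores them.",
    "http://example.org/c": "Relevant pages are kept; others are discarded.",
}

inverted_index = defaultdict(set)   # term -> set of URLs containing it

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

# "Crawl": add every page to the central repository in indexed form.
for url, text in pages.items():
    for term in tokenize(text):
        inverted_index[term].add(url)

def search(query):
    """Answer a query from the index: pages containing every query term."""
    terms = tokenize(query)
    if not terms:
        return set()
    results = inverted_index.get(terms[0], set()).copy()
    for term in terms[1:]:
        results &= inverted_index.get(term, set())
    return results

print(search("web pages"))  # pages /a and /b contain both 'web' and 'pages'
```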



2012 ◽  
pp. 50-65 ◽  
Author(s):  
K. Selvakuberan ◽  
M. Indra Devi ◽  
R. Rajaram

The explosive growth of the Web has made it a very useful information resource for all types of users. Today, everyone accesses the Internet for various purposes, and retrieving the required information within a reasonable time is users' main demand. The Internet, however, returns millions of Web pages for each and every search term, so getting the interesting and required results from the Web becomes very difficult. Web page classification, a current research problem, focuses on classifying documents into different categories, which search engines use to produce their results. In this chapter we focus on different machine learning techniques and how Web pages can be classified using them. The automatic classification of Web pages using machine learning techniques is the most efficient way for search engines to provide accurate results to users. Machine learning classifiers may also be trained to protect personal details from unauthenticated users and to support privacy-preserving data mining.
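As a concrete, minimal illustration of the approach the chapter surveys, the Python sketch below classifies page text with TF-IDF features and a naive Bayes classifier, one common lightweight baseline among the many machine learning techniques available. The categories and snippets are invented toy data, and scikit-learn is assumed to be installed; a real system would train on labelled crawled pages.

```python
# Minimal sketch: web page classification with TF-IDF + naive Bayes.
# Categories and page snippets are invented toy data. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "latest football scores and match highlights",
    "stock markets rally as tech shares climb",
    "new smartphone review: camera and battery life",
    "election results and parliamentary debate coverage",
]
train_labels = ["sports", "finance", "technology", "politics"]

# TF-IDF features feeding a naive Bayes classifier: a common baseline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["stock markets climb as tech shares rally"]))
# expected: ['finance']
```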


Author(s):  
Ronak Hamzehei ◽  
Masoumeh Ansari ◽  
Shahabedin Rahmatizadeh ◽  
Saeideh Valizadeh-Haghi

Objectives: Health service providers use the internet as a tool for spreading health information, and people often go on the web to acquire information about a disease. A wide range of information, of varying quality and by authors of varying credibility, has thus become accessible to the public. Most people believe that the health information available on the internet is reliable. This reveals the need for a critical view of the health information available online, which bears directly on people's lives. The Ebola epidemic is an international public health emergency, and the internet is regarded as an important source of information on this disease. Given the absence of studies on the trustworthiness of health websites on Ebola, the present study was conducted to assess the trustworthiness of websites focused on this disease. Methods: The term "Ebola" was searched in the Google, Yahoo and Bing search engines. The Google Chrome browser was used for this purpose, with the settings fixed to yield 10 results per page. The first 30 English-language websites from each of the three search engines were evaluated manually using the HONcode of conduct tool. Moreover, the official HONcode toolbar was used to identify websites officially certified by the HON foundation. Results: Almost half of the retrieved websites were commercial (49%). Complementarity was the least-observed criterion (37%) across the websites retrieved from all three search engines. Justifiability, Transparency and Financial Disclosure were completely observed (100%). Discussion: The present study showed that only three of the eight HON criteria (Justifiability, Transparency and Financial Disclosure) were fully observed in the examined websites. Like other health websites, websites concerned with Ebola are not reliable and should be used with caution. Conclusion: Considering the lack of a specific policy on the publication of health information on the web, healthcare providers should advise their patients to use only credible websites. Furthermore, teaching patients the criteria for assessing the trustworthiness of health websites would be helpful.
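The percentages reported above come from per-site, per-criterion judgements. A minimal Python sketch of that tally, with invented site names and checklist values, is given below; the criterion list follows the eight published HONcode principles.

```python
# Minimal sketch: tallying HONcode criterion observance across sites.
# Site names and yes/no judgements are invented toy data.
CRITERIA = [
    "Authoritative", "Complementarity", "Privacy", "Attribution",
    "Justifiability", "Transparency", "Financial disclosure",
    "Advertising policy",
]

# site -> set of criteria it was judged to satisfy (hypothetical values)
assessments = {
    "site-a.example": {"Justifiability", "Transparency",
                       "Financial disclosure"},
    "site-b.example": {"Justifiability", "Transparency",
                       "Financial disclosure", "Attribution"},
    "site-c.example": {"Justifiability", "Transparency",
                       "Financial disclosure", "Complementarity"},
}

for criterion in CRITERIA:
    observed = sum(criterion in satisfied
                   for satisfied in assessments.values())
    pct = 100 * observed / len(assessments)
    print(f"{criterion:<22} {pct:5.1f}%")
```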


2021 ◽  
Vol 15 (3) ◽  
pp. 205-215
Author(s):  
Gurjot Singh Mahi ◽  
Amandeep Verma

Web crawlers are as old as the Internet and are most commonly used by search engines to visit websites and index them into repositories. They are not limited to search engines but are also widely utilized to build corpora in different domains and languages. This study developed a focused set of web crawlers for three Punjabi news websites. The web crawlers were developed to extract quality text articles and add them to a local repository to be used in further research. The crawlers were implemented using the Python programming language and were utilized to construct a corpus of more than 134,000 news articles in nine different news genres. The crawler code and extracted corpora were made publicly available to the scientific community for research purposes.
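A minimal sketch of a focused news crawler in the spirit of the one described, not the authors' actual code: it collects article links from a section page and extracts each article's headline and body. The start URL, the article URL pattern, and the CSS class are placeholders that a real crawler would adapt to each target site; the requests and beautifulsoup4 packages are assumed.

```python
# Minimal sketch of a focused news crawler. START_URL, the "/article/"
# pattern, and the "article-body" class are placeholder assumptions.
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

START_URL = "https://news.example.com/punjabi/"   # placeholder section page

def extract_article(url):
    """Fetch one page and return its headline and body text, if present."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    headline = soup.find("h1")
    body = soup.find("div", class_="article-body")   # assumed class name
    if headline and body:
        return headline.get_text(strip=True), body.get_text(" ", strip=True)
    return None

def crawl_section(section_url, limit=5):
    """Collect article links from a section page and extract each one."""
    soup = BeautifulSoup(requests.get(section_url, timeout=10).text,
                         "html.parser")
    links = {urljoin(section_url, a["href"])
             for a in soup.find_all("a", href=True)
             if "/article/" in a["href"]}            # assumed URL pattern
    return [art for url in sorted(links)[:limit]
            if (art := extract_article(url)) is not None]

for headline, body in crawl_section(START_URL):
    print(headline, "--", body[:80])
```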


Information retrieval has become a buzzword in today's era of advanced computing. A tremendous amount of information is available over the Internet in the form of documents, either structured or unstructured, and it is genuinely difficult to retrieve relevant information from such a large pool. Traditional search engines based on keyword search are unable to give the desired relevant results, as they search the web on the basis of the keywords present in the query. By contrast, ontology-based semantic search engines provide relevant and quick results to the user, as the information stored in the semantic web is more meaningful. This paper gives a comparative study of ontology-based search engines against keyword-based ones. A few of each type were taken, and the same queries were run on each to compare the precision of the results they provide, classifying each result as relevant or non-relevant.
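The evaluation the paper describes reduces to computing precision over manually judged results. The Python sketch below uses invented relevance judgements for one placeholder query to show the comparison.

```python
# Minimal sketch: precision comparison between a keyword engine and an
# ontology-based engine. The query and judgements are invented placeholders.

def precision(judgements):
    """Fraction of returned results judged relevant."""
    return sum(judgements) / len(judgements) if judgements else 0.0

# query -> relevance judgements (True = relevant) for the top results
keyword_engine = {"jaguar speed": [True, False, False, True, False]}
semantic_engine = {"jaguar speed": [True, True, True, False, True]}

for query in keyword_engine:
    kw = precision(keyword_engine[query])
    sem = precision(semantic_engine[query])
    print(f"{query!r}: keyword={kw:.2f}  ontology-based={sem:.2f}")
```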

