Faculty Acceptance of the World Wide Web for Student Research

2001 ◽  
Vol 62 (3) ◽  
pp. 251-258 ◽  
Author(s):  
Susan Davis Herring

Although undergraduates frequently use the World Wide Web in their class assignments, little research has been done concerning how teaching faculty feel about their students’ use of the Web. This study explores faculty attitudes toward the Web as a tool for their students’ research; their use of the Web in classroom instruction; and their policies concerning Web use by students. Results show that although faculty members generally feel positive about the Web as a research tool, they question the accuracy and reliability of Web content and are concerned about their students’ ability to evaluate the information found.

Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. These contents, which can be easily retrieved using Web browsers and search engines, comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines. Though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.


Author(s):  
Olfa Nasraoui

The Web information age has brought a dramatic increase in the sheer amount of information (Web content), in the access to this information (Web usage), and in the intricate complexities governing the relationships within this information (Web structure). Hence, not surprisingly, information overload when searching and browsing the World Wide Web (WWW) has become the plague du jour. One of the most promising and potent remedies against this plague comes in the form of personalization. Personalization aims to customize the interactions on a Web site, depending on the user’s explicit and/or implicit interests and desires.
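To make the idea concrete, here is a minimal Python sketch (not from the article) of personalization driven by implicit and explicit interest signals. The visit log, topic tags, and scoring weights are all invented for illustration; a real personalization system would learn these from logged Web usage.

```python
from collections import Counter

# Pages the user has visited, tagged with topic labels (implicit signals).
visit_log = ["sports", "sports", "tech", "tech", "tech", "travel"]

# Topics the user explicitly subscribed to (explicit signals).
explicit_interests = {"travel"}

# Candidate items on the site, each tagged with a topic (made-up data).
items = [
    ("Laptop review", "tech"),
    ("Marathon recap", "sports"),
    ("City guide: Lisbon", "travel"),
    ("Stock market update", "finance"),
]

def interest_score(topic, visits, explicit):
    """Blend implicit visit frequency with a bonus for explicit subscriptions."""
    counts = Counter(visits)
    implicit = counts[topic] / max(len(visits), 1)
    return implicit + (0.5 if topic in explicit else 0.0)

# Personalized ordering: items on the user's preferred topics come first.
ranked = sorted(items,
                key=lambda it: interest_score(it[1], visit_log, explicit_interests),
                reverse=True)

for title, topic in ranked:
    print(f"{title:25s} [{topic}]")
```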


Author(s):  
G. Sreedhar

In the present-day scenario, the World Wide Web (WWW) is an important and popular information search tool. It provides convenient access to almost all kinds of information, from education to entertainment. The main objective of the chapter is to retrieve information from websites and then use that information for website quality analysis. In this chapter, website information is retrieved through the web mining process. Web mining is the integration of three knowledge domains: Web Content Mining, Web Structure Mining, and Web Usage Mining. Web content mining is the process of extracting knowledge from the content of web documents. Web structure mining is the process of inferring knowledge from the organization of the World Wide Web and the links between references and referents in the Web. The web content elements are used to derive the functionality and usability of the website. The web component elements are used to find the performance of the website. The website structural elements are used to find the complexity and usability of the website. Quality assurance techniques for web applications generally focus on the prevention of web failures or the reduction of the chances of such failures. Web failures are defined as the inability to obtain or deliver information, such as documents or computational results, requested by web users. A high-quality website is one that provides relevant, useful content and a good user experience. Thus, in this chapter, all areas of the website are thoroughly studied to analyse the quality of the website design.
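As a rough illustration of the kind of content- and structure-level signals such an analysis might compute, the following Python sketch parses a page and counts hyperlinks, headings, and images lacking alt text. The sample HTML and the chosen indicators are assumptions for the example, not the chapter's actual metrics.

```python
from html.parser import HTMLParser

# Hypothetical sample page; in practice the HTML would be fetched from the
# website under analysis.
SAMPLE_HTML = """
<html><head><title>Demo</title></head>
<body>
  <h1>Welcome</h1>
  <a href="/about">About</a>
  <a href="https://example.org/external">External</a>
  <img src="logo.png">
  <img src="banner.png" alt="Banner">
</body></html>
"""

class QualityStats(HTMLParser):
    """Collect simple structural indicators used here as rough quality signals."""
    def __init__(self):
        super().__init__()
        self.links = []              # hyperlink structure (structure mining)
        self.images_missing_alt = 0  # accessibility/usability signal
        self.headings = 0            # content organisation signal

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1
        elif tag in {"h1", "h2", "h3", "h4", "h5", "h6"}:
            self.headings += 1

stats = QualityStats()
stats.feed(SAMPLE_HTML)

internal = [u for u in stats.links if not u.startswith("http")]
print("total links:", len(stats.links))
print("internal links:", len(internal))
print("images without alt text:", stats.images_missing_alt)
print("headings:", stats.headings)
```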


2020 ◽  
Vol 25 (2) ◽  
pp. 1-16
Author(s):  
Rasha Hany Salman ◽  
Mahmood Zaki ◽  
Nadia A. Shiltag

The web today has become an archive of information in every form, such as text, audio, video, graphics, and multimedia. With the passage of time, the World Wide Web has become crowded with data, making the extraction of useful information a burdensome process; the web therefore relies on various data mining strategies to mine helpful information from page content and web hyperlinks. The fundamental uses of web content mining are to gather, organize, and classify data, providing the best information available on the web to the user who needs it. WCM tools are needed to examine HTML documents, text, and images; the results are then used by the web search engine. This paper presents an overview of web mining categorization and web content mining techniques, together with a critical review and study of web content mining tools from 2011 to 2019, building a table that compares these tools on several important criteria.
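The sketch below illustrates the most basic step such a WCM tool might perform: stripping HTML tags and computing term frequencies from the page text. It is a hypothetical Python example; the sample page, stop-word list, and feature choice are not taken from any of the surveyed tools.

```python
import re
from collections import Counter
from html.parser import HTMLParser

# Hypothetical page content; a real WCM tool would process many crawled documents.
PAGE = """
<html><body>
  <h1>Web content mining</h1>
  <p>Web content mining extracts useful knowledge from the text,
     images and multimedia found in web pages.</p>
</body></html>
"""

class TextExtractor(HTMLParser):
    """Strip tags and keep only the visible text of the document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

extractor = TextExtractor()
extractor.feed(PAGE)
text = " ".join(extractor.chunks).lower()

# Tokenize and drop a few common stop words (toy list for the sketch).
stop = {"the", "and", "from", "in", "of", "a"}
tokens = [t for t in re.findall(r"[a-z]+", text) if t not in stop]

# Term frequencies: the kind of feature a WCM tool could feed to a search
# engine or classifier for gathering, organising, and classifying pages.
print(Counter(tokens).most_common(5))
```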


10.28945/2972 ◽  
2006 ◽  
Author(s):  
Martin Eayrs

The World Wide Web provides a wealth of information - indeed, perhaps more than can comfortably be processed. But how does all that Web content get there? And how can users assess the accuracy and authenticity of what they find? This paper will look at some of the problems of using the Internet as a resource and suggest criteria both for researching and for systematic and critical evaluation of what users find there.


Author(s):  
Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites will be provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to successfully use the Web towards various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2017 ◽  
Vol 4 (1) ◽  
pp. 95-110 ◽  
Author(s):  
Deepika Punj ◽  
Ashutosh Dixit

In order to manage the vast information available on the Web, the crawler plays a significant role. The working of the crawler should be optimized to obtain the maximum amount of unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed which is based on URL ordering, URL scheduling, and a document redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the web efficiently. Scheduling ensures that URLs go to the optimum agent for downloading; to ensure this, the characteristics of both agents and URLs are taken into consideration. Duplicate documents are also removed to keep the database unique. To reduce matching time, document matching is performed on the basis of their meta information only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by providing ordering and scheduling of URLs.
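A toy Python sketch of the three mechanisms mentioned (URL ordering by structure, scheduling URLs to agents, and duplicate elimination using meta information only) is given below. The depth-based ordering heuristic, host-based agent assignment, and metadata hash are illustrative assumptions, not the paper's exact algorithms.

```python
import hashlib
from urllib.parse import urlparse

# Toy frontier of discovered URLs (made-up examples).
frontier = [
    "http://example.org/a/b/c/page3.html",
    "http://example.org/page1.html",
    "http://example.net/docs/page2.html",
]

# 1. URL ordering based on URL structure: shallower paths are assumed to be
#    more important and are crawled first (one possible heuristic).
def structure_key(url):
    path = urlparse(url).path
    return len([seg for seg in path.split("/") if seg])

ordered = sorted(frontier, key=structure_key)

# 2. Scheduling: assign each URL to an agent; here agents are keyed by host
#    so that an agent reuses connections to the same server.
agents = {}
for url in ordered:
    host = urlparse(url).netloc
    agents.setdefault(host, []).append(url)

# 3. Duplicate elimination using meta information only: hash a document's
#    metadata instead of its full body to keep matching cheap.
seen = set()
def is_duplicate(meta):
    digest = hashlib.sha1(repr(sorted(meta.items())).encode()).hexdigest()
    if digest in seen:
        return True
    seen.add(digest)
    return False

print(agents)
print(is_duplicate({"title": "Page 1", "length": 2048}))   # False: first time seen
print(is_duplicate({"title": "Page 1", "length": 2048}))   # True: same meta information
```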


2021 ◽  
Author(s):  
Michael Dick

Since it was first formally proposed in 1990 (and since the first website was launched in 1991), the World Wide Web has evolved from a collection of linked hypertext documents residing on the Internet, to a "meta-medium" featuring platforms that older media have leveraged to reach their publics through alternative means. However, this pathway towards the modernization of the Web has not been entirely linear, nor will it proceed as such. Accordingly, this paper problematizes the notion of "progress" as it relates to the online realm by illuminating two distinct perspectives on the realized and proposed evolution of the Web, both of which can be grounded in the broader debate concerning technological determinism versus the social construction of technology: on the one hand, the centralized and ontology-driven shift from a human-centred "Web of Documents" to a machine-understandable "Web of Data" or "Semantic Web", which is supported by the Web's inventor, Tim Berners-Lee, and the organization he heads, the World Wide Web Consortium (W3C); on the other, the decentralized and folksonomy-driven mechanisms through which individuals and collectives exert control over the online environment (e.g. through the social networking applications that have come to characterize the contemporary period of "Web 2.0"). Methodologically, the above is accomplished through a sustained exploration of theory derived from communication and cultural studies, which discursively weaves these two viewpoints together with a technical history of recent W3C projects. As a case study, it is asserted that the forward slashes contained in a Uniform Resource Identifier (URI) were a social construct that was eventually rendered extraneous by the end-user community. By focusing on the context of the technology itself, it is anticipated that this paper will contribute to the broader debate concerning the future of the Web and its need to move beyond a determinant "modernization paradigm" or over-arching ontology, as well as advance the potential connections that can be cultivated with cognate disciplines.


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today’s information society and explores the potential that still lies ahead. The initiative, initially known as a wide-area hypertext information retrieval project, quickly gained momentum due to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static, meaning the information stayed unchanged until the original publisher decided to update it. For a long time, the WWW, today referred to as Web 1.0, was understood as a technical means to publish information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data. With the advent of dynamic concepts on the server side (script languages like the hypertext preprocessor (PHP) or Perl, and Web applications with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It needed broadband Internet access, flat rates, and digitalized media processing to catch on.
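The following Python sketch mimics that server-side pattern: content stored in a database is assembled into HTML on each request, so editing the database (as a CMS would through its browser interface) changes the page without re-uploading any file. SQLite stands in for the MySQL database mentioned in the article purely so the example runs without setup; the schema and content are invented for illustration.

```python
import sqlite3

# Stand-in for the server-side database a CMS would write to.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (title TEXT, body TEXT)")
db.execute("INSERT INTO articles VALUES (?, ?)",
           ("Hello Web", "Content edited through the browser at run time."))
db.commit()

def render_page():
    """Build the HTML response from current database content, the way a
    server-side script assembles a page from database rows on each request."""
    rows = db.execute("SELECT title, body FROM articles").fetchall()
    items = "\n".join(f"<h2>{t}</h2><p>{b}</p>" for t, b in rows)
    return f"<html><body>{items}</body></html>"

# Editing the database changes what every later request sees: the page is no
# longer a static file that must be re-uploaded by its publisher.
print(render_page())
db.execute("UPDATE articles SET body = ?", ("Updated via the CMS interface.",))
print(render_page())
```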

