Rearticulating E-dentities in the Web-based Classroom: One Technoresearcher’s Exploration of Power and the World Wide Web

2002 ◽  
Vol 19 (3) ◽  
pp. 331-346 ◽  
Author(s):  
Amy C. Kimme Hea

2009 ◽  
pp. 2389-2412
Author(s):  
Ying Liang

Web-based information systems (WBIS) aim to support e-business using IT, the World Wide Web, and the Internet. This chapter focuses on the Web site component of WBIS and argues why an easy-to-use, interactive Web site is critical to the success of WBIS. A dialogue act modeling approach is presented for capturing and specifying, during WBIS analysis, user needs for an easy-to-use WBIS Web site; for example, what users want to see on the screen and how they want to interact with the system. The approach terms such needs communicational requirements, alongside functional and nonfunctional requirements, and builds a dialogue act model to specify them. The author hopes that development of the WBIS Web site will be treated not only as a design issue but also as an analysis issue in WBIS development.
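The chapter's actual dialogue act model is not reproduced in this abstract. As a rough illustration only, a communicational requirement captured during analysis might be recorded as a pair of user and system dialogue acts; every name and field below is hypothetical, sketched in Python rather than any notation the chapter itself uses:

```python
from dataclasses import dataclass, field

@dataclass
class DialogueAct:
    """One user-system exchange captured during WBIS analysis.

    Hypothetical structure; the chapter's model may differ.
    """
    actor: str                       # who initiates: "user" or "system"
    intent: str                      # e.g. "search-product", "show-results"
    screen_elements: list = field(default_factory=list)  # what should appear on screen
    interaction_mode: str = "click"  # how the user wants to work with the system

# A communicational requirement expressed as a user act and the system's reply
search = DialogueAct("user", "search-product",
                     ["search box", "category menu"], "type-and-click")
reply = DialogueAct("system", "show-results",
                    ["result list", "refine filters"])
```

Recording screen elements and interaction modes alongside the intent is what distinguishes such a requirement from a purely functional one.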


Author(s):  
Michael Lang

Although its conceptual origins can be traced back several decades (Bush, 1945), it is only recently that hypermedia has become popularized, principally through its ubiquitous incarnation as the World Wide Web (WWW). In its earlier forms, the Web could only properly be regarded as a primitive, constrained hypermedia implementation (Bieber & Vitali, 1997). Through the emergence in recent years of standards such as eXtensible Markup Language (XML), XLink, Document Object Model (DOM), Synchronized Multimedia Integration Language (SMIL), and WebDAV, as well as the additional functionality provided by the Common Gateway Interface (CGI), Java, plug-ins, and middleware applications, the Web is now moving closer to an idealized hypermedia environment. Of course, not all hypermedia systems are Web based, nor can all Web-based systems be classified as hypermedia (see Figure 1). See the terms and definitions at the end of this article for clarification of intended meanings. The focus here shall be on hypermedia systems that are delivered and used via the platform of the WWW; that is, Web-based hypermedia systems.


Author(s):  
Curtis J. Bonk ◽  
Jack A. Cummings ◽  
Norika Hara ◽  
Robert B. Fischler ◽  
Sun Myung Lee

Owston (1997, p. 27) pointed out that, “Nothing before has captured the imagination and interests of educators simultaneously around the globe more than the World Wide Web.” Other scholars claim that the Web is converging with other technologies to dramatically alter most conceptions of the teaching and learning process (Bonk & Cunningham, 1998; Duffy, Dueber, & Hawley, 1998; Harasim, Hiltz, Teles, & Turoff, 1995). From every corner of one’s instruction there lurk pedagogical opportunities—new resources, partners, courses, and markets—to employ the World Wide Web as an instructional device. Nevertheless, teaching on the Web is not a simple decision, since most instructors lack vital information about the effects of various Web tools and approaches on student learning. The dearth of such information negatively impacts the extent to which faculty are willing to embed Web-based learning components in their classes. What Web-related decisions do college instructors face? Dozens. Hundreds. Perhaps thousands! There are decisions about class size, forms of assessment, amount and type of feedback, location of students, and the particular Web courseware system used. Whereas some instructors will want to start using the Web with minor adaptations to their teaching, others will feel comfortable taking extensive risks in building entire courses or programs on the Web. Where you fall in terms of your comfort level as an instructor or student will likely shift in the next few years as Web courseware stabilizes and becomes more widely accepted in teaching. Of course, significant changes in Web-based instruction will require advancements in both pedagogy and technology (Bonk & Dennen, 1999). Detailed below is a ten-level Web integration continuum of the pedagogical choices faculty must consider in developing Web-based course components.


Author(s):  
Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites is provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to use the Web successfully in various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2017 ◽  
Vol 4 (1) ◽  
pp. 95-110 ◽  
Author(s):  
Deepika Punj ◽  
Ashutosh Dixit

In order to manage the vast information available on the Web, crawlers play a significant role. A crawler's operation should be optimized to retrieve the maximum amount of unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed, based on URL ordering, URL scheduling, and a document redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the Web efficiently. Scheduling ensures that each URL goes to the optimal agent for downloading; to ensure this, characteristics of both agents and URLs are taken into consideration. Duplicate documents are also removed to keep the database unique. To reduce matching time, documents are matched on the basis of their meta-information only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by ordering and scheduling URLs.
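The paper's exact redundancy-elimination mechanism is not given in the abstract, but the idea of matching documents on meta-information rather than full content can be sketched as follows; the normalization and hashing choices here are assumptions, not the authors' algorithm:

```python
import hashlib

def meta_signature(title: str, description: str, keywords: str) -> str:
    """Hash a document's meta-information. Comparing short hashes is far
    cheaper than comparing full page bodies, which is the motivation for
    meta-based matching; the normalization here is an assumption."""
    # Collapse whitespace and case so trivially different copies collide
    fields = (" ".join(s.split()).lower() for s in (title, description, keywords))
    blob = "|".join(fields)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def is_duplicate(doc_meta: tuple, seen: set) -> bool:
    """Return True if a document with the same meta-information was seen before."""
    sig = meta_signature(*doc_meta)
    if sig in seen:
        return True
    seen.add(sig)
    return False

seen = set()
a = ("Crawler Design", "URL ordering and scheduling", "crawler, www")
b = ("Crawler  design", "URL ordering and scheduling", "crawler, www")
is_duplicate(a, seen)   # False: first sighting is stored
is_duplicate(b, seen)   # True: matches after normalization, despite case/spacing
```

A real system would also need a fallback for pages whose meta fields are empty or machine-generated, since those would collide spuriously.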


2021 ◽  
Author(s):  
Michael Dick

Since it was first formally proposed in 1990 (and since the first website was launched in 1991), the World Wide Web has evolved from a collection of linked hypertext documents residing on the Internet, to a "meta-medium" featuring platforms that older media have leveraged to reach their publics through alternative means. However, this pathway towards the modernization of the Web has not been entirely linear, nor will it proceed as such. Accordingly, this paper problematizes the notion of "progress" as it relates to the online realm by illuminating two distinct perspectives on the realized and proposed evolution of the Web, both of which can be grounded in the broader debate concerning technological determinism versus the social construction of technology: on the one hand, the centralized and ontology-driven shift from a human-centred "Web of Documents" to a machine-understandable "Web of Data" or "Semantic Web", which is supported by the Web's inventor, Tim Berners-Lee, and the organization he heads, the World Wide Web Consortium (W3C); on the other, the decentralized and folksonomy-driven mechanisms through which individuals and collectives exert control over the online environment (e.g. through the social networking applications that have come to characterize the contemporary period of "Web 2.0"). Methodologically, the above is accomplished through a sustained exploration of theory derived from communication and cultural studies, which discursively weaves these two viewpoints together with a technical history of recent W3C projects. As a case study, it is asserted that the forward slashes contained in a Uniform Resource Identifier (URI) were a social construct that was eventually rendered extraneous by the end-user community. 
By focusing on the context of the technology itself, this paper aims to contribute to the broader debate concerning the future of the Web and its need to move beyond a determinist "modernization paradigm" or over-arching ontology, as well as to advance the potential connections that can be cultivated with cognate disciplines.


Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. The contents that can be easily retrieved using Web browsers and search engines comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines. Though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.
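The three-layer distinction can be partially illustrated in code. Dark Web services are typically addressed through special top-level names (such as Tor's .onion), which an ordinary browser cannot resolve, so that much is visible in the URL itself; whether a reachable page is Surface or Deep Web, by contrast, depends on crawler accessibility and cannot be read off the URL. The toy classifier below makes exactly that simplification:

```python
from urllib.parse import urlparse

def web_layer(url: str) -> str:
    """Toy classification by address form alone. This is a deliberate
    simplification: Surface vs. Deep Web depends on whether a search
    engine's crawler can reach and index the page, which a URL by
    itself cannot reveal."""
    host = urlparse(url).hostname or ""
    if host.endswith(".onion"):
        return "dark"            # reachable only with special software such as Tor
    return "surface-or-deep"     # a normal browser suffices; indexability unknown

web_layer("http://example3abc.onion/market")  # → 'dark'
web_layer("https://example.com/login")        # → 'surface-or-deep'
```

A login-protected page like the second example is in fact a typical Deep Web resource: a browser can open it, but a crawler cannot index what lies behind the login form.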


Author(s):  
Sathiyamoorthi V.

It is generally observed throughout the world that over the last two decades, while the average speed of computers has doubled roughly every eighteen months, the average speed of the network has doubled in just eight months! In order to improve performance, more and more researchers are focusing their research on computers and related technologies. The Internet is one such technology that plays a major role in simplifying information sharing and retrieval. The World Wide Web (WWW) is one such service provided by the Internet and acts as a medium for sharing information. As a result, millions of applications run on the Internet, causing increased network traffic and placing great demand on the available network infrastructure.
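The gap implied by those two doubling intervals compounds quickly. Taking the quoted figures at face value, over a three-year window the respective growth factors work out as follows:

```python
# Growth implied by the doubling intervals quoted above:
# computer speed doubles every ~18 months, network speed every ~8 months.
months = 36  # a three-year window

compute_growth = 2 ** (months / 18)  # 2^2   = 4x
network_growth = 2 ** (months / 8)   # 2^4.5 ≈ 22.6x

ratio = network_growth / compute_growth  # network outpaces compute ~5.7x over 3 years
```

This is why, on these figures, network capacity outruns single-machine speed so decisively over even modest time spans.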


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approaches evolved into the Web that drives today’s information society and explores the full potential still ahead. The initiative, initially known as a wide-area hypertext information retrieval project, quickly gained momentum due to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static, meaning the information stayed unchanged until the original publisher decided to update it. For a long time the WWW, today referred to as Web 1.0, was understood as a technical means to publish information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data. With the advent of dynamic concepts on the server side (scripting languages like PHP or Perl and Web applications with JSP or ASP) and the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It needed broadband Internet access, flat rates, and digitized media processing to catch on.
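The static-versus-dynamic shift described above can be reduced to a few lines: a static site returns a file that changes only when the publisher re-uploads it, whereas a CMS-backed site assembles each response from stored content at request time. The sketch below is schematic Python rather than the PHP/MySQL stack the text describes, and all names in it are hypothetical:

```python
# Web 1.0: a fixed file, unchanged between manual uploads
STATIC_FILE = "<html><body>Edited only when re-uploaded</body></html>"

# Stand-in for a server-side CMS database table (e.g., a MySQL table)
content_db = {"news": "Editors update this row via the browser"}

def serve_static(_request):
    # The response never varies; updating it means uploading a new file
    return STATIC_FILE

def serve_dynamic(request):
    # The page is assembled from the database on every request
    return f"<html><body>{content_db[request['page']]}</body></html>"

# A CMS edit at run-time: no file upload, the next request reflects it
content_db["news"] = "Changed at run-time, no file upload needed"
page = serve_dynamic({"page": "news"})
```

The essential difference is where the content lives: in the file itself (static) versus in a database consulted per request (dynamic), which is what made browser-based editing by non-programmers possible.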

