Virtual Reality on the World Wide Web: A Survey of Web Sites

1996 ◽  
Vol 2 (1) ◽  
pp. 1-20
Author(s):  
J. Fred Henderson

The author surveyed the World Wide Web using a number of Internet-based search engines and VR resource pages to identify more than 11,300 open text sites dealing with virtual reality. This article identifies several hundred of the best sites, devoted to VRML, VR newsgroups, VR resources, VR projects, VR software, VR hardware, academic institutions and laboratories involved in VR, associations, publications, companies, and government agencies specializing in VR. The URLs are provided in the printed article. The CD-ROM that accompanies the printed journal also provides direct links to the sites when this article is viewed while simultaneously connected to the World Wide Web.

1998 ◽  
Vol 88 (5) ◽  
pp. 232-235 ◽  
Author(s):  
Z Leifer

This article introduces the podiatric physician interested in pediatrics to the resources available on the Internet. It surveys search engines, gateway sites on the World Wide Web that lead to a wealth of pediatric information and services, and features such as electronic mail, newsgroups, and Gopher sites. Examples illustrate how such resources can be helpful to the practicing podiatrist.


2003 ◽  
Vol 92 (3_suppl) ◽  
pp. 1091-1096 ◽  
Author(s):  
Nobuhiko Fujihara ◽  
Asako Miura

The influence of task type on searches of the World Wide Web using search engines, without limitation of search domain, was investigated. Nine graduate and undergraduate students studying psychology (1 woman and 8 men; M age = 25.0 yr., SD = 2.1) participated. Their performance with the search engines on a closed task, with only one correct answer, was compared with their performance on an open task, with several possible answers. Analysis showed that the number of actions was larger for the closed task (M = 91) than for the open task (M = 46.1). Behaviors such as selection of keywords (on average, 7.9% of all actions for the closed task and 16.7% for the open task) and pressing of the browser's back button (on average, 40.3% of all actions for the closed task and 29.6% for the open task) also differed. On the other hand, behaviors such as selection of hyperlinks, pressing of the home button, and the number of browsed pages were similar for both tasks. Search behaviors were thus influenced by task type when the students searched for information with no limitation placed on the information sources.


2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows. First, each web page is equipped with a topic concerning its contents. Second, links between web pages are established according to common topics. Next, new web pages may be randomly generated, equipped with a topic, and assigned to web sites. Under repeated iteration of these rules, our simulation reproduces the observed structure of the World Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
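
The growth rules above lend themselves to a compact implementation. The following Python sketch illustrates a model of this kind, not the authors' actual code: the topic count, the number of links per new page, and the use of preferential attachment within a topic (a standard way such models obtain a power law) are all illustrative assumptions.

```python
import random
from collections import Counter

random.seed(42)       # reproducible run
N_TOPICS = 10         # assumed number of thematic topics
N_STEPS = 3000        # assumed number of growth iterations
LINKS_PER_PAGE = 3    # assumed number of links created by each new page

topic_of = []                              # topic_of[i] = topic of page i
in_deg = []                                # in_deg[i] = links pointing at page i
by_topic = {t: [] for t in range(N_TOPICS)}

def add_page(topic):
    pid = len(topic_of)
    topic_of.append(topic)
    in_deg.append(0)
    by_topic[topic].append(pid)
    return pid

# Seed the web with one page per topic so every topic has a link target.
for t in range(N_TOPICS):
    add_page(t)

for _ in range(N_STEPS):
    topic = random.randrange(N_TOPICS)     # each new page gets a random topic
    candidates = list(by_topic[topic])     # links go only to same-topic pages
    add_page(topic)
    # Preferential attachment within the topic: already-popular pages are
    # more likely to attract the new links (duplicate picks simply count
    # twice, which is acceptable in a sketch).
    weights = [in_deg[c] + 1 for c in candidates]
    k = min(LINKS_PER_PAGE, len(candidates))
    for target in random.choices(candidates, weights=weights, k=k):
        in_deg[target] += 1

# A heavy-tailed in-degree distribution is the power-law signature.
hist = Counter(in_deg)
for degree in sorted(hist)[:10]:
    print(f"in-degree {degree:3d}: {hist[degree]} pages")
```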


Author(s):  
Punam Bedi ◽  
Neha Gupta ◽  
Vinita Jindal

The World Wide Web is a part of the Internet that provides a data-dissemination facility to people. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. The contents that can be easily retrieved using Web browsers and search engines comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web. Deep Web content never appears in the results displayed by search engines; though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and specifies the role played by cryptocurrencies in the expansion of the Dark Web's user base.
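
The "special software" in question is typically the Tor client, which exposes a local SOCKS proxy. As a minimal illustration (not from the chapter), the following Python sketch routes a request through a locally running Tor daemon; it assumes Tor is listening on its default port 9050 and that the requests library is installed with SOCKS support (pip install requests[socks]). The .onion address shown is a made-up placeholder, not a real site.

```python
import requests

# socks5h (rather than socks5) makes DNS resolution happen inside Tor,
# which is required for resolving .onion addresses.
TOR_PROXY = "socks5h://127.0.0.1:9050"
proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

url = "http://exampleonionplaceholderxxxxxxxxxxxxxxxx.onion/"  # placeholder
response = requests.get(url, proxies=proxies, timeout=30)
print(response.status_code)
print(response.text[:200])
```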


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today's information society and explores the potential that still lies ahead. The initiative, initially known as a wide-area hypertext information retrieval project, quickly gained momentum due to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C).

In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static, meaning the information stayed unchanged until the original publisher decided to update it. For a long time, the WWW, today referred to as Web 1.0, was understood as a technical means of publishing information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data.

With the advent of dynamic concepts on the server side (scripting languages like the hypertext preprocessor (PHP) or Perl, and Web applications with JSP or ASP) and the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and "techies" but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It took broadband Internet access, flat rates, and digitized media processing for the Web to catch on.
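
As a minimal illustration of the server-side pattern described here (page content living in a database and rendered at request time rather than stored as static files), consider the following Python sketch. It is not from the article: sqlite3 stands in for the MySQL store of a PHP-era CMS, and all names are invented.

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Content lives in a database rather than in static HTML files.
db = sqlite3.connect("cms.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS pages (path TEXT PRIMARY KEY, body TEXT)")
db.execute("INSERT OR REPLACE INTO pages VALUES ('/', 'Hello from the database')")
db.commit()

class CMSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page is assembled per request, so editing the database row
        # changes the site immediately, with no files re-uploaded: the
        # run-time editing described in the text.
        row = db.execute("SELECT body FROM pages WHERE path = ?",
                         (self.path,)).fetchone()
        body = (row[0] if row else "not found").encode("utf-8")
        self.send_response(200 if row else 404)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CMSHandler).serve_forever()
```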


Author(s):  
Bill Karakostas ◽  
Yannis Zorgios

Chapter II presented the main concepts underlying business services. Ultimately, as this book proposes, business services need to be decomposed into networks of executable Web services. Web services are the primary software technology available today that closely matches the characteristics of business services. To understand the mapping from business services to Web services, we need to understand the fundamental characteristics of the latter. This chapter therefore introduces the main Web services concepts and standards. It does not intend to be a comprehensive description of all standards applicable to Web services, as many of them are still in a state of flux. It focuses instead on the more important and stable standards. All such standards are fully and precisely defined and maintained by the organizations that have defined and endorsed them, such as the World Wide Web Consortium (http://w3c.org), the OASIS organization (http://www.oasis-open.org), and others. We advise readers to periodically visit the Web sites describing the various standards to obtain up-to-date versions.


Author(s):  
Rafael Cunha Cardoso ◽  
Fernando da Fonseca de Souza ◽  
Ana Carolina Salgado

Currently, systems dedicated to information retrieval/extraction play an important role in fetching relevant and qualified information from the World Wide Web (WWW). The Semantic Web can be described as the Web's future, as it introduces a set of new concepts and tools. For instance, ontologies are used to add knowledge to the contents of the current WWW so as to give those contents meaning. This allows software agents to better understand the meaning of Web content, so that such agents can execute more complex and useful tasks for users. This work introduces an architecture that combines some Semantic Web concepts with Regular Expressions (REGEX) in order to develop a system that retrieves/extracts specific domain information from the Web. A prototype based on this architecture was developed to find information about offers announced on supermarket Web sites.
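
To make the regular-expression side of such a pipeline concrete, here is a minimal Python sketch; the HTML snippet and the pattern are invented for illustration and do not come from the prototype. In the architecture described, the captured groups would then be mapped onto ontology terms (e.g., a Product concept with a price property).

```python
import re

# Invented sample markup standing in for a supermarket offer page.
html = """
<div class="offer"><span class="item">Olive oil 1L</span>
  <span class="price">$8.99</span></div>
<div class="offer"><span class="item">Rice 5kg</span>
  <span class="price">$12.50</span></div>
"""

# Named groups keep the extraction step close to the domain concepts
# that a Semantic Web layer would annotate with ontology terms.
OFFER_RE = re.compile(
    r'<span class="item">(?P<item>[^<]+)</span>\s*'
    r'<span class="price">\$(?P<price>\d+\.\d{2})</span>'
)

for match in OFFER_RE.finditer(html):
    print(match.group("item"), "->", float(match.group("price")))
```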


2008 ◽  
pp. 3281-3295
Author(s):  
Larry P. Kvasny

Information and communication technologies (ICT) such as the World Wide Web, e-mail, and computers have become an integral part of America’s entertainment, communication, and information culture. Since the mid-1990s, ICT has become prevalent in middle- and upper-class American households. Companies and government agencies are increasingly offering products, services, and information online. Educational institutions are integrating ICT in their curriculum and are offering courses from a distance.


Author(s):  
Harrison Yang

Traditionally, a bibliography is regarded as a list of printed resources (books, articles, reports, etc.) on a given subject or topic, compiled for further study or reference purposes (Alred, Brusaw, & Oliu, 2006; Lamb, 2006). According to the Micropaedia (1990), bibliography refers to the "study and description of books." It is either the listing of books according to some system (enumerative or descriptive bibliography) or the study of books as tangible objects (analytical or critical bibliography). The term webliography is commonly used when discussing online resources. Although there is no clear agreement among educators regarding the origin of this term, many tend to believe that it was coined by the libraries at Louisiana State University to describe their list of favorite Web sites. It can be read as "Web bibliography." Accordingly, a webliography is a list of resources that can be accessed on the World Wide Web and that relate to a particular topic or are referred to in a scholarly work. A variety of studies suggest that understanding and developing webliographies, which involves locating, evaluating, organizing, and effectively using the needed online resources, is essential for information literacy and technology integration.


Author(s):  
José Fernández-Cavia ◽  
Assumpció Huertas-Roig

City marketing tries to position cities in the mind of the public, although the process of creating and communicating city brands is still at an early stage of development. One of the main tools for communicating these brands is now the World Wide Web. This chapter describes the results of two combined studies (one qualitative, one quantitative) that analyze a sample of official city Web sites. The results show that official city Web sites pay much attention to ease of navigation, but interactivity, especially between users, is implemented far less. Furthermore, some lack of attention to the communication aspects of city brands can also be found. Finally, the chapter offers a number of proposals for improvement.

