Wikis as Tools for Collaboration

Author(s):  
Jane Klobas

Tim Berners-Lee, the inventor of the World Wide Web, envisioned it as a place where “people can communicate … by sharing their knowledge in a pool … putting their ideas in, as well as taking them out” (Berners-Lee, 1999). For much of its first decade, however, the Web was primarily a place where most people took ideas out rather than putting them in. This has changed. Many “social software” services now exist on the Web to facilitate social interaction, collaboration and information exchange. This article introduces wikis: jointly edited Web sites and intranet resources that are accessed through web browsers. After a brief overview of wiki history, we explain wiki technology and philosophy, describe how wikis are being used for collaboration, and consider some of the issues associated with managing wikis, before turning to the future of wikis.

2000, Vol 6 (1_suppl), pp. 110-112
Author(s):  
Nancy A Brown

The Telemedicine Information Exchange (TIE) has provided comprehensive telemedicine information on the World Wide Web (Web) since early 1995. It received major funding from the National Library of Medicine in 1997. Among other things, the TIE contains six major databases: literature citations; active telemedicine programmes; a ‘what's new in telemedicine’ column; funding opportunities; forthcoming conferences; and a list of vendors of telemedicine equipment and services. Recent additions include a document delivery service, inaugurated in early 1999. More than 1000 other Web sites link to the TIE, and we have 5000 visitors per month from several countries. Given the TIE's relative longevity on the Web, its researchers have been in a unique position to observe trends in telemedicine.


2009, pp. 1283-1290
Author(s):  
Jane Klobas

Tim Berners-Lee, the inventor of the World Wide Web, envisioned it as a place where “people can communicate … by sharing their knowledge in a pool … putting their ideas in, as well as taking them out” (Berners-Lee, 1999). For much of its first decade, however, the Web was primarily a place where most people took ideas out rather than putting them in. This has changed. Many “social software” services now exist on the Web to facilitate social interaction, collaboration and information exchange. This article introduces wikis: jointly edited Web sites and intranet resources that are accessed through web browsers. After a brief overview of wiki history, we explain wiki technology and philosophy, describe how wikis are being used for collaboration, and consider some of the issues associated with managing wikis, before turning to the future of wikis.

In 1995, an American computer programmer, Ward Cunningham, developed some software to help colleagues quickly and easily share computer programming patterns across the Web. He called the software WikiWikiWeb, after the “Wiki Wiki” shuttle bus service at Honolulu International Airport (Cunningham, 2003). As interest in wikis increased, other programmers developed wiki software, most of it (like WikiWikiWeb) open source. Although wiki software was relatively simple by industry standards, some technical knowledge was required to install, maintain and extend the “wiki engines.” Contributors needed to learn and use a markup language to edit pages, and although these markup languages were often simpler than HTML, non-technical users did not find these early wikis compelling.

In the early years of the twenty-first century, a number of developments led to more widespread use of wikis. Wiki technology became simpler to install and use, open source software was improved, and commercial enterprise-grade wiki software was released. The not insignificant issues associated with attracting and managing a community of people who use a wiki to share their knowledge were discussed in forums such as MeatballWiki (http://www.usemod.com/cgi-bin/mb.pl?action=browse&id=MeatballWiki&oldid=FrontPage). The public’s attention was drawn to wikis following the launch, in January 2001, of the publicly written Web-based encyclopedia, Wikipedia (www.wikipedia.org). Wiki hosting services and application service providers (ASPs) were established to enable individuals and organizations to develop wikis without the need to install and maintain wiki software themselves. By July 2006, nearly 3,000 wikis were indexed at the wiki indexing site www.wikiindex.org, popular wiki hosting services such as Wikia (www.wikia.org) and seedwiki (www.seedwiki.org) hosted thousands of wikis between them, and Wikipedia had more than four and a half million pages in over 100 languages. Moreover, wikis were increasingly being used in less public ways, to support and enable collaboration in institutions ranging from businesses to the public service and not-for-profit organizations.


Author(s):  
Punam Bedi
Neha Gupta
Vinita Jindal

The World Wide Web is the part of the Internet through which people disseminate and retrieve data. The contents of the Web are crawled and indexed by search engines so that they can be retrieved, ranked, and displayed in response to users' search queries. The contents that can be easily retrieved in this way, using Web browsers and search engines, comprise the Surface Web. All information that cannot be crawled by search engines' crawlers falls under the Deep Web, and Deep Web content never appears in the results displayed by search engines. Though this part of the Web remains hidden, it can be reached using targeted search over normal Web browsers. Unlike the Deep Web, there exists a portion of the World Wide Web that cannot be accessed at all without special software. This is known as the Dark Web. This chapter describes how the Dark Web differs from the Deep Web and elaborates on the software commonly used to enter the Dark Web. It highlights the illegitimate and legitimate sides of the Dark Web and examines the role played by cryptocurrencies in the expansion of the Dark Web's user base.
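As a rough, illustrative companion to this taxonomy: the sketch below labels a URL as Dark Web content when its hostname ends in .onion, the suffix used by Tor hidden services. The function and heuristic are our own illustration, not material from the chapter.

```python
from urllib.parse import urlparse

def classify_url(url: str) -> str:
    """Heuristically place a URL in the Surface/Deep/Dark taxonomy."""
    host = urlparse(url).hostname or ""
    if host.endswith(".onion"):
        # .onion names resolve only inside the Tor network, so the page
        # needs special software to reach: it belongs to the Dark Web.
        return "dark"
    # Surface vs. Deep cannot be told apart from the URL alone; it depends
    # on whether search-engine crawlers can reach and index the page.
    return "surface or deep"

print(classify_url("https://example.com/catalogue"))     # surface or deep
print(classify_url("http://exampleaddress1234.onion/"))  # dark
```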


Author(s):  
August-Wilhelm Scheer

The emergence of what we today call the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today’s information society and explores the potential that still lies ahead. What began as a wide-area hypertext information retrieval initiative quickly gained momentum thanks to the rapid adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static: the information stayed unchanged until the original publisher decided on an update. For a long time the WWW, today referred to as Web 1.0, was understood as a technical means of publishing information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data. With the advent of dynamic concepts on the server side (scripting languages such as the hypertext preprocessor (PHP) or Perl, and Web applications built with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed Web sites to be edited via the browser at run time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It took broadband Internet access, flat rates, and digital media processing for the Web to catch on.
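The shift described here, from static files to pages assembled at request time from a server-side content store, can be sketched in a few lines. The example below is a minimal illustration using Python's standard library rather than the PHP/MySQL stack the article names; the PAGES dictionary stands in for a real database that a CMS would edit at run time.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a server-side database (e.g., MySQL) behind a CMS.
PAGES = {"/": "<h1>Welcome</h1><p>This page is rendered per request.</p>"}

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Look the path up in the content store instead of serving a static file.
        found = self.path in PAGES
        body = PAGES.get(self.path, "<h1>404</h1>").encode("utf-8")
        self.send_response(200 if found else 404)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```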


Author(s):  
Bill Karakostas
Yannis Zorgios

Chapter II presented the main concepts underlying business services. Ultimately, as this book proposes, business services need to be decomposed into networks of executable Web services. Web services are the primary software technology available today that closely matches the characteristics of business services. To understand the mapping from business services to Web services, we need to understand the fundamental characteristics of the latter. This chapter therefore introduces the main Web services concepts and standards. It does not aim to be a comprehensive description of all standards applicable to Web services, as many of them are still in a state of flux; it focuses instead on the more important and stable standards. All such standards are fully and precisely defined and maintained by the organizations that have defined and endorsed them, such as the World Wide Web Consortium (http://w3c.org), the OASIS organization (http://www.oasis-open.org) and others. We advise readers to visit the Web sites describing the various standards periodically to obtain the most up-to-date versions.
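To make the idea of an executable Web service concrete, here is a minimal sketch of invoking a hypothetical service by posting a hand-built SOAP 1.1 envelope; in practice a client would generate the call from the service's WSDL description. The endpoint, namespace, and operation name are invented placeholders, not standards material from the chapter.

```python
import urllib.request

# Hypothetical endpoint and operation; a real client derives both from the WSDL.
ENDPOINT = "http://example.com/services/OrderService"
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatus xmlns="http://example.com/orders">
      <orderId>42</orderId>
    </GetOrderStatus>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/orders/GetOrderStatus"},
)
# The response is itself a SOAP envelope, which the client would parse as XML.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```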


Author(s):  
Rafael Cunha Cardoso
Fernando da Fonseca de Souza
Ana Carolina Salgado

Currently, systems dedicated to information retrieval and extraction play an important role in fetching relevant, high-quality information from the World Wide Web (WWW). The Semantic Web, which introduces a set of new concepts and tools, can be described as the Web’s future. For instance, ontologies are used to embed knowledge in the contents of the current WWW, giving those contents meaning. This allows software agents to better understand the meaning of Web content, so that they can carry out more complex and useful tasks for users. This work introduces an architecture that combines some Semantic Web concepts with regular expressions (REGEX) to develop a system that retrieves and extracts specific domain information from the Web. A prototype based on this architecture was developed to find information about offers announced on supermarket Web sites.
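To give a flavour of the regular-expression side of such an architecture, the sketch below extracts product and price pairs from an HTML fragment. The markup and pattern are invented for illustration; in the authors' system, extraction would be guided by the domain ontology rather than hard-coded.

```python
import re

# Invented markup standing in for a supermarket offers page.
html = """
<div class="offer">Bananas - $1.99</div>
<div class="offer">Olive Oil 1L - $8.49</div>
"""

# One pattern per page layout: product name, a separator, then a dollar price.
OFFER_RE = re.compile(r'<div class="offer">([^<]+?)\s*-\s*\$(\d+\.\d{2})')

for product, price in OFFER_RE.findall(html):
    print(f"{product}: ${price}")
# -> Bananas: $1.99
# -> Olive Oil 1L: $8.49
```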


2020, pp. 143-158
Author(s):  
Chris Bleakley

Chapter 8 explores the arrival of the World Wide Web, Amazon, and Google. The web allows users to display “pages” of information retrieved from remote computers by means of the Internet. Inventor Tim Berners-Lee released the first web software for free, setting in motion an explosion in Internet usage. Seeing the opportunity of a lifetime, Jeff Bezos set up Amazon as an online bookstore. Amazon’s success was accelerated by a product recommender algorithm that selectively targets advertising at users. By the mid-1990s there were so many web sites that users often couldn’t find what they were looking for. Stanford PhD student Larry Page invented an algorithm for ranking search results based on the importance and relevance of web pages. Page and fellow student Sergey Brin established a company to bring their search algorithm to the world. Page and Brin, the founders of Google, are now worth US$35-40 billion each.
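The ranking idea credited to Page, that a page is important if important pages link to it, can be illustrated with a few rounds of power iteration over a toy link graph. The graph and damping factor below are illustrative choices, not details from the book.

```python
# Minimal PageRank-style ranking by power iteration on a toy link graph.
links = {
    "A": ["B", "C"],  # page A links to B and C
    "B": ["C"],
    "C": ["A"],
}

damping = 0.85
rank = {page: 1 / len(links) for page in links}

for _ in range(50):  # iterate until the ranks stabilise
    new_rank = {}
    for page in links:
        # Each page passes its rank, split evenly, to the pages it links to.
        incoming = sum(rank[p] / len(out) for p, out in links.items() if page in out)
        new_rank[page] = (1 - damping) / len(links) + damping * incoming
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```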


1997, Vol 3 (5), pp. 276-280
Author(s):  
Nicholas P. Poolos

There has been an explosion in the number of World Wide Web sites on the Internet dedicated to neuroscience. With a little direction, it is possible to navigate around the Web and find databases containing information indispensable to both basic and clinical neuroscientists. This article reviews some Web sites of particular interest.


Author(s):  
Liam R. E. Quin

As custodian of the World Wide Web, the World Wide Web Consortium (W3C) is both a leader and a follower. We follow because you can’t standardise a process or technology until it is in use. We lead because we guide new technologies from technical, business, and social perspectives. The Web has already changed publishing, and we are at the brink of even bigger changes. What happens when Web technologies are good enough to replace existing authoring tools? What happens when the Web includes SVG and MathML and can support typography powerful enough to produce printed books? What happens when electronic books and Web sites converge? We’re not quite there yet, but W3C is working in this area: working with commercial publishers, with the IDPF and other organizations, listening to industry experts and tool-makers, and gently nudging the Web forward all over the world. The difficulty facing publishers today is how to manage when the Web isn’t quite ready. The right question to ask is: how do we make the Web ready? In this session, Liam Quin from the W3C will describe what W3C is doing in its new Publishing Activity, how it will affect you, and how you can get involved.


2001, Vol 20 (4), pp. 11-18
Author(s):  
Cleborne D. Maddux

The Internet and the World Wide Web are growing at unprecedented rates, and more and more teachers are authoring school or classroom web pages. Such pages have particular potential for use in rural areas by special educators, children with special needs, and the parents of children with special needs. The quality of many of these pages, however, leaves much to be desired. All web pages, and especially those authored by special educators, should be accessible to people with disabilities. Many other problems complicate use of the web for all users, whether or not they have disabilities. By taking some simple steps, beginning webmasters can avoid these problems. This article discusses practical solutions to accessibility problems and other problems commonly seen on the web.
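One of the simple steps the article has in mind, supplying alternative text for images, can even be checked mechanically. The following small sketch, our illustration rather than a tool from the article, uses Python's standard library to flag <img> tags that lack an alt attribute.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags that lack the alt text screen readers depend on."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.gif"><img src="map.gif" alt="Campus map"></p>')
for src in checker.missing:
    print(f"Missing alt text: {src}")  # -> Missing alt text: logo.gif
```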

