An Evaluation of the Source and Content of Kienböck’s Disease Information on the Internet

Author(s):  
Brian M. Katt ◽  
Ludovico Lucenti ◽  
Nailah F. Mubin ◽  
Michael Nakashian ◽  
Daniel Fletcher ◽  
...  

Abstract

Introduction: The use of the internet for health-related information continues to increase. Because of its decentralized structure, information contained within the World Wide Web is not regulated. The purpose of the present study is to evaluate the type and quality of information available on the internet regarding Kienböck’s disease. We hypothesized that the information available on the World Wide Web would be of good informational value.

Materials and Methods: The search phrase “Kienböck’s disease” was entered into the five most commonly used internet search engines. The top 49 nonsponsored Web sites identified by each search engine were collected. Each unique Web site was evaluated for authorship and content, and an informational score ranging from 0 to 100 points was assigned. Each site was reviewed by two fellowship-trained hand surgeons.

Results: The mean informational score for the sites was 45.5 out of a maximum of 100 points. Thirty-one (63%) of the Web sites evaluated were authored by an academic institution or a physician. Twelve (24%) of the sites were commercial or sold commercial products. The remaining six Web sites (12%) were noninformational, provided unconventional information, or had lay authorship. The mean informational score for the academic- or physician-authored Web sites was 54 out of 100 points, compared with 38 out of 100 for the remainder; this difference was statistically significant.

Conclusion: While the majority of the Web sites evaluated were authored by academic institutions or physicians, the information they contain is of limited completeness. Nearly one quarter of the Web sites were commercial in nature. There remains significant room for improvement in the completeness of information available on the internet for common hand conditions.
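To make the reported comparison concrete, below is a minimal sketch of the kind of analysis the abstract describes: comparing mean informational scores between authorship groups with a two-sample test. The per-site scores are hypothetical placeholders (only the group means of 54 and 38 come from the abstract), and the abstract does not name the statistical test used, so Welch's t-test is an assumption.

```python
# Minimal sketch: compare mean informational scores between
# academic/physician-authored sites and all other sites.
# The individual scores below are hypothetical placeholders;
# Welch's t-test is an assumption, since the abstract does
# not name the test that was actually used.
from scipy import stats

academic_scores = [54, 61, 47, 58, 50]  # hypothetical per-site scores (0-100)
other_scores = [38, 33, 44, 35, 40]     # hypothetical per-site scores (0-100)

mean_academic = sum(academic_scores) / len(academic_scores)
mean_other = sum(other_scores) / len(other_scores)

# Welch's t-test does not assume equal variances between groups.
t_stat, p_value = stats.ttest_ind(academic_scores, other_scores, equal_var=False)

print(f"academic/physician mean: {mean_academic:.1f}")
print(f"other sites mean:        {mean_other:.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```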

1997 ◽  
Vol 3 (5) ◽  
pp. 276-280
Author(s):  
Nicholas P. Poolos

There has been an explosion in the number of World Wide Web sites on the Internet dedicated to neuroscience. With a little direction, it is possible to navigate around the Web and find databases containing information indispensable to both basic and clinical neuroscientists. This article reviews some Web sites of particular interest.


Author(s):  
Américo Sampaio

The growth of the Internet and the World Wide Web has contributed to significant changes in many areas of our society. The Web has provided new ways of doing business, and many companies have been offering new services as well as migrating their systems to the Web. The main goal of the first Web sites was to facilitate the sharing of information between computers around the world. These Web sites were mainly composed of simple hypertext documents containing information in text format and links to other documents that could be spread all over the world. The first users of this new technology were university researchers interested in an easier way to publish their work and to search for interesting research from other universities.


2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics, and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows. First, each web page is equipped with a topic describing its contents. Second, links between web pages are established according to common topics. Next, new web pages may be randomly generated, equipped with a topic, and assigned to web sites. By repeated iteration of these rules, our simulation appears to exhibit the observed structure of the World Wide Web and, in particular, a power-law type of growth. In order to visualize the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
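As an illustration of the growth rules sketched in the abstract, here is a minimal simulation in the same spirit; it is not the authors' model. The topic count, the per-page linking probability, and the number of iterations are all assumed values.

```python
# Minimal sketch of a topic-driven web-growth simulation:
# each new page gets a random topic and links to existing
# pages that share that topic. All constants are assumptions.
import random

N_TOPICS = 10       # assumed number of thematic topics
N_PAGES = 2000      # assumed number of growth iterations
P_LINK = 0.05       # assumed probability of linking to a same-topic page

pages = []                      # each page: (topic, list of outgoing links)
by_topic = {t: [] for t in range(N_TOPICS)}

for page_id in range(N_PAGES):
    topic = random.randrange(N_TOPICS)
    # link the new page to existing pages carrying the same topic
    links = [old for old in by_topic[topic] if random.random() < P_LINK]
    pages.append((topic, links))
    by_topic[topic].append(page_id)

# In-degree count per page: a simple summary of the resulting link
# structure (the paper's full rule set is what yields power-law growth).
indegree = {}
for _, links in pages:
    for target in links:
        indegree[target] = indegree.get(target, 0) + 1

print(sorted(indegree.values(), reverse=True)[:10])  # most-linked pages
```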


Author(s):  
August-Wilhelm Scheer

The emergence of what we call today the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today’s information society and explores the potential that still lies ahead. What was initially known as a wide-area hypertext information-retrieval initiative quickly gained momentum thanks to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C).

In the beginning, based only on the standards of HTML, HTTP, and URL, the sites provided by the Web were static: the information stayed unchanged until the original publisher decided to update it. For a long time the WWW, today referred to as Web 1.0, was understood as a technical means of publishing information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data.

With the advent of dynamic concepts on the server side (scripting languages such as the hypertext preprocessor (PHP) or Perl, and Web applications built with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed Web sites to be edited via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It took broadband Internet access, flat rates, and digitized media processing for the Web to catch on.


Author(s):  
Bill Karakostas ◽  
Yannis Zorgios

Chapter II presented the main concepts underlying business services. Ultimately, as this book proposes, business services need to be decomposed into networks of executable Web services. Web services are the primary software technology available today that closely matches the characteristics of business services. To understand the mapping from business services to Web services, we need to understand the fundamental characteristics of the latter. This chapter therefore introduces the main Web services concepts and standards. It does not intend to be a comprehensive description of all standards applicable to Web services, as many of them are still in a state of flux; it focuses instead on the more important and stable ones. All such standards are fully and precisely defined and maintained by the organizations that have defined and endorsed them, such as the World Wide Web Consortium (http://w3c.org), the OASIS organization (http://www.oasis-open.org), and others. We advise readers to periodically visit the Web sites describing the various standards to obtain the up-to-date versions.


Author(s):  
Rafael Cunha Cardoso ◽  
Fernando da Fonseca de Souza ◽  
Ana Carolina Salgado

Currently, systems dedicated to information retrieval/extraction play an important role in fetching relevant, qualified information from the World Wide Web (WWW). The Semantic Web can be described as the Web’s future, since it introduces a set of new concepts and tools. For instance, ontologies are used to embed knowledge in the contents of the current WWW, giving those contents meaning. This allows software agents to better understand the meaning of Web content, so that they can execute more complex and useful tasks for users. This work introduces an architecture that combines some Semantic Web concepts with regular expressions (REGEX) to develop a system that retrieves/extracts domain-specific information from the Web. A prototype based on this architecture was developed to find information about offers announced on supermarket Web sites.
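As a rough illustration of the regex-extraction step, here is a minimal sketch. The HTML markup and the pattern are assumptions invented for the example; the paper's actual architecture additionally applies Semantic Web ontologies to interpret what is extracted.

```python
# Minimal sketch: extract product/price offer pairs from a
# supermarket page with a regular expression. The HTML structure
# and the pattern below are assumptions for illustration only.
import re

html = """
<div class="offer">Rice 5kg - $4.99</div>
<div class="offer">Olive Oil 500ml - $7.49</div>
"""

# Capture the product description and the decimal price.
OFFER_RE = re.compile(r'<div class="offer">([^<]+?)\s*-\s*\$(\d+\.\d{2})</div>')

for product, price in OFFER_RE.findall(html):
    print(f"{product}: ${price}")  # e.g. "Rice 5kg: $4.99"
```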


10.28945/2556 ◽  
2002 ◽  
Author(s):  
Sanjeev Phukan

Issues of IT ethics have recently become immensely more complex. The capacity to place material on the World Wide Web has been acquired by a very large number of people. As evolving software has hidden the complexities and frustrations once involved in writing HTML, more and more Web sites are being created by people with a relatively modest amount of computer literacy. At the same time, once the initial reluctance to use the Internet and the World Wide Web for commercial purposes had been overcome, sites devoted to doing business on the Internet mushroomed, and e-commerce became a permanent part of common usage. The assimilation of new technology is almost never smooth. As the Internet grows out of its abbreviated infancy, a multitude of new issues surface continually, and a large proportion of them remain unresolved. Many of these issues have a strong ethical component. As the ability to reach millions of people instantly and simultaneously has passed into the hands of the average person, the rapid emergence of thorny ethical issues is likely to continue unabated.


2017 ◽  
Author(s):  
Yi Liu ◽  
Kwangjo Kim

Since 2004 the term “Web 2.0” has driven a revolution on the World Wide Web, generating new ideas, services, and applications that improve and facilitate communication through the Web. Technologies associated with this second generation of the World Wide Web enable virtually anyone to share their data, documents, observations, and opinions on the Internet. Serious applications of Web 2.0 remain sparse; this paper assesses its use in the context of applications, reflections, and collaborative spatial decision-making across Web generations, and Web 2.0 in particular.


1996 ◽  
Vol 1 (1) ◽  
pp. 5-12 ◽  
Author(s):  
Beth E. Barnes

While students at major universities may have access to the World Wide Web via campus computer labs, many have yet to take advantage of the Web's offerings. Regular demonstrations of Web sites were incorporated into an introductory advertising course to pique students’ interest in the Web. This paper discusses how Web site visits were incorporated into lectures and the students’ evaluation of the Web site component of the course.

