Challenges and opportunities of the Internet for medical oncology.

1996
Vol 14 (7)
pp. 2181-2186
Author(s):  
L M Glodé

PURPOSE The Internet, and in particular the world wide web (www), has a rapidly increasing potential to provide information for oncologists and their patients about cancer biology and treatment. A brief overview of this environment is given, along with examples of how easily the information is accessed, as a means of introducing the web page of the American Society of Clinical Oncology (ASCO), ASCO OnLine. METHODS Oncology information sources on the www were accessed from the author's home using a 14.4 kbps modem and the Netscape browser (Netscape Communications Corp., Mountain View, CA), and the locations were recorded for tabulation and discussion. RESULTS Overwhelming amounts of oncology-related information are now available via the Internet. CONCLUSION Oncology as a subspecialty is ideally suited to apply the newest information technology to traditional needs in the areas of education, research, and patient care. Oncologists will increasingly act as information guides rather than information resources for patients with cancer and their families.

2002
Vol 7 (1)
pp. 9-25
Author(s):  
Moses Boudourides
Gerasimos Antypas

In this paper we present a simple simulation of the World-Wide Web, in which one observes the appearance of web pages that belong to different web sites, cover a number of different thematic topics, and possess links to other web pages. The goal of our simulation is to reproduce the form of the observed World-Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows. First, each web page is equipped with a topic describing its contents. Second, links between web pages are established according to common topics. Next, new web pages may be randomly generated and subsequently equipped with a topic and assigned to web sites. By repeated iteration of these rules, our simulation appears to exhibit the observed structure of the World-Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
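
A minimal Python sketch of this kind of topic-driven growth rule might look as follows. The number of topics and sites, the links-per-page count, and the weighting toward already well-linked pages are illustrative assumptions, not the authors' actual parameters.

```python
import random
from collections import Counter

NUM_TOPICS = 10      # assumed number of thematic topics
NUM_SITES = 50       # assumed number of web sites
ITERATIONS = 2000    # number of growth steps
LINKS_PER_PAGE = 3   # links a newly generated page tries to establish

# Each page is a dict holding its topic, its site, and the set of pages it links to.
pages = []

def add_page():
    """Generate a new page, give it a random topic and site, and link it to
    existing pages on the same topic. The weighting toward already
    well-linked pages is an assumption made here to obtain power-law-like
    degree growth; the paper describes its own rules only qualitatively."""
    topic = random.randrange(NUM_TOPICS)
    site = random.randrange(NUM_SITES)
    new_index = len(pages)
    pages.append({"topic": topic, "site": site, "links": set()})

    same_topic = [i for i, p in enumerate(pages[:-1]) if p["topic"] == topic]
    if not same_topic:
        return
    weights = [len(pages[i]["links"]) + 1 for i in same_topic]
    for _ in range(min(LINKS_PER_PAGE, len(same_topic))):
        target = random.choices(same_topic, weights=weights, k=1)[0]
        pages[new_index]["links"].add(target)
        pages[target]["links"].add(new_index)

for _ in range(ITERATIONS):
    add_page()

# Degree distribution: plotted on log-log axes this should look roughly like a power law.
degree_counts = Counter(len(p["links"]) for p in pages)
for degree in sorted(degree_counts):
    print(degree, degree_counts[degree])
```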


2001
Vol 6 (2)
pp. 107-110
Author(s):  
John P. Young

This paper describes an exploration of utilising the World Wide Web for interactive music. The origin of this investigation was the intermedia work Telemusic #1, by Randall Packer, which combined live performers with live public participation via the Web. During the event, visitors to the site navigated through a virtual interface and, by manipulating its elements, projected their actions in the form of triggered sounds into the physical space. Simultaneously, the live audio performance was streamed back out to the Internet participants. Thus, anyone could take part in the collective realisation of the work and hear the musical results in real time. The underlying technology is, to our knowledge, the first standards-based implementation linking the Web with Cycling '74 MAX. Using only ECMAScript/JavaScript, Java, and the OTUDP external from UC Berkeley CNMAT, virtually any conceivable interaction with a Web page can be made to send data to a MAX patch for processing. The code can also be readily adapted to work with Pd, jMAX and other network-enabled applications.
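
The implementation described above is JavaScript- and Java-based; purely as an illustration of the underlying pattern, the Python sketch below forwards a web interaction to a MAX patch as a UDP packet of the kind the OTUDP external can receive. The host, port, and message format are assumptions, not details taken from Telemusic #1.

```python
import socket

MAX_HOST = "127.0.0.1"   # assumed address of the machine running the MAX patch
MAX_PORT = 7005          # assumed UDP port the OTUDP external listens on

def send_interaction(element_id: str, value: float) -> None:
    """Forward a user interaction (e.g. a dragged on-screen fader) to the
    MAX patch as a small space-separated UDP message."""
    message = f"{element_id} {value:.3f}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("ascii"), (MAX_HOST, MAX_PORT))

# Example: a visitor moves a virtual fader to 0.8, triggering a sound in the space.
send_interaction("fader1", 0.8)
```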


Author(s):  
Mu-Chun Su
Shao-Jui Wang
Chen-Ko Huang
Pa-Chun Wang
...  

Most of the dramatically increased amount of information available on the World Wide Web is provided via HTML and formatted for human browsing rather than for software programs. This situation calls for a tool that automatically extracts information from semistructured Web information sources, increasing the usefulness of value-added Web services. We present a signal-representation-based parser (SIRAP) that breaks Web pages up into logically coherent groups - groups of information related to an entity, for example. Templates for records with different tag structures are generated incrementally by a Histogram-Based Correlation Coefficient (HBCC) algorithm; records on a Web page are then detected efficiently by matching against the generated templates. Hundreds of Web pages from 17 state-of-the-art search engines were used to demonstrate the feasibility of our approach.
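
The paper's HBCC algorithm is not reproduced here, but the following Python sketch illustrates the general idea of treating a fragment's tag structure as a signal (here, simply a tag-frequency histogram) and comparing fragments with a correlation coefficient. The regular-expression tag extraction and the example fragments are assumptions for illustration only.

```python
import math
import re
from collections import Counter

def tag_histogram(html_fragment: str) -> Counter:
    """Represent an HTML fragment by the frequencies of its tag names."""
    tags = re.findall(r"</?([a-zA-Z][a-zA-Z0-9]*)", html_fragment)
    return Counter(tag.lower() for tag in tags)

def correlation(hist_a: Counter, hist_b: Counter) -> float:
    """Pearson correlation coefficient between two tag histograms, computed
    over the union of tag names seen in either fragment."""
    tags = sorted(set(hist_a) | set(hist_b))
    if not tags:
        return 0.0
    a = [hist_a.get(t, 0) for t in tags]
    b = [hist_b.get(t, 0) for t in tags]
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    if var_a == 0 or var_b == 0:
        return 0.0
    return cov / math.sqrt(var_a * var_b)

# Two search-result records share the same tag structure and correlate strongly.
record_template = "<tr><td><a href='#'>title</a></td><td>snippet</td></tr>"
candidate = "<tr><td><a href='#'>other title</a></td><td>other snippet</td></tr>"
print(correlation(tag_histogram(record_template), tag_histogram(candidate)))
```

Fragments whose tag structure matches a record template correlate strongly (the toy example above prints 1.0), while structurally different page regions, such as navigation bars, do not.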


Author(s):  
Carmine Sellitto

This chapter provides an overview of some of the criteria currently used to assess medical information found on the World Wide Web (WWW). Drawing on the evaluation frameworks discussed, a simple set of easy-to-apply criteria is proposed for evaluating on-line medical information. The criteria cover the categories of information accuracy, objectivity, privacy, currency, and authority. A checklist for web page assessment and scoring is also proposed, providing an easy-to-use tool for medical professionals, health consumers, and medical web editors.
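
Purely as an illustration of how such a checklist might be operationalised, a minimal Python sketch follows; the questions and the one-point-per-category scoring are assumptions, not the chapter's actual instrument.

```python
# Illustrative yes/no checklist, one question per criterion category; the
# chapter's actual checklist items and scoring scheme may differ.
CHECKLIST = {
    "accuracy": "Is the medical content consistent with current evidence?",
    "objectivity": "Is the information free of undisclosed commercial bias?",
    "privacy": "Does the site state how personal or health data are handled?",
    "currency": "Is there a visible date of publication or last review?",
    "authority": "Are the authors and their credentials clearly identified?",
}

def score_page(answers: dict) -> int:
    """Return a simple 0-5 score: one point per criterion satisfied."""
    return sum(1 for category in CHECKLIST if answers.get(category, False))

example_assessment = {
    "accuracy": True,
    "objectivity": True,
    "privacy": False,
    "currency": True,
    "authority": True,
}
print(f"Score: {score_page(example_assessment)}/{len(CHECKLIST)}")
```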


Author(s):  
Vijay Kasi
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade has been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use; this is evident in how little knowledge one needs to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, which in turn has made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has lagged considerably in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leading to a state in which existing search engine technology falls short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches over large amounts of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result containing thousands of Web pages. This has made it increasingly hard to find useful, relevant information. Search engines today face a number of challenges that require them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004):

1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may produce excessive results from a single Web site.
4. Many dynamically generated Web sites cannot be indexed by search engines at all.
5. The commercial interests of a search engine can interfere with the order of relevant results it shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as those found in several digital libraries).1
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This pollutes the search results, with more relevant links being pushed down the result list, and is a consequence of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of information, which has raised a few security and privacy flags.

With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution; to facilitate this discussion, we break it down into three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology; these concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
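
As background to the indexing and retrieval issues raised above, the toy Python sketch below shows the core data structure behind keyword search, an inverted index, and a simple boolean multi-keyword query over it. It is a deliberately minimal illustration under assumed inputs, not a description of any particular engine.

```python
from collections import defaultdict

inverted_index = defaultdict(set)   # term -> set of document IDs containing it
documents = {}                      # document ID -> text

def index_document(doc_id: int, text: str) -> None:
    """Add (or re-add) a document to the index: the 'create/update' side of CRUD."""
    documents[doc_id] = text
    for term in text.lower().split():
        inverted_index[term].add(doc_id)

def search(query: str) -> list:
    """Return IDs of documents containing every query keyword. Real engines
    add ranking on top of this boolean match, which is where the relevance
    and effectiveness concerns discussed above come in."""
    terms = query.lower().split()
    if not terms:
        return []
    matches = set.intersection(*(inverted_index.get(t, set()) for t in terms))
    return sorted(matches)

index_document(1, "search engine technology and relevance")
index_document(2, "a web page about travel")
index_document(3, "relevance of search results")
print(search("search relevance"))   # -> [1, 3]
```

Crawling, ranking, and freshness policies are layered on top of exactly this kind of structure, which is where the challenges enumerated above arise.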


Author(s):  
Jeremy Padmos
David Bernstein

To be successful, handheld computers known as Personal Travel Assistants (PTAs) must be connected to external information sources. The viability of using the Internet and the world wide web (www) as such sources is explored. Considerations include whether the www is flexible and robust enough to support various travel applications, whether to use a simulated passenger transportation terminal, and whether existing Internet protocols would be appropriate for PTAs.


1999
Vol 40 (1)
pp. 97-104
Author(s):  
Susan Brady

Over the past decade academic and research libraries throughout the world have taken advantage of the enormous developments in communication technology to improve services to their users. Through the Internet and the World Wide Web researchers now have convenient electronic access to library catalogs, indexes, subject bibliographies, descriptions of manuscript and archival collections, and other resources. This brief overview illustrates how libraries are facilitating performing arts research in new ways.


2009
Vol 28 (2)
pp. 81
Author(s):  
John Carlo Bertot

Public libraries were early adopters of Internet-based technologies and have provided public access to the Internet and computers since the early 1990s. The landscape of public-access Internet and computing was substantially different in the 1990s, as the World Wide Web was only in its initial development. At that time, public libraries essentially experimented with public-access Internet and computer services, largely absorbing this service into existing service and resource provision without substantial consideration of the management, facilities, staffing, and other implications of public-access technology (PAT) services and resources. This article explores the implications for public libraries of the provision of PAT and reviews issues and practices associated with the provision of PAT resources. While much research focuses on the amount of public access that public libraries provide, little offers a view of the effect of public access on libraries. This article provides insights into some of the costs, issues, and challenges associated with public access and concludes with recommendations that require continued exploration.


2003
Vol 92 (3_suppl)
pp. 1091-1096
Author(s):  
Nobuhiko Fujihara
Asako Miura

The influence of task type on search of the World Wide Web using search engines, without limitation of search domain, was investigated. Nine graduate and undergraduate students studying psychology (1 woman and 8 men; M age = 25.0 yr., SD = 2.1) participated. Their performance in manipulating the search engines on a closed task with only one answer was compared with their performance on an open task with several possible answers. Analysis showed that the number of actions was larger for the closed task (M = 91) than for the open task (M = 46.1). Behaviors such as selection of keywords (on average, 7.9% of all actions for the closed task and 16.7% for the open task) and pressing of the browser's back button (on average, 40.3% of all actions for the closed task and 29.6% for the open task) also differed. On the other hand, behaviors such as selection of hyperlinks, pressing of the home button, and the number of browsed pages were similar for both tasks. Search behaviors were influenced by task type when the students searched for information without limitation placed on the information sources.

