Albert Einstein online

First Monday ◽  
1997 ◽  
Author(s):  
Steven M. Friedman

The power of the World Wide Web, it is commonly believed, lies in the vast information it makes available; "Content is king," the mantra runs. This image creates the conception of the Internet as most of us envision it: a vast, horizontal labyrinth of pages that connect almost arbitrarily to each other, creating a system believed to be "democratic" in which anyone can publish Web pages. I propose a new, vertical and hierarchical conception of the Web, based on the observation that almost everyone searching for information on the Web must go through filter Web sites of some sort, such as search engines, to find it. The Albert Einstein Online Web site provides a paradigm for this re-conceptualization of the Web, based on a distinction between the wealth of information and that which organizes it and frames the viewers' conceptions of it. This emphasis on organization implies that we need a new metaphor for the Internet: the hierarchical "tree" is organizationally more apt than the chaotic "web." The metaphor needs to change because the current one implies an anarchic and random nature to the Web, an implication that may deter potential Netizens, who can be put off by such overwhelming anarchy and by the difficulty of finding information.

Author(s):  
Vijay Kasi ◽  
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade have been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use, which is evident from how little knowledge one needs to build a Web page. This flexibility has enabled the Internet's rapid growth and adoption, but it has also made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet can be viewed as a distributed database of information to which the CRUD (create, retrieve, update, and delete) operations apply. While the Internet has been effective at creating, updating, and deleting content, it has lagged considerably in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leaving existing search engine technology falling short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes clear that we have progressed on the rigor and efficiency metrics by using abundant computing power to produce faster searches over large amounts of information; this is evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research is needed to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result containing thousands of Web pages, which makes it increasingly hard to find useful, relevant information. Search engines therefore face a number of challenges requiring them to perform rigorous searches that return relevant results efficiently, so that they are effective. These challenges include the following ("Search Engines," 2004):

1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may yield excessive results from a single Web site.
4. Many dynamically generated Web sites cannot be indexed by search engines at all.
5. The commercial interests of a search engine can interfere with the order of relevant results it shows.
6. Content that is behind a firewall or that is password protected (such as the content of many digital libraries) is not accessible to search engines.
7. Some Web sites use tricks such as spamdexing and cloaking to manipulate search engines into displaying them among the top results for a set of keywords. This pollutes the search results, pushing more relevant links down the result list, and is a consequence of the popularity of Web search and the business potential it now generates.
8. Search engines index all the content of the Web without any bounds on the sensitivity of the information, which has raised security and privacy flags.

With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break this discussion down into three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
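The indexing and retrieval issues discussed above rest on one core data structure, the inverted index, which maps each term to the set of pages containing it. The sketch below is a minimal illustration in Java, using made-up documents and query terms rather than anything from the article; it also shows why a query of only two or three keywords can match a large share of the indexed pages.

```java
import java.util.*;

/** Minimal inverted-index sketch. Documents and keywords are made-up examples. */
public class InvertedIndexSketch {
    public static void main(String[] args) {
        Map<Integer, String> docs = Map.of(
                1, "search engine technology and the web",
                2, "the growth of the internet and web pages",
                3, "relevance of search results on the web");

        // Build the index: term -> ids of the documents containing that term.
        Map<String, Set<Integer>> index = new HashMap<>();
        docs.forEach((id, text) -> {
            for (String term : text.split("\\s+")) {
                index.computeIfAbsent(term, t -> new TreeSet<>()).add(id);
            }
        });

        // A two-keyword query is answered by intersecting posting lists; every document
        // containing both terms is returned, which is why short queries yield huge result sets.
        Set<Integer> hits = new TreeSet<>(index.getOrDefault("search", Set.of()));
        hits.retainAll(index.getOrDefault("web", Set.of()));
        System.out.println("documents matching 'search web': " + hits);  // prints [1, 3]
    }
}
```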


2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows. First, each web page is equipped with a topic concerning its contents. Second, links between web pages are established according to common topics. Next, new web pages may be randomly generated and subsequently equipped with a topic and assigned to web sites. By repeated iteration of these rules, our simulation appears to exhibit the observed structure of the World Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
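As a rough illustration of the kind of topic-driven growth rules described above, the sketch below (in Java) grows a network of pages one at a time, gives each page a topic and a site, and links it to earlier pages sharing its topic. The uniform topic assignment and the fixed linking probability are simplifying assumptions for illustration; the authors' actual rules and parameters differ, so this reproduces only the mechanism, not their exact model.

```java
import java.util.*;

/** Minimal sketch of a topic-based Web growth simulation (simplified assumptions). */
public class WebGrowthSim {

    static class Page {
        final int id;
        final int topic;   // thematic topic of the page
        final int site;    // web site the page belongs to
        final List<Integer> links = new ArrayList<>();  // outgoing links (page ids)
        Page(int id, int topic, int site) { this.id = id; this.topic = topic; this.site = site; }
    }

    public static void main(String[] args) {
        final int STEPS = 10_000, TOPICS = 20, SITES = 200;
        final double LINK_PROB = 0.05;  // assumed probability of linking to a same-topic page
        Random rnd = new Random(42);
        List<Page> pages = new ArrayList<>();
        Map<Integer, List<Page>> byTopic = new HashMap<>();  // existing pages grouped by topic

        for (int step = 0; step < STEPS; step++) {
            // Rule 1: a new page appears, equipped with a topic and assigned to a site.
            Page p = new Page(step, rnd.nextInt(TOPICS), rnd.nextInt(SITES));
            // Rule 2: links are established only between pages sharing a topic.
            for (Page q : byTopic.getOrDefault(p.topic, List.of())) {
                if (rnd.nextDouble() < LINK_PROB) p.links.add(q.id);
            }
            pages.add(p);
            byTopic.computeIfAbsent(p.topic, t -> new ArrayList<>()).add(p);
        }

        // Print the in-degree histogram to inspect the connectivity these simplified
        // rules produce (the paper reports a power-law form for its full model).
        int[] inDegree = new int[pages.size()];
        for (Page p : pages) for (int target : p.links) inDegree[target]++;
        Map<Integer, Integer> histogram = new TreeMap<>();
        for (int d : inDegree) histogram.merge(d, 1, Integer::sum);
        histogram.forEach((degree, count) -> System.out.println(degree + "\t" + count));
    }
}
```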


2020 ◽  
pp. 143-158
Author(s):  
Chris Bleakley

Chapter 8 explores the arrival of the World Wide Web, Amazon, and Google. The web allows users to display "pages" of information retrieved from remote computers by means of the Internet. Inventor Tim Berners-Lee released the first web software for free, setting in motion an explosion in Internet usage. Seeing the opportunity of a lifetime, Jeff Bezos set up Amazon as an online bookstore. Amazon's success was accelerated by a product recommender algorithm that selectively targets advertising at users. By the mid-1990s there were so many web sites that users often couldn't find what they were looking for. Stanford PhD student Larry Page invented an algorithm for ranking search results based on the importance and relevance of web pages. Page and fellow student Sergey Brin established a company to bring their search algorithm to the world. Page and Brin, the founders of Google, are now each worth US$35-40 billion.
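The ranking algorithm referred to here is PageRank. As a rough illustration, not taken from the chapter, the sketch below runs the standard power iteration on a tiny, made-up four-page link graph.

```java
/** Minimal power-iteration sketch of PageRank on a tiny hypothetical link graph. */
public class PageRankSketch {
    public static void main(String[] args) {
        // adj[i] lists the pages that page i links to (a made-up four-page web).
        int[][] adj = { {1, 2}, {2}, {0}, {0, 2} };
        int n = adj.length;
        double d = 0.85;                       // the usual damping factor
        double[] rank = new double[n];
        java.util.Arrays.fill(rank, 1.0 / n);  // start from a uniform distribution

        for (int iter = 0; iter < 50; iter++) {
            double[] next = new double[n];
            java.util.Arrays.fill(next, (1 - d) / n);
            for (int i = 0; i < n; i++) {
                for (int j : adj[i]) {
                    next[j] += d * rank[i] / adj[i].length;  // page i shares its rank among its out-links
                }
            }
            rank = next;
        }
        for (int i = 0; i < n; i++) System.out.printf("page %d: %.4f%n", i, rank[i]);
    }
}
```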


Author(s):  
June Tolsby

How can three linguistic methods be used to identify how an organization's Web pages display its knowledge values and knowledge-sharing requirements? This chapter approaches this question by using three linguistic methods to analyse a company's Web sites: (a) elements from the community of practice (CoP) theory, (b) concepts from communication theory, such as modality and transitivity, and (c) elements from discourse analysis. The investigation demonstrates how a company's use of the Web can promote a work attitude that can be considered an endorsement of a particular organizational behaviour. The Web pages display a particular organizational identity that will attract some parties and repel others. In this way, a company's Web pages represent a window to the world that needs to be handled with care, since it can be interpreted as a projection of the company's identity.


Author(s):  
J. Paynter

Historically, information and services could only be obtained through narrow, one-to-one channels: telephones and agency-specific shop fronts (Caffrey, 1998). Information technology, especially the Internet, opens up possibilities for distributing information and delivering services on a much grander scale. The Internet provides a foundation for a variety of communications media. The Web is one of the most important media built upon the Internet. It can be accessed from almost anywhere in the world by means of computers and other electronic devices, and it makes it possible to provide more information and to establish platforms for online payment, online consultation, and e-voting. Security concerns can be overcome by data-authentication technologies. The Web can deliver government services and encourage greater democracy and engagement from citizens. Governments around the world are exploring the use of Web-based information technology (Grönlund, 2002). Attention has focused on the design and delivery of portals as a major component of government electronic service infrastructures. The N.Z. government portal site (http://www.govt.nz/en/home/) and the Local Government Online Ltd (LGOL) Web site (www.localgovt.co.nz/AboutCouncils/Councils/ByRegion/) are examples. Since the mid-1990s, governments have been tapping the potential of the Internet to improve governance and service provision. "In 2001, it was estimated that globally there were well over 50,000 official government Web sites with more coming online daily. In 1996 less than 50 official government homepages could be found on the world-wide-Web" (Ronaghan, 2002). Local governments face growing demands to deliver information and services more efficiently, more effectively, and at lower cost. Alongside rapid technological development, people demand high-quality services that reflect their lifestyles and are accessible from home or work after normal office hours. Thus, the goals of delivering electronic government services are to simplify procedures and documentation; eliminate interactions that fail to yield outcomes; extend contact opportunities (i.e., access) beyond office hours; and improve relationships with the public (Grönlund, 2002). Having an effective Web presence is critical to the success of local governments moving to adopt new technologies. Of equal importance is the evaluation of Web sites using different manual and automated methodologies and tools. In this study, an evaluation of local authority Web sites was conducted to gain a practical understanding of the impact of the Internet on local government in New Zealand, using a tailor-made model specific to local governments. The issues studied focused on the information and services provided by local authority Web sites. More important still is whether local government operations can support the expectations for speed, service, convenience, and delivery that the Web creates. Through the identification of best-practice Web sites and a set of evaluation methods and tools, this paper provides a set of design guidelines to help local authorities better meet the needs of their local communities.


Author(s):  
John DiMarco

Web authoring is the process of developing Web pages. The Web development process requires you to use software to create functional pages that will work on the Internet. Adding Web functionality means creating specific components within a Web page that do something; adding links, rollover graphics, and interactive multimedia items to a Web page are examples of enhanced functionality. This chapter demonstrates Web-based authoring techniques using Macromedia Dreamweaver. The focus is on adding Web functions to pages generated from Macromedia Fireworks and on giving an overview of creating Web pages from scratch using Dreamweaver. Dreamweaver and Fireworks are professional Web applications, and using professional Web software will benefit you tremendously. There are other ways to create Web pages using applications not specifically made for the purpose, such as Microsoft Word and Microsoft PowerPoint. The use of Microsoft applications for Web page development is not covered in this chapter; however, I do provide steps on how to use these applications for Web page authoring in the appendix of this text. If you are more comfortable using the Microsoft applications, or the Macromedia applications simply aren't available to you yet, follow the same process for Web page conceptualization and content creation and use the programs available to you. You should try to gain Web page development skills using Macromedia Dreamweaver because it helps you expand your software skills beyond basic office applications. The ability to create a Web page using professional Web development software is important to building a high-end computer skill set. The main objectives of this chapter are to get you involved in the technical processes that you'll need to create the Web portfolio. The focus will be on guiding you through opening your sliced pages, adding links, using tables, creating pop-up windows for content, and using layers and timelines for dynamic HTML. The coverage will not try to provide a complete tutorial set for Macromedia Dreamweaver, but will highlight essential techniques. Along the way you will get pieces of hand-coded ActionScript and JavaScript. You can decide which pieces you want to use in your own Web portfolio pages. The techniques provided are a concentrated workflow for creating Web pages. Let us begin to explore Web page authoring.


1997 ◽  
Vol 3 (5) ◽  
pp. 276-280
Author(s):  
Nicholas P. Poolos

There has been an explosion in the number of World Wide Web sites on the Internet dedicated to neuroscience. With a little direction, it is possible to navigate around the Web and find databases containing information indispensable to both basic and clinical neuroscientists. This article reviews some Web sites of particular interest. NEUROSCIENTIST 3:276–280, 1997


2007 ◽  
Vol 16 (05) ◽  
pp. 793-828 ◽  
Author(s):  
JUAN D. VELÁSQUEZ ◽  
VASILE PALADE

Understanding web users' browsing behaviour in order to adapt a web site to the needs of a particular user is a key issue for many commercial companies that do their business over the Internet. This paper presents the implementation of a Knowledge Base (KB) for building web-based computerized recommender systems. The Knowledge Base consists of a Pattern Repository, which contains patterns extracted from web logs and web pages by applying various web mining tools, and a Rule Repository, which contains rules that describe how the discovered patterns are used to build navigation or web site modification recommendations. The paper also focuses on testing the effectiveness of the proposed online and offline recommendations. A substantial real-world experiment is carried out on the web site of a bank.
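As a structural illustration of the Pattern Repository / Rule Repository split described above, the sketch below shows one way such a Knowledge Base could be laid out in code. The class names, fields, and matching logic are assumptions made for illustration; they are not the authors' actual schema.

```java
import java.util.*;
import java.util.function.Predicate;

/** Structural sketch of a Knowledge Base with a Pattern Repository and a Rule Repository. */
public class KnowledgeBaseSketch {

    /** A navigation pattern mined from web logs / page content (e.g., a frequent page sequence). */
    record Pattern(String id, List<String> pageSequence, double support) {}

    /** A rule: if a stored pattern matches the current session, emit a recommendation. */
    record Rule(Predicate<Pattern> condition, String recommendation) {}

    static class KnowledgeBase {
        final List<Pattern> patternRepository = new ArrayList<>();
        final List<Rule> ruleRepository = new ArrayList<>();

        /** Return navigation recommendations for the pages visited so far in a session. */
        List<String> recommend(List<String> sessionPages) {
            List<String> out = new ArrayList<>();
            for (Pattern p : patternRepository) {
                // Crude matching: the session is a prefix of a stored frequent sequence.
                if (p.pageSequence().size() > sessionPages.size()
                        && p.pageSequence().subList(0, sessionPages.size()).equals(sessionPages)) {
                    for (Rule r : ruleRepository) {
                        if (r.condition().test(p)) {
                            out.add(r.recommendation() + ": "
                                    + p.pageSequence().get(sessionPages.size()));
                        }
                    }
                }
            }
            return out;
        }
    }

    public static void main(String[] args) {
        KnowledgeBase kb = new KnowledgeBase();
        kb.patternRepository.add(new Pattern("p1", List.of("/home", "/loans", "/mortgage"), 0.12));
        kb.ruleRepository.add(new Rule(p -> p.support() > 0.1, "suggest next page"));
        System.out.println(kb.recommend(List.of("/home", "/loans")));  // [suggest next page: /mortgage]
    }
}
```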


The latest developments of the Internet have brought the world into our hands: everything from passing information to purchasing something happens through the Internet, which has made the world a small circle. This project is likewise based on the Internet. This paper shows the importance of chat applications in day-to-day life and their impact on the technological world. The project develops a chat system based on Java multithreading and networking concepts. The application allows people to exchange messages both privately and publicly, and it also enables the sharing of resources such as files, images, and videos. This online system is developed for interacting or chatting with one another over the Internet, and it is much more reliable and secure than other traditional systems available. Java, multithreading, and the client-server concept were used to develop the Web-based chat application, which is built with a proper architecture to allow future enhancement. It can be deployed in private organizations such as colleges, IT parks, etc.
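A minimal sketch of the one-thread-per-client, broadcast-to-all pattern that such a Java chat server typically uses is shown below. It illustrates the multithreading and client-server concepts the abstract mentions; it is not the project's actual code, and the port number is arbitrary.

```java
import java.io.*;
import java.net.*;
import java.util.*;

/** Minimal multithreaded chat server sketch: one thread per client, public messages broadcast to all. */
public class ChatServer {
    private static final Set<PrintWriter> clients =
            Collections.synchronizedSet(new HashSet<>());

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {
            System.out.println("Chat server listening on port 5000");
            while (true) {
                Socket socket = server.accept();           // wait for the next client
                new Thread(() -> handle(socket)).start();  // one thread per connection
            }
        }
    }

    private static void handle(Socket socket) {
        PrintWriter out = null;
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()))) {
            out = new PrintWriter(socket.getOutputStream(), true);
            clients.add(out);
            String line;
            while ((line = in.readLine()) != null) {
                broadcast(line);                           // public message: send to everyone
            }
        } catch (IOException e) {
            // client disconnected or stream error; fall through to cleanup
        } finally {
            if (out != null) { clients.remove(out); out.close(); }
            try { socket.close(); } catch (IOException ignored) { }
        }
    }

    private static void broadcast(String message) {
        synchronized (clients) {
            for (PrintWriter client : clients) client.println(message);
        }
    }
}
```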


2001 ◽  
Vol 20 (4) ◽  
pp. 11-18 ◽  
Author(s):  
Cleborne D. Maddux

The Internet and the World Wide Web are growing at unprecedented rates, and more and more teachers are authoring school or classroom web pages. Such pages have particular potential for use in rural areas by special educators, children with special needs, and the parents of children with special needs. However, the quality of many of these pages leaves much to be desired. All web pages, especially those authored by special educators, should be accessible to people with disabilities. Many other problems complicate use of the web for all users, whether or not they have disabilities. By taking some simple steps, beginning webmasters can avoid these problems. This article discusses practical solutions to common accessibility problems and other problems commonly seen on the web.

