Digital Interactive Channel Systems and Portals

Author(s):  
Christoph Schlueter Langdon ◽  
Alexander Bau

Web portals continue to grow as a force that could shift the balance of power between buyers and sellers and, therefore, could alter the structure of channel systems in many industries. In late 2005, the growing importance of portals appears to be reflected in their market capitalization, which exceeds that of more traditional media and communications companies (see Figure 1). Today, the Internet provides access to a vast data repository. Information on product pricing and quality that used to take hours to unearth can now be accessed in seconds with a click of a mouse. Despite this ease of data access, however, one issue remains: how to find the relevant piece of information within all the data. Digital technology has reduced the cost of content creation, which has increased the amount of content or data available (e.g., replacing the typewriter with word processors and desktop publishing). Combined with cheap digital distribution via the Internet and the Web, this means that much of this data is now available online. What remains is the challenge of finding relevant and reliable information. This issue is being addressed by one of the dominant forces in the online arena, the Web portal.

Author(s):  
John DiMarco

Web authoring is the process of developing Web pages. The Web development process requires you to use software to create functional pages that will work on the Internet. Adding Web functionality means creating specific components within a Web page that do something. Adding links, rollover graphics, and interactive multimedia items to a Web page are examples of enhanced functionality. This chapter demonstrates Web-based authoring techniques using Macromedia Dreamweaver. The focus is on adding Web functions to pages generated from Macromedia Fireworks and on providing an overview of creating Web pages from scratch using Dreamweaver. Dreamweaver and Fireworks are professional Web applications. Using professional Web software will benefit you tremendously. There are other ways to create Web pages using applications not specifically made to create Web pages. These applications include Microsoft Word and Microsoft PowerPoint. The use of Microsoft applications for Web page development is not covered in this chapter. However, I do provide steps on how to use these applications for Web page authoring within the appendix of this text. If you feel that you are more comfortable using the Microsoft applications, or the Macromedia applications simply aren’t available to you yet, follow the same process for Web page conceptualization and content creation and use the programs available to you. You should try to gain Web page development skills using Macromedia Dreamweaver because it helps you expand your software skills beyond basic office applications. The ability to create a Web page using professional Web development software is important to building a high-end computer skill set. The main objectives of this chapter are to get you involved in some of the technical processes you’ll need to create the Web portfolio. The focus will be on guiding you through opening your sliced pages, adding links, using tables, creating pop-up windows for content, and using layers and timelines for dynamic HTML. The coverage does not attempt to provide a complete tutorial set for Macromedia Dreamweaver, but highlights essential techniques. Along the way you will get pieces of hand-coded ActionScript and JavaScript. You can decide which pieces you want to use in your own Web portfolio pages. The techniques provided are a concentrated workflow for creating Web pages. Let us begin to explore Web page authoring.
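As a small illustration of the kind of hand-coded JavaScript the chapter refers to, the sketch below (written as TypeScript, which compiles to plain JavaScript) opens a pop-up window and swaps a rollover graphic; the function names and file names are placeholders for illustration, not files from the text.

    // Minimal sketch of two common hand-coded behaviors: opening a pop-up
    // window for content and swapping an image on rollover.
    // "portfolio_detail.html" and the image paths are placeholder names.

    // Open a fixed-size pop-up window for a piece of portfolio content.
    function openPopup(url: string): void {
      window.open(url, "portfolioPopup", "width=600,height=400,scrollbars=yes");
    }

    // Swap an image source on mouseover / mouseout (a simple rollover).
    function swapImage(imgId: string, src: string): void {
      const img = document.getElementById(imgId) as HTMLImageElement | null;
      if (img) img.src = src;
    }

    // Example wiring from event attributes in the page:
    // <a href="#" onclick="openPopup('portfolio_detail.html'); return false;">Detail</a>
    // <img id="navHome" src="home_up.gif"
    //      onmouseover="swapImage('navHome', 'home_over.gif')"
    //      onmouseout="swapImage('navHome', 'home_up.gif')">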


2004 ◽  
Vol 359 (1444) ◽  
pp. 699-710 ◽  
Author(s):  
Malcolm J. Scoble

Taxonomic data form a substantial, but scattered, resource. The alternative to such a fragmented system is a ‘unitary’ one of preferred, consensual classifications. For effective access and distribution, the (Web) revision for a given taxon would be established at a single Internet site. Although all the international codes of nomenclature currently preclude the Internet as a valid medium of publication, elements of unitary taxonomy (UT) still exist in the paper system. Much taxonomy, unitary or not, already resides on the Web. Arguments for and against adopting a unitary approach are considered and a resolution is attempted. Rendering taxonomy essentially Web-based is as inevitable as it is desirable. Apparently antithetical to the UT proposal is the view that in reality multiple classifications of the same taxon exist, since different taxonomists often hold different concepts of their taxa: a single name may apply to many different (frequently overlapping) circumscriptions, and more than one name may apply to a single taxon. However, novel means are being developed on single Internet sites to retain the diversity of multiple concepts for taxa, providing hope that taxonomy may become established as a Web-based information discipline, which would unify the field and facilitate data access.


Information retrieval has become the buzzword of today’s era of advanced computing. A tremendous amount of information is available over the Internet in the form of documents, which can be either structured or unstructured. It is difficult to retrieve relevant information from such a large pool. Traditional keyword-based search engines are unable to give the desired relevant results because they search the Web only on the basis of the keywords present in the query. By contrast, ontology-based semantic search engines provide relevant results quickly because the information stored in the Semantic Web is more meaningful. This paper presents a comparative study of ontology-based and keyword-based search engines. A few engines of each type were selected, the same queries were run on each of them, and the returned results were classified as relevant or non-relevant in order to compare the precision of the results they provide.
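As a minimal sketch of the precision comparison described above (written in TypeScript; the engines and relevance judgments below are hypothetical), precision is simply the share of returned results judged relevant:

    // Precision = results judged relevant / total results returned.
    type Judgment = boolean; // true = relevant, false = non-relevant

    function precision(judgments: Judgment[]): number {
      if (judgments.length === 0) return 0;
      const relevant = judgments.filter((isRelevant) => isRelevant).length;
      return relevant / judgments.length;
    }

    // Hypothetical top-10 judgments for the same query on two engines.
    const keywordEngine: Judgment[]  = [true, false, true, false, false, true, false, false, true, false];
    const semanticEngine: Judgment[] = [true, true, true, false, true, true, false, true, true, false];

    console.log(`Keyword-based precision@10:  ${precision(keywordEngine).toFixed(2)}`);  // 0.40
    console.log(`Ontology-based precision@10: ${precision(semanticEngine).toFixed(2)}`); // 0.70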


2003 ◽  
Vol 43 (1) ◽  
pp. 693
Author(s):  
P.E. Williamson ◽  
C.B. Foster

During the past 10 years, Australia has maintained 65–85% self-sufficiency in oil and better than 100% sufficiency in gas. This has generated significant societal benefits in terms of employment, balance of payments, and revenue. The decline of the super-giant Gippsland fields, the discovery of smaller oil pools on the North West Shelf, and the increasing reliance on condensate to sustain our liquids supply, however, sharpen the focus on Australia’s need to increase exploration and discover more oil. Australia is competing in the global marketplace for exploration funds, but as it is relatively underexplored there is a need to stimulate interest through access to pre-competitive data and information. Public access to exploration and production data is a key plank in Australia’s promotion of petroleum exploration acreage. Access results from legislation that initially subsidised exploration in return for lodgement and public availability of exploration and production (E&P) data. Today, publicly available E&P data range from digital seismic tapes, to core and cuttings samples from wells, to relational databases covering organic geochemistry, biostratigraphy, and reservoir and shows information. Seismic information is being progressively consolidated to high-density media. Under the Commonwealth Government’s Spatial Information and Data Access Policy, announced in 2001, company data are publicly available at the cost of transfer after a relatively brief confidentiality period. In addition, pre-competitive regional studies relating to petroleum prospectivity undertaken by Government, together with databases and spatial information, are free over the Internet, further reducing the cost of exploration. In cooperation with the Australian States and the Northern Territory, we are working towards jointly presenting Australian opportunities through the Geoscience Portal (http://www.geoscience.gov.au) and a virtual one-stop data repository. The challenge now is to translate data availability into increased exploration uptake, through client information and through ever-improving on-line access.


Author(s):  
Vijay Kasi ◽  
Radhika Jain

In the context of the Internet, a search engine can be defined as a software program designed to help one access information, documents, and other content on the World Wide Web. The adoption and growth of the Internet in the last decade have been unprecedented. The World Wide Web has always been applauded for its simplicity and ease of use, which is evident in how little knowledge one requires to build a Web page. The flexible nature of the Internet has enabled its rapid growth and adoption, but has also made it hard to search for relevant information on the Web. The number of Web pages has been increasing at an astronomical pace, from around 2 million registered domains in 1995 to 233 million registered domains in 2004 (Consortium, 2004). The Internet, considered a distributed database of information, has the CRUD (create, retrieve, update, and delete) rule applied to it. While the Internet has been effective at creating, updating, and deleting content, it has lagged considerably in enabling the retrieval of relevant information. After all, there is no point in having a Web page that has little or no visibility on the Web. Since the 1990s, when the first search program was released, we have come a long way in terms of searching for information. Although we are currently witnessing tremendous growth in search engine technology, the growth of the Internet has overtaken it, leading to a state in which the existing search engine technology is falling short. When we apply the metrics of relevance, rigor, efficiency, and effectiveness to the search domain, it becomes clear that we have progressed on the rigor and efficiency metrics by utilizing abundant computing power to produce faster searches over a lot of information. Rigor and efficiency are evident in the large number of pages indexed by the leading search engines (Barroso, Dean, & Holzle, 2003). However, more research needs to be done to address the relevance and effectiveness metrics. Users typically type in two to three keywords when searching, only to end up with a search result containing thousands of Web pages! This has made it increasingly hard to find useful, relevant information effectively. Search engines face a number of challenges today that require them to perform rigorous searches with relevant results efficiently so that they are effective. These challenges include the following (“Search Engines,” 2004).
1. The Web is growing at a much faster rate than any present search engine technology can index.
2. Web pages are updated frequently, forcing search engines to revisit them periodically.
3. Dynamically generated Web sites may be slow or difficult to index, or may produce excessive results from a single Web site.
4. Many dynamically generated Web sites cannot be indexed by search engines at all.
5. The commercial interests of a search engine can interfere with the order of relevant results it shows.
6. Content that is behind a firewall or that is password protected is not accessible to search engines (such as content found in several digital libraries).
7. Some Web sites have started using tricks such as spamdexing and cloaking to manipulate search engines into displaying them as the top results for a set of keywords. This pollutes the search results, with more relevant links being pushed down in the result list, and is a consequence of the popularity of Web searches and the business potential search engines can generate today.
8. Search engines index all the content of the Web without any bounds on the sensitivity of information, which has raised security and privacy flags.
With the above background and challenges in mind, we lay out the article as follows. In the next section, we begin with a discussion of search engine evolution. To facilitate the examination and discussion of the progress of search engine development, we break this discussion down into three generations of search engines. Figure 1 depicts this evolution pictorially and highlights the need for better search engine technologies. Next, we present a brief discussion of the contemporary state of search engine technology and the various types of content searches available today. With this background, the following section documents various concerns about existing search engines, setting the stage for better search engine technology. These concerns include information overload, relevance, representation, and categorization. Finally, we briefly address the research efforts under way to alleviate these concerns and then present our conclusion.
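To make concrete why plain keyword search returns so many pages with no notion of relevance, here is a minimal sketch (in TypeScript, with made-up document contents) of the inverted-index lookup that underlies keyword search:

    // Minimal inverted index: maps each term to the set of documents containing it.
    // A conjunctive keyword query returns every document containing all query
    // terms, with no ranking by relevance.

    const documents: Record<string, string> = {
      doc1: "web portals shift the balance of power between buyers and sellers",
      doc2: "search engines index web pages and retrieve documents by keyword",
      doc3: "ontology based semantic search improves retrieval of relevant web content",
    };

    // Build the index.
    const index = new Map<string, Set<string>>();
    for (const [id, text] of Object.entries(documents)) {
      for (const term of text.toLowerCase().split(/\s+/)) {
        if (!index.has(term)) index.set(term, new Set());
        index.get(term)!.add(id);
      }
    }

    // Return the documents that contain every query term.
    function search(query: string): string[] {
      const terms = query.toLowerCase().split(/\s+/).filter((t) => t.length > 0);
      if (terms.length === 0) return [];
      let hits = new Set(index.get(terms[0]) ?? []);
      for (const term of terms.slice(1)) {
        const next = index.get(term) ?? new Set<string>();
        hits = new Set([...hits].filter((id) => next.has(id)));
      }
      return [...hits];
    }

    console.log(search("web search")); // ["doc2", "doc3"]
    console.log(search("web"));        // all three documents -- no relevance ranking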


Author(s):  
Thomas Mandl

Empirical methods in human-computer interaction (HCI) are very expensive, and the large number of information systems on the Internet requires great effort for their evaluation. Automatic methods try to evaluate the quality of Web pages without human intervention in order to reduce the cost of evaluation. However, automatic evaluation of an interface cannot replace usability testing and other elaborate methods. Many definitions of the quality of information products are discussed in the literature. The user interface and the content are inseparable on the Web, and as a consequence their evaluation cannot always be separated easily. Thus, content and interface are usually considered two aspects of quality and are assessed together. A helpful quality definition in this context is provided by Huang, Lee, and Wang (1999). It is shown in Table 1.
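Purely as an illustration of what "without human intervention" can mean in practice, the TypeScript sketch below computes a few surface indicators of page quality from raw HTML; the specific indicators are illustrative assumptions and are not the quality dimensions of Huang, Lee, and Wang (1999).

    // Illustrative surface indicators an automatic evaluator might extract
    // from a page's HTML; real systems combine many more signals.
    interface PageIndicators {
      hasTitle: boolean;       // does the page declare a non-empty <title>?
      wordCount: number;       // rough amount of visible text
      linkCount: number;       // number of anchor tags
      linkToWordRatio: number; // links relative to text volume
    }

    function extractIndicators(html: string): PageIndicators {
      const titleMatch = html.match(/<title>([^<]*)<\/title>/i);
      const linkCount = (html.match(/<a\s/gi) ?? []).length;
      // Crudely strip tags to approximate the visible text.
      const text = html.replace(/<[^>]*>/g, " ");
      const wordCount = text.split(/\s+/).filter((w) => /\w/.test(w)).length;
      return {
        hasTitle: !!titleMatch && titleMatch[1].trim().length > 0,
        wordCount,
        linkCount,
        linkToWordRatio: wordCount > 0 ? linkCount / wordCount : 0,
      };
    }

    // Example with a tiny hypothetical page.
    const sample =
      "<html><head><title>Example</title></head><body><p>Some text with <a href='a.html'>a link</a>.</p></body></html>";
    console.log(extractIndicators(sample));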


2014 ◽  
Vol 3 (2) ◽  
pp. 192-217
Author(s):  
Murat Akser

This paper attempts to interpret the relationship between the adoption of new communications technologies such as the Internet and the way they are transformed and used to express a resisting cultural identity through content creation, namely Internet Flash animation in Turkey. The study discusses the Turkish adaptation of media of communication as social practice and as a means of social resistance and cultural expression. Its main focus is on Internet use, and especially the use of humorous animated stories on the Web.


Author(s):  
Titiana Ertiö ◽  
Iida Kukkonen ◽  
Pekka Räsänen

In the Web 2.0 era, consumers of media are no longer mere recipients of digital content, but rather active commentators and cocreators online. However, the well-known Internet participation rule predicts that 90% of users are passive ‘lurkers’, 9% edit content, and 1% actually create content. This study investigates Finns’ social media activities related to content creation, as well as their levels of content engagement and sharing. The data come from Statistics Finland and are representative of the Finnish population between the ages of 16 and 74. The results show that Finnish users perceive themselves predominantly as occasional commentators on social media posts. Dissecting the social media activities users engage in, commenting on posts is the most popular activity. Gender, age, and education best explain the differences between the types of social media activities investigated. Overall, the study shows that Finns actively engage in different types of online activities and that sociodemographic variables remain pervasive in explaining these activities in Finland.


2015 ◽  
Vol 1 (2) ◽  
Author(s):  
Wesley Mendes-Da-Silva

Over the last twenty years the world has experienced significant growth in the supply of knowledge as a result of the advent of the Internet, and there has been a drastic reduction in the cost of acquiring or constructing relevant information. This has meant that various industries, such as banking, commerce, and even the public management sector, have undergone a reconfiguration process. Similarly, universities and the publishers of scientific periodicals need to reflect on their future. After all, who is prepared to pay for content that can be freely accessed? In the wake of the change in the technological paradigm that characterizes communication, and driven by financial crises, we find the topic of Financial Innovation (Lerner, 2006). But this topic was already on the agenda even before the Internet appeared on the scene (Miller, 1986). At the beginning of May 2013, when we started putting together the Journal of Financial Innovation (JoFI), an article entitled “Free-for-all”, published in the important British publication The Economist, discussed the growth of open-access scientific journals. At the time the British magazine stressed the practice adopted in the UK of establishing open-access journals as the destination for research results. In essence, the intention is to constitute a quality publication route without readers or authors being burdened with high costs, in an area that still accounts for large portions of the billion-dollar publishing market around the world.


1998 ◽  
Vol 07 (02n03) ◽  
pp. 187-214 ◽  
Author(s):  
TIZIANA CATARCI ◽  
DANIELE NARDI ◽  
GIUSEPPE SANTUCCI ◽  
S. K. CHANG

The Internet revolution has made an enormous quantity of information available to a disparate variety of people. The amount of information, the typical access modality (that is, browsing), and the rapid growth of the Net force the user, while searching for the information of interest, to dip into multiple sources in a labyrinth of millions of links. Web-at-A-Glance (WAG) is a system that allows the user to query (instead of browse) the Web. WAG performs this ambitious task by constructing a personalized database pertinent to the user's interests. The system semi-automatically gleans the most relevant information from one or several Web sites, stores it in a database designed cooperatively with the user, and allows her/him to query that database through a visual interface equipped with a powerful multimedia query language. This paper presents the design philosophy, the architecture, and the core of the WAG system. A prototype WAG is being implemented to test the feasibility of the proposed approach.

