Using Semantic Web Concepts to Retrieve Specific Domain Information from the Web

Author(s):  
Rafael Cunha Cardoso ◽  
Fernando da Fonseca de Souza ◽  
Ana Carolina Salgado

Currently, systems dedicated to information retrieval/extraction play an important role in fetching relevant, qualified information from the World Wide Web (WWW). The Semantic Web can be described as the Web’s future, since it introduces a set of new concepts and tools. For instance, ontologies are used to add knowledge to the content of the current WWW, giving that content meaning. This allows software agents to better understand what Web content means, so that they can carry out more complex and useful tasks for users. This work introduces an architecture that combines some Semantic Web concepts with Regular Expressions (REGEX) to develop a system that retrieves and extracts specific domain information from the Web. A prototype based on this architecture was developed to find information about offers announced on supermarket Web sites.
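
A minimal sketch of the regular-expression half of such an architecture is shown below; the HTML fragment, class names, and pattern are invented for illustration, since the abstract does not specify the prototype's actual markup or patterns:

```python
import re

# Hypothetical fragment of a supermarket offers page; the real markup
# targeted by the paper's prototype is not given in the abstract.
html = """
<div class="offer"><span class="product">Rice 1kg</span>
<span class="price">R$ 4,99</span></div>
<div class="offer"><span class="product">Olive Oil 500ml</span>
<span class="price">R$ 19,90</span></div>
"""

# One named group per field; each extracted pair could then populate
# the corresponding concept in a domain ontology.
OFFER_PATTERN = re.compile(
    r'<span class="product">(?P<product>[^<]+)</span>\s*'
    r'<span class="price">R\$ (?P<price>[\d.,]+)</span>'
)

for match in OFFER_PATTERN.finditer(html):
    # Each match becomes a (product, price) pair for the extraction layer.
    print(match.group("product"), "->", match.group("price"))
```

The regex side handles the page-level extraction; the Semantic Web side would map the extracted fields onto ontology concepts so agents can reason over them.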

Author(s):  
August-Wilhelm Scheer

The emergence of what we today call the World Wide Web, the WWW, or simply the Web, dates back to 1989, when Tim Berners-Lee proposed a hypertext system to manage information overload at CERN, Switzerland (Berners-Lee, 1989). This article outlines how his approach evolved into the Web that drives today’s information society and explores the potential that still lies ahead. What began as a wide-area hypertext information retrieval initiative quickly gained momentum thanks to the fast adoption of graphical browser programs and the standardization activities of the World Wide Web Consortium (W3C). In the beginning, based only on the standards of HTML, HTTP, and URL, the sites the Web provided were static: the information stayed unchanged until the original publisher decided to update it. For a long time, the WWW, today referred to as Web 1.0, was understood as a technical means to publish information to a vast audience across time and space. Data was kept locally, and Web sites were only occasionally updated by uploading files from the client to the Web server. Application software was limited to local desktops and operated only on local data. With the advent of dynamic concepts on the server side (scripting languages such as the hypertext preprocessor (PHP) or Perl, and Web applications with JSP or ASP) and on the client side (e.g., JavaScript), the WWW became more dynamic. Server-side content management systems (CMS) allowed editing Web sites via the browser at run-time. These systems interact with multiple users through PHP interfaces that push information into server-side databases (e.g., MySQL), which in turn feed Web sites with content. Thus, the Web became accessible and editable not only for programmers and “techies” but also for the common user. Yet technological limitations such as slow Internet connections, consumer-unfriendly Internet rates, and poor multimedia support still inhibited mass usage of the Web. It took broadband Internet access, flat rates, and digitized media processing for the Web to catch on.
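
A minimal sketch of the server-side pattern described above, with Python's standard library standing in for the PHP/MySQL stack named in the text; the table schema and content are hypothetical. The point is that the page is rendered from database content at request time, so an edit stored by one user changes what the next visitor sees:

```python
import sqlite3

# In-memory stand-in for the server-side database (MySQL in the text).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (title TEXT, body TEXT)")
db.execute("INSERT INTO articles VALUES (?, ?)",
           ("Hello Web", "Edited via the browser, stored server-side."))
db.commit()

def render_page() -> str:
    """Build the page from the current database content at request time."""
    rows = db.execute("SELECT title, body FROM articles").fetchall()
    items = "\n".join(f"<h2>{t}</h2><p>{b}</p>" for t, b in rows)
    return f"<html><body>{items}</body></html>"

# A new row in the table changes the very next response; nothing is
# re-uploaded from a client, in contrast to the static Web 1.0 model.
print(render_page())
```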


Author(s):  
Bill Karakostas ◽  
Yannis Zorgios

Chapter II presented the main concepts underlying business services. Ultimately, as this book proposes, business services need to be decomposed into networks of executable Web services. Web services are the primary software technology available today that closely matches the characteristics of business services. To understand the mapping from business services to Web services, we need to understand the fundamental characteristics of the latter. This chapter therefore introduces the main Web services concepts and standards. It is not intended to be a comprehensive description of all standards applicable to Web services, as many of them are still in a state of flux. It focuses instead on the more important and stable standards. All such standards are fully and precisely defined and maintained by the organizations that have defined and endorsed them, such as the World Wide Web Consortium (http://w3c.org), the OASIS organization (http://www.oasis-open.org), and others. We advise readers to periodically visit the Web sites describing the various standards to obtain up-to-date versions.


Author(s):  
Kevin Curran ◽  
Gary Gumbleton

Tim Berners-Lee, director of the World Wide Web Consortium (W3C), states that “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” (Berners-Lee, 2001). The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents, roaming from page to page, can readily carry out sophisticated tasks for users. The Semantic Web (SW) is a vision of the Web in which information is linked up in such a way that machines can process it more easily. It is generating interest not just because Tim Berners-Lee is advocating it, but because it aims to solve the problem of information being hidden away in HTML documents, which are easy for humans to extract information from but difficult for machines. We discuss the Semantic Web here.
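
To make “well-defined meaning” concrete, here is a small hypothetical sketch that states machine-readable facts as RDF triples, assuming the third-party rdflib package is installed; the namespace and resource names are invented and do not come from the article:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented vocabulary; real Semantic Web data would reuse shared ontologies.
EX = Namespace("http://example.org/schema#")

g = Graph()
g.bind("ex", EX)

# Three machine-readable statements about a Web page and its author.
g.add((EX.page42, RDF.type, EX.WebPage))
g.add((EX.page42, EX.author, EX.tim))
g.add((EX.page42, EX.topic, Literal("Semantic Web")))

# An agent can now query structure instead of scraping prose.
for s, p, o in g.triples((None, EX.topic, None)):
    print(f"{s} is about {o}")

print(g.serialize(format="turtle"))
```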


Author(s):  
Esharenana E. Adomi

The World Wide Web (WWW) has led to the advent of the information age. With increased demand for information from various quarters, the Web has turned out to be a veritable resource. Web surfers in the early days were frustrated by the delay in finding the information they needed. The first major leap for information retrieval came with the deployment of Web search engines such as Lycos, Excite, and AltaVista. The rapid growth in the Web's popularity over the past few years has led to premature pronouncements of death for the online services that preceded the Web in the wired world.


1997 ◽  
Vol 3 (5) ◽  
pp. 276-280
Author(s):  
Nicholas P. Poolos

There has been an explosion in the number of World Wide Web sites on the Internet dedicated to neuroscience. With a little direction, it is possible to navigate the Web and find databases containing information indispensable to both basic and clinical neuroscientists. This article reviews some Web sites of particular interest.


1996 ◽  
Vol 1 (1) ◽  
pp. 5-12 ◽  
Author(s):  
Beth E. Barnes

While students at major universities may have access to the World Wide Web via campus computer labs, many have yet to take advantage of the Web's offerings. Regular demonstrations of Web sites were incorporated into an introductory advertising course to pique students’ interest in the Web. This paper discusses how Web site visits were incorporated into lectures and the students’ evaluation of the Web site component of the course.


Author(s):  
Rui G. Pereira ◽  
Mario M. Freire

The World Wide Web (WWW, Web, or W3) is known as the largest accessible repository of human knowledge. It contains around 3 billion documents, which are accessed by more than 500 million users worldwide. In only 13 years since its appearance in 1991, the Web has undergone such enormous growth that it is safe to say no phenomenon in history compares to it. It has become so important that it is now an indispensable part of people's lives (Daconta, Obrst & Smith, 2003).


Author(s):  
Ross A. Malaga

This chapter examines the role of the World Wide Web in traditional lecture-based courses. It details a student-oriented approach to the development and maintenance of course Web sites. An experiment was conducted to determine whether use of a course Web site improves student performance. The surprising results, that students in certain sections did not use the site at all, are analyzed. It was concluded that using the Web in class and making Web assignments part of students' graded work may affect use of a course Web site.


Author(s):  
Thomas A. Slivinski ◽  
Francis D. Tuggle

The World Wide Web (web) grows apace, yet many web sites have confusing designs that frustrate would-be users. We offer a structured approach, called SPUD (Site, Purpose, Users, Design), to the task of designing, not implementing, web sites. Our methodology centers on a structured walkthrough of a web site and consists of three phases (spanning 15 substeps), with possible iteration between the phases. (1) Define the audience characteristics of the web site's users, including their motives for visiting, their demographics, and their likely technological capabilities. (2) Plan the structure of the web site, the page layouts, and the navigation procedures between pages accordingly. (3) Develop and test functions useful to the users of the web site, such as a search function (for a complex web site) or an order function (for a retail web site).
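
One hypothetical way to support such a walkthrough in practice is to encode the phases as a checklist data structure that a review team works through in order. The sketch below uses the three phases from the abstract, but the substeps shown are invented placeholders, since the abstract does not enumerate the actual 15:

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    substeps: list[str]

# The three SPUD phases from the abstract; substeps are illustrative
# placeholders, not the paper's actual 15 substeps.
spud = [
    Phase("Define the audience",
          ["motives for visiting", "demographics", "technological capabilities"]),
    Phase("Plan the site",
          ["overall structure", "page layouts", "navigation procedures"]),
    Phase("Develop and test functions",
          ["search function", "order function", "usability check"]),
]

def walkthrough(phases: list[Phase]) -> None:
    """Visit each phase in order; in practice, unresolved substeps would
    trigger iteration back to an earlier phase, as the methodology allows."""
    for phase in phases:
        for step in phase.substeps:
            print(f"[{phase.name}] review: {step}")

walkthrough(spud)
```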


2008 ◽  
Vol 8 (3) ◽  
pp. 247-248 ◽  
Author(s):  
MASSIMO MARCHIORI

The World Wide Web is nowadays the most famous and widespread information system. Its success is witnessed by its enormous size and rate of growth; however, that very success has created a situation in which more sophisticated techniques are urgently needed to properly handle this mass of information. In this sense, the most ambitious plan for the evolution of the Web is the so-called Semantic Web, envisioned by the inventor of the Web himself, Tim Berners-Lee. In this architectural vision, further layers of semantics are needed to enrich the data that now overflow the classic Web: ontologies, rules, logic, proofs, and trust are all ingredients of this ambitious picture. Given these premises, it should come as no surprise that this evolution is bringing the Web closer and closer to another field that has long faced similar problems of logical organization of knowledge: logic programming. Early examples, like the Metalog system at the World Wide Web Consortium (W3C), showed that connecting logic programming and the Semantic Web was a natural and fruitful step, and in fact the burst of research in Semantic Web developments has eventually started to touch, connect, and reinterpret many topics that were and are mainstream in the logic programming area. We feel this is a necessary progression, as the Semantic Web, and more generally the Web of the future, has a lot to learn from research in logic programming. Conversely, these new scenarios present many new applied problems that can be challenging and rewarding from a logic programming perspective. This calls for tighter interaction between the Web and logic programming, which also motivated this special issue: gathering a selection of the best contributions to showcase the potential of this cross-fertilization.
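
As a toy illustration of this cross-over, the sketch below applies a single Datalog-style rule to a handful of RDF-like triples by naive forward chaining; the facts, rule, and vocabulary are invented, and real systems such as Metalog are of course far richer:

```python
# Facts as (subject, predicate, object) triples, Semantic Web style.
facts = {
    ("alice", "worksFor", "w3c"),
    ("w3c", "locatedIn", "geneva"),
}

# One Datalog-style rule:
#   basedIn(X, Z) :- worksFor(X, Y), locatedIn(Y, Z).
def forward_chain(triples: set) -> set:
    """Naively apply the rule until no new triples are derived."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        new = {
            (x, "basedIn", z)
            for (x, p1, y) in derived if p1 == "worksFor"
            for (y2, p2, z) in derived if p2 == "locatedIn" and y == y2
        }
        if not new <= derived:
            derived |= new
            changed = True
    return derived

for triple in sorted(forward_chain(facts) - facts):
    print("inferred:", triple)   # e.g., ('alice', 'basedIn', 'geneva')
```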

