Integrated Ontologies for Spatial Scene Descriptions

2013 ◽  
pp. 1751-1763 ◽  
Author(s):  
Sotirios Batsakis ◽  
Euripides G.M. Petrakis

Scene descriptions are typically expressed in natural language texts and are integrated within Web pages, books, newspapers, and other means of content dissemination. The capabilities of such media can be enhanced to support automated content processing and communication between people or machines by extracting scene contents and expressing them in ontologies: formal, semantically rich representations interpretable by both people and machines. Ontologies enable more effective querying, reasoning, and general use of content, and allow the quality and delivery of information to be standardized across communicating information sources. Ontologies are defined using the well-established standards of the Semantic Web for expressing scene descriptions in application fields such as Geographic Information Systems, medicine, and the World Wide Web (WWW). Ontologies are suitable not only for describing static scenes with static objects (e.g., in photographs) but also for representing dynamic events whose objects and properties change over time (e.g., moving objects in a video). Representation of both static and dynamic scenes by ontologies, as well as querying and reasoning over static and dynamic ontologies, are important issues for further research. These are exactly the problems this chapter deals with.
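As a rough illustration of what an ontology-based scene description looks like in practice, the sketch below encodes one static and one time-qualified spatial relation as RDF triples and queries them with SPARQL. It is a minimal sketch using the rdflib Python library (assumed installed); the namespace and every class and property name are hypothetical, not the ontology the chapter defines.

```python
# Minimal sketch, assuming rdflib is installed (pip install rdflib).
# The namespace, classes, and properties below are hypothetical
# illustrations, not the chapter's own ontology.
from rdflib import Graph, Literal, Namespace, RDF

SCENE = Namespace("http://example.org/scene#")
g = Graph()

# Static scene fact: a car lies north of a building.
g.add((SCENE.car1, RDF.type, SCENE.Car))
g.add((SCENE.building1, RDF.type, SCENE.Building))
g.add((SCENE.car1, SCENE.northOf, SCENE.building1))

# Dynamic scene fact: reify the relation and attach a validity
# interval, so the relation can change over the course of a video.
g.add((SCENE.rel1, RDF.type, SCENE.SpatialRelation))
g.add((SCENE.rel1, SCENE.hasSubject, SCENE.car1))
g.add((SCENE.rel1, SCENE.hasObject, SCENE.building1))
g.add((SCENE.rel1, SCENE.validFrom, Literal("00:00:05")))
g.add((SCENE.rel1, SCENE.validTo, Literal("00:00:12")))

# Querying: which objects lie north of building1?
q = """
PREFIX scene: <http://example.org/scene#>
SELECT ?x WHERE { ?x scene:northOf scene:building1 }
"""
for row in g.query(q):
    print(row.x)  # http://example.org/scene#car1
```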


Author(s):  
Mu-Chun Su ◽  
Shao-Jui Wang ◽  
Chen-Ko Huang ◽  
Pa-Chun Wang ◽ 
...  

Most of the dramatically increased amount of information available on the World Wide Web is provided via HTML and formatted for human browsing rather than for software programs. This situation calls for a tool that automatically extracts information from semistructured Web information sources, increasing the usefulness of value-added Web services. We present a signal-representation-based parser (SIRAP) that breaks Web pages up into logically coherent groups, such as groups of information related to a single entity. Templates for records with different tag structures are generated incrementally by a Histogram-Based Correlation Coefficient (HBCC) algorithm; records on a Web page are then detected efficiently by matching them against the generated templates. Hundreds of Web pages from 17 state-of-the-art search engines were used to demonstrate the feasibility of our approach.
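The HBCC details are in the paper itself; as a hedged sketch of the general idea only, the code below represents each candidate record by a histogram of its HTML tag counts and compares two candidates with Pearson's correlation coefficient. Every name and detail here is an illustrative assumption, not the authors' algorithm.

```python
# Hedged sketch: tag-count histograms compared by Pearson correlation.
# Illustrative only; not the authors' HBCC algorithm.
from collections import Counter
from html.parser import HTMLParser
import math

class TagHistogram(HTMLParser):
    """Counts opening tags in an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        self.counts[tag] += 1

def histogram(fragment):
    parser = TagHistogram()
    parser.feed(fragment)
    return parser.counts

def correlation(h1, h2):
    """Pearson correlation of two tag-count histograms."""
    tags = sorted(set(h1) | set(h2))
    n = len(tags)
    x = [h1[t] for t in tags]
    y = [h2[t] for t in tags]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

rec1 = "<tr><td><a href='#'>Result 1</a></td><td>snippet</td></tr>"
rec2 = "<tr><td><a href='#'>Result 2</a></td><td>snippet</td></tr>"
print(correlation(histogram(rec1), histogram(rec2)))  # ~1.0: same tag structure
```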


2000 ◽  
Vol 09 (04) ◽  
pp. 361-382 ◽  
Author(s):  
DIETER FENSEL ◽  
JÜRGEN ANGELE ◽  
STEFAN DECKER ◽  
MICHAEL ERDMANN ◽  
HANS-PETER SCHNURR ◽  
...  

Ontobroker applies Artificial Intelligence techniques to improve access to heterogeneous, distributed, and semi-structured information sources as they are presented in the World Wide Web or organization-wide intranets. It relies on the use of ontologies to annotate web pages, formulate queries, and derive answers. In this paper we will briefly sketch Ontobroker. Then we will discuss its main shortcomings, i.e., the lessons we learned from this exercise. We will also show how On2broker overcomes these limitations. Most important are the separation of the query and inference engines and the integration of new web standards like XML and RDF.
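To make the architectural point concrete, here is a small, hedged sketch of the separation the paper argues for: an inference engine first materializes derived facts from page annotations, and a query engine then answers against the materialized set without invoking inference. The facts, rule, and representation are illustrative assumptions, not Ontobroker's actual formalism.

```python
# Illustrative facts extracted from annotated pages (predicate, subject, object).
facts = {
    ("worksAt", "angele", "uni_karlsruhe"),
    ("locatedIn", "uni_karlsruhe", "germany"),
}

def forward_chain(facts):
    """Inference engine: materialize derived facts with one illustrative
    rule: worksAt(X, O) & locatedIn(O, C) -> basedIn(X, C)."""
    derived = set(facts)
    while True:
        new = {
            ("basedIn", person, country)
            for (p1, person, org) in derived if p1 == "worksAt"
            for (p2, org2, country) in derived
            if p2 == "locatedIn" and org2 == org
        }
        if new <= derived:
            return derived
        derived |= new

def query(derived, predicate):
    """Query engine: a plain lookup over the already-materialized facts;
    it never has to run the inference machinery itself."""
    return [(s, o) for (p, s, o) in derived if p == predicate]

knowledge = forward_chain(facts)       # inference runs once, up front
print(query(knowledge, "basedIn"))     # [('angele', 'germany')]
```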


2003 ◽  
Vol 92 (3_suppl) ◽  
pp. 1091-1096 ◽  
Author(s):  
Nobuhiko Fujihara ◽  
Asako Miura

The influence of task type on searches of the World Wide Web using search engines, without limitation of the search domain, was investigated. Nine graduate and undergraduate students of psychology (1 woman and 8 men, M age = 25.0 yr., SD = 2.1) participated. Their performance using search engines on a closed task with only one correct answer was compared with their performance on an open task with several possible answers. Analysis showed that the number of actions was larger for the closed task (M = 91) than for the open task (M = 46.1). Behaviors such as selection of keywords (on average, 7.9% of all actions for the closed task and 16.7% for the open task) and pressing of the browser's back button (on average, 40.3% of all actions for the closed task and 29.6% for the open task) also differed. On the other hand, behaviors such as selection of hyperlinks, pressing of the home button, and the number of browsed pages were similar for both tasks. Search behavior was thus influenced by task type when students searched for information without limitations placed on the information sources.


2002 ◽  
Vol 7 (1) ◽  
pp. 9-25 ◽  
Author(s):  
Moses Boudourides ◽  
Gerasimos Antypas

In this paper we present a simple simulation of the World-Wide Web, in which one observes the appearance of web pages belonging to different web sites, covering a number of different thematic topics, and possessing links to other web pages. The goal of our simulation is to reproduce the form of the observed World-Wide Web and of its growth using a small number of simple assumptions. In our simulation, existing web pages may generate new ones as follows: First, each web page is equipped with a topic concerning its contents. Second, links between web pages are established according to common topics. Next, new web pages may be randomly generated, equipped with a topic, and assigned to web sites. By repeated iterations of these rules, our simulation appears to exhibit the observed structure of the World-Wide Web and, in particular, a power-law type of growth. In order to visualise the network of web pages, we have followed N. Gilbert's (1997) methodology of scientometric simulation, assuming that web pages can be represented by points in the plane. Furthermore, the simulated graph is found to possess the small-world property, as is the case with a large number of other complex networks.
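A minimal sketch of a growth process in the spirit of the rules just described: pages receive topics, new pages link to existing same-topic pages, and the network grows iteratively. The preferential-attachment weighting and all parameter values are assumptions added to make a heavy-tailed in-degree distribution visible; they are not the authors' exact model.

```python
# Topic-driven web-growth sketch; parameters are illustrative.
import random
from collections import Counter

N_TOPICS = 10          # thematic topics
N_PAGES = 2000         # pages added over the run
LINKS_PER_PAGE = 3     # links a new page tries to create

indegree = Counter()                       # page id -> incoming links
by_topic = {t: [] for t in range(N_TOPICS)}

for new_id in range(N_PAGES):
    topic = random.randrange(N_TOPICS)     # rule 1: each page gets a topic
    candidates = by_topic[topic]           # rule 2: link only within a topic
    if candidates:
        # Weighting by current in-degree (preferential attachment) is the
        # added assumption that produces the heavy, power-law-like tail.
        weights = [indegree[c] + 1 for c in candidates]
        k = min(LINKS_PER_PAGE, len(candidates))
        for target in random.choices(candidates, weights=weights, k=k):
            indegree[target] += 1
    by_topic[topic].append(new_id)         # rule 3: the new page joins the web

# A heavy tail here is the power-law signature the paper reports.
dist = Counter(indegree.values())
for degree in sorted(dist):
    print(f"{dist[degree]:5d} pages with in-degree {degree}")
```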


2011 ◽  
pp. 203-212
Author(s):  
Luis V. Casaló ◽  
Carlos Flavián ◽  
Miguel Guinalíu

Individuals are increasingly turning to computer-mediated communication in order to get information on which to base their decisions. For instance, many consumers use newsgroups, chat rooms, forums, e-mail list servers, and other online formats to share ideas, build communities, and contact other consumers who are seen as more objective information sources (Kozinets, 2002). These social groups have traditionally been called virtual communities. The virtual community concept is almost as old as the concept of the Internet. However, the exponential development of these structures occurred during the nineties (Flavián & Guinalíu, 2004), owing to the appearance of the World Wide Web and the spread of other Internet tools such as e-mail and chat. This expansion is explained by the advantages virtual communities generate for both their members and the organizations that create them.


2011 ◽  
pp. 178-184
Author(s):  
David Parry

The World Wide Web (WWW) is a critical source of information for healthcare. Because of this, systems that increase the efficiency and effectiveness of information retrieval and discovery are essential. Increased intelligence in web pages will allow information sharing and discovery to become vastly more efficient. The semantic web is an umbrella term for a series of standards and technologies that will support this development.


2020 ◽  
pp. 143-158
Author(s):  
Chris Bleakley

Chapter 8 explores the arrival of the World Wide Web, Amazon, and Google. The web allows users to display "pages" of information retrieved from remote computers by means of the Internet. Inventor Tim Berners-Lee released the first web software for free, setting in motion an explosion in Internet usage. Seeing the opportunity of a lifetime, Jeff Bezos set up Amazon as an online bookstore. Amazon's success was accelerated by a product recommender algorithm that selectively targets advertising at users. By the mid-1990s there were so many web sites that users often couldn't find what they were looking for. Stanford PhD student Larry Page invented an algorithm for ranking search results based on the importance and relevance of web pages. Page and fellow student Sergey Brin established a company to bring their search algorithm to the world. Page and Brin, the founders of Google, are now worth US$35-40 billion each.
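The ranking algorithm referred to here is PageRank. Below is a minimal power-iteration sketch of its core idea, that a page is important when important pages link to it; the damping factor and iteration count are conventional textbook choices, not values taken from the chapter.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from a uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:               # a page passes its rank
                    new_rank[t] += share        # on to the pages it cites
            else:
                for p in pages:                 # dangling page: spread its
                    new_rank[p] += damping * rank[page] / n  # rank evenly
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))   # "c" ranks highest: most pages cite it
```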


Author(s):  
Alison Harcourt ◽  
George Christou ◽  
Seamus Simpson

This chapter explains one of the most important components of the web: the development and standardization of Hypertext Markup Language (HTML) and the Document Object Model (DOM), which are used for creating web pages and applications. In 1994, Tim Berners-Lee established the World Wide Web Consortium (W3C) to work on HTML development. In 1995, the W3C decided to introduce a new standard, XHTML 2.0. However, it was incompatible with the older HTML/XHTML versions. This led to the establishment of the Web Hypertext Application Technology Working Group (WHATWG), which worked externally to the W3C. WHATWG developed HTML5, which was adopted by the major browser developers Google, Opera, Mozilla, IBM, Microsoft, and Apple. For this reason, the W3C decided to work on HTML5, leading to a joint WHATWG/W3C working group. This chapter explains the development of HTML and WHATWG's Living Standard, including the ongoing splits and agreements between the two fora. It explains how this division of labour led the W3C to focus on the main areas of web architecture, the semantic web, the web of devices, payments applications, and web and television (TV) standards. This has led to the spillover of work to the W3C from the national sphere, notably in the development of copyright protection for TV streaming.

