Virtual Communities and the Alignment of Web Ontologies

Author(s):  
Krzysztof Juszczyszyn

The World Wide Web (WWW) is a global, ubiquitous, and fundamentally dynamic environment for information exchange and processing. By connecting vast numbers of individuals, the Web enables the creation of virtual communities and, over the last ten years, has become a universal collaboration infrastructure. The so-called Semantic Web, a concept proposed by Tim Berners-Lee, is a new WWW architecture that enhances content with formal semantics (Berners-Lee, Hendler, & Lassila, 2001). Hence, Web content is made suitable for machine processing (i.e., it is described by associated metadata), as opposed to HTML documents available only for human consumption. Languages such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL), along with the well-known XML, are used for the description of Web resources. In other words, the Semantic Web is a vision of a future Web in which information is given explicit meaning. This will enable autonomous software agents to reason about Web content and produce intelligent responses to events (Staab, 2002). The ultimate goal of the next-generation Web is to support the creation of virtual communities composed of software agents and humans cooperating within the same environment. Sharing knowledge within such a community requires shared conceptual vocabularies—ontologies, which represent a formal common agreement about the meaning of data (Gomez-Perez & Corcho, 2002). Artificial intelligence defines an ontology as an explicit, formal specification of a shared conceptualization (Studer, Benjamins, & Fensel, 1998). Here, a conceptualization stands for an abstract model of some part of the real world; explicit means that the types of concepts used are explicitly defined; formal refers to the fact that an ontology should be machine readable; and, finally, shared means that the ontology expresses knowledge accepted by all parties. In short, an ontology defines the terms used to describe and represent an area of knowledge. However, shared ontologies must first be constructed using information from many sources, which may be of arbitrary quality. Thus, it is necessary to find a way to seamlessly combine knowledge from many, possibly diverse and heterogeneous, sources. The resultant ontologies enable virtual communities and teams to manage and exchange their knowledge. It should be noted that the word ontology has been used to describe notions with different degrees of structure—from taxonomies (e.g., the Yahoo hierarchy) and metadata schemes (e.g., Dublin Core) to logical theories. The Semantic Web needs ontologies with a significant degree of structure. These should allow the specification of at least the following kinds of things:
• Concepts (which identify classes of things, such as cars or birds) from many domains of interest
• The relationships that can exist among concepts
• The properties (or attributes) those concepts may have
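To make the three requirements above concrete, here is a minimal, hedged sketch (not taken from the article) of how concepts, relationships between concepts, and concept properties can be declared with RDFS and OWL using the Python rdflib library. The ex: namespace and the class and property names are illustrative assumptions.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL, XSD

EX = Namespace("http://example.org/ontology#")  # hypothetical namespace for the example

g = Graph()
g.bind("ex", EX)

# Concepts: classes of things, such as cars
g.add((EX.Vehicle, RDF.type, OWL.Class))
g.add((EX.Car, RDF.type, OWL.Class))
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Car, RDFS.subClassOf, EX.Vehicle))          # a relationship between concepts

# Relationships that can exist among concepts (an object property)
g.add((EX.ownedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.ownedBy, RDFS.domain, EX.Car))
g.add((EX.ownedBy, RDFS.range, EX.Person))

# Properties (attributes) a concept may have (a datatype property)
g.add((EX.numberOfWheels, RDF.type, OWL.DatatypeProperty))
g.add((EX.numberOfWheels, RDFS.domain, EX.Car))
g.add((EX.numberOfWheels, RDFS.range, XSD.integer))

print(g.serialize(format="turtle"))

Serializing the graph in Turtle shows the shared vocabulary that agents and humans in a virtual community could exchange.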



2011, pp. 1-23
Author(s):  
J. Cardoso

This chapter gives an overview of the evolution of the Web. Initially, Web pages were intended only for human consumption and were usually displayed in a Web browser. New Internet business models, such as B2B and B2C, required organizations to search for solutions that enable deep interoperability and integration between their systems and applications. One emergent solution was to describe the information on the Web using semantics and ontologies, so that it could be used by computers not only for display purposes but also for interoperability and integration. The research community developed standards to semantically describe Web information, such as the Resource Description Framework and the Web Ontology Language. Ontologies can assist communication between human beings, achieve interoperability among software systems, and improve the design and quality of software systems. These evolving Semantic Web technologies are already being used to build semantic Web-based systems, ranging from semantic Web services, semantic integration of tourism information sources, and semantic digital libraries to the development of bioinformatics ontologies.


Author(s):  
Komal Dhulekar ◽  
Madhuri Devrankar

The Semantic Web is a concept that enables better machine processing of information on the Web by structuring Web documents in such a way that they become understandable by machines. This can be used to create more complex applications, such as intelligent browsers and more advanced Web agents. Semantic modeling languages like the Resource Description Framework (RDF) and topic maps employ XML syntax to achieve this objective. New tools exploit cross-domain vocabularies to automatically extract and relate the meta information in a new context. Web ontology languages like DAML+OIL extend RDF with richer modeling primitives and provide a technological basis for the Semantic Web. Logic languages for the Semantic Web, which build on RDF and the ontology languages, are also described. Together with digital signatures, they enable a web of trust, which assigns levels of trust to resources and access rights and allows proofs to be generated for actions and resources on the Web.
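As an illustration of the point that RDF employs XML syntax, the following hedged sketch (the example.org vocabulary and resource URIs are invented for the example, not taken from the chapter) parses a small RDF/XML fragment with the Python rdflib library and lists the resulting machine-readable statements.

from rdflib import Graph

# A tiny RDF/XML document; the example.org vocabulary is purely illustrative.
rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/vocab#">
  <rdf:Description rdf:about="http://example.org/page/42">
    <ex:title>Introduction to the Semantic Web</ex:title>
    <ex:author rdf:resource="http://example.org/person/alice"/>
  </rdf:Description>
</rdf:RDF>"""

g = Graph()
g.parse(data=rdf_xml, format="xml")   # the XML syntax carries machine-readable statements

# Each parsed statement is a subject-predicate-object triple.
for subject, predicate, obj in g:
    print(subject, predicate, obj)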


Author(s):  
Anu Sharma ◽  
Aarti Singh

Intelligent semantic approaches (i.e., the semantic web and software agents) are very useful technologies for adding meaning to the web. The adaptive web is a new generation of the web that aims to provide a customized and personalized view of content and services to its users. Integration of these two technologies can further add reasoning and intelligence to the recommendation process. This chapter explores existing work on applying intelligent approaches to web personalization and highlights ample scope for applying intelligent agents in this domain to solve many existing issues, such as personalized content management, user profile learning and modelling, and adaptive interaction with users.


Author(s):  
Franck Cotton ◽  
Daniel Gillman

Linked Open Statistical Metadata (LOSM) is Linked Open Data (LOD) applied to statistical metadata. LOD is a model for identifying, structuring, interlinking, and querying data published directly on the web. It builds on the standards of the semantic web defined by the W3C. LOD uses the Resource Description Framework (RDF), a simple data model that expresses content as predicates linking resources to one another or to literal values. The simplicity of the model makes it capable of representing any data, including metadata. We define statistical data as data produced through some statistical process or intended for statistical analyses, and statistical metadata as metadata describing statistical data. LOSM promotes automated discovery of the meaning and structure of statistical data. Consequently, it helps with understanding and interpreting data and helps prevent inadequate or flawed visualizations of statistical data. This enhances statistical literacy and efforts at visualizing statistics.
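As a concrete, hedged illustration of the RDF model described above (resources linked by predicates either to other resources or to literal values), the sketch below builds a few statistical-metadata triples with the Python rdflib library. The stat: vocabulary and the dataset URI are invented for the example rather than taken from any actual LOSM vocabulary.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, DCTERMS, XSD

STAT = Namespace("http://example.org/stat#")   # hypothetical statistical vocabulary
dataset = URIRef("http://example.org/dataset/unemployment-2020")

g = Graph()
g.bind("stat", STAT)
g.bind("dcterms", DCTERMS)

# Predicates linking a resource to other resources...
g.add((dataset, RDF.type, STAT.Dataset))
g.add((dataset, STAT.measures, STAT.UnemploymentRate))

# ...or to literal values.
g.add((dataset, DCTERMS.title, Literal("Unemployment rate, 2020")))
g.add((dataset, STAT.referencePeriod, Literal("2020", datatype=XSD.gYear)))

print(g.serialize(format="turtle"))

Publishing such triples on the web lets other applications discover the structure and meaning of the dataset automatically.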


Author(s):  
Kaleem Razzaq Malik ◽  
Tauqir Ahmad

This chapter clearly shows the need for better techniques for mapping a Relational Database (RDB) all the way to the Resource Description Framework (RDF), including coverage of each data model's limitations and benefits. Each form of data being transformed has its own importance in the field of data science. RDB is the well-known back-end storage for information used by many kinds of applications, especially web, desktop, remote, embedded, and network-based applications. The eXtensible Markup Language (XML) is the well-known standard for transferring data among all computer-related resources, regardless of their type, shape, place, capability, and capacity, because it is in an application-understandable form. Finally, RDF is the semantically enriched and simple format available in the Semantic Web; it comes in handy when linked data are used to make intelligent inference better and more efficient. Multiple algorithms were built to support the experiments of this study and to validate its findings.
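As a hedged illustration of the kind of RDB-to-RDF mapping the chapter discusses (not the authors' own algorithms), the sketch below reads rows from an in-memory SQLite table and emits one RDF resource per row with the Python rdflib library. The table schema and the ex: vocabulary are assumptions made for the example.

import sqlite3
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/schema#")   # hypothetical vocabulary

# A toy relational table standing in for a real back-end database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Alice", 52000.0), (2, "Bob", 48000.0)])

g = Graph()
g.bind("ex", EX)

# Naive mapping: each row becomes a resource, each column a property.
for row_id, name, salary in conn.execute("SELECT id, name, salary FROM employee"):
    subject = URIRef(f"http://example.org/employee/{row_id}")
    g.add((subject, RDF.type, EX.Employee))
    g.add((subject, EX.name, Literal(name)))
    g.add((subject, EX.salary, Literal(salary, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))

Real mappings (e.g., those following the W3C R2RML recommendation) add configurable templates for URIs, datatypes, and joins, but the row-to-resource idea is the same.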


Author(s):  
Jakub Flotyński ◽  
Athanasios G. Malamos ◽  
Don Brutzman ◽  
Felix G. Hamza-Lup ◽  
Nicholas F. Polys ◽  
...  

The implementation of virtual and augmented reality environments on the web requires integration between 3D technologies and web technologies, which are increasingly focused on collaboration, annotation, and semantics. Thus, combining VR and AR with semantics is emerging as a significant trend in the development of the web. The use of the Semantic Web may improve the creation, representation, indexing, searching, and processing of 3D web content by linking the content with formal and expressive descriptions of its meaning. Although several semantic approaches have been developed for 3D content, they are not explicitly linked to the available well-established 3D technologies, cover a limited set of 3D components and properties, and do not combine domain-specific and 3D-specific semantics. In this chapter, the authors present the background, concepts, and development of the Semantic Web3D approach. It enables ontology-based representation of 3D content and introduces a novel framework that provides 3D structures in an RDF-based, semantics-friendly format.
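Purely as an illustration of the general idea of linking 3D content with semantic descriptions (not the Semantic Web3D framework itself), the sketch below annotates a hypothetical 3D scene node with both 3D-specific and domain-specific RDF statements using the Python rdflib library. All URIs and property names are invented for the example.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

X3DO = Namespace("http://example.org/x3d-ontology#")   # hypothetical 3D vocabulary
DOM = Namespace("http://example.org/museum#")           # hypothetical domain vocabulary

scene_object = URIRef("http://example.org/scene/statue-01")

g = Graph()
g.bind("x3do", X3DO)
g.bind("dom", DOM)

# 3D-specific semantics: geometry and appearance of the scene node.
g.add((scene_object, RDF.type, X3DO.Shape))
g.add((scene_object, X3DO.geometry, X3DO.IndexedFaceSet))
g.add((scene_object, X3DO.materialDiffuseColor, Literal("0.8 0.8 0.7")))

# Domain-specific semantics: what the 3D object represents.
g.add((scene_object, RDF.type, DOM.Exhibit))
g.add((scene_object, RDFS.label, Literal("Marble statue, 2nd century AD")))
g.add((scene_object, DOM.exhibitedIn, URIRef("http://example.org/museum/room-3")))

print(g.serialize(format="turtle"))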


Author(s):  
Adélia Gouveia ◽  
Jorge Cardoso

The World Wide Web (WWW) emerged in 1989, developed by Tim Berners-Lee, who proposed to build a system for sharing information among physicists at CERN (Conseil Européen pour la Recherche Nucléaire), the world’s largest particle physics laboratory. Currently, the WWW is primarily composed of documents written in HTML (hypertext markup language), a language that is useful for visual presentation (Cardoso & Sheth, 2005). HTML is a set of “markup” symbols contained in a Web page intended for display in a Web browser. Most of the information on the Web is designed only for human consumption. Humans can read Web pages and understand them, but their inherent meaning is not presented in a way that allows interpretation by computers (Cardoso & Sheth, 2006). Since the visual Web does not allow computers to understand the meaning of Web pages (Cardoso, 2007), the W3C (World Wide Web Consortium) started to work on the concept of the Semantic Web, with the objective of developing approaches and solutions for data integration and interoperability. The goal was to develop ways to allow computers to understand Web information. The aim of this chapter is to present the Web Ontology Language (OWL), which can be used to develop Semantic Web applications that understand information and data on the Web. This language was proposed by the W3C and was designed for publishing and sharing data, and for enabling computers to automatically understand that data using ontologies. To fully comprehend OWL, we first need to study its origin and the basic building blocks of the language. Therefore, we will start by briefly introducing XML (extensible markup language), RDF (resource description framework), and RDF Schema (RDFS). These concepts are important since OWL is written in XML and is an extension of RDF and RDFS.
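To give a flavour of how OWL layers on top of XML, RDF, and RDFS, the following hedged sketch (the library-style vocabulary is invented for the example, not taken from the chapter) declares an RDFS subclass hierarchy and an OWL object property with the Python rdflib library, then serializes the result as RDF/XML.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/library#")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# RDFS layer: classes and a subclass relationship.
g.add((EX.Publication, RDF.type, RDFS.Class))
g.add((EX.Book, RDF.type, RDFS.Class))
g.add((EX.Book, RDFS.subClassOf, EX.Publication))

# OWL layer: richer modeling primitives built on top of RDF and RDFS.
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.hasAuthor, RDF.type, OWL.ObjectProperty))
g.add((EX.hasAuthor, RDFS.domain, EX.Publication))
g.add((EX.hasAuthor, RDFS.range, EX.Person))

# OWL documents are themselves written in XML: serialize the ontology as RDF/XML.
print(g.serialize(format="xml"))

The printed RDF/XML shows all three building blocks at once: XML as the concrete syntax, RDF as the triple data model, and RDFS/OWL as the vocabulary for classes and properties.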


Author(s):  
Christopher Walton

In the introductory chapter of this book, we discussed the means by which knowledge can be made available on the Web, that is, the representation of knowledge in a form in which it can be automatically processed by a computer. To recap, we identified two essential steps that were deemed necessary to achieve this task:
1. We discussed the need to agree on a suitable structure for the knowledge that we wish to represent. This is achieved through the construction of a semantic network, which defines the main concepts of the knowledge and the relationships between these concepts. We presented an example network that contained the main concepts needed to differentiate between kinds of cameras. Our network is a conceptualization, or an abstract view of a small part of the world. A conceptualization is defined formally in an ontology, which is in essence a vocabulary for knowledge representation.
2. We discussed the construction of a knowledge base, which is a store of knowledge about a domain in machine-processable form; essentially a database of knowledge. A knowledge base is constructed through the classification of a body of information according to an ontology. The result will be a store of facts and rules that describe the domain. Our example described the classification of different camera features to form a knowledge base. The knowledge base is expressed formally in the language of the ontology over which it is defined.
In this chapter we elaborate on these two steps to show how we can define ontologies and knowledge bases specifically for the Web. This will enable us to construct Semantic Web applications that make use of this knowledge. The chapter is devoted to a detailed explanation of the syntax and pragmatics of the RDF, RDFS, and OWL Semantic Web standards. The Resource Description Framework (RDF) is an established standard for knowledge representation on the Web. Taken together with the associated RDF Schema (RDFS) standard, we have a language for representing simple ontologies and knowledge bases on the Web.
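Following the chapter's camera example, here is a minimal, hedged sketch (the class and property names are my own assumptions, not the book's) of the two steps in the Python rdflib library: an RDFS ontology defining the camera concepts, and a small knowledge base classifying a concrete camera according to it.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS, XSD

CAM = Namespace("http://example.org/camera#")   # hypothetical camera ontology

g = Graph()
g.bind("cam", CAM)

# Step 1, the ontology: concepts and relationships for kinds of cameras.
g.add((CAM.Camera, RDF.type, RDFS.Class))
g.add((CAM.DigitalCamera, RDF.type, RDFS.Class))
g.add((CAM.DigitalCamera, RDFS.subClassOf, CAM.Camera))
g.add((CAM.resolutionMegapixels, RDF.type, RDF.Property))
g.add((CAM.resolutionMegapixels, RDFS.domain, CAM.DigitalCamera))
g.add((CAM.resolutionMegapixels, RDFS.range, XSD.decimal))

# Step 2, the knowledge base: facts about a particular camera, classified by the ontology.
my_camera = URIRef("http://example.org/camera/instance/xy-100")
g.add((my_camera, RDF.type, CAM.DigitalCamera))
g.add((my_camera, CAM.resolutionMegapixels, Literal("24.2", datatype=XSD.decimal)))

# Query the knowledge base for digital cameras and their resolution.
query = """
SELECT ?camera ?mp WHERE {
  ?camera a <http://example.org/camera#DigitalCamera> ;
          <http://example.org/camera#resolutionMegapixels> ?mp .
}
"""
for camera, mp in g.query(query):
    print(camera, mp)

Here the same graph holds both the ontology and the facts, which is exactly how RDF and RDFS are typically used together on the Web.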


Author(s):  
Qiumei Pu ◽  
Yongcun Cao ◽  
Xiuqin Pan ◽  
Siyao Fu ◽  
Zengguang Hou

Agents and ontologies are distinct technologies that arose independently of each other, each with its own standards and specifications. The Semantic Web is one of the popular research areas these days; it builds on the current Web, adding more semantics to it for the purpose of building ontologies of Web content. In this regard, Web application programs can achieve cross-platform computation by taking advantage of ontologies. Agents, in turn, raise the level of abstraction of software itself, and, as is well known, the negotiation protocol is a basic principle of electronic commerce that has a direct impact on the efficiency of negotiation. This study examines a communication architecture with a negotiation protocol on the Semantic Web. More precisely, agents compute with ontologies: the authors define an agent communication ontology for this communication framework, and the Semantic Web uses an ontology to describe the negotiation protocol. In this context, buyers and sellers are able to improve their semantic understanding during the negotiation process. It can also provide an intelligent platform for exchanging information with a shared understanding of the communication content in an electronic negotiation service.
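As a purely illustrative, hedged sketch of describing a negotiation message with an ontology (not the authors' actual communication ontology), the code below encodes a buyer's proposal as RDF triples using the Python rdflib library. The neg: vocabulary, agent URIs, and values are assumptions made for the example.

from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

NEG = Namespace("http://example.org/negotiation#")   # hypothetical negotiation ontology

msg = URIRef("http://example.org/negotiation/message/001")

g = Graph()
g.bind("neg", NEG)

# A buyer's proposal, described in terms a seller agent can interpret.
g.add((msg, RDF.type, NEG.Propose))
g.add((msg, NEG.sender, URIRef("http://example.org/agent/buyer-7")))
g.add((msg, NEG.receiver, URIRef("http://example.org/agent/seller-3")))
g.add((msg, NEG.item, URIRef("http://example.org/product/laptop-x200")))
g.add((msg, NEG.offeredPrice, Literal("650.00", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))

Because both parties commit to the same ontology, the seller agent can interpret the message type, the item, and the offered price without ambiguity.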

