Innovations, Developments, and Applications of Semantic Web and Information Systems (Advances in Web Technologies and Engineering)

Published by IGI Global. ISBN 9781522550426, 9781522550433.

Total documents: 15 (five years: 0). H-index: 2 (five years: 0).

Author(s): Ying Zhang, Chaopeng Li, Na Chen, Shaowen Liu, Liming Du, et al.

Since large amounts of geospatial data are produced by various sources, geospatial data integration is difficult because of a shortage of semantics. Although standardised data formats and data access protocols, such as the Web Feature Service (WFS), enable end users to access heterogeneous data stored in different formats from various sources, doing so remains time-consuming and ineffective due to the lack of semantics. To solve this problem, a prototype for geospatial data integration is proposed that addresses four problems: geospatial data retrieval, modeling, linking, and integration. Four kinds of geospatial data sources are adopted to evaluate the performance of the proposed approach. The experimental results illustrate that the proposed linking method achieves high performance in generating matched candidate record pairs in terms of Reduction Ratio (RR), Pairs Completeness (PC), Pairs Quality (PQ), and F-score. The integration results show that each data source gains substantial Complementary Completeness (CC) and Increased Completeness (IC).
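The linking metrics named in this abstract are standard record-linkage measures. A minimal sketch of how they are computed, on hypothetical toy data (not the chapter's actual data or implementation):

```python
def linkage_metrics(candidates, true_matches, total_pairs):
    """Standard record-linkage quality measures.

    candidates:   set of candidate record pairs produced by the linker
    true_matches: set of truly matching pairs in the data
    total_pairs:  size of the full cross product of records
    """
    tp = len(candidates & true_matches)               # true matches retained
    rr = 1 - len(candidates) / total_pairs            # Reduction Ratio
    pc = tp / len(true_matches)                       # Pairs Completeness (recall)
    pq = tp / len(candidates)                         # Pairs Quality (precision)
    f = 2 * pc * pq / (pc + pq) if pc + pq else 0.0   # F-score
    return rr, pc, pq, f

# Hypothetical example: 4 candidate pairs out of 100 possible,
# 3 of which are among the 5 true matches.
cands = {(1, 2), (3, 4), (5, 6), (7, 8)}
truth = {(1, 2), (3, 4), (5, 6), (9, 10), (11, 12)}
rr, pc, pq, f = linkage_metrics(cands, truth, total_pairs=100)
```

High RR with high PC means the blocking step discards most non-matching pairs without losing true matches.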


Author(s): Imelda Escamilla, Miguel Torres Ruíz, Marco Moreno Ibarra, Vladimir Luna Soto, Rolando Quintero, et al.

Human ability to understand approximate references to locations, disambiguated by means of context and reasoning about spatial relationships, is key to describing spatial environments and sharing information about them. In this paper, we propose an approach to geocoding that takes advantage of the spatial relationships contained in the text of tweets, using the Semantic Web, ontologies, and spatial analysis. Microblog text has special characteristics (e.g., slang, abbreviations, and acronyms) and thus represents a special variation of natural language. The main objective of this work is to associate spatial relationships found in text with a spatial footprint in order to determine the location of the event described in the tweet. The feasibility of the proposal is demonstrated using a corpus of 200,000 tweets posted in Spanish related to traffic events in Mexico City.
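The core step described here, resolving a spatial-relation phrase plus a landmark into a spatial footprint, can be sketched minimally as follows. The gazetteer entries, relation names, and buffer sizes below are hypothetical placeholders, not the paper's actual resources:

```python
# Toy gazetteer of landmark coordinates (lat, lon) and a mapping from
# Spanish spatial-relation phrases to a buffer radius in degrees.
GAZETTEER = {"Angel de la Independencia": (19.427, -99.1677)}
RELATIONS = {"cerca de": 0.005, "frente a": 0.001}

def footprint(relation, landmark):
    """Return a (lat, lon, radius) footprint for '<relation> <landmark>',
    i.e., a buffered zone around the landmark sized by the relation."""
    lat, lon = GAZETTEER[landmark]
    return lat, lon, RELATIONS[relation]

fp = footprint("cerca de", "Angel de la Independencia")
```

A real pipeline would first normalize the microblog text (slang, abbreviations) before matching relation phrases and landmark names.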


Author(s): Floriano Scioscia, Michele Ruta, Giuseppe Loseto, Filippo Gramegna, Saverio Ieva, et al.

The Semantic Web of Things (SWoT) aims to support smart, semantics-enabled applications and services in pervasive contexts. Due to architectural and performance issues, most Semantic Web reasoners are impractical to port to such contexts: they are resource-intensive and designed primarily for standard inference tasks on large ontologies. By contrast, SWoT use cases generally require quick decision support through semantic matchmaking in resource-constrained environments. This paper describes Mini-ME (the Mini Matchmaking Engine), a mobile inference engine designed from the ground up for the SWoT. It supports Semantic Web technologies and implements both standard (subsumption, satisfiability, classification) and non-standard (abduction, contraction, covering, bonus, difference) inference services for moderately expressive knowledge bases. In addition to an architectural and functional description, usage scenarios and an experimental performance evaluation are presented on PC (against other popular Semantic Web reasoners), smartphone, and embedded single-board computer testbeds.
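Subsumption, the first standard inference service listed above, asks whether one concept description is more general than another. A minimal illustration (not Mini-ME's actual algorithm) is structural subsumption restricted to conjunctions of atomic concepts, where the concept names below are hypothetical:

```python
def subsumes(c, d):
    """C subsumes D (C is more general) if every atomic conjunct of C
    also appears in D. Valid only for plain conjunctions of atomic
    concepts; a real reasoner handles far more expressive logics."""
    return set(c) <= set(d)

# Hypothetical concepts, each modeled as a set of atomic conjuncts:
device = {"Device"}
sensor = {"Device", "Sensor"}
temp_sensor = {"Device", "Sensor", "MeasuresTemperature"}
```

For example, `device` subsumes `sensor`, which in turn subsumes `temp_sensor`, giving the classification hierarchy a reasoner would compute.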


Author(s): Wei Shen, Jianyong Wang, Ping Luo, Min Wang

Relation extraction from Web data has attracted much attention recently. However, little work has been done on enterprise data, despite the urgent need for such work in real applications (e.g., e-discovery). One distinct characteristic of enterprise data (in comparison with Web data) is its low redundancy. Previous work on relation extraction from Web data largely relies on the data's high redundancy and thus cannot be applied to enterprise data effectively. This chapter reviews related work on relation extraction and introduces REACTOR, an unsupervised hybrid framework for semantic relation extraction over enterprise data. REACTOR combines a statistical method, classification, and clustering to automatically identify various types of relations among entities appearing in enterprise data. REACTOR was evaluated on a real-world enterprise data set from HP that contains over three million pages, and the experimental results show its effectiveness.
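The statistical component of such a hybrid pipeline typically scores how strongly two entities are associated before any relation is classified. One common choice (shown here as an illustration; the abstract does not specify REACTOR's statistic, and the counts below are hypothetical) is pointwise mutual information over co-occurrence counts:

```python
import math
from collections import Counter

def pmi(pair_counts, entity_counts, total):
    """Pointwise mutual information for each co-occurring entity pair:
    log of how much more often the pair co-occurs than expected if
    the two entities were independent."""
    scores = {}
    for (a, b), n_ab in pair_counts.items():
        p_ab = n_ab / total
        p_a = entity_counts[a] / total
        p_b = entity_counts[b] / total
        scores[(a, b)] = math.log(p_ab / (p_a * p_b))
    return scores

# Hypothetical counts over 100 documents:
pairs = Counter({("HP", "printer"): 20, ("HP", "weather"): 1})
entities = Counter({"HP": 40, "printer": 25, "weather": 10})
scores = pmi(pairs, entities, total=100)
```

Pairs with high positive PMI are promising candidates for holding a genuine relation; low or negative scores suggest chance co-occurrence, which matters when redundancy is low.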


Author(s): Francesco Corcoglioniti, Marco Rospocher, Roldano Cattoni, Bernardo Magnini, Luciano Serafini

This chapter describes the KnowledgeStore, a scalable, fault-tolerant, open-source storage system grounded in Semantic Web technologies that jointly stores, manages, retrieves, and queries interlinked structured and unstructured data; it is designed especially to manage all the data involved in Knowledge Extraction applications. The chapter presents the concept, design, functions, and implementation of the KnowledgeStore, and reports on its concrete usage in four application scenarios within the NewsReader EU project, where it has been successfully used to store, and to support querying of, millions of news articles interlinked with billions of RDF triples, both extracted from text and imported from Linked Open Data sources.
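The key idea, keeping structured triples linked to the unstructured documents they came from, can be illustrated with a toy in-memory store. All names and data below are hypothetical, and a SPARQL-capable triple store would replace the pattern matcher in practice:

```python
# Toy store: RDF-style triples annotated with the news article each
# was extracted from, so queries can trace answers back to sources.
triples = [
    ("ex:Obama", "ex:visited", "ex:Berlin", "news-001"),
    ("ex:Obama", "ex:metWith", "ex:Merkel", "news-001"),
    ("ex:Merkel", "ex:bornIn", "ex:Hamburg", "news-002"),
]

def query(s=None, p=None, o=None):
    """Return (subject, predicate, object, source-doc) rows matching a
    pattern; None acts as a wildcard, as in a SPARQL triple pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

rows = query(s="ex:Obama")
```

The fourth column is what makes the storage "joint": every structured fact remains navigable back to the unstructured text that supports it.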


Author(s): Dora Melo, Irene Pimenta Rodrigues, Vitor Beires Nogueira

The Semantic Web, used as a knowledge base, gives Question Answering systems the capability to go well beyond the usual word matching in documents and to find more accurate answers, without requiring the user to interpret the returned documents. In this chapter, the authors introduce a Dialogue Manager that, through analysis of the question and of the type of expected answer, provides accurate answers to questions posed in natural language. The Dialogue Manager represents not only the semantics of the questions but also the structure of the discourse, including the user's intentions and the questions' context, adding the ability to deal with multiple answers and to provide justified answers. System performance is evaluated by comparison with similar question answering systems. Although the test suite is small, the results obtained are very promising.
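One concrete ingredient of "the type of expected answer" is mapping a question's interrogative word to an answer type before querying the knowledge base. A deliberately minimal sketch (the mapping and fallback below are hypothetical, not the authors' actual type system):

```python
# Hypothetical wh-word -> expected-answer-type mapping.
ANSWER_TYPE = {"who": "Person", "where": "Location", "when": "Date"}

def expected_answer_type(question):
    """Classify a natural-language question by its leading wh-word,
    falling back to a generic 'Entity' type."""
    first = question.lower().split()[0]
    return ANSWER_TYPE.get(first, "Entity")
```

A full Dialogue Manager would refine this with the question's semantics and the discourse context, e.g. resolving "And where was she born?" against the previous answer.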


Author(s): Zenun Kastrati, Ali Shariq Imran, Sule Yildirim Yayilgan

The wide use of ontologies in different applications has resulted in a plethora of automatic approaches for ontology population and enrichment. Ontology enrichment is an iterative process in which an existing ontology is continuously updated with new concepts. A key aspect of the ontology enrichment process is the concept learning approach, which can be linguistic, statistical, or hybrid, the last employing both linguistic and statistical techniques. This chapter presents a concept enrichment model that combines contextual and semantic information about terms. The proposed model, called SEMCON, employs a hybrid concept learning approach utilizing functionality from both statistical and linguistic ontology learning techniques. The model introduces, for the first time, two statistical features that have been shown to improve the overall ranking of terms highly relevant for concept enrichment. The chapter also gives some recommendations and possible future research directions based on the discussion.
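The hybrid idea, combining a contextual (statistical) score with a semantic score to rank candidate terms, can be sketched as a weighted sum. The terms, scores, and weight below are hypothetical illustrations, not SEMCON's actual features:

```python
def hybrid_rank(terms, alpha=0.5):
    """Rank candidate terms for concept enrichment by a weighted sum
    of a contextual (statistical) score and a semantic-similarity
    score; alpha balances the two components."""
    combined = {t: alpha * ctx + (1 - alpha) * sem
                for t, (ctx, sem) in terms.items()}
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical (contextual, semantic) scores in [0, 1] for candidates
# to enrich a "multimedia" concept:
candidates = {"codec": (0.9, 0.8), "bitrate": (0.7, 0.9), "banana": (0.2, 0.1)}
ranking = hybrid_rank(candidates)
```

Terms that score well on both signals rise to the top, which is the intuition behind combining the two sources of evidence rather than using either alone.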


Author(s): Andrea Ko, Saira Gillani

Manual ontology population and enrichment is a complex, labor-intensive task that requires professional experience. The authors' paper deals with the challenges of, and possible solutions for, semi-automatic ontology enrichment and population. ProMine makes two main contributions: a semantic-based text mining approach for automatically identifying domain-specific knowledge elements, and the automatic categorization of these extracted knowledge elements using Wiktionary. The ProMine ontology enrichment solution was applied in the IT audit domain of an e-learning system. After seven application cycles of ProMine, the number of automatically identified new concepts increased significantly, and ProMine categorized the new concepts with high precision and recall.
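Precision and recall, the measures by which the categorization is judged, are straightforward to compute against a gold standard. A minimal sketch on hypothetical numbers (the abstract does not report the actual figures):

```python
def precision_recall(predicted, relevant):
    """Precision and recall of a predicted set of categorized concepts
    against a gold-standard set of relevant concepts."""
    tp = len(predicted & relevant)
    return tp / len(predicted), tp / len(relevant)

# Hypothetical evaluation: the system proposes 10 concepts, 8 of which
# are among the 12 gold-standard concepts.
pred = set(range(10))
gold = set(range(2, 14))
p, r = precision_recall(pred, gold)
```

Tracking both numbers across the seven application cycles would show whether growth in identified concepts comes at the cost of accuracy.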


Author(s): Balaji Jagan, Ranjani Parthasarathi, Geetha T. V.

Customizing information from web documents is an immense task that mainly involves shortening the original texts. Extractive methods use surface-level and statistical features to select important sentences. In contrast, abstractive methods need a formal semantic representation, in which the selection of important components and the rephrasing of the selected components are carried out using the semantic features associated with the words as well as their context. In this paper, we propose a semi-supervised bootstrapping approach to identifying important components for abstractive summarization. The input to the proposed approach is a fully connected semantic graph of a document: semantic graphs are constructed for individual sentences and then connected by synonym concepts and co-referring entities to form a complete semantic graph. The direction of node traversal is determined by a modified spreading activation algorithm, in which the importance of each node and edge is decided based on the node and its connected edges.
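Basic spreading activation, the technique the paper modifies, pushes activation from seed nodes along graph edges with attenuation, so nodes reachable through many or strong paths accumulate importance. A simple sketch over a hypothetical sentence-level semantic graph (not the paper's modified algorithm):

```python
def spread_activation(graph, seeds, decay=0.5, steps=2):
    """Plain spreading activation: seed nodes start with activation 1.0;
    at each step a node passes a decayed share of its activation to
    each of its neighbours."""
    act = {n: 0.0 for n in graph}
    for s in seeds:
        act[s] = 1.0
    for _ in range(steps):
        new = dict(act)
        for node, neighbours in graph.items():
            for nb in neighbours:
                new[nb] += decay * act[node] / len(neighbours)
        act = new
    return act

# Hypothetical semantic graph linking concepts across sentences:
g = {"storm": ["damage", "city"], "damage": ["city"], "city": []}
act = spread_activation(g, seeds=["storm"])
```

Here "city" ends up more activated than "damage" because it is reached both directly from the seed and indirectly via "damage", illustrating how multiply-connected nodes gain importance.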


Author(s): Michalis Mountantonakis, Nikos Minadakis, Yannis Marketakis, Pavlos Fafalios, Yannis Tzitzikas

In many applications, one has to fetch and assemble pieces of information from more than one source to build a semantic warehouse offering more advanced query capabilities. This chapter describes the corresponding requirements and challenges, and focuses on the quality, value, and evolution of the warehouse. It details various metrics (or measures) for quantifying the connectivity of a warehouse and, consequently, the warehouse's ability to answer complex queries. The proposed metrics allow one to get an overview of each source's contribution to the warehouse and to quantify the value of the warehouse as a whole. Moreover, the chapter shows how the metrics can be used for monitoring a warehouse after reconstruction, thereby reducing the cost of quality checking and aiding understanding of its evolution over time. The behaviour of these metrics is demonstrated in the context of a real, operational semantic warehouse for the marine domain. Finally, the chapter discusses novel ways to exploit such metrics at global scale and for visualization purposes.
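One simple connectivity indicator in the spirit of those discussed is the pairwise overlap of entity URIs contributed by each source: sources that share many URIs enable joins and thus more complex queries. A sketch on hypothetical marine-domain sources (the source names and URIs below are illustrative only):

```python
from itertools import combinations

def pairwise_commonality(sources):
    """For every pair of sources, the Jaccard overlap of the entity
    URIs they contribute to the warehouse: |A∩B| / |A∪B|."""
    return {(a, b): len(sources[a] & sources[b]) / len(sources[a] | sources[b])
            for a, b in combinations(sorted(sources), 2)}

# Hypothetical sources mapped to the entity URIs they contribute:
src = {
    "FLOD":    {"ex:Tuna", "ex:Shark", "ex:Cod"},
    "WoRMS":   {"ex:Tuna", "ex:Shark", "ex:Eel"},
    "DBpedia": {"ex:Tuna"},
}
overlap = pairwise_commonality(src)
```

Recomputing such a matrix after each warehouse reconstruction makes drops in connectivity, and hence in query-answering ability, visible without manual quality checks.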

