Knowledge Acquisition from Semantically Heterogeneous Data

Author(s):  
Doina Caragea ◽  
Vasant Honavar

Recent advances in sensors, digital storage, computing, and communications technologies have led to a proliferation of autonomously operated, geographically distributed data repositories in virtually every area of human endeavor, including e-business and e-commerce, e-science, e-government, and security informatics. Effective use of such data in practice (e.g., building useful predictive models of consumer behavior, discovery of factors that contribute to large climatic changes, analysis of demographic factors that contribute to global poverty, analysis of social networks, or even finding out what makes a book a bestseller) requires accessing and analyzing data from multiple heterogeneous sources. The Semantic Web enterprise (Berners-Lee et al., 2001) is aimed at making the contents of the Web machine interpretable, so that heterogeneous data sources can be used together. Thus, data and resources on the Web are annotated and linked by associating metadata that make explicit the ontological commitments of the data source providers or, in some cases, the shared ontological commitments of a small community of users. Given the autonomous nature of the data sources on the Web and the diverse purposes for which the data are gathered, and in the absence of a universal ontology, it is inevitable that there is no unique global interpretation of the data that serves the needs of all users under all scenarios. Many groups have attempted to develop, with varying degrees of success, tools for flexible integration and querying of data from semantically disparate sources (Levy, 2000; Noy, 2004; Doan & Halevy, 2005), as well as techniques for discovering semantic correspondences between ontologies to assist in this process (Kalfoglou & Schorlemmer, 2005; Noy & Stuckenschmidt, 2005). These and related advances in Semantic Web technologies present unprecedented opportunities for exploiting multiple related data sources, each annotated with its own metadata, in discovering useful knowledge in many application domains. While there has been significant work on applying machine learning to ontology construction, information extraction from text, and discovery of mappings between ontologies (Kushmerick et al., 2005), there has been relatively little work on machine learning approaches to knowledge acquisition from data sources annotated with metadata that expose their structure (schema) and semantics (in reference to a particular ontology). However, there is a large body of literature on distributed learning (see Kargupta & Chan, 1999, for a survey). Furthermore, recent work (Zhang et al., 2005; Hotho et al., 2003) has shown that, in addition to data, the use of metadata in the form of ontologies (class hierarchies, attribute value hierarchies) can improve the quality (accuracy, interpretability) of the learned predictive models. The purpose of this chapter is to precisely define the problem of knowledge acquisition from semantically heterogeneous data and to summarize recent advances that have led to a solution to this problem (Caragea et al., 2005).
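To make the last point concrete, the following is a minimal sketch (not from the chapter itself) of how an attribute value hierarchy supplied as metadata can be used to lift attribute values from two differently grained data sources to a common level of abstraction before learning; the taxonomy, the chosen cut, and the toy data are all illustrative assumptions.

```python
# Minimal sketch: an attribute value taxonomy (AVT), supplied as metadata,
# is used to lift raw attribute values from two sources to a shared level
# of abstraction before learning. Hierarchy, cut and data are illustrative.

AVT = {                      # child value -> parent value
    "gala": "apple", "fuji": "apple",
    "cheddar": "cheese", "brie": "cheese",
    "apple": "fruit", "cheese": "dairy",
}

CUT = {"apple", "cheese"}    # level of abstraction shared by both sources

def lift_to_cut(value, taxonomy, cut):
    """Walk up the taxonomy until the value lies in the chosen cut."""
    while value not in cut and value in taxonomy:
        value = taxonomy[value]
    return value

source_a = ["gala", "cheddar", "fuji"]   # fine-grained source
source_b = ["apple", "brie"]             # coarser-grained source

print([lift_to_cut(v, AVT, CUT) for v in source_a])  # ['apple', 'cheese', 'apple']
print([lift_to_cut(v, AVT, CUT) for v in source_b])  # ['apple', 'cheese']
```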

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Han-Yu Sung ◽  
Yu-Liang Chi

Purpose
This study aims to develop a Web-based application system called Infomediary of Taiwanese Indigenous Peoples (ITIP) that can help individuals comprehend the society and culture of indigenous people. The ITIP is based on the use of Semantic Web technologies to integrate a number of data sources, particularly the bibliographic records of a museum. Moreover, an ontology model was developed to help users search cultural collections by topic concepts.
Design/methodology/approach
Two issues were identified that needed to be addressed: the integration of heterogeneous data sources and semantic-based information retrieval. Two corresponding methods were proposed: SPARQL federated queries were designed for data integration across the Web, and ontology-driven queries were designed for semantic search by knowledge inference. Furthermore, to help users perform searches easily, three search interfaces, namely ethnicity, region and topic, were developed to take full advantage of the content available on the Web.
Findings
Most open government data provide structured but non-Resource Description Framework data; Semantic Web consumers therefore require additional data conversion before the data can be used. On the other hand, although the library, archive and museum (LAM) community has produced some emerging linked data, very few data sets are released to the general public as open data. The Semantic Web's vision of a "web of data" remains challenging.
Originality/value
This study developed data integration across various institutions, including those of the LAM community. The development was conducted from the position of non-institutional members (i.e. institutional outsiders). The challenges encountered included uncertain data quality and the absence of institutional participation.
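As a rough illustration of the first method, the sketch below shows the general shape of a SPARQL 1.1 federated query issued from Python; the endpoint URLs, vocabulary, and property names are hypothetical stand-ins, not the actual ITIP queries.

```python
# Sketch of a SPARQL 1.1 federated query of the kind described above: one
# endpoint holds bibliographic records, and a SERVICE clause pulls matching
# concept labels from a second endpoint. All URLs and properties are assumed.
from SPARQLWrapper import SPARQLWrapper, JSON

LOCAL_ENDPOINT = "http://example.org/itip/sparql"       # hypothetical
REMOTE_ENDPOINT = "http://example.org/museum/sparql"    # hypothetical

query = """
PREFIX dc:   <http://purl.org/dc/elements/1.1/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?item ?title ?group WHERE {
  ?item dc:title ?title ;
        dc:subject ?concept .
  SERVICE <%s> {                   # federated call to the second data source
    ?concept skos:prefLabel ?group .
    FILTER (lang(?group) = "en")
  }
}
LIMIT 20
""" % REMOTE_ENDPOINT

sparql = SPARQLWrapper(LOCAL_ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"], "-", row["group"]["value"])
```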


Author(s):  
Aikaterini K. Kalou ◽  
Dimitrios A. Koutsomitropoulos

Semantic mashups constitute a relatively new genre of applications that illustrates the combination of two current trends of the Web, i.e. the Semantic Web and Web 2.0. The great benefit of semantic mashups lies in the ability to aggregate different and heterogeneous data with rich semantic annotations and, as a result, with additional ease of integration. In this paper, the authors attempt to outline the transition from conventional to semantic mashups, analyzing the former's limitations and identifying the improvements and contributions that come with the advent of the latter. Furthermore, the authors survey the background technologies on which semantic mashups are based, such as Semantic Web Services and the process of data triplification. The authors also investigate current trends and efforts in developing tools and frameworks designed to support users with little programming knowledge in semantic mashup development, such as Semantic Pipes or Jigs4OWL. After presenting and illustrating the theoretical and technological background of this genre of mashups, the authors look into some use cases and systems. Among others, the authors present their own mashup, called Books@HPClab, in which they introduce a personalized semantic service for mashing up information from different online bookstores.
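As a small, hedged illustration of the triplification step mentioned above, the sketch below lifts a single bookstore record into RDF with rdflib; the namespace, vocabulary choices, and record fields are assumptions for illustration, not the Books@HPClab implementation.

```python
# Minimal triplification sketch: lift a plain JSON-like record from an
# online bookstore into RDF triples so it can be merged with other sources.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

BOOK = Namespace("http://example.org/book/")   # hypothetical namespace
SCHEMA = Namespace("http://schema.org/")

record = {"id": "b42", "title": "Weaving the Web",
          "price": 19.95, "store": "StoreA"}   # illustrative record

g = Graph()
subject = BOOK[record["id"]]
g.add((subject, RDF.type, SCHEMA.Book))
g.add((subject, SCHEMA.name, Literal(record["title"])))
g.add((subject, SCHEMA.price, Literal(record["price"], datatype=XSD.decimal)))
g.add((subject, SCHEMA.seller, Literal(record["store"])))

# Once every bookstore's records are lifted into the same graph, a single
# SPARQL query can compare offers across stores.
print(g.serialize(format="turtle"))
```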


2019 ◽  
pp. 230-253
Author(s):  
Ying Zhang ◽  
Chaopeng Li ◽  
Na Chen ◽  
Shaowen Liu ◽  
Liming Du ◽  
...  

Since large amounts of geospatial data are produced by various sources and stored in incompatible formats, geospatial data integration is difficult because of the shortage of semantics. Although standardised data formats and data access protocols, such as the Web Feature Service (WFS), enable end-users to access heterogeneous data stored in different formats from various sources, integration remains time-consuming and ineffective due to the lack of semantics. To solve this problem, a prototype implementing geospatial data integration is proposed by addressing the following four problems: geospatial data retrieving, modeling, linking and integrating. First, we provide a uniform integration paradigm for users to retrieve geospatial data. Then, we align the retrieved geospatial data in the modeling process to eliminate heterogeneity with the help of Karma. Our main contribution focuses on addressing the third problem. Previous work has defined sets of semantic rules for performing the linking process. However, geospatial data exhibits specific geospatial relationships, which are significant for linking but cannot be handled directly by Semantic Web techniques. We take advantage of such unique features of geospatial data to implement the linking process. In addition, previous work runs into a complication when the geospatial data sources are in different languages. In contrast, our proposed linking algorithms are endowed with a translation function, which saves the cost of translating among geospatial sources in different languages. Finally, the geospatial data is integrated by eliminating data redundancy and combining the complementary properties from the linked records. We mainly adopt four kinds of geospatial data sources, namely OpenStreetMap (OSM), Wikimapia, USGS and EPA, to evaluate the performance of the proposed approach. The experimental results illustrate that the proposed linking method achieves high performance in generating the matched candidate record pairs in terms of Reduction Ratio (RR), Pairs Completeness (PC), Pairs Quality (PQ) and F-score. The integration results show that each data source gains considerable Complementary Completeness (CC) and Increased Completeness (IC).
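A minimal sketch of the linking-and-evaluation idea follows: candidate record pairs are generated with a crude geospatial blocking rule and then scored with the metrics named above. The records, the distance threshold, and the gold-standard match set are illustrative assumptions, and the metric definitions follow common record-linkage usage rather than the paper's exact formulation.

```python
# Candidate-pair generation via a geospatial blocking rule, plus the usual
# blocking-quality metrics (RR, PC, PQ, F-score). Data and threshold assumed.
from math import hypot
from itertools import product

osm  = [("osm1", 40.0, 116.3), ("osm2", 40.2, 116.9)]
wiki = [("wik1", 40.01, 116.31), ("wik2", 41.5, 117.8)]
true_matches = {("osm1", "wik1")}            # assumed gold standard

def close(a, b, eps=0.05):
    """Crude planar distance test standing in for a real geodesic check."""
    return hypot(a[1] - b[1], a[2] - b[2]) <= eps

candidates = {(a[0], b[0]) for a, b in product(osm, wiki) if close(a, b)}

total_pairs = len(osm) * len(wiki)
rr = 1 - len(candidates) / total_pairs                    # Reduction Ratio
pc = len(candidates & true_matches) / len(true_matches)   # Pairs Completeness
pq = len(candidates & true_matches) / len(candidates)     # Pairs Quality
f  = 2 * pc * pq / (pc + pq)                              # F-score over PC/PQ
print(rr, pc, pq, f)
```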


Author(s):  
Barbara Catania ◽  
Elena Ferrari

The Web is characterized by a huge amount of very heterogeneous data sources that differ both in media support and in format representation. In this scenario, there is a need for an integrating approach to querying heterogeneous Web documents. To this purpose, XML can play an important role since it is becoming a standard for data representation and exchange over the Web. Due to its flexibility, XML is currently being used as an interface language over the Web, by which (parts of) document sources are represented and exported. Under this assumption, the problem of querying heterogeneous sources can be reduced to the problem of querying XML data sources. In this chapter, we first survey the most relevant query languages for XML data proposed both by the scientific community and by standardization committees, e.g., the W3C, focusing mainly on their expressive power. Then, we investigate how typical Information Retrieval concepts, such as ranking, similarity-based search, and profile-based search, can be applied to XML query languages. Commercial products based on the considered approaches are then briefly surveyed. Finally, we conclude the chapter by providing an overview of the most promising research trends in the field.
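The following toy sketch illustrates the kind of blend discussed here: a structural XML path selection combined with a naive IR-style keyword ranking. The document, the query terms, and the scoring function are illustrative assumptions rather than a rendering of any particular surveyed language.

```python
# Structural selection (a path expression) followed by IR-style ranking of
# the selected elements by query-term overlap. Document and query assumed.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<library>
  <book><title>Querying XML on the Web</title></book>
  <book><title>Information Retrieval Basics</title></book>
  <book><title>Relational Databases</title></book>
</library>
""")

terms = {"xml", "querying", "web"}

def score(title: str) -> int:
    """Naive relevance score: number of query terms present in the title."""
    words = {w.lower().strip(".,") for w in title.split()}
    return len(terms & words)

titles = [t.text for t in doc.findall("./book/title")]   # structural part
ranked = sorted(titles, key=score, reverse=True)          # IR-style ranking
print(ranked)
```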


2012 ◽  
pp. 535-578
Author(s):  
Jie Tang ◽  
Duo Zhang ◽  
Limin Yao ◽  
Yi Li

This chapter aims to give a thorough investigation of the techniques for automatic semantic annotation. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. However, the lack of annotated semantic data is a bottleneck to making the Semantic Web vision a reality. Therefore, it is necessary to automate the process of semantic annotation. In the past few years, there has been a rapid expansion of activity in the semantic annotation area, and many methods have been proposed for automating the annotation process. However, due to the heterogeneity and the lack of structure of Web data, automated discovery of targeted or unexpected knowledge still presents many challenging research problems. In this chapter, we study the problems of semantic annotation and introduce the state-of-the-art methods for dealing with them. We also give a brief survey of the systems developed based on these methods. Several real-world applications of semantic annotation are introduced as well. Finally, some emerging challenges in semantic annotation are discussed.
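As a concrete, deliberately simplistic illustration of one family of annotation methods, the sketch below performs gazetteer lookup that links mentions in free text to ontology URIs; the gazetteer, sample text, and URIs are assumptions, and the state-of-the-art systems discussed in the chapter rely on learned extractors rather than exact lookup.

```python
# Gazetteer-based semantic annotation: find known mentions in free text and
# link each to an ontology URI. Gazetteer, text and URIs are illustrative.
import re

GAZETTEER = {
    "semantic web": "http://example.org/onto#SemanticWeb",   # hypothetical
    "rdf":          "http://example.org/onto#RDF",
}

def annotate(text):
    """Return (surface form, start offset, ontology URI) annotations."""
    found = []
    for mention, uri in GAZETTEER.items():
        for m in re.finditer(re.escape(mention), text.lower()):
            found.append((text[m.start():m.end()], m.start(), uri))
    return sorted(found, key=lambda a: a[1])

sample = "The Semantic Web relies on RDF to describe resources."
for surface, offset, uri in annotate(sample):
    print(f"{offset:3d}  {surface!r:18}  {uri}")
```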


2010 ◽  
Vol 04 (04) ◽  
pp. 423-451 ◽  
Author(s):  
SUNITHA RAMANUJAM ◽  
VAIBHAV KHADILKAR ◽  
LATIFUR KHAN ◽  
MURAT KANTARCIOGLU ◽  
BHAVANI THURAISINGHAM ◽  
...  

The current buzzword in the Internet community is the Semantic Web initiative, proposed by the W3C to yield a Web that is more flexible and self-adapting. However, for the Semantic Web initiative to become a reality, heterogeneous data sources need to be integrated so that they can be accessed in a homogeneous manner. Since a vast majority of data currently resides in relational databases, integrating relational data sources with Semantic Web technologies is at the top of the list of activities required to realize the Semantic Web vision. Several efforts exist that publish relational data as Resource Description Framework (RDF) triples; however, almost all current work in this arena is uni-directional, presenting data from an underlying relational database as a corresponding virtual RDF store in a read-only manner. An enhancement over previous relational-to-RDF bridging work, in the form of bi-directionality support, is presented in this paper. The bi-directional bridge proposed here allows RDF data updates specified as triples to be propagated back into the underlying relational database as tuples. Towards this end, we present various algorithms to translate the triples to be updated/inserted/deleted into equivalent relational attributes/tuples whenever possible. Particular emphasis is laid on the translation and update propagation process for triples containing blank nodes and reification nodes, and we present a platform enhanced with our algorithms, called D2RQ++, through which bi-directional translation can be achieved.
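A minimal sketch of the bi-directional idea, under assumed mappings, is given below: an incoming triple update is translated into an SQL statement against the source table. The predicate-to-column mapping, the subject-URI convention, and the schema are illustrative assumptions, not the D2RQ++ algorithms themselves.

```python
# Translate an RDF triple update against a virtual RDF view into an SQL
# statement on the underlying relational table. Mapping and schema assumed.
import sqlite3

MAPPING = {                      # predicate URI -> (table, column)
    "http://example.org/vocab#name":  ("employee", "name"),
    "http://example.org/vocab#email": ("employee", "email"),
}

def subject_key(subject_uri):
    """Assume subject URIs end with the primary-key value, e.g. .../employee/7."""
    return int(subject_uri.rsplit("/", 1)[-1])

def apply_triple_update(conn, subject, predicate, value):
    table, column = MAPPING[predicate]
    pk = subject_key(subject)
    conn.execute(f"UPDATE {table} SET {column} = ? WHERE id = ?", (value, pk))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO employee VALUES (7, 'Ann', NULL)")

apply_triple_update(conn, "http://example.org/employee/7",
                    "http://example.org/vocab#email", "ann@example.org")
print(conn.execute("SELECT * FROM employee").fetchall())
# [(7, 'Ann', 'ann@example.org')]
```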


Digital technology has been changing rapidly in recent years, and with this change the number of data systems, sources, and formats has also increased exponentially. The process of extracting data from these multiple source systems and transforming it to suit various analytics processes is therefore gaining importance rapidly. For Big Data, the transformation process is quite challenging, as data generation is a continuous process. In this paper, we extract data from various heterogeneous sources on the web and transform it into a form widely used in data warehousing, so that it caters to the analytical needs of the machine learning community.
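A small sketch of the kind of extract-and-transform step described here is given below: records from two heterogeneous feeds (CSV and JSON) are normalised into one flat, warehouse-friendly table with pandas. The feeds, field names, and target layout are illustrative assumptions.

```python
# Extract from two heterogeneous feeds, conform names and types, and land
# the rows in one flat fact table suitable for warehousing/analytics.
import io, json
import pandas as pd

csv_feed = io.StringIO("order_id,amount,order_date\n1,25.5,2024-01-03\n2,12.0,2024-01-04")
json_feed = io.StringIO(json.dumps([
    {"id": 3, "total": 7.25, "placed_on": "2024/01/05"},
]))

a = pd.read_csv(csv_feed)
b = pd.read_json(json_feed).rename(
    columns={"id": "order_id", "total": "amount", "placed_on": "order_date"})

# Normalise types per source, then stack into one conformed table.
a["order_date"] = pd.to_datetime(a["order_date"])
b["order_date"] = pd.to_datetime(b["order_date"])

fact_orders = pd.concat([a, b], ignore_index=True)
fact_orders["amount"] = fact_orders["amount"].astype(float)

print(fact_orders)   # one flat table, ready to feed analytics / ML features
```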


2019 ◽  
Vol 8 (3) ◽  
pp. 7809-7817

Creating a fast, domain-independent ontology through knowledge acquisition is a key problem to be addressed in the domain of knowledge engineering. Updating and validation are impossible without the intervention of domain experts, which is an expensive and tedious process. An automatic system to model the ontology has therefore become essential. This manuscript presents a machine learning model based on heterogeneous data from multiple domains, including agriculture, health care, food and banking. The proposed model creates a complete domain-independent process that helps populate the ontology automatically by extracting text from multiple sources and applying natural language processing and various data extraction techniques. The ontology instances are classified based on the domain. A Jaccard relationship extraction process and the Neural Network Approval for Automated Theory are used for data retrieval, automated indexing, mapping, knowledge discovery and rule generation. The results show that the proposed model can automatically and efficiently construct the ontology.
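A minimal sketch of a Jaccard-based relationship score of the sort the name suggests is shown below: two candidate ontology terms are scored by the overlap of the documents in which they occur. The toy corpus and the choice to measure co-occurrence at document granularity are assumptions; the paper's actual extraction pipeline is not reproduced here.

```python
# Jaccard co-occurrence score between candidate ontology terms: terms that
# clear a threshold can be proposed as related concepts. Corpus is assumed.

corpus = {
    "doc1": "crop rotation improves soil health on the farm",
    "doc2": "the bank approved a loan for the farm equipment",
    "doc3": "soil health affects crop yield",
}

def docs_containing(term):
    return {d for d, text in corpus.items() if term in text.split()}

def jaccard(term_a, term_b):
    a, b = docs_containing(term_a), docs_containing(term_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

for pair in [("crop", "soil"), ("crop", "loan")]:
    print(pair, round(jaccard(*pair), 2))   # ('crop','soil') 1.0, ('crop','loan') 0.0
```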


Author(s):  
Hadrian Peter

Data warehouses have established themselves as necessary components of an effective IT strategy for large businesses. To augment the streams of data being siphoned from transactional/operational databases, warehouses must also integrate increasing amounts of external data to assist in decision support. Modern warehouses can be expected to handle up to 100 terabytes or more of data (Berson and Smith, 1997; Devlin, 1998; Inmon, 2002; Imhoff et al., 2003; Schwartz, 2003; Day, 2004; Peter and Greenidge, 2005; Winter and Burns, 2006; Ladley, 2007). The arrival of newer generations of tools and database vendor support has smoothed the way for current warehouses to meet the needs of the challenging global business environment (Kimball and Ross, 2002; Imhoff et al., 2003; Ross, 2006). We cannot ignore the role of the Internet in modern business and its impact on data warehouse strategies. The web represents the richest source of external data known to man (Zhenyu et al., 2002; Chakrabarti, 2002; Laender et al., 2002), but we must be able to couple raw text or poorly structured data on the web with descriptions, annotations and other forms of summary meta-data (Crescenzi et al., 2001). In recent years the Semantic Web initiative has focused on the production of "smarter data". The basic idea is that instead of making programs with near-human intelligence, we carefully add meta-data to existing stores so that the data becomes "marked up" with all the information necessary to allow not-so-intelligent software to perform analysis with minimal human intervention (Kalfoglou et al., 2004). The Semantic Web builds on established building-block technologies such as Unicode, URIs (Uniform Resource Identifiers) and XML (Extensible Markup Language) (Dumbill, 2000; Daconta et al., 2003; Decker et al., 2000). The modern data warehouse must embrace these emerging web initiatives. In this paper we propose a model which provides mechanisms for sourcing external data resources for analysts in the warehouse.

