A Framework for Ontology-Based Top-K Global Schema Generation

2016 ◽  
Vol 6 (1) ◽  
pp. 31-53 ◽  
Author(s):  
Longzhuang Li ◽  
Yuzhe Wei ◽  
Feng Tian
2012 ◽  
Vol 13 (2) ◽  
pp. 227-252 ◽  
Author(s):  
MARCO MANNA ◽  
FRANCESCO RICCA ◽  
GIORGIO TERRACINA

Abstract
A data integration system provides transparent access to different data sources by suitably combining their data and presenting the user with a unified view of them, called the global schema. However, source data are generally not under the control of the data integration process; thus, integrated data may violate global integrity constraints even when the individual data sources are locally consistent. In this scenario, it is nevertheless desirable to retrieve as much consistent information as possible. The process of answering user queries in the presence of global constraint violations is called consistent query answering (CQA). Several notions of CQA have been proposed, depending, e.g., on whether the integrated information is assumed to be sound, complete, exact, or a variant of these. This paper provides a contribution in this setting: it unifies solutions coming from different perspectives under a common Answer-Set Programming (ASP)-based core, and provides query-driven optimizations designed to isolate and eliminate inefficiencies of the general approach to computing consistent answers. Moreover, the paper introduces new theoretical results that enrich existing knowledge on the decidability and complexity of the problems considered. The effectiveness of the approach is evidenced by experimental results.
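As an illustrative sketch (not taken from the paper; the relation, data, and function names are invented), the common "sound sources" CQA semantics the abstract alludes to can be shown in a few lines: a repair is a maximal consistent subset of the integrated data, and the consistent answers to a query are exactly those answers that hold in every repair.

```python
# Hypothetical sketch of CQA over repairs; not the paper's ASP encoding.
from itertools import product

def repairs(rel):
    """All repairs of rel w.r.t. a key on the first attribute:
    each repair keeps exactly one tuple per key value."""
    groups = {}
    for t in sorted(rel):
        groups.setdefault(t[0], []).append(t)
    return [set(choice) for choice in product(*groups.values())]

def consistent_answers(query, rel):
    """Answers that are true in every repair of rel."""
    return set.intersection(*(query(r) for r in repairs(rel)))

# Integrated relation emp(name, dept); "ann" violates the key on name.
emp = {("ann", "sales"), ("ann", "hr"), ("bob", "it")}

# Both names appear in every repair, so both are consistent answers.
names = consistent_answers(lambda r: {n for n, d in r}, emp)

# ann's department differs across repairs, so it has no consistent answer.
ann_dept = consistent_answers(lambda r: {d for n, d in r if n == "ann"}, emp)
```

The paper's contribution is to express this repair-and-intersect semantics declaratively in ASP, where the solver enumerates repairs as answer sets rather than materializing them as above.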


1993 ◽  
Vol 13 ◽  
pp. 425-443
Author(s):  
Michael A. Maggiotto ◽  
Gary D. Wekkin

One intention of American federalism, according to Madison, was to provide different contexts into which politics might be organized. Segmented partisanship is a reflection of and a response to the differentiation of power, roles and opportunities that federalism made possible. Accepting partisanship as a collection of schemata, choice among which is contextually determined, permits us to see a greater consistency among performance evaluations and electoral decisions, on the one hand, and partisanship on the other, than a single, global schema allows.


Author(s):  
Djamila Marouf ◽  
Djamila Hamdadou ◽  
Karim Bouamrane

Massive data to facilitate decision making for organizations and their corporate users exist in many forms, types, and formats. Importantly, the acquisition and retrieval of relevant supporting information should be timely, precise, and complete. Unfortunately, owing to differences in syntax and semantics, the extraction and integration of available semi-structured data from different sources often fail. The need for seamless and effective data integration, so that information can be accessed, retrieved, and used across diverse data sources, cannot be overemphasized. Moreover, information external to organizations may also have to be sourced for the intended users through a smart data integration system. Owing to the open, dynamic, and heterogeneous nature of data, data integration is becoming an increasingly complex process. A new data integration approach encapsulating mediator systems and a data warehouse is proposed here. Aside from the heterogeneity of data sources, other data integration design problems include defining the global schema, the mappings, and query processing. To meet these challenges, the authors of this paper advocate an approach named MAV-ES, characterized by an architecture based on a global schema, partial schemas, and a set of sources. The primary benefit of this architecture is that it combines the two basic GAV and LAV approaches so as to realize the added-value benefits of a mixed approach.
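A minimal, hypothetical sketch of the GAV half of a mixed GAV/LAV mediator like the one the abstract describes (source schemas, data, and names here are invented, not from MAV-ES): each global relation is defined as a view over the sources, so a query over the global schema is answered by unfolding that view.

```python
# Two autonomous sources exposing the same information under different schemas.
src_customers = [{"cid": 1, "cname": "Acme"}]
src_clients = [{"id": 2, "name": "Globex"}]

def customer():
    """GAV mapping: global relation customer(id, name) as a union view
    over the two sources, renaming source attributes to global ones."""
    for r in src_customers:
        yield {"id": r["cid"], "name": r["cname"]}
    for r in src_clients:
        yield {"id": r["id"], "name": r["name"]}

# A user query posed against the global schema, evaluated by unfolding.
names = sorted(row["name"] for row in customer())
```

In a LAV setting the direction is reversed: each source is described as a view over the global schema, and answering a query requires rewriting it in terms of those views. Mixed approaches such as MAV-ES aim to combine GAV's simple unfolding with LAV's ease of adding new sources.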


Author(s):  
Richard Millham

Data is an integral part of most business-critical applications. As business data increases in volume and variety due to technological, business, and other factors, managing this diverse volume of data becomes more difficult. A new paradigm, data virtualization, is used for data management. Although much research has been conducted on techniques to store huge amounts of data accurately and to process this data with optimal resource utilization, open questions remain on how to handle divergent data from multiple data sources. In this chapter, the authors first look at the emerging problem of "big data," with a brief introduction to the emergence of data virtualization, and at an existing system that implements it. Because data virtualization requires techniques to integrate data, the authors then examine the problems of divergent data in terms of value, syntactic, semantic, and structural differences. Some proposed methods for resolving these differences are examined in order to enable the mapping of this divergent data into a homogeneous global schema that can more easily be used for big data analysis. Finally, some tools and industrial examples are given to demonstrate different approaches to heterogeneous data integration.
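The value- and syntax-level differences the chapter discusses can be sketched concretely. In this invented example (the schemas, field names, and conversion rules are hypothetical, not from the chapter), two sources encode the same price attribute differently, and per-source mapping functions reconcile them into one homogeneous global schema {sku, price_usd}.

```python
# Hypothetical per-source wrappers resolving value/syntax divergence.

def from_source_a(rec):
    # Source A stores the price in integer cents (a value-level difference).
    return {"sku": rec["sku"], "price_usd": rec["price_cents"] / 100}

def from_source_b(rec):
    # Source B stores the price as a string with a currency symbol
    # (a syntax-level difference).
    return {"sku": rec["item_no"], "price_usd": float(rec["price"].lstrip("$"))}

# The homogeneous global table, ready for uniform analysis.
global_table = [
    from_source_a({"sku": "A1", "price_cents": 1999}),
    from_source_b({"item_no": "B2", "price": "$5.50"}),
]
```

Semantic and structural differences (e.g., what "price" includes, or nested versus flat records) generally need richer mappings than these one-line conversions, which is where the integration methods surveyed in the chapter come in.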


2002 ◽  
Author(s):  
Frank Hill ◽  
Andre Csillaghy ◽  
Robert D. Bentley ◽  
Jean Aboudarham ◽  
Ester Antonucci ◽  
...  

1984 ◽  
Vol 9 (3-4) ◽  
pp. 237-240 ◽  
Author(s):  
Wolfgang Effelsberg ◽  
Michael V Mannino
