A MECHANISM FOR DETECTING PARTIAL INFERENCES IN DATA WAREHOUSES

2021, Vol 9 (03), pp. 369-378
Author(s): Johnson Grace Yenin Edwige, Adepo Joel, Oumtanaga Souleymane, ...

Data warehouses are widely used in the fields of Big Data and Business Intelligence to produce statistics on business activity. Querying them through multidimensional queries yields aggregated results over the data. Because some of these data are confidential, malicious users may try to deduce them, notably through data inference methods. To address these security problems, researchers have proposed several solutions based on the warehouse architecture, the design phase, the cuboids of a data cube, and the materialized views of multidimensional queries. In this work, we propose a mechanism for detecting inference in data warehouses. The objective of this approach is to detect partial inferences during the execution of a multidimensional SUM-type OLAP (Online Analytical Processing) query, so as to prevent a data warehouse user from inferring sensitive information to which he or she has no access rights under the access control policy in force. Our study improves on the model proposed in a previous study by Triki, which is based on average deviations; the aim is to propose an optimal threshold that detects inferences more reliably. The results we obtain improve on those of the previous study.
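The abstract does not spell out Triki's deviation formula or the improved threshold; the sketch below is only a minimal illustration of a deviation-based detector for SUM queries, in which the flagging rule and the 0.1 threshold are assumptions.

```python
# Minimal sketch of a deviation-based partial-inference detector.
# The threshold value and the flagging rule are illustrative assumptions;
# the abstract does not give Triki's exact formula or the improved one.
from statistics import mean

def is_partial_inference(cell_values, threshold=0.1):
    """Flag a SUM-type OLAP query as risky when the values it aggregates
    deviate so little from their average that any single hidden value can
    be estimated accurately from the aggregate and the cell count."""
    if len(cell_values) <= 1:          # aggregate over one cell: exact disclosure
        return True
    avg = mean(cell_values)
    if avg == 0:
        return False
    # Average relative deviation of the aggregated values from their mean.
    avg_deviation = mean(abs(v - avg) for v in cell_values) / abs(avg)
    return avg_deviation < threshold   # low spread => SUM / count ≈ each value

# Example: a SUM over nearly identical salaries leaks each one.
print(is_partial_inference([5000, 5020, 4980]))  # True
print(is_partial_inference([1000, 9000, 400]))   # False
```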

2012, Vol 546-547, pp. 604-611
Author(s): Wei Jin Ge, Xiao Hui Hu

Hidden credentials are useful in situations where requests for service, credentials, access policies, and resources are extremely sensitive. Current research on hidden credentials suffers from the shortcoming that the attribute model cannot express complex descriptions. This paper presents a hierarchical hidden-credential model that combines an attribute tree structure with hierarchical identity-based encryption. The attribute tree structure is used to organize sensitive information, while the hierarchical hidden-credential model carries and transports credentials, sensitive access control policies, private resources, and so on. The model thus expands attributes from simple atoms to attribute trees. The evaluation shows that the model overcomes shortcomings such as the high network communication load and the excessive number of credential exchanges caused by attribute-based access control policies. The usability and extensibility of hidden credentials are also improved.
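The summary does not detail the tree encoding; as a rough illustration, an attribute tree can be represented as nested nodes in which a root-to-node path names one hierarchical attribute, the unit that hierarchical identity-based encryption derives keys for. All class and attribute names below are assumptions.

```python
# Illustrative attribute tree: each node refines its parent, so a path
# such as Employee/Engineering/TeamLead names one hierarchical attribute.
# In hierarchical IBE, a key for a prefix can derive keys for its subtree.
from dataclasses import dataclass, field

@dataclass
class AttributeNode:
    name: str
    children: dict[str, "AttributeNode"] = field(default_factory=dict)

    def add_path(self, path: list[str]) -> None:
        """Insert a hierarchical attribute given as a list of labels."""
        node = self
        for label in path:
            node = node.children.setdefault(label, AttributeNode(label))

    def covers(self, path: list[str]) -> bool:
        """True if this tree contains the attribute path (a credential for
        an ancestor could derive the descendant key under hierarchical IBE)."""
        node = self
        for label in path:
            if label not in node.children:
                return False
            node = node.children[label]
        return True

root = AttributeNode("Employee")
root.add_path(["Engineering", "TeamLead"])
print(root.covers(["Engineering", "TeamLead"]))  # True
print(root.covers(["Sales"]))                    # False
```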


Cloud-assisted Internet of Things (IoT) devices are deployed in various distributed environments. They collect sensed data and outsource them to remote servers and users for sharing. Since IoT is used in important fields such as healthcare, business, and research, the sensed data are sensitive information that needs to be protected. Encryption is the usual technique for protecting data from adversaries, and fine-grained access control is essential for social networks involving heterogeneous devices. Existing access control policies are defined for predefined identities and roles, which must change in dynamic situations; moreover, not all necessary policies can be defined in advance, and new policies are required for new situational contexts. To solve these issues, this work designs a model that dynamically calculates a final trust value from semantic information by referring to an ontology. An access control policy is also designed on the semantic role of the device. Semantic technology is used for high-level reasoning about the context situation.
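No trust formula appears in this summary; the sketch below assumes, purely for illustration, that the final trust value weights a device's behavioral history against the semantic similarity between its role and the role a policy requires, with the toy ontology, weights, and 0.6 grant threshold all invented.

```python
# Illustrative dynamic trust computation; the weights, similarity measure,
# and grant threshold are assumptions, not the paper's actual model.

# Toy "ontology": each role maps to its set of ancestor concepts.
ROLE_ONTOLOGY = {
    "glucose_sensor": {"sensor", "medical_device", "device"},
    "heart_monitor":  {"sensor", "medical_device", "device"},
    "smart_bulb":     {"actuator", "home_device", "device"},
}

def semantic_similarity(role_a: str, role_b: str) -> float:
    """Jaccard similarity of the roles' ancestor sets in the ontology."""
    a, b = ROLE_ONTOLOGY[role_a], ROLE_ONTOLOGY[role_b]
    return len(a & b) / len(a | b)

def final_trust(history_score: float, device_role: str,
                required_role: str, w_history: float = 0.5) -> float:
    """Combine behavioral trust with semantic role similarity."""
    sim = semantic_similarity(device_role, required_role)
    return w_history * history_score + (1 - w_history) * sim

trust = final_trust(0.9, "glucose_sensor", "heart_monitor")
print(f"trust={trust:.2f}, access={'granted' if trust > 0.6 else 'denied'}")
```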


Author(s): Asma Cherif, Abdessamad Imine

Collaborative applications are important applications that allow users to cooperate in order to perform a given task. Their importance has grown significantly over recent years since they are required in many fields. However, they still lack an appropriate access control mechanism, which limits their full potential. Conceiving an access control model for collaborative applications is hard since access rights need to change dynamically while high local responsiveness is maintained. This chapter presents a decentralized access control model based on replicating the shared document and its access control policy at each collaborating site. The interaction between document updates and authorization updates is carefully studied to maintain the convergence of the shared data. Our model relies on an optimistic approach to enforcing access control, i.e., users may temporarily violate the access control policy if their rights are revoked concurrently. Illegal operations are undone selectively to eliminate their effects and converge to the same final state of the shared object.
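The chapter's actual replication and undo protocol is not given in this summary; the following sketch illustrates only the optimistic idea: a site applies local edits immediately, and when a concurrent revocation arrives it selectively undoes the operations the updated policy no longer authorizes. All names are illustrative.

```python
# Sketch of optimistic access control with selective undo. A site applies
# edits immediately for responsiveness; when a concurrent revocation
# arrives, edits the new policy forbids are undone via inverse operations.

class Site:
    def __init__(self, authorized: set[str]):
        self.doc: list[str] = []              # shared document (lines)
        self.log: list[tuple[str, str]] = []  # applied (user, line) ops
        self.authorized = authorized          # replicated policy

    def local_insert(self, user: str, line: str) -> None:
        """Apply immediately, without waiting for a policy-check round-trip."""
        self.doc.append(line)
        self.log.append((user, line))

    def receive_revocation(self, user: str) -> None:
        """Concurrent revocation: update the policy replica, then undo
        the now-illegal operations so all sites converge."""
        self.authorized.discard(user)
        for u, line in [op for op in self.log if op[0] == user]:
            self.doc.remove(line)             # inverse of the insert
            self.log.remove((u, line))

site = Site(authorized={"alice", "bob"})
site.local_insert("alice", "alice's edit")
site.local_insert("bob", "bob's edit")  # applied optimistically
site.receive_revocation("bob")          # bob's rights revoked concurrently
print(site.doc)                         # ["alice's edit"]
```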


Author(s): Lixin Fu, Wen-Chen Hu

Since the late ’80s and early ’90s, database technologies have evolved to a new level of applications: online analytical processing (OLAP), where executive management can make quick and effective strategic decisions based on knowledge derived from queries against large amounts of stored data. Some OLAP systems are also regarded as decision support systems (DSSs) or executive information systems (EIS). The traditional, well-established online transactional processing (OLTP) systems, such as relational database management systems (RDBMS), mainly deal with mission-critical daily transactions. Typically, there are a large number of short, simple queries such as lookups, insertions, and deletions, and the main focus is transaction throughput, consistency, concurrency, and failure recovery. OLAP systems, on the other hand, are mainly analytical and informational. They are usually closely coupled with data warehouses, which can contain very large data sets, including historical data as well as data integrated from different departments and geographical locations; consequently, data warehouses are usually significantly larger than common OLTP systems. In addition, OLAP workloads are quite different from those of traditional transaction systems: the queries are unpredictable and much more complicated. For example, an OLAP query could be, “For each type of car and each manufacturer, list market share change in terms of car sales between the first quarter of 2005 and the first quarter of 2006.” The purpose of such queries is not the daily operational maintenance of data; instead, it is to extract deeper knowledge from data for decision support.
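The example query above can be phrased as a grouped aggregation; a minimal pandas sketch (with assumed column names and toy data) is:

```python
# The abstract's example OLAP query, sketched with pandas; the column
# names (car_type, manufacturer, quarter, sales) are assumptions.
import pandas as pd

sales = pd.DataFrame({
    "car_type":     ["sedan", "sedan", "suv", "suv"],
    "manufacturer": ["Acme",  "Acme",  "Acme", "Acme"],
    "quarter":      ["2005Q1", "2006Q1", "2005Q1", "2006Q1"],
    "sales":        [100, 150, 50, 70],
})

# Market share of each (car_type, manufacturer) within its quarter.
totals = sales.groupby("quarter")["sales"].transform("sum")
sales["share"] = sales["sales"] / totals

# Change in market share between 2005Q1 and 2006Q1.
pivot = sales.pivot_table(index=["car_type", "manufacturer"],
                          columns="quarter", values="share")
pivot["share_change"] = pivot["2006Q1"] - pivot["2005Q1"]
print(pivot["share_change"])
```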


Author(s): Jamel Feki

Within today’s competitive economic context, information acquisition, analysis, and exploitation have become strategic and unavoidable requirements for every enterprise. Moreover, to guarantee their persistence and growth, enterprises are henceforth forced to build up expertise in this domain. Data warehouses (DW) emerged as a potential solution to the needs of storing and analyzing large data volumes. In fact, a DW is a database system specialized in the storage of data used for decisional ends. This type of system was proposed to overcome the inability of OLTP (On-Line Transaction Processing) systems to offer analysis functionalities; it offers integrated, consolidated, and temporal data for decisional analyses. However, the differing objectives and functionalities of OLTP and DW systems created the need for a development method appropriate for DWs. Indeed, data warehouses still attract considerable effort and interest from a large community of both software editors of decision support systems (DSS) and researchers (Kimball, 1996; Inmon, 2002). Current software tools for DWs focus on meeting end-user needs. OLAP (On-Line Analytical Processing) tools are dedicated to multidimensional analyses and graphical visualization of results (e.g., Oracle Discoverer); some products permit the description of DW and Data Mart (DM) schemas (e.g., Oracle Warehouse Builder). One major limitation of these tools is that the schemas must be built beforehand and, in most cases, manually. Such a task can be tedious, error-prone, and time-consuming, especially with heterogeneous data sources. On the other hand, the majority of research efforts focus on particular aspects of DW development, cf. multidimensional modeling, physical design (materialized views (Moody & Kortnik, 2000), index selection (Golfarelli, Rizzi, & Saltarelli, 2002), schema partitioning (Bellatreche & Boukhalfa, 2005)), and, more recently, applying data mining for better data interpretation (Mikolaj, 2006; Zubcoff, Pardillo & Trujillo, 2007). While these practical issues determine the performance of a DW, other, just as important, conceptual issues (e.g., requirements specification and DW schema design) still require further investigation. In fact, few propositions have been put forward to assist in and/or automate the DW design process, cf. (Bonifati, Cattaneo, Ceri, Fuggetta & Paraboschi, 2001; Hahn, Sapia & Blaschka, 2000; Phipps & Davis, 2002; Peralta, Marotta & Ruggia, 2003).


1998, Vol 27 (1), pp. 21-26
Author(s): Nick Roussopoulos

2021, Vol 17 (4), pp. 1-28
Author(s): Waqas Ahmed, Esteban Zimányi, Alejandro A. Vaisman, Robert Wrembel

Data warehouses (DWs) evolve in both their content and their schema due to changes in user requirements, business processes, or external sources, to name a few. Although multiple approaches using temporal and/or multiversion DWs have been proposed to handle these changes, an efficient solution to this problem is still lacking. The authors' approach is to separate concerns, using temporal DWs to deal with content changes and multiversion DWs to deal with schema changes. To address the former, they previously proposed a temporal multidimensional (MD) model. In this paper, they propose a multiversion MD model for schema evolution to tackle the latter problem. The two models complement each other and allow managing both content and schema evolution. The paper gives the semantics of the schema modification operators (SMOs) used to derive the various schema versions, shows how online analytical processing (OLAP) operations such as roll-up work on the model, and finally presents a mapping from the multiversion MD model to a relational schema, along with the OLAP operations in standard SQL.
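The paper's concrete SMO syntax is not reproduced in this summary; as a minimal illustration, a schema version can be modeled as an immutable attribute set, with each SMO deriving a new version that is kept alongside the old one in a catalog. The operator names and encoding below are assumptions.

```python
# Minimal sketch of schema modification operators (SMOs): each operator
# derives a new schema version rather than mutating the old one, so data
# loaded under an earlier version stays queryable under that version.
# The operator names and the schema encoding are illustrative assumptions.

def add_attribute(schema: frozenset[str], attr: str) -> frozenset[str]:
    return schema | {attr}

def drop_attribute(schema: frozenset[str], attr: str) -> frozenset[str]:
    return schema - {attr}

# Versions are kept in a catalog, so both remain addressable.
v1 = frozenset({"store", "product", "amount"})
v2 = add_attribute(v1, "currency")   # SMO yields schema version 2
catalog = {1: v1, 2: v2}
print(sorted(catalog[1]), sorted(catalog[2]))
```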

