Cover Stories for Key Attributes—Expanded Database Access Control

Author(s):  
Nenad Jukic ◽  
Svetlozar Nestorov ◽  
Susan V. Vrbsky ◽  
Allen Parrish

In this chapter, we extend the multi-level secure (MLS) data model to include non-key-related cover stories, so that key attributes can have different values at different security levels. MLS data models require the classification of data and users into multiple security levels. In MLS systems, cover stories allow the information provided to users at lower security levels to differ from the information provided to users at higher security levels. Previous versions of the MLS model did not permit cover stories for key attributes because the key is used to relate the various cover stories for a particular entity. We present the necessary model changes and the modifications to the relational algebra that are required to implement cover stories for keys. We demonstrate the improvements made by these changes, illustrate the increased expressiveness of the model, and establish the soundness of a database based on the described concepts.
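Purely as an illustration of the idea (not the chapter's actual model or notation), the Python sketch below shows one way key-attribute cover stories might be represented: a hidden internal identifier relates the cover stories of an entity, since the visible key itself may now differ across security levels, and a query at a given clearance returns the highest dominated version of each entity. All names here (eid, Level, mls_select) are our assumptions.

```python
# Minimal sketch of a multilevel relation with cover stories for a key
# attribute. Names and structure are illustrative assumptions, not the
# chapter's notation.
from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    SECRET = 1
    TOP_SECRET = 2

# Each row: (eid, key_value, attr_value, classification). The hidden `eid`
# relates an entity's cover stories, because the visible key ("ship name")
# can now differ between levels.
SHIPS = [
    ("e1", "USS Alpha", "Norfolk",   Level.UNCLASSIFIED),  # cover story
    ("e1", "USS Omega", "Gibraltar", Level.TOP_SECRET),    # true record
]

def mls_select(rows, clearance):
    """Return, per entity, the row at the highest level <= clearance."""
    best = {}
    for row in rows:
        eid, lvl = row[0], row[3]
        if lvl <= clearance and (eid not in best or lvl > best[eid][3]):
            best[eid] = row
    return list(best.values())

print(mls_select(SHIPS, Level.UNCLASSIFIED))  # sees only the cover story
print(mls_select(SHIPS, Level.TOP_SECRET))    # sees the true record
```

A user cleared to UNCLASSIFIED thus sees "USS Alpha", while a TOP_SECRET user sees "USS Omega" for the same underlying entity.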

Author(s):  
Muneer Ahmad ◽  
Noor Zaman ◽  
Low Tang Jung ◽  
Fausto Pedro García Márquez

Access control for multi-level security documents is a very important and challenging issue. Millions of organizations around the globe intend to apply security levels to their confidential documents to protect them from unauthorized use. A number of access control approaches have been proposed, and an optimal solution is needed. This chapter presents an overview of a robust software engineering approach for access control to multi-level security documents. The access control system incorporates stages including data refinement, text comprehension, and understanding of multi-stage protection and application levels. It scans the document, tags sections of the text, interprets the meaning of the various levels, groups the text using a bottom-up approach, and then classifies the levels according to the protection norms defined by each organization. This approach is very helpful for the multi-level protection of valuable information: only authorized users are able to access the information relevant to them, as defined by the authorities.
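As a rough illustration of the tagging-and-filtering pipeline described above, the Python sketch below assigns each document section the highest protection level whose organization-defined keywords it mentions, then filters sections by user clearance. The rules, level numbers, and function names are all our assumptions, not the chapter's.

```python
# Illustrative sketch only: the chapter describes the pipeline informally,
# so the tagging rules and names below are assumptions.
import re

# Organization-defined protection norms: keywords that mark a section's level.
NORMS = {3: ["salary", "merger"], 2: ["strategy"], 1: ["schedule"]}

def tag_sections(document):
    """Split the document into sections and assign each the highest
    protection level whose keywords it mentions (0 = public)."""
    tagged = []
    for section in re.split(r"\n\s*\n", document.strip()):
        level = max((lvl for lvl, words in NORMS.items()
                     if any(w in section.lower() for w in words)), default=0)
        tagged.append((level, section))
    return tagged

def authorized_view(document, clearance):
    """Return only the sections the user's clearance permits."""
    return "\n\n".join(s for lvl, s in tag_sections(document)
                       if lvl <= clearance)

doc = "Team schedule for May.\n\nPlanned merger with ACME."
print(authorized_view(doc, clearance=1))  # schedule only; merger withheld
```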


Author(s):  
Parisa Ghodous ◽  
Denis Vandorpe

Abstract Integration in computer-integrated manufacturing (CIM) systems plays a significant role in improving quality and productivity. To achieve this objective, a uniform product and process representation and an effective, comprehensive, and reliable data-exchange mechanism are required. Recent work on product data integration has led to STEP (the international Standard for the Exchange of Product model data). In this paper, we define a model that integrates STEP product data models with process data models. The classification scheme used for STEP product data models is applied to classify the process data models. Examples from the mechanical industries are included to demonstrate the features of this model.
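The Python sketch below is our own loose reading of that classification idea: a generic process layer (analogous to STEP's generic resources) specialized into an application layer for the mechanical industries. The class and attribute names are illustrative assumptions, not actual STEP entities.

```python
# Hedged sketch: reuse the layered classification of STEP product models
# (generic -> application-specific) for process models. Names are invented
# for illustration.
from dataclasses import dataclass, field

@dataclass
class GenericProcess:
    """Generic layer, analogous to STEP's generic resources."""
    name: str
    inputs: list = field(default_factory=list)   # product data consumed
    outputs: list = field(default_factory=list)  # product data produced

@dataclass
class MachiningProcess(GenericProcess):
    """Application layer: a specialization for mechanical industries."""
    tool: str = ""
    feed_rate_mm_per_min: float = 0.0

milling = MachiningProcess(name="face milling", inputs=["raw block"],
                           outputs=["faced block"], tool="end mill",
                           feed_rate_mm_per_min=300.0)
print(milling)
```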


2021 ◽  
Author(s):  
Chiara Forresi ◽  
Enrico Gallinucci ◽  
Matteo Golfarelli ◽  
Hamdi Ben Hamadou

Abstract The success of NoSQL DBMSs has pushed the adoption of polyglot storage systems that take advantage of the best characteristics of different technologies and data models. While operational applications benefit greatly from this choice, analytical applications suffer from the absence of schema consistency, not only between different DBMSs but within a single NoSQL system as well. In this context, the discipline of data science is steering analysts away from traditional data warehousing and toward a more flexible and lightweight approach to data analysis. The idea is to perform OLAP analyses in a pay-as-you-go manner across heterogeneous schemas and data models, where the integration is progressively carried out by the user as the available data is explored. In this paper, we propose an approach to support data analysis within a high-variety multistore, with heterogeneous schemas and overlapping records. Our approach supports relational, document, wide-column, and key-value data models by automatically handling both data model and schema heterogeneity through a dataspace layer on top of the underlying DBMSs. The expressiveness we enable corresponds to GPSJ queries, which are the most common class of queries in OLAP applications. We rely on nested relational algebra to define a cross-database execution plan. The system has been prototyped on Apache Spark.
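Since the system is prototyped on Apache Spark, the PySpark fragment below sketches what a GPSJ plan (selection, join, and generalized projection with aggregation) over two stores might look like once the dataspace layer has resolved schema heterogeneity. The paths, schemas, and column names are assumptions of ours, not the paper's.

```python
# Minimal PySpark sketch of a GPSJ query spanning a relational store and a
# document store. All inputs and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gpsj-sketch").getOrCreate()

customers = spark.read.parquet("customers.parquet")  # relational store
orders = spark.read.json("orders.json")              # document store

result = (orders
          .filter(F.col("status") == "shipped")             # selection
          .join(customers, orders.custId == customers.id)   # join
          .groupBy("country")                                # grouping
          .agg(F.sum("amount").alias("revenue")))            # aggregation

result.show()
```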


2015 ◽  
Vol 781 ◽  
pp. 579-582
Author(s):  
Suketa Sala ◽  
Pichitpong Soontornpipit

This study designs and develops two data models to support the classification of DRGs for sub-acute and non-acute inpatient (SNAP) care, both as an appropriate tool for payment and to promote better rehabilitation services. Systems analysis and database design techniques were used. The models are designed from two different approaches: the first minimizes the impact on the current service systems, while the second covers both outpatient and inpatient SNAP services. The two data models, A and B, can serve as alternative database designs for classifying DRGs for SNAP to obtain better reimbursement.


2021 ◽  
pp. 1-25
Author(s):  
Yu-Chin Hsu ◽  
Ji-Liang Shiu

Under a Mundlak-type correlated random effect (CRE) specification, we first show that the average likelihood of a parametric nonlinear panel data model is the convolution of the conditional distribution of the model and the distribution of the unobserved heterogeneity. Hence, the distribution of the unobserved heterogeneity can be recovered by means of a Fourier transformation without imposing a distributional assumption on the CRE specification. We subsequently construct a semiparametric family of average likelihood functions of observables by combining the conditional distribution of the model and the recovered distribution of the unobserved heterogeneity, and show that the parameters in the nonlinear panel data model and in the CRE specification are identifiable. Based on the identification result, we propose a sieve maximum likelihood estimator. Compared with the conventional parametric CRE approaches, the advantage of our method is that it is not subject to misspecification on the distribution of the CRE. Furthermore, we show that the average partial effects are identifiable and extend our results to dynamic nonlinear panel data models.
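The deconvolution step can be sketched as follows, in our own stylized one-dimensional notation rather than the paper's: writing the average likelihood as a convolution of the model's conditional density with the heterogeneity density, the Fourier transform factorizes, and the heterogeneity density is recovered by inversion.

```latex
% Stylized sketch; our notation, not necessarily the paper's.
% g: average likelihood of observables, f: conditional density of the
% model, h: density of the unobserved heterogeneity in the CRE
% specification.
\[
  g(y) \;=\; (f * h)(y) \;=\; \int f(y - a)\, h(a)\, \mathrm{d}a
  \qquad\Longrightarrow\qquad
  \widehat{g}(t) \;=\; \widehat{f}(t)\,\widehat{h}(t),
\]
\[
  h(a) \;=\; \frac{1}{2\pi} \int e^{-\mathrm{i} t a}\,
             \frac{\widehat{g}(t)}{\widehat{f}(t)}\, \mathrm{d}t
  \qquad \text{wherever } \widehat{f}(t) \neq 0,
\]
% so h is recovered by Fourier inversion without any distributional
% assumption on the correlated random effect.
```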


2021 ◽  
Vol 18 (2) ◽  
pp. 110-135
Author(s):  
Xiang Yu ◽  
Zhangxiang Shu ◽  
Qiang Li ◽  
Jun Huang

2021 ◽  
Author(s):  
Matthias Held ◽  
Grit Laudel ◽  
Jochen Gläser

Abstract In this paper we utilize an opportunity to construct ground truths for topics in the field of atomic, molecular and optical physics. Our research questions focus on (i) how to construct a ground truth for topics and (ii) the suitability of common algorithms applied to bibliometric networks to reconstruct these topics. We use the ground truths to test two data models (direct citation and bibliographic coupling) with two algorithms (the Leiden algorithm and the Infomap algorithm). Our results are discomforting: none of the four combinations leads to a consistent reconstruction of the ground truths. No combination of data model and algorithm simultaneously reconstructs all micro-level topics at any resolution level. Meso-level topics are not reconstructed at all. This suggests (a) that we are currently unable to predict which combination of data model, algorithm and parameter setting will adequately reconstruct which (types of) topics, and (b) that a combination of several data models, algorithms and parameter settings appears to be necessary to reconstruct all or most topics in a set of papers.
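To make one tested pipeline concrete, the Python sketch below builds a bibliographic-coupling network (edge weight = number of shared references) and runs a community-detection algorithm on it to obtain candidate topics. The paper evaluates the Leiden and Infomap algorithms; purely as a stand-in that needs only networkx, we use Louvain here, and the citation data is invented for illustration.

```python
# Sketch of the bibliographic-coupling data model with community detection.
# Louvain is used as a stand-in for the Leiden/Infomap algorithms the paper
# actually tests; the reference sets below are invented.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import louvain_communities

# paper -> set of cited references
refs = {
    "p1": {"r1", "r2", "r3"},
    "p2": {"r2", "r3"},
    "p3": {"r4", "r5"},
    "p4": {"r4", "r5", "r6"},
}

# Bibliographic coupling: two papers are linked if they share references,
# weighted by the number of shared references.
G = nx.Graph()
G.add_nodes_from(refs)
for a, b in combinations(refs, 2):
    shared = len(refs[a] & refs[b])
    if shared:
        G.add_edge(a, b, weight=shared)

communities = louvain_communities(G, weight="weight", seed=42)
print(communities)  # candidate topics, e.g. [{'p1', 'p2'}, {'p3', 'p4'}]
```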

