Experience in developing a web application for relational databases modeling with a forward engineering function for training students of technical specialties

Author(s):  
M. V. Smirnov ◽  
V. M. Polenok

The article establishes the need to develop software for modeling relational databases for use in teaching database-related disciplines to students of technical specialties. The problem is considered from the point of view of assessing the modern software used to teach students database design skills. Based on the shortcomings identified during the software review, a number of requirements for the target software were determined; the key requirements are mobility, accessibility, versatility, and openness of the development platform. The article describes the process of solving the key problems that arose during the implementation of a project to develop a web application for modeling relational databases in accordance with these requirements. The practical implementation of the following functions is considered in sequence: creation of a logical relational data model, creation of a physical data model, and forward engineering into relational database software. The main technological solutions used in the development of the web application to ensure the required qualities are described. The result of the work is the successful testing of the developed application in the process of creating a real web application, both within laboratory and practical work in the disciplines "Design and administration of databases" and "Data management", and at the stage of writing graduate theses in technical fields of study.
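The core of the forward-engineering step described in the abstract is translating a logical model into DDL. A minimal sketch of that idea follows; the model format, type names and flags are illustrative assumptions, not the application's actual design.

```python
# Hypothetical sketch: forward engineering a logical relational model into DDL.
# The model structure ("columns", "foreign_keys") is an assumption for illustration.

def forward_engineer(model):
    """Render a logical model (dict of table -> column specs) as CREATE TABLE DDL."""
    statements = []
    for table, spec in model.items():
        cols = []
        for name, ctype, *flags in spec["columns"]:
            line = f"  {name} {ctype}"
            if "pk" in flags:
                line += " PRIMARY KEY"
            if "not_null" in flags:
                line += " NOT NULL"
            cols.append(line)
        # Foreign keys make the logical relationships physical.
        for col, ref_table, ref_col in spec.get("foreign_keys", []):
            cols.append(f"  FOREIGN KEY ({col}) REFERENCES {ref_table}({ref_col})")
        statements.append(f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);")
    return "\n\n".join(statements)

model = {
    "student": {"columns": [("id", "INTEGER", "pk"),
                            ("name", "VARCHAR(100)", "not_null")]},
    "grade": {
        "columns": [("id", "INTEGER", "pk"),
                    ("student_id", "INTEGER"),
                    ("value", "INTEGER")],
        "foreign_keys": [("student_id", "student", "id")],
    },
}
ddl = forward_engineer(model)
print(ddl)
```

A real tool would also handle dialect differences, constraints and indexes; this only shows the shape of the transformation.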

2018 ◽  
Vol 22 (1) ◽  
pp. 53-61
Author(s):  
A. S. Markovskiy ◽  
N. I. Svekolkin

Given the tightening of difficult-to-implement information security requirements, the growing number of external destabilizing factors (including a high level of false alarms), the increasing volume and speed of information changes, and the drawbacks inherent to most databases, the probability of anomalies arising during the operation (acquisition, processing and storage) of relational databases is high. The article provides a detailed description of a method for constructing the formal grammar executed by an SQL query against a relational database. This approach treats the formal grammar under study mathematically, as a model that defines a set of discrete objects through a description of the original objects and the rules for constructing new objects from the original and already created ones. A system of rules for further work is thus formed, represented as a system of equations. The described method makes it possible to determine the mathematical properties of the similarity invariants of SQL queries against relational databases intended for the collection, storage and analysis of statistical data, such as reference data on the operation of software and hardware, various statistics about population, production, etc. The results of testing a demonstration prototype of the anomaly detection system implemented on the basis of the proposed method, obtained in the course of the experimental implementation, are presented in comparison with some existing, deployed security systems. The solution proposed in the article is effective, simple and universal for the majority of currently used relational databases; in addition, it entails low financial costs in practical implementation.
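The general idea of grammar-based SQL anomaly detection can be sketched as follows: reduce each query to a structural signature (a sequence of token classes) and flag signatures never seen during normal operation. This is an illustrative simplification, not the authors' exact method; the keyword set and token classes are assumptions.

```python
import re

# Illustrative sketch (not the article's exact method): each SQL query is reduced
# to a signature of token classes; signatures unseen during normal operation are
# treated as anomalies (e.g., a tautology injected into a WHERE clause).

KEYWORDS = {"select", "from", "where", "and", "or",
            "insert", "into", "values", "update", "set"}

def signature(query):
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|'[^']*'|[<>=!]+|[(),*;]", query)
    classes = []
    for t in tokens:
        if t.lower() in KEYWORDS:
            classes.append(t.upper())          # grammar keyword
        elif t.isdigit() or t.startswith("'"):
            classes.append("CONST")            # literal constant
        elif re.match(r"[A-Za-z_]", t):
            classes.append("ID")               # identifier (table/column)
        else:
            classes.append(t)                  # operator or punctuation
    return tuple(classes)

# "Training" phase: signatures observed while the application ran normally.
normal = {signature("SELECT name FROM users WHERE id = 42")}

def is_anomalous(query):
    return signature(query) not in normal

print(is_anomalous("SELECT name FROM users WHERE id = 7"))         # same structure
print(is_anomalous("SELECT name FROM users WHERE id = 7 OR 1=1"))  # injected tautology
```

Constants differ between the two legitimate queries, but their signatures coincide; the injected query produces a structurally different signature.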


2018 ◽  
Vol 15 (3) ◽  
pp. 821-843
Author(s):  
Jovana Vidakovic ◽  
Sonja Ristic ◽  
Slavica Kordic ◽  
Ivan Lukovic

A database management system (DBMS) is based on a data model whose concepts are used to express a database schema. Each data model has a specific set of integrity constraint types. There are integrity constraint types, such as the key constraint, unique constraint and foreign key constraint, that are supported by most DBMSs. Other, more complex constraint types are difficult to express and enforce and are largely disregarded by current DBMSs; users have to manage them using custom procedures or triggers. The Extensible Markup Language (XML) has become the universal format for representing and exchanging data. Very often XML data are generated from relational databases and exported to a target application or another database. In this context, integrity constraints play an essential role in preserving the original semantics of the data. Integrity constraints have been extensively studied in the relational data model. Mechanisms provided by XML schema languages rely on a simple form of constraints that is sufficient neither for expressing semantic constraints commonly found in databases nor for expressing more complex constraints induced by the business rules of the system under study. In this paper we present a classification of constraint types in the relational data model, discuss possible declarative mechanisms for their specification and enforcement in the XML data model, and illustrate our approach to the definition and enforcement of complex constraint types in the XML data model using the example of the extended tuple constraint type.
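A tuple constraint of the kind the abstract mentions relates two values within the same tuple, which plain XML Schema 1.0 cannot express declaratively; in practice it is often enforced in code. A minimal sketch follows, with hypothetical element names and a hypothetical constraint ("bonus must not exceed salary").

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: enforcing a tuple constraint on XML data in application code.
# The document structure and the constraint itself are assumptions for illustration.

XML = """
<employees>
  <employee><name>Ana</name><salary>900</salary><bonus>100</bonus></employee>
  <employee><name>Ben</name><salary>800</salary><bonus>900</bonus></employee>
</employees>
"""

def check_tuple_constraint(xml_text):
    """Tuple constraint: bonus must not exceed salary within the same employee."""
    violations = []
    root = ET.fromstring(xml_text)
    for emp in root.findall("employee"):
        salary = int(emp.findtext("salary"))
        bonus = int(emp.findtext("bonus"))
        if bonus > salary:
            violations.append(emp.findtext("name"))
    return violations

print(check_tuple_constraint(XML))
```

A declarative alternative, closer to the paper's direction, would express the same rule as a schema-level assertion rather than imperative code.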


Author(s):  
Antonio Badia

Data warehouses (DW) appeared first in industry in the mid 1980s. When their impact on businesses and database practices became clear, a flurry of research took place in academia in the late 1980s and 1990s. However, the concept of the DW still remains rooted in its practical origins. This entry describes the basic concepts behind a DW while keeping the discussion at an intuitive level. The entry is meant as an overview to complement more focused and detailed entries, and it assumes only familiarity with the relational data model and relational databases.


Author(s):  
Devendra K. Tayal ◽  
P. C. Saxena

In this paper we discuss an important integrity constraint called multivalued dependency (mvd), which occurs as a result of the first normal form, in the framework of a newly proposed model called the fuzzy multivalued relational data model. The fuzzy multivalued relational data model proposed in this paper accommodates a wider class of ambiguities by representing the domain of attributes as a "set of fuzzy subsets". We show that our model is able to represent multiple types of impreciseness occurring in the real world. To compute the equality of two fuzzy sets/values (which occur as tuple values), we use the concept of fuzzy functions. The main objective of this paper is thus to extend mvds to the context of the fuzzy multivalued relational model so that a wider class of impreciseness can be captured. Since mvds may not exist in isolation, a complete axiomatization for a set of fuzzy functional dependencies (ffds) and mvds in a fuzzy multivalued relational schema is provided, and the role of fmvds in obtaining the lossless join decomposition is discussed. We also provide a set of sound inference rules for the fmvds and derive the conditions for these inference rules to be complete. We also derive the conditions for obtaining the lossless join decomposition of a fuzzy multivalued relational schema in the presence of the fmvds. Finally, we extend ABU's algorithm to find the lossless join decomposition in the context of fuzzy multivalued relational databases. We apply all of the concepts of fmvds developed by us to a real-world "Technical Institute" application and demonstrate how the concepts fit well to capture the multiple types of impreciseness.
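Comparing two fuzzy attribute values for equality, as the abstract describes, requires a graded rather than crisp test. A minimal sketch of one common degree-of-equality measure (1 minus the largest membership difference) follows; this is an assumption for illustration, not necessarily the fuzzy function the authors use.

```python
# Illustrative sketch: degree of equality of two fuzzy attribute values, taken here
# as 1 minus the largest membership difference over the common universe. This is
# one standard similarity measure, not the paper's specific fuzzy function.

def fuzzy_equality(a, b):
    """a, b: dicts mapping universe elements to membership degrees in [0, 1]."""
    universe = set(a) | set(b)
    return 1.0 - max(abs(a.get(x, 0.0) - b.get(x, 0.0)) for x in universe)

# Two imprecise "experience level" values, in the spirit of the paper's
# "Technical Institute" example (the values themselves are hypothetical).
expert_1 = {"low": 0.1, "medium": 0.7, "high": 0.9}
expert_2 = {"low": 0.1, "medium": 0.6, "high": 1.0}

deg = fuzzy_equality(expert_1, expert_2)
print(round(deg, 2))
```

Such a measure returns 1.0 only for identical fuzzy sets and degrades gracefully as the memberships diverge, which is what makes graded tuple comparison possible.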


Relational databases hold most of the data underpinning the web. They have an excellent record of convenience and efficiency in storage, optimized query execution, scalability, security and accuracy. Recently, graph databases have come to be seen as a good replacement for relational databases. Compared to the relational data model, the graph data model is more expressive and models relationships among data more naturally. An important requirement is to bring the vast quantities of data stored in RDBs onto the web. In this situation, migration from the relational to the graph format is very advantageous. Both databases have advantages and limitations depending on the form of the queries. Thus, this paper converts a relational database to a graph database by utilizing the schema, in order to develop a dual database system through migration that merges the capabilities of both the relational and the graph database. Experimental results are provided to demonstrate the practicability of the method and the query response time over the target database. The proposed concept is proved by implementing it on MySQL and Neo4j.
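The schema-driven migration the abstract describes typically maps each row to a labelled node and each foreign key to a relationship. A minimal sketch that emits Neo4j Cypher statements from in-memory rows follows; the table names, the `_id` naming convention, and the generated Cypher patterns are illustrative assumptions, and real migration tools handle many more cases.

```python
# Hypothetical sketch of relational-to-graph migration: rows become labelled nodes,
# foreign keys become relationships. Schema, data and Cypher shape are illustrative.

def row_to_cypher(table, props):
    rendered = ", ".join(f"{k}: {v!r}" for k, v in props.items())
    return f"CREATE (:{table.capitalize()} {{{rendered}}})"

def fk_to_cypher(src_table, src_id, fk_col, dst_table, dst_id):
    return (f"MATCH (a:{src_table.capitalize()} {{id: {src_id}}}), "
            f"(b:{dst_table.capitalize()} {{id: {dst_id}}}) "
            f"CREATE (a)-[:{fk_col.upper()}]->(b)")

rows = {"author": [{"id": 1, "name": "Ada"}],
        "book":   [{"id": 7, "title": "Notes", "author_id": 1}]}
fks = {"book": [("author_id", "author")]}   # fk column -> referenced table

statements = []
for table, table_rows in rows.items():
    for row in table_rows:
        # Keep foreign-key columns out of node properties; they become edges.
        node = {k: v for k, v in row.items() if not k.endswith("_id")}
        statements.append(row_to_cypher(table, node))
        for fk_col, dst in fks.get(table, []):
            statements.append(fk_to_cypher(table, row["id"], fk_col, dst, row[fk_col]))

print("\n".join(statements))
```

Running the emitted statements against Neo4j would produce an `Author` node, a `Book` node, and an `AUTHOR_ID` relationship between them, mirroring the relational foreign key.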


2019 ◽  
Vol 17 (3) ◽  
pp. 123-134
Author(s):  
E. A. Yatsenko

The article provides an overview of projects, technologies and software products developed to implement the ideas of an object-oriented approach to database design. In the 1980s there were many projects devoted to the idea of the OODB, and many experts expected that in the near future relational databases would be crowded out by object-oriented ones. Despite the impressive number of projects conducted both by teams of scientists and by commercial companies focused on practical implementation, there was no clear formulation of an object-oriented data model; each team presented its own vision of applying object-oriented concepts to database design. The absence of a universal data model with a well-developed mathematical apparatus (as in the case of relational databases) is still the main obstacle to the adoption of OODBMSs. However, the use of relational DBMSs raises many problems that are felt most acutely in areas such as computer-aided design, computer-aided manufacturing, knowledge-based systems, and others. OODBs allow the program code and the data to be combined, avoiding the mismatch between the representations of information in the database and in the application program, which is why modern developers show interest in them. There are many OODBMSs, but they cannot compete with the largest storage organization systems.


2011 ◽  
Vol 8 (1) ◽  
pp. 27-40 ◽  
Author(s):  
Srdjan Skrbic ◽  
Milos Rackovic ◽  
Aleksandar Takaci

In this paper we examine the possibilities of extending the relational data model with mechanisms that can handle imprecise, uncertain and inconsistent attribute values using fuzzy logic and fuzzy sets. We present a fuzzy relational data model, used for fuzzy knowledge representation in relational databases, that guarantees that the model is in third normal form. We also describe a CASE tool for fuzzy database model development, which is apparently the first implementation of such a CASE tool. In this sense, this paper presents a leap forward towards the specification of a methodology for the development of fuzzy relational database applications.
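The practical payoff of a fuzzy relational model is graded querying: a selection returns tuples with the degree to which they satisfy an imprecise predicate. A minimal sketch follows; the triangular membership function, attribute names and threshold are hypothetical, used only to illustrate the mechanism.

```python
# Illustrative sketch: a fuzzy selection over a relation, where the predicate
# "age is about 30" is a triangular fuzzy set. All names/values are hypothetical.

def about(center, spread):
    """Triangular membership function centred on `center`, zero beyond `spread`."""
    def mu(x):
        return max(0.0, 1.0 - abs(x - center) / spread)
    return mu

def fuzzy_select(relation, attr, mu, threshold=0.5):
    """Return (tuple, degree) pairs whose membership meets the threshold."""
    return [(row, round(mu(row[attr]), 2))
            for row in relation if mu(row[attr]) >= threshold]

employees = [{"name": "Ana", "age": 29},
             {"name": "Ben", "age": 41},
             {"name": "Eva", "age": 33}]

result = fuzzy_select(employees, "age", about(30, 10))
print(result)
```

Ana and Eva satisfy "about 30" to degrees 0.9 and 0.7 respectively, while Ben falls below the threshold; a crisp SQL predicate could only return an unranked subset.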


2020 ◽  
pp. 041-054
Author(s):  
I.S. Chystiakova

This paper is dedicated to the data integration problem. It discusses the practical implementation of mappings between description logic and a binary relational data model, a method formulated earlier at a theoretical level. A practical technique for testing mapping engines using RDF is provided in the current paper. To transform the constructs of the description logic ALC and its main extensions into RDF triples, the OWL 2-to-RDF mappings are used; to convert an RDB to an RDF graph, the RDB to RDF Mapping Language (R2RML) was chosen. The mappings of DL ALC and its main extensions to RDF triples are described in the publication, as is the mapping of DL axioms into RDF triples. The main difficulties in describing DL-to-RDF transformations are given in the corresponding section. For each constructor of concepts and roles, a corresponding expression in OWL 2 and its mapping into RDF triples is given, and a schematic representation of the resulting RDF graph for each mapping is created. The paper also provides an overview of existing methods that relate to the use of RDF when mapping an RDB to an ontology and vice versa.
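To make the DL-to-RDF direction concrete: the ALC intersection C ⊓ D is rendered in OWL 2 as an `owl:intersectionOf` whose operands form an RDF list, which expands into several triples. A minimal sketch of that expansion follows; the concept names are hypothetical, and a real mapping engine covers all constructors, not just intersection.

```python
# Illustrative sketch: mapping the ALC intersection C ⊓ D to RDF triples via the
# OWL 2 owl:intersectionOf pattern, with the RDF list spelled out explicitly.

OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def intersection_to_triples(node, concepts):
    """Emit triples for `node owl:intersectionOf (C1 C2 ...)` as an RDF list."""
    triples = [(node, RDF + "type", OWL + "Class")]
    list_nodes = [f"_:l{i}" for i in range(len(concepts))]  # blank list cells
    triples.append((node, OWL + "intersectionOf", list_nodes[0]))
    for i, c in enumerate(concepts):
        triples.append((list_nodes[i], RDF + "first", c))
        rest = list_nodes[i + 1] if i + 1 < len(concepts) else RDF + "nil"
        triples.append((list_nodes[i], RDF + "rest", rest))
    return triples

# Hypothetical concepts: Student ⊓ Employee as an anonymous class _:c.
triples = intersection_to_triples("_:c", ["ex:Student", "ex:Employee"])
for t in triples:
    print(t)
```

A single two-operand intersection thus becomes six triples, which is exactly the kind of expansion the paper's schematic RDF graphs depict.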


Author(s):  
Esko Piirainen ◽  
Eija-Leena Laiho ◽  
Tea von Bonsdorff ◽  
Tapani Lahti

The Finnish Biodiversity Information Facility, FinBIF (https://species.fi), has developed its own taxon database. This allows FinBIF taxon specialists to maintain their own, expert-validated view of Finnish species. The database covers national needs and can be rapidly expanded by our own development team. Furthermore, in the database each taxon is given a globally unique persistent URI identifier (https://www.w3.org/TR/uri-clarification), which refers to the taxon concept, not just to the name. The identifier does not change if the taxon concept does not change. We aim to ensure compatibility with checklists from other countries by linking taxon concepts as Linked Data (https://www.w3.org/wiki/LinkedData), a work started as part of the Nordic e-Infrastructure Collaboration (NeIC) DeepDive project (https://neic.no/deepdive). The database is used as a basis for observation/specimen searches, e-Learning and identification tools, and it is browsable by users of the FinBIF portal. The data is accessible to everyone under the CC-BY 4.0 license (https://creativecommons.org/licenses/by/4.0) in machine-readable formats. The taxon specialists maintain the taxon data using a web application. Currently, there are 60 specialists. All changes made to the data go live every night. The nightly update interval allows the specialists a grace period to make their changes. Allowing the taxon specialists to modify the taxonomy database themselves leads to some challenges. To maintain the integrity of critical data, such as lists of protected species, we have had to limit what the specialists can do. Changes to critical data are carried out by an administrator. The database has special features for linking observations to the taxonomy. These include hidden species aggregates and tools to override how a certain name used in observations is linked to the taxonomy. Misapplied names remain an unresolved problem.
The most precise way to record an observation is to use a taxon concept: most observations are still recorded using plain names, but it is possible for the observer to pick a concept. Also, when data is published in FinBIF from other information systems, the data providers can link their observations to the concepts using the identifiers of the concepts. The ability to use taxon concepts as the basis of observations means we have to maintain the concepts over time, a task that may become arduous in the future (Fig. 1). As it stands now, the FinBIF taxon data model, including adjacent classes such as publication, person, image, and endangerment assessments, consists of 260 properties. If the data model were stored in a normalized relational database, there would be approximately 56 tables, which could be difficult to maintain. Keeping track of a complete history of data is difficult in relational databases. Alternatively, we could use a document store for the taxon data. However, there are some difficulties associated with document stores: (1) much work is required to implement a system that does small atomic update operations; (2) batch updates modifying multiple documents usually require writing a script; and (3) they are not ideal for searches. We do use a document store for observation data, however, because document stores are well suited for storing large quantities of complex records. In FinBIF, we have decided to use a triplestore for all small datasets, such as taxon data. More specifically, the data is stored according to the RDF specification (https://www.w3.org/RDF). An RDF Schema defines the allowed properties for each class. Our triplestore implementation is an Oracle relational database with two tables (resource and statement), which gives us the ability to do SQL queries and updates. Doing small atomic updates is easy, as only a small subset of the triples needs to be updated instead of the entire data entity.
Maintaining a complete record of history comes without much effort, as it can be done at the level of individual triples. For performance-critical queries, the taxon data is loaded into an Elasticsearch (https://www.elastic.co) search engine.
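The two-table triplestore idea described above can be sketched in a few lines. The version below uses SQLite rather than Oracle, and the table and column names are illustrative guesses at the pattern, not FinBIF's actual schema; it shows why a small atomic update touches only the affected triples.

```python
import sqlite3

# Sketch of a two-table (resource, statement) triplestore over a relational
# database, in the spirit of the design described above. SQLite stands in for
# Oracle; names and schema details are assumptions for illustration.

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE resource (id INTEGER PRIMARY KEY, uri TEXT UNIQUE);
CREATE TABLE statement (subject INTEGER, predicate INTEGER, object TEXT,
                        FOREIGN KEY (subject) REFERENCES resource(id),
                        FOREIGN KEY (predicate) REFERENCES resource(id));
""")

def rid(uri):
    """Intern a URI in the resource table and return its row id."""
    db.execute("INSERT OR IGNORE INTO resource (uri) VALUES (?)", (uri,))
    return db.execute("SELECT id FROM resource WHERE uri = ?", (uri,)).fetchone()[0]

def add(s, p, o):
    db.execute("INSERT INTO statement VALUES (?, ?, ?)", (rid(s), rid(p), o))

def set_value(s, p, o):
    """Small atomic update: replace a single property of a single taxon."""
    db.execute("DELETE FROM statement WHERE subject = ? AND predicate = ?",
               (rid(s), rid(p)))
    add(s, p, o)

add("taxon:MX.1", "rdfs:label", "Lynx lynx")
set_value("taxon:MX.1", "rdfs:label", "Lynx lynx (Linnaeus, 1758)")

row = db.execute("""SELECT st.object FROM statement st
                    JOIN resource s ON s.id = st.subject
                    WHERE s.uri = 'taxon:MX.1'""").fetchone()
print(row[0])
```

History tracking would follow naturally by archiving deleted statement rows with a timestamp instead of discarding them, which matches the per-triple history the text describes.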


Author(s):  
I.I. Kovtun ◽  
G.S. Romanenko

The paper considers the problems of analysis and planning of diversified production, including the lack of effective methods for the search, systematization and primary processing of the initial data required by the mathematical programming methods used in its optimization. The IDEF0 methodology is proposed as the data collection system. For the practical implementation of the proposed solution, the conceptual apparatus of this methodology is aligned with the formal apparatus of these methods and, for its effective realization by computer, the relational data model is used as a metamodel.

