A framework for monitoring multiple databases in industries using OPC UA

Author(s):  
Selvine G. Mathias ◽  
Sebastian Schmied ◽  
Daniel Grossmann

Abstract: Database management and monitoring is an inseparable part of any industry. A uniform scheme for monitoring relational databases without explicit user access to the database servers has not been much explored outside the database environment. In this paper, we present a scheme for distributing database-related information from Open Platform Communications Unified Architecture (OPC UA) servers to clients when multiple databases are involved in a factory. The aim is for external but relevant clients to be able to monitor this information mesh independently of explicit access to user schemas. A methodology to dispense data from, as well as check for changes in, databases using SQL queries and events is outlined and implemented using OPC UA servers. The structure can be used as a remote viewing application for multiple databases in one address space of an OPC UA server.

2018 ◽  
Vol 8 (3) ◽  
pp. 63-80
Author(s):  
Samah Bouamama

This article describes how, due to the rapid evolution of technology and the enormous increase in data, it has become difficult to work with traditional database management tools: relational databases quickly reach their limits, and adding servers does not increase performance. In response to this problem, new technologies have emerged, such as NoSQL databases, which radically change the architecture of the databases the authors are used to seeing, thereby increasing the performance and availability of services. As these technologies are relatively new and no standard or formal migration processes yet exist, the authors thought it useful to propose an approach for migrating from a relational database to column-oriented databases such as HBase and Cassandra.


Big data is traditionally associated with distributed systems, and this is understandable given that the volume dimension of big data appears to be best accommodated by the continuous addition of resources over a distributed network rather than the continuous upgrade of a central storage resource. Based on this implementation context, non-distributed relational database models are considered volume-inefficient, and a departure from their usage is contemplated by the database community. Distributed systems depend on data partitioning to determine chunks of related data and where in storage they can be accommodated. In existing Database Management Systems (DBMS), data partitioning is automated, which in the opinion of this paper does not give the best results, since partitioning is an NP-hard problem in terms of algorithmic time complexity. The NP-hardness is shown to be reduced by a partitioning strategy that relies on the discretion of the programmer, which is more effective and flexible though it requires extra coding effort. NP-hard problems are handled more effectively by programmer discretion than by full automation. In this paper, the partitioning process is reviewed and a programmer-based partitioning strategy is implemented for an application with a relational DBMS backend. By doing this, the relational DBMS is made adaptive in the volume dimension of big data. The ACID properties (atomicity, consistency, isolation, and durability) of the relational database model, which constitute a major attraction especially for applications that process transactions, are thus harnessed. On a more general note, the results of this research suggest that databases can be made adaptive in the areas of their weaknesses, as a one-size-fits-all database management system may no longer be feasible.


2018 ◽  
Vol 3 (5) ◽  
pp. 71-75
Author(s):  
Mária Princz

Database management using relational databases is part of the curriculum in Hungarian high schools. The aim of this paper is to present how we can show students the challenges of data processing and data retrieval beyond the relational database management taught in high school.


2001 ◽  
Vol 40 (03) ◽  
pp. 225-228 ◽  
Author(s):  
K. C. O’Kane

Abstract: An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.


Author(s):  
Iryna Kanarskaya

The paper is devoted to research on algorithms implementing intersection, union, and difference on tables and multitables. The subject is relevant since, despite the importance and applicability of set-theoretic operations in relational databases, researchers' attention has for some reason focused on optimizing other table operations, first of all the join. Meanwhile, an optimal implementation of set-theoretic operations leads to faster execution of any query containing at least one such operation and significantly reduces information-processing time in database management systems. For each set-theoretic operation, we consider algorithms that implement it on tables, in which rows do not repeat, and on multitables, in which rows can repeat. We then consider the modifications of the basic algorithms that we found, which significantly reduce the number of computations. As the average case, we take the most general case, in which the domain of each attribute of the table schema is fixed and known in advance, and the distribution of values for each attribute in each table is uniform. For each of the six cases (three table operations and three multitable operations), the fastest algorithms by this criterion were found. For all six algorithms considered on tables (the basic ones and the fastest modifications of the basic ones), we found the exact average-case complexity. The formulas defining the complexity of the proposed algorithms do not contain O-asymptotics. For experimental confirmation of the results, we developed a software system that, for tables with given parameters, finds the actual number of computations performed by each of the proposed algorithms. The experiments confirmed the theoretical estimates found for the tables and identified the fastest algorithms for the multitables.
The results of the work can be used both in relational database theory and in practice, in query optimization and in reducing processing time in database management systems.


2002 ◽  
pp. 293-321 ◽  
Author(s):  
Jose F. Aladana Montes ◽  
Mariemma I. Yague del Valle ◽  
Antonio C. Gomez Lora

Issues related to integrity in databases and distributed databases have been introduced in previous chapters. Therefore, the integrity problem in databases and how it can be managed in several data models (relational, active, temporal, geographical, and object-relational databases) are well known to the reader. The focus of this chapter is on introducing a new paradigm: The Web as the database, and its implications regarding integrity, i.e., the progressive adaptation of database techniques to Web usage. We consider that this will be done in a quite similar way to the evolution from integrated file management systems to database management systems.


Author(s):  
Ibrahim Dweib ◽  
Joan Lu

This chapter presents the state-of-the-art approaches for storing and retrieving XML documents in relational databases. Approaches are classified into schema-based and schemaless mapping. It also discusses the solutions included in database management systems such as SQL Server, Oracle, and DB2. The discussion addresses rebuilding XML from RDBMS approaches, a comparison of the mapping approaches, and their advantages and disadvantages. The chapter concludes with a summary of the issues addressed.


2021 ◽  
pp. 1-12
Author(s):  
Rachid Mama ◽  
Mustapha Machkour

Nowadays, several works have been proposed that allow users to perform fuzzy queries on relational databases. Most of these systems, however, are based on an additional software layer to translate a fuzzy query plus a supplementary layer over a classic database management system (DBMS) to evaluate fuzzy predicates, which induces significant overhead. Nor are they easy for a non-expert user to implement. Here we propose a simple and intelligent approach to extend the SQL language, allowing flexible conditions in queries without the need for translation. The main idea is to use a view to manipulate the satisfaction degrees related to user-defined fuzzy predicates, instead of calculating them at runtime through user functions embedded in the query. Consequently, the response time for executing a fuzzy query statement is reduced. This approach allows us to easily integrate most fuzzy query features, such as fuzzy modifiers, fuzzy quantifiers, and fuzzy joins. Moreover, we present a user-friendly interface that makes it easy to use fuzzy linguistic values in all clauses of a select statement. The main contribution of this paper is to accelerate the execution of fuzzy query statements.


Author(s):  
Дмитро Тереник ◽  
Георгій Кучук Анатолійович

Nowadays, due to the rapid development of social networks and blogger culture, there is a tendency to use affiliate systems to promote products. An affiliate reporting service is offered to customers who want to analyze affiliate systems' performance data. These systems are used by business executives and business owners to analyze e-commerce data and convert it into profit/expense figures so they can adjust their business path. This type of service includes data storage for all affiliates, data archive management, conversion tracking for advertising campaigns, trend tracking, and more. Such systems are based on large data sets that must be stored correctly and safely and processed using database management systems. There are two major directions: SQL and NoSQL, relational and non-relational databases. The differences between them lie in how they are designed, what types of data they support, how they store information, and how they support information security. A rigid relational database schema helps maintain the security and integrity of data when it is stored and modified. The lack of a rigid schema, and thus of the need to change the entire table structure for a minimal change in the storage concept, makes non-relational databases easier to work with and to support, but this also has its disadvantages. It is important to understand that tasks differ and the methods for solving them differ as well; choosing a database and database management system is a complex multi-parameter task and one of the most important steps in developing such applications. A properly selected database reduces the monetary and time costs of software development and facilitates system support in the future. The purpose of the article is to compare relational and non-relational databases by different metrics used in affiliate reporting systems design.
In particular, a performance analysis of various operations was conducted, on the basis of which conclusions were drawn about the use of a particular database.


Author(s):  
Christian Bizer ◽  
Andreas Schultz

The SPARQL Query Language for RDF and the SPARQL Protocol for RDF are implemented by a growing number of storage systems and are used within enterprise and open Web settings. As SPARQL is taken up by the community, there is a growing need for benchmarks to compare the performance of storage systems that expose SPARQL endpoints via the SPARQL protocol. Such systems include native RDF stores as well as systems that rewrite SPARQL queries to SQL queries against non-RDF relational databases. This article introduces the Berlin SPARQL Benchmark (BSBM) for comparing the performance of native RDF stores with the performance of SPARQL-to-SQL rewriters across architectures. The benchmark is built around an e-commerce use case in which a set of products is offered by different vendors and consumers have posted reviews about products. The benchmark query mix emulates the search and navigation pattern of a consumer looking for a product. The article discusses the design of the BSBM benchmark and presents the results of a benchmark experiment comparing the performance of four popular RDF stores (Sesame, Virtuoso, Jena TDB, and Jena SDB) with the performance of two SPARQL-to-SQL rewriters (D2R Server and Virtuoso RDF Views) as well as the performance of two relational database management systems (MySQL and Virtuoso RDBMS).

