Method for Creating Collections with Embedded Documents for Document-oriented Databases Taking into Account Executable Queries

2020 ◽  
Vol 19 (4) ◽  
pp. 829-854
Author(s):  
Yulia Shichkina ◽  
Van Muon Ha

In recent decades, NoSQL databases have become more popular day by day, and developers and database administrators increasingly have to migrate data from the relational model to NoSQL databases such as the document-oriented database MongoDB. This article discusses an approach to such data migration based on set theory. A new formal method is proposed for determining the composition of collections with embedded documents in a document-oriented NoSQL database that is optimal with respect to the execution time of search queries. The attributes of the database objects are the input for optimizing the number of collections and their structure with respect to search queries. The initial data are the properties of the objects (attributes and relationships between attributes) about which information is stored in the database, and the properties of the queries that are executed most often or whose speed should be maximal. The article discusses the basic types of relationships (1-1, 1-M, M-M) typical of the relational model. The proposed method is the next step after the method of creating collections without embedded documents. The article also provides a way to determine which method should be used in which cases to make work with the database more effective. Finally, the article shows the results of testing the proposed method on databases with different initial schemas. Experimental results show that the proposed method significantly reduces the execution time of queries and can also reduce the amount of memory required to store the data in the new database.
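To illustrate the kind of design decision the method formalizes, the following is a minimal sketch (not the authors' algorithm) of embedding a 1-M relationship in MongoDB with pymongo; the collection and field names (customers, orders) are hypothetical example data.

    # Minimal sketch, assuming pymongo and a local MongoDB instance.
    # Embeds a 1-M relationship (customer -> orders) so that a frequent query
    # "fetch a customer together with all their orders" touches one collection
    # instead of joining two.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["shop"]

    # Relational-style source rows (hypothetical example data).
    customer = {"customer_id": 1, "name": "Alice"}
    orders = [
        {"order_id": 10, "customer_id": 1, "total": 25.0},
        {"order_id": 11, "customer_id": 1, "total": 40.5},
    ]

    # Embedded-document design: the "many" side is nested inside the "one" side.
    db.customers.insert_one({
        "_id": customer["customer_id"],
        "name": customer["name"],
        "orders": [{"order_id": o["order_id"], "total": o["total"]} for o in orders],
    })

    # The frequent query now needs a single lookup, no application-side join.
    doc = db.customers.find_one({"_id": 1})
    print(doc["name"], len(doc["orders"]))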

2019 ◽  
pp. 15-28
Author(s):  
Van Muon Ha ◽  
Yulia A. Shichkina ◽  
Sergey V. Kostichev ◽  
...  

The task of transforming a database from one format to another periodically arises in different organizations for various reasons. Today, the mechanism for changing the format of relational databases is well developed. However, with the advent of new types of databases, such as NoSQL, the problem remains open because of the radically different ways in which data is organized in the different databases. This article discusses a formalized method, based on set theory, for choosing the number and composition of collections for a key-value database. The initial data are the properties of the objects about which information is stored in the database and the set of queries that are executed most frequently. The considered method can be applied not only when creating a new key-value database, but also when transforming an existing one, when moving from relational databases to NoSQL, and when consolidating databases.
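The following is a minimal sketch of one set-theoretic heuristic in this spirit (not the paper's exact method): attributes requested by exactly the same subset of frequent queries are grouped into one collection, so each query touches as few collections as possible. The attribute and query sets are hypothetical.

    # Minimal sketch, assuming queries are given as sets of attribute names.
    from collections import defaultdict

    # Hypothetical object attributes and frequent queries.
    attributes = {"id", "name", "email", "city", "order_total", "order_date"}
    queries = [
        {"name", "email"},
        {"name", "city"},
        {"order_total", "order_date"},
    ]

    # Signature of an attribute = the set of queries that use it.
    signature = {
        attr: frozenset(i for i, q in enumerate(queries) if attr in q)
        for attr in attributes
    }

    # Attributes with identical signatures go into the same collection.
    collections = defaultdict(set)
    for attr, sig in signature.items():
        collections[sig].add(attr)

    for sig, attrs in collections.items():
        print(sorted(attrs), "used by queries", sorted(sig))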


2021 ◽  
Vol 2021 ◽  
pp. 1-23
Author(s):  
Jiang Wu ◽  
Du Ni ◽  
Zhi Xiao

To process huge amounts of data, computing resources need to be organized in clusters that can be scaled out easily. However, traditional SQL databases built on the relational data model are difficult to use in such clusters, which has motivated the movement known as NoSQL. NoSQL databases, in turn, are limited by their own data models. In this paper, the original soft set theory is extended and a new theoretical system called the n-tier soft set is proposed. We systematically construct its concepts, definitions, and operations, establishing it as a novel soft set algebra. Several features of this algebra show its natural advantages as a data model that combines the logicality of the SQL model (also known as the relational model) with the flexibility of NoSQL models. This data model provides a unified and normative logic for organizing and manipulating data, combines metadata (semantics) with data to form a self-describing structure, and combines indexes with data to enable fast locating and correlating.
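For background, here is a minimal sketch of a classic soft set and two of its standard operations; this is the starting point the paper extends, not the n-tier construction itself, and the universe and parameter names are illustrative.

    # Minimal sketch of a classic soft set: a mapping from parameters to subsets
    # of a universe, which already resembles a schema-flexible record of data.
    universe = {"d1", "d2", "d3", "d4"}  # hypothetical objects

    # Soft set F: parameter -> subset of the universe that satisfies it.
    F = {
        "is_json": {"d1", "d2", "d3"},
        "has_index": {"d2", "d4"},
        "large": {"d3"},
    }
    G = {"is_json": {"d2", "d3", "d4"}, "has_index": {"d1", "d2"}}

    def restricted_intersection(F, G):
        """AND-like combination: common parameters, intersected value sets."""
        return {p: F[p] & G[p] for p in F.keys() & G.keys()}

    def restricted_union(F, G):
        """OR-like combination: common parameters, united value sets."""
        return {p: F[p] | G[p] for p in F.keys() & G.keys()}

    print(restricted_intersection(F, G))
    print(restricted_union(F, G))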


2010 ◽  
Vol 43 ◽  
pp. 269-273
Author(s):  
Xue Li ◽  
He Wang ◽  
Shu Fen Chen

To solve the difficulty of establishing a mathematical model relating process parameters and surface quality in electro-spark machining of engineering ceramics, a neural network model of this relation, based on rough set theory, is presented. By performing attribute reduction on the data sample using rough set theory, defects such as a bulky neural network structure and difficult convergence are avoided when the input dimension is high. A prediction model describing how surface roughness varies with the processing parameters is established using the resulting well-structured neural network combined with rough sets. The study shows that this model can predict surface roughness under the given conditions with little error, which demonstrates its reliability.
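A minimal sketch of this pipeline shape (not the paper's procedure) is given below: a greedy rough-set style attribute reduction step followed by a small neural network regressor. The sample data, its discretization, and the network size are purely illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical machining samples: columns = discretized process parameters,
    # plus surface roughness as a decision class (for the reduct) and as a
    # continuous target (for the regressor).
    X = np.array([[1, 0, 2], [1, 1, 2], [0, 1, 1], [0, 0, 1], [2, 1, 0]])
    y_class = np.array([0, 0, 1, 1, 2])          # discretized roughness
    y = np.array([0.8, 0.9, 1.6, 1.5, 2.4])      # measured roughness

    def consistent(cols):
        """True if objects indiscernible on `cols` always share the decision class."""
        seen = {}
        for row, d in zip(X[:, cols], y_class):
            key = tuple(row)
            if key in seen and seen[key] != d:
                return False
            seen[key] = d
        return True

    # Greedy reduction: drop an attribute whenever consistency is preserved.
    reduct = list(range(X.shape[1]))
    for col in list(reduct):
        trial = [c for c in reduct if c != col]
        if trial and consistent(trial):
            reduct = trial

    # Train a small network only on the reduced attribute set.
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X[:, reduct], y)
    print("reduct columns:", reduct, "predictions:", net.predict(X[:2, reduct]))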


2019 ◽  
Vol 51 (4) ◽  
pp. 167-179
Author(s):  
Marcin Pietroń

Databases are a basic component of every GIS system and many geoinformation applications. They also hold a prominent place in the toolkit of any cartographer. Solutions based on the relational model have been the standard for a long time, but there is a new, increasingly popular technological trend: solutions based on NoSQL databases, which have many advantages in the context of processing large data sets. This paper compares the performance of selected spatial relational and NoSQL databases executing queries with selected spatial operators. It was hypothesised that the non-relational solution would prove more effective, which the results of the study confirmed. The same spatial data set was loaded into PostGIS and MongoDB databases, which ensured standardisation of the data for comparison purposes. Then, SQL queries and JavaScript commands were used to perform specific spatial analyses, while the parameters necessary to compare performance were measured. The study's results reveal which approach is faster and uses fewer computer resources. However, it is difficult to identify clearly which technology is better, because a number of other factors have to be considered when choosing the right tool.
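The following is a minimal sketch of such a comparison (not the paper's benchmark): timing one spatial predicate in PostGIS and in MongoDB over the same data. The connection strings, table name (roads) and field names are hypothetical, and psycopg2 and pymongo are assumed as clients.

    import time
    import psycopg2
    from pymongo import MongoClient

    polygon_wkt = "POLYGON((19.9 50.0, 20.1 50.0, 20.1 50.1, 19.9 50.1, 19.9 50.0))"
    polygon_geojson = {
        "type": "Polygon",
        "coordinates": [[[19.9, 50.0], [20.1, 50.0], [20.1, 50.1],
                         [19.9, 50.1], [19.9, 50.0]]],
    }

    # PostGIS: count features intersecting the polygon.
    pg = psycopg2.connect("dbname=gis user=gis password=gis host=localhost")
    cur = pg.cursor()
    t0 = time.perf_counter()
    cur.execute(
        "SELECT count(*) FROM roads WHERE ST_Intersects(geom, ST_GeomFromText(%s, 4326))",
        (polygon_wkt,),
    )
    print("PostGIS:", cur.fetchone()[0], "rows in", time.perf_counter() - t0, "s")

    # MongoDB: same predicate via $geoIntersects (requires a 2dsphere index on 'geometry').
    mongo = MongoClient("mongodb://localhost:27017")["gis"]
    t0 = time.perf_counter()
    n = mongo.roads.count_documents(
        {"geometry": {"$geoIntersects": {"$geometry": polygon_geojson}}}
    )
    print("MongoDB:", n, "documents in", time.perf_counter() - t0, "s")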


Author(s):  
Cyril Pshenichny

The theory of multitudes is intended as an alternative to virtually all existing versions of set theory and claims to better handle knowledge about a changing and evolving world. By analogy, one may then expect an original logical system based on the theory of multitudes and, within this logic, an authentic calculus. This chapter presents such a calculus. Moreover, a new mathematical methodology can be developed on top of it, which, together with the underlying logic, should clearly separate qualitative from quantitative and static from dynamic concerns, and offer a formal method for proceeding from the representation of expert knowledge to modeling the world this knowledge is about.


Author(s):  
TRU H. CAO ◽  
HOA NGUYEN

Fuzzy set theory and probability theory are complementary for soft computing, in particular for object-oriented systems with imprecise and uncertain object properties. However, current fuzzy object-oriented data models are mainly based on fuzzy set theory or possibility theory, and lack a rigorous algebra for querying and managing uncertain and fuzzy object bases. In this paper, we develop an object base model that incorporates both fuzzy set values and probability degrees to handle imprecision and uncertainty. A probabilistic interpretation of relations on fuzzy sets is introduced as a formal basis to coherently unify the two types of measures in a common framework. The model accommodates both class attributes, representing declarative object properties, and class methods, representing procedural object properties. Two levels of property uncertainty are taken into account: value uncertainty of a definite property, and applicability uncertainty of the property itself. The syntax and semantics of selection and the other main data operations on the proposed object base model are formally defined as a full-fledged algebra.
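As an illustration of the two levels of uncertainty (not the paper's algebra), here is a minimal sketch in which an object property is stored as a pair of an applicability probability and a fuzzy set of values, and a selection scores each object by combining the two; the combination rule, names and data are assumptions for illustration only.

    def overlap(fuzzy_a, fuzzy_b):
        """Sup-min overlap of two fuzzy sets given as value -> membership dicts."""
        common = fuzzy_a.keys() & fuzzy_b.keys()
        return max((min(fuzzy_a[v], fuzzy_b[v]) for v in common), default=0.0)

    # Hypothetical object base: each object has an 'age' property that is both
    # uncertain (may not apply) and imprecise (a fuzzy set of possible values).
    objects = {
        "o1": {"age": (0.9, {"young": 0.8, "middle": 0.3})},
        "o2": {"age": (0.6, {"middle": 0.7, "old": 0.5})},
    }

    query = {"young": 1.0, "middle": 0.4}   # fuzzy query: "roughly young"
    threshold = 0.5

    for oid, props in objects.items():
        prob, value = props["age"]
        score = prob * overlap(value, query)   # illustrative combination rule
        if score >= threshold:
            print(oid, "selected with score", round(score, 3))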


2017 ◽  
Vol 6 (2) ◽  
pp. 1-17 ◽  
Author(s):  
Suparna Dasgupta ◽  
Soumyabrata Saha ◽  
Suman Kumar Das

This article describes how, as the number of Android users increases day by day, the Internet has become the environment preferred by attackers for injecting malicious packages. Such content is created with the intention of gathering critical information, spying on user details, credentials, call logs and contacts, and tracking user location. Regrettably, it is very hard to detect this malware even with antivirus software, and this type of attack is increasing day by day. In this article the authors apply supervised learning, classification-tree-based algorithms to detect malware on the data set. The classifiers are compared on the basis of accuracy and execution time, and the classifier with the highest detection performance is used to build the model.
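A minimal sketch of this kind of evaluation (not the authors' pipeline) is shown below: a decision-tree classifier trained on a labelled feature matrix and scored by accuracy and training time. The feature matrix here is random, purely illustrative data standing in for Android permission features.

    import time
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(500, 30))   # 30 binary permission-style features
    y = rng.integers(0, 2, size=500)         # 1 = malware, 0 = benign (synthetic labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = DecisionTreeClassifier(max_depth=6, random_state=0)
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0

    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    print("training time:", round(elapsed, 4), "s")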


1991 ◽  
Vol 02 (02) ◽  
pp. 101-131 ◽  
Author(s):  
THANH TUNG NGUYEN

The paper gives a self-contained account of a calculus of relations from basic operations through the treatment of recursive relation equations. This calculus serves as an algebraic apparatus for defining the denotational semantics of Dijkstra’s nondeterministic sequential programming language. Nondeterministic programs are modeled by binary relations, objects of an algebraic structure founded upon the operations “union”, “left restriction”, “demonic composition”, “demonic union”, and the ordering “restriction of”. Recursion and iteration are interpreted as fixed points of continuous relationals. Developed in the framework of set theory, this calculus may be regarded as a systematic generalization of the functional style.
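To give a concrete feel for the demonic operations (an illustrative sketch, not the paper's formalism), the following models relations as sets of pairs and contrasts ordinary composition with demonic composition, which keeps a start state only if every possible intermediate state can proceed.

    def compose(r, s):
        """Ordinary (angelic) relational composition."""
        return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

    def domain(r):
        return {x for (x, _) in r}

    def demonic_compose(r, s):
        """Demonic composition: x survives only if all its r-successors lie in dom(s)."""
        safe = {x for x in domain(r)
                if all(y in domain(s) for (x2, y) in r if x2 == x)}
        return {(x, z) for (x, z) in compose(r, s) if x in safe}

    # Nondeterministic program steps as relations on states {0, 1, 2, 3}.
    r = {(0, 1), (0, 2), (1, 3)}
    s = {(1, 3), (3, 0)}          # state 2 has no s-transition, so it can fail

    print(compose(r, s))          # {(0, 3), (1, 0)}
    print(demonic_compose(r, s))  # {(1, 0)}: 0 is dropped, since r may lead it to 2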


2021 ◽  
Author(s):  
Feroz Alam

As part of achieving specific targets, business decision making involves processing and analyzing large volumes of data, which makes enterprise databases grow day by day. Given the size and complexity of the databases used in today's enterprises, re-engineering applications to handle large amounts of data is a major challenge. Compared to traditional relational databases, non-relational NoSQL databases are better suited for dynamic provisioning, horizontal scaling, high performance, distributed architectures and developer agility. Based on the concept of Object Relational Mapping (ORM) and the traditional ETL data migration technique, this thesis proposes a methodology for migrating data from an RDBMS to NoSQL. The performance of the proposed solution is evaluated through a comparative analysis of RDBMS and NoSQL implementations based on query performance, query structure and development agility.
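As an illustration of the ETL side of such a migration (a minimal sketch, not the thesis methodology), the following extracts rows from a relational table with sqlite3, transforms each row into a document, and loads the documents into MongoDB with pymongo; the table, field and database names are hypothetical.

    import sqlite3
    from pymongo import MongoClient

    # Extract: relational source (an in-memory SQLite table for illustration).
    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
    src.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                    [(1, "Alice", "Toronto"), (2, "Bob", "Ottawa")])

    # Transform: map each row to a document (column names become field names).
    rows = src.execute("SELECT id, name, city FROM customers").fetchall()
    docs = [{"_id": r[0], "name": r[1], "address": {"city": r[2]}} for r in rows]

    # Load: bulk insert into the target NoSQL collection.
    target = MongoClient("mongodb://localhost:27017")["enterprise"]
    target.customers.insert_many(docs)
    print(target.customers.count_documents({}))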

