Application and relational database co-refactoring

2014 ◽  
Vol 11 (2) ◽  
pp. 503-524 ◽  
Author(s):  
Ondrej Macek ◽  
Karel Richta

A refactoring of application persistent objects affects not only the source code but the stored data as well. The change is usually processed in two separate steps, refactoring followed by data migration, which is inefficient and error prone. We provide a formal model for a solution that can migrate the database according to a refactoring in the application code. The feasibility of the change and its data-secure processing are addressed as well.
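The coupling the abstract describes can be illustrated with a minimal sketch: a code-level refactoring (here, renaming a persistent field) is translated directly into the matching schema migration, so both are applied as one change instead of two manual steps. The function and names below are invented for illustration and are not the authors' formal model.

```python
# Hypothetical sketch: deriving the data-migration step from a code refactoring,
# so refactoring and migration are processed together rather than separately.

def migration_for_rename(entity: str, old_field: str, new_field: str) -> str:
    """Translate a 'rename persistent field' refactoring into the
    corresponding schema migration statement."""
    return f"ALTER TABLE {entity} RENAME COLUMN {old_field} TO {new_field};"

# Renaming Customer.surname -> Customer.last_name in the application code
# yields the matching database change:
sql = migration_for_rename("customer", "surname", "last_name")
# -> "ALTER TABLE customer RENAME COLUMN surname TO last_name;"
```

A real co-refactoring tool would also have to check feasibility (e.g. that the new column name is free) before emitting the migration, which is the data-secure aspect the abstract mentions.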

2006 ◽  
Vol 35 (3) ◽  
Author(s):  
Bronius Paradauskas ◽  
Aurimas Laurikaitis

This article discusses the process of extracting enterprise knowledge from the relational database and source code of legacy information systems. The problems of legacy systems and the main solutions to them are briefly described. The use of data reverse engineering and program understanding techniques to automatically infer as much as possible of the schema and semantics of a legacy information system is analyzed. An eight-step data reverse engineering algorithm for knowledge extraction from legacy systems is provided, and a hypothetical example of knowledge extraction from a legacy information system is presented.
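The first step of such data reverse engineering is typically reading the schema the DBMS itself exposes. As a rough illustration, assuming nothing about the authors' eight-step algorithm, the sketch below recovers column names and declared types from a live database; SQLite stands in for the legacy DBMS, and the table is invented.

```python
import sqlite3

# Illustrative only: inferring a schema from a running database, the usual
# starting point of data reverse engineering. SQLite stands in for the
# legacy system; a real legacy DBMS would be queried via its catalog views.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, "
    "dept_id INTEGER REFERENCES dept(id))"
)

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk) rows.
schema = {
    row[1]: row[2]  # column name -> declared type
    for row in con.execute("PRAGMA table_info(employee)")
}
# schema == {'id': 'INTEGER', 'dept_id': 'INTEGER'}
```

Implicit semantics (e.g. foreign keys that exist only in application code) cannot be recovered this way, which is why the article pairs schema extraction with program understanding techniques.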


2021 ◽  
Vol 40 (4) ◽  
pp. 713-727
Author(s):  
F.M. Dahunsi ◽  
A.J. Joseph ◽  
O.A. Sarumi ◽  
O.O. Obe

The evaluation of mobile crowdsourcing activities and reports requires a viable and large volume of data, gathered in real time from a large number of paid or unpaid volunteers over a period. A high volume of quality data from smartphones or mobile devices is pivotal to the accuracy and validity of the results. There is therefore a need for a robust and scalable database structure that can effectively manage and store the large volumes of data collected from volunteers without compromising data integrity. An in-depth review of database designs is presented to select the one most suitable for a real-time, robust, large-scale volunteer data handling system. A non-relational database, Google Cloud Firestore, is proposed for the mobile-end database because of its support for mobile client implementation; although it is not as popular as some other database services, this choice also makes integrating data from mobile end-users into the cloud-hosted database relatively easy, since all proposed services are part of the Google Cloud Platform. Separate comparative reviews of Database Management System (DBMS) performance demonstrated that MongoDB (non-relational) performed better when reading large datasets and performing full-text queries, while MySQL (relational) and Cassandra (non-relational) performed much better for data insertion. Google BigQuery is proposed as an appropriate data warehouse solution: it provides continuity and direct integration with Cloud Firestore, an Application Programming Interface (API) for migrating data from Cloud Firestore to BigQuery and the local server, and machine learning support for data analytics.
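The document model favoured here for the mobile end can be sketched without any cloud dependency: one crowdsourcing report per nested document, rather than a row normalised across tables. All field names below are invented for the example; a real system would write such documents via the Cloud Firestore client SDK.

```python
# Hypothetical illustration of the document shape a crowdsourcing report
# might take in a document database such as Cloud Firestore.
# Field names are invented, not taken from the reviewed system.

report_row = {"volunteer_id": 7, "lat": 7.25, "lon": 5.19, "signal_dbm": -87}

def to_document(row: dict) -> dict:
    """Reshape a flat relational-style row into a nested document,
    grouping related fields so one read returns one whole report."""
    return {
        "volunteer": {"id": row["volunteer_id"]},
        "location": {"lat": row["lat"], "lon": row["lon"]},
        "measurement": {"signal_dbm": row["signal_dbm"]},
    }

doc = to_document(report_row)
```

Keeping each report self-contained is what makes the later bulk export to a warehouse such as BigQuery straightforward: documents map onto warehouse rows without joins.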


2020 ◽  
pp. 368-374
Author(s):  
P.A. Ivanenko

The article presents an approach to validating the correctness of autotuning optimization transformations. The autotuner is considered a dynamic discrete system, and validation is reduced to verifying the equivalence-by-result characteristic of the representations of the initial and optimized program versions in the autotuning formal model. In particular cases this validation can be performed automatically using the source code and a rewriting-rules technique.


2016 ◽  
Vol 7 (4) ◽  
Author(s):  
Nahrun Hartono ◽  
Ema Utami ◽  
Armadyah Amborowati

Abstract. The Information Management System of Cokroaminoto Palopo University (SIMUNCP) is a web application implemented on a Local Area Network (LAN). SIMUNCP uses MySQL as its database. The data is migrated from the old MySQL database, as the source, to PostgreSQL, as the target. The migration is done because the old MySQL database lacks features and cannot meet the needs of the organization. Before the migration, the errors in the old database are first evaluated, and the evaluation results are then used as a reference to design the new database. After the data migration, the quality of the data in the new database is measured in terms of the accuracy and non-duplicate aspects. Finally, the queries in the source code of the SIMUNCP application are optimized.
Keywords: Migration, Database, Optimization
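The non-duplicate quality aspect measured after migration can be expressed very simply: the share of rows that remain unique in the new database. The sketch below is a generic illustration with invented data, not the measurement procedure of the paper.

```python
# Illustrative sketch of a non-duplicate quality measure: the fraction of
# rows that are distinct after migration. Rows and values are invented.

rows = [
    ("A01", "Ani"),
    ("A02", "Budi"),
    ("A01", "Ani"),   # duplicate of the first row
]

def nonduplicate_ratio(rows: list[tuple]) -> float:
    """Return distinct rows / total rows; 1.0 means no duplicates."""
    return len(set(rows)) / len(rows)

ratio = nonduplicate_ratio(rows)  # 2 distinct out of 3
```

In practice the same check is usually pushed into the database itself (e.g. `SELECT COUNT(*) - COUNT(DISTINCT ...)`), which scales better than loading rows into application memory.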


2018 ◽  
Vol 21 (1) ◽  
pp. 60
Author(s):  
Alza A. Mahmood

One of the barriers the developer community faces when turning to the new, highly distributable, schema-agnostic class of non-relational databases called NoSQL is how to migrate a legacy relational database, already filled with a large amount of data, into this new class of database management systems. This paper presents a new approach for converting the already populated relational database of any database management system to any type of NoSQL database, in the most optimized data structure form, without the need to specify the schema of the tables and the relations between them. In addition, a simplified software prototype based on this algorithm is built to show the output and test the validity of the algorithm.
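The core of any such conversion is turning foreign-key relations into nesting: child rows are embedded in their parent so the document needs no join. The sketch below shows that idea on invented data; it is a minimal illustration, not the paper's algorithm, which additionally infers the relations automatically.

```python
# Minimal sketch of relational-to-document conversion: embed child rows
# in their parent via the foreign key. Table and column names are invented.

orders = [{"id": 1, "customer": "Ann"}]
items = [
    {"order_id": 1, "sku": "X1"},
    {"order_id": 1, "sku": "X2"},
]

def embed(parents: list[dict], children: list[dict], fk: str) -> list[dict]:
    """Nest each child row inside its parent document, dropping the
    now-redundant foreign-key column."""
    docs = {p["id"]: {**p, "items": []} for p in parents}
    for child in children:
        trimmed = {k: v for k, v in child.items() if k != fk}
        docs[child[fk]]["items"].append(trimmed)
    return list(docs.values())

docs = embed(orders, items, "order_id")
# One self-contained document per order, with its items nested inside.
```

Whether to embed or to reference is the central design choice in such migrations: embedding optimizes reads of a whole aggregate, while references avoid duplicating children shared by many parents.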

