Extract Transform Load (ETL) Process in Distributed Database Academic Data Warehouse

Author(s):  
Ardhian Agung Yulianto

While a data warehouse is designed to support the decision-making function, the most time-consuming part is the Extract Transform Load (ETL) process. In the case of an academic data warehouse, when the data sources come from the faculties' distributed databases, integration is not easy even though the databases share a typical structure. This paper presents an ETL process for a distributed database academic data warehouse. Following the Data Flow Thread process in the data staging area, a deep analysis is performed to identify all tables in each data source, including content profiling. Then the cleaning, conforming, and data delivery steps pour the different data sources into the data warehouse (DW). Since the DW was developed using Kimball's bottom-up multidimensional approach, we found three types of extraction activities from the data source tables: merge, merge-union, and union. The cleaning and conforming step is carried out by creating conformed dimensions based on data source analysis, refinement, and hierarchy structure. The final ETL step loads the data into integrated dimension and fact tables with generated surrogate keys. These processes run gradually over each distributed database data source until all are incorporated. This technical treatment of the distributed database ETL process can be adopted widely in other industries, provided the designer has advance knowledge of the structure and content of the data sources.
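To make the described extraction types concrete, the following is a minimal sketch, not the paper's implementation, of the "union" pattern with surrogate-key generation; the faculty databases, table, and column names (nim, name, dept) are hypothetical stand-ins for identically structured sources.

```python
# Minimal sketch of the "union" extraction pattern with surrogate keys,
# using two hypothetical faculty databases that share one table structure.
import sqlite3

def make_faculty_db(rows):
    """Create an in-memory stand-in for one faculty's operational database."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE student (nim TEXT, name TEXT, dept TEXT)")
    con.executemany("INSERT INTO student VALUES (?, ?, ?)", rows)
    return con

engineering = make_faculty_db([("E001", "Adi", "Civil"), ("E002", "Sri", "Electrical")])
medicine = make_faculty_db([("M001", "Budi", "Nursing")])

dw = sqlite3.connect(":memory:")
dw.execute("""CREATE TABLE dim_student (
    student_key INTEGER PRIMARY KEY AUTOINCREMENT,  -- generated surrogate key
    nim TEXT, name TEXT, dept TEXT, source TEXT)""")

# Union extraction: the same query runs against each distributed source,
# and the rows are appended into one conformed dimension in the DW.
for source_name, con in [("engineering", engineering), ("medicine", medicine)]:
    for nim, name, dept in con.execute("SELECT nim, name, dept FROM student"):
        dw.execute(
            "INSERT INTO dim_student (nim, name, dept, source) VALUES (?, ?, ?, ?)",
            (nim, name, dept, source_name),
        )

print(dw.execute("SELECT * FROM dim_student").fetchall())
```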

Author(s):  
Ardhian Agung Yulianto

While a data warehouse is designed to support the decision-making function, the most time-consuming part is the Extract Transform Load (ETL) process. In the case of an academic data warehouse, when the data sources come from the faculties' distributed databases, integration is not easy even though the databases share a typical structure. This paper presents the detailed ETL process, following the Data Flow Thread in the data staging area: identifying and profiling all tables in the data sources, analyzing their content, and then cleaning, conforming dimensions, and delivering the data to the data warehouse. These processes run gradually over each distributed database data source until all are merged. Dimension tables and fact tables are generated in a multidimensional model. The ETL tool is Pentaho Data Integration 6.1. ETL testing is done by comparing data source and data target, and DW testing is conducted by comparing data analyses between SQL queries and the Saiku Analytics plugin in the Pentaho Business Analytics Server.
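The abstract states that ETL testing compares data source and data target without specifying how; one common reconciliation is comparing row counts and cheap per-column checksums, sketched below on hypothetical tables (the paper's actual comparison may differ).

```python
# Hedged sketch of source-to-target ETL testing via row counts and
# per-column length sums as a cheap checksum; tables are hypothetical.
import sqlite3

def profile(con, table, cols):
    """Return the row count and a per-column length-sum checksum."""
    n = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    checks = {c: con.execute(f"SELECT TOTAL(LENGTH({c})) FROM {table}").fetchone()[0]
              for c in cols}
    return n, checks

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE student (nim TEXT, name TEXT)")
src.executemany("INSERT INTO student VALUES (?, ?)", [("E001", "Adi"), ("M001", "Budi")])

tgt = sqlite3.connect(":memory:")
tgt.execute("CREATE TABLE dim_student (nim TEXT, name TEXT)")
tgt.executemany("INSERT INTO dim_student VALUES (?, ?)", [("E001", "Adi"), ("M001", "Budi")])

# If extraction and loading preserved the data, the profiles must match.
assert profile(src, "student", ["nim", "name"]) == profile(tgt, "dim_student", ["nim", "name"])
print("source and target reconcile")
```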


Author(s):  
Ivan Bojicic ◽  
Zoran Marjanovic ◽  
Nina Turajlic ◽  
Marko Petrovic ◽  
Milica Vuckovic ◽  
...  

In order for a data warehouse to be able to adequately fulfill its integrative and historical purpose, its data model must enable the appropriate and consistent representation of the different states of a system. In effect, a DW data model, representing the physical structure of the DW, must be general enough to consume data from heterogeneous data sources and reconcile the semantic differences of the data source models while, at the same time, being resilient to constant changes in the structure of the data sources. One of the main problems related to DW development is the absence of a standardized DW data model. In this paper a comparative analysis of the four most prominent DW data models (namely the relational/normalized model, the data vault model, the anchor model and the dimensional model) is given. On the basis of the results of [1], a new DW data model (the Domain/Mapping model, DMM), which would more adequately fulfill the posed requirements, is presented.
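As background for the comparison, here is an illustrative sketch, under assumed table and column names, of how the same entity might look in two of the compared models: the dimensional model keeps one wide, denormalized table, while the data vault separates the stable business key (hub) from its historized attributes (satellite).

```python
# Illustrative only: one "student" entity modeled two of the compared ways.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimensional model: one denormalized, query-friendly dimension table.
CREATE TABLE dim_student (
    student_key INTEGER PRIMARY KEY,
    nim TEXT, name TEXT, dept TEXT, load_date TEXT);

-- Data vault model: the hub holds only the immutable business key...
CREATE TABLE hub_student (
    student_hk TEXT PRIMARY KEY,   -- hash of the business key
    nim TEXT, load_date TEXT, record_source TEXT);

-- ...and a satellite holds descriptive attributes, historized by load_date,
-- which is what makes the vault resilient to source-structure change.
CREATE TABLE sat_student (
    student_hk TEXT REFERENCES hub_student(student_hk),
    load_date TEXT, name TEXT, dept TEXT,
    PRIMARY KEY (student_hk, load_date));
""")
print("both schemas created")
```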


Author(s):  
Robert Wrembel

A data warehouse architecture (DWA) has been developed for the purpose of integrating data from multiple heterogeneous, distributed, and autonomous external data sources (EDSs), as well as for providing means for advanced analysis of the integrated data. The major components of this architecture include: an external data source (EDS) layer, an extraction-transformation-loading (ETL) layer, a data warehouse (DW) layer, and an on-line analytical processing (OLAP) layer. Methods of designing a DWA, research developments, and most of the commercially available DW technologies tacitly assumed that a DWA is static. In practice, however, a DWA requires changes as a result of, among others, the evolution of EDSs, changes in the real world represented in a DW, and new user requirements. Changes in the structures of EDSs impact the ETL, DW, and OLAP layers. Since such changes are frequent, developing a technology for handling them automatically or semi-automatically in a DWA is of high practical importance. This chapter discusses challenges in designing, building, and managing a DWA that supports the evolution of the structures of EDSs, the evolution of an ETL layer, and the evolution of a DW. The challenges and their solutions presented here are based on the experience of building a prototype Evolving-ETL and a prototype Multiversion Data Warehouse (MVDW). In detail, this chapter presents the following issues: the concept of the MVDW, an approach to querying the MVDW, an approach to handling the evolution of an ETL layer, a technique for sharing data between multiple DW versions, and two index structures for the MVDW.
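Purely as a hypothetical illustration of the multiversioning idea, not the MVDW prototype's actual structures, a fact row can carry a version identifier so that queries can address one DW version or span several:

```python
# Hypothetical illustration of multiversioning only; the MVDW prototype's
# real structures are not described here. Each fact row is stamped with the
# DW version it belongs to.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE fact_sales (
    dw_version INTEGER,   -- which DW version this row belongs to
    product TEXT, amount REAL)""")
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", [
    (1, "widget", 10.0),          # recorded under version 1 of the schema
    (2, "widget", 10.0),          # the same real-world fact under version 2
    (2, "widget-deluxe", 25.0),   # a product that exists only in version 2
])

# Querying a single DW version:
v2 = con.execute(
    "SELECT product, SUM(amount) FROM fact_sales WHERE dw_version = 2 GROUP BY product"
).fetchall()
print(v2)
```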


Epidemiologia ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 315-324
Author(s):  
Juan M. Banda ◽  
Ramya Tekumalla ◽  
Guanyu Wang ◽  
Jingyuan Yu ◽  
Tuo Liu ◽  
...  

As the COVID-19 pandemic continues to spread worldwide, an unprecedented amount of open data is being generated for medical, genetics, and epidemiological research. The unparalleled rate at which many research groups around the world are releasing data and publications on the ongoing pandemic is allowing other scientists to learn from local experiences and data generated on the front lines of the COVID-19 pandemic. However, there is a need to integrate additional data sources that map and measure the role of social dynamics of such a unique worldwide event in biomedical, biological, and epidemiological analyses. For this purpose, we present a large-scale curated dataset of over 1.12 billion tweets, growing daily, related to COVID-19 chatter and covering 1 January 2020 to 27 June 2021 at the time of writing. This dataset provides a freely available additional resource for researchers worldwide to conduct a wide and diverse range of research projects, such as epidemiological analyses, studies of emotional and mental responses to social distancing measures, identification of sources of misinformation, and stratified measurement of sentiment towards the pandemic in near real time, among many others.
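As a hedged usage sketch: Twitter's terms generally require such corpora to be shared as tweet IDs for later "hydration"; assuming a tab-separated file of (tweet_id, date) rows, which is an assumption rather than the dataset's documented layout, one could select a study window like this:

```python
# Hedged sketch: the file name and (tweet_id, date) column layout below are
# assumptions for illustration, not the dataset's documented format.
import csv

def ids_in_window(path, start, end):
    """Yield tweet IDs whose date falls in [start, end] (ISO yyyy-mm-dd)."""
    with open(path, newline="") as f:
        for tweet_id, date in csv.reader(f, delimiter="\t"):
            if start <= date <= end:
                yield tweet_id

# Example: collect first-wave IDs for later hydration with an external tool.
# ids = list(ids_in_window("covid19_tweets.tsv", "2020-03-01", "2020-05-31"))
```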


2021 ◽  
Vol 37 (1) ◽  
pp. 161-169
Author(s):  
Dominik Rozkrut ◽  
Olga Świerkot-Strużewska ◽  
Gemma Van Halderen

Never has there been a more exciting time to be an official statistician. The data revolution is responding to the demands of the COVID-19 pandemic and a complex sustainable development agenda to improve how data is produced and used, to close data gaps to prevent discrimination, to build capacity and data literacy, to modernize data collection systems and to liberate data to promote transparency and accountability. But can all data be liberated in the production and communication of official statistics? This paper explores the UN Fundamental Principles of Official Statistics in the context of eight new and big data sources. The paper concludes that each data source can be used for the production of official statistics in adherence with the Fundamental Principles, and argues that these data sources should be used if National Statistical Systems are to adhere to the first Fundamental Principle of compiling and making available official statistics that honor citizens' entitlement to public information.


2021 ◽  
pp. 1-11
Author(s):  
Yanan Huang ◽  
Yuji Miao ◽  
Zhenjing Da

The methods for multi-modal English event detection on a single data source, and for transfer-learning-based isomorphic event detection across different English data sources, still need improvement. To improve the efficiency of English event detection across data sources, this paper proposes, based on a transfer learning algorithm, multi-modal event detection on a single data source and isomorphic event detection across different data sources. By stacking multiple classification models, the approach lets the features fuse with one another, and it performs adversarial training through the discrepancy between two classifiers to make the distributions of the different source data more similar. In addition, to validate the proposed algorithm, a multi-source English event detection dataset is collected. Finally, this dataset is used to verify the proposed method and compare it with the currently most mainstream transfer learning methods. Experimental analysis, convergence analysis, visual analysis, and parameter evaluation demonstrate the effectiveness of the proposed algorithm.
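The abstract does not give the exact formulation of the two-classifier adversarial step; the sketch below follows the well-known Maximum Classifier Discrepancy recipe (Saito et al., 2018) as one plausible reading, with all layer sizes illustrative.

```python
# Hedged sketch of adversarial training via classifier discrepancy, in the
# spirit of Maximum Classifier Discrepancy; not the paper's exact method.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(300, 128), nn.ReLU())   # shared feature extractor
clf1 = nn.Linear(128, 4)                               # two task classifiers
clf2 = nn.Linear(128, 4)

opt_f = torch.optim.Adam(feat.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(list(clf1.parameters()) + list(clf2.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

def discrepancy(p1, p2):
    """Mean absolute difference between the two predicted distributions."""
    return (p1.softmax(dim=1) - p2.softmax(dim=1)).abs().mean()

xs, ys = torch.randn(32, 300), torch.randint(0, 4, (32,))  # labeled source batch
xt = torch.randn(32, 300)                                  # unlabeled target batch

# Step A: fit both classifiers on source labels while *maximizing* their
# disagreement on target data, exposing target samples far from the source.
fs, ft = feat(xs), feat(xt)
loss_c = ce(clf1(fs), ys) + ce(clf2(fs), ys) - discrepancy(clf1(ft), clf2(ft))
opt_c.zero_grad(); loss_c.backward(); opt_c.step()

# Step B: train the feature extractor to *minimize* the disagreement,
# pulling target features toward regions where both classifiers agree,
# which makes the source and target distributions more similar.
ft = feat(xt)
loss_f = discrepancy(clf1(ft), clf2(ft))
opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```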


2014 ◽  
Vol 668-669 ◽  
pp. 1374-1377 ◽  
Author(s):  
Wei Jun Wen

ETL refers to the process of data extraction, transformation and loading, and is deemed a critical step in ensuring the quality, specification and standardization of marine environmental data. Marine data, due to their complexity, field diversity and huge volume, remain decentralized, drawn from many sources and stored in heterogeneous structures with different semantics, and are hence far from able to provide effective data sources for decision making. ETL enables the construction of a marine environmental data warehouse through the cleaning, transformation, integration, loading and periodic updating of basic marine data. The paper presents research on rules for the cleaning, transformation and integration of marine data, based on which an original ETL system for a marine environmental data warehouse is designed and developed. The system further guarantees data quality and correctness in future analysis and decision-making based on marine environmental data.
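As a hedged sketch of the kind of cleaning and transformation rules described, the concrete fields, valid ranges, and unit conventions below are assumptions for illustration, not the system's actual rule set:

```python
# Hedged sketch: hypothetical cleaning/transformation rules for one raw
# marine observation record; field names and thresholds are illustrative.
def clean_observation(rec):
    """Apply cleaning rules to one raw observation; return None to reject."""
    out = dict(rec)
    # Rule 1: normalize temperature to Celsius if a source reports Fahrenheit.
    if out.get("temp_unit") == "F":
        out["sea_temp"] = (out["sea_temp"] - 32) * 5 / 9
        out["temp_unit"] = "C"
    # Rule 2: range check; reject physically implausible sea temperatures.
    if not (-2.0 <= out["sea_temp"] <= 40.0):
        return None
    # Rule 3: standardize station identifiers to a common format.
    out["station_id"] = out["station_id"].strip().upper()
    return out

raw = [{"station_id": " ns01 ", "sea_temp": 68.0, "temp_unit": "F"},
       {"station_id": "NS02", "sea_temp": 999.0, "temp_unit": "C"}]  # bad row
cleaned = [r for r in (clean_observation(x) for x in raw) if r is not None]
print(cleaned)  # one valid record, normalized to Celsius
```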


2020 ◽  
Vol 14 (3) ◽  
pp. 320-328
Author(s):  
Long Guo ◽  
Lifeng Hua ◽  
Rongfei Jia ◽  
Fei Fang ◽  
Binqiang Zhao ◽  
...  

With the rapid growth of e-commerce in recent years, e-commerce platforms are becoming a primary place for people to find, compare and ultimately purchase products. To improve the online shopping experience for consumers and increase sales for sellers, it is important to understand user intent accurately and be notified of its changes in a timely manner. In this way, the right information can be offered to the right person at the right time. To achieve this goal, we propose a unified deep intent prediction network, named EdgeDIPN, which is deployed at the edge, i.e., on the mobile device, and able to monitor multiple user intents at different granularities simultaneously in real time. We propose to train EdgeDIPN with multi-task learning, by which EdgeDIPN can share representations between different tasks for better performance while saving edge resources. In particular, we propose a novel task-specific attention mechanism which enables different tasks to pick out the most relevant features from different data sources. To extract the shared representations more effectively, we utilize two kinds of attention mechanisms: a multi-level attention mechanism that identifies the important actions within each data source, and an inter-view attention mechanism that learns the interactions between different data sources. In experiments conducted on a large-scale industrial dataset, EdgeDIPN significantly outperforms the baseline solutions. Moreover, EdgeDIPN has been deployed in the operational system of Alibaba. Online A/B testing results in several business scenarios reveal the potential of monitoring user intent in real time. To the best of our knowledge, EdgeDIPN is the first full-fledged real-time user intent understanding center deployed at the edge and serving hundreds of millions of users in a large-scale e-commerce platform.
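The abstract names a task-specific attention mechanism without giving its form; the following is an illustrative guess, one learned query per task producing dot-product weights over per-source feature vectors, not EdgeDIPN's actual design.

```python
# Hedged sketch of task-specific attention over multiple data sources;
# the architecture is an illustrative guess, not EdgeDIPN's actual design.
import torch
import torch.nn as nn

class TaskSpecificAttention(nn.Module):
    def __init__(self, num_tasks, dim):
        super().__init__()
        # One learnable query vector per prediction task.
        self.task_queries = nn.Parameter(torch.randn(num_tasks, dim))

    def forward(self, source_feats):
        # source_feats: (batch, num_sources, dim) features, one per data source.
        # scores: (batch, num_tasks, num_sources), how relevant each source
        # is to each task for this user.
        scores = torch.einsum("td,bsd->bts", self.task_queries, source_feats)
        weights = scores.softmax(dim=-1)
        # Weighted sum yields one fused feature vector per task.
        return torch.einsum("bts,bsd->btd", weights, source_feats)

attn = TaskSpecificAttention(num_tasks=3, dim=64)
feats = torch.randn(8, 5, 64)           # 8 users, 5 data sources, 64-d features
fused = attn(feats)
print(fused.shape)                      # torch.Size([8, 3, 64])
```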


2016 ◽  
Vol 12 (3) ◽  
pp. 32-50
Author(s):  
Xiufeng Liu ◽  
Nadeem Iftikhar ◽  
Huan Huo ◽  
Per Sieverts Nielsen

In data warehousing, data from source systems are populated into a central data warehouse (DW) through extraction, transformation and loading (ETL). The standard ETL approach usually uses sequential jobs to process data with dependencies, such as dimension and fact data. It is a non-trivial task to process so-called early-/late-arriving data, which arrive out of order. This paper proposes a two-level data staging area method to optimize ETL. The proposed method is an all-in-one solution that supports processing different types of data from operational systems, including early-/late-arriving data and fast-/slowly-changing data. The additional staging area decouples the loading process from data extraction and transformation, which improves ETL flexibility and minimizes intervention in the data warehouse. The paper evaluates the proposed method empirically, showing that it is more efficient and less intrusive than the standard ETL method.
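One classic tactic for the late-arriving data the staging area must handle is sketched below: a fact that arrives before its dimension member gets an "inferred" placeholder surrogate key, reconciled when the member finally arrives. The details are illustrative, not the paper's exact algorithm.

```python
# Hedged sketch of the "inferred member" tactic for out-of-order arrivals;
# illustrative only, not the paper's two-level staging algorithm.
import sqlite3

dw = sqlite3.connect(":memory:")
dw.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY AUTOINCREMENT,
    customer_id TEXT, name TEXT, inferred INTEGER DEFAULT 0);
CREATE TABLE fact_sales (customer_key INTEGER, amount REAL);
""")

def key_for(customer_id):
    """Return the surrogate key, creating an inferred placeholder if early."""
    row = dw.execute("SELECT customer_key FROM dim_customer WHERE customer_id=?",
                     (customer_id,)).fetchone()
    if row:
        return row[0]
    cur = dw.execute(
        "INSERT INTO dim_customer (customer_id, name, inferred) VALUES (?, 'UNKNOWN', 1)",
        (customer_id,))
    return cur.lastrowid

# A fact arrives before customer C42 exists in the dimension:
dw.execute("INSERT INTO fact_sales VALUES (?, ?)", (key_for("C42"), 99.5))

# Later, the late-arriving dimension row updates the placeholder in place,
# so the fact's surrogate key stays valid with no DW-side rework.
dw.execute("UPDATE dim_customer SET name=?, inferred=0 WHERE customer_id=?",
           ("Carol", "C42"))
```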


2021 ◽  
Vol 8 (5) ◽  
pp. 1077
Author(s):  
Joko Purwanto ◽  
Renny Renny

<p class="BodyCxSpFirst">Pemanfaatan teknologi informasi sangat penting bagi rumah sakit, karena berpengaruh pula terhadap kualitas pelayanan kesehatan yang secara manual diubah menjadi digital dengan menggunakan teknologi informasi.Dalam penelitian ini penulis menggunakan metodologi <em>Nine step</em> sebagai acuan dalam merancang suatu <em>data warehouse</em><em>,</em> untuk pemodelan menggunakan skema konstelasi fakta dengan 3 tabel fakta dan 11 tabel dimensi. Perbedaan penelitian ini dengan penelitian sebelumnya terletak pada sumber data yang diekstrak langsung dari <em>database</em> SIMRS yang digunakan rumah sakit, sehingga tidak ada ekstraksi data secara manual.Penelitian ini bertujuan untuk menghasilkan desain data warehouse berbasis Online Analytical Processing (OLAP) sebagai sarana penunjang kualitas pelayanan kesehatan rumah sakit. OLAP yang dihasilkan akan berupa desain data warehouse dengan berbagai dimensi yang akan menghasilkan tampilan informasi berupa Chart maupun Grafik sehingga informasinya mudah dibaca dan dipahami oleh berbagai pihak.</p><p class="BodyCxSpFirst"> </p><p class="BodyCxSpFirst"><em><strong>Abtract</strong></em></p><p class="BodyCxSpFirst"><em>The use of information technology is very important for hospitals, because it also affects the quality of health services, which manualy changed to digital using information technology. In this study, the authors used the Nine step methodology as a reference in designing a data warehouse for modeling using a fact constellation schema with 3 fact tables and 11 dimension tables. the different in this study from previous research is that the data source was taken directly from the SIMRS database used by the hospital, so there is no manual data extraction.</em><em>The aim of this research is to be able to produce a Data Warehouse design based on Online Analytical Processing (OLAP) as a means of supporting the quality of hospital health services. The resulting OLAP will be a data warehouse design with various dimensions will produce the displays information in the form of a graph or chart so that the information is easy to read and understand by various parties.</em></p><p class="BodyCxSpLast"><em> </em></p><p class="BodyCxSpFirst"><em><strong><br /></strong></em></p>

