A New Approach to Generate Hospital Data Warehouse Schema

Author(s):  
Nouha Arfaoui ◽  
Jalel Akaichi

The healthcare industry generates a huge amount of data that remains underused for decision-making needs, because of the absence of a specific design mastered by healthcare actors and the lack of collaboration and information exchange between institutions. In this work, a new approach is proposed to design the schema of a Hospital Data Warehouse (HDW). It starts by generating the schemas of the Hospital Data Marts (HDMs), one for each department, taking into consideration the requirements of the healthcare staff and the existing data sources. It then merges them to build the schema of the HDW. The bottom-up approach is suitable because the healthcare departments operate separately. To merge the schemas, a new schema integration methodology is used. It starts by extracting the similar elements of the schemas, together with the conflicts between them, and presents them as mapping rules. It then transforms the rules into queries and applies them to merge the schemas.
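
A minimal sketch of the rule-based merging idea, assuming hypothetical table and column names (the paper's actual rule language and query generation are not reproduced here):

```python
# Hypothetical sketch: merging two hospital data mart (HDM) schemas into one
# HDW schema via mapping rules. All names and rule forms are illustrative.

# Each schema is a dict: table name -> set of column names.
cardiology = {"fact_stay": {"patient_id", "ward_id", "duration"},
              "dim_patient": {"patient_id", "name", "birth_date"}}
radiology = {"fact_exam": {"patient_id", "device_id", "exam_date"},
             "dim_patient": {"patient_id", "full_name", "birth_date"}}

# Mapping rules: detected correspondences ("same") and conflicts ("rename").
rules = [
    ("same", "dim_patient.patient_id", "dim_patient.patient_id"),
    ("same", "dim_patient.birth_date", "dim_patient.birth_date"),
    ("rename", "dim_patient.full_name", "dim_patient.name"),  # naming conflict
]

def merge(schema_a, schema_b, rules):
    """Merge schema_b into schema_a, resolving conflicts named by the rules."""
    renames = {src: dst for kind, src, dst in rules if kind == "rename"}
    merged = {table: set(cols) for table, cols in schema_a.items()}
    for table, cols in schema_b.items():
        target = merged.setdefault(table, set())
        for col in cols:
            qualified = f"{table}.{col}"
            target.add(renames.get(qualified, qualified).split(".")[-1])
    return merged

hdw = merge(cardiology, radiology, rules)
for table, cols in hdw.items():
    print(table, sorted(cols))
```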


2014 ◽  
Vol 668-669 ◽  
pp. 1374-1377 ◽  
Author(s):  
Wei Jun Wen

ETL refers to the process of data extraction, transformation, and loading, and is deemed a critical step in ensuring the quality, specification, and standardization of marine environmental data. Marine data, owing to their complexity, field diversity, and huge volume, still remain decentralized and heterogeneous in origin and structure, with differing semantics, and hence are far from being able to provide effective data sources for decision making. ETL enables the construction of a marine environmental data warehouse through the cleaning, transformation, integration, loading, and periodic updating of basic marine data. This paper presents research on rules for the cleaning, transformation, and integration of marine data, based on which an original ETL system for a marine environmental data warehouse is designed and developed. The system further guarantees data quality and correctness in future analysis and decision making based on marine environmental data.
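
A minimal sketch of what such cleaning and transformation rules might look like in an ETL step; the field names, formats, and thresholds are illustrative assumptions, not the paper's actual rule set:

```python
# Hypothetical ETL-style cleaning and transformation rules for marine
# observation records, feeding a staging area before warehouse loading.
from datetime import datetime

def clean(record):
    """Drop records that violate basic quality rules."""
    if record.get("salinity") is None:               # completeness rule
        return None
    if not (-2.0 <= record["sea_temp_c"] <= 40.0):   # plausibility rule
        return None
    return record

def transform(record):
    """Standardize units and formats before loading."""
    record["obs_time"] = datetime.strptime(record["obs_time"], "%d/%m/%Y").date()
    record["depth_m"] = record.pop("depth_ft") * 0.3048  # unify units to meters
    return record

raw = [{"salinity": 35.1, "sea_temp_c": 18.4,
        "obs_time": "03/07/2013", "depth_ft": 66.0}]
staged = [transform(r) for r in (clean(x) for x in raw) if r is not None]
print(staged)  # ready for integration and loading into the warehouse
```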


2017 ◽  
Author(s):  
Andysah Putera Utama Siahaan

Knowledge discovery is the process of extracting knowledge from a large amount of data. The quality of the knowledge generated by the knowledge discovery process greatly affects the decisions that result from it. Existing data must be qualified and tested to ensure that knowledge discovery processes can produce knowledge or information that is useful and feasible, since it bears on strategic decision making for an organization. The data warehouse is created by combining multiple operational databases and external data, a process that is very vulnerable to incomplete, inconsistent, and noisy data. Data mining provides a mechanism to remedy these deficiencies before the data are finally stored in the data warehouse. This research presents techniques to improve the quality of information in the data warehouse.
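
A minimal sketch of the kind of pre-load cleaning discussed above, handling incomplete, inconsistent, and noisy values; the column names and rules are illustrative assumptions:

```python
# Hypothetical pre-load cleaning: impute missing/noisy values and resolve
# inconsistent coding before rows reach the data warehouse.
import statistics

rows = [{"age": 34, "dept": "ICU"},
        {"age": None, "dept": "icu"},    # incomplete + inconsistent coding
        {"age": 420, "dept": "ICU"}]     # noisy outlier

valid_ages = [r["age"] for r in rows if r["age"] is not None and r["age"] <= 120]
mean_age = statistics.mean(valid_ages)

for r in rows:
    if r["age"] is None or r["age"] > 120:   # impute missing/noisy values
        r["age"] = mean_age
    r["dept"] = r["dept"].upper()            # resolve inconsistent coding

print(rows)
```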


2020 ◽  
Author(s):  
Fergus Bolger ◽  
Gene Rowe ◽  
Ian Belton ◽  
Megan M Crawford ◽  
Iain Hamlin ◽  
...  

Groups provide several benefits over individuals for judgment and decision making, but they suffer from problems too. Structured-group techniques, like Delphi, use strictly controlled information exchange between individuals to retain the positive aspects of group interaction while ameliorating the negative ones. These methods regularly use ‘nominal’ groups that interact in a remote, distributed, and often anonymous manner, thus lending themselves to internet applications, with a consequent recent increase in popularity. However, evidence for the utility of these techniques is scant, major reasons being the difficulty of maintaining experimental control and the logistical problems of recruiting sufficient empirical ‘groups’ to produce statistically meaningful results. As a solution, we present the Simulated Group Response Paradigm, where individual responses are first elicited in a pre-study, or created by the experimenter, and then fed back to highly controlled simulated groups. This paradigm facilitates investigation of the factors leading to virtuous opinion change in groups, and the subsequent development of structured-group techniques.
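
A toy sketch of the paradigm's core loop, assuming an illustrative weighted-revision rule and made-up numbers (not the authors' materials):

```python
# Toy Simulated Group Response Paradigm: pre-elicited (or experimenter-created)
# individual estimates are fed back to a participant as if from a real group,
# and opinion change is measured. Numbers and revision rule are assumptions.
participant_estimate = 70.0               # elicited in the pre-study
simulated_group = [95.0, 105.0, 110.0]    # feedback fully under experimental control

# Simple weighted-revision rule: move a fraction w toward the group mean.
w = 0.5
group_mean = sum(simulated_group) / len(simulated_group)
revised = (1 - w) * participant_estimate + w * group_mean

print(f"before: {participant_estimate}, after: {revised}")
# Movement toward an accurate group mean would count as 'virtuous' opinion change.
```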


Author(s):  
Cécile Favre ◽  
Fadila Bentayeb ◽  
Omar Boussaid

A data warehouse allows the integration of heterogeneous data sources for analysis purposes. One of the key points for the success of the data warehousing process is the design of the model according to the available data sources and the analysis needs (Nabli, Soussi, Feki, Ben-Abdallah & Gargouri, 2005). However, as the business environment evolves, several changes in the content and structure of the underlying data sources may occur. In addition to these changes, analysis needs may also evolve, requiring an adaptation of the existing data warehouse model. In this chapter, we provide an overall view of the state of the art in data warehouse model evolution. We present a set of comparison criteria, compare the various works against them, and discuss future trends in data warehouse model evolution.


Author(s):  
Ardhian Agung Yulianto

While a data warehouse is designed to support the decision-making function, the most time-consuming part of building one is the Extract-Transform-Load (ETL) process. In the case of an academic data warehouse whose data sources come from the faculties' distributed databases, integration is not easy even though the databases follow a typical structure. This paper presents the detailed ETL process, following the Data Flow Thread in the data staging area: identifying and profiling data sources, analyzing the content of all tables in those sources, and then cleaning, conforming dimensions, and delivering data to the data warehouse. These processes run gradually over each distributed database source until the data are merged. Dimension tables and fact tables are generated in a multidimensional model. The ETL tool is Pentaho Data Integration 6.1. ETL testing is done by comparing the data source with the data target, and DW testing is conducted by comparing data analyses between SQL queries and the Saiku Analytics plugin in Pentaho Business Analytics Server.
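
A minimal sketch of the source-versus-target ETL check described above, using an in-memory sqlite3 stand-in; connection details, table names, and the check itself are illustrative assumptions (the paper uses Pentaho Data Integration against faculty databases):

```python
# Hypothetical ETL test: after loading, compare the data source against the
# data target to detect lost or duplicated rows.
import sqlite3

src = sqlite3.connect(":memory:")
dw = sqlite3.connect(":memory:")
src.execute("CREATE TABLE students (id INTEGER, gpa REAL)")
src.executemany("INSERT INTO students VALUES (?, ?)", [(1, 3.2), (2, 3.8)])
dw.execute("CREATE TABLE fact_academic (student_id INTEGER, gpa REAL)")
dw.executemany("INSERT INTO fact_academic VALUES (?, ?)", [(1, 3.2), (2, 3.8)])

src_count = src.execute("SELECT COUNT(*) FROM students").fetchone()[0]
dw_count = dw.execute("SELECT COUNT(*) FROM fact_academic").fetchone()[0]
assert src_count == dw_count, "row counts diverge: ETL lost or duplicated rows"
print("ETL row-count check passed:", src_count, "rows")
```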


2014 ◽  
Vol 16 (2) ◽  
pp. 138-143 ◽  
Author(s):  
Abubakar Ado ◽  
Ahmed Aliyu ◽  
Saifullahi Aminu Bello ◽  
Abdulra’uf Garba Sharifai ◽  
...  

2020 ◽  
Vol 13 (6) ◽  
pp. 419-431
Author(s):  
Irya Wisnubhadra ◽  
Safiza Baharin ◽  
Nanna Herman ◽  
...  

Business Intelligence (BI) technology, with its Extract, Transform, and Load processes, Data Warehouses, and OLAP, has demonstrated the ability to generate information and knowledge that support decision making. In the last decade, the advancement of Web 2.0 technology has been improving the accessibility of the web of data across the cloud. Linked Open Data, Linked Open Statistical Data, and Open Government Data are growing massively, making significantly more computer-recognizable data available for sharing. In agricultural production analytics, data resources with high availability and accessibility are a primary requirement. However, today's data accessibility for production analytics is limited to 2- or 3-star open data formats and rarely includes attributes for spatiotemporal analytics. The new data warehouse concept takes a new approach, combining the openness of data resources with mobility or spatiotemporal data in nature. This new approach could help decision-makers use external data to make crucial decisions more intuitively and flexibly. This paper proposes the development of a spatiotemporal data warehouse with an integration process based on service-oriented architecture and open data sources. The data sources originate from the Village and Rural Area Information System (SIDeKa), which captures agricultural production transactions daily. The paper also describes how to perform spatiotemporal analytics for agricultural production using the new spatiotemporal data warehouse approach. The experiments, which executed six relevant spatiotemporal query samples on a DW whose fact table contains 324096 tuples (with a temporal integer/float for each tuple), a field dimension of 4495 tuples with geographic data as polygons, a village dimension of 80 tuples, dozens of tuples in the district, subdistrict, and province dimensions, and a time dimension of 3653 tuples representing the dates of ten years, proved that this new approach offers a convenient, simple model with expressive performance for supporting executives in making decisions on agricultural production analytics based on spatiotemporal data. This research also underlines the prospects for scaling and nurturing the spatiotemporal data warehouse initiative.
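
A sketch of what one of the six spatiotemporal queries might look like, assuming table and column names modeled on the abstract (fact table, field dimension with polygon geometry, village and time dimensions) and a PostGIS-style spatial warehouse; none of these identifiers come from the paper itself:

```python
# Hypothetical spatiotemporal DW query: aggregate production per village for
# one growing season, joining the fact table to spatial and time dimensions.
query = """
SELECT v.village_name,
       SUM(f.production_kg)             AS total_production,
       SUM(ST_Area(fd.geom::geography)) AS planted_area_m2
FROM   fact_production f
JOIN   dim_field   fd ON f.field_id    = fd.field_id
JOIN   dim_village v  ON fd.village_id = v.village_id
JOIN   dim_time    t  ON f.time_id     = t.time_id
WHERE  t.date BETWEEN '2019-10-01' AND '2020-03-31'
GROUP  BY v.village_name
ORDER  BY total_production DESC;
"""
print(query)  # would be executed against the spatiotemporal data warehouse
```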


Author(s):  
Sami Faïz

Geographic data are characterized by huge volumes, a lack of standards, a multiplicity of data sources, multi-scale requirements, and variability over time. These characteristics make geographic information complex and uncertain. At the same time, the significant growth in the quantity of data manipulated and the need to make rapid decisions have driven the emergence and rapid progress of new tools such as data warehousing techniques. A data warehouse is usually defined as a subject-oriented, integrated, time-variant, and non-volatile collection of data in support of decision-making processes.
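
A minimal star-schema sketch illustrating that definition for geographic data: subject-oriented (one subject, land use), time-variant (an explicit time dimension), and non-volatile (an append-only fact table). All names are illustrative assumptions:

```python
# Hypothetical star schema for a small geographic data warehouse.
import sqlite3

ddl = """
CREATE TABLE dim_time   (time_id INTEGER PRIMARY KEY, day DATE, year INTEGER);
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, name TEXT,
                         boundary TEXT);           -- e.g. polygon stored as WKT
CREATE TABLE fact_land_use (
    time_id   INTEGER REFERENCES dim_time(time_id),
    region_id INTEGER REFERENCES dim_region(region_id),
    area_km2  REAL                                 -- non-volatile: append-only
);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print("star schema created")
```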


2020 ◽  
Author(s):  
Pranav C

The word blockchain elicits thoughts of cryptocurrency much of the time, which does a disservice to this disruptive new technology. Admittedly, Bitcoin, launched in 2009, was the first large-scale implementation of blockchain technology, and Bitcoin's success has triggered the establishment of nearly 1000 new cryptocurrencies. This in turn led to the delusion that the only application of blockchain technology is the creation of cryptocurrency. However, blockchain technology is capable of a lot more than cryptocurrency creation and may support such things as transactions that require personal identification, peer review, elections and other types of democratic decision making, and audit trails. Blockchain exists in real-world implementations beyond cryptocurrencies, and these solutions deliver powerful benefits to healthcare organizations, bankers, retailers, and consumers, among others. One of the areas where blockchain technology can be used effectively is the healthcare industry. Proper application of this technology in healthcare will not only save billions of dollars but also contribute to growth in research. This review paper briefly defines blockchain and deals in detail with its applications in various areas, particularly the healthcare industry.
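
A minimal hash-chain sketch of the tamper-evidence property behind the audit-trail applications mentioned above; this is a generic illustration, not any specific healthcare implementation:

```python
# Hypothetical hash chain: each block commits to the previous block's hash,
# so editing any record breaks every later link in the chain.
import hashlib, json

def make_block(record, prev_hash):
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

chain = [make_block("patient 17: consent granted", "0" * 64)]
chain.append(make_block("patient 17: record accessed", chain[-1]["hash"]))

def verify(chain):
    """Recompute each hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))            # True
chain[0]["record"] = "tampered"
print(verify(chain))            # False: the chain exposes the edit
```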

