Using Data Warehouse to Discover the Relation between TW-DRG and Pharmacology

2011 ◽  
Vol 474-476 ◽  
pp. 938-942
Author(s):  
Chih Sheng Chen ◽  
Guan Yu Chen ◽  
Jing Wun Hong ◽  
Ji Rou Jhang ◽  
Jia Yi Liou ◽  
...  

This research explores the relation between TW-DRG and pharmacological information, using a data warehouse as its foundation. The aim is to help doctors, without compromising patients’ rights, replace high-priced pharmaceuticals with lower-priced alternatives that have the same pharmacological and pharmacodynamic effects, thereby reducing medication costs in medical institutions and hospitals. The results show that differences in doctors’ prescribing habits can be reported to hospitals and physicians for medication policy analysis. Doctors can also adjust their prescribing practices and identify substitutable pharmaceuticals so that pharmaceutical costs can be lowered.
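The substitution logic the abstract describes can be sketched in a few lines: group drugs by pharmacological class, then flag cheaper members of the same class as candidate replacements for the most expensive one. The drug IDs, class codes, and prices below are hypothetical, and the grouping rule is an illustrative simplification of the paper's warehouse-based analysis, not its actual method.

```python
from collections import defaultdict

# Hypothetical drug records as they might be extracted from a hospital
# data warehouse: (drug_id, pharmacological class code, unit price).
DRUGS = [
    ("D001", "C09AA", 12.50),  # high-priced drug in class C09AA
    ("D002", "C09AA", 3.80),   # cheaper drug with the same pharmacology
    ("D003", "C09AA", 4.10),
    ("D004", "N02BE", 0.90),   # only member of its class: no substitute
]

def cheaper_substitutes(drugs):
    """For each pharmacological class, list the alternatives that are
    cheaper than the most expensive drug in that class."""
    by_class = defaultdict(list)
    for drug_id, pharm_class, price in drugs:
        by_class[pharm_class].append((drug_id, price))

    suggestions = {}
    for members in by_class.values():
        if len(members) < 2:
            continue  # nothing to substitute within this class
        members.sort(key=lambda m: m[1])  # cheapest first
        costliest_id, _ = members[-1]
        suggestions[costliest_id] = members[:-1]
    return suggestions

for expensive, options in cheaper_substitutes(DRUGS).items():
    print(f"{expensive} could be replaced by: {options}")
```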

JAMIA Open ◽  
2021 ◽  
Vol 4 (2) ◽  
Author(s):  
Divya Joshi ◽  
Ali Jalali ◽  
Todd Whipple ◽  
Mohamed Rehman ◽  
Luis M Ahumada

Abstract
Objective: To develop a predictive analytics tool to help evaluate different scenarios and multiple variables for clearing the surgical patient backlog during the COVID-19 pandemic.
Materials and Methods: Using data from 27 866 cases (May 1, 2018–May 1, 2020) stored in the Johns Hopkins All Children’s data warehouse and inputs from 30 operations-based variables, we built mathematical models for (1) time to clear the case backlog, (2) utilization of personal protective equipment (PPE), and (3) assessment of overtime needs.
Results: The tool enabled us to predict the desired variables, including the number of days to clear the patient backlog, PPE needed, staff/overtime needed, and cost for different backlog-reduction scenarios.
Conclusions: Predictive analytics, machine learning, and multiple variable inputs, coupled with nimble scenario creation and a user-friendly visualization, helped us determine the most effective deployment of operating room personnel. Operating rooms worldwide can use this tool to overcome patient backlog safely.
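A minimal sketch of the kind of arithmetic such a scenario tool performs, assuming drastically simplified inputs: the paper's models take 30 operational variables, whereas the three parameters here are hypothetical stand-ins chosen for illustration.

```python
import math

def backlog_projection(backlog_cases, extra_cases_per_day,
                       ppe_sets_per_case, overtime_hours_per_case):
    """Project the days needed to clear a surgical backlog, plus the
    PPE and overtime that clearance would consume."""
    days = math.ceil(backlog_cases / extra_cases_per_day)
    ppe_needed = backlog_cases * ppe_sets_per_case
    overtime = backlog_cases * overtime_hours_per_case
    return days, ppe_needed, overtime

# Compare two hypothetical backlog-reduction scenarios.
for label, capacity in [("conservative", 8), ("aggressive", 15)]:
    days, ppe, ot = backlog_projection(
        backlog_cases=1200, extra_cases_per_day=capacity,
        ppe_sets_per_case=6, overtime_hours_per_case=1.5)
    print(f"{label}: {days} days, {ppe} PPE sets, {ot:.0f} overtime hours")
```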


2017 ◽  
Vol 801 ◽  
pp. 012030 ◽  
Author(s):  
A S Sinaga ◽  
A S Girsang
2021 ◽  
Author(s):  
Yumi Wakabayashi ◽  
Masamitsu Eitoku ◽  
Narufumi Suganuma

Abstract
Background: Interventional studies are the fundamental method for obtaining answers to clinical questions. However, these studies are sometimes difficult to conduct because of insufficient financial or human resources or the rarity of the disease in question. One means of addressing these issues is to conduct a non-interventional observational study using electronic health record (EHR) databases as the data source, although how best to evaluate the suitability of an EHR database when planning a study remains to be clarified. The aim of the present study is to identify and characterize the data sources that have been used for conducting non-interventional observational studies in Japan and to propose a flow diagram to help researchers determine the most appropriate EHR database for their study goals.
Methods: We compiled a list of published articles reporting observational studies conducted in Japan by searching PubMed for relevant articles published in the last 3 years and by searching database providers’ publication lists related to studies using their databases. For each article, we reviewed the abstract and/or full text to obtain information about the data source, target disease or therapeutic area, number of patients, and study design (prospective or retrospective). We then characterized the identified EHR databases.
Results: In Japan, non-interventional observational studies have mostly been conducted using data stored locally at individual medical institutions (713/1463) or collected from several collaborating medical institutions (351/1463). Whereas the studies conducted with large-scale integrated databases (195/1463) were mostly retrospective (68.2%), 27.2% of the single-center studies, 46.2% of the multi-center studies, and 74.4% of the post-marketing surveillance studies identified in the present study were conducted prospectively.
Conclusions: Our analysis revealed that non-interventional observational studies in Japan were conducted using data stored locally at individual medical institutions or collected from collaborating medical institutions. Disease registries, disease databases, and large-scale databases would enable researchers to conduct studies with large sample sizes, providing robust data from which strong inferences can be drawn. Using our flow diagram, researchers planning non-interventional observational studies can consider the strengths and limitations of each available database and choose the most appropriate one for their study goals.
Trial registration: Not applicable.
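The role of such a flow diagram can be illustrated with a toy decision function; the branching questions and recommendations below are assumptions chosen for illustration, not the authors' published criteria.

```python
def suggest_data_source(prospective, rare_disease, need_large_n):
    """A rough sketch of the kind of branching a database-selection
    flow diagram encodes; the criteria here are illustrative only."""
    if prospective:
        # Prospective collection usually means recruiting at the
        # point of care, i.e. single- or multi-center EHR data.
        return "single-center or collaborating multi-center EHR data"
    if rare_disease:
        return "disease registry or disease-specific database"
    if need_large_n:
        return "large-scale integrated EHR/claims database"
    return "locally stored data at a single institution"

print(suggest_data_source(prospective=False, rare_disease=False,
                          need_large_n=True))
```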


Author(s):  
Ladjel Bellatreche ◽  
Mukesh Mohania

Recently, organizations have increasingly emphasized applications in which current and historical data are analyzed and explored comprehensively, identifying useful trends and creating summaries of the data in order to support high-level decision making. Every organization keeps accumulating data from its different functional units so that, after integration, the data can be analyzed and important decisions made from the analytical results. Conceptually, a data warehouse is extremely simple. As popularized by Inmon (1992), it is a “subject-oriented, integrated, time-variant, non-updatable collection of data used to support management decision-making processes and business intelligence”. A data warehouse is a repository into which are placed all data relevant to the management of an organization and from which emerge the information and knowledge needed to effectively manage the organization. This management can be done using data-mining techniques, comparisons of historical data, and trend analysis. For such analysis, it is vital that (1) the data be accurate, complete, consistent, well defined, and time-stamped for informational purposes; and (2) the data follow business rules and satisfy integrity constraints. Designing a data warehouse is a lengthy, time-consuming, and iterative process. Because data warehouse applications are interactive, fast query response time is a critical performance goal, and the physical design of a warehouse therefore receives the lion’s share of the research done in the data warehousing area. Several techniques have been developed to meet the performance requirements of such applications, including materialized views, indexing techniques, partitioning, and parallel processing. Next, we briefly outline the architecture of a data warehousing system.
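As a concrete illustration of one of these physical design techniques, the sketch below simulates a materialized view: an aggregate is computed once from a tiny fact table so that repeated interactive queries read the precomputed summary instead of rescanning the facts. The schema and data are invented for illustration, not taken from the chapter.

```python
from collections import defaultdict

# A tiny hypothetical fact table: (store, product, month, sales_amount).
FACTS = [
    ("S1", "P1", "2024-01", 100.0),
    ("S1", "P2", "2024-01", 40.0),
    ("S2", "P1", "2024-02", 75.0),
    ("S1", "P1", "2024-02", 60.0),
]

def materialize_sales_by_store_month(facts):
    """Precompute the aggregate once, as a materialized view would,
    so repeated queries avoid rescanning the fact table."""
    view = defaultdict(float)
    for store, _product, month, amount in facts:
        view[(store, month)] += amount
    return view

VIEW = materialize_sales_by_store_month(FACTS)

# Interactive queries now hit the (much smaller) precomputed view.
print(VIEW[("S1", "2024-01")])  # -> 140.0
```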


2013 ◽  
Vol 321-324 ◽  
pp. 2543-2550
Author(s):  
Xiao Guo Wang ◽  
Ru Jia

Considering the functional requirements for the essential, value-added, predictive, and personalized services demanded by users in universities, enterprises, and government, this paper designs an infrastructure for a university information service platform using data warehouse technology. By means of the information resource integration method put forward in this paper, the platform realizes subject-oriented, multi-scale services that meet users’ service requirements and support decision making.


Author(s):  
Srikumar Krishnamoorthy

Acme Inc, a large retailer, explores the use of a data warehouse to address its decision support infrastructure challenges. Acme plans a pilot study to assess the feasibility and evaluate the business benefits of using a data warehouse. The focus of this case is on the steps involved in the design, development, and implementation of a data warehouse.


Testing is essential in data warehouse systems for decision making because the accuracy, validity, and correctness of the data depend on it. Given the characteristics and complexity of a data warehouse, this paper attempts to show the scope of automated testing in assuring the best data warehouse solutions. First, we developed a data set generator for creating synthetic but near-real data; then, in the synthesized data, anomalies were classified with the help of a hand-coded Extraction, Transformation and Loading (ETL) routine. To assure the quality of data for a data warehouse, and to illustrate how important extraction, transformation, and loading are, several important test cases were identified. Automated testing procedures were then embedded in the hand-coded ETL routine to ensure data quality. Statistical analysis revealed a large improvement in data quality under the automated testing procedures, reinforcing that automated testing gives promising results for data warehouse quality. For effective and easy maintenance of distributed data, a novel architecture was proposed. Although the desired results of this research were achieved and the objectives are promising, the results still need to be validated in a real-life environment: this research was done in a simulated environment, which may not always behave as real deployments do. Hence, the full potential of the proposed architecture cannot be assessed until it is deployed to manage real, globally distributed data.
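A minimal sketch of the approach the abstract describes, assuming hypothetical record fields and anomaly types: generate synthetic data with a known fraction of injected anomalies, run a hand-coded cleansing step, and embed automated assertions that verify no anomaly survives and no valid row is lost.

```python
import random

def generate_synthetic_rows(n, anomaly_rate=0.1, seed=42):
    """Generate near-real customer rows, injecting a known fraction of
    anomalies (missing names, negative ages) so the automated tests
    have ground truth to check the ETL routine against."""
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        row = {"id": i, "name": f"cust_{i}", "age": rng.randint(18, 90)}
        if rng.random() < anomaly_rate:
            if rng.random() < 0.5:
                row["name"] = None          # anomaly type 1: missing name
            else:
                row["age"] = -rng.randint(1, 10)  # anomaly type 2: bad age
        rows.append(row)
    return rows

def etl_clean(rows):
    """A hand-coded cleansing step: drop rows failing basic rules."""
    return [r for r in rows
            if r["name"] is not None and 0 < r["age"] < 120]

def test_etl_output_quality():
    """Automated check embedded after the ETL routine: no anomaly may
    survive, and no valid row may be dropped."""
    raw = generate_synthetic_rows(1000)
    clean = etl_clean(raw)
    assert all(r["name"] is not None and r["age"] > 0 for r in clean)
    valid = [r for r in raw if r["name"] is not None and r["age"] > 0]
    assert len(clean) == len(valid)

test_etl_output_quality()
print("ETL quality checks passed")
```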

