Performance Analysis for ECG Signals Using Data Warehouse Architecture

2020 ◽  
Author(s):  
David R Henry
JAMIA Open ◽  
2021 ◽  
Vol 4 (2) ◽  
Author(s):  
Divya Joshi ◽  
Ali Jalali ◽  
Todd Whipple ◽  
Mohamed Rehman ◽  
Luis M Ahumada

Abstract Objective To develop a predictive analytics tool that would help evaluate different scenarios and multiple variables for clearing the surgical patient backlog during the COVID-19 pandemic. Materials and Methods Using data from 27 866 cases (May 1, 2018–May 1, 2020) stored in the Johns Hopkins All Children’s data warehouse and inputs from 30 operations-based variables, we built mathematical models for (1) time to clear the case backlog, (2) utilization of personal protective equipment (PPE), and (3) assessment of overtime needs. Results The tool enabled us to predict desired variables, including the number of days to clear the patient backlog, PPE needed, staff/overtime needed, and cost for different backlog reduction scenarios. Conclusions Predictive analytics, machine learning, and multiple variable inputs, coupled with nimble scenario creation and a user-friendly visualization, helped us determine the most effective deployment of operating room personnel. Operating rooms worldwide can use this tool to overcome patient backlogs safely.
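The abstract describes models for backlog clearance time, PPE use, and overtime. A minimal sketch of the core arithmetic, assuming the tool divides the backlog by the extra daily case throughput a scenario provides (the function names, inputs, and PPE-per-case figure below are illustrative, not the authors' actual model):

```python
import math

def days_to_clear(backlog_cases, extra_cases_per_day):
    """Days needed to clear a surgical backlog, given throughput added
    on top of normal daily demand in a given staffing scenario."""
    if extra_cases_per_day <= 0:
        raise ValueError("a scenario must add capacity above baseline demand")
    return math.ceil(backlog_cases / extra_cases_per_day)

def ppe_needed(cases, ppe_sets_per_case):
    """Total PPE sets consumed while working through the given cases."""
    return cases * ppe_sets_per_case

# Scenario: 1200 backlogged cases, 8 extra cases/day, 6 PPE sets per case.
days = days_to_clear(1200, 8)          # 150 days
ppe = ppe_needed(1200, 6)              # 7200 PPE sets
```

Comparing several such scenarios side by side (varying extra throughput, overtime hours, and PPE assumptions) is what the visualization layer described in the abstract would surface to decision makers.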


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1568
Author(s):  
Junmo Kim ◽  
Geunbo Yang ◽  
Juhyeong Kim ◽  
Seungmin Lee ◽  
Ko Keun Kim ◽  
...  

Recently, interest in biometric authentication based on electrocardiograms (ECGs) has increased. Nevertheless, a person’s ECG signal may vary according to factors such as their emotional or physical state, thus hindering authentication. We propose an adaptive ECG-based authentication method that performs incremental learning to identify ECG signals from a subject under a variety of measurement conditions. An incremental support vector machine (SVM) is adopted to implement the incremental learning. We collected ECG signals from 11 subjects for 10 min on each of six days, using the data from days 1 to 5 for incremental learning and the data from day 6 for testing. The authentication results show that the proposed system consistently reduces the false acceptance rate from 6.49% to 4.39% and increases the true acceptance rate from 61.32% to 87.61% per single ECG wave after incremental learning on the five days of data. In addition, authentication tested on data obtained a day after the latest training shows the false acceptance rate remaining within a reliable range (3.5–5.33%) and the true acceptance rate improving (70.05–87.61%) over the five days.
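The incremental learning idea above can be sketched with a linear SVM trained online by stochastic gradient descent on the hinge loss, updated one sample (or one day's batch) at a time. This is a toy illustration, not the paper's implementation: the 2-D features, labels (+1 genuine, −1 impostor), and hyperparameters are all made up.

```python
class OnlineLinearSVM:
    """Linear SVM updated incrementally via SGD on the hinge loss."""

    def __init__(self, n_features, lr=0.1, reg=0.01):
        self.w = [0.0] * n_features   # weight vector
        self.b = 0.0                  # bias
        self.lr, self.reg = lr, reg

    def partial_fit(self, x, y):
        """One SGD step on a single sample; y must be -1 or +1."""
        margin = y * (sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)
        for i in range(len(self.w)):
            grad = self.reg * self.w[i] - (y * x[i] if margin < 1 else 0.0)
            self.w[i] -= self.lr * grad
        if margin < 1:
            self.b += self.lr * y

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else -1

# "Day by day" incremental updates: each day contributes a new batch of
# (feature_vector, label) pairs without retraining from scratch.
model = OnlineLinearSVM(n_features=2)
day_batches = [
    [((2.0, 2.0), 1), ((-2.0, -2.0), -1)],   # day 1
    [((2.5, 1.5), 1), ((-1.5, -2.5), -1)],   # day 2
]
for batch in day_batches:
    for x, y in batch:
        model.partial_fit(x, y)
```

The key property mirrored here is that later days refine the same decision boundary rather than replacing it, which is why performance in the paper improves as more measurement conditions are absorbed.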


2011 ◽  
Vol 474-476 ◽  
pp. 938-942
Author(s):  
Chih Sheng Chen ◽  
Guan Yu Chen ◽  
Jing Wun Hong ◽  
Ji Rou Jhang ◽  
Jia Yi Liou ◽  
...  

This research explores the relation between TW-DRG and pharmacological information, using the concept of a data warehouse as its basis. The aim is to assist doctors, on the condition that patients’ rights are not affected, in replacing high-priced pharmaceuticals with low-priced ones that have the same pharmacological and pharmacodynamic effects, in order to reduce medication costs in medical institutions and hospitals. The results show that differences in doctors’ medication habits can be reported to hospitals and doctors for policy analysis on medication. Doctors can also make appropriate adjustments to their prescribing and identify replaceable pharmaceuticals, so that pharmaceutical costs can be lowered.
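The substitution logic described above can be sketched as a lookup over a drug catalog grouped by pharmacological class: within a drug's class, the cheapest equivalent is a candidate replacement. The catalog rows, class codes, and prices below are fabricated for illustration only.

```python
# Toy drug catalog: drugs sharing a class code are assumed to have the
# same pharmacological and pharmacodynamic effects (illustrative data).
drugs = [
    {"name": "DrugA", "drug_class": "C09AA", "unit_price": 12.0},
    {"name": "DrugB", "drug_class": "C09AA", "unit_price": 4.5},
    {"name": "DrugC", "drug_class": "N02BE", "unit_price": 1.0},
]

def cheapest_substitute(drug_name, catalog):
    """Return the name of a cheaper same-class drug, or None if the
    prescribed drug is already the cheapest in its class."""
    drug = next(d for d in catalog if d["name"] == drug_name)
    same_class = [d for d in catalog if d["drug_class"] == drug["drug_class"]]
    best = min(same_class, key=lambda d: d["unit_price"])
    return best["name"] if best["unit_price"] < drug["unit_price"] else None

cheapest_substitute("DrugA", drugs)   # suggests "DrugB"
cheapest_substitute("DrugC", drugs)   # None: already cheapest in its class
```

In the warehouse setting, the same grouping would run as an aggregate query over prescription fact data, with per-doctor substitution rates feeding the policy analysis the abstract mentions.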


2018 ◽  
Vol 14 (1) ◽  
pp. 43-50 ◽  
Author(s):  
Anna Fitzpatrick ◽  
Joseph A Stone ◽  
Simon Choppin ◽  
John Kelley

Performance analysis and identifying performance characteristics associated with success are of great importance to players and coaches in any sport. However, while large amounts of data are available within elite tennis, very few players employ an analyst or attempt to exploit the data to enhance their performance; this is partly attributable to the considerable time and complex techniques required to interpret these large datasets. Using data from the 2016 and 2017 French Open tournaments, we tested the agreement between the results of a simple new method for identifying important performance characteristics (the Percentage of matches in which the Winner Outscored the Loser, PWOL) and the results of two standard statistical methods, to establish the validity of the simple method. Spearman’s rank-order correlations between the results of the three methods demonstrated excellent agreement, with all methods identifying the same three performance characteristics (points won at 0–4 rally length, baseline points won, and first serve points won) as strongly associated with success. Consequently, we propose that the PWOL method is valid for identifying performance characteristics associated with success in tennis, and is therefore a suitable alternative to more complex statistical methods, as it is simpler to calculate, interpret, and contextualise.
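As its name spells out, the PWOL statistic for a characteristic is simply the percentage of matches in which the match winner outscored the loser on that characteristic. A minimal sketch, with fabricated match data standing in for the French Open figures:

```python
def pwol(matches):
    """PWOL for one performance characteristic.

    matches: list of (winner_value, loser_value) pairs, one per match,
    giving the match winner's and loser's score on that characteristic.
    Returns the percentage of matches where the winner outscored the loser.
    """
    outscored = sum(1 for winner, loser in matches if winner > loser)
    return 100.0 * outscored / len(matches)

# Toy data: first serve points won by the match winner vs the loser,
# in four hypothetical matches.
first_serve_points_won = [(45, 38), (50, 52), (61, 40), (33, 30)]
pwol(first_serve_points_won)   # 75.0: winner outscored loser in 3 of 4
```

A characteristic with PWOL well above 50% is one the match winner usually dominates, which is why ranking characteristics by PWOL agreed so closely with the two standard statistical methods in the study.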


2017 ◽  
Vol 801 ◽  
pp. 012030 ◽  
Author(s):  
A S Sinaga ◽  
A S Girsang

2019 ◽  
Vol 3 (2) ◽  
pp. 10
Author(s):  
Ardalan Husin Awlla

In this period of computerization, education has also remodeled itself and is no longer restricted to the old lecture technique. The everyday quest is to discover better approaches to make it more successful and productive for students. These days, masses of data are gathered in educational databases, yet they remain unutilized. To obtain the required advantages from such big data, effective tools are required. Data mining is a developing, powerful tool for analysis and prediction. It has been effectively applied in fields such as fraud detection, marketing, promotion, forecasting, and loan assessment; however, it is at an incipient stage in the area of education. In this paper, data mining techniques are applied to construct a classification model to predict the performance of students.
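A classification model of this kind maps student attributes to an outcome label. The abstract does not name the algorithm, so as a hedged illustration the sketch below learns a single decision stump (a one-attribute threshold, the simplest form of decision tree) from toy data; the attribute (internal marks) and labels are invented for the example.

```python
def fit_stump(samples):
    """Learn the threshold on one attribute that minimises training
    misclassifications when predicting 'pass' at or above it.

    samples: list of (attribute_value, label) with label 'pass'/'fail'.
    """
    best_threshold, best_errors = None, None
    for t in sorted({v for v, _ in samples}):
        errors = sum(1 for v, y in samples if (v >= t) != (y == "pass"))
        if best_errors is None or errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

# Toy training data: (internal marks, outcome).
train = [(35, "fail"), (48, "fail"), (55, "pass"), (62, "pass"), (71, "pass")]
threshold = fit_stump(train)                      # 55 on this data

def predict(marks):
    return "pass" if marks >= threshold else "fail"
```

A real study would use richer attributes (attendance, assignments, prior grades) and a full tree or Bayesian classifier, but the train-then-predict shape is the same.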


Author(s):  
Ladjel Bellatreche ◽  
Mukesh Mohania

Recently, organizations have increasingly emphasized applications in which current and historical data are analyzed and explored comprehensively, identifying useful trends and creating summaries of the data in order to support high-level decision making. Every organization keeps accumulating data from different functional units so that the data can be analyzed (after integration) and important decisions can be made from the analytical results.

Conceptually, a data warehouse is extremely simple. As popularized by Inmon (1992), it is a “subject-oriented, integrated, time-invariant, non-updatable collection of data used to support management decision-making processes and business intelligence”. A data warehouse is a repository into which are placed all data relevant to the management of an organization, and from which emerge the information and knowledge needed to manage the organization effectively. This management can be done using data-mining techniques, comparisons of historical data, and trend analysis. For such analysis, it is vital that (1) data be accurate, complete, consistent, well defined, and time-stamped for informational purposes; and (2) data follow business rules and satisfy integrity constraints.

Designing a data warehouse is a lengthy, time-consuming, and iterative process. Due to the interactive nature of a data warehouse application, fast query response time is a critical performance goal. The physical design of a warehouse therefore receives the lion’s share of research in the data warehousing area. Several techniques have been developed to meet the performance requirements of such applications, including materialized views, indexing techniques, partitioning, parallel processing, and so forth. Next, we briefly outline the architecture of a data warehousing system.
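Of the physical-design techniques listed above, a materialized view is the easiest to sketch: an aggregate over a fact table is computed once and stored physically, so interactive queries read the small summary instead of rescanning the facts. SQLite has no native materialized views, so the sketch below emulates one with a plain table; all table and column names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Fact table in a star schema (illustrative columns).
CREATE TABLE sales_fact (product TEXT, region TEXT, amount REAL);
INSERT INTO sales_fact VALUES
  ('ecg_monitor',   'east', 100.0),
  ('ecg_monitor',   'west', 250.0),
  ('defibrillator', 'east', 400.0);

-- "Materialized view": the aggregate is precomputed and stored,
-- trading refresh cost for fast interactive reads.
CREATE TABLE mv_sales_by_product AS
  SELECT product, SUM(amount) AS total
  FROM sales_fact
  GROUP BY product;
""")

rows = dict(con.execute("SELECT product, total FROM mv_sales_by_product"))
# {'ecg_monitor': 350.0, 'defibrillator': 400.0}
```

In a production warehouse the summary table would be refreshed on a schedule or incrementally as the fact table grows; choosing which aggregates to materialize is exactly the view-selection problem the physical-design literature studies.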

