Applying FAIR Principles to Improve Data Searchability of Emergency Department Datasets: A Case Study for HCUP-SEDD

2020
Vol 59 (01)
pp. 048-056
Author(s):
Karishma Bhatia
James Tanch
Elizabeth S. Chen
Indra Neil Sarkar

Abstract
Background: There is a recognized need to improve how scholarly data are managed and accessed. The scientific community has proposed the findable, accessible, interoperable, and reusable (FAIR) data principles to address this issue.
Objective: The objective of this case study was to develop a system for improving the FAIRness of the Healthcare Cost and Utilization Project's State Emergency Department Databases (HCUP's SEDD) within the context of data catalog availability.
Methods: A search tool, EDCat (Emergency Department Catalog), was designed to improve the "FAIRness" of electronic health databases and tested on datasets from HCUP-SEDD. Elasticsearch was used as the database for EDCat's search engine. Datasets were curated and defined. Searchable data dictionary-related elements and Unified Medical Language System (UMLS) concepts were included in the curated metadata. Functionality to standardize search terms using UMLS concepts was added to the user interface.
Results: The EDCat system improved the overall FAIRness of HCUP-SEDD by improving the findability of individual datasets and increasing the efficacy of searches for specific data elements and data types.
Discussion: The databases considered for this case study were limited in number, as few data distributors make the data dictionaries of their datasets available. The publication of data dictionaries should be encouraged through the FAIR principles, and further efforts should be made to improve the specificity and measurability of the FAIR principles.
Conclusion: In this case study, the distribution of datasets from HCUP-SEDD was made more FAIR through the development of a search tool, EDCat. EDCat will be evaluated and developed further to include datasets from other sources.
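A minimal illustrative sketch, not the authors' implementation, of how curated data-dictionary metadata annotated with UMLS concepts might be indexed and queried in Elasticsearch; the index name, field layout, and example element below are hypothetical.

```python
# Sketch only: index/field names and the example element are hypothetical,
# not taken from EDCat itself. The CUI shown (C0011900, "Diagnosis") is a
# real UMLS concept used purely as an example.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One curated data element from a hypothetical SEDD data dictionary.
element = {
    "dataset": "SEDD 2016 Core",
    "element_name": "DX1",
    "description": "Principal diagnosis (ICD-10-CM)",
    "data_type": "string",
    "umls_concepts": [{"cui": "C0011900", "preferred_name": "Diagnosis"}],
}
es.index(index="edcat-elements", document=element)
es.indices.refresh(index="edcat-elements")

# Search by a UMLS preferred name rather than the raw column name, so that
# synonymous user queries resolve to the same standardized concept.
hits = es.search(
    index="edcat-elements",
    query={"match": {"umls_concepts.preferred_name": "Diagnosis"}},
)
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["dataset"], hit["_source"]["element_name"])
```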

2005
Vol 15 (03)
pp. 337-352
Author(s):
Thomas Nitsche

Data distributions are an abstract notion for describing parallel programs by means of overlapping data structures. A generic data distribution layer serves as a basis for implementing specific data distributions over arbitrary algebraic data types and arrays as well as generic skeletons. The necessary communication operations for exchanging overlapping data elements are derived automatically from the specification of the overlappings. This paper describes how the communication operations used internally by the generic skeletons are derived, especially for the asynchronous and synchronous communication scheduling. As a case study, we discuss the iterative solution of PDEs and compare a hand-coded MPI version with a skeletal one based on overlapping data distributions.
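As a hedged illustration of the communication pattern that such overlapping distributions generate, the sketch below hand-codes a one-dimensional halo (ghost-cell) exchange with mpi4py before a Jacobi-style update; it stands in for, rather than reproduces, the paper's skeleton-derived operations, and the array size and iteration count are arbitrary.

```python
# Sketch of the halo exchange that overlapping data distributions derive
# automatically; written by hand here with mpi4py for illustration only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                      # interior points owned by this process
u = np.zeros(n_local + 2)        # +2 ghost cells holding overlapped data
u[1:-1] = rank                   # dummy initial values

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for _ in range(10):              # a few Jacobi-style iterations
    # Exchange the overlapping boundary elements with both neighbours.
    comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Local update using the freshly received ghost values.
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
```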


2015
Vol 22 (3)
pp. 529-535
Author(s):
James C McClay
Peter J Park
Mark G Janczewski
Laura Heermann Langford

Abstract
Background: Emergency departments in the United States serve over 130 million visits per year. The demands for information from these visits require interoperable data exchange standards. While multiple data exchange specifications are in use, none have undergone rigorous standards review. This paper describes the creation and balloting of the Health Level Seven (HL7) Data Elements for Emergency Department Systems (DEEDS).
Methods: Existing data exchange specifications were collected and organized into categories reflecting the workflow of emergency care. The concepts were then mapped to existing standards for vocabulary, data types, and the HL7 information model. The HL7 community then processed the specification through the normal balloting process, addressing all comments and concerns. The resulting specification was then submitted for publication as an HL7 informational standard.
Results: The resulting specification contains 525 concepts related to emergency care required for operations and reporting to external agencies. An additional 200 of the most commonly ordered laboratory tests were included. Each concept was given a unique identifier and mapped to Logical Observation Identifiers Names and Codes (LOINC). HL7 standard data types were applied.
Discussion: The HL7 DEEDS specification represents the first set of common ED-related data elements to undergo rigorous standards development. The availability of this standard will contribute to improved interoperability of emergency care data.
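A hedged sketch of what one specified data element might look like once it carries a unique identifier, a LOINC mapping, and an HL7 data type; the identifier scheme and field layout are illustrative and not the published DEEDS structure (the LOINC code 8867-4, heart rate, is real but used purely as an example).

```python
# Illustrative only: the element identifier and field layout are hypothetical,
# not the published DEEDS specification.
from dataclasses import dataclass

@dataclass
class EDDataElement:
    element_id: str      # unique identifier assigned within the specification
    name: str            # human-readable concept name
    loinc_code: str      # mapped LOINC identifier
    hl7_data_type: str   # HL7 standard data type applied to the element

heart_rate = EDDataElement(
    element_id="DEEDS-EX-001",          # hypothetical identifier
    name="Heart rate at ED arrival",
    loinc_code="8867-4",                # real LOINC code, example use only
    hl7_data_type="PQ",                 # physical quantity
)
print(heart_rate)
```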


CJEM
2017
Vol 20 (4)
pp. 532-538
Author(s):
Lucas B. Chartier
Antonia S. Stang
Samuel Vaillancourt
Amy H. Y. Cheng

ABSTRACT
The topics of quality improvement (QI) and patient safety have become important themes in health care in recent years, particularly in the emergency department setting, which is a frequent point of contact with the health care system for patients. In the first of three articles in this series, meant as a QI primer for emergency medicine clinicians, we introduced the strategic planning required to develop an effective QI project, using a fictional case study as an example. In this second article we continue with our example of improving time to antibiotics for patients with sepsis, and introduce the Model for Improvement. We will review what makes a good aim statement, the various categories of measures that can be tracked during a QI project, and the relative merits and challenges of potential change concepts and ideas. We will also present the Model for Improvement's rapid-cycle change methodology, the Plan-Do-Study-Act (PDSA) cycle. The final article in this series will focus on the evaluation and sustainability of QI projects.


1998
Vol 24 (1)
pp. 35-44
Author(s):
DEEDS Writing Committee
Daniel A. Pollock
Diane L. Adams
Lisa Marie Bernardo
Vicky Bradley
...

2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Gianluca Solazzo
Ylenia Maruccia
Gianluca Lorenzo
Valentina Ndou
Pasquale Del Vecchio
...

Purpose: This paper aims to highlight how big social data (BSD) and analytics exploitation may help destination management organisations (DMOs) to understand tourist behaviours and destination experiences and images. Gathering data from two different sources, Flickr and Twitter, textual and visual contents are used to perform different analytics tasks and generate insights on tourist behaviour and the affective aspects of the destination image.
Design/methodology/approach: This work adopts a multimodal approach to BSD and analytics, considering multiple BSD sources and different analytics techniques on heterogeneous data types to obtain complementary results on the Salento region (Italy) case study.
Findings: Results show that the generated insights allow DMOs to acquire new knowledge: discovery of previously unknown clusters of points of interest, identification of trends and seasonal patterns of tourist demand, monitoring of topic and sentiment, and identification of attractive places. DMOs can exploit these insights to address their needs in terms of decision support for the management and development of the destination, the enhancement of destination attractiveness, the shaping of new marketing and communication strategies, and the planning of tourist demand within the destination.
Originality/value: The originality of this work lies in the use of BSD and analytics techniques to give DMOs specific insights on a destination in a deep and wide fashion. Collected data are used with a multimodal analytic approach to build tourist characteristics, images, attitudes and preferred destination attributes, which represent for DMOs a unique means for problem-solving, decision-making, innovation and prediction.
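One of the analytics tasks mentioned, discovering clusters of points of interest from geotagged content, could in principle be approached as in the sketch below; it is not the authors' pipeline, the coordinates are invented, and the Twitter text analytics (topics, sentiment) are omitted.

```python
# Minimal sketch: clustering geotagged posts into candidate points of
# interest with DBSCAN. Coordinates are made up for illustration.
import numpy as np
from sklearn.cluster import DBSCAN

# (latitude, longitude) of hypothetical geotagged photos in the Salento area
coords = np.array([
    [40.352, 18.172], [40.353, 18.171], [40.351, 18.173],   # cluster A
    [40.146, 18.490], [40.147, 18.491],                     # cluster B
    [40.045, 17.980],                                       # isolated point
])

# eps is in degrees here for simplicity; a haversine metric on radians
# would be more appropriate for real data.
labels = DBSCAN(eps=0.01, min_samples=2).fit_predict(coords)
for label in sorted(set(labels)):
    members = coords[labels == label]
    tag = "noise" if label == -1 else f"POI cluster {label}"
    print(tag, "->", len(members), "posts, centroid", members.mean(axis=0))
```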


2008
Vol 21 (2)
pp. 120-130
Author(s):
Joseph S. Guarisco
Stefoni A. Bavin

Purpose: The purpose of this paper is to provide a case study testing the Primary Provider Theory proposed by Aragon, which states that, disproportionate to any other variables, patient satisfaction is distinctly and primarily linked to physician behaviors and secondarily to waiting times.
Design/methodology/approach: The case study began by creating incentives motivating physicians to reflect on and improve behaviors (patient interactions) and practice patterns (workflow efficiency). The Press Ganey Emergency Department Survey was then utilized to track the impact of the incentive programs and to ascertain any relationship between patient satisfaction with the provider and global patient satisfaction with emergency department visits, by measuring patient satisfaction over an eight-quarter period.
Findings: The findings were two-fold: firstly, the concept of "pay for performance" as a tool for physician motivation was valid; and secondly, the impact on global patient satisfaction of increases in patient satisfaction with the primary provider was significant and highly correlated, as proposed by Aragon.
Practical implications: These findings can encourage hospitals and physician groups to place a high value on the performance of primary providers of patient care, provide incentives for appropriate provider behaviors through "pay for performance" programs, and promote physician understanding of the links between global patient satisfaction with physician behaviors and business growth, malpractice reduction, and other key measures of business success.
Originality/value: There are no other case studies prior to this project validating the Primary Provider Theory in an urban medical center; this project adds to the validity and credibility of the theory in this setting.
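The correlation question at the heart of the study can be illustrated with a short sketch that computes the Pearson correlation between quarterly provider-satisfaction and global-satisfaction scores; the eight values below are invented placeholders, not Press Ganey data.

```python
# Sketch only: the eight quarterly scores below are invented placeholders,
# not the study's Press Ganey results.
from scipy.stats import pearsonr

provider_satisfaction = [82.1, 83.4, 84.0, 85.2, 86.1, 86.9, 87.5, 88.0]
global_satisfaction   = [78.0, 79.1, 80.3, 81.0, 82.4, 83.0, 83.9, 84.6]

r, p_value = pearsonr(provider_satisfaction, global_satisfaction)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")
```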


2021
Vol 156 (Supplement_1)
pp. S126-S126
Author(s):
C Attaway
F El-Sharkawy Navarro
M Richard-Greenblatt
S Herlihy
C Gentile
...

Abstract
Introduction/Objective: Nasopharyngeal (NP) swabs have been the traditional specimen source used for testing for respiratory viruses. However, at the start of the COVID-19 pandemic, several studies suggested that saliva could also be used as a specimen source for SARS-CoV-2 testing. Despite potential benefits, there were limited data on the characteristics of this specimen type, and few commercial assays with FDA emergency use authorization allowed saliva as a specimen source. In order to explore the feasibility of and validate using saliva as a specimen source for ambulatory and emergency department patients, we designed a study to compare saliva to NP swabs for SARS-CoV-2 testing.
Methods/Case Report: Specimens were collected in the emergency department and ambulatory testing sites between May 6, 2020, and July 7, 2020. Nasopharyngeal swabs were collected as part of routine clinical practice, and patients were given written instructions to self-collect 1 mL of saliva into a sterile specimen cup with or without a straw. SARS-CoV-2 testing was performed in parallel with both specimen types using the TaqPath COVID-19 Combo Kit (Thermo Fisher, Waltham, MA). Saliva was diluted 1:1 in saline prior to testing. Specimens were transported to the lab at 4°C and frozen at -80°C prior to testing.
Results: Seventy-four patients had both an NP swab and saliva tested in this study. Thirty of the 74 patients (41%) were unable to produce the full 1 mL of saliva requested, but all samples had sufficient volume for testing after dilution. There were 34 positive samples, with an 82% positive agreement between the NP swabs and saliva. In 6 cases the NP swab was positive and the paired saliva was negative; in 1 case only the saliva was positive. The average Ct of the positive NP swabs with a paired negative saliva sample was 39.6. There was only a single invalid test for one of the saliva samples.
Conclusion: Saliva was a straightforward sample to collect and test for SARS-CoV-2. Challenges included obtaining sufficient sample and a less predictable matrix that required dilution to ensure proper pipetting. In this study, NP swabs were more sensitive for detection of SARS-CoV-2. Paired saliva was more often negative in patients shedding small amounts of SARS-CoV-2, based on the high Ct of the positive NP sample.
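The reported 82% figure is consistent with a standard positive percent agreement calculation; the sketch below illustrates the arithmetic using one reading of the abstract's counts (28 dual positives and 6 NP-only positives), shown only to demonstrate the formula.

```python
# Positive percent agreement (PPA) of saliva against the NP swab reference,
# computed from paired results. The counts are one reading of the abstract
# and are used only to illustrate the calculation.
def positive_percent_agreement(both_positive: int, reference_only: int) -> float:
    """PPA = dual positives / all reference-method (NP swab) positives."""
    return both_positive / (both_positive + reference_only)

ppa = positive_percent_agreement(both_positive=28, reference_only=6)
print(f"PPA of saliva vs NP swab: {ppa:.1%}")   # ~82%
```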


The previous chapter overviewed big data, including its types, sources, analytic techniques, and applications. This chapter briefly discusses the architecture components that deal with the huge volume of data. The complexity of big data types calls for a logical architecture with layers and high-level components for building a big data solution, relating data sources to atomic patterns. The dimensions of the approach include volume, variety, velocity, veracity, and governance. The layers of the architecture are the big data sources, the data massaging and store layer, the analysis layer, and the consumption layer. Big data sources comprise data collected from various origins on which data scientists perform analytics. Data can come from internal and external sources. Internal sources comprise transactional data, device sensors, business documents, internal files, etc. External sources include social network profiles, geographical data, data stores, etc. Data massaging is the preprocessing of extracted data, such as removal of missing values, dimensionality reduction, and noise removal, to attain a useful format for storage. The analysis layer provides insight using the preferred analytics techniques and tools. The analytics methods, the issues to consider, the requirements, and the tools are discussed in detail. The consumption layer delivers the resulting business insight to consumers such as retail marketing, the public sector, financial bodies, and the media. Finally, a case study of architectural drivers is applied to a retail industry application, and its challenges and use cases are discussed.
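A compact sketch of how the layers described above might be wired together in code; the functions, record fields, and toy aggregation are illustrative only and do not come from the chapter.

```python
# Illustrative wiring of the four layers; all names are hypothetical.
from typing import Iterable

def source_layer() -> Iterable[dict]:
    """Big data sources layer: internal (transactions, sensors) and external feeds."""
    yield {"customer": "A", "amount": 120.0}
    yield {"customer": "B", "amount": None}        # record with a missing value
    yield {"customer": "A", "amount": 80.0}

def massaging_layer(records: Iterable[dict]) -> list[dict]:
    """Data massaging and store layer: drop records with missing values."""
    return [r for r in records if r["amount"] is not None]

def analysis_layer(records: list[dict]) -> dict:
    """Analysis layer: a trivial per-customer aggregate standing in for real analytics."""
    totals: dict[str, float] = {}
    for r in records:
        totals[r["customer"]] = totals.get(r["customer"], 0.0) + r["amount"]
    return totals

def consumption_layer(insight: dict) -> None:
    """Consumption layer: hand the insight to downstream consumers (reports, APIs)."""
    for customer, total in insight.items():
        print(f"{customer}: spent {total:.2f}")

consumption_layer(analysis_layer(massaging_layer(source_layer())))
```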

