Completeness, accuracy, and computability of National Quality Forum-specified eMeasures

2014 ◽  
Vol 22 (2) ◽  
pp. 409-416 ◽  
Author(s):  
Andy Amster ◽  
Joseph Jentzsch ◽  
Ham Pasupuleti ◽  
K G Subramanian

Abstract. Objective: To analyze the completeness, computability, and accuracy of specifications for five National Quality Forum-specified (NQF) eMeasures spanning ambulatory, post-discharge, and emergency care within a comprehensive, integrated electronic health record (EHR) environment. Materials and methods: To evaluate completeness, we assessed eMeasure logic, data elements, and value sets. To evaluate computability, we assessed the translation of eMeasure algorithms to programmable logic constructs and the availability of EHR data elements to implement specified data criteria, using a de-identified clinical data set from Kaiser Permanente Northwest. To assess accuracy, we compared eMeasure results with those obtained independently by existing audited chart abstraction methods used for external and internal reporting. Results: One measure specification was incomplete; missing applicable LOINC codes rendered it non-computable. For three of four computable measures, data availability issues occurred; the literal specification guidance for a data element differed from the physical implementation of the data element in the EHR. In two cases, cross-referencing specified data elements to EHR equivalents allowed variably accurate measure computation. Substantial data availability issues occurred for one of the four computable measures, producing highly inaccurate results. Discussion: Existing clinical workflows, documentation, and coding in the EHR were significant barriers to implementing eMeasures as specified. Implementation requires redesigning business or clinical practices and, for one measure, systemic EHR modifications, including clinical text search capabilities. Conclusions: Five NQF eMeasures fell short of being machine-consumable specifications. Both clinical domain and technological expertise are required to implement manually intensive steps from data mapping to text mining to EHR-specific eMeasure implementation.
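To make the data-availability discussion concrete, here is a minimal Python sketch of how a single eMeasure data criterion might be evaluated against EHR lab data via a LOINC value set. The value set contents, field names, and the HbA1c example are illustrative assumptions, not taken from any NQF specification.

```python
# Minimal sketch of evaluating one eMeasure data criterion against EHR data.
# The value set and field names below are hypothetical, not from an NQF spec.

from dataclasses import dataclass

@dataclass
class LabResult:
    patient_id: str
    loinc_code: str   # code as recorded in the EHR
    value: float

# Hypothetical value set: LOINC codes the measure accepts for HbA1c results.
HBA1C_VALUE_SET = {"4548-4", "17856-6"}

def meets_data_criterion(results: list[LabResult]) -> bool:
    """True if the patient has at least one result coded in the value set."""
    return any(r.loinc_code in HBA1C_VALUE_SET for r in results)

def numerator_count(population: dict[str, list[LabResult]]) -> int:
    """Count denominator patients whose EHR data satisfy the criterion."""
    return sum(meets_data_criterion(results) for results in population.values())
```

If the EHR stores a result under a local code rather than a specified LOINC code, the criterion silently fails, which is the cross-referencing problem the abstract describes.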

Author(s):  
Joseph Travers ◽  
Crystal Campitelli ◽  
Richard Light ◽  
Eric De Sa ◽  
Julie Stabile ◽  
...  

Introduction: The professional regulation sector is moving toward risk-informed approaches that require high-quality data. A key component of a corporate 2017 Data Strategy is the implementation of a data inventory and mapping project to catalogue, centralize, document, and govern data assets that support regulatory decisions, programs, and operations. Objectives and Approach: In a data-rich organization, the goals of the data inventory are to: enhance authoritative data that support programs; identify data duplications and gaps; identify data sources, owners, and users; and apply consistent data management and standards organizationally. Routinely used data assets outside the large enterprise workflow system (Excel/Word files, databases, paper collections) were catalogued. Using data governance principles and a facilitated questionnaire, departmental data stewards were interviewed about the data they generate. Questions covered data purposes, sources, types, formats, owners, retention rates, analytical products, gaps, and visions for a desired data state. A data mapping methodology highlighted dataset and variable connections within and across departments. Results: To date, over 40 staff members in 10 departments have been identified as data content experts. In addition to data in the corporate enterprise system, over 80 unique datasets were identified. In one large department, over 2,000 data elements across 26 datasets were inventoried. Data mapping analysis revealed thematic data domains, including member demographics, outcomes, certifications, tracking, and financial data, collected and held in multiple formats (Microsoft Access, Excel, Word, SPSS, PDF, e-mails, and paper documents). While 72% of the data elements were formatted numerically, approximately 8% were free text. Significant data redundancies across staff members and departments were revealed, as well as unstandardized variable naming conventions. Gap analysis highlighted the need for standardized electronic data, where not yet available, and for data management training. Conclusion/Implications: Customized data mapping reports for data users will facilitate the development of local, standardized departmental data hubs linked to a central data repository, enabling seamless organization-wide analytics, improvements in current data management practices, and greater data collaboration, with the ultimate goal of supporting risk-informed approaches.
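As an illustration of the inventory and mapping approach described above, the following Python sketch shows one way a dataset catalogue entry could be recorded and how shared variable names might be grouped across datasets to surface redundancies. The record fields and the naive name normalization are assumptions for illustration, not the project's actual methodology.

```python
# Illustrative sketch of a data-inventory entry and a naive cross-dataset
# variable mapping; dataset and field names are invented for illustration.

from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class DatasetRecord:
    name: str
    department: str
    steward: str
    storage_format: str                                 # e.g. "Excel", "Access", "paper"
    elements: list[str] = field(default_factory=list)   # variable names

def map_shared_elements(inventory: list[DatasetRecord]) -> dict[str, list[str]]:
    """Group datasets by normalized variable name to reveal redundancies."""
    shared = defaultdict(list)
    for ds in inventory:
        for element in ds.elements:
            shared[element.strip().lower()].append(ds.name)
    # Keep only variables that appear in more than one dataset.
    return {var: names for var, names in shared.items() if len(names) > 1}
```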


2016 ◽  
Vol 24 (3) ◽  
pp. 503-512
Author(s):  
Jill Boylston Herndon ◽  
Krishna Aravamudhan ◽  
Ronald L Stephenson ◽  
Ryan Brandon ◽  
Jesley Ruff ◽  
...  

Objective: To describe the stakeholder-engaged processes used to develop, specify, and validate 2 oral health care electronic clinical quality measures. Materials and Methods: A broad range of stakeholders were engaged from conception through testing to develop measures and test feasibility, reliability, and validity following National Quality Forum guidance. We assessed data element feasibility through semistructured interviews with key stakeholders using a National Quality Forum–recommended scorecard. We created test datasets of synthetic patients to test measure implementation feasibility and reliability within and across electronic health record (EHR) systems. We validated implementation with automated reporting of EHR clinical data against manual record reviews, using the kappa statistic. Results: A stakeholder workgroup was formed and guided all development and testing processes. All critical data elements passed feasibility testing. Four test datasets, representing 577 synthetic patients, were developed and implemented within EHR vendors’ software, demonstrating measure implementation feasibility. Measure reliability and validity were established through implementation at clinical practice sites, with kappa statistic values in the “almost perfect” agreement range of 0.80–0.99 for all but 1 measure component, which demonstrated “substantial” agreement. The 2 validated measures were published in the United States Health Information Knowledgebase. Conclusion: The stakeholder-engaged processes used in this study facilitated a successful measure development and testing cycle. Engaging stakeholders early and throughout development and testing promotes early identification of and attention to potential threats to feasibility, reliability, and validity, thereby averting significant resource investments that are unlikely to be fruitful.
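For readers unfamiliar with the validation statistic mentioned above, here is a minimal Python sketch of Cohen's kappa for binary agreement between automated EHR reporting and manual record review. The example data are invented, and the "almost perfect" (0.80–0.99) and "substantial" labels follow the conventional Landis and Koch interpretation rather than anything specific to this study.

```python
# Minimal sketch of Cohen's kappa for two binary raters
# (automated EHR reporting vs. manual record review).

def cohens_kappa(automated: list[int], manual: list[int]) -> float:
    n = len(automated)
    observed = sum(a == m for a, m in zip(automated, manual)) / n
    # Chance agreement from each rater's marginal rate of positive calls.
    p_auto = sum(automated) / n
    p_manual = sum(manual) / n
    expected = p_auto * p_manual + (1 - p_auto) * (1 - p_manual)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Invented toy data: 8 patients, one disagreement.
automated = [1, 1, 0, 1, 0, 1, 0, 0]
manual    = [1, 1, 0, 1, 1, 1, 0, 0]
print(cohens_kappa(automated, manual))  # 0.75, "substantial" agreement
```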


2020 ◽  
Vol 47 (3) ◽  
pp. 547-560 ◽  
Author(s):  
Darush Yazdanfar ◽  
Peter Öhman

Purpose: The purpose of this study is to empirically investigate determinants of financial distress among small and medium-sized enterprises (SMEs) during the global financial crisis and post-crisis periods. Design/methodology/approach: Several statistical methods, including multiple binary logistic regression, were used to analyse a longitudinal cross-sectional panel data set of 3,865 Swedish SMEs operating in five industries over the 2008–2015 period. Findings: The results suggest that financial distress is influenced by macroeconomic conditions (i.e. the global financial crisis) and, in particular, by various firm-specific characteristics (i.e. performance, financial leverage and financial distress in the previous year). However, firm size and industry affiliation have no significant relationship with financial distress. Research limitations: Due to data availability, this study is limited to a sample of Swedish SMEs in five industries covering eight years. Further research could examine the generalizability of these findings by investigating other firms operating in other industries and other countries. Originality/value: This study is the first to examine determinants of financial distress among SMEs operating in Sweden using data from a large-scale longitudinal cross-sectional database.
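A hedged sketch of the core estimation step, multiple binary logistic regression, is shown below in Python using statsmodels. The column names (distress, roa, leverage, lagged_distress, crisis, size) are placeholders standing in for the firm-specific and macroeconomic covariates described in the abstract, not the authors' actual variables or data.

```python
# Illustrative sketch of a binary logistic regression for financial distress.
# Column names are hypothetical placeholders, not the study's dataset.

import pandas as pd
import statsmodels.api as sm

def fit_distress_model(df: pd.DataFrame):
    """Model P(distress = 1) from firm-specific and macroeconomic covariates."""
    covariates = df[["roa", "leverage", "lagged_distress", "crisis", "size"]]
    X = sm.add_constant(covariates)        # add intercept term
    y = df["distress"]                     # 1 = financially distressed firm-year
    result = sm.Logit(y, X).fit(disp=False)
    return result                          # result.summary() gives coefficients and p-values
```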


2010 ◽  
Vol 17 (8) ◽  
pp. 1989-1994 ◽  
Author(s):  
Lee G. Wilke ◽  
Karla V. Ballman ◽  
Linda M. McCall ◽  
Armando E. Giuliano ◽  
Pat W. Whitworth ◽  
...  

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Jinghui Liu ◽  
Daniel Capurro ◽  
Anthony Nguyen ◽  
Karin Verspoor

Abstract. As healthcare providers receive fixed amounts of reimbursement for given services under DRG (Diagnosis-Related Groups) payment, DRG codes are valuable for cost monitoring and resource allocation. However, coding is typically performed retrospectively, post-discharge. We seek to predict DRGs and the DRG-based case mix index (CMI) early in an inpatient admission using routine clinical text, to estimate hospital cost in an acute setting. We examined a deep learning-based natural language processing (NLP) model to automatically predict per-episode DRGs and corresponding cost-reflecting weights on two cohorts (paid under Medicare Severity (MS) DRG or All Patient Refined (APR) DRG), without human coding efforts. It achieved macro-averaged area under the receiver operating characteristic curve (AUC) scores of 0.871 (SD 0.011) on MS-DRG and 0.884 (0.003) on APR-DRG in fivefold cross-validation experiments on the first day of ICU admission. When extended to simulated patient populations to estimate average cost-reflecting weights, the model increased its accuracy over time and obtained absolute CMI errors of 2.40 (1.07%) and 12.79% (2.31%), respectively, on the first day. As the model adapts to variations in admission time and cohort size, and requires no extra manual coding effort, it shows potential to help estimate costs for active patients and to support better operational decision-making in hospitals.
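To clarify the case mix index arithmetic behind the reported CMI errors, here is a small Python sketch assuming a hypothetical table of DRG relative weights; the weights and codes are invented, and this is not the authors' evaluation code.

```python
# Sketch of the CMI arithmetic: CMI is the mean DRG relative weight over a
# cohort, and the error compares CMI from predicted vs. coded DRGs.
# The weight table is hypothetical.

DRG_WEIGHTS = {"193": 1.42, "194": 0.97, "470": 2.05}   # relative weights (illustrative)

def case_mix_index(drg_codes: list[str]) -> float:
    """Average DRG relative weight across episodes."""
    return sum(DRG_WEIGHTS[c] for c in drg_codes) / len(drg_codes)

def cmi_error(predicted: list[str], actual: list[str]) -> tuple[float, float]:
    """Absolute and relative (%) CMI error for a patient population."""
    pred_cmi, true_cmi = case_mix_index(predicted), case_mix_index(actual)
    abs_err = abs(pred_cmi - true_cmi)
    return abs_err, abs_err / true_cmi * 100.0

print(cmi_error(["193", "470", "470"], ["193", "194", "470"]))
```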


2011 ◽  
Vol 53 (6) ◽  
pp. 110S
Author(s):  
Benjamin S. Brooke ◽  
Ying Wei Lum ◽  
Timothy M. Pawlik ◽  
Peter J. Pronovost ◽  
Bruce A. Perler ◽  
...  

Author(s):  
Eugenia Rinaldi ◽  
Sylvia Thun

HiGHmed is a German consortium in which eight university hospitals have agreed to cross-institutional data exchange through novel medical informatics solutions. The HiGHmed Use Case Infection Control group has modelled a set of infection-related data in the openEHR format. In order to establish interoperability with the other German consortia belonging to the same national initiative, we mapped the openEHR information to the Fast Healthcare Interoperability Resources (FHIR) format recommended within the initiative. FHIR enables fast exchange of data thanks to the discrete and independent data elements into which information is organized. Furthermore, to explore the possibility of maximizing analysis capabilities for our data set, we subsequently mapped the FHIR elements to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM). The OMOP data model is designed to support research that identifies and evaluates associations between interventions and the outcomes they cause. Mapping across standards makes it possible to exploit the particular strengths of each while establishing and/or maintaining interoperability. This article provides an overview of our experience in mapping infection-control-related data across three different standards: openEHR, FHIR, and OMOP CDM.
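As a rough illustration of the cross-standard mapping described above, the Python sketch below reshapes a minimal FHIR Observation (assumed to have already been derived from an openEHR composition) into an OMOP CDM MEASUREMENT-style row. The field selection and concept lookup are simplified assumptions, not the HiGHmed mapping tables.

```python
# Simplified sketch: map a minimal FHIR Observation to OMOP MEASUREMENT columns.
# Field selection and the concept lookup are illustrative assumptions.

def fhir_observation_to_omop(obs: dict, concept_lookup: dict[str, int]) -> dict:
    """Reshape one FHIR Observation into an OMOP MEASUREMENT-style row."""
    coding = obs["code"]["coding"][0]          # first coding, e.g. a LOINC code
    return {
        "person_id": obs["subject"]["reference"].split("/")[-1],
        "measurement_concept_id": concept_lookup.get(coding["code"], 0),
        "measurement_date": obs["effectiveDateTime"][:10],
        "value_as_number": obs.get("valueQuantity", {}).get("value"),
        "unit_source_value": obs.get("valueQuantity", {}).get("unit"),
    }

# Toy example; the concept_id value is a placeholder, not a real OMOP concept.
example = {
    "subject": {"reference": "Patient/123"},
    "code": {"coding": [{"system": "http://loinc.org", "code": "6690-2"}]},
    "effectiveDateTime": "2021-03-01T08:30:00Z",
    "valueQuantity": {"value": 9.4, "unit": "10*9/L"},
}
print(fhir_observation_to_omop(example, {"6690-2": 1234567}))
```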

