Primary care risk stratification in COPD using routinely collected data: a secondary data analysis

Author(s):  
Matthew Johnson ◽  
Lucy Rigge ◽  
David Culliford ◽  
Lynn Josephs ◽  
Mike Thomas ◽  
...  

Abstract Most clinical contacts with chronic obstructive pulmonary disease (COPD) patients take place in primary care, presenting an opportunity for proactive clinical management. Electronic health records could be used to risk stratify diagnosed patients in this setting, but may be limited by poor data quality or completeness. We developed a risk stratification database algorithm using the DOSE index (Dyspnoea, Obstruction, Smoking and Exacerbation) with routinely collected primary care data, aiming to calculate up to three repeated risk scores per patient over five years, each separated by at least one year. Among 10,393 patients with diagnosed COPD, sufficient primary care data were present to calculate at least one risk score for 77.4%, and the maximum of three risk scores for 50.6%. Linked secondary care data revealed primary care under-recording of hospital exacerbations, which translated to a slight, non-significant reduction in the cohort's average risk score, and an understated risk group allocation for less than 1% of patients. Algorithmic calculation of the DOSE index is possible using primary care data, and appears robust to the absence of linked secondary care data. The DOSE index appears to be a simple and practical means of incorporating risk stratification into the routine primary care of COPD patients, but further research is needed to evaluate its clinical utility in this setting. Although secondary analysis of routinely collected primary care data could benefit clinicians, patients and the health system, standardised data collection and improved data quality and completeness are also needed.
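To make the kind of database algorithm described above concrete, the sketch below computes a DOSE score from four routinely coded fields. The banding follows the published DOSE index (Jones et al., 2009); the field names, example values and the high-risk cut-off are illustrative assumptions, not the authors' actual implementation.

```python
def dose_index(mrc_dyspnoea: int, fev1_pct_predicted: float,
               current_smoker: bool, exacerbations_last_year: int) -> int:
    """DOSE index (0-8) from routine primary care fields.
    Banding follows Jones et al. (2009); illustrative only."""
    # Dyspnoea: MRC grade 1-2 -> 0, 3 -> 1, 4 -> 2, 5 -> 3
    dyspnoea = {1: 0, 2: 0, 3: 1, 4: 2, 5: 3}[mrc_dyspnoea]
    # Obstruction: FEV1 % predicted >= 50 -> 0, 30-49 -> 1, < 30 -> 2
    obstruction = 0 if fev1_pct_predicted >= 50 else (1 if fev1_pct_predicted >= 30 else 2)
    # Smoking status: current smoker -> 1, otherwise 0
    smoking = 1 if current_smoker else 0
    # Exacerbations in the previous year: 0-1 -> 0, 2-3 -> 1, > 3 -> 2
    exacerbation = 0 if exacerbations_last_year <= 1 else (1 if exacerbations_last_year <= 3 else 2)
    return dyspnoea + obstruction + smoking + exacerbation

# Example: MRC 3, FEV1 45% predicted, current smoker, 2 exacerbations -> 4,
# the threshold often taken to indicate higher risk.
print(dose_index(3, 45.0, True, 2))
```

A repeated-scores pipeline would run this over each patient's record at yearly intervals, skipping windows where any of the four inputs is missing, which is how incomplete data reduces the proportion of calculable scores.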

2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Carly A. Conran ◽  
Zhuqing Shi ◽  
William Kyle Resurreccion ◽  
Rong Na ◽  
Brian T. Helfand ◽  
...  

Abstract Background Genome-wide association studies have identified thousands of disease-associated single nucleotide polymorphisms (SNPs). A subset of these SNPs may be additively combined to generate genetic risk scores (GRSs) that confer risk for a specific disease. Although the clinical validity of GRSs to predict risk of specific diseases has been well established, there is still a great need to determine their clinical utility by applying GRSs in primary care for cancer risk assessment and targeted intervention. Methods This clinical study involved 281 primary care patients without a personal history of breast, prostate or colorectal cancer who were 40–70 years old. DNA was obtained from a pre-existing biobank at NorthShore University HealthSystem. GRSs for colorectal cancer and breast or prostate cancer were calculated and shared with participants through their primary care provider. Additional data were gathered using questionnaires as well as electronic medical record information. A t-test or chi-square test was applied for comparison of demographic and key clinical variables among different groups. Results The median age of the 281 participants was 58 years and the majority were female (66.6%). One hundred and one (36.9%) participants received 2 low risk scores, 99 (35.2%) received 1 low risk and 1 average risk score, 37 (13.2%) received 1 low risk and 1 high risk score, 23 (8.2%) received 2 average risk scores, 21 (7.5%) received 1 average risk and 1 high risk score, and no one received 2 high risk scores. Before receiving GRSs, younger patients and women reported significantly more worry about their risk of developing cancer. After receiving GRSs, those who received at least one high GRS reported significantly more worry about developing cancer. There were no significant differences by gender, age, or GRS in participants' reported optimism about their future health, either before or after receiving GRS results. Conclusions Genetic risk scores that quantify an individual's risk of developing breast, prostate and colorectal cancers as compared with a race-defined population average risk have potential clinical utility as a tool for risk stratification and to guide cancer screening in a primary care setting.
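As background for how such scores are built, the sketch below shows the usual additive construction: risk-allele counts weighted by log odds ratios, centred on the population-average expected score so the result reads as risk relative to a population average. The SNP weights and allele frequencies are invented for illustration and are not the study's actual panel.

```python
import math

# Hypothetical SNP panel: (patient's risk-allele count 0/1/2,
# per-allele odds ratio, population risk-allele frequency).
snps = [
    (2, 1.20, 0.30),
    (0, 1.10, 0.45),
    (1, 1.35, 0.10),
]

# Additive GRS: sum of allele counts weighted by log odds ratios.
score = sum(count * math.log(odds) for count, odds, _ in snps)

# Expected score for an average member of the reference population
# (2 * allele frequency is the expected allele count per SNP).
expected = sum(2 * freq * math.log(odds) for _, odds, freq in snps)

# Exponentiating the difference gives risk relative to the population
# average: 1.0 ~ average, > 1.0 elevated, < 1.0 reduced.
print(f"Relative risk vs population average: {math.exp(score - expected):.2f}")
```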


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
David A. Dorr ◽  
Rachel L. Ross ◽  
Deborah Cohen ◽  
Devan Kansagara ◽  
Katrina Ramsey ◽  
...  

Abstract Background Patients with complex health care needs may suffer adverse outcomes from fragmented and delayed care, reducing well-being and increasing health care costs. Health reform efforts, especially those in primary care, attempt to mitigate the risk of adverse outcomes by better targeting resources to those most in need. However, predicting who is susceptible to adverse outcomes, such as unplanned hospitalizations, ED visits, or other potentially avoidable expenditures, can be difficult, and providing intensive levels of resources to all patients is neither wanted nor efficient. Our objective was to understand whether primary care teams can predict patient risk better than standard risk scores. Methods Six primary care practices risk stratified their entire patient population over a 2-year period, and worked to mitigate risk for those at high risk through care management and coordination. Individual patient risk scores created by the practices were collected and compared to a common risk score (Hierarchical Condition Categories, HCC) in their ability to predict future expenditures, ED visits, and hospitalizations. Accuracy of predictions, sensitivity, positive predictive values (PPV), and c-statistics were calculated for each risk scoring type. Analyses were stratified by whether the practice used intuition alone, an algorithm alone, or adjudicated an algorithmic risk score. Results In all, 40,342 patients were risk stratified. Practice scores had 38.6% agreement with HCC scores on identification of high-risk patients. For the 3,381 patients with reliable outcomes data, accuracy was high (0.71–0.88) but sensitivity and PPV were low (0.16–0.40). Practice-created scores had 0.02–0.14 lower sensitivity, specificity and PPV compared to HCC in prediction of outcomes. Practices using adjudication had, on average, 0.16 higher sensitivity. Conclusions Practices using simple risk stratification techniques had slightly worse accuracy in predicting common outcomes than HCC, but adjudication improved prediction.
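For readers wanting to reproduce this kind of comparison, the sketch below computes accuracy, sensitivity and PPV for a binary high-risk flag against an observed outcome, plus a c-statistic for a continuous score. The data and the scikit-learn dependency are illustrative assumptions, not the study's analysis code.

```python
from sklearn.metrics import roc_auc_score

def flag_metrics(flag, outcome):
    """Accuracy, sensitivity and PPV of a binary high-risk flag
    against a binary outcome (e.g. any hospitalization)."""
    tp = sum(f and o for f, o in zip(flag, outcome))
    fp = sum(f and not o for f, o in zip(flag, outcome))
    fn = sum(not f and o for f, o in zip(flag, outcome))
    tn = sum(not f and not o for f, o in zip(flag, outcome))
    n = tp + fp + fn + tn
    return {"accuracy": (tp + tn) / n,
            "sensitivity": tp / (tp + fn),
            "ppv": tp / (tp + fp)}

# Toy data: a practice-assigned flag and a continuous HCC-style score
# evaluated against the same observed outcomes.
outcome       = [1, 0, 0, 1, 0, 1, 0, 0]
practice_flag = [1, 0, 1, 0, 0, 1, 0, 0]
hcc_score     = [0.9, 0.2, 0.4, 0.7, 0.1, 0.8, 0.3, 0.2]

print(flag_metrics(practice_flag, outcome))
print("c-statistic:", roc_auc_score(outcome, hcc_score))
```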


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Cristina Mannie ◽  
Hadi Kharrazi

Abstract Background Comorbidities are strong predictors of current and future healthcare needs and costs; however, comorbidities are not evenly distributed geographically. A growing need has emerged for comorbidity surveillance that can inform decision-making. Comorbidity-derived risk scores are increasingly being used as valuable measures of individual health to describe and explain disease burden in populations. Methods This study assessed the geographical distribution of comorbidity and its associated financial implications among commercially insured individuals in South Africa (SA). A retrospective, cross-sectional analysis was performed comparing the geographical distribution of comorbidities for 2.6 million commercially insured individuals over 2016–2017, stratified by geographical districts in SA. We applied the Johns Hopkins ACG® System across the insurance claims data of a large health plan administrator in SA to measure comorbidity as a risk score for each individual. We aggregated individual risk scores to determine the average risk score per district, also known as the comorbidity index (CMI), to describe the overall disease burden of each district. Results We observed consistently high CMI scores in districts of the Free State and KwaZulu-Natal provinces for all population groups before and after age adjustment. Some areas exhibited almost 30% higher healthcare utilization after age adjustment. Districts in the Northern Cape and Limpopo provinces had the lowest CMI scores with 40% lower than expected healthcare utilization in some areas after age adjustment. Conclusions Our results show underlying disparities in CMI at national, provincial, and district levels. Use of geo-level CMI scores, along with other social data affecting health outcomes, can enable public health departments to improve the management of disease burdens locally and nationally. Our results could also improve the identification of underserved individuals, hence bridging the gap between public health and population health management efforts.
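As a sketch of how individual scores aggregate to a district-level comorbidity index, the pandas snippet below averages member-level risk scores per district and then applies a simple indirect age adjustment (observed district mean over the mean expected from national age-band averages). District names, age bands and scores are invented; the study used the ACG System's own risk scores.

```python
import pandas as pd

# Toy member-level data: one ACG-style risk score per insured individual.
df = pd.DataFrame({
    "district":   ["A", "A", "B", "B", "B", "C"],
    "age_band":   ["40-59", "60+", "40-59", "60+", "60+", "40-59"],
    "risk_score": [1.2, 2.5, 0.8, 1.9, 2.1, 0.6],
})

# Crude comorbidity index (CMI): mean individual risk score per district.
cmi = df.groupby("district")["risk_score"].mean()

# Indirect age adjustment: expected score per member from national
# age-band means, then the ratio of observed to expected per district.
df["expected"] = df["age_band"].map(df.groupby("age_band")["risk_score"].mean())
means = df.groupby("district")[["risk_score", "expected"]].mean()
adjusted = means["risk_score"] / means["expected"]

print(cmi)       # crude CMI per district
print(adjusted)  # > 1: higher than expected utilization after age adjustment
```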


Author(s):  
R. Rozemeijer ◽  
W. P. van Bezouwen ◽  
N. D. van Hemert ◽  
J. A. Damen ◽  
S. Koudstaal ◽  
...  

Abstract Background Multiple scores have been proposed to guide risk stratification after percutaneous coronary intervention. This study assessed the performance of the PRECISE-DAPT, PARIS and CREDO-Kyoto risk scores in predicting post-discharge ischaemic or bleeding events. Methods A total of 1491 patients treated with latest-generation drug-eluting stent implantation were evaluated. Risk scores for post-discharge ischaemic or bleeding events were calculated and directly compared. Prognostic performance of the risk scores was assessed with calibration, Harrell's c-statistic, net reclassification index and decision curve analyses. Results Post-discharge ischaemic events occurred in 56 patients (3.8%) and post-discharge bleeding events in 34 patients (2.3%) within the first year after the invasive procedure. The c-statistic for the PARIS ischaemic risk score was marginal (0.59, 95% confidence interval (CI) 0.51–0.68), whereas that for the CREDO-Kyoto ischaemic risk score was moderate (0.68, 95% CI 0.60–0.75). With regard to post-discharge bleeding events, CREDO-Kyoto displayed moderate discrimination (c-statistic 0.67, 95% CI 0.56–0.77), whereas PRECISE-DAPT (0.59, 95% CI 0.48–0.69) and PARIS (0.55, 95% CI 0.44–0.65) had marginal discriminative capacity. Net reclassification index and decision curve analysis favoured CREDO-Kyoto-derived bleeding risk assessment. Conclusion In this contemporary all-comer population, the PARIS and PRECISE-DAPT risk scores did not stand up to independent testing for post-discharge bleeding events. CREDO-Kyoto-derived risk stratification was associated with moderate predictive capability for post-discharge ischaemic or bleeding events. Future studies are warranted to improve risk stratification, with more focus on robustness and rigorous testing.
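Decision curve analysis, which the abstract uses to compare the scores, reduces to a simple net-benefit formula: at a chosen risk threshold, net benefit = TP/n − FP/n × threshold/(1 − threshold). The sketch below applies it to two hypothetical sets of predicted bleeding risks; the data are invented, not the study's.

```python
def net_benefit(risk_pred, outcome, threshold):
    """Net benefit at a risk threshold, as in decision curve analysis
    (Vickers & Elkin, 2006): TP/n - FP/n * threshold / (1 - threshold)."""
    n = len(outcome)
    treat = [p >= threshold for p in risk_pred]
    tp = sum(t and o for t, o in zip(treat, outcome))
    fp = sum(t and not o for t, o in zip(treat, outcome))
    return tp / n - fp / n * threshold / (1 - threshold)

# Hypothetical predicted bleeding risks from two scores, same outcomes.
outcome = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
score_a = [0.02, 0.03, 0.12, 0.04, 0.09, 0.01, 0.06, 0.02, 0.15, 0.03]
score_b = [0.04, 0.02, 0.06, 0.08, 0.04, 0.02, 0.07, 0.05, 0.09, 0.04]

# Whichever score yields higher net benefit at clinically relevant
# thresholds would be favoured, as CREDO-Kyoto was here.
for name, s in [("score A", score_a), ("score B", score_b)]:
    print(name, round(net_benefit(s, outcome, threshold=0.05), 3))
```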


2018 ◽  
Vol 31 (5) ◽  
pp. 653-660 ◽  
Author(s):  
Rachel C. Ambagtsheer ◽  
Justin Beilby ◽  
Julia Dabravolskaj ◽  
Marjan Abbasi ◽  
Mandy M. Archibald ◽  
...  

2010 ◽  
Vol 103 (05) ◽  
pp. 968-975 ◽  
Author(s):  
Alessandro Filippi ◽  
Marianna Alacqua ◽  
Warren Cowell ◽  
Annabelle Shakespeare ◽  
Lorenzo Mantovani ◽  
...  

Summary The aims of this study were to investigate trends in the incidence of diagnosed atrial fibrillation (AF), to identify factors associated with the prescription of antithrombotics (ATs), and to assess persistence with oral anticoagulant (OAC) treatment in primary care. Data were obtained from 400 Italian primary care physicians providing information to the Health Search/Thales Database from 2001 to 2004. Over the study period, the age-standardised incidence of AF fell from 3.9 to 3.0 cases per 1,000 person-years in males, and from 3.6 to 3.0 cases per 1,000 person-years in females. During the study period, 2,016 (37.2%) patients had no prescription, 1,663 (30.7%) were prescribed an antiplatelet (AP) agent, 1,440 (26.6%) were prescribed an OAC and 301 (5.5%) had both prescriptions. The date of diagnosis (p = 0.0001) affected the likelihood of receiving an OAC. AP use, but not OAC use, increased significantly with a worsening stroke risk profile as measured by the CHADS2 risk score. Older age increased the probability (p < 0.0001) of receiving an AP, but not an OAC. Approximately 42% and 24% of patients persisted with OAC treatment at one and two years, respectively; the remainder interrupted or discontinued their treatment. Underuse and discontinuation of OAC treatment are common in incident AF patients. Risk stratification only partially influences AT management.
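The CHADS2 score referenced here is itself a simple additive rule, shown below for context: one point each for congestive heart failure, hypertension, age 75 or over, and diabetes, plus two points for prior stroke or TIA. This is the standard published definition; the example patient is invented.

```python
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2 stroke-risk score for atrial fibrillation (range 0-6)."""
    return (int(chf) + int(hypertension) + int(age >= 75)
            + int(diabetes) + 2 * int(prior_stroke_or_tia))

# Example: a 78-year-old with hypertension and no other risk factors
# scores 2, a level at which guidelines generally favoured OAC.
print(chads2(chf=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_or_tia=False))
```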


2018 ◽  
Vol 31 (3) ◽  
pp. 203-213
Author(s):  
Yvonne Mei Fong Lim ◽  
Maryati Yusof ◽  
Sheamini Sivasampu

Purpose The purpose of this paper is to assess the quality of National Medical Care Survey data. Design/methodology/approach Data completeness and representativeness were computed for all observations, while other data quality measures were assessed using a 10 per cent sample from the National Medical Care Survey database; i.e., 12,569 primary care records from 189 public and private practices were included in the analysis. Findings Data field completion ranged from 69 to 100 per cent. Error rates for data transfer from paper to the web-based application varied between 0.5 and 6.1 per cent. Error rates arising from diagnosis and clinical process coding were higher than those for medication coding. Data fields that involved free-text entry were more prone to errors than those involving selection from menus. The authors found that completeness, accuracy, coding reliability and representativeness were generally good, while data timeliness needs to be improved. Research limitations/implications Only data entered into the web-based application were examined. Data omissions and errors in the original questionnaires were not covered. Practical implications Results from this study provide informative and practicable approaches to improving primary health care data completeness and accuracy, especially in developing nations where resources are limited. Originality/value Primary care data quality studies in developing nations are limited. Understanding errors and missing data enables researchers and health service administrators to prevent quality-related problems in primary care data.
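The two headline measures here, field completeness and transfer error rate, are straightforward to compute; the sketch below shows one way, with invented records standing in for the survey data.

```python
import pandas as pd

# Toy encounter records; None marks a missing field.
records = pd.DataFrame({
    "diagnosis_code": ["J44", None, "I10", "E11", None],
    "medication":     ["salbutamol", "metformin", None, "metformin", "aspirin"],
})

# Completeness: percentage of non-missing entries per field.
print((records.notna().mean() * 100).round(1))

# Transfer error rate: share of fields keyed into the web application
# that disagree with the paper source.
paper = ["J44", "J45", "I10", "E11", "M54"]
keyed = ["J44", "J45", "I10", "E14", "M54"]  # one transcription error
errors = sum(p != k for p, k in zip(paper, keyed))
print(f"Transfer error rate: {errors / len(paper) * 100:.1f}%")
```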


Author(s):  
Olga Kostopoulou ◽  
Christopher Tracey ◽  
Brendan C Delaney

Abstract Objective Routine primary care data may be used for the derivation of clinical prediction rules and risk scores. We sought to measure the impact of a decision support system (DSS) on data completeness and freedom from bias. Materials and Methods We used the clinical documentation of 34 UK general practitioners who took part in a previous study evaluating the DSS. They consulted with 12 standardized patients. In addition to suggesting diagnoses, the DSS facilitates data coding. We compared the documentation from consultations with the electronic health record (EHR) (baseline consultations) vs consultations with the EHR-integrated DSS (supported consultations). We measured the proportion of EHR data items related to the physician's final diagnosis. We expected that in baseline consultations, physicians would document only or predominantly observations related to their diagnosis, while in supported consultations, they would also document other observations as a result of exploring more diagnoses and/or ease of coding. Results Supported documentation contained significantly more codes (incidence rate ratio [IRR] = 5.76 [4.31, 7.70], P < .001) and less free text (IRR = 0.32 [0.27, 0.40], P < .001) than baseline documentation. As expected, the proportion of diagnosis-related data was significantly lower (b = −0.08 [−0.11, −0.05], P < .001) in the supported consultations, and this was the case for both codes and free text. Conclusions We provide evidence that data entry in the EHR is incomplete and reflects physicians' cognitive biases. This has serious implications for epidemiological research that uses routine data. A DSS that facilitates and motivates data entry during the consultation can improve routine documentation.
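The incidence rate ratios reported here come from count models; a minimal way to reproduce the idea is a Poisson regression of per-consultation code counts on an indicator of DSS use, with the exponentiated coefficient read as the IRR. The data below are invented and statsmodels is assumed; this is not the authors' analysis code.

```python
import numpy as np
import statsmodels.api as sm

# Toy data: number of coded observations per consultation, with an
# indicator for whether the EHR-integrated DSS was used.
codes = np.array([2, 1, 3, 2, 9, 12, 8, 11])
dss   = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Poisson regression of code counts on DSS use; exponentiating the
# DSS coefficient gives the incidence rate ratio (IRR).
fit = sm.GLM(codes, sm.add_constant(dss),
             family=sm.families.Poisson()).fit()
irr = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])
print(f"IRR = {irr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```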

