temporal validation
Recently Published Documents


TOTAL DOCUMENTS: 117 (five years: 70)
H-INDEX: 11 (five years: 3)

2022 ◽  
Vol 8 ◽  
Author(s):  
Chien-Liang Liu ◽  
You-Lin Tain ◽  
Yun-Chun Lin ◽  
Chien-Ning Hsu

Objective: This study aimed to identify phenotypic clinical features associated with acute kidney injury (AKI) to predict non-recovery from AKI at hospital discharge using electronic health record data.
Methods: Data for hospitalized patients in the AKI Recovery Evaluation Study were derived from a large healthcare delivery system in Taiwan between January 2011 and December 2017. Living patients with AKI non-recovery were used to derive and validate multiple predictive models. In total, 64 candidate variables, such as demographic characteristics, comorbidities, healthcare services utilization, laboratory values, and nephrotoxic medication use, were measured within 1 year before the index admission and during hospitalization for AKI.
Results: Among the top 20 important features in the predictive model, 8 had a positive effect on AKI non-recovery prediction: AKI during hospitalization, serum creatinine (SCr) level at admission, receipt of dialysis during hospitalization, baseline comorbidity of cancer, AKI at admission, baseline lymphocyte count, baseline potassium level, and low-density lipoprotein cholesterol level. The AKI non-recovery risk model using the eXtreme Gradient Boosting (XGBoost) algorithm achieved an area under the receiver operating characteristic curve (AUROC) of 0.807, with a sensitivity of 0.724 and a specificity of 0.738 in the temporal validation cohort.
Conclusion: The machine learning model can accurately predict AKI non-recovery using routinely collected health data in clinical practice. These results suggest that multifactorial risk factors are involved in AKI non-recovery, requiring patient-centered risk assessments and promotion of post-discharge AKI care to prevent AKI complications.
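The validation metrics reported for the temporal cohort (AUROC, sensitivity, specificity) can be computed directly from predicted risks and observed outcomes. A minimal, self-contained sketch, not the authors' code; the scores, labels, and 0.5 threshold below are invented:

```python
# Sketch of temporal-validation metrics: AUROC via the rank
# (Mann-Whitney) formulation, plus sensitivity and specificity
# at a chosen threshold. Data are invented for illustration.

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(scores, labels, threshold):
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.2, 0.1]  # predicted risks
labels = [1, 1, 0, 1, 0, 1, 0, 0]                    # non-recovery = 1
print(auroc(scores, labels))        # 0.8125
se, sp = sens_spec(scores, labels, 0.5)
```

The same three numbers fall out of any binary risk model; only the threshold choice trades sensitivity against specificity.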


Atmosphere ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 143
Author(s):  
Hamed Hafizi ◽  
Ali Arda Sorman

Precipitation measurement with high spatial and temporal resolution over the highly elevated and complex terrain of eastern Turkey is essential for optimal management of water structures. The objective of this study is to evaluate the consistency and hydrologic utility of 13 Gridded Precipitation Datasets (GPDs) (CPCv1, MSWEPv2.8, ERA5, CHIRPSv2.0, CHIRPv2.0, IMERGHHFv06, IMERGHHEv06, IMERGHHLv06, TMPA-3B42v7, TMPA-3B42RTv7, PERSIANN-CDR, PERSIANN-CCS, and PERSIANN) over a mountainous test basin (Karasu) at a daily time step. The Kling-Gupta Efficiency (KGE), including its three components (correlation, bias, and variability ratio), and the Nash-Sutcliffe Efficiency (NSE) are used for GPD evaluation. Moreover, the Hanssen-Kuiper (HK) score is used to evaluate the detectability strength of the selected GPDs for different precipitation events. Precipitation frequencies are evaluated using the Probability Density Function (PDF). Daily precipitation data from 23 meteorological stations serve as the reference for the period 2015–2019. The TUW model is used for hydrological simulations against observed discharge at the outlet of the basin. The model is calibrated in two ways: with observed precipitation only, and with each GPD individually. Overall, CPCv1 shows the highest performance (median KGE: 0.46) over time and space. MSWEPv2.8 and CHIRPSv2.0 deliver the best performance among multi-source merged datasets, followed by CHIRPv2.0, whereas IMERGHHFv06, PERSIANN-CDR, and TMPA-3B42v7 show poor performance. IMERGHHLv06 shows the best performance (median KGE: 0.17) among the satellite-based GPDs (PERSIANN-CCS, PERSIANN, IMERGHHEv06, and TMPA-3B42RTv7). ERA5 performs well in both spatial and temporal validation compared to satellite-based GPDs, though it shows low performance in streamflow simulation. Overall, all gridded precipitation datasets generate streamflow better when the model is calibrated with each GPD separately.
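The KGE and its three components can be computed as below. This sketch uses the 2012 formulation, in which the variability ratio is a ratio of coefficients of variation, matching the three components named in the abstract (correlation, bias, variability ratio); the sim/obs series are invented:

```python
import math

# Sketch of the Kling-Gupta Efficiency (2012 form) with its three
# components: correlation r, bias ratio beta, variability ratio gamma.
# The simulated/observed series below are invented for illustration.

def kge(sim, obs):
    n = len(sim)
    mu_s, mu_o = sum(sim) / n, sum(obs) / n
    sd_s = math.sqrt(sum((x - mu_s) ** 2 for x in sim) / n)
    sd_o = math.sqrt(sum((x - mu_o) ** 2 for x in obs) / n)
    cov = sum((s - mu_s) * (o - mu_o) for s, o in zip(sim, obs)) / n
    r = cov / (sd_s * sd_o)                  # linear correlation
    beta = mu_s / mu_o                       # bias ratio
    gamma = (sd_s / mu_s) / (sd_o / mu_o)    # variability (CV) ratio
    score = 1 - math.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
    return score, r, beta, gamma

obs = [1.0, 3.0, 2.0, 5.0, 4.0]   # observed daily precipitation (mm)
sim = [1.2, 2.8, 2.1, 4.6, 4.3]   # a gridded dataset's estimates (mm)
score, r, beta, gamma = kge(sim, obs)
```

A perfect dataset gives r = beta = gamma = 1 and hence KGE = 1; each component's distance from 1 penalizes the score symmetrically.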


2022 ◽  
Author(s):  
Mark Ebell ◽  
Roya Hamadani ◽  
Autumn Kieber-Emmons

Importance: Outpatient physicians need guidance to support their clinical decisions regarding management of patients with COVID-19, in particular whether to hospitalize a patient and, if managed as an outpatient, how closely to follow them.
Objective: To develop and prospectively validate a clinical prediction rule that predicts the likelihood of hospitalization for outpatients with COVID-19 without requiring laboratory testing or imaging.
Design: Derivation and temporal validation of a clinical prediction rule, and prospective validation of two externally derived clinical prediction rules.
Setting: Primary and express care clinics in a Pennsylvania health system.
Participants: Patients 12 years and older presenting to outpatient clinics with a positive polymerase chain reaction test for COVID-19.
Main outcomes and measures: Classification accuracy (percentage in each risk group hospitalized) and area under the receiver operating characteristic curve (AUC).
Results: Overall, 7.4% of outpatients in the early derivation cohort (5,843 patients presenting before 3/1/21) and 5.5% in the late validation cohort (3,806 patients presenting 3/1/21 or later) were ultimately hospitalized. We developed and temporally validated three risk scores that all included age, dyspnea, and the presence of comorbidities, adding respiratory rate for the second score and oxygen saturation for the third. All had very good overall accuracy (AUC 0.77 to 0.78) and classified over half of patients in the validation cohort as very low risk with a 1.7% or lower likelihood of hospitalization. Two externally derived risk scores identified more low-risk patients, but with a higher overall risk of hospitalization (2.8%).
Conclusions and relevance: Simple risk scores applicable to outpatient and telehealth settings can identify patients with very low (1.6% to 1.7%), low (5.2% to 5.9%), moderate (14.7% to 15.6%), and high (32.0% to 34.2%) risk of hospitalization.
The Lehigh Outpatient COVID Hospitalization (LOCH) risk score is available online as a free app: https://ebell-projects.shinyapps.io/LehighRiskScore/.
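To illustrate how a point-based rule of this shape works, here is a hypothetical scorer over the three predictors shared by all three scores (age, dyspnea, comorbidities). Every point value, age band, and threshold below is invented; the actual LOCH score is published via the app linked above:

```python
# Hypothetical point-based risk rule. All point values, age bands, and
# group thresholds are invented for illustration and are NOT the LOCH
# score; they only demonstrate the structure of such a rule.

def risk_points(age, dyspnea, n_comorbidities):
    points = 0
    if age >= 65:                        # invented age bands
        points += 2
    elif age >= 50:
        points += 1
    if dyspnea:
        points += 2
    points += min(n_comorbidities, 2)    # cap the comorbidity contribution
    return points

def risk_group(points):
    # Invented thresholds mapping points onto the four published strata.
    if points == 0:
        return "very low"
    if points <= 2:
        return "low"
    if points <= 4:
        return "moderate"
    return "high"

print(risk_group(risk_points(age=45, dyspnea=False, n_comorbidities=0)))  # very low
print(risk_group(risk_points(age=72, dyspnea=True, n_comorbidities=3)))   # high
```

The appeal of this design for telehealth is that every input is obtainable over the phone, with no laboratory tests or imaging.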


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0262193
Author(s):  
Monica I. Lupei ◽  
Danni Li ◽  
Nicholas E. Ingraham ◽  
Karyn D. Baum ◽  
Bradley Benson ◽  
...  

Objective: To prospectively evaluate a logistic regression-based machine learning (ML) prognostic algorithm implemented in real time as a clinical decision support (CDS) system for symptomatic persons under investigation (PUI) for coronavirus disease 2019 (COVID-19) in the emergency department (ED).
Methods: We developed a model in a 12-hospital system using training and validation followed by real-time assessment. LASSO-guided feature selection included demographics, comorbidities, home medications, and vital signs. We constructed a logistic regression-based ML algorithm to predict "severe" COVID-19, defined as requiring intensive care unit (ICU) admission or invasive mechanical ventilation, or dying in or out of hospital. Training data included 1,469 adult patients who tested positive for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) within 14 days of acute care. We performed: 1) temporal validation in 414 SARS-CoV-2-positive patients, 2) validation in a PUI set of 13,271 patients with a symptomatic SARS-CoV-2 test during an acute care visit, and 3) real-time validation in 2,174 ED patients with a PUI test or positive SARS-CoV-2 result. Subgroup analysis was conducted across race and gender to ensure equity in performance.
Results: The algorithm performed well in pre-implementation validations for predicting COVID-19 severity: 1) the temporal validation had an area under the receiver operating characteristic curve (AUROC) of 0.87 (95% CI: 0.83, 0.91); 2) validation in the PUI population had an AUROC of 0.82 (95% CI: 0.81, 0.83). The ED CDS system performed well in real time with an AUROC of 0.85 (95% CI: 0.83, 0.87). Zero patients in the lowest quintile developed "severe" COVID-19, whereas patients in the highest quintile did so in 33.2% of cases. The models performed without significant differences between genders and among races/ethnicities (all p-values > 0.05).
Conclusion: A logistic regression-based, ML-enabled CDS can be developed, validated, and implemented with high performance across multiple hospitals while being equitable and maintaining performance in real-time validation.
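The quintile check reported in the results (no severe cases in the lowest fifth, 33.2% in the highest) amounts to sorting patients by predicted risk, cutting the cohort into fifths, and tabulating event rates per fifth. A sketch with invented risks and outcomes:

```python
# Sketch (invented data) of quintile-based risk stratification: sort by
# predicted risk, split into five equal groups, report each group's
# observed event rate.

def quintile_event_rates(risks, outcomes):
    paired = sorted(zip(risks, outcomes))            # ascending risk
    n = len(paired)
    rates = []
    for q in range(5):
        chunk = paired[q * n // 5:(q + 1) * n // 5]
        rates.append(sum(y for _, y in chunk) / len(chunk))
    return rates

risks = [i / 20 for i in range(20)]                  # toy predicted risks
outcomes = [0, 0, 0, 0, 0, 0, 0, 0,                  # toy severe outcomes
            0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0]
print(quintile_event_rates(risks, outcomes))         # [0.0, 0.0, 0.25, 0.5, 0.75]
```

A well-calibrated model should show event rates rising monotonically across the quintiles, as in the study's lowest-to-highest spread of 0% to 33.2%.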


Author(s):  
Takahiro Itaya ◽  
Yusuke Murakami ◽  
Akiko Ota ◽  
Ryo Shimomura ◽  
Tomoko Fukushima ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Juliana Foinquinos ◽  
Maria do Carmo Duarte ◽  
Jose Natal Figueiroa ◽  
Jailson B. Correia ◽  
Nara Vasconcelos Cavalcanti

Objectives. To perform a temporal validation of a predictive model for death in children with visceral leishmaniasis (VL). Methods. We performed a temporal validation of a children-exclusive predictive model of death due to VL (the Sampaio et al. 2010 model) using a retrospective cohort, hereafter called the validation cohort. The validation cohort was a convenience sample of 156 patients under 15 years of age hospitalized with VL between 2008 and 2018. Patients included in the Sampaio et al. 2010 study, here called the derivation cohort, comprised 546 patients hospitalized in the same hospital setting from 1996 to 2006. The calibration and discriminative capacity of the model to predict death by VL in the validation cohort were assessed, and the model's coefficients were readjusted by logistic recalibration. The calibration of the updated model was tested using the Hosmer–Lemeshow and Spiegelhalter tests, and discrimination was measured by the area under a ROC curve. Results. Lethality in the validation cohort was 6.4%. The Sampaio et al. 2010 model showed inadequate calibration in the validation cohort (Spiegelhalter test: p = 0.007) and unsatisfactory discriminative capacity (area under the ROC curve = 0.618). After coefficient readjustment, the model showed adequate calibration (Spiegelhalter test: p = 0.988) and satisfactory discrimination (AUROC = 0.762). The score developed by Sampaio et al. 2010 attributed 1 point to the variables dyspnea, associated infections, and neutrophil count <500/mm3; 2 points to jaundice and mucosal bleeding; and 3 points to platelet count <50,000/mm3. In the recalibrated model, each variable was assigned 1 point. Conclusion.
The temporally validated model, after coefficient readjustment, presented adequate calibration and discrimination to predict death in children hospitalized with VL.
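Logistic recalibration of the kind described keeps the original model's linear predictor and refits only a new intercept and slope on the validation cohort. A minimal sketch using plain gradient descent; the linear predictors and outcomes are invented:

```python
import math

# Sketch of logistic recalibration: keep the original model's linear
# predictor lp, refit only intercept a and slope b on new data by
# gradient descent on the logistic log-likelihood. Data are invented.

def recalibrate(lp, y, lr=0.1, steps=5000):
    a, b = 0.0, 1.0                      # start from the original model
    n = len(lp)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for x, yi in zip(lp, y):
            p = 1 / (1 + math.exp(-(a + b * x)))
            grad_a += p - yi             # d(neg log-lik)/da
            grad_b += (p - yi) * x       # d(neg log-lik)/db
        a -= lr * grad_a / n
        b -= lr * grad_b / n
    return a, b

lp = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]   # original linear predictors
y = [0, 0, 0, 1, 1, 1]                   # observed deaths (invented)
a, b = recalibrate(lp, y)
```

Because only two parameters are re-estimated, recalibration needs far fewer events than full model redevelopment, which matters in a cohort with only 10 deaths (6.4% of 156).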


2021 ◽  
Author(s):  
Nikolaos Mastellos ◽  
Richard Betteridge ◽  
Prasanth Peddaayyavarla ◽  
Andrew Moran ◽  
Jurgita Kaubryte ◽  
...  

BACKGROUND: The impact of the COVID-19 pandemic on health care utilisation and associated costs has been significant, with one in ten patients becoming severely ill and being admitted to hospital with serious complications during the first wave of the pandemic. Risk prediction models can help health care providers identify high-risk patients in their populations and intervene to improve health outcomes and reduce associated costs.
OBJECTIVE: To develop and validate a hospitalisation risk prediction model for adult patients with laboratory-confirmed Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2).
METHODS: The model was developed using pre-linked and standardised data on adult patients with laboratory-confirmed SARS-CoV-2 from Cerner's population health management platform (HealtheIntent®) in the London Borough of Lewisham. A total of 14,203 patients who tested positive for SARS-CoV-2 between 1 March 2020 and 28 February 2021 were included in the development and internal validation cohorts. A second, temporal validation cohort covered the period from 1 March 2021 to 30 April 2021. The outcome variable was hospital admission. A generalised linear model was used to train the model, and its predictive performance was assessed using the area under the receiver operating characteristic curve (ROC-AUC).
RESULTS: Of the 14,203 patients included, 9,755 (68.7%) were assigned to the development cohort, 2,438 (17.2%) to the internal validation cohort, and 2,010 (14.1%) to the temporal validation cohort. A total of 917 (9.4%) patients were admitted to hospital in the development cohort, 210 (8.6%) in the internal validation cohort, and 204 (10.1%) in the temporal validation cohort. The model had a ROC-AUC of 0.85 in both the development and validation cohorts. The most predictive factors were older age, male sex, Asian or other ethnic minority background, obesity, chronic kidney disease, hypertension, and diabetes.
CONCLUSIONS: The COVID-19 hospitalisation risk prediction model demonstrated very good performance and can be used to stratify risk in the Lewisham population, helping providers reduce unnecessary hospital admissions and associated costs, improve patient outcomes, and target those at greatest risk to ensure full vaccination against SARS-CoV-2. Further research may examine the external validity of the model in other populations.
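The three-cohort design above (development, internal validation, temporal validation) amounts to holding out a later calendar window and randomly splitting the earlier one. A sketch with invented records and an assumed 80/20 split fraction:

```python
import random
from datetime import date

# Sketch of the cohort design: hold out a later calendar window as the
# temporal validation cohort and randomly split the earlier window into
# development and internal validation sets. The records and the 80/20
# development fraction are invented for illustration.

def split_cohorts(records, temporal_start, dev_frac=0.8, seed=42):
    temporal = [r for r in records if r["test_date"] >= temporal_start]
    earlier = [r for r in records if r["test_date"] < temporal_start]
    rng = random.Random(seed)            # reproducible random split
    rng.shuffle(earlier)
    cut = int(len(earlier) * dev_frac)
    return earlier[:cut], earlier[cut:], temporal

# Ten hypothetical patients tested monthly from March to December 2020.
records = [{"test_date": date(2020, m, 1)} for m in range(3, 13)]
dev, internal, temporal = split_cohorts(records, date(2020, 10, 1))
```

Holding out by calendar time rather than at random is what makes the third cohort a genuine temporal validation: it tests the model against population drift, not just sampling noise.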


2021 ◽  
Author(s):  
Shamil D. Cooray ◽  
Kushan De Silva ◽  
Joanne Enticott ◽  
Shrinkhala Dawadi ◽  
Jacqueline A. Boyle ◽  
...  

Introduction: The Monash early pregnancy prediction model calculates the risk of developing gestational diabetes mellitus (GDM) and has been externally validated internationally and implemented in practice; however, some gaps remain.
Objective: To validate and update the Monash GDM model, revising the ethnicity categorisation and updating to recent diagnostic criteria, to improve performance and generalisability.
Methods: Routine health data for singleton pregnancies from 2016 to 2018 in Australia included updated GDM diagnostic criteria. The original model's predictors were included (age, body mass index, diabetes family history, past history of GDM, past history of poor obstetric outcomes, and ethnicity), with ethnicity revised. The model-updating methods were: recalibration-in-the-large (Model A); re-estimation of intercept and slope (Model B); and revision of the coefficients using logistic regression (Model C1 with the original eight ethnicity categories, and Model C2 with the updated six ethnicity categories). Analysis included ten-fold cross-validation and performance measures (c-statistic, calibration-in-the-large, calibration slope, and expected-to-observed (E:O) ratio), and models were compared by closed testing using log-likelihood scores and the Akaike information criterion (AIC).
Results: In 26,474 singleton pregnancies (4,756, 18% with GDM), temporal validation of the original model was reasonable (c-statistic 0.698) but calibration was suboptimal (E:O ratio 0.485). Model C2 was preferred because of its high c-statistic (0.732) and significantly better performance in closed testing compared to the other models.
Conclusions: Updating the original model sustains predictive performance in a contemporary population, incorporating ethnicity data, recent diagnostic criteria, and a universal screening context. This supports the value of risk prediction models to guide risk-stratified care for women at risk of GDM.
Trial registration details: This study was registered as part of the PeRSonal GDM study on the Australian and New Zealand Clinical Trials Registry (ACTRN12620000915954); pre-results.
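The closed-testing step compares candidate updates by log-likelihood, penalised for the number of estimated parameters via AIC = 2k − 2·logL. A sketch with invented predicted probabilities and outcomes (the parameter counts are illustrative):

```python
import math

# Sketch of the model-comparison step: candidate updates are scored by
# log-likelihood, then penalised for their number of estimated
# parameters k via AIC = 2k - 2*logL. Predictions, outcomes, and the
# parameter counts are invented for illustration.

def log_likelihood(probs, y):
    return sum(yi * math.log(p) + (1 - yi) * math.log(1 - p)
               for p, yi in zip(probs, y))

def aic(probs, y, n_params):
    return 2 * n_params - 2 * log_likelihood(probs, y)

y = [1, 0, 1, 1, 0]
model_a = [0.7, 0.4, 0.6, 0.8, 0.3]   # e.g. recalibration only, k = 2
model_c = [0.9, 0.1, 0.8, 0.9, 0.2]   # e.g. full revision, k = 8
aic_a, aic_c = aic(model_a, y, 2), aic(model_c, y, 8)
```

In this toy case the fully revised model fits better but its extra parameters outweigh the gain, so the simpler update wins on AIC; a preferred Model C2, as in the study, means the revised coefficients improved fit enough to justify the added complexity.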


2021 ◽  
Vol 8 ◽  
Author(s):  
Evgeny A. Bakin ◽  
Oksana V. Stanevich ◽  
Mikhail P. Chmelevsky ◽  
Vasily A. Belash ◽  
Anastasia A. Belash ◽  
...  

Purpose: The aim of this research is to develop an accurate and interpretable aggregated score, not only for predicting hospitalization outcome (death/discharge) but also for daily assessment of a COVID-19 patient's condition.
Patients and Methods: In this single-center cohort study, real-world data collected within the first two waves of the COVID-19 pandemic were used (27.04.2020–03.08.2020 and 01.11.2020–19.01.2021, respectively). The first-wave data (1,349 cases) were used as a training set for score development, while the second-wave data (1,453 cases) were used as a validation set; there were no overlapping cases. For all available patient features, we tested their association with outcome. Significant features were taken for further analysis, and their partial sensitivity, specificity, and promptness were estimated. Sensitivity and specificity were then combined into a feature informativeness index. The developed score was derived as a weighted sum of nine features that showed the best trade-off between informativeness and promptness.
Results: Based on the training cohort (median age ± median absolute deviation 58 ± 13.3, females 55.7%), the following score was derived: APTT (4 points), CRP (3 points), D-dimer (4 points), glucose (4 points), hemoglobin (3 points), lymphocytes (3 points), total protein (6 points), urea (5 points), and WBC (4 points). Internal and temporal validation based on the second-wave cohort (age 60 ± 14.8, females 51.8%) showed that a sensitivity and a specificity over 90% may be achieved with an expected prediction range of more than 7 days. Moreover, we demonstrated high robustness of the score to the varying peculiarities of the pandemic.
Conclusions: Extensive application of the score during the pandemic showed its potential for optimizing patient management and improving medical staff attentiveness under high workload.
The transparent structure of the score, as well as tractable cutoff bounds, simplified its implementation into clinical practice. High cumulative informativeness of the nine score components suggests that these are the indicators that need to be monitored regularly during the follow-up of a patient with COVID-19.
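The score's structure is a weighted sum over the nine indicators. The point values below are those reported in the abstract, while the binary "abnormal" flags stand in for the score's cutoff bounds, which the abstract does not list, so any thresholding logic is a placeholder:

```python
# Sketch of the aggregated score's structure: a weighted sum over nine
# laboratory indicators. Point values are those reported in the abstract;
# the boolean "abnormal" flags are placeholders for the (unlisted)
# cutoff bounds of each indicator.

WEIGHTS = {
    "APTT": 4, "CRP": 3, "D-dimer": 4, "glucose": 4, "hemoglobin": 3,
    "lymphocytes": 3, "total_protein": 6, "urea": 5, "WBC": 4,
}

def aggregated_score(abnormal):
    """abnormal: dict mapping indicator name -> bool (beyond its cutoff)."""
    return sum(WEIGHTS[k] for k, flag in abnormal.items() if flag)

flags = {k: False for k in WEIGHTS}
flags["CRP"] = flags["D-dimer"] = True
print(aggregated_score(flags))   # 7
```

Because the score is a plain sum of small integer weights, a clinician can recompute it by hand from a day's laboratory panel, which is what makes it suited to daily condition tracking.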


2021 ◽  
pp. sextrans-2021-055222
Author(s):  
Hui Chen ◽  
Rusi Long ◽  
Tian Hu ◽  
Yaqi Chen ◽  
Rongxi Wang ◽  
...  

Objectives: Suboptimal adherence to antiretroviral therapy (ART) dramatically hampers the achievement of the UNAIDS HIV treatment targets. This study aimed to develop a theory-informed predictive model for ART adherence based on data from China.
Methods: A cross-sectional study was conducted in Shenzhen, China, in December 2020. Participants were recruited through snowball sampling and completed a survey that included sociodemographic characteristics, HIV clinical information, Information-Motivation-Behavioural Skills (IMB) constructs, and adherence to ART. CD4 counts and HIV viral load were extracted from medical records. A model to predict ART adherence was developed from a multivariable logistic regression with significant predictors selected by Least Absolute Shrinkage and Selection Operator (LASSO) regression. To evaluate the performance of the model, we tested discriminatory capacity using the concordance index (C-index) and calibration accuracy using the Hosmer-Lemeshow test.
Results: The average age of the 651 people living with HIV (PLHIV) in the training group was 34.1±8.4 years, with 20.1% reporting suboptimal adherence. The mean age of the 276 PLHIV in the validation group was 33.9±8.2 years, and the prevalence of poor adherence was 22.1%. The suboptimal adherence model incorporates five predictors: education level, alcohol use, side effects, objective abilities, and self-efficacy. Constructed from these predictors, the model showed a C-index of 0.739 (95% CI 0.703 to 0.772) in internal validation, which was confirmed to be 0.717 via bootstrapping validation and remained modest in temporal validation (C-index 0.676). Calibration was acceptable in both the training and validation groups (p>0.05).
Conclusions: Our model accurately estimates ART adherence behaviours. The prediction tool can help identify individuals at greater risk of poor adherence and guide tailored interventions to optimise adherence.
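For a binary outcome such as suboptimal adherence, the C-index coincides with the AUROC, and bootstrapping validation resamples the cohort to gauge the statistic's stability. A sketch on invented, perfectly separated data (so the expected C-index is exactly 1.0):

```python
import random

# Sketch (invented data): the concordance index (C-index) for a binary
# outcome, plus a simple bootstrap resampling loop loosely analogous to
# the bootstrapping validation described in the abstract.

def c_index(scores, labels):
    pairs = concordant = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] == 1 and labels[j] == 0:
                pairs += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    concordant += 0.5    # ties count half
    return concordant / pairs

def bootstrap_c(scores, labels, n_boot=200, seed=1):
    rng = random.Random(seed)
    values = []
    for _ in range(n_boot):
        sample = [rng.randrange(len(scores)) for _ in scores]
        ys = [labels[k] for k in sample]
        if 0 < sum(ys) < len(ys):        # resample must contain both classes
            values.append(c_index([scores[k] for k in sample], ys))
    return sum(values) / len(values)

scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # predicted risk of poor adherence
labels = [1, 1, 1, 0, 0, 0]               # suboptimal adherence = 1
```

On real data the bootstrap C-index typically falls below the apparent one, which is the optimism correction behind the drop from 0.739 to 0.717 reported above.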

