Diagnostic and Prognostic Research
Latest Publications

TOTAL DOCUMENTS: 110 (five years: 66)
H-INDEX: 10 (five years: 5)

Published by Springer (BioMed Central Ltd.)
ISSN: 2397-7523

2022, Vol 6 (1)
Author(s): Artuur M. Leeuwenberg, Maarten van Smeden, Johannes A. Langendijk, Arjen van der Schaaf, Murielle E. Mauer, ...

Abstract
Background: Clinical prediction models are developed widely across medical disciplines. When predictors in such models are highly collinear, unexpected or spurious predictor-outcome associations may occur, thereby potentially reducing the face validity of the prediction model. Collinearity can be dealt with by excluding collinear predictors, but when there is no a priori motivation (besides collinearity) to include or exclude specific predictors, such an approach is arbitrary and possibly inappropriate.
Methods: We compare different methods to address collinearity, including shrinkage, dimensionality reduction, and constrained optimization. The effectiveness of these methods is illustrated via simulations.
Results: In the conducted simulations, no effect of collinearity was observed on predictive performance (AUC, R², calibration intercept, calibration slope) across methods. However, collinearity had a negative effect on the stability of predictor selection, affecting all compared methods but in particular those that perform strong predictor selection (e.g., Lasso). The methods for which the included set of predictors remained most stable under increased collinearity were Ridge, PCLR, LAELR, and Dropout.
Conclusions: Based on the results, we would recommend refraining from data-driven predictor selection approaches in the presence of high collinearity, because of the increased instability of predictor selection, even in relatively high events-per-variable settings. The selection of certain predictors over others may disproportionately give the impression that the included predictors have a stronger association with the outcome than the excluded ones.
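As a hedged illustration of the kind of selection instability the simulations describe (not the authors' code), the sketch below repeatedly fits L1-penalised (Lasso-like) and L2-penalised (Ridge-like) logistic models to data with two highly collinear predictors; the sample size, penalty strength and effect sizes are arbitrary assumptions.

```python
# Illustrative sketch: under collinearity, Lasso "selects" x1 or x2 somewhat
# arbitrarily across repetitions, while Ridge keeps both with stable coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_rep = 500, 200
selected_lasso = np.zeros(3)
coef_ridge = np.zeros((n_rep, 3))

for r in range(n_rep):
    x1 = rng.normal(size=n)
    x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # highly collinear with x1
    x3 = rng.normal(size=n)                      # independent predictor
    X = np.column_stack([x1, x2, x3])
    y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x3))))

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    ridge = LogisticRegression(penalty="l2", C=0.5).fit(X, y)

    selected_lasso += (np.abs(lasso.coef_[0]) > 1e-6)   # count non-zero coefficients
    coef_ridge[r] = ridge.coef_[0]

print("Lasso selection frequency per predictor:", selected_lasso / n_rep)
print("Ridge coefficient SD per predictor:     ", coef_ridge.std(axis=0))
```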


2021, Vol 5 (1)
Author(s): K. Hemming, M. Taljaard

Abstract
Clinical prediction models are developed with the ultimate aim of improving patient outcomes, and are often turned into prediction rules (e.g. classifying people as low/high risk using cut-points of predicted risk) at some point during the development stage. Prediction rules often have reasonable ability to either rule in or rule out disease (or another event), but rarely both. When a prediction model is intended to be used as a prediction rule, conveying its performance using the C-statistic, the most commonly reported model performance measure, does not provide information on the magnitude of the trade-offs. Yet it is important that these trade-offs are clear, for example to health professionals who might implement the prediction rule. This can be viewed as a form of knowledge translation. When communicating information on trade-offs to patients and the public, there is a large body of evidence indicating that natural frequencies are most easily understood, and one particularly well-received way of depicting natural frequency information is to use population diagrams. There is also evidence that health professionals benefit from information presented in this way.
Here we illustrate how the implications of the trade-offs associated with prediction rules can be more readily appreciated when using natural frequencies. We recommend that the reporting of the performance of prediction rules should (1) present information using natural frequencies across a range of cut-points to inform the choice of plausible cut-points and (2), when the prediction rule is recommended for clinical use at a particular cut-point, communicate the implications of the trade-offs using population diagrams. Using two existing prediction rules, we illustrate how these methods offer a means of effectively and transparently communicating essential information about the trade-offs associated with prediction rules.
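The natural-frequency presentation the authors recommend can be sketched as follows; the sensitivities, specificities, prevalence and cut-points below are invented for illustration and are not taken from the paper.

```python
# Translate a rule's sensitivity/specificity at a given cut-point into counts
# per 1000 people, the kind of numbers a population diagram would display.
def natural_frequencies(sensitivity, specificity, prevalence, population=1000):
    diseased = prevalence * population
    healthy = population - diseased
    tp = sensitivity * diseased          # correctly ruled in
    fn = diseased - tp                   # missed cases
    tn = specificity * healthy           # correctly ruled out
    fp = healthy - tn                    # false alarms
    return {"TP": round(tp), "FN": round(fn), "TN": round(tn), "FP": round(fp)}

# Hypothetical cut-points with their (sensitivity, specificity) trade-offs.
for cut_point, (sens, spec) in {0.05: (0.95, 0.40), 0.20: (0.70, 0.80)}.items():
    print(f"cut-point {cut_point:.2f}:",
          natural_frequencies(sens, spec, prevalence=0.10))
```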


2021, Vol 5 (1)
Author(s): Melody Ni, Mina E. Adam, Fatima Akbar, Jeremy R. Huddy, Simone Borsci, ...

Abstract
Background: NG (nasogastric) tubes are used worldwide as a means to provide enteral nutrition. Testing the pH of tube aspirates is commonly used to verify tube location before feeding or administering medication: a pH at or below 5.5 is taken as evidence of gastric intubation. However, the existing standard pH strips lack sensitivity, especially in patients receiving feeds and antacid medication. We developed and validated a first-generation ester-impregnated pH strip test to improve accuracy in detecting gastric placement in adults receiving routine NG-tube feeding. Sensitivity was improved by augmenting the strip with the action of human gastric lipase (HGL), an enzyme specific to the stomach.
Methods: We carried out a multi-centre, prospective, two-gate diagnostic accuracy study in patients requiring routine NG-tube feeding in 10 NHS hospitals, comparing the sensitivity of the novel pH strip with that of the standard pH test, using either chest X-ray or, in its absence, clinical observation of the absence of adverse events as the reference standard. We also tested the novel pH strips on lung aspirates from patients undergoing oesophageal cancer surgery, using visual inspection as the reference standard. We simulated health economics using a decision-analytic model and carried out adoption studies to understand the route to commercialisation. The primary end point was the sensitivity of the novel and standard pH tests at the recommended pH cut-off of 5.5.
Results: A total of 6400 ester-impregnated pH strips were prepared under an ISO 13485 quality management system. A total of 376 gastric samples were collected from adult patients receiving routine NG-tube feeding in 10 NHS hospitals. At the pH cut-off of 5.5, the sensitivities of the standard and novel pH tests were 49.2% (95% CI 44.1–54.3%) and 70.2% (95% CI 65.6–74.8%), respectively, and the novel test had a lung specificity of 89.5% (95% CI 79.6–99.4%). Our simulation showed that using the novel test can potentially save 132 unnecessary chest X-rays per check for every 1000 eligible patients, or direct savings of £4034 to the NHS.
Conclusions: The novel pH test correctly identified significantly more patients with tubes located in the stomach than the standard pH test widely used by the NHS.
Trial registration: http://www.isrctn.com/ISRCTN11170249, registered 21 June 2017 (retrospectively registered).
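For readers who want to reproduce the quoted uncertainty, the reported sensitivity intervals are consistent with simple normal-approximation (Wald) binomial confidence intervals on the 376 gastric samples; the sketch below assumes that interval type.

```python
# Check of the reported sensitivity CIs, assuming Wald intervals and n = 376.
from math import sqrt

def wald_ci(p, n, z=1.96):
    se = sqrt(p * (1 - p) / n)          # standard error of a proportion
    return p - z * se, p + z * se

for name, p in [("standard pH test", 0.492), ("ester-impregnated pH test", 0.702)]:
    lo, hi = wald_ci(p, 376)
    # Prints roughly 44.1-54.3% and 65.6-74.8%, matching the abstract.
    print(f"{name}: sensitivity {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```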


2021, Vol 5 (1)
Author(s): Orouba Almilaji, Gwilym Webb, Alec Maynard, Thomas P. Chapman, Brian S. F. Shine, ...

Abstract
Background: Using two large datasets from Dorset, we previously reported an internally validated multivariable risk model for predicting the risk of GI malignancy in IDA (the IDIOM score). The aim of this retrospective observational study was to validate the IDIOM model using two independent external datasets.
Methods: The external validation datasets were collected, in a secondary care setting, by different investigators from cohorts in Oxford and Sheffield derived under different circumstances, comprising 1117 and 474 patients with confirmed IDA, respectively. The data were anonymised prior to analysis. The predictive performance of the original model was evaluated by estimating measures of calibration, discrimination and clinical utility in the validation datasets.
Results: The discrimination of the original model was 70% (95% CI 65, 75) in the Oxford dataset and 70% (95% CI 61, 79) in the Sheffield dataset. Analysis of mean, weak and flexible calibration, as well as calibration across risk groups, showed no tendency towards under- or over-estimated risks in the combined validation data. Decision curve analysis demonstrated the clinical value of the IDIOM model, with a net benefit higher than the 'investigate all' and 'investigate no-one' strategies up to a threshold of 18% in the combined validation data. Using a risk cut-off of around 1.2% to categorise patients into the very low risk group, none of the patients stratified into this group proved to have GI cancer on investigation in the validation datasets.
Conclusion: This external validation exercise has shown promising results for the IDIOM model in predicting the risk of underlying GI malignancy in independent IDA datasets collected in different clinical settings.
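The decision curve analysis mentioned above compares the model's net benefit with the 'investigate all' and 'investigate no-one' strategies; a minimal sketch of the net-benefit calculation is below, using simulated placeholder risks rather than the Oxford or Sheffield data.

```python
# Net benefit at a risk threshold t: (TP - FP * t/(1-t)) / n, compared with the
# default strategies of investigating everyone or no one.
import numpy as np

def net_benefit(y_true, risk, threshold):
    n = len(y_true)
    treat = risk >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(1)
risk = rng.uniform(0, 0.3, size=2000)        # placeholder predicted GI-cancer risks
y_true = rng.binomial(1, risk)               # outcomes consistent with those risks

for t in (0.012, 0.05, 0.10, 0.18):
    nb_model = net_benefit(y_true, risk, t)
    nb_all = y_true.mean() - (1 - y_true.mean()) * t / (1 - t)   # 'investigate all'
    print(f"threshold {t:.1%}: model {nb_model:.4f}, "
          f"investigate-all {nb_all:.4f}, investigate-none 0")
```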


2021, Vol 5 (1)
Author(s): Jared Gresh, Harold Kisner, Brian DuChateau

Abstract
Background: Testing individuals suspected of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection is essential to reduce the spread of disease. The purpose of this retrospective study was to determine the false-negative rate of the LumiraDx SARS-CoV-2 Ag Test when used to test individuals suspected of SARS-CoV-2 infection.
Methods: Concurrent swab samples were collected from patients suspected of SARS-CoV-2 infection by their healthcare provider at two urgent care centers located in Easton, MA, USA and East Bridgewater, MA, USA. One swab was tested using the LumiraDx SARS-CoV-2 Ag Test. Negative results in patients considered at moderate to high risk of SARS-CoV-2 infection were confirmed at a regional reference laboratory by polymerase chain reaction (PCR) using the additional swab sample. The data included in this study were collected retrospectively as an analysis of routine clinical practice.
Results: From October 19, 2020 to January 3, 2021, a total of 2241 tests were performed using the LumiraDx SARS-CoV-2 Ag Test, with 549 (24.5%) testing positive and 1692 (75.5%) testing negative. A subset of 800 samples with a negative LumiraDx SARS-CoV-2 Ag Test result was also tested for SARS-CoV-2 using a PCR-based test. Of this subset, 770 (96.3%) tested negative and 30 (3.8%) tested positive. Negative results obtained with the LumiraDx SARS-CoV-2 Ag Test therefore demonstrated 96.3% agreement with PCR-based tests (95% CI 94.7–97.4%). A cycle threshold (CT) was available for 17 of the 30 specimens with discordant results, with a mean CT value of 31.2, an SD of 3.0, and a range of 25.2–36.3; the CT was > 30.0 in 11/17 specimens (64.7%).
Conclusions: This study demonstrates that the LumiraDx SARS-CoV-2 Ag Test had a low false-negative rate of 3.8% when used in a community-based setting.
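The agreement statistic and its confidence interval can be recomputed from the reported counts; the sketch below assumes an exact (Clopper-Pearson) binomial interval, which approximately reproduces the quoted 94.7–97.4%.

```python
# Recompute negative agreement (770 concordant negatives out of 800 PCR-tested
# samples) with an exact binomial confidence interval.
from scipy.stats import binomtest

result = binomtest(k=770, n=800)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"negative agreement: {770 / 800:.1%} "
      f"(95% CI {ci.low:.1%} to {ci.high:.1%})")   # approximately 94.7% to 97.4%
print(f"false-negative rate among PCR-tested negatives: {30 / 800:.1%}")  # 3.8%
```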


2021, Vol 5 (1)
Author(s): Erin M. Schnellinger, Wei Yang, Stephen E. Kimmel

Abstract
Background: Prediction models inform many medical decisions, but their performance often deteriorates over time. Several discrete-time update strategies have been proposed in the literature, including model recalibration and revision. However, these strategies have not been compared in the dynamic updating setting.
Methods: We used post-lung-transplant survival data from 2010 to 2015 and compared the Brier score (BS), discrimination, and calibration of the following update strategies: (1) never update, (2) update using the closed testing procedure proposed in the literature, (3) always recalibrate the intercept, (4) always recalibrate the intercept and slope, and (5) always refit/revise the model. In each case, we explored update intervals of every 1, 2, 4, and 8 quarters. We also examined how the performance of the update strategies changed as the amount of old data included in the update (i.e., the sliding-window length) increased.
Results: All updating strategies led to meaningful improvement in BS relative to never updating. More frequent updating yielded better BS, discrimination, and calibration, regardless of update strategy. Recalibration strategies led to more consistent improvements and less variability over time than the other updating strategies. Using longer sliding windows did not substantially affect the recalibration strategies, but did improve the discrimination and calibration of the closed testing procedure and model revision strategies.
Conclusions: Model updating leads to improved BS, with more frequent updating performing better than less frequent updating. Model recalibration strategies appeared to be the least sensitive to the update interval and sliding-window length.
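Strategies (3) and (4) correspond to standard logistic recalibration of an existing model's linear predictor; a minimal sketch on simulated data (not the lung-transplant registry) is shown below.

```python
# Recalibration sketch: given the existing model's linear predictor on new data,
# re-estimate only the intercept (strategy 3) or intercept and slope (strategy 4).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
lp_old = rng.normal(-1.0, 1.0, size=n)        # linear predictor from the existing model
y_new = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * lp_old))))  # drifted outcomes

# Strategy (3): intercept-only recalibration (slope fixed at 1 via an offset).
m_int = sm.GLM(y_new, np.ones((n, 1)), family=sm.families.Binomial(),
               offset=lp_old).fit()
# Strategy (4): recalibrate both intercept and slope.
m_slope = sm.GLM(y_new, sm.add_constant(lp_old), family=sm.families.Binomial()).fit()

print("intercept update:          ", m_int.params)
print("intercept and slope update:", m_slope.params)
```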


2021, Vol 5 (1)
Author(s): Mariella Gregorich, Andreas Heinzel, Michael Kammer, Heike Meiselbach, Carsten Böger, ...

Abstract
Background: Chronic kidney disease (CKD) is a well-established complication in people with diabetes mellitus. Roughly one quarter of prevalent patients with diabetes exhibit CKD stage 3 or higher, and the individual course of progression is highly variable. There is therefore a clear need to identify patients at high risk of fast progression and to implement preventative strategies. Existing prediction models of renal function decline, however, assess risk by artificially grouping patients, prior to model building, into risk strata defined by categorising the least-squares slope through the longitudinally fluctuating eGFR values, resulting in a loss of predictive precision and accuracy.
Methods: This study protocol describes the development and validation of a prediction model for the longitudinal progression of renal function decline in Caucasian patients with type 2 diabetes mellitus (DM2). For development and internal-external validation, two prospective multicentre observational studies will be used (PROVALID and GCKD). The estimated glomerular filtration rate (eGFR) obtained at baseline and at all planned follow-up visits will be the longitudinal outcome. Demographics, clinical information and laboratory measurements available at the baseline visit will be used as predictors, in addition to random country-specific intercepts to account for the clustered data. A multivariable mixed-effects model including the main effects of the clinical variables and their interactions with time will be fitted. In application, this model can be used to obtain personalised predictions of an eGFR trajectory conditional on baseline eGFR values. The final model will then undergo external validation using a third prospective cohort (DIACORE). The final prediction model will be made publicly available through an R Shiny web application.
Discussion: In contrast to previous models, our proposed methodology will be developed using multiple multicentre study cohorts of people with DM2 in various CKD stages at baseline who have received modern therapeutic treatment strategies for diabetic kidney disease. We therefore anticipate that the multivariable prediction model will serve as an additional informative tool for determining patient-specific progression of renal function and provide a useful guide for identifying, early on, individuals with DM2 at high risk of rapid progression.
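A minimal sketch of the kind of mixed-effects model described in the protocol is given below; the variable names, the single illustrative predictor (HbA1c) and the simulated data are assumptions, not the PROVALID/GCKD specification.

```python
# Mixed-effects sketch: eGFR over time with a baseline predictor, its
# interaction with time, and a random country-specific intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_patients, n_visits = 200, 5
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_visits),
    "country": np.repeat(rng.integers(0, 5, n_patients), n_visits),
    "time": np.tile(np.arange(n_visits), n_patients),            # years since baseline
    "hba1c": np.repeat(rng.normal(7.5, 1.0, n_patients), n_visits),
})
df["egfr"] = (75 - 2.0 * df["time"] - 1.5 * (df["hba1c"] - 7.5) * df["time"]
              + rng.normal(0, 5, len(df)))

# Fixed effects with a time interaction; random intercept per country (grouping).
model = smf.mixedlm("egfr ~ time + hba1c + hba1c:time", data=df, groups=df["country"])
print(model.fit().summary())
```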


2021, Vol 5 (1)
Author(s): Jamie Miles, Richard Jacques, Janette Turner, Suzanne Mason

Abstract
Background: Demand for both the ambulance service and the emergency department (ED) is rising every year, and when this demand is excessive in both systems, ambulance crews queue at the ED waiting to hand patients over. Some transported ambulance patients are 'low-acuity' and do not require ED treatment. However, paramedics can find it challenging to identify these patients accurately. Decision support tools developed using expert opinion to help identify low-acuity patients have failed to show a benefit beyond regular decision-making. Predictive algorithms may be able to build accurate models that can be used in the field to support the decision not to take a low-acuity patient to an ED.
Methods and analysis: All patients in Yorkshire who were transported to the ED by ambulance between July 2019 and February 2020 will be included. Ambulance electronic patient care record (ePCR) clinical data will be used as candidate predictors for the model. These will then be linked to the corresponding ED record, which holds the outcome of a 'non-urgent attendance'. The estimated sample size is 52,958, with 4767 events and an EPP of 7.48. An XGBoost algorithm will be used for model development. Initially, a model will be derived using all the data and its apparent performance assessed. Internal-external validation will then use non-random nested cross-validation (CV) with test sets held out for each ED (spatial validation). After all models are created, a random-effects meta-analysis will be undertaken to pool performance measures such as goodness of fit, discrimination and calibration; it will also generate a prediction interval and measure heterogeneity between clusters. The performance of the full model will be updated with the pooled results.
Discussion: Creating a risk prediction model in this area will support the development of a clinical decision support tool that ensures every ambulance patient can get to the right place of care, first time. If this study is successful, it could help paramedics evaluate the benefit of transporting a patient to the ED before they leave the scene. It could also reduce congestion in the urgent and emergency care system.
Trial registration: This study was retrospectively registered with the ISRCTN: 12121281.
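The planned internal-external (leave-one-ED-out) validation can be sketched as below with an XGBoost classifier; the data frame, predictor names and site structure are invented placeholders, and the per-site AUCs would subsequently be pooled with a random-effects meta-analysis as the protocol describes.

```python
# Spatial (leave-one-ED-out) validation sketch with an XGBoost classifier.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(4)
n = 5000
df = pd.DataFrame({
    "ed_site": rng.integers(0, 6, n),                  # ED that received the patient
    "age": rng.integers(18, 95, n),
    "news2": rng.integers(0, 12, n),                   # example ePCR predictor
})
df["non_urgent"] = rng.binomial(1, 1 / (1 + np.exp(-(2.0 - 0.3 * df["news2"]))))

aucs = []
for site in sorted(df["ed_site"].unique()):            # hold out one ED at a time
    train, test = df[df["ed_site"] != site], df[df["ed_site"] == site]
    model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="logloss")
    model.fit(train[["age", "news2"]], train["non_urgent"])
    p = model.predict_proba(test[["age", "news2"]])[:, 1]
    aucs.append(roc_auc_score(test["non_urgent"], p))

# Site-level AUCs would then be pooled with a random-effects meta-analysis.
print("held-out AUC per ED:", np.round(aucs, 3))
```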


2021, Vol 5 (1)
Author(s): Charles Reynard, Glen P. Martin, Evangelos Kontopantelis, David A. Jenkins, Anthony Heagerty, ...

Abstract
Background: Patients presenting with chest pain represent a large proportion of attendances at emergency departments. In these patients, clinicians often consider the diagnosis of acute myocardial infarction (AMI), the timely recognition and treatment of which is clinically important. Clinical prediction models (CPMs) have been used to enhance early diagnosis of AMI. The Troponin-only Manchester Acute Coronary Syndromes (T-MACS) decision aid is currently in clinical use across Greater Manchester. CPMs have been shown to deteriorate over time through calibration drift. We aim to assess potential calibration drift with T-MACS and compare methods for updating the model.
Methods: We will use routinely collected electronic data from patients who were treated using T-MACS at two large NHS hospitals. This is estimated to include approximately 14,000 patient episodes spanning June 2016 to October 2020. The primary outcome of acute myocardial infarction will be sourced from NHS Digital's admitted patient care dataset. We will assess the calibration drift of the existing model and the benefit of updating the CPM by model recalibration, model extension and dynamic updating. These models will be validated by bootstrapping and one-step-ahead prequential testing. We will evaluate predictive performance using calibration plots and c-statistics. We will also examine the reclassification of predicted probabilities with the updated T-MACS model.
Discussion: CPMs are widely used in modern medicine but are vulnerable to deteriorating calibration over time. Ongoing refinement using routinely collected electronic data will inevitably be more efficient than deriving and validating new models. In this analysis we seek to exemplify methods for updating CPMs to protect the initial investment of time and effort. If successful, the updating methods could be used to continually refine the algorithm used within T-MACS, maintaining or even improving its predictive performance over time.
Trial registration: ISRCTN number: ISRCTN41008456.
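Calibration drift is typically quantified by re-estimating the calibration intercept (calibration-in-the-large) and slope of the existing model's predictions in successive time periods; the sketch below illustrates this on simulated data and is not the T-MACS analysis code.

```python
# Calibration-drift sketch: estimate calibration intercept and slope per period
# for a fixed model's linear predictor, on simulated data with gradual drift.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
for year in (2017, 2018, 2019, 2020):
    drift = 0.15 * (year - 2017)                       # risks drift upward over time
    lp = rng.normal(-2.0, 1.2, size=3000)              # model's linear predictor (logit scale)
    y = rng.binomial(1, 1 / (1 + np.exp(-(lp + drift))))

    slope_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
    citl_fit = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(),
                      offset=lp).fit()
    print(f"{year}: calibration-in-the-large {citl_fit.params[0]:+.2f}, "
          f"slope {slope_fit.params[1]:.2f}")
```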


2021, Vol 5 (1)
Author(s): F. S. van Royen, M. van Smeden, K. G. M. Moons, F. H. Rutten, G. J. Geersing

Abstract
Background: Superficial venous thrombosis (SVT) is considered a benign thrombotic condition in most patients. However, it can also cause serious complications, such as clot progression to deep venous thrombosis (DVT) and pulmonary embolism (PE). Although most SVT patients are encountered in primary healthcare, studies on SVT have nearly all focused on patients seen in the hospital setting. This paper describes the protocol for the development and external validation of three prognostic prediction models for relevant clinical outcomes in SVT patients seen in primary care: (i) prolonged (painful) symptoms within 14 days of SVT diagnosis, (ii) clot progression to DVT or PE within 45 days, and (iii) clot recurrence within 12 months.
Methods: Data will be used from four primary care routine healthcare registries from the Netherlands and the UK; one UK registry will be used for the development of the prediction models and the remaining three will serve as external validation cohorts. The study population will consist of patients aged ≥ 18 years with a diagnosis of SVT. Selection of SVT cases will be based on a combination of ICPC/READ/SNOMED coding and free-text clinical symptoms. Predictors considered are sex, age, body mass index, clinical SVT characteristics, and co-morbidities including (a history of) cardiovascular disease, diabetes, autoimmune disease, malignancy, thrombophilia, pregnancy or puerperium, and the presence of varicose veins. The prediction models for outcomes (i) and (ii) will be developed using multivariable logistic regression; for outcome (iii), a Cox proportional hazards model will be used. The models will be validated by internal-external cross-validation as well as external validation.
Discussion: There are currently no prediction models available for predicting the risk of serious complications in SVT patients presenting in primary care settings. We aim to develop and validate new prediction models that should help identify patients at highest risk of complications and support clinical decision making for this understudied thrombo-embolic disorder. Challenges that we anticipate are mostly related to performing research in large routine healthcare databases, such as patient selection, endpoint classification, data harmonisation, missing data and avoiding (predictor) measurement heterogeneity.
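A minimal sketch of the two model classes named in the protocol (multivariable logistic regression for outcome (i) and a Cox proportional hazards model for outcome (iii)) is shown below; the column names and simulated data are assumptions, and the lifelines package is used here for the Cox model.

```python
# Development sketch: logistic regression for a binary 14-day outcome and a Cox
# model for time to recurrence within 12 months, on simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 1500
age = rng.integers(18, 90, n)
varicose = rng.binomial(1, 0.4, n)
df = pd.DataFrame({
    "age": age,
    "varicose_veins": varicose,
    # outcome (i): prolonged symptoms within 14 days, loosely related to predictors
    "prolonged_symptoms": rng.binomial(1, 1 / (1 + np.exp(-(-1.5 + 0.01 * age + 0.4 * varicose)))),
    # outcome (iii): time to clot recurrence, administratively censored at 12 months
    "months_to_recurrence": rng.exponential(24, n).clip(0.05, 12),
})
df["recurrence"] = (df["months_to_recurrence"] < 12).astype(int)

# Model (i): multivariable logistic regression.
logit_i = smf.logit("prolonged_symptoms ~ age + varicose_veins", data=df).fit(disp=0)

# Model (iii): Cox proportional hazards for time to recurrence.
cph = CoxPHFitter()
cph.fit(df[["age", "varicose_veins", "months_to_recurrence", "recurrence"]],
        duration_col="months_to_recurrence", event_col="recurrence")

print(logit_i.params)
print(cph.params_)
```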

