The Stunting Tool for Early Prevention: development and external validation of a novel tool to predict risk of stunting in children at 3 years of age

2019 ◽  
Vol 4 (6) ◽  
pp. e001801
Author(s):  
Sarah Hanieh ◽  
Sabine Braat ◽  
Julie A Simpson ◽  
Tran Thi Thu Ha ◽  
Thach D Tran ◽  
...  

Introduction: Globally, an estimated 151 million children under 5 years of age still suffer from the adverse effects of stunting. We sought to develop and externally validate an early-life predictive model that could be applied in infancy to accurately predict the risk of stunting in preschool children. Methods: We conducted two separate prospective cohort studies in Vietnam that intensively monitored children from early pregnancy until 3 years of age. They included 1168 and 475 live-born infants for model development and validation, respectively. Logistic regression on child stunting at 3 years of age was performed for model development, and the predicted probabilities for stunting were used to evaluate the performance of this model in the validation data set. Results: Stunting prevalence was 16.9% (172 of 1015) in the development data set and 16.4% (70 of 426) in the validation data set. Key predictors included in the final model were paternal and maternal height, maternal weekly weight gain during pregnancy, infant sex, gestational age at birth, and infant weight and length at 6 months of age. The area under the receiver operating characteristic curve in the validation data set was 0.85 (95% confidence interval 0.80–0.90). Conclusion: This tool, applied to infants at 6 months of age, provided valid prediction of the risk of stunting at 3 years of age using a readily available set of parental and infant measures. Further research is required to examine the impact of preventive measures introduced at 6 months of age on those identified as being at risk of growth faltering at 3 years of age.
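
As a hedged illustration of the develop-then-externally-validate workflow this abstract describes, the sketch below fits a logistic regression on a development cohort and reports discrimination (AUC) on a held-out external cohort. File names, column names and encodings are hypothetical stand-ins, not the study's data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Predictors named after those in the final model; assumed numerically encoded.
PREDICTORS = [
    "paternal_height", "maternal_height", "maternal_weekly_weight_gain",
    "infant_sex", "gestational_age", "weight_6m", "length_6m",
]

dev = pd.read_csv("development_cohort.csv")   # hypothetical file
val = pd.read_csv("validation_cohort.csv")    # hypothetical file

# Fit on the development cohort only; the external cohort stays untouched.
model = LogisticRegression(max_iter=1000)
model.fit(dev[PREDICTORS], dev["stunted_at_3y"])

# Apply the frozen model to the external cohort and assess discrimination.
val_probs = model.predict_proba(val[PREDICTORS])[:, 1]
print(f"External AUC: {roc_auc_score(val['stunted_at_3y'], val_probs):.2f}")
```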

BMJ Open ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. e040778
Author(s):  
Vineet Kumar Kamal ◽  
Ravindra Mohan Pandey ◽  
Deepak Agrawal

Objective: To develop and validate a simple risk score chart to estimate the probability of poor outcomes in patients with severe head injury (HI). Design: Retrospective. Setting: Level-1, government-funded trauma centre, India. Participants: Patients with severe HI admitted to the neurosurgery intensive care unit during 19 May 2010–31 December 2011 (n=946) for model development and, further, data from the same centre with the same inclusion criteria from 1 January 2012 to 31 July 2012 (n=284) for external validation of the model. Outcome(s): In-hospital mortality and unfavourable outcome at 6 months. Results: A total of 39.5% and 70.7% had in-hospital mortality and unfavourable outcome, respectively, in the development data set. Multivariable logistic regression analysis of routinely collected admission characteristics identified the following independent predictors (the 95% confidence interval (CI) of the odds ratio (OR) did not contain one): for in-hospital mortality, age (51–60, >60 years), motor score (1, 2, 4), pupillary reactivity (none), presence of hypotension, effaced basal cisterns, and traumatic subarachnoid haemorrhage/intraventricular haematoma; for unfavourable outcome, age (41–50, 51–60, >60 years), motor score (1–4), pupillary reactivity (none, one), unequal limb movement, and presence of hypotension. The discriminative ability (area under the receiver operating characteristic curve (95% CI)) of the score chart for in-hospital mortality and the 6-month outcome was excellent in the development data set (0.890 (0.867 to 0.912) and 0.894 (0.869 to 0.918), respectively), in the internal validation data set using the bootstrap resampling method (0.889 (0.867 to 0.909) and 0.893 (0.867 to 0.915), respectively), and in the external validation data set (0.871 (0.825 to 0.916) and 0.887 (0.842 to 0.932), respectively). Calibration showed good agreement between observed outcome rates and predicted risks in the development and external validation data sets (p>0.05). Conclusion: For clinical decision making, these score charts can be used to predict outcomes in new patients with severe HI in India and similar settings.
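
The internal validation step mentioned above (bootstrap resampling) is commonly implemented as an optimism correction: refit the model on bootstrap samples and average the drop in AUC from the bootstrap sample to the original sample. A minimal sketch under that assumption, with hypothetical data and a logistic model standing in for the score chart:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Bootstrap optimism correction for the apparent AUC."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    fit = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, fit.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample with replacement
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    return apparent, apparent - float(np.mean(optimism))
```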


2017 ◽  
Vol 21 (18) ◽  
pp. 1-100 ◽  
Author(s):  
Shakila Thangaratinam ◽  
John Allotey ◽  
Nadine Marlin ◽  
Ben W Mol ◽  
Peter Von Dadelszen ◽  
...  

Background: The prognosis of early-onset pre-eclampsia (before 34 weeks' gestation) is variable. Accurate prediction of complications is required to plan appropriate management in high-risk women. Objective: To develop and validate prediction models for outcomes in early-onset pre-eclampsia. Design: Prospective cohort for model development, with validation in two external data sets. Setting: Model development: 53 obstetric units in the UK. Model transportability: PIERS (Pre-eclampsia Integrated Estimate of RiSk for mothers) and PETRA (Pre-Eclampsia TRial Amsterdam) studies. Participants: Pregnant women with early-onset pre-eclampsia. Sample size: Nine hundred and forty-six women in the model development data set and 850 women (634 in PIERS, 216 in PETRA) in the transportability (external validation) data sets. Predictors: The predictors were identified from systematic reviews of tests to predict complications in pre-eclampsia and were prioritised by Delphi survey. Main outcome measures: The primary outcome was the composite of adverse maternal outcomes established using Delphi surveys. The secondary outcome was the composite of fetal and neonatal complications. Analysis: We developed two prediction models: a logistic regression model (PREP-L) to assess the overall risk of any maternal outcome until postnatal discharge and a survival analysis model (PREP-S) to obtain individual risk estimates at daily intervals from diagnosis until 34 weeks. Shrinkage was used to adjust for overoptimism of predictor effects. For internal validation (of the full models in the development data) and external validation (of the reduced models in the transportability data), we computed the ability of the models to discriminate between those with and without poor outcomes (c-statistic) and the agreement between predicted and observed risk (calibration slope). Results: The PREP-L model included maternal age, gestational age at diagnosis, medical history, systolic blood pressure, urine protein-to-creatinine ratio, platelet count, serum urea concentration, oxygen saturation, baseline treatment with antihypertensive drugs and administration of magnesium sulphate. The PREP-S model additionally included exaggerated tendon reflexes and serum alanine aminotransferase and creatinine concentrations. Both models showed good discrimination for maternal complications, with an optimism-adjusted c-statistic of 0.82 [95% confidence interval (CI) 0.80 to 0.84] for PREP-L and 0.75 (95% CI 0.73 to 0.78) for PREP-S in the internal validation. External validation of the reduced PREP-L model showed good performance, with a c-statistic of 0.81 (95% CI 0.77 to 0.85) in the PIERS cohort and 0.75 (95% CI 0.64 to 0.86) in the PETRA cohort for maternal complications, and the model calibrated well, with slopes of 0.93 (95% CI 0.72 to 1.10) and 0.90 (95% CI 0.48 to 1.32), respectively. In the PIERS data set, the reduced PREP-S model had a c-statistic of 0.71 (95% CI 0.67 to 0.75) and a calibration slope of 0.67 (95% CI 0.56 to 0.79). Low gestational age at diagnosis, high urine protein-to-creatinine ratio, increased serum urea concentration, treatment with antihypertensive drugs, magnesium sulphate, abnormal uterine artery Doppler scan findings and estimated fetal weight below the 10th centile were associated with fetal complications. Conclusions: The PREP-L model provided individualised risk estimates in early-onset pre-eclampsia to plan management of high- or low-risk individuals. The PREP-S model has the potential to be used as a triage tool for risk assessment.
The impacts of model use on outcomes need further evaluation. Trial registration: Current Controlled Trials ISRCTN40384046. Funding: The National Institute for Health Research Health Technology Assessment programme.
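
A minimal sketch of the two external-validation metrics reported here, the c-statistic and the calibration slope, assuming predicted risks are already available for the validation cohort; the input arrays are hypothetical:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def external_validation(y_true, predicted_risk):
    """Return (c-statistic, calibration slope) for predicted risks."""
    y = np.asarray(y_true)
    p = np.clip(np.asarray(predicted_risk), 1e-6, 1 - 1e-6)
    # c-statistic: probability that a case is ranked above a non-case.
    c_stat = roc_auc_score(y, p)
    # Calibration slope: logistic regression of outcomes on the linear
    # predictor (log-odds of predicted risk); 1.0 means no over/under-fitting.
    lin_pred = np.log(p / (1 - p))
    fit = sm.GLM(y, sm.add_constant(lin_pred),
                 family=sm.families.Binomial()).fit()
    return c_stat, fit.params[1]
```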


Author(s):  
Armin Hadadian ◽  
Sairam Prabhakar ◽  
Bjorn Sjodin ◽  
Keith Taylor

Predictive lifing with probabilistic treatment of key variables represents a promising approach to realizing the digital gas turbine of the future. In this paper, we present a predictive model for creep life assessment of an uncooled turbine blade. The model development methodology draws on well-established machine learning principles to develop and validate a surrogate model for creep life from engine performance parameters. Verified creep life results, obtained from 3D non-linear thermo-mechanical finite element simulations for varying engine operating conditions, are used as the basis for model development. The selection of model response surface order is studied over a range of models by evaluating normalized residual error on training and uncorrelated validation data sets. A model that is fully quadratic in the data set features is shown to have excellent predictive capability, yielding nominal creep life predictions to within ±3% on the validation data set. This work then considers probabilistic techniques to evaluate the impact of uncertainty associated with each key factor on the predicted nominal creep life, in order to achieve a mandated life target with a defined probability of failure.
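
A hedged sketch of the core idea: fit a fully quadratic response surface (linear, squared and interaction terms) to training data and check normalized residual error on an uncorrelated validation set. The synthetic data below merely stands in for the paper's finite element results.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: 4 engine performance parameters -> creep life.
X_train = rng.uniform(size=(200, 4))
y_train = np.exp(2.0 - X_train.sum(axis=1))   # smooth nonlinear response
X_val = rng.uniform(size=(50, 4))
y_val = np.exp(2.0 - X_val.sum(axis=1))

# Fully quadratic in the features: linear, squared and interaction terms.
surrogate = make_pipeline(StandardScaler(),
                          PolynomialFeatures(degree=2),
                          LinearRegression())
surrogate.fit(X_train, y_train)

# Normalized residual error on the uncorrelated validation set.
rel_err = (surrogate.predict(X_val) - y_val) / y_val
print(f"max |relative error| on validation: {100 * np.max(np.abs(rel_err)):.1f}%")
```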


2019 ◽  
Vol 7 (1) ◽  
Author(s):  
Warren Mukelabai Simangolwa

Appropriate open defecation free (ODF) sustainability interventions are key to further mobilising communities to take up sanitation and hygiene products and services that enhance households' quality of life and embed household behavioural change for healthier communities. This study aims to develop a logistic-regression-derived risk algorithm to estimate 12-month ODF slippage risk and to externally validate the model in an independent data set. ODF slippage occurs when one or more toilet adequacy parameters are no longer present for one or more toilets in a community. Data in the Zambia district health information software for water, sanitation and hygiene management information system for the Chungu and Chabula chiefdoms were used for the study. The data were retrieved from the date of the Chungu and Chabula chiefdoms' attainment of ODF status in October 2016 for 12 months until September 2017, for the development and validation data sets respectively. Data were assumed to be missing completely at random, and the complete case analysis approach was used. The events per variable were satisfactory for both the development and validation data sets. Multivariable regression with a backwards selection procedure was used to select candidate predictor variables, with p < 0.05 meriting inclusion. To correct for optimism, the study estimated the amount of heuristic shrinkage by comparing the model's apparent C-statistic to the C-statistic computed by nonparametric bootstrap resampling. In the resulting model, increases in the covariates 'months after ODF attainment', 'village population' and 'latrine built after CLTS' were all associated with a higher probability of ODF slippage. Conversely, an increase in the covariate 'presence of a handwashing station with soap' was associated with a reduced probability of ODF slippage. The predictive performance of the model was improved by the heuristic shrinkage factor of 0.988. The external validation confirmed good predictive performance, with an area under the receiver operating characteristic curve of 0.85 and no significant lack of fit (Hosmer-Lemeshow test: p = 0.246). The results must be interpreted with caution in regions where ODF definitions, culture and other factors differ from those asserted in the study.
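
One common way to apply the heuristic shrinkage the study describes is to multiply the fitted slopes by the shrinkage factor and then re-estimate the intercept so the average predicted risk stays calibrated. A minimal sketch under that assumption (the factor 0.988 comes from the abstract; the model and data layout are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def shrink_logistic_fit(fit, shrinkage=0.988):
    """Shrink slopes by a uniform factor and refit the intercept.

    `fit` is a fitted statsmodels logistic GLM whose first exog column is
    the constant; this layout is an assumption of the sketch.
    """
    params = np.asarray(fit.params) * shrinkage
    # Refit the intercept with the shrunken linear predictor as an offset so
    # the mean predicted risk matches the observed event rate.
    offset = fit.model.exog[:, 1:] @ params[1:]
    const = np.ones((len(offset), 1))
    params[0] = sm.GLM(fit.model.endog, const,
                       family=sm.families.Binomial(),
                       offset=offset).fit().params[0]
    return params
```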


2020 ◽  
Author(s):  
Yong Li

BACKGROUND: Coronary heart disease, including ST elevation myocardial infarction (STEMI), remains the leading cause of mortality. OBJECTIVE: The objective of our study was to develop and externally validate a diagnostic model of in-hospital mortality in patients with acute STEMI. METHODS: Design: Multivariable logistic regression of a cohort of hospitalized patients with acute STEMI. Setting: Emergency department ward of a university hospital. Participants: Diagnostic model development: a total of 2,183 hospitalized patients with acute STEMI from January 2002 to December 2011. External validation: a total of 7,485 hospitalized patients with acute STEMI from January 2012 to August 2019. Outcomes: In-hospital mortality. All-cause in-hospital mortality was defined as cardiac or non-cardiac death during hospitalization. We used logistic regression analysis to analyze the risk factors for in-hospital mortality in the development data set. We developed a diagnostic model of in-hospital mortality and constructed a nomogram. We assessed the predictive performance of the diagnostic model in the validation data set by examining measures of discrimination, calibration, and decision curve analysis (DCA). RESULTS: In-hospital mortality occurred in 61 out of 2,183 participants (2.8%) in the development data set. The strongest predictors of in-hospital mortality were age and Killip classification. We developed a diagnostic model of in-hospital mortality. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.9126 ± 0.0166 (95% confidence interval (CI) 0.88015 to 0.94504) in the development set. We constructed a nomogram based on age and Killip classification. In-hospital mortality occurred in 127 out of 7,485 participants (1.7%) in the validation data set. The AUC was 0.9305 ± 0.0113 (95% CI 0.90827 to 0.95264) in the validation set. Discrimination, calibration, and DCA were satisfactory. Date of approval by ethics committee: 25 October 2019. Date of data collection start: 6 November 2019. Numbers recruited as of submission of the manuscript: 9,668. CONCLUSIONS: We developed and externally validated a diagnostic model of in-hospital mortality in patients with acute STEMI. CLINICALTRIAL: We registered this study with the WHO International Clinical Trials Registry Platform (ICTRP) (registration number: ChiCTR1900027129; registered date: 1 November 2019). http://www.chictr.org.cn/edit.aspx?pid=44888&htm=4.
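
Decision curve analysis, one of the three validation measures above, compares the net benefit of acting on the model's predictions against treat-all and treat-none strategies across threshold probabilities. A minimal sketch using the standard net benefit formula; the inputs are hypothetical:

```python
import numpy as np

def net_benefit(y_true, predicted_risk, thresholds):
    """Net benefit of treating patients whose predicted risk exceeds each
    threshold t: TP/n - (FP/n) * t / (1 - t)."""
    y = np.asarray(y_true)
    p = np.asarray(predicted_risk)
    n = len(y)
    nb = []
    for t in thresholds:
        treat = p >= t
        tp = np.sum(treat & (y == 1))
        fp = np.sum(treat & (y == 0))
        nb.append(tp / n - (fp / n) * t / (1 - t))
    return np.array(nb)

# Compare against treat-all, prevalence - (1 - prevalence) * t / (1 - t),
# and treat-none, which has net benefit 0 at every threshold.
thresholds = np.linspace(0.01, 0.30, 30)
```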


Author(s):  
M. R. W. Brake ◽  
P. L. Reu ◽  
D. S. Aragon

The results of two sets of impact experiments are reported within. To assist with model development using the impact data reported, the materials are mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution (COR) experiments, in which a 2 m long pendulum is used to study "in-context" measurements of the coefficient of restitution for eight different materials (6061-T6 aluminum, phosphor bronze alloy 510, Hiperco, Nitronic 60A, stainless steel 304, titanium, copper, and annealed copper). The coefficient of restitution is measured via two different techniques: digital image correlation (DIC) and laser Doppler vibrometry (LDV). Due to the strong agreement between the two methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" in that the scales of the geometry and the impact velocities are representative of common features in the motivating application for this research. Finally, a series of compliance measurements are detailed for the same set of materials. The compliance measurements are conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and μN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides unique insight into the transition region.
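
For reference, the coefficient of restitution itself reduces to a velocity ratio extracted from the measured time histories (whether from DIC or LDV). A minimal sketch with hypothetical velocity values:

```python
def coefficient_of_restitution(v_approach, v_rebound):
    """COR for impact on a fixed target: rebound speed over approach speed."""
    return abs(v_rebound) / abs(v_approach)

# Hypothetical pendulum velocities just before and after contact (m/s).
print(coefficient_of_restitution(v_approach=-0.45, v_rebound=0.31))  # ~0.69
```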


2019 ◽  
Author(s):  
Guangzhi Wang ◽  
Huihui Wan ◽  
Xingxing Jian ◽  
Yuyu Li ◽  
Jian Ouyang ◽  
...  

Abstract: In silico T-cell epitope prediction plays an important role in immunization experimental design and vaccine preparation. Currently, most epitope prediction research focuses on peptide processing and presentation, e.g. proteasomal cleavage, transporter associated with antigen processing (TAP) and major histocompatibility complex (MHC) binding. To date, however, the mechanism underlying the immunogenicity of epitopes remains unclear. It is generally agreed that T-cell immunogenicity may be influenced, to different degrees, by the foreignness, accessibility, molecular weight, molecular structure, molecular conformation, and chemical and physical properties of target peptides. In this work, we tried to combine these factors. Firstly, we collected significant experimental HLA-I T-cell immunogenic peptide data, as well as potentially immunogenic amino acid properties. Several characteristics were extracted, including amino acid physicochemical properties of the epitope sequence, peptide entropy, eluted ligand likelihood percentile rank (EL rank(%)) score and frequency score for immunogenic peptides. Subsequently, a random forest classifier for T-cell immunogenic HLA-I-presented antigen epitopes and neoantigens was constructed. The classification results for the antigen epitopes outperformed previous research (optimal AUC = 0.81; external validation data set AUC = 0.77). As mutational epitopes generated by the coding region contain alterations of only one or two amino acids, we assume that these characteristics might also apply to the classification of endogenous mutational neoepitopes, also called 'neoantigens'. Based on mutation information and sequence-related amino acid characteristics, a prediction model for neoantigens was established as well (optimal AUC = 0.78). Further, an easy-to-use web-based tool, 'INeo-Epp', was developed (available at http://www.biostatistics.online/INeo-Epp/neoantigen.php) for the prediction of human immunogenic antigen epitopes and neoantigen epitopes.
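
A hedged sketch of the classification step described here: a random forest over sequence-derived features, evaluated by AUC on an external set. The feature names and files are hypothetical stand-ins for the characteristics listed above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical stand-ins for the extracted peptide characteristics.
FEATURES = ["physicochemical_score", "peptide_entropy", "el_rank_pct",
            "immunogenic_freq_score"]

train = pd.read_csv("hla1_epitopes_train.csv")        # hypothetical file
external = pd.read_csv("hla1_epitopes_external.csv")  # hypothetical file

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(train[FEATURES], train["immunogenic"])

probs = clf.predict_proba(external[FEATURES])[:, 1]
print(f"External AUC: {roc_auc_score(external['immunogenic'], probs):.2f}")
```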


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 12
Author(s):  
Jose M. Castillo T. ◽  
Muhammad Arif ◽  
Martijn P. A. Starmans ◽  
Wiro J. Niessen ◽  
Chris H. Bangma ◽  
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant-prostate-cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of these methods' performance, using various external data sets is crucial. While deep-learning and radiomics approaches have been compared on a single data set from one center, a comparison of the performance of both approaches on data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, of which one was a publicly available patient cohort (n = 195 patients) and the other contained data from patients from two hospitals (n = 79 patients). Using multiparametric MRI (mpMRI), radiologist tumor delineations and pathology reports were collected for all patients. During training, one of our patient cohorts (n = 271 patients) was used for both the deep-learning- and radiomics-model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performance of the models was assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91 and 0.65 on the independent test sets, compared with AUCs of 0.70, 0.73 and 0.44 for the deep-learning model. Our radiomics model, based on delineated regions, thus proved a more accurate tool for significant-PCa classification in the three unseen test sets than the fully automated deep-learning model.
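
A minimal sketch of the evaluation protocol: the model is trained once on the development cohort and then frozen, so each unseen cohort yields its own AUC and per-center performance can be compared across models. The helper below is a hypothetical illustration, not the study's code.

```python
from sklearn.metrics import roc_auc_score

def auc_per_cohort(fitted_model, cohorts):
    """cohorts: dict mapping cohort name -> (features, binary labels).
    The model must already be fitted on the development cohort."""
    return {name: roc_auc_score(y, fitted_model.predict_proba(X)[:, 1])
            for name, (X, y) in cohorts.items()}

# Usage (hypothetical cohorts): auc_per_cohort(model,
#     {"public": (X_pub, y_pub), "two_hospitals": (X_hosp, y_hosp)})
```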


2019 ◽  
Vol 98 (10) ◽  
pp. 1088-1095 ◽  
Author(s):  
J. Krois ◽  
C. Graetz ◽  
B. Holtfreter ◽  
P. Brinkmann ◽  
T. Kocher ◽  
...  

Prediction models learn patterns from available data (training) and are then validated on new data (testing). Prediction modeling is increasingly common in dental research. We aimed to evaluate how different model development and validation steps affect the predictive performance of tooth loss prediction models of patients with periodontitis. Two independent cohorts (627 patients, 11,651 teeth) were followed over a mean ± SD 18.2 ± 5.6 y (Kiel cohort) and 6.6 ± 2.9 y (Greifswald cohort). Tooth loss and 10 patient- and tooth-level predictors were recorded. The impact of different model development and validation steps was evaluated: 1) model complexity (logistic regression, recursive partitioning, random forest, extreme gradient boosting), 2) sample size (full data set or 10%, 25%, or 75% of cases dropped at random), 3) prediction periods (maximum 10, 15, or 20 y or uncensored), and 4) validation schemes (internal or external by centers/time). Tooth loss was generally a rare event (880 teeth were lost). All models showed limited sensitivity but high specificity. Patients’ age and tooth loss at baseline as well as probing pocket depths showed high variable importance. More complex models (random forest, extreme gradient boosting) had no consistent advantages over simpler ones (logistic regression, recursive partitioning). Internal validation (in sample) overestimated the predictive power (area under the curve up to 0.90), while external validation (out of sample) found lower areas under the curve (range 0.62 to 0.82). Reducing the sample size decreased the predictive power, particularly for more complex models. Censoring the prediction period had only limited impact. When the model was trained in one period and tested in another, model outcomes were similar to the base case, indicating temporal validation as a valid option. No model showed higher accuracy than the no-information rate. In conclusion, none of the developed models would be useful in a clinical setting, despite high accuracy. During modeling, rigorous development and external validation should be applied and reported accordingly.
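
A hedged sketch of steps 1 and 4 of this evaluation grid: several model classes trained on one center (internal validation by cross-validation) and tested on the other center (external validation). scikit-learn's GradientBoostingClassifier is swapped in for extreme gradient boosting, and the data files and outcome column are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "recursive partitioning": DecisionTreeClassifier(max_depth=4),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

kiel = pd.read_csv("kiel_cohort.csv")              # hypothetical file
greifswald = pd.read_csv("greifswald_cohort.csv")  # hypothetical file
X_k, y_k = kiel.drop(columns="tooth_lost"), kiel["tooth_lost"]
X_g, y_g = greifswald.drop(columns="tooth_lost"), greifswald["tooth_lost"]

for name, m in models.items():
    # Internal (in-sample-center) validation by 5-fold cross-validation.
    internal = cross_val_score(m, X_k, y_k, cv=5, scoring="roc_auc").mean()
    # External validation: fit on one center, score on the other.
    external = roc_auc_score(y_g, m.fit(X_k, y_k).predict_proba(X_g)[:, 1])
    print(f"{name}: internal AUC {internal:.2f}, external AUC {external:.2f}")
```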


Heart ◽  
2018 ◽  
Vol 104 (23) ◽  
pp. 1921-1928 ◽  
Author(s):  
Ming-Zher Poh ◽  
Yukkee Cheung Poh ◽  
Pak-Hei Chan ◽  
Chun-Ka Wong ◽  
Louise Pun ◽  
...  

Objective: To evaluate the diagnostic performance of a deep learning system for automated detection of atrial fibrillation (AF) in photoplethysmographic (PPG) pulse waveforms. Methods: We trained a deep convolutional neural network (DCNN) to detect AF in 17 s PPG waveforms using a training data set of 149,048 PPG waveforms constructed from several publicly available PPG databases. The DCNN was validated using an independent test data set of 3,039 smartphone-acquired PPG waveforms from adults at high risk of AF at a general outpatient clinic, against ECG tracings reviewed by two cardiologists. Six established AF detectors based on handcrafted features were evaluated on the same test data set for performance comparison. Results: In the validation data set (3,039 PPG waveforms) consisting of three sequential PPG waveforms from 1,013 participants (mean (SD) age, 68.4 (12.2) years; 46.8% men), the prevalence of AF was 2.8%. The area under the receiver operating characteristic curve (AUC) of the DCNN for AF detection was 0.997 (95% CI 0.996 to 0.999) and was significantly higher than that of all the other AF detectors (AUC range: 0.924–0.985). The sensitivity of the DCNN was 95.2% (95% CI 88.3% to 98.7%), specificity was 99.0% (95% CI 98.6% to 99.3%), positive predictive value (PPV) was 72.7% (95% CI 65.1% to 79.3%) and negative predictive value (NPV) was 99.9% (95% CI 99.7% to 100%) using a single 17 s PPG waveform. Using the three sequential PPG waveforms in combination (<1 min in total), the sensitivity was 100.0% (95% CI 87.7% to 100%), specificity was 99.6% (95% CI 99.0% to 99.9%), PPV was 87.5% (95% CI 72.5% to 94.9%) and NPV was 100% (95% CI 99.4% to 100%). Conclusions: In this evaluation of PPG waveforms from adults screened for AF in a real-world primary care setting, the DCNN had high sensitivity, specificity, PPV and NPV for detecting AF, outperforming other state-of-the-art methods based on handcrafted features.
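
A minimal sketch of the operating-point metrics reported above (sensitivity, specificity, PPV, NPV), computed from a binary confusion matrix once waveform-level predictions have been thresholded; the inputs are hypothetical arrays.

```python
import numpy as np

def operating_point_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV and NPV from binary predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}
```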

