Prediction Model for Timing of Death in Potential Donors After Circulatory Death (DCD III): Protocol for a Multicenter Prospective Observational Cohort Study (Preprint)

2019 ◽  
Author(s):  
Angela M M Kotsopoulos ◽  
Piet Vos ◽  
Nichon E Jansen ◽  
Ewald M Bronkhorst ◽  
Johannes G van der Hoeven ◽  
...  

BACKGROUND Controlled donation after circulatory death (cDCD) is a major source of organs for transplantation. A potential cDCD donor poses considerable challenges in terms of identifying those who will die within the predefined time frame of warm ischemia, from withdrawal of life-sustaining treatment (WLST) to circulatory arrest. Several attempts have been made to develop models predicting the time between treatment withdrawal and circulatory arrest. This time window determines whether organ donation can occur and influences the quality of the donated organs. However, the patients selected for these models were not always restricted to potential cDCD donors (eg, patients with cancer or severe infections were also included), which severely limits the generalizability of those data.
OBJECTIVE The objectives of this study are: (1) to develop a model predicting time to death within 60 minutes in potential cDCD patients; (2) to validate and update previous prediction models of time to death after WLST; (3) to determine the timing and patient characteristics associated with prognostication and the decision-making process that leads to initiating end-of-life care; (4) to evaluate the impact of the timing of the family approach on organ donation approval; and (5) to assess the influence of variation in WLST processes on postmortem organ donor potential and actual postmortem organ donors.
METHODS In this multicenter prospective observational cohort study, all patients admitted to the intensive care units of 3 university hospitals and 3 teaching hospitals who met the criteria of the cDCD protocol as defined by the Dutch Transplant Foundation were included. The enrolment target was set at 400 patients. Previously developed models will be refitted in our data set. To further update previous prediction models, we will apply the least absolute shrinkage and selection operator (LASSO) as a tool for efficient variable selection in developing the multivariable logistic regression model.
RESULTS This protocol was funded in August 2014 by the Dutch Transplant Foundation. We expect to have the results of this study in July 2020. Patient enrolment was completed in July 2018, and data collection was completed in April 2020.
CONCLUSIONS This study will provide a robust multimodal prediction model, based on clinical and physiological parameters, that can predict time to circulatory arrest in cDCD donors. In addition, it will add valuable insight into the process of WLST in cDCD donors and fill an important knowledge gap in this essential field of health care.
CLINICALTRIAL ClinicalTrials.gov NCT04123275; https://clinicaltrials.gov/ct2/show/NCT04123275
INTERNATIONAL REGISTERED REPORT DERR1-10.2196/16733
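The LASSO step described in the methods can be sketched in a few lines. Below is a minimal, self-contained illustration of L1-penalised logistic regression fitted by proximal gradient descent (soft-thresholding); the data, penalty strength, and learning rate are hypothetical choices for illustration, not the study's actual model.

```python
import math

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty: shrinks z toward zero by t.
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-penalised logistic regression via proximal gradient descent.
    X: list of feature rows; y: 0/1 labels. The intercept is unpenalised."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    b = 0.0
    for _ in range(iters):
        gw = [0.0] * p
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            r = 1.0 / (1.0 + math.exp(-z)) - yi   # prediction error
            for j in range(p):
                gw[j] += r * xi[j] / n
            gb += r / n
        # Gradient step followed by soft-thresholding (the "shrinkage").
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b
```

Because the soft-threshold sets weak coefficients exactly to zero, the surviving nonzero coefficients constitute the selected variables, which is what makes LASSO usable for variable selection rather than only for shrinkage.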


10.2196/16733 ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. e16733
Author(s):  
Angela M M Kotsopoulos ◽  
Piet Vos ◽  
Nichon E Jansen ◽  
Ewald M Bronkhorst ◽  
Johannes G van der Hoeven ◽  
...  




2020 ◽  
Vol 4 (4) ◽  
pp. 33
Author(s):  
Toni Pano ◽  
Rasha Kashef

During the COVID-19 pandemic, many research studies have examined the impact of the outbreak on the financial sector, especially on cryptocurrencies. Social media such as Twitter plays a significant role as a meaningful indicator for forecasting Bitcoin (BTC) prices. However, there is a research gap in determining the optimal preprocessing strategy for BTC tweets when developing an accurate machine learning prediction model for Bitcoin prices. This paper develops different text preprocessing strategies for correlating the sentiment scores of Twitter text with Bitcoin prices during the COVID-19 pandemic. We explore the effect of different preprocessing functions, features, and time lengths of data on the correlation results. Out of 13 strategies, we discover that splitting sentences, removing Twitter-specific tags, or their combination generally improves the correlation of sentiment scores and volume polarity scores with Bitcoin prices. The prices only correlate well with sentiment scores over shorter timespans. Selecting the optimal preprocessing strategy would allow machine learning prediction models to achieve better accuracy relative to the actual prices.



2017 ◽  
Vol 18 (4) ◽  
pp. 890-896 ◽  
Author(s):  
A. M. M. Kotsopoulos ◽  
F. Böing-Messing ◽  
N. E. Jansen ◽  
P. Vos ◽  
W. F. Abdo


2020 ◽  
Vol 3 (3) ◽  
pp. 138-146
Author(s):  
Camilla Matos Pedreira ◽  
José Alves Barros Filho ◽  
Carolina Pereira ◽  
Thamine Lessa Andrade ◽  
Ricardo Mingarini Terra ◽  
...  

Objectives: This study aims to evaluate the impact of using three predictive models of lung nodule malignancy in a population of patients at high risk for neoplasia according to previous evaluation by physicians, as well as to evaluate the clinical and radiological predictors of malignancy in the images.
Material and Methods: This is a retrospective cohort study of 135 patients who underwent surgery in the period from 01/07/2013 to 10/05/2016. The study included nodules with dimensions between 5 mm and 30 mm, excluding multiple nodules, alveolar consolidation, pleural effusion, and lymph node enlargement. The main variables analyzed were age, sex, smoking history, extrathoracic cancer, diameter, location, and presence of spiculation. The accuracy of each prediction model was assessed by calculating the area under the ROC curve.
Results: Of the 135 individuals analyzed, 96 (71.1%) had malignant nodules. The areas under the ROC curves for each prediction model were: Swensen 0.657, Brock 0.662, and Herder 0.633. The Swensen, Brock, and Herder models presented positive predictive values in high-risk patients of 83.3%, 81.8%, and 82.9%, respectively. Patients at intermediate and low risk also presented high rates of malignant nodules, ranging from 69.3% to 72.5% and from 42.8% to 52.6%, respectively.
Conclusion: None of the three quantitative models analyzed in this study was considered satisfactory (AUC > 0.7), and they should be used with caution after specialized evaluation to avoid underestimating the risk of neoplasia. The pretest calculations may not account for factors beyond those included in the regressions that could play a role in the clinical decision to resect.
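For reference, the area under the ROC curve used to compare the models equals the probability that a randomly chosen malignant case receives a higher predicted score than a randomly chosen benign case (ties counted half). A minimal sketch, with made-up scores rather than the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (malignant, benign) pairs in which the malignant
    case is scored higher, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the model ranks cases no better than chance; the study's threshold of 0.7 is a conventional floor for "acceptable" discrimination.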



Author(s):  
Chien-Cheng Jung ◽  
Wan-Yi Lin ◽  
Nai-Yun Hsu ◽  
Chih-Da Wu ◽  
Hao-Ting Chang ◽  
...  

Exposure to indoor particulate matter less than 2.5 µm in diameter (PM2.5) is a critical health risk factor, so measuring indoor PM2.5 concentrations is important for assessing health risks and for investigating sources and influential factors. However, installing monitoring instruments to collect indoor PM2.5 data is difficult and expensive, and several indoor PM2.5 concentration prediction models have therefore been developed. These prediction models, however, only estimate daily average PM2.5 concentrations in cold or temperate regions, and the factors that influence PM2.5 concentration differ according to climatic conditions. In this study, we developed a prediction model for hourly indoor PM2.5 concentrations in Taiwan (a tropical and subtropical region) using a multiple linear regression model and investigated the influential factors. The sample comprised 93 study cases (1,979 measurements) and 25 potential predictor variables, and cross-validation was performed to assess performance. The prediction model explained 74% of the variation; outdoor PM2.5 concentration, the difference between indoor and outdoor CO2 levels, building type, building floor level, bed sheet cleaning, bed sheet replacement, and mosquito coil burning were included in the model. Cross-validation explained 75% of the variation on average. The results also confirm that the prediction model can be used to estimate indoor PM2.5 concentrations across seasons and areas. In summary, we developed a prediction model of hourly indoor PM2.5 concentrations and suggest that outdoor PM2.5 concentration, ventilation, building characteristics, and human activities should be considered. Moreover, occupants should take outdoor air quality into account when opening or closing windows or doors to regulate the ventilation rate, and changes in human activities can also reduce indoor PM2.5 concentrations.
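The multiple linear regression underlying such a model can be sketched in pure Python by solving the normal equations. The synthetic two-predictor data below stand in for the real predictors (outdoor PM2.5, indoor-outdoor CO2 difference, etc.) and are illustrative only.

```python
def fit_ols(X, y):
    """Ordinary least squares fit of y on X (with intercept) by solving
    the normal equations (X'X)b = X'y via Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]           # prepend intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):                            # forward elimination with pivoting
        k = max(range(i, p), key=lambda q: abs(A[q][i]))
        A[i], A[k] = A[k], A[i]
        v[i], v[k] = v[k], v[i]
        for q in range(i + 1, p):
            f = A[q][i] / A[i][i]
            for c in range(i, p):
                A[q][c] -= f * A[i][c]
            v[q] -= f * v[i]
    b = [0.0] * p
    for i in reversed(range(p)):                  # back substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b                                      # [intercept, coef_1, coef_2, ...]
```

The fitted coefficients are what the study reports as the model's influential factors; the fraction of variation explained (R²) then measures how well those predictors jointly track hourly indoor PM2.5.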



BMC Cancer ◽  
2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Bogdan Grigore ◽  
Ruth Lewis ◽  
Jaime Peters ◽  
Sophie Robinson ◽  
Christopher J. Hyde

Abstract Background Tools based on diagnostic prediction models are available to help general practitioners (GP) diagnose colorectal cancer. It is unclear how well they perform and whether they lead to increased or quicker diagnoses and ultimately impact on patient quality of life and/or survival. The aim of this systematic review is to evaluate the development, validation, effectiveness, and cost-effectiveness of cancer diagnostic tools for colorectal cancer in primary care. Methods Electronic databases including Medline and Web of Science were searched in May 2017 (updated October 2019). Two reviewers independently screened titles, abstracts and full texts. Studies were included if they reported the development, validation or accuracy of a prediction model, or assessed the effectiveness or cost-effectiveness of diagnostic tools based on prediction models to aid GP decision-making for symptomatic patients presenting with features potentially indicative of colorectal cancer. Data extraction and risk of bias were completed by one reviewer and checked by a second. A narrative synthesis was conducted. Results Eleven thousand one hundred thirteen records were screened and 23 studies met the inclusion criteria. Twenty studies reported on the development, validation and/or accuracy of 13 prediction models: eight for colorectal cancer, five for cancer areas/types that include colorectal cancer. The Qcancer models were generally the best performing. Three impact studies met the inclusion criteria. Two (an RCT and a pre-post study) assessed tools based on the RAT prediction model. The third study looked at the impact of GP practices having access to RAT or Qcancer. Although the pre-post study reported a positive impact of the tools on outcomes, the results of the RCT and the cross-sectional study found no evidence that use of, or access to, the tools was associated with better outcomes. No study evaluated cost-effectiveness. 
Conclusions Many prediction models have been developed but none have been fully validated. Evidence demonstrating improved patient outcomes from introducing the tools is the main deficiency and is essential given the imperfect classification achieved by all tools. This need is emphasised by the equivocal results of the small number of impact studies done so far.



2020 ◽  
Vol 74 ◽  
pp. 05024
Author(s):  
Lucia Svabova ◽  
Lucia Michalkova

Prediction models revealing the threat of financial difficulties in companies are created by applying various multivariate statistical methods. From a global perspective, prediction models serve to classify a company into a group of prosperous or non-prosperous companies, or to quantify the probability of financial difficulties in the company. In many countries around the world, real financial data about companies are used in developing these prediction models. In Slovakia, standard data from the financial statements and annual reports of Slovak companies are used to create company failure models. Since these are generally large data files, the data must be pre-processed by suitable methods before the prediction model is constructed. A database of companies needs to be prepared for the subsequent application of statistical methods, and it is also highly appropriate to focus on the detection of potential extreme and remote observations. Therefore, this article focuses on quantifying the impact of the detected data structure, for example the occurrence of extreme and remote observations in the data set, on the overall classification and prediction ability of the resulting models.
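One common screen for the extreme observations mentioned above is Tukey's interquartile-range fence; the short sketch below is a generic illustration (the fence multiplier and sample values are illustrative, not the article's method).

```python
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences),
    a common pre-processing screen for extreme observations."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]
```

Flagged observations would then be inspected, winsorized, or excluded before the classification model is fitted, so that a handful of atypical companies does not distort the estimated model.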



2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Youjin Jang ◽  
Inbae Jeong ◽  
Yong K. Cho

Purpose
The study seeks to identify the impact of variables in a deep learning-based bankruptcy prediction model, which has achieved performance superior to other prediction models but whose hidden processes cannot easily be interpreted.
Design/methodology/approach
This study developed three LSTM-RNN-based models that predict the probability of bankruptcy 1, 2, and 3 years ahead, using financial, construction-market, and macroeconomic variables as inputs. The impacts of the input variables on prediction accuracy in each model were then identified using Shapley values and compared among the three models. This study also investigated prediction accuracy using variants of the input variables grouped sequentially by high-impact ranking.
Findings
The results showed that prediction accuracy was largely impacted by "housing starts" in all models. As the prediction period increased, the effect of macroeconomic variables on prediction accuracy increased, whereas the impact of "return on assets" decreased. The "current ratio" and "debt ratio" also significantly influenced prediction accuracy in all models. In addition, the results revealed that similar prediction accuracy could be achieved using only 8, 10, and 10 variables out of a total of 18 for the 1-, 2-, and 3-year prediction models, respectively.
Originality/value
This study provides a Shapley value-based approach to identify how each input variable in a deep-learning bankruptcy prediction model affects the prediction. The findings can not only provide better insight into the underlying concept of bankruptcy but also be used to select variables by removing those identified as less significant.
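The Shapley-value attribution used here averages each variable's marginal contribution to model performance over all orderings of the variables. A toy exact computation is below; the feature names echo the abstract, but the additive accuracy-gain value function is hypothetical, chosen so the answer is easy to check (real applications approximate the average over sampled coalitions rather than enumerating all orderings).

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: for each ordering of the features, credit each
    feature with its marginal contribution value(S + {f}) - value(S), then
    average over all orderings. `value` maps a frozenset to performance."""
    phi = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = frozenset()
        for f in order:
            phi[f] += value(coalition | {f}) - value(coalition)
            coalition = coalition | {f}
    return {f: p / len(orderings) for f, p in phi.items()}

# Hypothetical additive accuracy gains over a 0.5 baseline, for illustration.
acc = {"housing_starts": 0.20, "current_ratio": 0.10, "debt_ratio": 0.05}
v = lambda s: 0.5 + sum(acc[f] for f in s)
phi = shapley_values(list(acc), v)
```

Because the toy value function is additive, each Shapley value equals that feature's own gain; with a real model the interactions between features make the attribution non-trivial, which is exactly why the averaging over orderings is needed.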



2021 ◽  
Vol 9 ◽  
Author(s):  
Yikang Wang ◽  
Liying Zhang ◽  
Miaomiao Niu ◽  
Ruiying Li ◽  
Runqi Tu ◽  
...  

Background: Previous studies have constructed prediction models for type 2 diabetes mellitus (T2DM), but machine learning was rarely used, and few focused on genetic prediction. This study aimed to establish an effective T2DM prediction tool and to further explore the potential of genetic risk scores (GRS) via various classifiers among rural adults.
Methods: In this prospective study, the GRS was calculated for a total of 5,712 participants from the Henan Rural Cohort Study. Cox proportional hazards (CPH) regression was used to analyze the associations between GRS and T2DM. CPH, artificial neural network (ANN), random forest (RF), and gradient boosting machine (GBM) models were used to establish prediction models. The area under the receiver operating characteristic curve (AUC) and net reclassification index (NRI) were used to assess the discrimination ability of the models. Decision curves were plotted to determine the clinical utility of the prediction models.
Results: Compared with individuals in the lowest quintile of the GRS, the HR (95% CI) was 2.06 (1.40 to 3.03) for those in the highest quintile (Ptrend < 0.05). Based on conventional predictors, the AUCs of the prediction models were 0.815, 0.816, 0.843, and 0.851 via CPH, ANN, RF, and GBM, respectively. Changes with the integration of GRS for CPH, ANN, RF, and GBM were 0.001, 0.002, 0.018, and 0.033, respectively. Reclassification was significantly improved for all classifiers when adding the GRS (NRI: 41.2% for CPH; 41.0% for ANN; 46.4% for RF; 45.1% for GBM). Decision curve analysis indicated the clinical benefits of the models combined with GRS.
Conclusion: The prediction models combined with GRS may provide incremental predictive performance beyond conventional factors for T2DM, which demonstrates the potential clinical use of genetic markers to screen vulnerable populations.
Clinical Trial Registration: The Henan Rural Cohort Study is registered in the Chinese Clinical Trial Register (registration number: ChiCTR-OOC-15006699). http://www.chictr.org.cn/showproj.aspx?proj=11375.
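A genetic risk score of the kind combined with the classifiers above is typically a weighted sum of risk-allele counts; here is a minimal sketch with hypothetical SNP identifiers and effect sizes (the study's actual SNP panel and weights are not given in this abstract).

```python
def genetic_risk_score(genotypes, weights):
    """GRS = sum over SNPs of risk-allele count (0, 1, or 2) times the
    per-allele effect size (e.g. the log odds ratio from a GWAS)."""
    return sum(weights[snp] * count for snp, count in genotypes.items())

# Hypothetical SNPs and per-allele effect sizes, for illustration only.
weights = {"rs0000001": 0.30, "rs0000002": 0.10}
person = {"rs0000001": 2, "rs0000002": 1}   # risk-allele counts
score = genetic_risk_score(person, weights)
```

The resulting score is then added as one extra predictor alongside the conventional risk factors, which is the "integration of GRS" whose AUC and NRI changes the abstract reports.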



Author(s):  
Ji-Sun Kang et al.

To resolve extreme weather events well, running a numerical weather prediction model at high resolution in time and space is essential. We explore how efficiently such modeling can be performed on NURION, KISTI's fifth supercomputer and a national HPC resource, using WRF, one of the community numerical weather prediction models. Scalability of the model was tested first, and we compared the computational efficiency of hybrid OpenMP + MPI runs with pure MPI runs. In addition to these parallel computing experiments, we tested a new storage layer called burst buffer to see whether it can accelerate frequent I/O. We found significant differences between the computational environments for running the WRF model. First, we tested the sensitivity of computational efficiency to the number of cores used per node. These sensitivity experiments clearly show that using all cores per node does not guarantee the best results; rather, leaving several cores per node idle can give more stable and efficient computation. Moreover, for the current experimental configuration of WRF, pure MPI runs give much better computational performance than any hybrid OpenMP + MPI run. Lastly, we tested the burst buffer storage layer, which is expected to accelerate frequent I/O. However, our experiments show that its impact is not consistently positive: we clearly confirm a positive impact for relatively small problem sizes, whereas no impact was seen for larger problems. The significant sensitivity to different computational configurations shown in this paper strongly suggests that HPC users should identify the best computing environment before putting their applications into massive use.


