Heat Stroke Prevention in Hot Specific Occupational Environment Enhanced by Supervised Machine Learning with Personalized Vital Signs

Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 395
Author(s):  
Takunori Shimazaki ◽  
Daisuke Anzai ◽  
Kenta Watanabe ◽  
Atsushi Nakajima ◽  
Mitsuhiro Fukuda ◽  
...  

Recently, the wet-bulb globe temperature (WBGT) has attracted considerable attention as a useful index for assessing heat stroke risk when core body temperature is not available for prevention purposes. However, because the WBGT is only valid in the vicinity of the WBGT meter, the actual heat exposure can differ even within the same room owing to ventilation, clothing, and body size, especially in hot specific occupational environments. To realize reliable heat stroke prevention in hot workplaces, we proposed a new personalized vital sign index that combines several types of vital data, including a personalized heat strain temperature (pHST) index based on temperature/humidity measurements, which adjusts the WBGT at the individual level. In this study, a wearable device was equipped with the proposed pHST meter, a heart rate monitor, and an accelerometer. Additionally, supervised machine learning based on the proposed personalized vital index was introduced to improve prevention accuracy. Our developed system with the proposed vital sign index achieved a prevention accuracy of 85.2% in a hot occupational experiment during the summer season, with a true positive rate of 96.3% and a true negative rate of 83.7%.
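A minimal sketch of the kind of pipeline this abstract describes: per-worker features (pHST, heart rate, acceleration) feeding a supervised classifier evaluated by true positive and true negative rates. The synthetic data, labels, and choice of a random forest are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: feature layout, labels, and classifier are
# assumptions, not the published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical per-worker samples: [pHST (deg C), heart rate (bpm), activity (g)]
X = rng.normal(loc=[28.0, 95.0, 0.4], scale=[3.0, 15.0, 0.2], size=(500, 3))
# Hypothetical labels: 1 = heat-strain risk, 0 = safe
y = (0.5 * (X[:, 0] - 28) + 0.03 * (X[:, 1] - 95) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Report the same metrics the abstract uses: TPR and TNR
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"TPR: {tp / (tp + fn):.3f}, TNR: {tn / (tn + fp):.3f}")
```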

2016 ◽  
Vol 25 (4) ◽  
pp. 515-528 ◽  
Author(s):  
Ross Stewart Sparks ◽  
Chris Okugami

Abstract The vital signs of chronically ill patients are monitored daily. The record flags when a specific vital sign is stable or when it trends into dangerous territory. Patients also self-assess their current state of well-being, i.e. whether they are feeling worse than usual, neither unwell nor very well compared to usual, or better than usual. This paper examines whether past vital sign data can be used to forecast how well a patient is going to feel the next day. Reliable forecasting of a chronically sick patient’s likely state of health would be useful in regulating the care provided by a community nurse, scheduling care when the patient needs it most. The hypothesis is that the vital signs indicate a trend before a person feels unwell and are, therefore, lead indicators of a patient going to feel unwell. Time series and classification or regression tree methods are used to simplify the process of observing multiple measurements such as body temperature, heart rate, etc., by selecting the vital sign measures that best forecast well-being. We use machine learning techniques to automatically find the combination of these vital sign measurements, and the rules over them, that best forecasts the wellness of individual patients. The machine learning models provide rules that can be used to monitor the future wellness of a patient and regulate their care plans.
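A rough sketch of the lagged-vital-sign idea, assuming a simple classification tree as the rule learner; the data, thresholds, and label definition are synthetic placeholders rather than the paper's actual variables.

```python
# Sketch: yesterday's vitals as predictors of today's self-assessed state,
# with an interpretable decision tree producing the forecasting rules.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
days = 365
df = pd.DataFrame({
    "heart_rate": rng.normal(75, 8, days),
    "body_temp": rng.normal(36.8, 0.3, days),
})
# Lagged predictors: yesterday's vitals forecast today's well-being
df["hr_lag1"] = df["heart_rate"].shift(1)
df["temp_lag1"] = df["body_temp"].shift(1)
# Hypothetical label: "worse than usual" when yesterday's vitals were elevated
df["unwell"] = ((df["hr_lag1"] > 85) | (df["temp_lag1"] > 37.3)).astype(int)
df = df.dropna()

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(df[["hr_lag1", "temp_lag1"]], df["unwell"])
# The fitted tree yields human-readable rules of the kind the paper exploits
print(export_text(tree, feature_names=["hr_lag1", "temp_lag1"]))
```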


2021 ◽  
Author(s):  
Naveena Yanamala ◽  
Nanda H. Krishna ◽  
Quincy A. Hathaway ◽  
Aditya Radhakrishnan ◽  
Srinidhi Sunkara ◽  
...  

Abstract Patients with influenza and SARS-CoV-2/Coronavirus disease 2019 (COVID-19) infections have different clinical courses and outcomes. We developed and validated a supervised machine learning pipeline to distinguish the two viral infections using the available vital signs and demographic data from the first hospital/emergency room encounters of 3,883 patients who had confirmed diagnoses of influenza A/B, COVID-19, or negative laboratory test results. The models were able to achieve an area under the receiver operating characteristic curve (ROC AUC) of at least 97% using our multiclass classifier. The predictive models were externally validated on 15,697 encounters in 3,125 patients available in the TrinetX database, which contains patient-level data from different healthcare organizations. The influenza vs. COVID-19-positive model had an AUC of 98% and 92% on the internal and external test sets, respectively. Our study illustrates the potential of machine-learning models for accurately distinguishing the two viral infections. The code is made available at https://github.com/ynaveena/COVID-19-vs-Influenza and may have utility as a frontline diagnostic tool to aid healthcare workers in triaging patients once the two viral infections start cocirculating in communities.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Naveena Yanamala ◽  
Nanda H. Krishna ◽  
Quincy A. Hathaway ◽  
Aditya Radhakrishnan ◽  
Srinidhi Sunkara ◽  
...  

Abstract Patients with influenza and SARS-CoV-2/Coronavirus disease 2019 (COVID-19) infections have different clinical courses and outcomes. We developed and validated a supervised machine learning pipeline to distinguish the two viral infections using the available vital signs and demographic data from the first hospital/emergency room encounters of 3883 patients who had confirmed diagnoses of influenza A/B, COVID-19, or negative laboratory test results. The models were able to achieve an area under the receiver operating characteristic curve (ROC AUC) of at least 97% using our multiclass classifier. The predictive models were externally validated on 15,697 encounters in 3125 patients available in the TrinetX database, which contains patient-level data from different healthcare organizations. The influenza vs. COVID-19-positive model had an AUC of 98.8% and 92.8% on the internal and external test sets, respectively. Our study illustrates the potential of machine-learning models for accurately distinguishing the two viral infections. The code is made available at https://github.com/ynaveena/COVID-19-vs-Influenza and may have utility as a frontline diagnostic tool to aid healthcare workers in triaging patients once the two viral infections start cocirculating in communities.
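A minimal sketch of evaluating such a three-class model (COVID-19, influenza, negative) with a multiclass ROC AUC, as reported above; the synthetic features stand in for vital signs and demographics, and gradient boosting is chosen purely for illustration.

```python
# Sketch: multiclass classifier evaluated with one-vs-rest ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for vital signs + demographics; 3 classes: COVID-19, influenza, negative
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)
# One-vs-rest macro-averaged AUC for the three-class problem
print("multiclass ROC AUC:", roc_auc_score(y_te, proba, multi_class="ovr"))
```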


2016 ◽  
Vol 44 (7) ◽  
pp. e456-e463 ◽  
Author(s):  
Lujie Chen ◽  
Artur Dubrawski ◽  
Donghan Wang ◽  
Madalina Fiterau ◽  
Mathieu Guillame-Bert ◽  
...  

2019 ◽  
Author(s):  
Elizabeth A. Proctor ◽  
Shauna M. Dineen ◽  
Stephen C. Van Nostrand ◽  
Madison K. Kuhn ◽  
Christopher D. Barrett ◽  
...  

Abstract Heat stroke is a life-threatening condition characterized by loss of thermoregulation and severe elevation of core body temperature, which can cause organ failure and damage to the central nervous system. While no definitive test exists to measure heat stroke severity, immune challenge is known to increase heat stroke risk, although the mechanism of this increased risk is unclear. In this study, we used a mouse model of classic heat stroke to test the effect of immune challenge on pathology. Employing multivariate supervised machine learning to identify patterns of molecular and cellular markers associated with heat stroke, we found that prior viral infection, simulated with poly I:C injection, resulted in heat stroke presenting with high levels of factors indicating coagulopathy. Despite a decreased number of platelets in the blood, the platelets were large and non-uniform in size, suggesting younger, more active platelets. Levels of D-dimer and soluble thrombomodulin were increased in more severe heat stroke, and in cases presenting with the highest levels of organ damage markers, D-dimer levels dropped, indicating potential fibrinolysis-resistant thrombosis. Genes corresponding to immune response, coagulation, hypoxia, and vessel repair were up-regulated in kidneys of heat-challenged animals, and these increases correlated with both viral treatment and distal organ damage while appearing before discernible tissue damage to the kidney itself. We conclude that heat stroke-induced coagulopathy may be a driving mechanistic force in heat stroke pathology, especially when exacerbated by prior infection, and that coagulation markers may serve as accessible biomarkers for heat stroke severity and therapeutic strategies.
Key points:
- A signature of pro-coagulation markers predicts circadian core body temperature and levels of organ damage in heat stroke.
- Changes in coagulopathy-related gene expression are evident before histopathological organ damage.
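A conceptual sketch of the multivariate supervised-learning step: classifying heat stroke severity from a panel of coagulation markers and inspecting which markers drive the model. The marker values, labels, and random forest are assumptions for illustration; the paper's actual modeling may differ.

```python
# Sketch: severity classification from a marker panel, with feature
# importances pointing at the markers that carry the signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
markers = ["D-dimer", "soluble_thrombomodulin", "platelet_count", "platelet_size"]
X = rng.normal(size=(120, len(markers)))
# Hypothetical label: 1 = severe heat stroke, driven partly by coagulation markers
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 120) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
for name, imp in zip(markers, clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```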


2018 ◽  
Vol 11 (6) ◽  
Author(s):  
Julian Wolf ◽  
Stephan Hess ◽  
David Bachmann ◽  
Quentin Lohmeyer ◽  
Mirko Meboldt

For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is indispensable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications involving the manipulation of tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto the respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm’s performance is validated against a manual fixation-by-fixation mapping, considered the ground truth, in terms of true positive rate (TPR), true negative rate (TNR), and efficiency. Using only 72 training images with 264 labelled object representations, cGOM is able to reach a TPR of approximately 80% and a TNR of 85% compared with the manual mapping. The break-even point is reached at 2 hours of eye tracking recording for the total procedure, or 1 hour when considering human working time only. Together with the real-time capability of the mapping process after training is completed, even hours of eye tracking recording can be evaluated efficiently. (Code and video examples have been made available at: https://gitlab.ethz.ch/pdz/cgom.git)
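The core gaze-to-AOI mapping step can be sketched as follows: given boolean instance masks from a segmentation model such as Mask R-CNN, report which object a fixation lands on. The masks and labels here are synthetic placeholders; the actual implementation is in the repository linked above.

```python
# Sketch: assign a gaze point to the AOI whose instance mask contains it.
import numpy as np

def map_gaze_to_aoi(gaze_xy, masks, labels):
    """Return the label of the mask containing the gaze point, else None."""
    x, y = gaze_xy
    for mask, label in zip(masks, labels):
        if mask[y, x]:          # boolean mask indexed as [row, col]
            return label
    return None

h, w = 480, 640
cup = np.zeros((h, w), dtype=bool)
cup[100:200, 300:400] = True     # synthetic "cup" instance mask

print(map_gaze_to_aoi((350, 150), [cup], ["cup"]))  # -> "cup"
print(map_gaze_to_aoi((10, 10), [cup], ["cup"]))    # -> None
```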


Blood ◽  
2018 ◽  
Vol 132 (Supplement 1) ◽  
pp. 711-711
Author(s):  
Sanjeet Dadwal ◽  
Zahra Eftekhari ◽  
Tushondra Thomas ◽  
Deron Johnson ◽  
Dongyun Yang ◽  
...  

Abstract Sepsis and severe sepsis contribute significantly to early treatment-related mortality after hematopoietic cell transplantation (HCT), with reported mortality rates of 30% and 55% due to severe sepsis during the engraftment admission for autologous and allogeneic HCT, respectively. Since the clinical presentation and characteristics of sepsis immediately after HCT can differ from those seen in the general population or in patients receiving non-HCT chemotherapy, detecting early signs of sepsis in HCT recipients becomes critical. Herein, we developed and validated a machine-learning based sepsis prediction model for patients who underwent HCT at City of Hope, using variables within the Electronic Health Record (EHR) data. We evaluated a consecutive case series of 1046 HCTs (autologous: n=491, allogeneic: n=555) at our center between 2014 and 2017. The median age at the time of HCT was 56 years (range: 18-78). For this analysis, the primary clinical event was sepsis diagnosis within 100 days post-HCT, identified based on use of the institutional sepsis management order set and mention of "sepsis" in the progress notes. The time of the sepsis order set was taken as the time of sepsis for analyses. To train the model, 829 visits (104 septic and 725 non-septic) and their data were used, while 217 visits (31 septic and 186 non-septic) served as a validation cohort. At each hour after HCT, when a new data point was available, 47 variables were calculated from each patient's data and a risk score was assigned to each time point. These variables consisted of patient demographics, transplant type, regimen intensity, disease status, hematopoietic cell transplantation-specific comorbidity index, lab values, vital signs, medication orders, and comorbidities. For the 829 visits in the training dataset, the 47 variables were calculated at 220,889 different time points, resulting in a total of 10,381,783 data points. Lab values and vital signs were expressed as changes from each patient's baseline at each time point; the baseline for each lab value and vital sign was the last value measured before HCT. An ensemble of 20 random forest binary classification models was trained to identify and learn patterns of data for HCT patients at high risk for sepsis and differentiate them from patients at lower sepsis risk. To help the models learn patterns of data prior to sepsis, available data from septic patients within the 24 hours preceding the diagnosis of sepsis were used. For the septic visits in the training dataset, there were 5048 time points, each with 47 variables. Variable importance for the 20 models was assessed using the Gini mean decrease accuracy method; the sum of the importance values from the models was taken as the final importance value for each variable. Figure 1a shows the variable importances obtained with this method. Testing the model on the validation cohort yielded an AUC of 0.85 (Figure 1b). At a threshold of 0.6, our model had a sensitivity of 0.32 and a specificity of 0.96. At this threshold, the model identified 10 of the 31 septic patients with a median lead time of 119.5 hours; 2 of these patients were flagged as high risk at the time of transplant and developed sepsis at 17 and 60 days post-HCT. This lead time is what truly sets the predictive model apart from detection models that use organ failure, organ dysfunction, or other deterioration metrics as their criteria. At a threshold of 0.4, our model had a sensitivity of 0.9 and a specificity of 0.65.
In summary, a machine-learning sepsis prediction model can be tailored to HCT recipients to improve the quality of care, prevent sepsis-associated organ damage, and decrease mortality post-HCT. Our model significantly outperforms the widely used Modified Early Warning Score (MEWS), which has an AUC of 0.73 in the general population. Possible applications of our model include a "red flag" at a threshold of 0.6 (0.32 true positive rate and 0.04 false positive rate) for antibiotic initiation/modification, and a "yellow flag" at a threshold of 0.4 (0.9 true positive rate and 0.35 false positive rate) suggesting closer monitoring or less aggressive treatments for the patient. Figure 1. Disclosures Dadwal: MERK: Consultancy, Membership on an entity's Board of Directors or advisory committees, Research Funding, Speakers Bureau; Gilead: Research Funding; AiCuris: Research Funding; Shire: Research Funding.
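A schematic of two elements the abstract describes: features expressed as deltas from each patient's pre-HCT baseline, and the two-threshold red/yellow flag on a risk score averaged over an ensemble of random forests. All values, variable names, and the training data are illustrative assumptions, not the study's actual 47-variable feature set.

```python
# Sketch: baseline-delta features, ensemble risk score, two-threshold flags.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

baseline = {"temp": 36.7, "heart_rate": 80.0, "wbc": 5.2}   # pre-HCT baselines
current = {"temp": 38.4, "heart_rate": 112.0, "wbc": 0.4}   # hypothetical time point
features = np.array([[current[k] - baseline[k] for k in baseline]])

# Stand-in training data; the paper trains an ensemble of 20 random forests
X_train = rng.normal(size=(500, 3))
y_train = (X_train.sum(axis=1) + rng.normal(0, 1, 500) > 1).astype(int)
ensemble = [RandomForestClassifier(n_estimators=50, random_state=i).fit(X_train, y_train)
            for i in range(20)]
risk = np.mean([m.predict_proba(features)[0, 1] for m in ensemble])

if risk >= 0.6:
    print("red flag: consider antibiotic initiation/modification")
elif risk >= 0.4:
    print("yellow flag: closer monitoring")
else:
    print("low risk")
```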


2020 ◽  
Vol 38 (15_suppl) ◽  
pp. e13040-e13040
Author(s):  
Jose Manuel Jerez ◽  
Nuria Ribelles ◽  
Pablo Rodriguez-Brazzarola ◽  
Tamara Diaz Redondo ◽  
Begoña Jimenez Rodriguez ◽  
...  

e13040 Background: The treatment of luminal MBC has undergone a substantial change with the use of cyclin-dependent kinase 4/6 inhibitors (CDKIs). Nevertheless, there is no clearly defined subgroup of patients who do not initially respond to CDKIs and show early progression (EP). Methods: MBC ER+/HER2- patients who had received at least one line of treatment were eligible. The event of interest was disease progression within 6 months of first-line treatment according to the type of therapy administered. The first-line treatments were categorized as chemotherapy (CT), hormonal therapy (HT), CT plus maintenance HT, and HT plus CDKIs. Free-text data from clinical visits registered in our Electronic Health Record were obtained up to the date of first treatment in order to generate a feature vector composed of the word frequencies for each visit of every patient. Six different machine learning algorithms were evaluated to predict the event of interest and to obtain the risk of EP for every type of therapy. Area under the ROC curve (AUC), true positive rate (TPR), and true negative rate (TNR) were assessed using 10-fold cross-validation. Results: 610 ER+/HER2- MBC patients treated between November 1991 and August 2019 were included. Median follow-up for metastatic disease was 28 months. 17,426 clinical visits were analyzed (per patient: range 1-173; median 30). 119 patients received CT as first-line treatment, 311 HT, 117 CT plus maintenance HT, and 63 HT plus CDKIs. There were 379 patients with disease progression, of whom 126 progressed within 6 months of first-line treatment (54 events with CT, 57 with HT, 4 with CT plus maintenance HT, and 11 with HT plus CDKIs). The model that yielded the best results was the GLMBoost algorithm: AUC 0.72 (95% CI 0.67-0.77), TPR 70.85% (95% CI 70.63%-71.06%), TNR 66.27% (95% CI 66.08%-66.46%). Conclusions: Our model, based on unstructured data from real-world patients, predicts EP and establishes the risk for each of the different types of treatment for ER+/HER2- MBC. Additional validation is needed, but a tool with these characteristics could help select the best available treatment when that decision has to be made, avoiding therapies that are unlikely to be effective.
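A sketch of the text pipeline described above: word-frequency vectors from visit notes feeding a boosted model, evaluated with 10-fold cross-validated AUC. The notes and labels are invented, and scikit-learn's GradientBoostingClassifier stands in for the GLMBoost algorithm used in the study.

```python
# Sketch: bag-of-words features from clinical free text + boosted classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

notes = [
    "bone pain progressing fatigue weight loss",
    "stable disease good tolerance to therapy",
    "new liver lesions rising tumor markers",
    "no complaints continues hormonal therapy",
] * 25
labels = [1, 0, 1, 0] * 25   # 1 = early progression within 6 months (hypothetical)

model = make_pipeline(CountVectorizer(), GradientBoostingClassifier(random_state=0))
auc = cross_val_score(model, notes, labels, cv=10, scoring="roc_auc")
print("10-fold CV AUC:", auc.mean())
```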


Author(s):  
Monika Filipovska ◽  
Hani S. Mahmassani

Traffic flow breakdown is the abrupt shift from operation at free-flow conditions to congested conditions and is typically the result of complex interactions in traffic dynamics. Because of its stochastic nature, breakdown is commonly predicted only in a probabilistic manner. This paper focuses on using stationary aggregated traffic data to capture traffic dynamics, developing and testing machine learning (ML) approaches for traffic breakdown prediction and comparing them with the traditionally used probabilistic approaches. The contribution of this study is three-fold: it explores the usefulness of temporally and spatially lagged detector data in predicting traffic flow breakdown occurrence, it develops and tests ML approaches for traffic breakdown prediction using these data, and it compares the predictive power and performance of these approaches with the traditionally used probabilistic methods. Feature selection results indicate that breakdown prediction benefits greatly from the inclusion of temporally and spatially lagged variables. Comparing the ML methods with the probabilistic approaches, the ML methods achieve better prediction performance in terms of class-balanced accuracy, true positive rate (recall), true negative rate (specificity), and positive predictive value (precision). Depending on the application of the prediction approach, the method selection criteria may differ on a case-by-case basis. Overall, the best performance was achieved by the neural network and support vector machine approaches with class balancing, and by the random forest approach without class balancing. Recommendations on the choice of prediction approach based on the specific application objectives are also given.
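A sketch of the feature construction the paper relies on: temporally and spatially lagged detector readings as predictors of breakdown, with class imbalance handled through class weights. The detector layout, thresholds, and data are synthetic assumptions.

```python
# Sketch: lagged detector features + class-weighted random forest,
# evaluated with the class-balanced accuracy the paper reports.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "speed_here": rng.normal(100, 15, n),       # km/h at the subject detector
    "speed_upstream": rng.normal(100, 15, n),   # spatial lag: upstream detector
})
df["speed_here_lag1"] = df["speed_here"].shift(1)        # temporal lag
# Hypothetical breakdown label: low speeds at both detectors
df["breakdown"] = ((df["speed_here"] < 80) & (df["speed_upstream"] < 85)).astype(int)
df = df.dropna()

X = df[["speed_here_lag1", "speed_upstream"]]
y = df["breakdown"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

# class_weight="balanced" counters the rarity of breakdown events
clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X_tr, y_tr)
print("class-balanced accuracy:", balanced_accuracy_score(y_te, clf.predict(X_te)))
```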


2019 ◽  
Vol 18 (4) ◽  
pp. 208-209
Author(s):  
John Kellett ◽  

Intensively monitoring severely ill patients is like placing a smoke alarm in a burning building: it makes no sense. Smoke alarms only make sense if they are placed in buildings before a fire starts, or after a fire has been extinguished in order to make sure it does not start again. Therefore, logic suggests that it is more important to monitor sick patients with normal vital signs in order to detect any deterioration as early as possible, or AFTER a severe illness in order to ensure they do not relapse and that it is safe for them to be discharged from hospital and return home. Paradoxically, it may be a lot more difficult to determine from vital sign changes whether a patient is getting better than whether he or she is getting worse. Consider an unfortunate victim hurled into the Colosseum in Rome to be chased by a lion for the amusement of the crowd. On the first lap around the arena the victim’s vital signs are likely to be at their maximum derangement. However, no one in the arena will imagine that the slowing of his heart and respiratory rate on the second and subsequent laps signifies an improvement in his situation, unless the lion is removed. How long after the danger from the lion has gone will it take for the victim’s vital signs to return to normal? This will depend on several things, such as the victim’s prior level of health and fitness, other ill-understood emotional and physiological factors, and whether another lion enters the arena. In this edition of Acute Medicine, Subbe et al1 report a system that identifies patients fit for hospital discharge by analyzing trends in vital sign recordings made every four hours. A machine learning algorithm was able to identify clinical stability within just 12 hours of observation (i.e. 3 sets of vital signs), three times faster than a traditional manual system. Before these impressive results are accepted at face value, two important caveats must be considered: first, the definition of clinical stability was arbitrary; second, the acceptable failure rate of the system was determined by present-day readmission rates for medical emergency admissions of 12-13%,2 which some might consider a very low bar. Nevertheless, further development of this technology, especially if applied to continuous measurement of vital signs by wearable devices, is likely to allow earlier detection and discharge of stable patients, thus reducing the pressure on overworked emergency departments and acute medical units. A more pressing question than identifying patients fit for discharge is the assessment and monitoring of sick patients who present with normal or near-normal vital signs. These patients account for 60-70% of patients admitted to hospital.3 Although many will develop vital sign changes during their admission, only a small minority of these patients will die in hospital, and many of them will die with minimal vital sign derangement or even normal vital signs.4 Yet it is these infrequent deaths that cause the most concern and angst. They nearly always result in an inquest or inquiry, which starts with the de facto assumption that all those involved with the patient’s care were in some way to blame. Most medical illness starts with the patient having nonspecific feelings of being unwell. The interval between these subjective nonspecific symptoms and the development of specific symptoms and objective signs may be seconds in acute cardiac disease, minutes in meningococcal sepsis, and hours or even days in other conditions.
It should not be surprising that the deterioration of such patients is often missed, especially if it is gradual. If these patients are only monitored intermittently, it is highly likely that important blips in their vital signs will be missed, along with the opportunity to save them. For example, vital signs recorded every 4 hours would not detect the rapid deterioration of conditions such as meningococcal septicaemia. On the other hand, the overwhelming majority who do not die will also develop unimportant vital sign abnormalities, which require no intervention and should be ignored. It may seem that the obvious solution to this conundrum is the continuous monitoring of these patients by machine-learning computer algorithms. However, maybe this technology does not need to be applied to all of them. It may be possible to identify at initial assessment patients who are clinically stable and, therefore, extremely unlikely to die. In addition to vital signs,5 impaired mobility has been shown to be a predictor of mortality, and normal mobility a powerful predictor of survival.6 Biomarkers,7 ECG changes8 and, most importantly, the patient’s subjective feelings and symptoms9 may also help identify clinically stable patients who are highly unlikely to deteriorate. It may also be that clinical stability could be determined by continuously monitoring patients for a short time using machine-learning algorithms.10 These are all interesting and exciting possibilities, just waiting to be tried and tested. Artificial intelligence and computer technology have much to offer acute medicine, but maybe there is still a role for touching, feeling, observing and talking to patients.

