Paediatric major incident triage and the use of machine learning techniques to develop an alternative triage tool with improved performance characteristics.

Author(s):  
Saisakul Chernbumroong ◽  
James Vassallo ◽  
Nabeela Malik ◽  
Yuanwei Xu ◽  
Damian Keene ◽  
...  

Background Triage is a key principle in the effective management of major incidents. However, there is an increasing body of evidence demonstrating that existing paediatric methods are associated with high rates of under-triage and are not fit for purpose. The aim of this study was to derive a novel paediatric triage tool using machine learning (ML) techniques. Methods The United Kingdom Trauma Audit Research Network (TARN) database was interrogated for all paediatric patients aged under 16 years for the ten-year period 2008-2017. Patients were categorised as Priority One if they received one or more life-saving interventions from a previously defined list. Six ML algorithms were investigated for identifying patients as Priority One. Subsequently, the best performing model was chosen for further development using a risk score approach and clinically relevant modifications in order to derive a novel triage tool (LASSO M2). Using patients with complete pre-hospital physiological data, a comparative analysis was then performed against existing pre-hospital paediatric major incident triage tools. Performance was evaluated using sensitivity, specificity, under-triage (1-sensitivity) and over-triage (1-positive predictive value). Results Complete physiological data were available for 4962 patients. The LASSO M2 model demonstrated the best performance at identifying paediatric patients in need of life-saving intervention, with a sensitivity of 88.8% (95% CI 85.5, 91.5), and was associated with the lowest rate of under-triage, 11.2% (8.5, 14.5). In contrast, the Paediatric Triage Tape and JumpSTART both had poor sensitivity when identifying those requiring life-saving intervention (36.1% (31.8, 40.7) and 44.7% (40.2, 49.4), respectively). Conclusion The ML-derived triage tool (LASSO M2) outperforms existing methods of paediatric major incident triage at identifying patients in need of life-saving intervention. Prior to its recommendation for clinical use, further work is required to externally validate its performance and undertake a feasibility assessment in a clinical context.
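For readers unfamiliar with the risk-score approach described above, a minimal sketch follows: fit an L1-penalised (LASSO) logistic regression to the Priority One label, then round the standardised coefficients into integer points that can be summed at the bedside. The features, penalty strength and scoring scale are illustrative assumptions, not the published LASSO M2 tool.

```python
# Illustrative sketch of a LASSO-derived risk score (NOT the published LASSO M2 tool).
# Assumed inputs: X, a matrix of pre-hospital physiology (e.g. heart rate,
# respiratory rate, GCS), and y, a binary label (1 = received a life-saving intervention).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def derive_risk_score(X, y, scale=10):
    """Fit L1-penalised logistic regression and round coefficients to integer points."""
    scaler = StandardScaler()
    Xs = scaler.fit_transform(X)
    # C=0.1 is an assumed penalty strength; LASSO zeroes out weak predictors.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(Xs, y)
    # Convert each surviving coefficient into integer "points" on a common scale.
    points = np.round(model.coef_.ravel() * scale).astype(int)
    return points, scaler, model

def score_patient(x, points, scaler):
    """Total points for one patient; a threshold on this sum would define Priority One."""
    xs = scaler.transform(x.reshape(1, -1)).ravel()
    return int(np.round(np.dot(points, xs)))
```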

2021 ◽  
pp. emermed-2021-211706
Author(s):  
James Vassallo ◽  
Saisakul Chernbumroong ◽  
Nabeela Malik ◽  
Yuanwei Xu ◽  
Damian Keene ◽  
...  

Introduction Triage is a key principle in the effective management of major incidents. There is currently a paucity of evidence to guide the triage of children. The aim of this study was to perform a comparative analysis of nine adult and paediatric triage tools, including the novel ‘Sheffield Paediatric Triage Tool’ (SPTT), assessing their ability to identify patients needing life-saving interventions (LSIs). Methods A 10-year (2008–2017) retrospective database review of the Trauma Audit Research Network (TARN) Database for paediatric patients (<16 years) was performed. The primary outcome was identification of patients receiving one or more LSIs from a previously defined list. Secondary outcomes included mortality and prediction of Injury Severity Score (ISS) >15. The primary analysis was conducted on patients with complete prehospital physiological data, with planned secondary analyses using first recorded data. Performance characteristics were evaluated using sensitivity, specificity, undertriage and overtriage. Results 15 133 patients met TARN inclusion criteria. 4962 (32.8%) had complete prehospital physiological data and 8255 (54.5%) had complete first recorded physiological data. The majority of patients were male (69.5%), with a median age of 11.9 years. The overwhelming majority (95.4%) sustained blunt trauma, with a median ISS of 9; overall, 875 patients (17.6%) received at least one LSI. The SPTT demonstrated the greatest sensitivity of all triage tools at identifying need for LSI (92.2%) but was associated with the highest rate of overtriage (75.0%). Both the Paediatric Triage Tape (sensitivity 34.1%) and JumpSTART (sensitivity 45.0%) performed less well at identifying LSI. By contrast, the adult Modified Physiological Triage Tool-24 (MPTT-24) had the second highest sensitivity (80.8%) with tolerable rates of overtriage (70.2%). Conclusion The SPTT and MPTT-24 outperform existing paediatric triage tools at identifying those patients requiring LSIs. This may necessitate a change in recommended practice. Further work is needed to determine the optimum method of paediatric major incident triage, but consideration should be given to simplifying major incident triage by the use of one generic tool (the MPTT-24) for adults and children.
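These triage studies all report the same four performance characteristics. For reference, a short sketch of how sensitivity, specificity, undertriage (1 − sensitivity) and overtriage (1 − positive predictive value) follow from a 2×2 confusion matrix; the counts below are invented for illustration, with totals merely chosen to echo the 4962-patient cohort.

```python
# Triage performance characteristics from a 2x2 confusion matrix.
# tp: Priority One patients correctly identified
# fn: Priority One patients missed (the undertriage numerator)
# fp: non-Priority One patients flagged (the overtriage numerator)
# tn: non-Priority One patients correctly not flagged
def triage_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # positive predictive value
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "undertriage": 1 - sensitivity,  # patients needing an LSI who were missed
        "overtriage": 1 - ppv,           # flagged patients who did not need an LSI
    }

# Invented counts for illustration (totals echo the abstract's 4962 patients).
print(triage_metrics(tp=800, fn=75, fp=2400, tn=1687))
```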


2021 ◽  
Author(s):  
James Vassallo ◽  
Saisakul Chernbumroong ◽  
Nabeela Malik ◽  
Yuanwei Xu ◽  
Damian Keene ◽  
...  

Introduction. Triage is a key principle in the effective management of major incidents. There is currently a paucity of evidence to guide the triage of children. The aim of this study was to perform a comparative analysis of nine adult and paediatric triage tools, including the novel Sheffield Paediatric Triage Tool (SPTT), assessing their ability to identify patients needing life-saving interventions (LSIs). Methods A ten-year retrospective database review of TARN data for paediatric patients (<16 years) was performed. The primary outcome was identification of patients receiving one or more LSIs from a previously defined list. Secondary outcomes included mortality and prediction of ISS >15. The primary analysis was conducted on patients with complete pre-hospital physiological data, with planned secondary analyses using first recorded physiological data. Performance characteristics were evaluated using sensitivity, specificity, under- and over-triage. Results 15 133 patients met TARN inclusion criteria. 4962 (32.8%) had complete pre-hospital physiological data and 8255 (54.5%) had complete first recorded data. Male patients predominated (69.5%), most sustaining blunt trauma (95.4%), with a median ISS of 9; 875 patients (17.6%) received at least one LSI. The SPTT demonstrated the greatest sensitivity of all triage tools at identifying need for LSI (92.2%) but was associated with the highest rate of over-triage (75.0%). Both the PTT (sensitivity 34.1%) and JumpSTART (sensitivity 45.0%) performed less well at identifying LSI. By contrast, the adult MPTT-24 triage tool had the second highest sensitivity (80.8%) with tolerable rates of over-triage (70.2%). Conclusion The SPTT and MPTT-24 outperform existing paediatric triage tools at identifying those patients requiring LSIs. This may necessitate a change in recommended practice. Further work is needed to determine the optimum method of paediatric major incident triage, but consideration should be given to simplifying major incident triage by the use of one generic tool (the MPTT-24) for adults and children.


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3616
Author(s):  
Jan Ubbo van Baardewijk ◽  
Sarthak Agarwal ◽  
Alex S. Cornelissen ◽  
Marloes J. A. Joosen ◽  
Jiska Kentrop ◽  
...  

Early detection of exposure to a toxic chemical, e.g., in a military context, can be life-saving. We propose to use machine learning techniques and multiple continuously measured physiological signals to detect exposure, and to identify the chemical agent. Such detection and identification could be used to alert individuals to take appropriate medical countermeasures in time. As a first step, we evaluated whether exposure to an opioid (fentanyl) or a nerve agent (VX) could be detected in freely moving guinea pigs using features from respiration, electrocardiography (ECG) and electroencephalography (EEG), where machine learning models were trained and tested on different sets of animals (across-subject classification). Results showed this to be possible with close to perfect accuracy, with respiratory features being most relevant. Exposure detection accuracy rose steeply to over 95% correct during the first five minutes after exposure. Additional models were trained to classify an exposed state as being induced either by fentanyl or by VX. This was possible with an accuracy of almost 95%, where EEG features proved to be most relevant. Exposure detection models trained on subsets of animals generalized to other subsets of animals exposed to different dosages of the chemicals. While future work is required to validate the principle in other species and to assess the robustness of the approach under different, realistic circumstances, our results indicate that utilizing different continuously measured physiological signals for early detection and identification of toxic agents is promising.
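The across-subject design described above can be sketched with grouped cross-validation, where all windows from a given animal stay on the same side of each train/test split. The classifier choice and the placeholder features below are assumptions, not the authors' pipeline.

```python
# Sketch of across-subject exposure detection (assumed pipeline, not the authors' code).
# X: feature matrix (rows = time windows; columns = respiration/ECG/EEG features)
# y: 1 if the window follows exposure, 0 otherwise
# groups: animal ID per row, so no animal appears in both train and test folds
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))          # placeholder physiological features
y = rng.integers(0, 2, size=600)        # placeholder exposure labels
groups = np.repeat(np.arange(10), 60)   # 10 animals, 60 windows each

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print("across-subject accuracy per fold:", scores)
```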


Author(s):  
Ramakanta Mohanty ◽  
Vadlamani Ravi

The past 10 years have seen software defect prediction proposed by many researchers using various metrics based on measurable aspects of source code entities (e.g., methods, classes, files or modules) and the social structure of a software project. However, these metrics have not yielded very high accuracy in terms of sensitivity, specificity and overall accuracy. In this chapter, we propose the use of machine learning techniques to predict software defects. The effectiveness of these techniques is demonstrated on ten datasets taken from the literature. Based on our experiments, we observed that the probabilistic neural network (PNN) outperformed all other techniques in terms of accuracy and sensitivity on all the software defect datasets, followed by CART and the Group Method of Data Handling. We also performed feature selection using a t-statistic-based approach, selecting feature subsets across different folds for a given technique. Taking the most important variables, we invoked the classifiers again and observed that PNN again outperformed the other classifiers in terms of sensitivity and accuracy. Moreover, the sets of ‘if–then’ rules yielded by J48 and CART can be used as an expert system for the prediction of software defects.
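The t-statistic-based feature selection mentioned above can be sketched as ranking each software metric by the absolute two-sample t-statistic between defective and non-defective modules and keeping the top-k features. The value of k and the unequal-variance variant are illustrative assumptions.

```python
# Sketch of t-statistic-based feature selection for defect prediction (illustrative).
# X: software-metrics matrix (rows = modules), y: 1 = defective, 0 = non-defective.
import numpy as np
from scipy.stats import ttest_ind

def select_top_k_features(X, y, k=5):
    """Rank features by |t| between defective and non-defective modules, keep top k."""
    t_stats, _ = ttest_ind(X[y == 1], X[y == 0], axis=0, equal_var=False)
    ranked = np.argsort(-np.abs(t_stats))  # most discriminative features first
    return ranked[:k]

# The selected columns X[:, select_top_k_features(X, y)] would then be fed
# back into the classifiers (PNN, CART, J48, ...) for re-training.
```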


Author(s):  
Joy Iong-Zong Chen ◽  
Kong-Long Lai

The design of an analogue IC layout is a time-consuming and manual process. Despite several studies in the area, geometric restrictions have held back automated analogue IC layout design, and as a result automated analogue layouts lag manually produced ones in performance. This has prevented the deployment of a large range of automated tools. With recent technical developments, this challenge can be addressed using machine learning techniques. This study investigates performance-driven placement in the VLSI IC design process, as well as analogue IC performance prediction utilizing various machine learning approaches. Further, several amplifier designs are simulated. From the simulation results, it is evident that, when compared to the manual layout, improved performance is obtained by using the proposed approach.
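A minimal sketch of the performance-prediction idea, assuming a regressor that maps placement/geometry features of candidate layouts to a simulated performance figure (e.g., amplifier gain) so that fewer full circuit simulations are needed. All features and targets below are synthetic placeholders, not the study's data or model.

```python
# Sketch of ML-based analogue IC performance prediction (illustrative assumptions).
# Each row describes one candidate placement; the target stands in for a
# simulated performance figure such as amplifier gain.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))  # placeholder geometry features (spacings, symmetry, ...)
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=500)  # placeholder gain values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))  # proxy for prediction quality
```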


10.2196/24698 ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. e24698
Author(s):  
Sina Ehsani ◽  
Chandan K Reddy ◽  
Brandon Foreman ◽  
Jonathan Ratcliff ◽  
Vignesh Subbian

Background With advances in digital health technologies and proliferation of biomedical data in recent years, applications of machine learning in health care and medicine have gained considerable attention. While inpatient settings are equipped to generate rich clinical data from patients, there is a dearth of actionable information that can be used for pursuing secondary research for specific clinical conditions. Objective This study focused on applying unsupervised machine learning techniques for traumatic brain injury (TBI), which is the leading cause of death and disability among children and adults aged less than 44 years. Specifically, we present a case study to demonstrate the feasibility and applicability of subspace clustering techniques for extracting patterns from data collected from TBI patients. Methods Data for this study were obtained from the Progesterone for Traumatic Brain Injury, Experimental Clinical Treatment–Phase III (PROTECT III) trial, which included a cohort of 882 TBI patients. We applied subspace clustering methods (density-based, cell-based, and clustering-oriented methods) to this data set and compared the performance of the different clustering methods. Results The analyses showed the following three clusters of laboratory physiological data: (1) international normalized ratio (INR), (2) INR, chloride, and creatinine, and (3) hemoglobin and hematocrit. While all subspace clustering algorithms had reasonable accuracy in classifying patients by mortality status, the density-based algorithm had a higher F1 score and coverage. Conclusions Clustering approaches serve as an important step for phenotype definition and validation in clinical domains such as TBI, where patient and injury heterogeneity are among the major reasons for failure of clinical trials. The results from this study provide a foundation to develop scalable clustering algorithms for further research and validation.
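Dedicated subspace clustering algorithms are beyond a few lines, but the core idea, clusters that exist only in a subset of the laboratory variables, can be illustrated by running a density-based clusterer over candidate feature subsets and keeping subspaces that produce non-trivial clusters. This toy sketch is a simplification of the methods compared in the study; only the column names are taken from the abstract, and the data are random placeholders.

```python
# Toy illustration of subspace clustering (a simplification, not the study's
# algorithms): run DBSCAN on each candidate subset of lab variables and keep
# subspaces that yield at least one non-noise cluster.
from itertools import combinations
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

columns = ["INR", "chloride", "creatinine", "hemoglobin", "hematocrit"]
rng = np.random.default_rng(0)
X = rng.normal(size=(882, len(columns)))  # placeholder for the PROTECT III labs

X = StandardScaler().fit_transform(X)
for r in (1, 2, 3):                       # subspaces of 1 to 3 variables
    for subset in combinations(range(len(columns)), r):
        labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X[:, subset])
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 = noise
        if n_clusters > 0:
            print([columns[i] for i in subset], "->", n_clusters, "cluster(s)")
```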


Thorax ◽  
2020 ◽  
Vol 75 (8) ◽  
pp. 695-701 ◽  
Author(s):  
Sherif Gonem ◽  
Wim Janssens ◽  
Nilakash Das ◽  
Marko Topalovic

The past 5 years have seen an explosion of interest in the use of artificial intelligence (AI) and machine learning techniques in medicine. This has been driven by the development of deep neural networks (DNNs)—complex networks residing in silico but loosely modelled on the human brain—that can process complex input data such as a chest radiograph image and output a classification such as ‘normal’ or ‘abnormal’. DNNs are ‘trained’ using large banks of images or other input data that have been assigned the correct labels. DNNs have shown the potential to equal or even surpass the accuracy of human experts in pattern recognition tasks such as interpreting medical images or biosignals. Within respiratory medicine, the main applications of AI and machine learning thus far have been the interpretation of thoracic imaging, lung pathology slides and physiological data such as pulmonary function tests. This article surveys progress in this area over the past 5 years, as well as highlighting the current limitations of AI and machine learning and the potential for future developments.


2019 ◽  
Vol 36 (5) ◽  
pp. 281-286
Author(s):  
James Vassallo ◽  
Jason Smith

Introduction A key principle in the effective management of major incidents is triage, the process of prioritising patients on the basis of their clinical acuity. In many countries including the UK, a two-stage approach to triage is practised, with primary triage at the scene followed by a more detailed assessment using a secondary triage process, the Triage Sort. To date, no studies have analysed the performance of the Triage Sort in the civilian setting. The primary aim of this study was to determine the performance of the Triage Sort at predicting the need for life-saving intervention (LSI). Methods Using the Trauma Audit Research Network (TARN) database for all adult patients (>18 years) between 2006 and 2014, we determined which patients received one or more LSIs using a previously defined list. The first recorded hospital physiology was used to categorise patient priority using the Triage Sort, National Ambulance Resilience Unit (NARU) Sieve and the Modified Physiological Triage Tool-24 (MPTT-24). Performance characteristics were evaluated using sensitivity and specificity, with statistical analysis using McNemar's test. Results 127 233 patients (58.1%) had complete data and were included: 55.6% were men, aged 61.4 years (IQR 43.1–80.0), with an ISS of 9 (IQR 9–16); 24 791 (19.5%) received at least one LSI (priority 1). The Triage Sort demonstrated the lowest accuracy of all triage tools at identifying the need for LSI (sensitivity 15.7%, 95% CI 15.2 to 16.2), correlating with the highest rate of under-triage (84.3%, 95% CI 83.8 to 84.8), but it had the greatest specificity (98.7%, 95% CI 98.6 to 98.8). Conclusion Within a civilian trauma registry population, the Triage Sort demonstrated the poorest performance at identifying patients in need of LSI. Its use as a secondary triage tool should be reviewed, with an urgent need for further research to determine the optimum method of secondary triage.
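McNemar's test, as used above, operates on paired predictions: each patient who truly needed an LSI is classified by both tools, and the test asks whether the two tools' discordant classifications are balanced. A sketch using statsmodels follows; the cell counts are invented, chosen only so the marginals echo the abstract's 24 791 LSI patients and the Triage Sort's 15.7% sensitivity.

```python
# Sketch of McNemar's test comparing two triage tools on paired data
# (illustrative counts, not the study's actual cross-tabulation).
# Among patients who truly needed an LSI:
#   rows    = Triage Sort (identified / missed)
#   columns = MPTT-24     (identified / missed)
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

table = np.array([[3500, 400],     # both identified / only Triage Sort identified
                  [9200, 11691]])  # only MPTT-24 identified / both missed
result = mcnemar(table, exact=False, correction=True)
print("statistic:", result.statistic, "p-value:", result.pvalue)
```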


2019 ◽  
Vol 14 (6) ◽  
pp. 670-690 ◽  
Author(s):  
Ajeet Singh ◽  
Anurag Jain

Credit card fraud is one of the downsides of the digital world, in which transactions are made without the knowledge of the genuine user. Based on a study of papers published between 1994 and 2018 on credit card fraud, the following objectives are achieved: the various types of credit card fraud are identified; adaptive machine learning techniques (AMLTs) for detecting these frauds automatically are studied, and their pros and cons are summarized. The datasets used in the literature are studied and categorized into real and synthesized datasets. The performance metrics and evaluation criteria used to evaluate fraud detection systems are summarized. This study also covers a deep analysis and comparison of the performance (i.e., sensitivity, specificity, and accuracy) of existing machine learning techniques in the credit card fraud detection area. The findings of this study clearly show that supervised learning, card-not-present fraud, skimming fraud, and website cloning have been studied most frequently. This study helps new researchers by discussing the limitations of existing fraud detection techniques and providing helpful directions for research in the credit card fraud detection field.

