Challenges and performance metrics for security operations center analysts: a systematic review

2019 ◽  
Vol 4 (3) ◽  
pp. 125-152 ◽  
Author(s):  
Enoch Agyepong ◽  
Yulia Cherdantseva ◽  
Philipp Reinecke ◽  
Pete Burnap


2019 ◽  
Author(s):  
Hao Sen Andrew Fang ◽  
Wan Tin Lim ◽  
Balakrishnan Tharmmambal

Abstract
Background: Early warning scores (EWS) have been developed as clinical prognostication tools to identify acutely deteriorating patients. With recent advancements in machine learning, there has been a proliferation of studies describing the development and validation of novel EWS. Systematic reviews of published studies that evaluate the performance of both well-established and novel EWS have reached conflicting conclusions. A possible reason for this is the lack of consistency in the validation methods used. In this review, we aim to examine the methodologies and performance metrics used in studies describing EWS validation.
Methods: A systematic review of all eligible studies in the MEDLINE database from inception to 22-Feb-2019 was performed. Studies were eligible if they performed validation on at least one EWS and reported associations between EWS scores and mortality, intensive care unit (ICU) transfers, or cardiac arrest (CA) in adults in the inpatient setting. Two reviewers independently performed full-text review and data abstraction using a standardized data worksheet based on the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) checklist. Meta-analysis was not performed due to heterogeneity.
Results: The key differences identified in validation methodologies were (1) validation population characteristics, (2) outcomes of interest, (3) case definition, intended time of use and aggregation methods, and (4) handling of missing values in the validation dataset. In terms of case definition, of the 34 eligible studies, 22 used the patient-episode case definition, 10 used the observation-set case definition, and 2 performed validation using both. Of those using the patient-episode case definition, 11 studies used a single point-in-time score to validate the EWS, most often the first recorded observation. More than 10 different performance metrics were reported across the studies.
Conclusions: The methodologies and performance metrics used in studies validating EWS were not consistent, making it difficult to interpret and compare EWS performance. Standardizing EWS validation methodology and reporting could address this issue.
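To illustrate the review's point that metric choice affects how EWS performance reads, here is a minimal Python sketch of two metrics a validation study might report under the patient-episode case definition, using each episode's first recorded score: AUROC (via the rank-sum statistic) and sensitivity/specificity at an alert threshold. All scores, outcomes, and the threshold are hypothetical, not taken from any study in the review.

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive/negative pairs ranked correctly (ties count half).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity when alerting at score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# First recorded EWS per patient episode, with outcome (1 = deterioration)
first_scores = [7, 2, 5, 1, 9, 3, 6, 0]
outcomes     = [1, 0, 1, 0, 1, 0, 0, 0]

print(auroc(first_scores, outcomes))                       # discrimination
print(sensitivity_specificity(first_scores, outcomes, 5))  # at threshold 5
```

A study reporting only AUROC and one reporting only sensitivity at a fixed threshold can describe the same EWS quite differently, which is why the review argues for standardized reporting.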


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 968 ◽  
Author(s):  
Tendai Rukasha ◽  
Sandra I Woolley ◽  
Theocharis Kyriacou ◽  
Tim Collins

Epilepsy is a neurological disorder that affects 50 million people worldwide. It is characterised by seizures that can vary in presentation, from short absences to protracted convulsions. Wearable electronic devices that detect seizures have the potential to hail timely assistance for individuals, inform their treatment, and assist care and self-management. This systematic review encompasses the literature relevant to the evaluation of wearable electronics for epilepsy. Devices and performance metrics are identified, and the evaluations, both quantitative and qualitative, are presented. Twelve primary studies comprising quantitative evaluations from 510 patients and participants were collated according to preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines. Two studies (with 104 patients/participants) comprised both qualitative and quantitative evaluation components. Despite many works in the literature proposing and evaluating novel and incremental approaches to seizure detection, there is a lack of studies evaluating the devices available to consumers and researchers, and there is much scope for more complete evaluation data in quantitative studies. There is also scope for further qualitative evaluations amongst individuals, carers, and healthcare professionals regarding their use, experiences, and opinions of these devices.
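Quantitative evaluations of seizure-detection devices typically report event sensitivity (detected seizures over true seizures) and a false-alarm rate per 24 hours. A minimal sketch, assuming a simple onset-matching window; the timestamps, window, and recording duration are illustrative, not drawn from any study in this review:

```python
def evaluate(detections, seizures, window=0.05, duration_h=48.0):
    """A detection within `window` hours of a seizure onset counts as a hit."""
    # Seizures matched by at least one device detection.
    hits = sum(any(abs(d - s) <= window for d in detections) for s in seizures)
    # Detections not close to any true seizure onset.
    false_alarms = sum(all(abs(d - s) > window for s in seizures)
                       for d in detections)
    sensitivity = hits / len(seizures)
    far_per_24h = false_alarms / duration_h * 24.0
    return sensitivity, far_per_24h

seizure_onsets = [2.0, 13.5, 30.2, 41.0]     # true events over 48 h (hours)
device_alerts  = [2.01, 13.52, 20.0, 41.03]  # device detections (hours)
print(evaluate(device_alerts, seizure_onsets))
```

The trade-off between these two numbers is central to device comparisons: loosening the matching window or the detector's internal threshold raises sensitivity but also the alarm burden on patients and carers.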




Nature Energy ◽  
2021 ◽  
Author(s):  
Yanxin Yao ◽  
Jiafeng Lei ◽  
Yang Shi ◽  
Fei Ai ◽  
Yi-Chun Lu

2021 ◽  
Vol 28 (1) ◽  
pp. e100262
Author(s):  
Mustafa Khanbhai ◽  
Patrick Anyadi ◽  
Joshua Symons ◽  
Kelsey Flott ◽  
Ara Darzi ◽  
...  

Objectives: Unstructured free-text patient feedback contains rich information, but analysing these data manually would require personnel resources that are not available in most healthcare organisations. We undertook a systematic review of the literature on the use of natural language processing (NLP) and machine learning (ML) to process and analyse free-text patient experience data.
Methods: Databases were systematically searched to identify articles published between January 2000 and December 2019 that examined NLP for analysing free-text patient feedback. Due to the heterogeneous nature of the studies, a narrative synthesis was deemed most appropriate. Data related to the study purpose, corpus, methodology, performance metrics and indicators of quality were recorded.
Results: Nineteen articles were included. The majority (80%) of studies applied language analysis techniques to patient feedback from social media sites (unsolicited), followed by structured surveys (solicited). Supervised learning was used most frequently (n=9), followed by unsupervised (n=6) and semi-supervised (n=3) approaches. Comments extracted from social media were analysed using an unsupervised approach, while free-text comments held within structured surveys were analysed using a supervised approach. Reported performance metrics included precision, recall and F-measure, with support vector machines and Naïve Bayes being the best-performing ML classifiers.
Conclusion: NLP and ML have emerged as important tools for processing unstructured free text. Both supervised and unsupervised approaches have their role depending on the data source. With the advancement of data analysis tools, these techniques may help healthcare organisations generate insight from large volumes of unstructured free-text data.
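The precision, recall, and F-measure reported by these studies for text classifiers can be computed as follows; this is a minimal sketch with made-up labels and predictions for a hypothetical binary patient-feedback classifier, not results from any reviewed study:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics of the kind the reviewed studies report."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F-measure: harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# 1 = comment flagged as describing a negative experience (hypothetical task)
labels      = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 0, 1, 1, 0]
print(precision_recall_f1(labels, predictions))
```

Because supervised classifiers of the kind reviewed (e.g. SVM, Naïve Bayes) are usually compared on exactly these three numbers, consistent reporting of all three is what makes cross-study comparison possible.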


Author(s):  
Raaj Kishore Biswas ◽  
Rena Friswell ◽  
Jake Olivier ◽  
Ann Williamson ◽  
Teresa Senserrick
