Sepsis in the era of data-driven medicine: personalizing risks, diagnoses, treatments and prognoses

2019 ◽  
Vol 21 (4) ◽  
pp. 1182-1195
Author(s):  
Andrew C Liu ◽  
Krishna Patel ◽  
Ramya Dhatri Vunikili ◽  
Kipp W Johnson ◽  
Fahad Abdu ◽  
...  

Abstract Sepsis is a series of clinical syndromes caused by the immunological response to infection. The clinical evidence for sepsis is typically attributed to bacterial infection or bacterial endotoxins, but infections due to viruses, fungi or parasites can also lead to sepsis. Regardless of the etiology, rapid clinical deterioration, prolonged stay in intensive care units and high risk for mortality correlate with the incidence of sepsis. Despite its prevalence and morbidity, improvement in sepsis outcomes has remained limited. In this comprehensive review, we summarize the current landscape of risk estimation, diagnosis, treatment and prognosis strategies in the setting of sepsis and discuss future challenges. We argue that the advent of modern technologies such as in-depth molecular profiling, biomedical big data and machine intelligence methods will augment the treatment and prevention of sepsis. The volume, variety, veracity and velocity of heterogeneous data generated as part of healthcare delivery, together with recent advances in biotechnology-driven therapeutics and companion diagnostics, may provide a new wave of approaches to identify the most at-risk sepsis patients and reduce the symptom burden in patients within shorter turnaround times. Developing novel therapies by leveraging modern drug discovery strategies, including computational drug repositioning, cell and gene therapy, clustered regularly interspaced short palindromic repeats (CRISPR)-based genetic editing systems, immunotherapy, microbiome restoration, nanomaterial-based therapy and phage therapy, may help to develop treatments to target sepsis. We also provide empirical evidence for potential new sepsis targets, including FER and STARD3NL. Implementing data-driven methods that use real-time collection and analysis of clinical variables to trace, track and treat sepsis-related adverse outcomes will be key. Understanding the root and route of sepsis and its comorbid conditions that complicate treatment outcomes and lead to organ dysfunction may help to facilitate identification of the most at-risk patients and prevent further deterioration. To conclude, leveraging the advances in precision medicine, biomedical data science and translational bioinformatics approaches may help to develop better strategies to diagnose and treat sepsis in the next decade.
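As one illustration of the kind of data-driven risk estimation from real-time clinical variables that the review discusses, the sketch below fits a logistic-regression risk score on synthetic vital-sign and laboratory features. It is a minimal sketch assuming scikit-learn; the variable names, coefficients and data are illustrative assumptions, not the authors' model.

```python
# Minimal sketch of a data-driven sepsis risk score (illustrative only).
# Feature names and synthetic data are hypothetical, not from the review.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical real-time clinical variables: heart rate, respiratory rate,
# temperature, white-cell count, lactate.
X = np.column_stack([
    rng.normal(90, 15, n),     # heart rate (bpm)
    rng.normal(18, 4, n),      # respiratory rate (breaths/min)
    rng.normal(37.2, 0.8, n),  # temperature (deg C)
    rng.normal(9, 3, n),       # WBC (10^9/L)
    rng.normal(1.8, 1.0, n),   # lactate (mmol/L)
])
# Synthetic label: risk rises with tachycardia, tachypnoea and lactate.
logit = 0.03 * (X[:, 0] - 90) + 0.10 * (X[:, 1] - 18) + 0.8 * (X[:, 4] - 1.8) - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC on held-out data:",
      round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```

In practice such a score would be retrained and recalibrated on local electronic health record data and evaluated prospectively before any bedside use.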

2021 ◽  
pp. 251604352199026
Author(s):  
Peter Isherwood ◽  
Patrick Waterson

Patient safety, staff morale and system performance are at the heart of healthcare delivery. Investigation of adverse outcomes is one strategy that enables organisations to learn and improve. Healthcare is now understood as a complex, possibly the most complex, socio-technological system. Despite this, the use of a 20th-century linear investigation model is still recommended for the investigation of adverse outcomes. In this review the authors use data gathered from the investigation of a real-life healthcare near-miss incident and apply three different methodologies to the analysis of these data. They compare both the methodologies themselves and the outputs generated. This illustrates how different methodologies generate different system-level recommendations. The authors conclude that system-based models generate the strongest barriers to improve future performance. Healthcare providers and their regulatory bodies need to embrace system-based methodologies if they are to learn effectively from, and reduce, future adverse outcomes.


2021 ◽  
Author(s):  
Pavlos Tsantilas ◽  
Shen Lao ◽  
Zhiyuan Wu ◽  
Anne Eberhard ◽  
Greg Winski ◽  
...  

Abstract Aims  Atherosclerotic cerebrovascular disease underlies the majority of ischaemic strokes and is a major cause of death and disability. While plaque burden is a predictor of adverse outcomes, plaque vulnerability is increasingly recognized as a driver of lesion rupture and risk for clinical events. Defining the molecular regulators of carotid instability could inform the development of new biomarkers and/or translational targets for at-risk individuals. Methods and results  Using two independent human endarterectomy biobanks, we found that the understudied glycoprotein chitinase 3 like 1 (CHI3L1) is up-regulated in patients with carotid disease compared to healthy controls. Further, CHI3L1 levels were found to stratify individuals based on symptomatology and histopathological evidence of an unstable fibrous cap. Gain- and loss-of-function studies in cultured human carotid artery smooth muscle cells (SMCs) showed that CHI3L1 prevents a number of maladaptive changes in that cell type, including phenotype switching towards a synthetic and hyperproliferative state. Using two murine models of carotid remodelling and lesion vulnerability, we found that knockdown of Chil1 resulted in larger neointimal lesions composed of de-differentiated SMCs that failed to invest within and stabilize the fibrous cap. Exploratory mechanistic studies identified alterations in potential downstream regulatory genes, including large tumour suppressor kinase 2 (LATS2), which mediates macrophage marker and inflammatory cytokine expression in SMCs and may explain how CHI3L1 modulates cellular plasticity. Conclusion  CHI3L1 is up-regulated in humans with carotid artery disease and appears to be a strong mediator of plaque vulnerability. Mechanistic studies suggest this change may be a context-dependent adaptive response meant to maintain vascular SMCs in a differentiated state and to prevent rupture of the fibrous cap. Part of this effect may be mediated through downstream suppression of LATS2. Future studies should determine how these changes occur at the molecular level, and whether this gene can be targeted as a novel translational therapy for subjects at risk of stroke.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mario Zanfardino ◽  
Rossana Castaldo ◽  
Katia Pane ◽  
Ornella Affinito ◽  
Marco Aiello ◽  
...  

Abstract Analysis of large-scale omics data along with biomedical images has gained huge interest for predicting phenotypic conditions towards personalized medicine. Multiple layers of investigation, such as genomics, transcriptomics and proteomics, have led to high dimensionality and heterogeneity of data. Multi-omics data integration can provide a meaningful contribution to early diagnosis and an accurate estimate of prognosis and treatment in cancer. Some multi-layer data structures have been developed to integrate multi-omics biological information, but none of these has been developed and evaluated to include radiomic data. We proposed to use MultiAssayExperiment (MAE) as an integrated data structure to combine multi-omics data, facilitating the exploration of heterogeneous data. We improved the usability of the MAE by developing a Multi-omics Statistical Approaches (MuSA) tool that uses a Shiny graphical user interface and simplifies the management and analysis of radiogenomic datasets. The capabilities of MuSA were shown using public breast cancer datasets from the TCGA-TCIA databases. The MuSA architecture is modular and can be divided into pre-processing and downstream analysis. The pre-processing section allows data filtering and normalization. The downstream analysis section contains modules for data science such as correlation, clustering (e.g., heatmaps) and feature selection methods. The results are dynamically shown in MuSA. The MuSA tool provides an easy-to-use way to create, manage and analyze radiogenomic data. The application is specifically designed to guide non-programmer researchers through different computational steps. Integration analysis is implemented in a modular structure, making MuSA an easily extensible open-source software.
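MuSA itself is built on Bioconductor's MultiAssayExperiment with a Shiny front end (R); as a rough illustration of the workflow it describes (sample-matched assay tables, filtering and normalization, then a downstream correlation step), here is a hedged Python analogue. The table layout, feature names and thresholds are assumptions for illustration, not MuSA's actual interface.

```python
# Illustrative Python analogue of a radiogenomic pre-processing + correlation step.
# Not the MuSA/MAE API (which is R-based); layout and names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
samples = [f"S{i:03d}" for i in range(40)]

# Sample-matched assay tables (rows = samples, columns = features).
expression = pd.DataFrame(rng.normal(size=(40, 5)), index=samples,
                          columns=[f"gene_{j}" for j in range(5)])
radiomics = pd.DataFrame(rng.normal(size=(40, 3)), index=samples,
                         columns=["tumor_volume", "entropy", "sphericity"])

# Pre-processing: drop low-variance features, then z-score normalize.
expression = expression.loc[:, expression.var() > 0.1]
zscore = lambda df: (df - df.mean()) / df.std(ddof=0)
expression, radiomics = zscore(expression), zscore(radiomics)

# Downstream analysis: cross-assay correlation matrix (genes x radiomic features).
combined = expression.join(radiomics, how="inner")
corr = combined.corr().loc[expression.columns, radiomics.columns]
print(corr.round(2))
```

Keying every assay on a shared sample identifier is the essential design choice here, and it is the same idea that MAE formalizes for heterogeneous omics layers.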


Heart ◽  
2021 ◽  
Vol 107 (5) ◽  
pp. 366-372
Author(s):  
Donya Mohebali ◽  
Michelle M Kittleson

The incidence of heart failure (HF) remains high and patients with HF are at risk for frequent hospitalisations. Remote monitoring technologies may provide early indications of HF decompensation and potentially allow for optimisation of therapy to prevent HF hospitalisations. The need for reliable remote monitoring technology has never been greater, as the COVID-19 pandemic has led to the rapid expansion of a new mode of healthcare delivery: the virtual visit. With the convergence of remote monitoring technologies and a reliable method of remote healthcare delivery, an understanding of the role of both in the management of patients with HF is critical. In this review, we outline the evidence on current remote monitoring technologies in patients with HF and highlight how these advances may benefit patients in the context of the current pandemic.


2021 ◽  
Vol 11 (7) ◽  
pp. 3110
Author(s):  
Karina Gibert ◽  
Xavier Angerri

In this paper, the results of the INSESS-COVID19 project are presented, part of a special call aimed at helping with the COVID-19 crisis in Catalonia. The technological infrastructure and methodology developed in this project allow the quick screening of a territory for a rapid and reliable diagnosis in the face of an unexpected situation, by providing relevant decisional information to support informed decision-making and strategy and policy design. One of the challenges of the project was to extract valuable information from direct participatory processes in which specific target profiles of citizens are consulted, and to distribute participation across the whole territory. Having many variables with a moderate number of citizens involved (in this case about 1000) implies the risk of violating statistical secrecy when multivariate relationships are analyzed, thus putting at risk the anonymity of the participants as well as their safety when vulnerable populations are involved, as is the case in INSESS-COVID19. In this paper, the entire data-driven methodology developed in the project is presented and the handling of small population subgroups for preserving statistical secrecy is described. The methodology is reusable with any other underlying questionnaire, as the data science and reporting parts are fully automated.
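To illustrate the kind of statistical-secrecy safeguard described (protecting small population subgroups before multivariate results are reported), below is a hedged sketch of a threshold-based suppression rule. The minimum cell size of 5, the grouping variables and the data are assumptions for illustration, not the project's actual rule.

```python
# Hedged sketch: suppress cross-tabulation cells below a minimum count
# before reporting, to protect small subgroups (k-anonymity-style rule).
# The threshold (5) and variable names are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "municipality": rng.choice(["A", "B", "C"], size=1000),
    "profile": rng.choice(["caregiver", "teacher", "social_worker"], size=1000,
                          p=[0.05, 0.45, 0.50]),
    "needs_support": rng.choice([0, 1], size=1000),
})

MIN_CELL = 5
table = df.groupby(["municipality", "profile"])["needs_support"].agg(["count", "mean"])
# Suppress estimates for cells smaller than the threshold before publication.
table.loc[table["count"] < MIN_CELL, "mean"] = np.nan
table["count"] = table["count"].where(table["count"] >= MIN_CELL, other="<5")
print(table)
```

Suppressing the estimate and masking the exact count together prevent a reader from recovering an individual answer when a subgroup contains only a handful of respondents.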


2021 ◽  
pp. 1-9
Author(s):  
Nieves L. González González ◽  
Enrique González Dávila ◽  
Agustina González Martín ◽  
Erika Padrón ◽  
José Ángel García Hernández

Objective: The aim of the study was to determine if customized fetal growth charts developed excluding obese and underweight mothers (CC(18.5–25)) are better than customized curves (CC) at identifying pregnancies at risk of perinatal morbidity. Material and Methods: Data from 20,331 infants were used to construct the CC and from 11,604 for the CC(18.5–25), after excluding cases with abnormal maternal BMI. The two models were applied to 27,507 newborns and perinatal outcomes were compared between large-for-gestational-age (LGA) or small-for-gestational-age (SGA) infants according to each model. Logistic regression was used to calculate the OR of outcomes by group, with gestational age (GA) as covariable. The confidence intervals of pH were calculated by analysis of covariance. Results: The rates of cesarean delivery and cephalopelvic disproportion (CPD) were higher in LGA only by CC(18.5–25) than in LGA only by CC. In SGA only by CC(18.5–25), neonatal intensive care unit (NICU) admission and perinatal mortality rates were higher than in SGA only by CC. The adverse outcome rate was higher in LGA only by CC(18.5–25) than in LGA only by CC (21.6%; OR = 1.61 [1.34–1.93] vs. 13.5%; OR = 0.84 [0.66–1.07]), and in SGA only by CC(18.5–25) than in SGA only by CC (9.6%; OR = 1.62 [1.25–2.10] vs. 6.3%; OR = 1.18 [0.85–1.66]). Conclusion: The use of CC(18.5–25) allows a more accurate identification of LGA and SGA infants at risk of perinatal morbidity than conventional CC. This benefit increases and decreases, respectively, with GA.
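As an illustration of the statistical comparison described (an OR for adverse outcomes by classification group from a logistic regression with gestational age as covariable), here is a hedged Python sketch using statsmodels on synthetic data; the variable names, coefficients and data are ours, not the study's.

```python
# Hedged sketch: OR of an adverse outcome for a classification group,
# adjusted for gestational age (GA) as covariable. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    # 1 = infant flagged only by the restricted model, 0 = only by the conventional one
    "group": rng.integers(0, 2, n),
    "ga_weeks": rng.normal(39, 1.5, n),
})
# Synthetic outcome: baseline risk plus a group effect and a mild GA effect.
logit = -3.0 + 0.5 * df["group"] - 0.1 * (df["ga_weeks"] - 39)
df["adverse"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("adverse ~ group + ga_weeks", data=df).fit(disp=False)
or_ci = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci.loc[["group"]].round(2))  # adjusted OR with 95% CI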


2021 ◽  
Author(s):  
Muthu Ram Elenchezhian ◽  
Vamsee Vadlamudi ◽  
Rassel Raihan ◽  
Kenneth Reifsnider

Our community has developed widespread knowledge of the damage tolerance and durability of composites over the past few decades through various experimental and computational efforts. Several methods have been used to understand damage behavior and hence predict material states such as residual strength (damage tolerance) and life (durability) of these material systems. Electrochemical Impedance Spectroscopy (EIS) and Broadband Dielectric Spectroscopy (BbDS) are two such methods, which have been proven to identify damage states in composites. Our previous work using the BbDS method has been shown to serve as a precursor for identifying damage levels, indicating the beginning of the end of life of the material. As a change in a material state variable is triggered by damage development, the rate of change of these states indicates the rate of damage interaction and can effectively predict impending failure. The Data-Driven Discovery of Models (D3M) [1] aims to develop model discovery systems, enabling users with domain knowledge but no data science background to create empirical models of real, complex processes. These D3M methods have been developed extensively over the years in various applications, and their implementation for real-time prediction of complex parameters such as material states in composites needs to be trusted based on physics and domain knowledge. In this research work, we propose the use of data-driven methods combined with BbDS and progressive damage analysis to identify and hence predict material states in composites subjected to fatigue loads.
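As a rough illustration of the idea that the rate of change of a monitored material-state variable can flag impending failure, the sketch below differentiates a synthetic state signal over fatigue cycles and raises a flag once that rate exceeds a threshold. The signal shape, noise level, threshold and cycle counts are hypothetical, not the authors' BbDS data.

```python
# Hedged sketch: flag impending failure when the rate of change of a
# monitored material-state variable (e.g., a BbDS-derived dielectric
# proxy) accelerates. Signal, noise and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
cycles = np.arange(0, 100_000, 1_000)
# Synthetic state variable: slow drift, then rapid growth near end of life.
state = 1.0 + 1e-6 * cycles + 0.4 * np.exp((cycles - 95_000) / 4_000)
state += rng.normal(0, 0.005, cycles.size)  # measurement noise

rate = np.gradient(state, cycles)   # d(state)/d(cycle)
THRESHOLD = 5e-5                    # illustrative alarm level
alarm = np.argmax(rate > THRESHOLD) if (rate > THRESHOLD).any() else None
print("Impending-failure flag raised at cycle:",
      cycles[alarm] if alarm is not None else "never")
```

In a physics-informed workflow, the threshold would be tied to the damage mechanics of the laminate rather than chosen empirically, which is precisely the trust issue the abstract raises for D3M-style models.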


2021 ◽  
Author(s):  
Karen Triep ◽  
Alexander Benedikt Leichtle ◽  
Martin Meister ◽  
Georg Martin Fiedler ◽  
Olga Endrich

BACKGROUND The criteria for the diagnosis of kidney disease outlined in the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines are based on a patient’s current, historical and baseline data. The diagnosis of acute (AKI), chronic (CKD) and acute-on-chronic kidney disease requires past measurements of creatinine, back-calculation and the interpretation of several laboratory values over a certain period. Diagnosis may be hindered by an unclear definition of the individual creatinine baseline and by rough normal-value ranges set without adjustment for age, ethnicity, comorbidities and treatment. Assigning the correct diagnosis and sufficient staging improves coding, data quality, reimbursement, the choice of therapeutic approach and the patient’s outcome. OBJECTIVE With the help of a complex rule engine, a data-driven approach is applied to assign the diagnoses of acute, chronic and acute-on-chronic kidney disease. METHODS Real-time and retrospective data from the hospital’s Clinical Data Warehouse, covering in- and outpatient cases treated between 2014 and 2019, are used. Delta serum creatinine, baseline values and admission and discharge data are analyzed. A KDIGO-based structured query language (SQL) algorithm applies specific diagnosis (ICD) codes to inpatient stays. To measure the effect on diagnosis, text mining of discharge documentation is conducted. RESULTS We show that this approach yields an increased number of diagnoses as well as higher precision in documentation and coding (the unspecific ICD diagnosis N19*, as a percentage of generated N19 codes, fell from 17.8% in 2016 to 3.3% in 2019). CONCLUSIONS Our data-driven method supports the process and reliability of diagnosis and staging and improves the quality of documentation and data. Measuring patients’ outcomes will be the next step of the project.
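The serum-creatinine part of the KDIGO AKI definition (a rise of at least 0.3 mg/dL within 48 hours, or an increase to at least 1.5 times baseline within the prior 7 days) can be expressed as a simple rule. The sketch below is a Python illustration of that core check only; it is not the authors' SQL rule engine and omits baseline back-calculation, urine-output criteria, staging and the CKD/acute-on-chronic logic.

```python
# Hedged sketch of the serum-creatinine AKI check from KDIGO:
#   (a) rise >= 0.3 mg/dL within 48 h, or
#   (b) current value >= 1.5 x a prior value within the preceding 7 days.
# Illustration only; the authors' rule engine (SQL, with baseline
# back-calculation, staging and ICD coding) is far more elaborate.
from datetime import datetime, timedelta
from typing import List, Tuple

def aki_flag(creatinine: List[Tuple[datetime, float]]) -> bool:
    """creatinine: (timestamp, value in mg/dL) pairs, in any order."""
    series = sorted(creatinine)
    for i, (t_now, v_now) in enumerate(series):
        for t_prev, v_prev in series[:i]:
            within_48h = t_now - t_prev <= timedelta(hours=48)
            within_7d = t_now - t_prev <= timedelta(days=7)
            if within_48h and v_now - v_prev >= 0.3:
                return True
            if within_7d and v_prev > 0 and v_now / v_prev >= 1.5:
                return True
    return False

# Example: a 0.4 mg/dL rise within 24 hours triggers the flag.
labs = [(datetime(2019, 5, 1, 8), 1.0), (datetime(2019, 5, 2, 8), 1.4)]
print(aki_flag(labs))  # True
```

Expressing the same window logic in SQL over a clinical data warehouse, as the authors do, additionally requires handling missing baselines and mapping the result to specific ICD codes per inpatient stay.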


2021 ◽  
pp. 026638212110619
Author(s):  
Sharon Richardson

During the past two decades, there have been a number of breakthroughs in the fields of data science and artificial intelligence, made possible by advanced machine learning algorithms trained through access to massive volumes of data. However, their adoption and use in real-world applications remains a challenge. This paper posits that a key limitation in making AI applicable has been a failure to modernise the theoretical frameworks needed to evaluate and adopt outcomes. Such a need was anticipated with the arrival of the digital computer in the 1950s but has remained unrealised. This paper reviews how the field of data science emerged and led to rapid breakthroughs in algorithms underpinning research into artificial intelligence. It then discusses the contextual framework now needed to advance the use of AI in real-world decisions that impact human lives and livelihoods.

