Statistical Outlier Curation Kernel Software (SOCKS): A Modern, Efficient Outlier Detection and Curation Suite

2021 ◽  
Author(s):  
Prasanta Pal ◽  
Remko Van Lutterveld ◽  
Nancy Quirós ◽  
Veronique Taylor ◽  
Judson Brewer

Real-world signal acquisition through sensors is at the heart of the modern digital revolution. However, almost every signal acquisition system is contaminated with noise and outliers. Precise detection and curation of data is an essential step toward revealing the true nature of the uncorrupted observations. With the exploding volume of digital data sources, there is a critical need for an outlier detection and curation tool that is robust yet easy to operate, low-latency, generic yet highly customizable, easily accessible, and adaptable to diverse types of data sources. Existing methods often boil down to data smoothing, which inherently causes valuable information loss. We have developed a C++-based software tool to decontaminate time-series and matrix-like data sources, with the goal of recovering the ground truth. The SOCKS tool will be made available as open-source software for broader adoption in the scientific community. Our work calls for a philosophical shift in the design of real-world data processing pipelines. We propose that raw data should be decontaminated first, through conditional flagging of outliers and curation of the flagged points, followed by iterative, parametrically tuned, asymptotic convergence to the ground truth as accurately as possible, before traditional data processing tasks are performed.
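The flag-then-curate philosophy described above can be illustrated with a minimal sketch. SOCKS itself is a C++ tool and its exact algorithm is not given here; the robust z-score rule, the neighbor-based curation, and all function names below are illustrative assumptions, not the SOCKS API.

```python
# Illustrative sketch only (not the SOCKS implementation): flag outliers
# conditionally, curate (replace) the flagged points, and iterate until
# the series stabilizes, using a simple median/MAD rule as the flagging
# condition.
import statistics

def flag_outliers(xs, k=3.0):
    """Return indices whose robust z-score exceeds k (hypothetical rule)."""
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs) or 1e-12
    return [i for i, x in enumerate(xs) if abs(x - med) / (1.4826 * mad) > k]

def curate(xs, flagged):
    """Replace each flagged point with the mean of its nearest clean neighbors."""
    clean = [i for i in range(len(xs)) if i not in set(flagged)]
    out = list(xs)
    for i in flagged:
        left = max((j for j in clean if j < i), default=None)
        right = min((j for j in clean if j > i), default=None)
        vals = [xs[j] for j in (left, right) if j is not None]
        out[i] = sum(vals) / len(vals) if vals else out[i]
    return out

def decontaminate(xs, k=3.0, max_iter=10):
    """Iteratively flag and curate until no new outliers are found."""
    for _ in range(max_iter):
        flagged = flag_outliers(xs, k)
        if not flagged:
            break
        xs = curate(xs, flagged)
    return xs
```

Unlike smoothing, this touches only the flagged samples, which is the information-preserving property the abstract argues for.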

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Hyunah Shin ◽  
Suehyun Lee

Abstract

Background: Adverse drug reactions (ADRs) are regarded as a major cause of death and a major contributor to public health costs. For the active surveillance of drug safety, the use of real-world data and real-world evidence as part of the overall pharmacovigilance process is important. In this regard, many studies apply data-driven approaches to support pharmacovigilance. We developed a pharmacovigilance data-processing pipeline (PDP) that utilizes electronic health record (EHR) and spontaneous reporting system (SRS) data to explore pharmacovigilance signals.

Methods: To this end, we integrated two medical data sources: the Konyang University Hospital (KYUH) EHR and the United States Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS). As part of the presented PDP, we converted the EHR data to the Observational Medical Outcomes Partnership (OMOP) data model. To evaluate the suitability of the proposed PDP for pharmacovigilance purposes, we performed a statistical validation using drugs that induce ear disorders.

Results: To validate the presented PDP, we extracted six drugs from the EHR that were significantly involved in ADRs causing ear disorders: nortriptyline (hazard ratio [HR] 8.06, 95% CI 2.41–26.91); metoclopramide (HR 3.35, 95% CI 3.01–3.74); doxycycline (HR 1.73, 95% CI 1.14–2.62); digoxin (HR 1.60, 95% CI 1.08–2.38); acetaminophen (HR 1.59, 95% CI 1.47–1.72); and sucralfate (HR 1.21, 95% CI 1.06–1.38). In FAERS, the strongest associations were found for nortriptyline (reporting odds ratio [ROR] 1.94, 95% CI 1.73–2.16), doxycycline (ROR 1.30, 95% CI 1.20–1.40), sucralfate (ROR 1.22, 95% CI 1.01–1.45), and hydroxyzine (ROR 1.17, 95% CI 1.06–1.29). We confirmed the results in a meta-analysis using random and fixed models for doxycycline, hydroxyzine, metoclopramide, nortriptyline, and sucralfate.

Conclusions: The proposed PDP could support active surveillance and the strengthening of potential ADR signals via real-world data sources. In addition, the PDP was able to generate real-world evidence for drug safety.
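The reporting odds ratios quoted from FAERS above come from a standard 2x2 disproportionality calculation. A short sketch of that calculation, with its usual log-normal 95% CI (the cell counts below are made-up illustrations, not the study's data):

```python
# Standard reporting odds ratio (ROR) from a 2x2 contingency table of
# spontaneous reports, with a log-normal 95% confidence interval.
import math

def reporting_odds_ratio(a, b, c, d):
    """a = reports with the drug and the event, b = drug, other events,
    c = other drugs with the event,              d = other drugs, other events.
    Returns (ROR, CI lower, CI upper)."""
    ror = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(ROR)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi
```

A signal is conventionally flagged when the lower CI bound exceeds 1, which is how the associations listed in the Results reach significance.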


2018 ◽  
Vol 21 ◽  
pp. S475
Author(s):  
S. Mokiou ◽  
Z. Hakimi ◽  
J. Wang-Silvanto ◽  
S. Horsburgh ◽  
S. Chadda

2015 ◽  
Author(s):  
Martin G. Skjæveland ◽  
Martin Giese ◽  
Dag Hovland ◽  
Espen H. Lian ◽  
Arild Waaler

2015 ◽  
Vol 18 (3) ◽  
pp. A20
Author(s):  
M. Gavaghan ◽  
S. Armstrong ◽  
C. Taggart ◽  
S. Garfield

2021 ◽  
Vol 4 ◽  
Author(s):  
Bradley Butcher ◽  
Vincent S. Huang ◽  
Christopher Robinson ◽  
Jeremy Reffin ◽  
Sema K. Sgaier ◽  
...  

Developing data-driven solutions that address real-world problems requires understanding those problems' causes and how their interactions affect the outcome, often with only observational data at hand. Causal Bayesian Networks (BNs) have been proposed as a powerful method for discovering and representing the causal relationships in observational data as a Directed Acyclic Graph (DAG). BNs could be especially useful for research in global health in Lower- and Middle-Income Countries, where there is an increasing abundance of observational data that could be harnessed for policy making, program evaluation, and intervention design. However, BNs have not been widely adopted by global health professionals, and in real-world applications, confidence in the results of BNs generally remains inadequate. This is partially due to the inability to validate against a ground truth, as the true DAG is not available. This is especially problematic if a learned DAG conflicts with pre-existing domain doctrine.

Here we conceptualize and demonstrate the idea of a "Causal Datasheet" that could approximate and document BN performance expectations for a given dataset, aiming to provide practitioners with confidence and sample size requirements. To generate results for such a Causal Datasheet, we developed a tool that generates synthetic Bayesian networks and associated synthetic datasets to mimic real-world datasets. The results given by well-known structure learning algorithms and a novel implementation of the OrderMCMC method using the Quotient Normalized Maximum Likelihood score were recorded. These results were used to populate the Causal Datasheet, and recommendations could be made depending on whether expected performance met user-defined thresholds.

We present our experience in creating Causal Datasheets to aid analysis decisions at different stages of the research process. First, one was deployed to help determine the appropriate sample size of a planned study of sexual and reproductive health in Madhya Pradesh, India. Second, a datasheet was created to estimate the performance of an existing maternal health survey we conducted in Uttar Pradesh, India. Third, we validated the generated performance estimates and investigated current limitations on the well-known ALARM dataset. Our experience demonstrates the utility of the Causal Datasheet, which can help global health practitioners gain more confidence when applying BNs.
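Benchmarking structure learning against a known synthetic DAG, as described above, needs a distance between the learned and true graphs. The abstract does not name the metric it uses; as a hedged illustration, here is the widely used Structural Hamming Distance (SHD), with the graph representation and function names being this sketch's own choices:

```python
# Hypothetical sketch of one metric a "Causal Datasheet" could report:
# the Structural Hamming Distance (SHD) between a learned DAG and the
# ground-truth DAG, i.e. the number of edge additions, deletions, and
# reversals needed to turn one into the other. A DAG is a dict mapping
# each node to the set of its children.

def edges(dag):
    """Flatten a child-set dict into a set of directed (parent, child) edges."""
    return {(u, v) for u, vs in dag.items() for v in vs}

def shd(true_dag, learned_dag):
    t, l = edges(true_dag), edges(learned_dag)
    # An edge present in both graphs but with opposite orientation
    # counts once, as a reversal.
    reversed_pairs = {(u, v) for (u, v) in t - l if (v, u) in l - t}
    missing = (t - l) - reversed_pairs
    extra = (l - t) - {(v, u) for (u, v) in reversed_pairs}
    return len(missing) + len(extra) + len(reversed_pairs)
```

On synthetic data the true DAG is known by construction, so a datasheet can report the expected SHD of each algorithm at each sample size and compare it with a user-defined threshold.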

