Building FAIR Functionality: Annotating Events in Time Series Data Using Hierarchical Event Descriptors (HED)

2021 ◽  
Author(s):  
Kay Robbins ◽  
Dung Truong ◽  
Alexander Jones ◽  
Ian Callanan ◽  
Scott Makeig

Abstract: Human electrophysiological and related time series data are often acquired in complex, event-rich environments. However, the resulting recorded brain or other dynamics are often interpreted in relation to more sparsely recorded or subsequently noted events. Currently a substantial gap exists between the level of event description required by current digital data archiving standards and the level of annotation required for successful analysis of event-related data across studies, environments, and laboratories. Manifold challenges, most prominently ontological clarity, vocabulary extensibility, annotation tool availability, and overall usability, must be addressed to allow and promote sharing of data with an effective level of descriptive detail for labeled events. Motivating data authors to perform the work needed to adequately annotate their data is a key challenge. This paper describes new developments in the Hierarchical Event Descriptor (HED) system for addressing these issues. We recap the evolution of HED and its acceptance by the Brain Imaging Data Structure (BIDS) movement, describe the recent release of HED-3G, a third-generation HED tools and design framework, and discuss directions for future development. Given consistent, sufficiently detailed, tool-enabled, field-relevant annotation of the nature of recorded events, prospects are bright for large-scale analysis and modeling of aggregated time series data, both in behavioral and brain imaging sciences and beyond.

2020 ◽  
Author(s):  
Kay Robbins ◽  
Dung Truong ◽  
Alexander Jones ◽  
Ian Callanan ◽  
Scott Makeig

In fields such as human electrophysiology, high-precision time series data are often acquired in complex, event-rich environments for interpretation of complex dynamic data features in the context of session events. However, a substantial gap exists between the level of event description information required by current digital research archive standards and the level of annotation required for successful meta-analysis or mega-analysis of event-related data across studies, systems, and laboratories. Manifold challenges, most prominently ontological clarity and extensibility, tool availability, and ease of use, must be addressed to allow and promote sharing of data with an effective level of descriptive detail for labeled events. Motivating data authors to perform the work needed to adequately annotate their data is a key challenge. This paper describes the near decade-long development of the Hierarchical Event Descriptor (HED) system for addressing these issues. We discuss the evolution of HED, the lessons we have learned, the current status of HED vocabulary and tools, some generally applicable design principles for annotation framework development, and a roadmap for future development. We believe that without consistent, sufficiently detailed, and field-relevant annotation of the nature of each recorded event, the potential value of data sharing and large-scale analysis in behavioral and brain imaging sciences will not be realized.
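To make the idea of machine-readable event annotation concrete, the sketch below attaches HED-style tag strings to events and runs the kind of tag-based query that detailed annotation enables across studies. The tag strings and event values are illustrative examples, not drawn from the paper or guaranteed to match the current HED vocabulary:

```python
# Illustrative HED-style annotations: each event carries an onset and a
# comma-separated, hierarchical tag string (example tags, not an official
# HED schema excerpt).
events = [
    {"onset_s": 1.250, "HED": "Sensory-event, Visual-presentation, (Red, Circle)"},
    {"onset_s": 1.832, "HED": "Agent-action, Participant-response, (Press, Keyboard-key)"},
]

def events_with_tag(events, tag):
    """Select events whose annotation mentions a given term --
    the kind of cross-study query that consistent annotation enables."""
    return [e for e in events if tag in e["HED"]]
```

Queries like `events_with_tag(events, "Sensory-event")` then work uniformly over any dataset annotated with the same vocabulary, which is the point of a shared descriptor system.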


2021 ◽  
Author(s):  
Sadnan Al Manir ◽  
Justin Niestroy ◽  
Maxwell Adam Levinson ◽  
Timothy Clark

Introduction: Transparency of computation is a requirement for assessing the validity of computed results and research claims based upon them, and it is essential for access to, assessment of, and reuse of computational components. These components may be subject to methodological or other challenges over time. While reference to archived software and/or data is increasingly common in publications, a single machine-interpretable, integrative representation of how results were derived, one that supports defeasible reasoning, has been absent. Methods: We developed the Evidence Graph Ontology, EVI, in OWL 2, with a set of inference rules, to provide deep representations of supporting and challenging evidence for computations, services, software, data, and results, across arbitrarily deep networks of computations, in connected or fully distinct processes. EVI integrates FAIR practices on data and software with important concepts from provenance models and argumentation theory. It extends PROV for additional expressiveness, with support for defeasible reasoning. EVI treats any computational result or component of evidence as a defeasible assertion, supported by a DAG of the computations, software, data, and agents that produced it. Results: We have successfully deployed EVI for very-large-scale predictive analytics on clinical time-series data. Every result may reference its own evidence graph as metadata, which can be extended when subsequent computations are executed. Discussion: Evidence graphs support transparency and defeasible reasoning on results. They are first-class computational objects, and reference the datasets and software from which they are derived. They support fully transparent computation, with challenge and support propagation. The EVI approach may be extended to include instruments, animal models, and critical experimental reagents.
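The core mechanism, where every result carries a DAG of the computations, data, and software that support it, so that a challenge to one node propagates to everything derived from it, can be pictured with a plain dictionary traversal. This is a toy model of challenge propagation, not the EVI ontology or its OWL 2 inference rules; node names are illustrative:

```python
def affected_by_challenge(evidence, node):
    """Given a map result -> list of evidence it was derived from,
    return every result whose support transitively includes `node`.
    A challenge to `node` defeasibly challenges all of them."""
    hit, stack = set(), [node]
    while stack:
        n = stack.pop()
        for result, supports in evidence.items():
            if n in supports and result not in hit:
                hit.add(result)
                stack.append(result)
    return hit
```

For example, if a figure was produced by an analysis that used a dataset and a script, challenging the dataset flags both the analysis and the figure as needing reassessment.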


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jing Zhao ◽  
Shubo Liu ◽  
Xingxing Xiong ◽  
Zhaohui Cai

Privacy protection is one of the major obstacles to data sharing. Time-series data have the characteristics of autocorrelation, continuity, and large scale. Most current research on time-series data publication ignores the correlation within time-series data, resulting in inadequate privacy protection. In this paper, we study the problem of correlated time-series data publication and propose a sliding-window-based autocorrelated time-series data publication algorithm, called SW-ATS. Instead of using the global sensitivity of traditional differential privacy mechanisms, we propose periodic sensitivity to provide a stronger degree of privacy guarantee. SW-ATS introduces a sliding window mechanism, with the correlation between the noise-added sequence and the original time-series data guaranteed by sequence indistinguishability, to protect the privacy of the latest data. We prove that SW-ATS satisfies ε-differential privacy. Compared with the state-of-the-art algorithm, SW-ATS reduces the mean absolute error (MAE) by about 25%, improves the utility of the data, and provides stronger privacy protection.
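As a minimal sketch of the underlying mechanism, perturbing each released point with Laplace noise whose scale is calibrated to a sensitivity shared across a window of recent releases, the snippet below uses a plain Laplace mechanism. The function name, the fixed seed, and the budget-splitting rule are illustrative assumptions; this is not the authors' SW-ATS algorithm, which instead uses periodic sensitivity and sequence indistinguishability:

```python
import numpy as np

def release_windowed(series, epsilon, sensitivity, w):
    """Perturb a series for epsilon-DP release, splitting the privacy
    budget across the w points of a sliding window (toy sketch only)."""
    rng = np.random.default_rng(42)  # seeded for reproducibility
    scale = w * sensitivity / epsilon  # each point shares the window budget
    return series + rng.laplace(0.0, scale, size=len(series))
```

Smaller ε or a larger window means larger noise scale, which is the utility cost that calibrating sensitivity more tightly (as the paper's periodic sensitivity does) aims to reduce.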


Algorithms ◽  
2020 ◽  
Vol 13 (4) ◽  
pp. 95 ◽  
Author(s):  
Johannes Stübinger ◽  
Katharina Adler

This paper develops the generalized causality algorithm and applies it to a multitude of data sets from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead–lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs gross domestic product and consumer price index (macroeconomics), the S&P 500 index and the Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature.
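The paper's elastic, non-linear time-axis mapping is beyond a short snippet, but the simpler problem it generalizes, finding a fixed lead–lag between two series, can be sketched as a correlation scan over integer shifts. Function names and the sinusoidal test signal are illustrative, and this baseline cannot capture the varying lags or structural breaks the paper handles:

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Find the integer shift k maximizing the Pearson correlation
    between x[t] and y[t + k] (positive k: y leads x by k samples).
    A linear baseline for the paper's non-linear, time-varying mapping."""
    best_k, best_r = 0, -np.inf
    n = len(x)
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = x[:n - k], y[k:]   # pair x[t] with y[t + k]
        else:
            a, b = x[-k:], y[:n + k]  # pair x[t] with y[t + k], k < 0
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_k, best_r = k, r
    return best_k
```

An elastic method replaces the single k by a monotone warping of the whole time axis, so the estimated lag can drift over time.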


2020 ◽  
Vol 496 (1) ◽  
pp. 629-637
Author(s):  
Ce Yu ◽  
Kun Li ◽  
Shanjiang Tang ◽  
Chao Sun ◽  
Bin Ma ◽  
...  

ABSTRACT Time series data of celestial objects are commonly used to study valuable and unexpected objects such as extrasolar planets and supernovae in time domain astronomy. Due to the rapid growth of data volume, traditional manual methods are becoming infeasible for continuously analysing accumulated observation data. To meet such demands, we designed and implemented a special tool named AstroCatR that can efficiently and flexibly reconstruct time series data from large-scale astronomical catalogues. AstroCatR can load original catalogue data from Flexible Image Transport System (FITS) files or databases, match each item to determine which object it belongs to, and finally produce time series data sets. To support high-performance parallel processing of large-scale data sets, AstroCatR uses an extract-transform-load (ETL) pre-processing module to create sky zone files and balance the workload. The matching module uses an overlapped indexing method and an in-memory reference table to improve accuracy and performance. The output of AstroCatR can be stored in CSV files or transformed into other formats as needed. The module-based software architecture ensures the flexibility and scalability of AstroCatR. We evaluated AstroCatR with actual observation data from the three Antarctic Survey Telescopes (AST3). The experiments demonstrate that AstroCatR can efficiently and flexibly reconstruct all time series data by setting relevant parameters and configuration files. Furthermore, the tool is approximately 3× faster than methods using relational database management systems at matching massive catalogues.
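The matching step can be pictured as a zones-style declination partition plus a positional cross-match within a small radius. Zone height, match radius, and function names below are illustrative defaults, and AstroCatR's overlapped indexing, reference table, and parallelism are deliberately omitted; this shows only the core geometric test being accelerated:

```python
import numpy as np

def zone_of(dec_deg, zone_height_deg=0.5):
    """Zones-style partition: bucket a source by declination so that
    candidate matches need only scan a few adjacent zones."""
    return int((dec_deg + 90.0) // zone_height_deg)

def crossmatch(cat_a, cat_b, radius_deg=1.0 / 3600):
    """Naive all-pairs positional match: pair (i, j) when the sources
    lie within radius_deg (default 1 arcsec) of each other."""
    matches = []
    for i, (ra1, dec1) in enumerate(cat_a):
        for j, (ra2, dec2) in enumerate(cat_b):
            # scale the RA difference by cos(dec) for small angles
            dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
            if np.hypot(dra, dec1 - dec2) <= radius_deg:
                matches.append((i, j))
    return matches
```

Partitioning by zone turns the quadratic all-pairs scan into many small per-zone scans, which is why pre-processing into sky zone files also balances the parallel workload.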


2013 ◽  
Author(s):  
Zaixian Xie ◽  
Matthew O. Ward ◽  
Elke A. Rundensteiner

Author(s):  
Trung Duy Pham ◽  
Dat Tran ◽  
Wanli Ma

In the biomedical and healthcare fields, protecting ownership of outsourced data is becoming a challenging issue when sharing data between data owners and data mining experts to extract hidden knowledge and patterns. Watermarking has been proven to be a rights-protection mechanism that provides detectable evidence of the legal ownership of a shared dataset without compromising its usability, and it has been applied to digital data in many formats such as audio, video, image, relational database, text, and software. Time series biomedical data such as electroencephalography (EEG) or electrocardiography (ECG) are valuable and costly to acquire in healthcare, and need ownership protection when shared or transmitted in data mining applications. However, little previous research has investigated this issue for this kind of data, given its particular characteristics and requirements. This paper proposes an optimized watermarking scheme to protect ownership for biomedical and healthcare systems in data mining. To achieve the highest possible robustness without losing watermark transparency, the Particle Swarm Optimization (PSO) technique is used to find a suitable quantization step. Experimental results on EEG data show that the proposed scheme provides good imperceptibility and improved robustness against various signal processing techniques and common attacks such as noise addition, low-pass filtering, and re-sampling.
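Quantization-based embedding of the general kind such schemes optimize can be sketched as plain quantization index modulation (QIM): each sample is snapped to one of two interleaved quantization lattices selected by the watermark bit, and the step size, the quantity PSO tunes in the paper, sets the robustness/transparency trade-off. Function names are illustrative and the PSO search is omitted:

```python
import numpy as np

def qim_embed(signal, bits, step):
    """Embed one bit per sample: snap to multiples of `step` for bit 0,
    or to multiples offset by step/2 for bit 1."""
    x = np.asarray(signal, dtype=float)
    b = np.asarray(bits)
    q0 = np.round(x / step) * step                           # lattice for bit 0
    q1 = np.round((x - step / 2) / step) * step + step / 2   # lattice for bit 1
    return np.where(b == 1, q1, q0)

def qim_extract(y, step):
    """Recover each bit by checking which lattice the sample is nearer."""
    y = np.asarray(y, dtype=float)
    d0 = np.abs(y - np.round(y / step) * step)
    d1 = np.abs(y - (np.round((y - step / 2) / step) * step + step / 2))
    return (d1 < d0).astype(int)
```

A larger `step` survives stronger distortion (anything smaller than step/4 per sample decodes correctly) but introduces more audible/visible change, which is exactly the trade-off an optimizer such as PSO searches over.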

