An Information Theoretic Approach for Creating 3D Spatial Images from 4D Time Series Data

2017 ◽  
Vol 23 (S1) ◽  
pp. 100-101
Author(s):  
Willy Wriggers ◽  
Julio Kovacs ◽  
Federica Castellani ◽  
P. Thomas Vernier ◽  
Dean J. Krusienski

Entropy ◽ 
2019 ◽  
Vol 21 (6) ◽  
pp. 566 ◽  
Author(s):  
Junning Deng ◽  
Jefrey Lijffijt ◽  
Bo Kang ◽  
Tijl De Bie

Numerical time series data are pervasive, originating from sources as diverse as wearable devices, medical equipment, and sensors in industrial plants. In many cases, time series contain interesting information in the form of subsequences that recur in approximate form, so-called motifs. Major open challenges in this area include how to formalize the interestingness of such motifs and how to find the most interesting ones. We introduce a novel approach that tackles these issues. We formalize the notion of such subsequence patterns in an intuitive manner and present an information-theoretic approach for quantifying their interestingness with respect to any prior expectation a user may have about the time series. The resulting interestingness measure is thus a subjective measure, enabling a user to find motifs that are truly interesting to them. Although finding the best motif appears computationally intractable, we develop relaxations and a branch-and-bound approach implemented in a constraint programming solver. As shown in experiments on synthetic data and two real-world datasets, this enables us to mine interesting patterns in small or mid-sized time series.
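
As a concrete illustration (not the authors' actual formalization or branch-and-bound search), the following Python sketch scores a candidate motif by its self-information, -log2 p, where p is the probability of observing at least as many approximate recurrences under a crude empirical null model; the helper names, the distance threshold, and the planted-motif demo are all assumptions for illustration.

```python
import numpy as np

def znorm(w):
    """Z-normalize a window so matching is offset- and scale-invariant."""
    return (w - w.mean()) / (w.std() + 1e-12)

def occurrences(x, start, length, tol):
    """Start indices where the subsequence at `start` recurs within
    z-normalized Euclidean distance `tol` (trivial overlapping matches
    are not excluded in this sketch)."""
    pattern = znorm(x[start:start + length])
    return [i for i in range(len(x) - length + 1)
            if np.linalg.norm(znorm(x[i:i + length]) - pattern) <= tol]

def surprisal(x, start, length, tol, n_null=50, seed=0):
    """Hypothetical interestingness: -log2 of the estimated probability
    of seeing at least this many recurrences, using recurrence counts of
    random reference subsequences as a stand-in for a formal prior."""
    k = len(occurrences(x, start, length, tol))
    rng = np.random.default_rng(seed)
    null = [len(occurrences(x, s, length, tol))
            for s in rng.integers(0, len(x) - length + 1, size=n_null)]
    p = (sum(c >= k for c in null) + 1) / (n_null + 1)  # add-one smoothing
    return -np.log2(p)

# Demo: noise with three noisy copies of a sine-shaped motif planted in it.
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
motif = np.sin(np.linspace(0, 2 * np.pi, 50))
for s in (100, 400, 700):
    x[s:s + 50] = motif + 0.1 * rng.standard_normal(50)
print(surprisal(x, start=100, length=50, tol=2.0))  # high: planted motif
print(surprisal(x, start=0, length=50, tol=2.0))    # near zero: plain noise
```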


2021 ◽  
Author(s):  
Elizabeth Bradley ◽  
Michael Neuder ◽  
Joshua Garland ◽  
James White ◽  
Edward Dlugokencky

While it is tempting in experimental practice to seek as high a data rate as possible, oversampling can become an issue if one takes measurements too densely. Oversampling effects can take many forms, some of which are easy to detect: e.g., when the data sequence contains multiple copies of the same measured value. In other situations, as when there is mixing (in the measurement apparatus and/or the system itself), oversampling effects can be harder to detect. We propose a novel, model-free technique to detect local mixing in time series using an information-theoretic technique called permutation entropy. By varying the temporal resolution of the calculation and analyzing the patterns in the results, we can determine whether the data are mixed locally, and on what scale. This can be used by practitioners to choose appropriate lower bounds on the scales at which to measure or report data. After validating this technique on several synthetic examples, we demonstrate its effectiveness on data from a chemistry experiment, methane records from Mauna Loa, and an Antarctic ice core.
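
A minimal sketch of the core calculation, assuming standard Bandt–Pompe permutation entropy; the delay sweep and the oversampled toy signal are illustrative choices, not the authors' data or exact procedure.

```python
import numpy as np
from math import log2, factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy: Shannon entropy of the distribution
    of ordinal patterns of length `order`, sampled with temporal spacing
    `delay` (ties are broken by index order in this sketch)."""
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        window = x[i : i + order * delay : delay]
        pattern = tuple(np.argsort(window))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values())) / n
    h = -np.sum(probs * np.log2(probs))
    return h / log2(factorial(order))  # normalize to [0, 1]

# Heavily oversampled toy signal: each underlying sample repeated 10 times.
rng = np.random.default_rng(1)
x = np.repeat(rng.standard_normal(200), 10)

# Sweep the delay (effective temporal resolution). Very low entropy at
# small delays signals redundancy consistent with oversampling; entropy
# rises once the delay exceeds the repetition scale.
for d in (1, 5, 10, 20):
    print(d, round(permutation_entropy(x, order=3, delay=d), 3))
```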


Author(s):  
Nicholas Hoernle ◽  
Kobi Gal ◽  
Barbara Grosz ◽  
Leilah Lyons ◽  
Ada Ren ◽  
...  

This paper describes methods for the comparative evaluation of the interpretability of models of high-dimensional time series data inferred by unsupervised machine learning algorithms. The time series data used in this investigation were logs from an immersive simulation like those commonly used in education and healthcare training. The structures learnt by the models provide representations of participants' activities in the simulation that are intended to be meaningful to the people interpreting them. To choose the model that induces the best representation, we designed two interpretability tests, each of which evaluates the extent to which a model's output aligns with people's expectations or intuitions about what has occurred in the simulation. We compared the performance of the models on these interpretability tests to their performance on statistical information criteria. We show that the models that optimize interpretability differ from those that optimize (statistical) information-theoretic criteria. Furthermore, we found that a model using a fully Bayesian approach performed well on both the statistical and the human-interpretability measures. The Bayesian approach is therefore a good candidate for fully automated model selection, i.e., when direct empirical investigations of interpretability are costly or infeasible.
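
As a toy illustration of the statistical side of this comparison (the paper's models and data are not reproduced here), the sketch below selects the number of latent components of a scikit-learn GaussianMixture, used as a hypothetical stand-in for the unsupervised models, by the Bayesian Information Criterion; a human-interpretability test would be a separate, empirical step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for simulation-log features: three latent activity regimes.
rng = np.random.default_rng(0)
X = np.concatenate([
    rng.normal(-3.0, 0.5, 300),
    rng.normal(0.0, 0.5, 300),
    rng.normal(4.0, 0.5, 300),
]).reshape(-1, 1)

# Statistical model selection: lower BIC is better.
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(f"k={k}  BIC={gm.bic(X):.1f}")

# The k that minimizes BIC is the statistical pick; the paper's finding is
# that the model preferred by such criteria need not be the one people find
# most interpretable, so interpretability must be evaluated separately.
```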

