Finding Structure in Time: Visualizing and Analyzing Behavioral Time Series

2019 ◽  
Author(s):  
Tian Linger Xu ◽  
Kaya de Barbaro ◽  
Drew Abney ◽  
Ralf Cox

The temporal structure of behavior contains a rich source of information about its dynamic organization, origins, and development. Today, advances in sensing and data storage allow researchers to collect multiple dimensions of behavioral data at a fine temporal scale both in and out of the laboratory, leading to the curation of massive multimodal corpora of behavior. However, along with these new opportunities come new challenges. Theories are often underspecified as to the exact nature of these unfolding interactions, and psychologists have limited ready-to-use methods and training for quantifying structures and patterns in behavioral time series. In this paper, we introduce four techniques to interpret and analyze high-density multimodal behavior data, namely, to: (1) visualize the raw time series, (2) describe the overall distributional structure of temporal events (burstiness calculation), (3) characterize the nonlinear dynamics over multiple timescales with Chromatic and Anisotropic Cross-Recurrence Quantification Analysis (CRQA), and (4) quantify the directional relations among a set of interdependent multimodal behavioral variables with Granger causality. Each technique is introduced in a module with conceptual background, sample data drawn from empirical studies, and ready-to-use Matlab scripts. The code modules showcase each technique's application with detailed documentation to allow more advanced users to adapt them to their own datasets. Additionally, to make our modules more accessible to beginner programmers, we provide a "Programming Basics" module that introduces common functions for working with behavioral time-series data in Matlab. Together, the materials provide a practical introduction to a range of analyses that psychologists can use to discover temporal structure in high-density behavioral data.
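
The paper's own modules are written in Matlab; as a rough illustration of the burstiness measure in technique (2), the Python sketch below assumes the common Goh-Barabási definition B = (σ − μ)/(σ + μ) over inter-event intervals, which may differ in detail from the paper's implementation.

```python
import numpy as np

def burstiness(event_times):
    """Burstiness of a point process from its inter-event intervals.

    Returns a value in [-1, 1]: ~-1 for perfectly periodic events,
    ~0 for a Poisson process, approaching 1 for highly bursty behavior.
    """
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

# Hypothetical example: onsets of a behavior (e.g., vocalizations), in seconds.
events = [0.4, 0.9, 1.1, 1.2, 5.8, 6.0, 6.1, 12.3, 12.5, 20.0]
print(f"Burstiness B = {burstiness(events):.2f}")
```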

2003 ◽  
Author(s):  
Guofan Jin ◽  
Liangcai Cao ◽  
Qingsheng He ◽  
Haoyun Wei ◽  
Minxian Wu

1980 ◽  
Vol 13 (4) ◽  
pp. 543-559 ◽  
Author(s):  
Donald P. Hartmann ◽  
John M. Gottman ◽  
Richard R. Jones ◽  
William Gardner ◽  
Alan E. Kazdin ◽  
...  

2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data is scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. Specifically, the model is asked to predict randomly contaminated observations given the entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, leading to classification accuracy improvements of 1.91% to 6.69%. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
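
The authors' code is not reproduced here; the following is a minimal PyTorch sketch of the general idea, contaminating random timesteps of a pixel's spectral time series and training a small Transformer encoder to reconstruct them. All names, dimensions, and the contamination scheme are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SITSPretrainModel(nn.Module):
    """Hypothetical Transformer encoder pre-trained to reconstruct
    randomly contaminated observations of a pixel time series."""
    def __init__(self, n_bands=10, d_model=64, n_heads=4, n_layers=3, max_len=64):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)
        self.pos = nn.Embedding(max_len, d_model)              # learned temporal positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_bands)                 # reconstruct spectral values

    def forward(self, x):                                       # x: (batch, time, bands)
        t = torch.arange(x.size(1), device=x.device)
        h = self.embed(x) + self.pos(t)
        return self.head(self.encoder(h))

def pretrain_step(model, x, mask_ratio=0.15):
    """One self-supervised step: corrupt random timesteps, predict the originals."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio    # (batch, time)
    corrupted = x.clone()
    corrupted[mask] = torch.randn_like(x)[mask]                     # random contamination
    recon = model(corrupted)
    return ((recon - x) ** 2)[mask].mean()                          # loss on masked steps only

model = SITSPretrainModel()
x = torch.randn(8, 30, 10)      # 8 pixels, 30 acquisition dates, 10 spectral bands (toy data)
loss = pretrain_step(model, x)
loss.backward()
```

After pre-training along these lines, the encoder weights would be reused and fine-tuned with a classification head on the small labeled dataset of the downstream task.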


2020 ◽  
Vol 86 (7) ◽  
pp. 431-441 ◽  
Author(s):  
Sébastien Giordano ◽  
Simon Bailly ◽  
Loic Landrieu ◽  
Nesrine Chehata

Leveraging the recent availability of accurate, frequent, and multimodal (radar and optical) Sentinel-1 and -2 acquisitions, this paper investigates the automation of land parcel identification system (LPIS) crop type classification. Our approach allows for the automatic integration of temporal knowledge, i.e., crop rotations, using existing parcel-based land cover databases and multimodal Sentinel-1 and -2 time series. The temporal evolution of crop types was modeled with a linear-chain conditional random field, trained with time series of multimodal (radar and optical) satellite acquisitions and the associated LPIS. Our model was tested on two study areas in France (≥ 1250 km²), the Seine et Marne and the Alpes de Haute-Provence, which differ in crop types, parcel sizes, and agricultural practices, classified according to a fine national 25-class nomenclature. We first trained a Random Forest classifier without temporal structure, achieving 89.0% overall accuracy in Seine et Marne (10 classes) and 73% in Alpes de Haute-Provence (14 classes). We then demonstrated experimentally that taking into account the temporal structure of crop rotations with our model resulted in an increase of 3% to 5% in accuracy. The increase was especially important (+12%) for classes that were poorly classified without the temporal structure. A stark positive impact was also demonstrated on permanent crops, while it was fairly limited or even detrimental for annual crops.
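
As a toy illustration of how a linear-chain model can inject crop-rotation knowledge on top of per-year classifier scores, the sketch below runs Viterbi decoding over a parcel's yearly Random Forest posteriors using a hypothetical rotation (transition) matrix; the actual model, features, and training procedure in the paper differ.

```python
import numpy as np

def viterbi(emission_logp, transition_logp, prior_logp):
    """Most likely crop sequence for one parcel under a linear-chain model.

    emission_logp:   (n_years, n_classes) log-probabilities from the per-year classifier
    transition_logp: (n_classes, n_classes) log-probabilities of crop rotations
    prior_logp:      (n_classes,) log-prior for the first year
    """
    n_years, n_classes = emission_logp.shape
    score = prior_logp + emission_logp[0]
    back = np.zeros((n_years, n_classes), dtype=int)
    for t in range(1, n_years):
        cand = score[:, None] + transition_logp          # (previous class, current class)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emission_logp[t]
    path = [int(score.argmax())]
    for t in range(n_years - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 crop classes, 4 years of observations (all values are hypothetical).
rng = np.random.default_rng(0)
emissions = np.log(rng.dirichlet(np.ones(3), size=4))    # stand-in for Random Forest posteriors
rotations = np.log(np.array([[0.2, 0.6, 0.2],            # e.g., class 0 often followed by class 1
                             [0.5, 0.2, 0.3],
                             [0.3, 0.3, 0.4]]))
prior = np.log(np.ones(3) / 3)
print(viterbi(emissions, rotations, prior))
```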


Author(s):  
Nachiketa Chakraborty

With an explosion of data in the near future, from observatories spanning radio to gamma-rays, we have entered the era of time domain astronomy. Historically, this field has been limited to modeling temporal structure with time-series simulations in energy ranges blessed with excellent statistics, as in X-rays. In addition to ever increasing volumes and variety of astronomical lightcurves, there is a plethora of different types of transients detected not only across the electromagnetic spectrum, but across multiple messengers, such as counterparts to neutrino and gravitational wave sources. As a result, precise, fast forecasting and modeling of lightcurves or time series will play a crucial role both in understanding the physical processes and in coordinating multiwavelength and multimessenger campaigns. In this regard, deep learning algorithms such as recurrent neural networks (RNNs) should prove extremely powerful for forecasting, as they have in several other domains. Here we test the performance of a very successful class of RNNs, the Long Short-Term Memory (LSTM) networks, with simulated lightcurves. We focus on univariate forecasting of the types of lightcurves typically found in active galactic nuclei (AGN) observations. Specifically, we explore the sensitivity of training and test losses to key parameters of the LSTM network and to data characteristics, namely gaps and complexity measured in terms of the number of Fourier components. We find that the performance of LSTMs is typically better for pink or flicker noise type sources. The key parameters on which performance depends are the LSTM batch size and the gap percentage of the lightcurves. While a batch size of 10-30 seems optimal, the best test and train losses are obtained with under 10% missing data, for both periodic and random gaps in pink noise. The performance is far worse for red noise, which compromises the detectability of transients. The performance gets monotonically worse as data complexity, measured in terms of the number of Fourier components, increases, which is especially relevant in the context of complicated quasi-periodic signals buried under noise. Thus, we show that time-series simulations are excellent guides for the use of RNN-LSTMs in forecasting.
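
As a hedged illustration of the setup described (univariate one-step-ahead forecasting with an LSTM on a simulated lightcurve), the sketch below uses PyTorch; the architecture, window length, and the toy lightcurve built from a few Fourier components plus noise are assumptions for demonstration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Minimal univariate forecaster: predict the next flux value from a window."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])        # one-step-ahead prediction

# Toy lightcurve: a few Fourier components plus white noise (a stand-in for the
# simulated AGN lightcurves; not the paper's pink/red-noise generator).
t = torch.linspace(0, 20, 2000)
flux = sum(a * torch.sin(f * t) for a, f in [(1.0, 1.3), (0.5, 3.7), (0.3, 7.1)])
flux = flux + 0.1 * torch.randn_like(flux)

window = 50
X = torch.stack([flux[i:i + window] for i in range(len(flux) - window)]).unsqueeze(-1)
y = flux[window:].unsqueeze(-1)

model, loss_fn = LSTMForecaster(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                               # mini-batches of 20, in the 10-30 regime
    for i in range(0, len(X), 20):
        xb, yb = X[i:i + 20], y[i:i + 20]
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```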


2005 ◽  
Vol 17 (9) ◽  
pp. 1123-1127 ◽  
Author(s):  
G. A. Shaw ◽  
J. S. Trethewey ◽  
A. D. Johnson ◽  
W. J. Drugan ◽  
W. C. Crone

2002 ◽  
Vol 748 ◽  
Author(s):  
Yoshiomi Hiranaga ◽  
Kenjiro Fujimoto ◽  
Yasuo Wagatsuma ◽  
Yasuo Cho ◽  
Atsushi Onoe ◽  
...  

Scanning Nonlinear Dielectric Microscopy (SNDM) is a method for observing ferroelectric polarization distributions, and its resolution has now reached the sub-nanometer order, much higher than that of other scanning probe microscopy (SPM) methods used for the same purpose. We have been studying high-density ferroelectric data storage using this microscopy. In this study, we conducted fundamental experiments on nano-sized inverted domain formation in a LiTaO3 single crystal, and successfully formed an inverted dot array with a density of 1.5 Tbit/inch².

