Time Series Representation Learning with Contrastive Triplet Selection

2022
Author(s):  
Yuan-Chi Chang ◽  
Dharmashankar Subramanian ◽  
Raju Pavuluri ◽  
Timothy Dinger
Author(s):  
Emadeldeen Eldele ◽  
Mohamed Ragab ◽  
Zhenghua Chen ◽  
Min Wu ◽  
Chee Keong Kwoh ◽  
...  

Learning decent representations from unlabeled time-series data with temporal dynamics is a challenging task. In this paper, we propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC) to learn time-series representations from unlabeled data. First, the raw time-series data are transformed into two different yet correlated views by applying weak and strong augmentations. Second, we propose a novel temporal contrasting module that learns robust temporal representations through a tough cross-view prediction task. Last, to further learn discriminative representations, we propose a contextual contrasting module built upon the contexts from the temporal contrasting module. It maximizes the similarity among different contexts of the same sample while minimizing the similarity among contexts of different samples. Experiments have been carried out on three real-world time-series datasets. The results show that training a linear classifier on top of the features learned by our proposed TS-TCC performs comparably with supervised training. Additionally, TS-TCC shows high efficiency in few-labeled-data and transfer-learning scenarios. The code is publicly available at https://github.com/emadeldeen24/TS-TCC.
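
As a concrete illustration of the contrasting objective described in the abstract, the sketch below (not the authors' implementation; the linked repository contains that) builds a weak and a strong augmented view of each series and applies an NT-Xent-style contextual contrast, pulling together the two context vectors of the same sample and pushing apart those of different samples in the batch. The stand-in encoder, the jitter/scaling augmentation strengths, and the temperature value are illustrative assumptions.

```python
# Minimal sketch of weak/strong augmentation plus an NT-Xent-style contextual
# contrast. Not the TS-TCC code; encoder and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def weak_augment(x, scale_sigma=0.1):
    # Weak view: random scaling of the whole series.
    return x * (1.0 + scale_sigma * torch.randn(x.size(0), 1, 1))

def strong_augment(x, jitter_sigma=0.5):
    # Strong view: permuted time steps plus additive jitter (one common recipe).
    idx = torch.randperm(x.size(-1))
    return x[..., idx] + jitter_sigma * torch.randn_like(x)

def contextual_contrast_loss(c1, c2, temperature=0.2):
    """NT-Xent loss: the two contexts of the same sample are positives,
    contexts of different samples in the batch are negatives."""
    n = c1.size(0)
    z = F.normalize(torch.cat([c1, c2], dim=0), dim=1)   # (2n, d)
    sim = z @ z.t() / temperature                        # (2n, 2n) similarities
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))           # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage with a stand-in encoder producing one context vector per sample.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(128, 64))
x = torch.randn(32, 1, 128)                              # batch of univariate series
loss = contextual_contrast_loss(encoder(weak_augment(x)), encoder(strong_augment(x)))
loss.backward()
```

In this formulation every other sample in the batch acts as a negative, so larger batches give the contextual contrast more negatives to discriminate against.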


2021
pp. 108097
Author(s):  
Berk Görgülü ◽  
Mustafa Gökçe Baydoğan

Author(s):  
Karan Aggarwal ◽  
Shafiq Joty ◽  
Luis Fernandez-Luque ◽  
Jaideep Srivastava

Sufficient physical activity and restful sleep play a major role in the prevention and cure of many chronic conditions. Being able to proactively screen and monitor such chronic conditions would be a big step forward for overall health. The rapid increase in the popularity of wearable devices provides a significant new data source, making it possible to track the user's lifestyle in real time. In this paper, we propose a novel unsupervised representation learning technique called activity2vec that learns and "summarizes" discrete-valued activity time series. It learns the representations with three components: (i) the co-occurrence and magnitude of the activity levels in a time segment, (ii) the neighboring context of the time segment, and (iii) adversarial training to promote subject invariance. We evaluate our method on four disorder prediction tasks using linear classifiers. Empirical evaluation demonstrates that our proposed method scales and performs better than many strong baselines. The adversarial regime helps improve the generalizability of our representations by promoting subject-invariant features. We also show that using the representations at the level of a day works best, since human activity is structured in terms of daily routines.
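
The subject-invariance component (iii) can be pictured with the short sketch below. It is not the authors' activity2vec code, but one common way to realize adversarial subject invariance: a gradient-reversal layer placed in front of a subject classifier, so the encoder is pushed to produce embeddings from which the subject cannot be identified. The layer sizes, the reversal weight, and the toy data are illustrative assumptions.

```python
# Minimal sketch of adversarial subject-invariance via gradient reversal.
# Not the activity2vec implementation; sizes and data are assumptions.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient so the encoder learns to confuse the subject classifier,
        # while the classifier itself still trains normally.
        return -ctx.lambda_adv * grad_output, None

encoder = torch.nn.Sequential(
    torch.nn.Linear(96, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
subject_head = torch.nn.Linear(32, 10)     # adversary: predict which of 10 subjects

segments = torch.randn(64, 96)             # toy batch of activity time-segments
subject_ids = torch.randint(0, 10, (64,))  # toy subject labels

emb = encoder(segments)
adv_logits = subject_head(GradReverse.apply(emb, 1.0))
adv_loss = F.cross_entropy(adv_logits, subject_ids)
# In the full objective this adversarial term would be combined with the losses
# for components (i) and (ii); here it is backpropagated alone for brevity.
adv_loss.backward()
```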

