Self-Supervised Mutual Learning for Video Representation Learning

Author(s): Chenrui Zhang, Yuxin Peng

Video representation learning is a vital problem for video classification. Recently, a promising unsupervised paradigm termed self-supervised learning has emerged, which explores the inherent supervisory signals implied in massive unlabeled data by solving auxiliary tasks. However, existing methods in this vein suffer from two limitations when extended to video classification. First, they focus on a single task and ignore the complementarity among different task-specific features, resulting in suboptimal video representations. Second, their high computational and memory cost hinders application in real-world scenarios. In this paper, we propose a graph-based distillation framework to address these problems: (1) we propose a logits graph and a representation graph to transfer knowledge from multiple self-supervised tasks, where the former distills classifier-level knowledge by solving a multi-distribution joint-matching problem, and the latter distills internal feature knowledge from pairwise ensembled representations while tackling the heterogeneity among different features; (2) by adopting a teacher-student framework, our proposal dramatically reduces the redundancy of the knowledge learned from the teachers, yielding a lighter student model that solves the classification task more efficiently. Experimental results on three video datasets validate that our proposal not only learns better video representations but also compresses the model for faster inference.
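Below is a minimal, hypothetical PyTorch sketch of how the two distillation terms described above could fit together: a logits-level loss that matches the student to a learnable weighted mixture of teacher output distributions (the logits graph), and a feature-level loss that matches the student's representation to an ensemble of projected teacher features (the representation graph). The class name `GraphDistillLoss`, the softmax edge weights, the per-teacher projection layers (used here to cope with heterogeneous teacher feature dimensions), and the equal weighting of the two losses are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of distilling from multiple self-supervised teachers
# into one student. Names and weighting scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphDistillLoss(nn.Module):
    def __init__(self, teacher_dims, student_dim, tau=4.0):
        super().__init__()
        self.tau = tau
        # Learnable edge weights over teachers: the "graph" over tasks.
        self.edge_logits = nn.Parameter(torch.zeros(len(teacher_dims)))
        # Per-teacher projections to reconcile heterogeneous feature spaces.
        self.proj = nn.ModuleList(
            nn.Linear(d, student_dim) for d in teacher_dims)

    def forward(self, s_logits, s_feat, t_logits_list, t_feat_list):
        w = F.softmax(self.edge_logits, dim=0)                     # (T,)
        # Logits graph: match a weighted mixture of teacher distributions.
        t_probs = torch.stack(
            [F.softmax(tl / self.tau, dim=1) for tl in t_logits_list])  # (T,B,C)
        mixture = (w.view(-1, 1, 1) * t_probs).sum(0)              # (B,C)
        log_s = F.log_softmax(s_logits / self.tau, dim=1)
        loss_logits = F.kl_div(
            log_s, mixture, reduction="batchmean") * self.tau ** 2
        # Representation graph: match the ensembled teacher features.
        t_feats = torch.stack(
            [p(f) for p, f in zip(self.proj, t_feat_list)])        # (T,B,D)
        ensemble = (w.view(-1, 1, 1) * t_feats).sum(0)             # (B,D)
        loss_feat = F.mse_loss(s_feat, ensemble)
        return loss_logits + loss_feat

# Toy usage: random tensors stand in for real student/teacher outputs.
B, C, D = 8, 101, 512
crit = GraphDistillLoss(teacher_dims=[256, 512, 1024], student_dim=D)
s_logits, s_feat = torch.randn(B, C), torch.randn(B, D)
t_logits = [torch.randn(B, C) for _ in range(3)]
t_feats = [torch.randn(B, d) for d in (256, 512, 1024)]
loss = crit(s_logits, s_feat, t_logits, t_feats)
loss.backward()  # gradients flow to the edge weights and projections
```

Letting the edge weights be learned jointly with the student is one plausible way to realize the "graph" intuition: teachers whose task-specific knowledge transfers better to classification can receive larger weights rather than being averaged uniformly.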

