Hand Detection and Tracking in Videos for Fine-Grained Action Recognition

Author(s):  
Nga H. Do ◽  
Keiji Yanai

Author(s):  
Joanna Isabelle Olszewska ◽  
Cleveland Rouge ◽  
Sohil Shaikh

2021 ◽  
pp. 620-631

Author(s):  
Xiang Li ◽  
Shenglan Liu ◽  
Yunheng Li ◽  
Hao Liu ◽  
Jinjing Zhao ◽  
...  

Author(s):  
Yang Zhou ◽  
Bingbing Ni ◽  
Shuicheng Yan ◽  
Pierre Moulin ◽  
Qi Tian

Author(s):  
Hao Liang ◽  
Yong Zhao ◽  
Jiangyue Wei ◽  
Dongbing Quan ◽  
Ruzhong Cheng ◽  
...  

Author(s):  
Dima Damen ◽  
Hazel Doughty ◽  
Giovanni Maria Farinella ◽  
Antonino Furnari ◽  
Evangelos Kazakos ◽  
...  

Abstract: This paper introduces the pipeline used to extend the largest dataset in egocentric vision, EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, and 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments using head-mounted cameras. Compared to its previous version (Damen et al., Scaling Egocentric Vision, ECCV 2018), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete (+128% more action segments) annotation of fine-grained actions. This collection enables new challenges such as action detection and evaluating the "test of time", i.e. whether models trained on data collected in 2018 generalise to new footage collected two years later. The dataset is aligned with six challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), and unsupervised domain adaptation for action recognition. For each challenge, we define the task and provide baselines and evaluation metrics.
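The headline figures in the abstract are mutually consistent, which is easy to check with back-of-envelope arithmetic. A minimal sketch, using only the numbers quoted above (100 hours, 90K actions, +54% density, +128% segments); the derived previous-version counts are estimates, not figures from the paper:

```python
# Sanity-check the statistics quoted in the EPIC-KITCHENS-100 abstract.
# Input figures come from the text; derived numbers are back-of-envelope.

hours = 100
actions = 90_000  # "90K actions"

minutes = hours * 60
density = actions / minutes  # actions per minute in EPIC-KITCHENS-100
print(f"{density:.1f} actions/min")  # 15.0

# "+128% more action segments" implies the 2018 release had roughly:
prev_actions = actions / 2.28
print(f"~{prev_actions / 1000:.1f}K segments in the 2018 version")  # ~39.5K

# "54% more actions per minute" implies the earlier density was about:
prev_density = density / 1.54
print(f"~{prev_density:.1f} actions/min previously")  # ~9.7
```

The estimate of roughly 39.5K segments for the 2018 release follows directly from the +128% figure; it is derived here, not stated in the abstract.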


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 103629-103638 ◽  
Author(s):  
Jian Xiong ◽  
Liguo Lu ◽  
Hengbing Wang ◽  
Jie Yang ◽  
Guan Gui
