Spatial-temporal dual-actor CNN for human interaction prediction in video

2020 ◽  
Vol 79 (27-28) ◽  
pp. 20019-20038
Author(s):  
Mahlagha Afrasiabi ◽  
Hassan Khotanlou ◽  
Theo Gevers
Author(s):  
Qiuhong Ke ◽  
Mohammed Bennamoun ◽  
Senjian An ◽  
Farid Boussaid ◽  
Ferdous Sohel

2019 ◽  
Vol 36 (6) ◽  
pp. 1127-1139 ◽  
Author(s):  
Mahlagha Afrasiabi ◽  
Hassan Khotanlou ◽  
Muharram Mansoorizadeh

Author(s):  
Yichao Yan ◽  
Bingbing Ni ◽  
Xiaokang Yang

Predicting human interaction is challenging because the on-going activity has to be inferred from a partially observed video. Essentially, a good algorithm should effectively model the mutual influence between the two interacting subjects. Moreover, only a small region of the scene is discriminative for identifying the on-going interaction. In this work, we propose a relative attention model to explicitly address these difficulties. Built on a tri-coupled deep recurrent structure representing the two interacting subjects and the global interaction status, the proposed network collects spatio-temporal information from each subject, rectified with global interaction information, yielding an effective interaction representation. The network also incorporates an attention module that assigns higher importance to the regions relevant to the on-going action. Extensive experiments on two public datasets demonstrate that the proposed relative attention network successfully identifies informative regions between the interacting subjects, which in turn yields superior human interaction prediction accuracy.
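
The architecture described in this abstract lends itself to a compact illustration. Below is a minimal sketch, assuming a PyTorch implementation, of a tri-coupled recurrent model: one GRU per interacting subject plus one for the global interaction status, with a spatial attention module conditioned on the global state. All class, method, and parameter names (RelativeAttentionNet, attend, feat_dim, and so on) are hypothetical, and the dimensions are placeholders; this is not the authors' released code.

```python
# Illustrative sketch only: a tri-coupled recurrent model with spatial attention,
# loosely following the abstract's description. Names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelativeAttentionNet(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=8):
        super().__init__()
        # One recurrent cell per interacting subject plus one for the global status.
        self.subject_a = nn.GRUCell(feat_dim, hidden_dim)
        self.subject_b = nn.GRUCell(feat_dim, hidden_dim)
        self.global_rnn = nn.GRUCell(feat_dim, hidden_dim)
        # Attention scores each spatial region of a subject, conditioned on the
        # global hidden state ("rectified with global interaction information").
        self.attn = nn.Linear(feat_dim + hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim * 3, num_classes)

    def attend(self, regions, h_global):
        # regions: (B, R, feat_dim) region features for one subject at one frame.
        B, R, _ = regions.shape
        ctx = h_global.unsqueeze(1).expand(B, R, -1)
        scores = self.attn(torch.cat([regions, ctx], dim=-1)).squeeze(-1)  # (B, R)
        weights = F.softmax(scores, dim=-1)
        # Weighted sum keeps only the regions deemed relevant to the interaction.
        return (weights.unsqueeze(-1) * regions).sum(dim=1)  # (B, feat_dim)

    def forward(self, regions_a, regions_b, frame_feat):
        # regions_a, regions_b: (B, T, R, feat_dim); frame_feat: (B, T, feat_dim)
        B, T, R, D = regions_a.shape
        h_a = regions_a.new_zeros(B, self.subject_a.hidden_size)
        h_b = torch.zeros_like(h_a)
        h_g = torch.zeros_like(h_a)
        for t in range(T):
            # Global interaction status is updated from the whole-frame feature.
            h_g = self.global_rnn(frame_feat[:, t], h_g)
            # Each subject stream consumes attention-pooled region features.
            h_a = self.subject_a(self.attend(regions_a[:, t], h_g), h_a)
            h_b = self.subject_b(self.attend(regions_b[:, t], h_g), h_b)
        return self.classifier(torch.cat([h_a, h_b, h_g], dim=-1))


if __name__ == "__main__":
    model = RelativeAttentionNet()
    # Toy input: 2 partially observed clips, 5 frames, 6 regions per subject.
    a = torch.randn(2, 5, 6, 512)
    b = torch.randn(2, 5, 6, 512)
    g = torch.randn(2, 5, 512)
    print(model(a, b, g).shape)  # torch.Size([2, 8])
```

In this sketch the attention weights play the role the abstract attributes to the relative attention module: they concentrate each subject's representation on the regions most indicative of the on-going interaction before the recurrent states are fused for prediction.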


2018 ◽  
Vol 20 (7) ◽  
pp. 1712-1723 ◽  
Author(s):  
Qiuhong Ke ◽  
Mohammed Bennamoun ◽  
Senjian An ◽  
Ferdous Sohel ◽  
Farid Boussaid

1974 ◽  
Vol 19 (7) ◽  
pp. 539-540
Author(s):  
Newton Margulies

1975 ◽  
Vol 20 (7) ◽  
pp. 594-595
Author(s):  
Robert D. Langston
