View and scale invariant action recognition using multiview shape-flow models

Author(s): Pradeep Natarajan, Ramakant Nevatia

2014 · Vol 123 · pp. 41-52
Author(s): Nazim Ashraf, Chuan Sun, Hassan Foroosh

Author(s): Yanli Ji, Feixiang Xu, Yang Yang, Ning Xie, Heng Tao Shen, ...

2018 · Vol 119 (2) · pp. 631-640
Author(s): Leyla Isik, Andrea Tacchetti, Tomaso Poggio

Humans can effortlessly recognize others’ actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the regions in the brain involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography decoding and a data set of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discriminates between different actions, and when it does so in a manner that is invariant to changes in 3D viewpoint. We measure the latency difference between invariant and noninvariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. We were unable to detect a difference in decoding latency or temporal profile between invariant and noninvariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time and that both form and motion information are crucial for fast, invariant action recognition.

NEW & NOTEWORTHY The human brain can quickly recognize actions despite transformations that change their visual appearance. We use neural timing data to uncover the computations underlying this ability. We find that within 200 ms action can be read out of magnetoencephalography data and that this representation is invariant to changes in viewpoint. We find form and motion are needed for this fast action decoding, suggesting that the brain quickly integrates complex spatiotemporal features to form invariant action representations.
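For readers who want a concrete picture of what "invariant action decoding" means computationally, a minimal sketch follows. It is not the authors' analysis pipeline: the data shapes, simulated signals, labels, and classifier choice are illustrative assumptions. The core idea is to train a classifier on trials from one viewpoint and test it on trials from another, separately at each time point, so that above-chance accuracy at time t indicates an action representation that generalizes across 3D viewpoint.

```python
# Hypothetical sketch of time-resolved, cross-viewpoint action decoding.
# All shapes and labels below are illustrative, not the study's real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated MEG data: trials x sensors x time samples, with an action label
# and a viewpoint label per trial (5 actions, 2 viewpoints).
n_trials, n_sensors, n_times = 200, 306, 120
X = rng.normal(size=(n_trials, n_sensors, n_times))
actions = rng.integers(0, 5, size=n_trials)     # run, walk, jump, eat, drink
viewpoints = rng.integers(0, 2, size=n_trials)  # e.g., two camera angles

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# "Invariant" decoding: fit on trials from viewpoint 0, score on viewpoint 1,
# independently at each time point.
train_idx = viewpoints == 0
test_idx = viewpoints == 1
invariant_acc = np.zeros(n_times)
for t in range(n_times):
    clf.fit(X[train_idx, :, t], actions[train_idx])
    invariant_acc[t] = clf.score(X[test_idx, :, t], actions[test_idx])

# Comparing the earliest above-chance time point of this curve with that of
# within-viewpoint ("noninvariant") decoding gives the latency difference
# discussed in the abstract.
print("peak cross-viewpoint accuracy: %.2f" % invariant_acc.max())
```

On real data this is typically repeated over subjects and cross-validation folds, with permutation tests to establish when accuracy first exceeds chance; the sketch omits those steps for brevity.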


2013 · Vol 117 (6) · pp. 587-602
Author(s): Nazim Ashraf, Yuping Shen, Xiaochun Cao, Hassan Foroosh
