Deep trajectory representation-based clustering for motion pattern extraction in videos

Author(s):  
Jonathan Boyle ◽  
Tahir Nawaz ◽  
James Ferryman
Trajectory representation and landmark projection for continuous-time structure from motion

Author(s):  
Hannes Ovrén ◽  
Per-Erik Forssén
2019 ◽  
Vol 38 (6) ◽  
pp. 686-701

This paper revisits the problem of continuous-time structure from motion, and introduces a number of extensions that improve convergence and efficiency. The formulation with a C²-continuous spline for the trajectory naturally incorporates inertial measurements, as derivatives of the sought trajectory. We analyze the behavior of split spline interpolation on ℝ³ and on SO(3), and a joint spline on SE(3), and show that the latter implicitly couples the direction of translation and rotation. Such an assumption can make good sense for a camera mounted on a robot arm, but not for hand-held or body-mounted cameras. Our experiments in the Spline Fusion framework show that a split spline on ℝ³ × SO(3) is preferable over an SE(3) spline in all tested cases. Finally, we investigate the problem of landmark reprojection on rolling shutter cameras, and show that the tested reprojection methods give similar quality, whereas their computational load varies by a factor of two.
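The coupling of translation and rotation that the abstract attributes to a joint SE(3) spline can be seen in a toy two-pose interpolation. The sketch below (a simplification, not the paper's cumulative B-spline formulation from the Spline Fusion framework) compares the geodesic midpoint on SE(3), computed with a single matrix log/exp, against a "split" midpoint where rotation and translation are interpolated independently: the joint version drags the translation along a screw-motion arc, while the split version keeps it on a straight line.

```python
import numpy as np
from scipy.linalg import expm, logm

def pose(R, t):
    """Assemble a 4x4 rigid-body transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def Rz(a):
    """Rotation by angle a about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

T0 = pose(np.eye(3), np.zeros(3))                    # start pose
T1 = pose(Rz(np.pi / 2), np.array([1.0, 0.0, 0.0]))  # end: 90 deg turn + unit shift

# Joint SE(3) geodesic midpoint: one matrix log/exp interpolates the whole
# transform, so the translation follows a screw motion (a circular arc).
Tm_joint = T0 @ np.real(expm(0.5 * logm(np.linalg.inv(T0) @ T1)))

# Split midpoint: rotation and translation interpolated independently,
# so the translation moves along a straight line.
Rm_split = np.real(expm(0.5 * logm(Rz(np.pi / 2))))
tm_split = 0.5 * (T0[:3, 3] + T1[:3, 3])

print(Tm_joint[:3, 3])  # ~[0.5, -0.207, 0]: translation bends with the rotation
print(tm_split)         # [0.5, 0, 0]: straight-line translation
```

The nonzero y-component of the joint midpoint is the implicit coupling: changing only the rotation of the end pose would also change the interpolated translation path, which is reasonable for a camera on a robot arm but not for a hand-held camera.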

