Spatiotemporal model-based optic flow estimation

Author(s): N. Vasconcelos, A. Lippman
2005 · Vol 16 (4) · pp. 341-356
Author(s): S. Kalkan, D. Calow, F. Wörgötter, M. Lappe, N. Krüger

Author(s): Bruno P. Santos, Paulo H. L. Rettore, Heitor S. Ramos, Luiz F. M. Vieira, Antonio A. F. Loureiro

Author(s): L. Alvarez, C. A. Castaño, M. García, K. Krissian, L. Mazorra, ...

2014 · Vol 513-517 · pp. 3822-3829
Author(s): Wei Hua Hu, Liang Gu

How to achieve a meaningful video representation is an important problem across several research communities, and automatically segmenting non-specific objects remains an open problem. To address the segmentation errors that existing video algorithms produce in dynamic scenes, we propose a dynamic spatiotemporal saliency model based on the quaternion wavelet transform for video segmentation, which can automatically segment salient objects from a moving background. The model is a dynamic combination of a temporal attention model and a static saliency model. In the temporal attention model, motion-contrast information is computed from the phase disparity between two consecutive frames, where the phase is extracted from a quaternionic pyramid. In the static saliency model, spatial attention information is computed by an inverse quaternion wavelet transform over a set of scale-weighted center-surround responses; the scale-weighting function has been optimized to better replicate psychophysical data on color appearance. We combine the two kinds of attention information to obtain a preliminary result, which is finally refined with the GrabCut algorithm. Segmentation and comparison experiments demonstrate the validity of the proposed algorithm.
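The pipeline in the abstract (temporal map from inter-frame phase disparity, static map from scale-weighted center-surround responses, weighted fusion, then GrabCut refinement) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: an ordinary 2-D FFT phase stands in for the quaternionic pyramid phase, crude box blurs stand in for the quaternion wavelet scales, the scale weights and mixing weight `alpha` are hypothetical, and the final GrabCut refinement step is omitted.

```python
import numpy as np

def blur(img, k):
    """Crude box blur of half-width k (a stand-in for one Gaussian/wavelet scale)."""
    out = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return out / n

def motion_contrast(prev, curr):
    """Temporal attention map from the phase disparity of two consecutive frames.
    The paper extracts phase from a quaternionic pyramid; here the ordinary
    2-D FFT phase is used as an illustrative substitute."""
    d = np.angle(np.fft.fft2(curr)) - np.angle(np.fft.fft2(prev))
    d = (d + np.pi) % (2 * np.pi) - np.pi  # wrap phase difference to (-pi, pi]
    # Back-transform the unit-magnitude disparity spectrum to get a spatial map.
    return np.abs(np.fft.ifft2(np.exp(1j * d)))

def static_saliency(frame, scales=(1, 2, 4), weights=(0.5, 0.3, 0.2)):
    """Scale-weighted center-surround responses (scales and weights are
    illustrative, not the paper's optimized scale-weighting function)."""
    center = frame.astype(float)
    s = np.zeros_like(center)
    for k, w in zip(scales, weights):
        s += w * np.abs(center - blur(center, k))
    return s

def spatiotemporal_saliency(prev, curr, alpha=0.5):
    """Dynamic combination of the temporal and static maps; alpha is a
    hypothetical mixing weight. The paper's GrabCut refinement of this
    preliminary map is omitted here."""
    norm = lambda m: (m - m.min()) / (np.ptp(m) + 1e-9)
    t = motion_contrast(prev, curr)
    s = static_saliency(curr)
    return alpha * norm(t) + (1 - alpha) * norm(s)
```

In practice the fused map would be thresholded into a trimap and passed to a GrabCut implementation (e.g. the one in OpenCV) for the final object mask, as the abstract describes.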


2016 · Vol 54 · pp. 64-74
Author(s): Alejandro Alcaine, Natasja M.S. de Groot, Pablo Laguna, Juan Pablo Martínez, Richard P.M. Houben
