Affine Motion Prediction Based on Translational Motion Vectors

2007 ◽  
Vol 17 (10) ◽  
pp. 1388-1394 ◽  
Author(s):  
R.C. Kordasiewicz ◽  
M.D. Gallant ◽  
S. Shirani

Author(s):  
J. Karlsson

In this paper, the author presents an approach to efficient low-complexity encoding for block-based video coding. The method removes the most time-consuming task, motion estimation, from the encoder: instead, the decoder performs motion prediction based on the already decoded frames and sends the predicted motion vectors back to the encoder. The results, based on a modified H.264 implementation, show that this approach can provide good coding efficiency even for relatively high network delays.
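The abstract does not detail how the decoder forms its prediction, but one common choice for decoder-side motion prediction is constant-velocity extrapolation from the two most recently decoded frames. The sketch below illustrates that idea; the function name and the per-block motion-field layout are assumptions, not the paper's implementation.

```python
import numpy as np

def extrapolate_motion(mv_prev2, mv_prev1):
    """Predict the current frame's motion field by linear extrapolation
    from the two most recently decoded frames (constant-velocity model).

    mv_prev2, mv_prev1: arrays of shape (H, W, 2) holding per-block
    motion vectors (in pixels) for frames t-2 and t-1.
    """
    # Velocity per frame is (mv_prev1 - mv_prev2); apply it once more.
    return 2.0 * mv_prev1 - mv_prev2

# Example: a single block that moved 2 px right between t-2 and t-1.
mv_t2 = np.zeros((1, 1, 2))           # frame t-2: at rest
mv_t1 = np.array([[[2.0, 0.0]]])      # frame t-1: moved 2 px right
print(extrapolate_motion(mv_t2, mv_t1))  # predicted: 4 px right
```

Only the predicted vectors (or a correction to them) would then need to travel between decoder and encoder, which is why the scheme tolerates network delay.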


2007 ◽  
Vol 9 (7) ◽  
pp. 1346-1356 ◽  
Author(s):  
R.C. Kordasiewicz ◽  
M.D. Gallant ◽  
S. Shirani

Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 993 ◽  
Author(s):  
Young-Ju Choi ◽  
Dong-San Jun ◽  
Won-Sik Cheong ◽  
Byung-Gyu Kim

The fundamental motion model of conventional block-based motion compensation in High Efficiency Video Coding (HEVC) is a translational motion model. In the real world, however, the motion of an object combines many kinds of motion. In Versatile Video Coding (VVC), block-based 4-parameter and 6-parameter affine motion compensation (AMC) is applied. In natural videos, a rigid object typically moves without any regularity rather than maintaining its shape or transforming at a constant rate, so AMC is still limited in its ability to represent complex motion, and a more flexible motion model is desirable as a new video coding tool. In this paper, we design a perspective affine motion compensation (PAMC) method that can cope with more complex motions such as shear and shape distortion. The proposed PAMC utilizes both perspective and affine motion models. The perspective motion model-based method uses four control point motion vectors (CPMVs), giving a degree of freedom to each of the four corner vertices. The proposed algorithm is integrated into the AMC structure so that the existing affine mode and the proposed perspective mode can be executed adaptively. Because a block under the perspective motion model is a rectangle without a specific constraint, the proposed PAMC shows effective encoding performance, in particular for test sequences containing irregular object distortions or rapid dynamic motions. Our algorithm is implemented on VTM 2.0. Experimental results show that the proposed technique achieves BD-rate reductions of up to 0.45% and 0.30% on the Y component for the random access (RA) and low delay P (LDP) configurations, respectively.
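To make the CPMV idea concrete, the sketch below derives a per-sample motion field from two control point motion vectors under the VVC-style 4-parameter affine model (rotation + zoom + translation). This is a simplified illustration of the general model, not the VTM implementation (which operates on 4×4 subblocks with fixed-point MV precision).

```python
import numpy as np

def affine_mv_field(v0, v1, w, h):
    """Per-sample motion vectors for a 4-parameter affine model.

    v0: CPMV at the top-left corner (x=0, y=0)
    v1: CPMV at the top-right corner (x=w, y=0)
    Returns an (h, w, 2) array of (mvx, mvy) in pixels.
    """
    # The two CPMVs determine the rotation/zoom coefficients a and b.
    a = (v1[0] - v0[0]) / w
    b = (v1[1] - v0[1]) / w
    ys, xs = np.mgrid[0:h, 0:w]
    mvx = a * xs - b * ys + v0[0]
    mvy = b * xs + a * ys + v0[1]
    return np.stack([mvx, mvy], axis=-1)

# Degenerate case: equal CPMVs collapse to pure translation,
# so every sample gets the same motion vector.
field = affine_mv_field(v0=(1.0, 2.0), v1=(1.0, 2.0), w=8, h=8)
print(field[0, 0], field[7, 7])  # both (1., 2.)
```

The 6-parameter model adds a third CPMV at the bottom-left corner to decouple the horizontal and vertical axes; the paper's perspective model goes one step further and assigns a fourth CPMV to the bottom-right corner, so all four vertices move independently.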


2020 ◽  
Author(s):  
Belle Liu ◽  
Arthur Hong ◽  
Fred Rieke ◽  
Michael B. Manookin

Survival in the natural environment often relies on an animal’s ability to quickly and accurately predict the trajectories of moving objects. Motion prediction is primarily understood in the context of translational motion, but the environment contains other types of behaviorally salient motion, such as that produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these motion types remain unclear. Here, we address these questions in the macaque monkey retina. We report that four of the parallel output pathways in the primate retina encode predictive information about the future trajectory of moving objects. Predictive encoding occurs both for translational motion and for higher-order motion patterns found in natural vision. Further, predictive encoding of these motion types is nearly optimal, with transmitted information approaching the theoretical limit imposed by the stimulus itself. These findings argue that natural selection has emphasized encoding of information that is relevant for anticipating future properties of the environment.


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1243
Author(s):  
Seongwon Jung ◽  
Dongsan Jun

Versatile Video Coding (VVC) is the most recent video coding standard developed by the Joint Video Experts Team (JVET); it achieves a bit-rate reduction of about 50% at perceptually similar quality compared to its predecessor, High Efficiency Video Coding (HEVC). Although VVC delivers significant coding performance, it greatly increases the computational complexity of the encoder. In particular, VVC newly adopts an affine motion estimation (AME) method to overcome the limitations of the translational motion model, at the expense of higher encoding complexity. In this paper, we propose a context-based inter mode decision method for fast affine prediction that determines whether AME is performed during rate-distortion (RD) optimization for the optimal CU-mode decision. Experimental results show that the proposed method reduces the encoding complexity of AME by up to 33% with negligible coding loss compared to the VVC Test Model (VTM).
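The general shape of a context-based early decision can be sketched as follows. This is a minimal illustration of the idea, not the paper's exact rule: it assumes, plausibly but hypothetically, that affine usage is spatially correlated, so AME is tested only when a neighboring or parent CU already chose an affine mode.

```python
def should_test_affine(neighbor_modes, parent_used_affine):
    """Context-based early decision on whether to run affine motion
    estimation (AME) for the current CU during RD mode decision.

    neighbor_modes: coding modes of already-decided spatial neighbor CUs.
    parent_used_affine: whether the parent CU in the partition tree
    selected an affine mode.

    Heuristic sketch: skip the costly AME search unless the local
    context suggests affine motion is present.
    """
    return parent_used_affine or any(m == "AFFINE" for m in neighbor_modes)

# Neighbors coded with translational modes -> skip AME entirely.
print(should_test_affine(["INTER", "MERGE", "INTRA"], parent_used_affine=False))  # False
# An affine neighbor -> run the full AME search for this CU.
print(should_test_affine(["AFFINE", "INTER"], parent_used_affine=False))          # True
```

In an encoder, returning False here removes the AME candidates from the RD candidate list for that CU, which is where the complexity saving comes from.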

