affine motion
Recently Published Documents


TOTAL DOCUMENTS: 135 (FIVE YEARS: 19)

H-INDEX: 18 (FIVE YEARS: 3)

Author(s): Tianliang Fu, Kai Zhang, Li Zhang, Shanshe Wang, Siwei Ma

Electronics, 2021, Vol 10 (11), pp. 1243
Author(s): Seongwon Jung, Dongsan Jun

Versatile Video Coding (VVC) is the most recent video coding standard developed by the Joint Video Experts Team (JVET); it achieves a bit-rate reduction of about 50% at perceptually similar quality compared to its predecessor, High Efficiency Video Coding (HEVC). Although VVC delivers this significant coding gain, it greatly increases the computational complexity of the encoder. In particular, VVC newly adopted an affine motion estimation (AME) method to overcome the limitations of the translational motion model, at the expense of higher encoding complexity. In this paper, we propose a context-based inter mode decision method for fast affine prediction that determines whether AME is performed during rate-distortion (RD) optimization for the optimal CU-mode decision. Experimental results show that the proposed method reduces the encoding complexity of AME by up to 33% with negligible coding loss compared to the VVC Test Model (VTM).
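A minimal sketch of what a context-based early-skip check for AME inside the RD loop might look like; the context features (neighbouring affine usage, closeness of the translational RD cost) and the threshold are illustrative assumptions, not the decision rule of the cited paper.

def should_run_affine_me(cu, neighbors, translational_rd_cost, skip_threshold=1.05):
    """Decide whether to evaluate affine motion estimation (AME) for this CU.

    Hypothetical context-based early skip: feature set and threshold are
    assumptions for illustration only.
    """
    # Context 1: if no spatial neighbour used affine mode, affine is unlikely to win.
    any_neighbor_affine = any(n.is_affine for n in neighbors)

    # Context 2: if translational ME is already close to the best RD cost so far,
    # the extra AME search is unlikely to pay off.
    close_to_best = translational_rd_cost <= skip_threshold * cu.best_rd_cost_so_far

    if not any_neighbor_affine and close_to_best:
        return False  # skip AME, keep translational candidates only
    return True       # run 4-/6-parameter AME in the RD optimization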


Author(s): Dengchao Jin, Jianjun Lei, Bo Peng, Wanqing Li, Nam Ling, ...

Symmetry, 2020, Vol 12 (7), pp. 1143
Author(s): Weizheng Ren, Wei He, Yansong Cui

As a newly proposed video coding standard, Versatile Video Coding (VVC) has adopted several revolutionary techniques compared to High Efficiency Video Coding (HEVC). The multiple-mode affine motion compensation (MM-AMC) adopted by VVC saves approximately 15%-25% in Bjøntegaard Delta Bitrate (BD-BR), with an inevitable increase in encoding time. This paper gives an overview of both the 4-parameter and the 6-parameter affine motion models, analyzes their performance, and proposes improved algorithms that exploit the symmetry of iterative gradient descent for fast affine motion estimation. Finally, the proposed algorithms are compared against the symmetric MM-AMC framework of VTM-7.0. The results show that the proposed algorithms save 6.65% of the total encoding time on average, which corresponds to approximately 30% of the encoding time spent on affine motion compensation.
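For reference, the 4- and 6-parameter affine motion fields used in VVC follow directly from the control-point motion vectors (CPMVs). A small sketch computing the per-sample motion vector under both models (function and variable names are illustrative; subsample precision and clipping are omitted):

def affine_mv(x, y, w, h, v0, v1, v2=None):
    """Motion vector at sample (x, y) of a w-by-h block from CPMVs.

    v0: top-left CPMV, v1: top-right CPMV, v2: bottom-left CPMV.
    With v2=None this is the 4-parameter model; otherwise the 6-parameter model.
    """
    v0x, v0y = v0
    v1x, v1y = v1
    a = (v1x - v0x) / w
    b = (v1y - v0y) / w
    if v2 is None:
        # 4-parameter model (translation + rotation + zoom): the vertical
        # derivatives are tied to the horizontal ones.
        c, d = -b, a
    else:
        v2x, v2y = v2
        c = (v2x - v0x) / h
        d = (v2y - v0y) / h
    mvx = a * x + c * y + v0x
    mvy = b * x + d * y + v0y
    return mvx, mvy

In practice the encoder evaluates this field once per 4x4 sub-block rather than per sample, which is what makes the gradient-descent refinement of the CPMVs the dominant cost.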


2020, Vol 10 (2), pp. 729
Author(s): Antoine Chauvet, Yoshihiro Sugaya, Tomo Miyazaki, Shinichiro Omachi

This study proposes a lightweight method for estimating the affine parameters in affine motion compensation. Most current approaches start from an initial approximation based on standard motion estimation, which estimates only the translational parameters; iterative methods are then used to refine the parameters, but they require a significant amount of time. The proposed method speeds up the process in two ways: first, by skipping the evaluation of affine prediction when it is unlikely to bring any coding-efficiency benefit, and second, by estimating better initial values for the iterative refinement. We use the optical flow between the reference picture and the current picture to quickly select the best encoding mode and to obtain a better initial estimate. Compared with the state of the art, the proposed method roughly halves the encoding time, with an efficiency loss below 1%.
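A minimal sketch of one way such a flow-based initialization can be obtained: fitting the six affine parameters to sampled optical-flow vectors by linear least squares. The sampling scheme and parameterization are assumptions for illustration, not necessarily the exact procedure of the paper.

import numpy as np

def fit_affine_from_flow(points, flow):
    """Fit mv(x, y) = (a*x + b*y + c, d*x + e*y + f) to optical-flow samples.

    points: (N, 2) array of (x, y) sample positions inside the block
    flow:   (N, 2) array of flow vectors (dx, dy) at those positions
    Returns (a, b, c, d, e, f) as an initial guess for iterative affine ME.
    """
    x, y = points[:, 0], points[:, 1]
    ones = np.ones_like(x)
    A = np.stack([x, y, ones], axis=1)  # design matrix, shape (N, 3)
    # Two independent least-squares fits: horizontal and vertical flow components.
    (a, b, c), *_ = np.linalg.lstsq(A, flow[:, 0], rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, flow[:, 1], rcond=None)
    return a, b, c, d, e, f

The fitted parameters can then seed the encoder's iterative refinement, so fewer gradient-descent iterations are needed than when starting from a purely translational estimate.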


2020, Vol 29, pp. 7359-7374
Author(s): Holger Meuel, Jörn Ostermann
