Complexity Analysis of New Future Video Coding (FVC) Standard Technology

2021
Vol 2021
pp. 1-13
Author(s):
Soulef Bouaafia
Randa Khemiri
Seifeddine Messaoud
Fatma Elzahra Sayadi

Future Video Coding (FVC) is a modern standard in the field of video coding that offers much higher compression efficiency than the HEVC standard. FVC was developed by the Joint Video Exploration Team (JVET), formed through collaboration between ISO/IEC MPEG and ITU-T VCEG. New tools introduced with FVC provide super-resolution implementation schemes recommended for Ultra-High-Definition (UHD) video coding of both SDR and HDR content. In addition, the FVC standard adopts a new flexible block structure, named quadtree plus binary tree (QTBT), to enhance compression efficiency. In this paper, we propose a fast FVC algorithm to achieve better performance and to reduce encoding complexity. First, we evaluate the FVC profiles under All Intra, Low-Delay P, and Random Access to determine which coding components consume the most time. Second, a fast FVC mode decision is proposed to reduce the encoding computational complexity. Then, a comparison between three configurations, namely Random Access, Low-Delay B, and Low-Delay P, is presented in terms of Bitrate, PSNR, and encoding time. Compared to previous works, the experimental results show that the time saving reaches 13%, with a Bitrate decrease of about 0.6% and a PSNR decrease of 0.01 to 0.2 dB.
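As a rough illustration of how such complexity/quality figures are typically computed (this is a minimal sketch, not the authors' code), the snippet below derives the encoding-time saving, Bitrate change, and PSNR change of a fast encoder relative to an anchor encoder; the numbers are placeholders chosen only to be consistent with the figures quoted above.

```python
# Minimal sketch of the complexity/quality deltas quoted above: encoding-time
# saving, Bitrate change, and PSNR change of a fast encoder vs. the anchor.

def delta_metrics(time_ref, time_fast, bitrate_ref, bitrate_fast, psnr_ref, psnr_fast):
    """Return (time saving in %, Bitrate change in %, PSNR change in dB)."""
    time_saving = 100.0 * (time_ref - time_fast) / time_ref             # positive = faster
    bitrate_delta = 100.0 * (bitrate_fast - bitrate_ref) / bitrate_ref  # negative = fewer bits
    psnr_delta = psnr_fast - psnr_ref                                   # negative = quality loss
    return time_saving, bitrate_delta, psnr_delta

# Placeholder numbers consistent with the figures reported in the abstract.
print(delta_metrics(1000.0, 870.0, 5000.0, 4970.0, 38.00, 37.90))
# approximately: 13.0 % time saving, -0.6 % Bitrate, -0.1 dB PSNR
```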

Author(s):  
MyungJun Kim
Yung-Lyul Lee

High Efficiency Video Coding (HEVC) uses an 8-point filter and a 7-point filter, both based on the discrete cosine transform (DCT), for the 1/2-pixel and 1/4-pixel interpolations, respectively. In this paper, discrete sine transform (DST)-based interpolation filters (DST-IFs) are proposed to improve motion-compensated prediction in HEVC. The first proposed DST-IFs use 8-point and 7-point filters for the 1/2-pixel and 1/4-pixel interpolations, respectively; the final proposed DST-IFs use 12-point and 11-point filters. The 8-point and 7-point DST-IF methods showed average BD-rate reductions of 0.7% and 0.3% in the random access (RA) and low delay B (LDB) configurations, respectively. The 12-point and 11-point DST-IF methods showed average BD-rate reductions of 1.4% and 1.2% in the RA and LDB configurations for the luma component, respectively.
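For context, the sketch below applies HEVC's standard 8-tap DCT-based half-pel luma filter along one dimension. The DST-based coefficients proposed in the paper are not listed in the abstract, so the taps here are the HEVC DCT-IF ones, and the padding and rounding are simplified relative to the reference software.

```python
import numpy as np

# HEVC's 8-tap DCT-based half-pel luma filter taps (they sum to 64).
HALF_PEL_DCT_IF = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

def interpolate_half_pel_row(samples):
    """Return the half-pel samples between integer positions of a 1-D row."""
    padded = np.pad(np.asarray(samples, dtype=np.int32), (3, 4), mode="edge")
    out = np.empty(len(samples), dtype=np.int32)
    for i in range(len(samples)):
        window = padded[i:i + 8]                       # 8 integer-pel neighbours
        out[i] = (window @ HALF_PEL_DCT_IF + 32) >> 6  # round and normalize by 64
    return np.clip(out, 0, 255)                        # clip to the 8-bit range

# On a simple ramp the interpolated values lie close to the midpoints.
print(interpolate_half_pel_row(range(0, 80, 10)))
```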


2019
Vol 29 (03)
pp. 2050046
Author(s):
Xin Li
Na Gong

The state-of-the-art High Efficiency Video Coding standard (HEVC/H.265) adopts the hierarchical quadtree-structured coding unit (CU) to enhance coding efficiency. However, the computational complexity increases significantly because of the exhaustive rate-distortion (RD) optimization process needed to obtain the optimal coding tree unit (CTU) partition. In this paper, we propose a fast CU size decision algorithm to reduce the heavy computational burden of the encoding process. To achieve this, the CU splitting process is modeled as a three-stage binary classification problem according to the CU size, from 64 × 64 and 32 × 32 down to 16 × 16. In each CU partition stage, a deep learning approach is applied. Appropriate and efficient features for training the deep learning models are extracted from the spatial and pixel domains to eliminate the dependency on video content as well as on encoding configurations. Furthermore, the deep learning framework is built as a third-party library and embedded into the HEVC simulator to speed up the process. The experimental results show that the proposed algorithm achieves significant complexity reduction: it reduces the encoding time by 49.65% (Low Delay) and 48.81% (Random Access) on average compared with the traditional HEVC encoder, with negligible degradation in coding efficiency (a 2.78% BDBR increase and 0.145 dB BDPSNR loss for Low Delay, and a 2.68% BDBR increase and 0.128 dB BDPSNR loss for Random Access).
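The abstract does not spell out the network architecture or the exact features, so the following sketch only illustrates the control flow of such a three-stage split/no-split cascade; extract_features and predict_split are hypothetical stand-ins for the paper's learned models.

```python
import numpy as np

def extract_features(cu):
    """Toy spatial/pixel-domain features: mean, variance, gradient energy."""
    gy, gx = np.gradient(cu.astype(np.float64))
    return np.array([cu.mean(), cu.var(), (gx ** 2 + gy ** 2).mean()])

def predict_split(features, size):
    """Placeholder binary classifier for one CU size (a trained model in the paper)."""
    return features[1] > 500.0  # hypothetical rule: split textured blocks

def decide_partition(cu, size=64, x=0, y=0, leaves=None):
    """Recursively fix the quadtree partition, skipping the exhaustive RD search."""
    if leaves is None:
        leaves = []
    if size == 8 or not predict_split(extract_features(cu), size):
        leaves.append((x, y, size))              # keep this CU as a leaf
        return leaves
    half = size // 2
    for dy in (0, half):                         # quad-split into four sub-CUs
        for dx in (0, half):
            decide_partition(cu[dy:dy + half, dx:dx + half], half, x + dx, y + dy, leaves)
    return leaves

ctu = np.random.randint(0, 256, (64, 64))
print(len(decide_partition(ctu)), "leaf CUs")
```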


2013
Vol 798-799
pp. 798-802
Author(s):
Yu He
Chun Di Xiu

The 3D-SPIHT algorithm extends the 2-D DWT to a 3-D DWT and encodes the resulting coefficients; it is a research hotspot in very-low-bit-rate coding and data compression. This paper reorganizes the 3D-SPIHT coefficients into a block structure and puts forward the 3D-BSPIHT algorithm, which represents an insignificant block of four wavelet coefficients with a single bit in order to improve compression efficiency. Compared with the 3D-SPIHT algorithm, 3D-BSPIHT increases the average PSNR by up to 0.2 dB and decreases the computational complexity, especially at low bit rates.
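As a minimal illustration of the one-bit block significance idea mentioned above (not the authors' full coder), the sketch below tests whether a 2 × 2 block of wavelet coefficients is significant at a given bit-plane; an insignificant block would be coded with a single 0 bit.

```python
import numpy as np

# At bit-plane n, a block is insignificant (one 0 bit) when no coefficient
# magnitude reaches the threshold 2**n.

def block_significance_bit(block, n):
    """Return 1 if any coefficient in the block is significant at bit-plane n."""
    return int(np.max(np.abs(block)) >= (1 << n))

coeffs = np.array([[3, -1],
                   [0, 2]])
for n in range(4, -1, -1):
    print("bit-plane", n, "->", block_significance_bit(coeffs, n))
```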


Author(s):  
Tung Nguyen
Detlev Marpe

AOM Video 1 (AV1) and Versatile Video Coding (VVC) are the outcome of two recent, independent video coding technology developments. While VVC is the successor of High Efficiency Video Coding (HEVC) in the lineage of international video coding standards jointly developed by ITU-T and ISO/IEC within an open and public standardization process, AV1 is a video coding scheme that was developed by the industry consortium Alliance for Open Media (AOM) and that has its technological roots in Google's proprietary VP9 codec. This paper presents a compression efficiency evaluation of the AV1, VVC, and HEVC video coding schemes in a typical video compression application requiring random access, an important property without which essential functionalities in digital video broadcasting or streaming could not be provided. For the evaluation, we employed a controlled experimental environment that largely follows the guidelines specified in the Common Test Conditions of the Joint Video Experts Team. As representatives of the corresponding video coding schemes, we selected their freely available reference software implementations. Depending on the application-specific frequency of random access points, the experimental results show average bit-rate savings of about 10–15% for AV1 and 36–37% for the VVC reference encoder implementation (VTM), both relative to the HEVC reference encoder implementation (HM) and measured on a test set of video sequences with different characteristics regarding content and resolution. A direct comparison between VTM and AV1 reveals average bit-rate savings of about 25–29% for VTM, while the average encoding and decoding run times of VTM relative to those of AV1 are around 300% and 270%, respectively.
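Bit-rate savings of this kind are conventionally reported as Bjøntegaard delta rate (BD-rate) values. The sketch below follows the standard cubic-fit-and-integrate procedure; the rate/PSNR points are made-up placeholders, not the paper's measurements.

```python
import numpy as np

def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
    """Average bit-rate difference (%) of the test codec vs. the reference,
    using the usual cubic fit of log-rate over PSNR and integration over the
    overlapping PSNR interval."""
    lr_ref, lr_test = np.log10(rate_ref), np.log10(rate_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))      # overlapping PSNR range
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10.0 ** avg_diff - 1.0) * 100.0      # negative = test codec saves bits

# Placeholder rate (kbps) / PSNR (dB) points for four operating points.
print(bd_rate([1000, 1800, 3200, 6000], [34.0, 36.0, 38.0, 40.0],
              [ 900, 1600, 2900, 5400], [34.1, 36.1, 38.0, 40.1]))
```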

