Fast CU Size Decision Method Based on Just Noticeable Distortion and Deep Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jinchao Zhao ◽  
Yihan Wang ◽  
Qiuwen Zhang

With the development of broadband networks and high-definition displays, people have higher expectations for the quality of video images, which brings new requirements and challenges to video coding technology. Compared with H.265/High Efficiency Video Coding (HEVC), the latest video coding standard, Versatile Video Coding (VVC), can save 50% of the bit rate while maintaining the same subjective quality, but at the cost of extremely high encoding complexity. To decrease this complexity, a fast coding unit (CU) size decision method based on Just Noticeable Distortion (JND) and deep learning is proposed in this paper. Specifically, a hybrid JND threshold model is first designed to classify each region as smooth, normal, or complex. Then, if a CU belongs to a complex region, Ultra-Spherical SVM (US-SVM) classifiers are trained to forecast the best splitting mode. Experimental results illustrate that the proposed method saves about 52.35% of coding runtime, realizing a trade-off between the reduction of computational burden and coding efficiency compared with the latest methods.
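The region-classification step described above can be sketched as follows. This is a minimal illustration, not the paper's hybrid JND model: the feature (mean absolute deviation of luma) and the two threshold values are placeholder assumptions standing in for the JND thresholds.

```python
# Hypothetical sketch: classify a CU as smooth / normal / complex by
# comparing its mean absolute luma deviation against two JND-style
# thresholds. Feature and threshold values are illustrative assumptions.

def classify_region(block, t_smooth=2.0, t_complex=8.0):
    """block: 2-D list of luma samples; returns 'smooth'/'normal'/'complex'."""
    samples = [p for row in block for p in row]
    mean = sum(samples) / len(samples)
    mad = sum(abs(p - mean) for p in samples) / len(samples)
    if mad < t_smooth:
        return "smooth"
    if mad > t_complex:
        return "complex"
    return "normal"

flat = [[128] * 8 for _ in range(8)]                              # uniform block
edgy = [[0 if x < 4 else 255 for x in range(8)] for _ in range(8)]  # strong edge
print(classify_region(flat))  # smooth
print(classify_region(edgy))  # complex
```

In the proposed scheme, only blocks classified as complex would proceed to the trained US-SVM classifiers; smooth and normal regions take cheaper paths.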

2019 ◽  
Vol 29 (03) ◽  
pp. 2050046
Author(s):  
Xin Li ◽  
Na Gong

The state-of-the-art High Efficiency Video Coding standard (HEVC/H.265) adopts a hierarchical quadtree-structured coding unit (CU) to enhance coding efficiency. However, computational complexity increases significantly because of the exhaustive rate-distortion (RD) optimization process needed to obtain the optimal coding tree unit (CTU) partition. In this paper, we propose a fast CU size decision algorithm to reduce the heavy computational burden of the encoding process. To achieve this, the CU splitting process is modeled as a three-stage binary classification problem according to the CU size, from 64×64 and 32×32 down to 16×16. In each CU partition stage, a deep learning approach is applied. Appropriate and efficient features for training the deep learning models are extracted from the spatial and pixel domains to eliminate the dependency on video content as well as on encoding configurations. Furthermore, the deep learning framework is built as a third-party library and embedded into the HEVC simulator to speed up the process. The experimental results show the proposed algorithm achieves significant complexity reduction: it reduces the encoding time by 49.65% (Low Delay) and 48.81% (Random Access) on average compared with the traditional HEVC encoder, with negligible degradation in coding efficiency (a 2.78% BDBR increase and 0.145 dB BDPSNR loss for Low Delay, and a 2.68% BDBR increase and 0.128 dB BDPSNR loss for Random Access).
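The staged split decision can be illustrated with a toy cascade: one binary "split / don't split" decision per CU size, applied recursively down the quadtree. Here each stage's classifier is stubbed as a simple variance threshold rather than the paper's trained deep models, and the threshold values are invented for illustration.

```python
# Toy quadtree partition: a binary decision per CU size (64, 32, 16),
# each stage stubbed as a variance threshold instead of a trained model.
SPLIT_THRESHOLDS = {64: 100.0, 32: 200.0, 16: 400.0}  # illustrative values

def variance(block):
    samples = [p for row in block for p in row]
    mean = sum(samples) / len(samples)
    return sum((p - mean) ** 2 for p in samples) / len(samples)

def decide_partition(block, size=64):
    """Recursively decide the quadtree partition; returns the leaf CU sizes."""
    if size == 8 or variance(block) <= SPLIT_THRESHOLDS.get(size, float("inf")):
        return [size]  # terminate: encode this CU without further splitting
    half = len(block) // 2
    quads = [[row[:half] for row in block[:half]],
             [row[half:] for row in block[:half]],
             [row[:half] for row in block[half:]],
             [row[half:] for row in block[half:]]]
    leaves = []
    for q in quads:
        leaves.extend(decide_partition(q, size // 2))
    return leaves

flat = [[10] * 8 for _ in range(8)]
print(decide_partition(flat))  # [64]: a homogeneous CTU is never split
```

A flat block terminates immediately at 64×64, while a high-variance block descends through all three stages; skipping the exhaustive RD search at each terminated node is where the encoding-time saving comes from.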


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Jinchao Zhao ◽  
Yihan Wang ◽  
Qiuwen Zhang

With the development of technology, hardware requirements and users' expectations for visual enjoyment are getting higher and higher. The multi-type tree (MTT) architecture has been proposed by the Joint Video Experts Team (JVET); therefore, it is necessary to determine not only the coding unit (CU) depth but also its split mode in H.266/Versatile Video Coding (H.266/VVC). Although H.266/VVC achieves significant coding performance gains over H.265/High Efficiency Video Coding (H.265/HEVC), it significantly increases coding complexity and coding time, with the most time-consuming part being the traversal of rate-distortion (RD) calculations over CUs. To solve these problems, this paper proposes an adaptive CU split decision method based on deep learning and multifeature fusion. Firstly, we develop a threshold-based texture classification model to distinguish complex from homogeneous CUs. Secondly, if a complex CU is an edge CU, a Convolutional Neural Network (CNN) structure based on multifeature fusion is utilized to classify it; otherwise, an adaptive CNN structure is used. Finally, the division of a CU is determined by the trained network and the CU's parameters. When complex CUs are split, the above two CNN schemes can successfully process the training samples and terminate the rate-distortion optimization (RDO) calculation for some CUs. The experimental results indicate that the proposed method reduces computational complexity and saves 39.39% of encoding time, thereby achieving fast encoding in H.266/VVC.
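The first stage above, a threshold-based texture test, can be sketched as follows. This is only the pre-classification gate, with an invented difference threshold; the two CNNs that handle complex CUs are not reproduced here.

```python
# Sketch of the pre-classification step only: a threshold on adjacent-pixel
# differences flags a CU as homogeneous (its RDO split search can be cut
# short) or complex (it would be routed to one of the two CNNs described
# in the abstract). The threshold value is an illustrative assumption.

def is_homogeneous(block, threshold=5):
    """True if no horizontally or vertically adjacent samples differ by
    more than `threshold` gray levels."""
    h, w = len(block), len(block[0])
    for y in range(h):
        for x in range(w):
            if x + 1 < w and abs(block[y][x] - block[y][x + 1]) > threshold:
                return False
            if y + 1 < h and abs(block[y][x] - block[y + 1][x]) > threshold:
                return False
    return True

print(is_homogeneous([[100, 101], [102, 103]]))  # True: gentle gradient
print(is_homogeneous([[0, 200], [0, 200]]))      # False: sharp vertical edge
```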


2020 ◽  
Vol 34 (07) ◽  
pp. 11580-11587
Author(s):  
Haojie Liu ◽  
Han Shen ◽  
Lichao Huang ◽  
Ming Lu ◽  
Tong Chen ◽  
...  

Traditional video compression technologies have been developed over decades in pursuit of higher coding efficiency. Efficient temporal information representation plays a key role in video coding. Thus, in this paper, we propose to exploit temporal correlation using both first-order optical flow and second-order flow prediction. We suggest a one-stage learning approach that encapsulates flow as quantized features from consecutive frames, which are then entropy coded with adaptive contexts conditioned on joint spatio-temporal priors to exploit second-order correlations. Joint priors are embedded in autoregressive spatial neighbors, co-located hyper elements, and temporal neighbors using a recurrent ConvLSTM. We evaluate our approach in the low-delay scenario against High Efficiency Video Coding (H.265/HEVC), H.264/AVC, and another learned video compression method, following common test settings. Our work offers state-of-the-art performance, with consistent gains across all popular test sequences.


Entropy ◽  
2019 ◽  
Vol 21 (2) ◽  
pp. 165 ◽  
Author(s):  
Xiantao Jiang ◽  
Tian Song ◽  
Daqi Zhu ◽  
Takafumi Katayama ◽  
Lu Wang

Perceptual video coding (PVC) can provide a lower bitrate at the same visual quality compared with traditional H.265/High Efficiency Video Coding (HEVC). In this work, a novel H.265/HEVC-compliant PVC framework is proposed based on a video saliency model. Firstly, an effective and efficient spatiotemporal saliency model is used to generate a video saliency map. Secondly, a perceptual coding scheme is developed based on the saliency map: a saliency-based quantization control algorithm is proposed to reduce the bitrate. Finally, simulation results demonstrate that the proposed perceptual coding scheme is superior in both objective and subjective tests, achieving up to a 9.46% bitrate reduction with negligible subjective and objective quality loss. The advantage of the proposed method is its high quality, well suited to high-definition video applications.
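The general idea of saliency-based quantization control can be sketched as a per-CTU QP adjustment: salient regions get a lower QP (finer quantization), non-salient regions a higher one. The linear mapping and the offset range below are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative saliency-driven quantization control: map a CTU's saliency
# in [0, 1] to a QP offset in [-max_offset, +max_offset]. The linear
# mapping and offset range are assumptions for the sketch.

def qp_offset(saliency, base_qp=32, max_offset=4):
    """Return the CTU-level QP: lower (better quality) where saliency is high."""
    offset = round(max_offset * (1.0 - 2.0 * saliency))  # +4 .. -4
    return min(51, max(0, base_qp + offset))             # clamp to HEVC QP range

print(qp_offset(1.0))  # 28: most salient region, finest quantization
print(qp_offset(0.0))  # 36: least salient region, coarsest quantization
print(qp_offset(0.5))  # 32: neutral, base QP unchanged
```

Spending fewer bits where the saliency map says viewers are unlikely to look is what yields the bitrate reduction at near-constant perceived quality.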


2019 ◽  
Vol 15 (12) ◽  
pp. 155014771989256
Author(s):  
Hong-rae Lee ◽  
Eun-bin Ahn ◽  
A-young Kim ◽  
Kwang-deok Seo

Recently, as demand for high-quality video and realistic media has increased, High Efficiency Video Coding (HEVC) has been standardized. However, HEVC incurs a heavy computational cost to achieve its high coding efficiency, which hinders fast and real-time processing. In particular, HEVC inter-coding is computationally heavy, and HEVC inter prediction uses reference pictures to improve coding efficiency. The reference pictures are typically signaled in two independent lists, ordered by display order, to be used for forward and backward prediction. If an event such as a scene change occurs in the input video, inter prediction performs unnecessary computations. Therefore, the reference picture list should be reconfigured to improve inter prediction performance and reduce computational complexity. To address this problem, this article proposes a method to reduce the computational complexity of HEVC encoding using information, such as scene changes, obtained from the input video through preprocessing. Furthermore, the reference picture lists are reconstructed by sorting the reference pictures by similarity to the currently coded picture using Angular Second Moment, Contrast, Entropy, and Correlation, which are image texture parameters computed from the input video. Simulations show that both encoding time and coding efficiency can be improved simultaneously by applying the proposed algorithms.
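The texture-similarity reordering can be sketched with standard gray-level co-occurrence matrix (GLCM) features. For brevity this sketch computes only ASM, Contrast, and Entropy (omitting Correlation), quantizes to 4 gray levels, and uses Euclidean feature distance; all of these are illustrative choices, not the article's exact parameters.

```python
import math

# Sketch: compute horizontal-neighbor GLCM features per picture and sort
# the reference list by feature distance to the current picture.
# 4-level quantization and Euclidean distance are illustrative choices.

def glcm_features(img, levels=4):
    """(ASM, contrast, entropy) of a 2-D gray image, horizontal neighbors."""
    q = [[p * levels // 256 for p in row] for row in img]  # quantize gray levels
    counts, total = {}, 0
    for row in q:
        for a, b in zip(row, row[1:]):                     # co-occurring pairs
            counts[(a, b)] = counts.get((a, b), 0) + 1
            total += 1
    asm = contrast = entropy = 0.0
    for (a, b), c in counts.items():
        p = c / total
        asm += p * p
        contrast += (a - b) ** 2 * p
        entropy -= p * math.log2(p)
    return (asm, contrast, entropy)

def reorder_references(current, refs):
    """Sort reference pictures by texture similarity to the current picture."""
    target = glcm_features(current)
    return sorted(refs, key=lambda ref: math.dist(target, glcm_features(ref)))
```

With the most texture-similar pictures placed first in the list, they receive the shortest reference indices and are tried first, which is where both the bit savings and the search-time reduction come from.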


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1405 ◽  
Author(s):  
Riccardo Peloso ◽  
Maurizio Capra ◽  
Luigi Sole ◽  
Massimo Ruo Roch ◽  
Guido Masera ◽  
...  

In recent years, the need for new, efficient video compression methods has grown rapidly as frame resolutions have increased dramatically. The Joint Collaborative Team on Video Coding (JCT-VC) effort produced the H.265/High Efficiency Video Coding (HEVC) standard in 2013, which represents the state of the art in video coding standards. Nevertheless, new algorithms and techniques to improve coding efficiency have since been proposed. One promising approach relies on embedding directional capabilities into the transform stage. Recently, the Steerable Discrete Cosine Transform (SDCT) has been proposed to exploit directional DCTs using basis functions with different orientation angles. The SDCT leads to a sparser representation, which translates into improved coding efficiency. Preliminary results show that the SDCT can be embedded into the HEVC standard, providing better compression ratios. This paper presents a hardware architecture for the SDCT that works at a frequency of 188 MHz, reaching a throughput of 3.00 GSample/s. In particular, this architecture supports 8K Ultra High Definition (UHD) (7680 × 4320) at a frame rate of 60 Hz, one of the highest resolutions supported by HEVC.
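A quick back-of-the-envelope check connects the figures quoted above: 3.00 GSample/s at 188 MHz works out to roughly 16 samples per clock cycle, and an 8K@60 luma plane needs about 1.99 Gsample/s, comfortably inside the architecture's throughput. (The per-cycle parallelism is inferred from the quoted numbers, not stated in the abstract.)

```python
# Sanity-check the quoted throughput figures for the SDCT architecture.
clock_hz = 188e6            # 188 MHz operating frequency
throughput = 3.00e9         # 3.00 GSample/s quoted throughput

samples_per_cycle = throughput / clock_hz
print(round(samples_per_cycle))          # 16: implied samples per clock cycle

pixels_8k60 = 7680 * 4320 * 60           # 8K UHD luma samples per second
print(pixels_8k60 / 1e9)                 # 1.990656 Gsample/s required
print(throughput >= pixels_8k60)         # True: 8K@60 fits within throughput
```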


2020 ◽  
Vol 10 (2) ◽  
pp. 496-501
Author(s):  
Wen Si ◽  
Qian Zhang ◽  
Zhengcheng Shi ◽  
Bin Wang ◽  
Tao Yan ◽  
...  

High Efficiency Video Coding (HEVC) is the next-generation video coding standard. In HEVC, 35 intra prediction modes are defined to improve coding efficiency, which results in huge computational complexity, as a large number of prediction modes and a flexible coding unit (CU) structure are adopted in CU coding. To reduce this computational burden, this paper presents a gradient-based candidate list clipping algorithm for intra mode prediction. Experimental results show that the proposed algorithm reduces the total encoding time by 29.16% with just a 1.34% BD-rate increase and a 0.07 dB BD-PSNR decrease.
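Gradient-based candidate clipping can be sketched as follows: estimate the block's dominant gradient direction, then keep Planar, DC, and only the angular modes near that direction instead of testing all 35. The mode-to-angle mapping and the angular window below are simplifying assumptions, not the paper's exact scheme, and the simple absolute-difference gradient only resolves directions in the first quadrant.

```python
import math

# Illustrative gradient-based clipping of the HEVC intra candidate list.
# Mode-angle mapping and window size are assumptions for this sketch.

def dominant_angle(block):
    """Average gradient direction (degrees) from simple pixel differences."""
    gx = gy = 0.0
    for y in range(1, len(block)):
        for x in range(1, len(block[0])):
            gx += abs(block[y][x] - block[y][x - 1])  # horizontal activity
            gy += abs(block[y][x] - block[y - 1][x])  # vertical activity
    return math.degrees(math.atan2(gy, gx))

def clip_candidates(block, window=15.0):
    """Keep Planar (0), DC (1), and angular modes near the edge direction."""
    angle = dominant_angle(block)
    keep = [0, 1]
    for mode in range(2, 35):
        # Map angular modes 2..34 uniformly onto 45..225 degrees (a sketch
        # approximation of the HEVC angular-mode geometry).
        mode_angle = 45.0 + (mode - 2) * 180.0 / 32.0
        if abs(mode_angle - angle) <= window:
            keep.append(mode)
    return keep

hblock = [[0] * 8] * 4 + [[255] * 8] * 4   # strong horizontal edge
print(clip_candidates(hblock))             # 7 candidates instead of 35
```

For the horizontal-edge block the dominant angle is 90°, so only the handful of angular modes around 90° survive the clipping, shrinking the RD search from 35 modes to 7.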

