Low-Complexity Texture Video Coding Based on Motion Homogeneity for 3D-HEVC

2019 ◽  
Vol 2019 ◽  
pp. 1-13
Author(s):  
Qiuwen Zhang ◽  
Shuaichao Wei ◽  
Rijian Su

The three-dimensional extension of High Efficiency Video Coding (3D-HEVC) is an emerging international video compression standard for multiview video system applications. Similar to HEVC, a computationally expensive mode decision is performed over all depth levels and prediction modes to select the least rate-distortion (RD) cost for each coding unit (CU). In addition, new tools and intercomponent prediction techniques have been introduced in 3D-HEVC to improve the compression efficiency of multiview texture videos. These techniques achieve the highest texture video coding efficiency but involve extremely complex procedures, which limits the use of 3D-HEVC encoders in practical applications. In this paper, a fast texture video coding method based on motion homogeneity is proposed to reduce 3D-HEVC computational complexity. Because the multiview texture videos represent the same scene at the same time instant, and the optimal CU depth level and prediction modes are highly content dependent, it is inefficient to exhaustively test all depth levels and prediction modes in 3D-HEVC. A motion homogeneity model of a CU is first built from the motion vectors and prediction modes of the corresponding CUs. Based on this model, we present three efficient texture video coding approaches: fast depth level range determination, early SKIP/Merge mode decision, and adaptive motion search range adjustment. Experimental results demonstrate that the proposed overall method saves 56.6% of encoding time with only trivial coding efficiency degradation.
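To illustrate the idea, the sketch below shows one way a motion homogeneity measure derived from the motion vectors of corresponding CUs could drive the three shortcuts named above. The thresholds, the variance-based homogeneity measure, and the helper names are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch of motion-homogeneity-driven encoder shortcuts (assumptions throughout).
import numpy as np

def motion_homogeneity(mvs):
    """Sum of per-component variances of the motion vectors of corresponding CUs;
    a low value is treated as a homogeneous motion region (assumption)."""
    mvs = np.asarray(mvs, dtype=float)
    return float(mvs.var(axis=0).sum())

def coding_shortcuts(mvs, all_modes_skip_like, t_low=0.5, t_high=4.0):
    h = motion_homogeneity(mvs)
    if h < t_low and all_modes_skip_like:
        # Homogeneous, static-like region: early SKIP/Merge decision,
        # shallow depth levels only, small motion search window.
        return {"depth_range": (0, 1), "early_skip": True, "search_range": 8}
    if h > t_high:
        # Complex motion: allow deeper CU splitting and a wider search range.
        return {"depth_range": (1, 3), "early_skip": False, "search_range": 64}
    # Moderate motion: default behaviour.
    return {"depth_range": (0, 3), "early_skip": False, "search_range": 32}

# Example: corresponding CUs with nearly identical motion vectors.
print(coding_shortcuts([(1, 0), (1, 0), (1, 1)], all_modes_skip_like=True))
```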

2020 ◽  
Vol 34 (07) ◽  
pp. 11580-11587
Author(s):  
Haojie Liu ◽  
Han Shen ◽  
Lichao Huang ◽  
Ming Lu ◽  
Tong Chen ◽  
...  

Traditional video compression technologies have been developed over decades in pursuit of higher coding efficiency, and efficient temporal information representation plays a key role in video coding. In this paper, we therefore propose to exploit temporal correlation using both first-order optical flow and second-order flow prediction. We suggest a one-stage learning approach that encapsulates flow as quantized features extracted from consecutive frames, which are then entropy coded with adaptive contexts conditioned on joint spatial-temporal priors to exploit second-order correlations. The joint priors are embedded in autoregressive spatial neighbors, co-located hyper elements, and temporal neighbors using a ConvLSTM recurrently. We evaluate our approach in the low-delay scenario against High Efficiency Video Coding (H.265/HEVC), H.264/AVC, and another learned video compression method, following the common test settings. Our method offers state-of-the-art performance, with consistent gains across all popular test sequences.
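As a minimal illustration of why second-order temporal statistics carry extra information, the sketch below extrapolates the next flow field from the two previous ones under a constant-acceleration assumption; the paper's learned one-stage approach is far richer, and this is only a conceptual stand-in.

```python
# Hedged sketch of second-order flow extrapolation (illustrative assumption only).
import numpy as np

def predict_flow(flow_prev2, flow_prev1):
    """Linear extrapolation: f_t ~ 2*f_{t-1} - f_{t-2} (constant-acceleration assumption)."""
    return 2.0 * flow_prev1 - flow_prev2

h, w = 4, 4
f_tm2 = np.ones((h, w, 2)) * 1.0   # flow at t-2
f_tm1 = np.ones((h, w, 2)) * 1.5   # flow at t-1 (accelerating motion)
f_pred = predict_flow(f_tm2, f_tm1)
# Only the residual between the true flow and f_pred would then need to be
# represented, which is what conditioning on temporal priors exploits.
print(f_pred[0, 0])                # -> [2. 2.]
```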


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1405 ◽  
Author(s):  
Riccardo Peloso ◽  
Maurizio Capra ◽  
Luigi Sole ◽  
Massimo Ruo Roch ◽  
Guido Masera ◽  
...  

In recent years, the need for new efficient video compression methods has grown rapidly as frame resolutions have increased dramatically. The Joint Collaborative Team on Video Coding (JCT-VC) produced the H.265/High Efficiency Video Coding (HEVC) standard in 2013, which represents the state of the art in video coding standards. Nevertheless, new algorithms and techniques to improve coding efficiency have since been proposed. One promising approach relies on embedding directional capabilities into the transform stage. Recently, the Steerable Discrete Cosine Transform (SDCT) has been proposed to exploit directional information using DCT bases with different orientation angles. The SDCT leads to a sparser representation, which translates into improved coding efficiency. Preliminary results show that the SDCT can be embedded into the HEVC standard, providing better compression ratios. This paper presents a hardware architecture for the SDCT that works at a frequency of 188 MHz, reaching a throughput of 3.00 GSample/s. In particular, this architecture supports 8K Ultra High Definition (UHD) (7680 × 4320) at a frame rate of 60 Hz, which is one of the highest resolutions supported by HEVC.
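The throughput figures quoted above can be sanity-checked with a few lines of arithmetic; the calculation below counts luma samples only (an assumption, since chroma planes would add to the total).

```python
# Quick check of the throughput figures quoted in the abstract.
width, height, fps = 7680, 4320, 60          # 8K UHD at 60 Hz
required = width * height * fps              # luma samples per second (assumption: luma only)
provided = 3.00e9                            # 3.00 GSample/s from the paper
clock = 188e6                                # 188 MHz clock frequency
print(required / 1e9)          # ~1.99 GSample/s needed for 8K at 60 Hz
print(provided / clock)        # ~16 samples processed per clock cycle
print(provided >= required)    # True: the stated throughput covers 8K60
```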


Author(s):  
Umesh Kaware ◽  
Sanjay Gulhane

The emerging High Efficiency Video Coding (HEVC) standard is a next-generation video coding standard. HEVC aims to provide improved compression performance compared with all previous video coding standards, and a number of new techniques are used to improve coding efficiency. The higher compression efficiency is obtained at the cost of an increase in computational load. HEVC provides 35 intra prediction modes to improve compression efficiency, and the best mode is selected by a Rate-Distortion Optimization (RDO) process. This achieves a significant improvement in coding efficiency compared with previous standards, but it causes high encoding complexity. This paper discusses various fast mode decision algorithms for intra prediction in HEVC.
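The fast algorithms surveyed here generally shrink the set of modes that reach full RDO. The sketch below shows the typical two-stage pattern: a cheap rough cost ranks all 35 modes and only a short candidate list is evaluated with the full RD cost. Both cost functions are placeholders, not the HM implementation.

```python
# Hedged sketch of two-stage intra mode selection (placeholder costs, not HM code).
def rough_cost(mode, block):
    # Placeholder for a Hadamard/SATD-based estimate.
    return abs(mode - block["dominant_direction"])

def full_rd_cost(mode, block):
    # Placeholder for the full rate-distortion cost.
    return rough_cost(mode, block) + 0.1 * mode

def intra_mode_decision(block, num_candidates=3):
    modes = range(35)                                 # planar, DC, and 33 angular modes
    ranked = sorted(modes, key=lambda m: rough_cost(m, block))
    candidates = ranked[:num_candidates]              # fast algorithms shrink this list further
    return min(candidates, key=lambda m: full_rd_cost(m, block))

print(intra_mode_decision({"dominant_direction": 26}))   # -> 26 (vertical-like block)
```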


2018 ◽  
Vol 7 (2.4) ◽  
pp. 93
Author(s):  
Parmeshwar Kokare ◽  
Dr MasoodhuBanu. N.M

High Efficiency Video Coding (HEVC) is the latest video compression standard. HEVC achieves roughly 50% higher coding efficiency than the preceding standard, Advanced Video Coding (AVC). HEVC gains this by introducing many advanced techniques, such as the adaptive block partitioning scheme known as the quadtree, tiles for parallelization, an improved entropy coder called Context-Adaptive Binary Arithmetic Coding (CABAC), 35 intra prediction modes (IPMs), etc. All these techniques have increased the complexity of the encoding process, which makes real-time use of HEVC for video transmission inconvenient. The main objective of this paper is to review recent developments in HEVC, particularly the use of a region of interest (ROI) to reduce encoding time. The different approaches to identifying the ROI are summarized and a new method is explained.
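One common way an ROI is used to cut encoding time is to restrict the partitioning search (and optionally raise the QP) outside the region of interest. The sketch below illustrates that idea; the mask source, depth limits, and QP offset are illustrative assumptions rather than any specific method from the review.

```python
# Hedged sketch of ROI-driven encoder settings per CTU (illustrative assumptions).
def ctu_search_settings(ctu_index, roi_mask):
    if roi_mask[ctu_index]:
        # Inside the ROI: full quadtree depth search for best quality.
        return {"max_depth": 3, "qp_offset": 0}
    # Background: shallow partitioning search and a slightly higher QP.
    return {"max_depth": 1, "qp_offset": 4}

roi_mask = [False, True, True, False]          # e.g. from a face or saliency detector
print([ctu_search_settings(i, roi_mask) for i in range(4)])
```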


Author(s):  
Mohammad Barr

Background: High Efficiency Video Coding (HEVC) is a recent video compression standard. It provides better compression performance than its predecessor, H.264/AVC. However, the computational complexity of the HEVC encoder is much higher than that of the H.264/AVC encoder. This makes HEVC less attractive for real-time applications and for devices with limited resources (e.g., low memory, low processing power). The increased computational complexity of HEVC is partly due to its variable-size Transform Unit (TU) selection algorithm, which successively performs transform operations using transform units of different sizes before selecting the optimal TU size. In this paper, a fast transform unit size selection method is proposed to reduce the computational complexity of an HEVC encoder. Methods: Bayesian decision theory is used to predict the size of the TU during encoding. This is done by exploiting the TU size decisions at a previous temporal level and by modeling the relationship between the TU size and the Rate-Distortion (RD) cost values. Results: Simulation results show that the proposed method reduces the encoding time of the latest HEVC encoder by 16.21% on average without any noticeable compromise in compression efficiency. The algorithm also reduces the number of transform operations by 44.98% on average. Conclusion: In this paper, a novel fast TU size selection scheme for HEVC is proposed. The proposed technique outperforms both the latest HEVC reference software, HM 16.0, and other state-of-the-art techniques in terms of time complexity, while its compression performance is comparable to that of HM 16.0.
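A Bayes-rule decision of the kind described here can be sketched as follows: priors come from TU decisions at a previous temporal level, likelihoods model the RD cost given each class, and the split decision follows a MAP (or minimum-risk) rule. All distributions and numbers below are illustrative assumptions, not the paper's trained model.

```python
# Hedged sketch of a Bayes-style TU split decision (illustrative distributions).
import math

def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def decide_tu_split(rd_cost, prior_split, lik_split=(120.0, 30.0), lik_keep=(60.0, 20.0)):
    """Return True if the large TU should be split into smaller TUs."""
    p_split = prior_split * gaussian(rd_cost, *lik_split)          # P(split) * p(cost | split)
    p_keep = (1.0 - prior_split) * gaussian(rd_cost, *lik_keep)    # P(keep)  * p(cost | keep)
    return p_split > p_keep     # MAP rule; a loss matrix could bias this decision

# Prior taken from the previous temporal level (assumption): 30% of
# co-located TUs were split. A high RD cost still favours splitting.
print(decide_tu_split(rd_cost=140.0, prior_split=0.3))   # -> True
print(decide_tu_split(rd_cost=55.0,  prior_split=0.3))   # -> False
```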


2020 ◽  
pp. short57-1-short57-8
Author(s):  
Ban Doan ◽  
Andrey Tropchenko

In order to achieve greater coding efficiency than previous video coding standards, the High Efficiency Video Coding (HEVC) standard uses various advanced coding techniques, such as flexible partitioning and a large number of intra prediction modes. However, these techniques lead to much greater complexity, which keeps HEVC from real-time applications. To address this problem, a fast intra mode decision algorithm is proposed in this paper that uses a block's textural properties to determine the partition depth range and to decide whether to split the coding unit or skip smaller sizes. In addition, the number of candidate modes for the rough mode decision process is reduced depending on the block's texture. Experimental results on the test sequences recommended by the JCT-VC show that the proposed algorithm saves an average of 44% of encoder time with a slight loss in performance compared to the reference software HM-16.20.
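The sketch below shows one simple way a texture measure could steer both the depth range and the number of rough-mode candidates: smooth blocks stop splitting early and test few modes, textured blocks search deeper with more modes. The variance measure and thresholds are illustrative assumptions, not the paper's trained values.

```python
# Hedged sketch of texture-driven early decisions for intra coding (assumed thresholds).
import numpy as np

def intra_fast_settings(cu_pixels, t_smooth=25.0, t_complex=400.0):
    var = float(np.var(np.asarray(cu_pixels, dtype=float)))   # simple texture measure
    if var < t_smooth:
        # Smooth block: do not split further, keep few rough-mode candidates.
        return {"depth_range": (0, 1), "skip_split": True,  "rmd_candidates": 3}
    if var > t_complex:
        # Highly textured block: search deeper depths with more candidates.
        return {"depth_range": (2, 3), "skip_split": False, "rmd_candidates": 8}
    return {"depth_range": (0, 3), "skip_split": False, "rmd_candidates": 5}

flat_cu = np.full((8, 8), 128)                       # smooth background block
print(intra_fast_settings(flat_cu))                  # shallow depths, 3 candidates
```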


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Qiuwen Zhang ◽  
Nana Li ◽  
Qinggang Wu

The emerging international standard of high efficiency video coding based 3D video coding (3D-HEVC) is a successor to multiview video coding (MVC). In 3D-HEVC depth intra coding, both the depth modeling modes (DMMs) and the HEVC intra prediction modes are evaluated to select the best coding mode for each coding unit (CU). This technique achieves the highest possible coding efficiency, but it results in extremely long encoding time, which hinders the practical application of 3D-HEVC. In this paper, a fast mode decision algorithm based on the correlation between the texture video and the depth map is proposed to reduce the computational complexity of 3D-HEVC depth intra coding. Since the texture video and its associated depth map represent the same scene, there is a high correlation between the prediction modes of the texture video and the depth map. Therefore, specific depth intra prediction modes that are rarely used in the related texture CU can be skipped. Experimental results show that the proposed algorithm significantly reduces the computational complexity of 3D-HEVC depth intra coding while maintaining coding efficiency.
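The sketch below illustrates the texture/depth correlation in its simplest form: when the co-located texture CU was coded with a non-directional mode, the depth CU skips the expensive DMMs and most angular modes. The grouping rule and the angular window are illustrative assumptions, not the paper's exact mapping.

```python
# Hedged sketch of skipping depth intra modes based on the co-located texture CU.
PLANAR, DC = 0, 1

def depth_intra_candidates(texture_mode):
    if texture_mode in (PLANAR, DC):
        # Smooth texture: no sharp depth edge expected, skip the DMMs.
        return {"modes": [PLANAR, DC], "test_dmm": False}
    # Angular texture mode: keep nearby angular directions and test DMMs,
    # which model sharp object boundaries in the depth map.
    nearby = [m for m in range(2, 35) if abs(m - texture_mode) <= 2]
    return {"modes": [PLANAR, DC] + nearby, "test_dmm": True}

print(depth_intra_candidates(DC))     # smooth region: two modes, no DMM
print(depth_intra_candidates(26))     # vertical texture edge: small angular set plus DMM
```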


Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1243
Author(s):  
Seongwon Jung ◽  
Dongsan Jun

Versatile Video Coding (VVC) is the most recent video coding standard, developed by the Joint Video Experts Team (JVET); it achieves a bit-rate reduction of about 50% at perceptually similar quality compared to the previous standard, High Efficiency Video Coding (HEVC). Although VVC delivers significant coding gains, it leads to tremendous computational complexity in the VVC encoder. In particular, VVC newly adopted an affine motion estimation (AME) method to overcome the limitations of the translational motion model, at the expense of higher encoding complexity. In this paper, we propose a context-based inter mode decision method for fast affine prediction that determines whether AME is performed during rate-distortion (RD) optimization for the optimal CU mode decision. Experimental results show that the proposed method reduces the encoding complexity of AME by up to 33% with unnoticeable coding loss compared to the VVC Test Model (VTM).
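A context-based gate on AME can be sketched as below: the affine search runs only when the surrounding context suggests non-translational motion, e.g. neighbouring CUs used affine modes or the best translational candidate is still costly. The chosen features and threshold are illustrative assumptions, not the proposed method's exact context model.

```python
# Hedged sketch of a context-based decision on whether to run affine ME (assumed features).
def should_run_ame(neighbor_affine_flags, best_translational_rd, skip_selected,
                   rd_threshold=1000.0):
    if skip_selected:
        return False                       # SKIP already good: no affine search needed
    if any(neighbor_affine_flags):
        return True                        # affine motion is likely in this area
    return best_translational_rd > rd_threshold   # translational model fits poorly

print(should_run_ame([False, False, False], best_translational_rd=300.0,
                     skip_selected=False))     # -> False: AME skipped
print(should_run_ame([True, False, False], best_translational_rd=300.0,
                     skip_selected=False))     # -> True: run AME
```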

