Configurable Complexity-Bounded Motion Estimation for Real-Time Video Encoding

Author(s):  
Zhi Yang ◽  
Jiajun Bu ◽  
Chun Chen ◽  
Linjian Mo
2011 ◽  
Vol 383-390 ◽  
pp. 5028-5033

Author(s):  
Xue Mei Xu ◽  
Qin Mo ◽  
Lan Ni ◽  
Qiao Yun Guo ◽  
An Li

In a video encoding system, motion estimation plays an important role at the front end of the encoder: it efficiently eliminates inter-frame redundancy and improves coding efficiency. However, traditional motion estimation algorithms cannot be used in real-time applications such as video monitoring because of their computational complexity. To improve real-time performance, this paper proposes an improved motion estimation algorithm. Its essential ideas are early termination rules, prediction of the initial search point, and determination of the motion type; furthermore, the algorithm adopts different search patterns for different levels of motion activity. Experimental results show that the improved algorithm reduces computation time significantly while maintaining image quality, satisfying the real-time requirements of a monitoring system.
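The combination of ideas described above can be sketched in a few lines. The following Python fragment is an illustrative block-matching search with a zero-motion predictor and an early-termination threshold; the threshold value, the 16×16 block size, and the raster scan order are assumptions for the sketch, not the paper's exact rules.

```python
import numpy as np

def sad(block_a, block_b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def estimate_motion(cur, ref, bx, by, bs=16, search=7, t_early=512):
    """Find a motion vector for the bs x bs block of `cur` at (bx, by).

    Starts from the (0, 0) predictor and stops as soon as the SAD
    drops below t_early (the "good enough" early-termination rule).
    """
    cur_block = cur[by:by + bs, bx:bx + bs]
    best_mv = (0, 0)
    best_sad = sad(cur_block, ref[by:by + bs, bx:bx + bs])
    if best_sad < t_early:          # early termination: block is nearly static
        return best_mv, best_sad
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue            # candidate falls outside the reference frame
            cost = sad(cur_block, ref[y:y + bs, x:x + bs])
            if cost < best_sad:
                best_mv, best_sad = (dx, dy), cost
                if best_sad < t_early:   # good enough: stop searching
                    return best_mv, best_sad
    return best_mv, best_sad
```

A full implementation would replace the exhaustive raster scan with the adaptive search patterns the paper selects per motion type; the early exit and predictor logic are where most of the computation savings come from.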


Author(s):  
T. Yoshino ◽  
S. Foster ◽  
Jasopin Lee ◽  
L. Cheng ◽  
V. Ponukumati ◽  
...  

1996 ◽  
Vol 31 (11) ◽  
pp. 1733-1741 ◽  
Author(s):  
K. Suguri ◽  
T. Minami ◽  
H. Matsuda ◽  
R. Kusaba ◽  
T. Kondo ◽  
...  

Author(s):  
EL Ansari Abdessamad ◽  
Nejmeddine Bahri ◽  
Anass Mansouri ◽  
Nouri Masmoudi ◽  
Ahaitouf Ali

<span lang="EN-US">In this paper, we propose a new parallel hardware architecture for the mode decision algorithm based on the Sum of Absolute Differences (SAD), which is used to compute motion estimation, the most time-critical stage of the recent video coding standard HEVC. This standard introduces new, large, variable block sizes for motion estimation, so the SAD must execute in far less time to achieve real-time processing, even for ultra-high-resolution sequences. The proposed accelerator executes the SAD algorithm in parallel for all prediction units (PUs) and coding units (CUs) regardless of their sizes, which yields a large performance improvement, since all block sizes (all PUs within each CU) are supported and processed at the same time. A Xilinx Artix-7 (Zynq-7000) FPGA is used for prototyping and synthesis of the proposed accelerator. The mode decision scheme for motion estimation is implemented with 32K LUTs, 50K registers, and 108 Kb of BRAM. The implementation results show that our hardware architecture can process 4K (3840 × 2160) video at 30 frames per second in real time at 115.15 MHz.</span>
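The reuse that makes this parallelism cheap can be illustrated in software: compute the SAD of every 4×4 sub-block once, then sum those partial SADs into each PU partition of the CU. The sketch below is an illustrative model of that data flow, not the accelerator's design; the restriction to the four symmetric HEVC partitions (2N×2N, 2N×N, N×2N, N×N) is an assumption for brevity.

```python
import numpy as np

def sad_4x4_grid(cur_cu, ref_cu):
    # SAD of every 4x4 sub-block of the CU. A hardware adder tree
    # would produce all of these in parallel and reuse them below.
    h, w = cur_cu.shape
    diff = np.abs(cur_cu.astype(np.int32) - ref_cu.astype(np.int32))
    return diff.reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

def pu_sads(grid):
    # Combine the shared 4x4 SADs into one SAD per PU for the four
    # symmetric partitions of the CU; no pixel is re-read per mode.
    n = grid.shape[0] // 2
    return {
        "2Nx2N": [int(grid.sum())],
        "2NxN":  [int(grid[:n].sum()), int(grid[n:].sum())],
        "Nx2N":  [int(grid[:, :n].sum()), int(grid[:, n:].sum())],
        "NxN":   [int(grid[:n, :n].sum()), int(grid[:n, n:].sum()),
                  int(grid[n:, :n].sum()), int(grid[n:, n:].sum())],
    }
```

Because every PU SAD is a sum over the same shared 4×4 grid, supporting all partition modes at once costs only extra adders, not extra pixel reads, which is the essence of the reported speed-up.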


Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality.

The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource-constrained hardware.

