Spatio-Temporally Consistent Novel View Synthesis Algorithm From Video-Plus-Depth Sequences for Autostereoscopic Displays

2011 ◽ Vol 57 (2) ◽ pp. 523-532
Author(s): Chia-Ming Cheng, Shu-Jyuan Lin, Shang-Hong Lai
2013 ◽ Vol 284-287 ◽ pp. 3230-3234
Author(s): Thomas Schumann, Herbert Krauß, Yeong Kang Lai, Yu Fan Lai

With advances in technology, 3D video has become practical and attractive. However, many pre-recorded 2D videos and images still need to be converted to 3D. This paper therefore presents a high-quality view synthesis algorithm and architecture for 2D-to-3D video conversion. During view synthesis, the left-eye and right-eye views are synthesized from the intermediate view together with its monocular depth information. The proposed view synthesis algorithm consists of two parts: 3D image warping and inpainting (hole filling). 3D image warping projects the 2D camera image plane into a 3D coordinate system; however, the integer grid points of the reference image are warped to irregularly spaced points in the virtual view, which leads to occlusion (hole) problems. Inpainting is therefore needed to repair the virtual images. The proposed algorithm shows an improved PSNR gain of 0.2 to 1.5 dB. We adopt hardware/software co-design to realize the proposed view synthesis algorithm: the image inpainting is implemented on an FPGA device and the remainder of the algorithm in software.
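The abstract does not include an implementation, but the two-stage pipeline it describes (depth-based warping of a reference view followed by hole filling) can be sketched as follows. This is a minimal illustration in Python/NumPy, assuming a simple horizontal-disparity model in which pixel shift is proportional to the 8-bit depth value; the function names, the disparity scaling, and the row-wise nearest-neighbor hole filling are illustrative assumptions, not the authors' actual algorithm or FPGA architecture.

```python
import numpy as np

def warp_view(image, depth, max_disparity=16, direction=+1):
    """Forward-warp a reference view horizontally using per-pixel depth.

    direction = +1 synthesizes a right-eye view, -1 a left-eye view.
    Pixels left unfilled in the virtual view are disocclusion holes.
    """
    h, w, _ = image.shape
    # Illustrative disparity model: nearer pixels (larger 8-bit depth
    # value) shift farther in the virtual view.
    disparity = (depth.astype(np.float32) / 255.0) * max_disparity
    virtual = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)

    for y in range(h):
        for x in range(w):
            xv = int(round(x + direction * disparity[y, x]))
            if 0 <= xv < w:
                virtual[y, xv] = image[y, x]
                filled[y, xv] = True
    return virtual, filled

def fill_holes(virtual, filled):
    """Very simple hole filling: propagate the nearest filled pixel
    from the left along each row (background extrapolation)."""
    out = virtual.copy()
    h, w, _ = out.shape
    for y in range(h):
        last = None
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            elif last is not None:
                out[y, x] = last
    return out

# Usage sketch (hypothetical inputs):
#   image: HxWx3 uint8 color frame, depth: HxW uint8 depth map
#   right, mask = warp_view(image, depth, direction=+1)
#   right = fill_holes(right, mask)
```

A production pipeline such as the one described in the paper would replace the row-wise nearest-neighbor filling with a more careful inpainting method to avoid streaking artifacts; the per-pixel inner loops of these two stages are also the natural candidates for the FPGA partitioning mentioned in the abstract.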


Author(s): Yu-Hsiang Huang, Tzu-Kuei Huang, Yan-Hsiang Huang, Wei-Chao Chen, Yung-Yu Chuang

3D Research ◽ 2012 ◽ Vol 3 (1)
Author(s): Lam C. Tran, Can Bal, Christopher J. Pal, Truong Q. Nguyen
