Virtual View and Video Synthesis Without Camera Calibration or Depth Map

Author(s):  
Hai Xu ◽  
Yi Wan
2011 ◽  
Vol 33 (11) ◽  
pp. 2541-2546
Author(s):  
Qiu-wen Zhang ◽  
Ping An ◽  
Yan Zhang ◽  
Zhao-yang Zhang

2018 ◽  
Vol 25 (3) ◽  
pp. 417-421
Author(s):  
Ziqi Zheng ◽  
Junyan Huo ◽  
Bingbing Li ◽  
Hui Yuan

2011 ◽  
Vol 15 ◽  
pp. 1115-1119
Author(s):  
Linwei Zhu ◽  
Mei Yu ◽  
Gangyi Jiang ◽  
Xiangying Mao ◽  
Songyin Fu ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 906
Author(s):  
Hui-Yu Huang ◽  
Shao-Yu Huang

The recent emergence of three-dimensional (3D) movies and 3D television (TV) indicates an increasing interest in 3D content. Stereoscopic displays enhance the visual experience by allowing the world to be viewed in 3D. Virtual view synthesis is the key technology for presenting 3D content, and depth image-based rendering (DIBR) is a classic virtual view synthesis method. Given a texture image and its corresponding depth map, a virtual view can be generated with the DIBR technique: the depth values and camera parameters are used to project every pixel of the image into the 3D world coordinate system, and the resulting world coordinates are then reprojected into the virtual view by 3D warping. However, these projections produce cracks (holes). We therefore propose a new DIBR method for free-viewpoint video that addresses the hole problem caused by these projection processes. First, the depth map is preprocessed to reduce the number of holes without introducing large-scale geometric distortions; an improved 3D warping projection is then performed to create the virtual view. A median filter is applied to the hole regions in the virtual view, followed by 3D inverse warping blending to remove the holes. Next, brightness adjustment and adaptive image blending are performed. Finally, the synthesized virtual view is obtained using inpainting. Experimental results verify that the proposed method produces visually pleasing synthesized views, maintains a high peak signal-to-noise ratio (PSNR), and reduces execution time compared with state-of-the-art methods.
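As a rough illustration of the 3D warping step described in the abstract, the following Python sketch back-projects every reference-view pixel into 3D using its depth value and the reference camera intrinsics, reprojects the points into a virtual camera, and marks destination pixels that receive no sample as holes. The function name, the parameter layout (K_ref, K_virt, R, t), and the simple depth-ordered forward mapping are illustrative assumptions, not the exact formulation used in the paper.

import numpy as np

def warp_to_virtual_view(texture, depth, K_ref, K_virt, R, t):
    # texture : (H, W, 3) reference color image
    # depth   : (H, W) per-pixel depth (same units as t)
    # K_ref, K_virt : 3x3 intrinsics of the reference / virtual cameras
    # R, t    : rotation (3x3) and translation (3,) from reference to virtual camera
    H, W = depth.shape
    virtual = np.zeros_like(texture)
    holes = np.ones((H, W), dtype=bool)      # destination pixels nothing maps to

    # Pixel grid in homogeneous coordinates (3 x N).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])

    # Back-project each pixel along its viewing ray, scaled by its depth.
    points_ref = (np.linalg.inv(K_ref) @ pix) * depth.ravel()

    # Transform into the virtual camera frame and project (3D warping).
    points_virt = R @ points_ref + t.reshape(3, 1)
    proj = K_virt @ points_virt
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)

    # Keep points that land inside the image and in front of the camera.
    valid = (proj[2] > 0) & (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)

    # Write far points first so nearer points overwrite them (a crude z-buffer).
    order = np.argsort(-points_virt[2, valid])
    vi, ui = v2[valid][order], u2[valid][order]
    virtual[vi, ui] = texture.reshape(-1, 3)[valid][order]
    holes[vi, ui] = False
    return virtual, holes

In the pipeline summarized in the abstract, the returned hole mask would then be treated by the subsequent stages (median filtering, 3D inverse warping blending, brightness adjustment, adaptive blending, and inpainting); those stages are not reproduced in this sketch.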


2019 ◽  
Vol 31 (8) ◽  
pp. 1278
Author(s):  
Haitao Liang ◽  
Xiaodong Chen ◽  
Huaiyuan Xu ◽  
Siyu Ren ◽  
Yi Wang ◽  
...  

Author(s):  
Luis F. R. Lucas ◽  
Nuno M. M. Rodrigues ◽  
Carla L. Pagliari ◽  
Eduardo A. B. da Silva ◽  
Sergio M. M. de Faria
