Artifact Handling Based on Depth Image for View Synthesis

2019 ◽  
Vol 9 (9) ◽  
pp. 1834 ◽  
Author(s):  
Xiaodong Chen ◽  
Haitao Liang ◽  
Huaiyuan Xu ◽  
Siyu Ren ◽  
Huaiyu Cai ◽  
...  

Depth image based rendering (DIBR) is a popular technology for 3D video and free viewpoint video (FVV) synthesis: numerous virtual views can be generated from a single reference view and its depth image. However, artifacts produced in the DIBR process reduce the visual quality of the virtual view, and their diversity makes handling them effectively a challenging task. In this paper, an artifact handling method based on the depth image is proposed. The reference image and its depth image are extended to fill the holes that belong to out-of-field regions. A depth image preprocessing method projects ghosts to their correct place. The 3D warping process is optimized by an adaptive one-to-four method to deal with cracks and pixel overlapping. For disocclusions, the depth and background terms of the filling priority are computed from depth information. The search for the best matching patch is performed simultaneously in the reference image and the virtual image, and an adaptive patch size is used in all hole-filling processes. Experimental results demonstrate the effectiveness of the proposed method, which outperforms previous methods in both subjective and objective evaluation.
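The 3D warping step described above can be sketched as a simple forward warp. This is a minimal illustration, not the paper's adaptive one-to-four method: it assumes rectified cameras with a purely horizontal baseline (disparity = baseline × focal / depth) and resolves pixel overlapping with a z-buffer; unmapped targets remain as holes (cracks and disocclusions) for later filling.

```python
import numpy as np

def forward_warp(ref_img, depth, baseline, focal):
    """Warp a reference view to a virtual view via horizontal DIBR.

    Assumes rectified cameras: disparity = baseline * focal / depth.
    A z-buffer keeps the nearest pixel when several pixels map to the
    same target (pixel overlapping); unfilled targets remain holes.
    """
    h, w = depth.shape
    virt = np.zeros_like(ref_img)
    hole = np.ones((h, w), dtype=bool)   # True where no pixel landed
    zbuf = np.full((h, w), np.inf)
    disparity = baseline * focal / depth
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]
                virt[y, xv] = ref_img[y, x]
                hole[y, xv] = False
    return virt, hole
```

With a constant-depth scene the whole image simply shifts by the disparity, and a band of out-of-field holes appears on one side of the virtual view.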

2020 ◽  
Vol 10 (5) ◽  
pp. 1562 ◽  
Author(s):  
Xiaodong Chen ◽  
Haitao Liang ◽  
Huaiyuan Xu ◽  
Siyu Ren ◽  
Huaiyu Cai ◽  
...  

Depth image-based rendering (DIBR) plays an important role in 3D video and free viewpoint video synthesis. However, artifacts might occur in the synthesized view due to viewpoint changes and stereo depth estimation errors. Holes usually arise from out-of-field regions and disocclusions, and filling them appropriately is a challenge. In this paper, a virtual view synthesis approach based on asymmetric bidirectional DIBR is proposed. A depth image preprocessing method is applied to detect and correct unreliable depth values around the foreground edges. For the primary view, all pixels are warped to the virtual view by the modified DIBR method. For the auxiliary view, only selected regions are warped, namely those containing content that is not visible in the primary view. This approach reduces the computational cost and prevents irrelevant foreground pixels from being warped into the holes. During the merging process, a color correction approach is introduced to make the result appear more natural. In addition, a depth-guided inpainting method is proposed to handle the remaining holes in the merged image. Experimental results show that, compared with bidirectional DIBR, the proposed rendering method reduces rendering time by about 37% and reduces holes by 97%. In terms of visual quality and objective evaluation, our approach performs better than previous methods.


2016 ◽  
Vol 9 (5) ◽  
pp. 145-164 ◽  
Author(s):  
Ran Liu ◽  
Zekun Deng ◽  
Lin Yi ◽  
Zhenwei Huang ◽  
Donghua Cao ◽  
...  

Author(s):  
Mehrdad Panahpour Tehrani ◽  
Tomoyuki Tezuka ◽  
Kazuyoshi Suzuki ◽  
Keita Takahashi ◽  
Toshiaki Fujii

A free-viewpoint image can be synthesized using color and depth maps of reference viewpoints, via depth-image-based rendering (DIBR). In this process, three-dimensional (3D) warping is generally used. A 3D warped image contains disocclusion holes, whose missing pixels correspond to regions occluded in the reference images, and non-disocclusion holes due to the limited sampling density of the reference images. The non-disocclusion holes lie among scattered pixels of the same region or object, and they grow larger as the physical distance between the reference viewpoints and the free viewpoint increases. Filling these holes has a crucial impact on the quality of the free-viewpoint image. In this paper, we focus on free-viewpoint image synthesis that can precisely fill the non-disocclusion holes caused by limited sampling density, using superpixel segmentation. In this approach, we propose two criteria for segmenting the depth and color data of each reference viewpoint. By these criteria, we can detect which neighboring pixels should be connected or kept isolated in each reference image before warping. Polygons enclosed by the connected pixels, i.e., superpixels, are inpainted by k-means interpolation. Our superpixel approach has high accuracy because we use both color and depth data to detect superpixels at the location of the reference viewpoint. Therefore, once a reference image consisting of superpixels is 3D warped to a virtual viewpoint, the non-disocclusion holes are significantly reduced. Experimental results verify the advantage of our approach and demonstrate the high quality of the synthesized image when the virtual viewpoint is physically far from the reference viewpoints.
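The connectivity decision described above can be sketched as a joint test on color and depth differences between neighbors. This is a minimal illustration of the idea of two segmentation criteria, shown for horizontal neighbors only; the thresholds `tc` and `td` are hypothetical parameters, not values from the paper.

```python
import numpy as np

def horizontal_links(color, depth, tc, td):
    """Decide which horizontally adjacent pixels belong to the same
    superpixel: neighbors stay connected only when both their color
    and depth differences are small (illustrative thresholds tc, td).

    Returns a boolean array of shape (h, w-1); True at (y, x) links
    pixel (y, x) to pixel (y, x+1).
    """
    dc = np.abs(np.diff(color, axis=1))  # color difference to right neighbor
    dd = np.abs(np.diff(depth, axis=1))  # depth difference to right neighbor
    return (dc < tc) & (dd < td)
```

Pixels on opposite sides of a depth or color discontinuity are kept isolated, so a superpixel never straddles an object boundary and interpolation across it stays within one surface.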


2017 ◽  
Vol 24 (3) ◽  
pp. 329-333 ◽  
Author(s):  
Jea-Hyung Cho ◽  
Wonseok Song ◽  
Hyuk Choi ◽  
Taejeong Kim
