View synthesis quality mapping for depth-based super resolution on mixed resolution 3D video

Author(s): Michal Joachimiak, Miska M. Hannuksela, Moncef Gabbouj
2021, Vol 30, pp. 1072-1085
Author(s): Shao-Ping Lu, Sen-Mao Li, Rong Wang, Gauthier Lafruit, Ming-Ming Cheng, ...

Author(s): Wei Gao, Linjie Zhou, Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computations required, which can prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using a compact-resolution (CR) representation, super-resolution (SR) techniques, and lightweight neural networks. The proposed architecture consists of three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality, full-resolution views. All three networks are trained jointly with an integrated loss combining the CR, VS, and SR objectives. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient lightweight strategy that prunes filters to simplify the networks and accelerate inference. Experimental results demonstrate that the proposed method greatly reduces processing time and is considerably more computationally efficient while maintaining competitive image quality.
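As a rough illustration of this cascaded design, the PyTorch sketch below wires a compact-resolution network, a view synthesis network, and a super-resolution network together and trains them with a single integrated loss. The layer sizes, the bilinear down/upsampling, the L1 loss terms, and the dummy data are illustrative assumptions, not the architecture or objectives used in the article.

```python
# Minimal sketch of a jointly trained CR -> VS -> SR cascade (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class CRNet(nn.Module):
    """Maps a full-resolution input view to a compact (downscaled) representation."""
    def __init__(self, ch=3, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(conv_block(ch, 32), conv_block(32, ch))
    def forward(self, x):
        x = self.body(x)
        return F.interpolate(x, scale_factor=1 / self.scale, mode='bilinear', align_corners=False)

class VSNet(nn.Module):
    """Synthesizes a new low-resolution view from two compact input views."""
    def __init__(self, ch=3):
        super().__init__()
        self.body = nn.Sequential(conv_block(2 * ch, 64), conv_block(64, 64),
                                  nn.Conv2d(64, ch, 3, padding=1))
    def forward(self, left, right):
        return self.body(torch.cat([left, right], dim=1))

class SRNet(nn.Module):
    """Upscales the synthesized view back to full resolution."""
    def __init__(self, ch=3, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(conv_block(ch, 64), nn.Conv2d(64, ch, 3, padding=1))
    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode='bilinear', align_corners=False)
        return self.body(x)

cr_net, vs_net, sr_net = CRNet(), VSNet(), SRNet()
opt = torch.optim.Adam(
    list(cr_net.parameters()) + list(vs_net.parameters()) + list(sr_net.parameters()), lr=1e-4)

# One joint training step on a dummy batch: two input views and the
# ground-truth intermediate view (all 3x64x64 tensors here for illustration).
left, right, target = (torch.rand(1, 3, 64, 64) for _ in range(3))
target_lr = F.interpolate(target, scale_factor=0.5, mode='bilinear', align_corners=False)

left_c, right_c = cr_net(left), cr_net(right)   # compact-resolution representations
syn_lr = vs_net(left_c, right_c)                # low-resolution synthesized view
syn_hr = sr_net(syn_lr)                         # full-resolution reconstruction

# Integrated loss: CR fidelity + low-resolution VS loss + full-resolution SR loss
# (an assumed combination of L1 terms, not the article's exact formulation).
loss = (F.l1_loss(left_c, F.interpolate(left, scale_factor=0.5, mode='bilinear', align_corners=False))
        + F.l1_loss(syn_lr, target_lr)
        + F.l1_loss(syn_hr, target))
opt.zero_grad()
loss.backward()
opt.step()
```

Because the three networks are differentiated through as one graph, the down-scaling, synthesis, and up-scaling stages can trade errors off against each other rather than being optimized in isolation.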


2012
Author(s): Thomas Richter, Michael Schöberl, Jürgen Seiler, Tobias Tröger, André Kaup

2020, Vol 10 (5), pp. 1562
Author(s): Xiaodong Chen, Haitao Liang, Huaiyuan Xu, Siyu Ren, Huaiyu Cai, ...

Depth image-based rendering (DIBR) plays an important role in 3D video and free-viewpoint video synthesis. However, artifacts can appear in the synthesized view due to viewpoint changes and stereo depth estimation errors. Holes typically correspond to out-of-field regions and disocclusions, and filling them appropriately is a challenge. In this paper, a virtual view synthesis approach based on asymmetric bidirectional DIBR is proposed. A depth image preprocessing method detects and corrects unreliable depth values around foreground edges. For the primary view, all pixels are warped to the virtual view by the modified DIBR method. For the auxiliary view, only selected regions are warped, namely those containing content that is not visible in the primary view. This reduces the computational cost and prevents irrelevant foreground pixels from being warped into the holes. During the merging process, a color correction approach is introduced to make the result appear more natural. In addition, a depth-guided inpainting method is proposed to handle the remaining holes in the merged image. Experimental results show that, compared with conventional bidirectional DIBR, the proposed rendering method reduces rendering time by about 37% and achieves about a 97% hole reduction. In terms of visual quality and objective evaluation, our approach performs better than previous methods.
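The bidirectional warp-and-merge step can be illustrated with the NumPy sketch below. The integer horizontal-disparity model, the z-buffer test, the full warp of the auxiliary view (kept only at hole pixels), and the row-wise background fill are simplifying assumptions standing in for the paper's selective region warping, color correction, and depth-guided inpainting.

```python
# Minimal sketch of bidirectional DIBR merging under simplifying assumptions.
import numpy as np

def forward_warp(color, disparity, direction=1):
    """Forward-warp pixels by integer horizontal disparity with a z-buffer.

    Assumes a purely horizontal camera shift and nonnegative disparities;
    returns the warped image and a hole mask (True where nothing mapped)."""
    h, w = color.shape[:2]
    warped = np.zeros_like(color)
    zbuf = np.full((h, w), -np.inf)
    holes = np.ones((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xt = int(round(x + direction * d))   # target column in the virtual view
            if 0 <= xt < w and d > zbuf[y, xt]:  # nearer pixels (larger disparity) win
                zbuf[y, xt] = d
                warped[y, xt] = color[y, x]
                holes[y, xt] = False
    return warped, holes

def synthesize_view(primary, disp_primary, auxiliary, disp_auxiliary):
    # 1) Warp every pixel of the primary view into the virtual viewpoint.
    virtual, holes = forward_warp(primary, disp_primary, direction=-1)

    # 2) Asymmetric step: the paper warps only selected auxiliary regions; for
    #    brevity this sketch warps the whole auxiliary view and keeps its
    #    contribution only where the primary warp left holes.
    aux_warped, aux_holes = forward_warp(auxiliary, disp_auxiliary, direction=+1)
    fill = holes & ~aux_holes
    virtual[fill] = aux_warped[fill]
    holes &= aux_holes

    # 3) Remaining holes: propagate the nearest valid pixel from the left,
    #    a crude stand-in for the paper's depth-guided inpainting.
    for y, x in zip(*np.nonzero(holes)):
        valid = virtual[y, :x][~holes[y, :x]]
        virtual[y, x] = valid[-1] if valid.size else 0
    return virtual

# Example usage on random data; the virtual viewpoint is assumed to lie between
# the two cameras, so the primary and auxiliary views shift in opposite directions.
h, w = 120, 160
primary = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
auxiliary = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
disp_primary = np.random.uniform(0, 15, (h, w))
disp_auxiliary = np.random.uniform(0, 15, (h, w))
virtual_view = synthesize_view(primary, disp_primary, auxiliary, disp_auxiliary)
```

The asymmetry in the paper comes from step 2: restricting the auxiliary warp to the regions that the primary view cannot see is what saves computation and keeps irrelevant foreground pixels out of the holes; the sketch only approximates this by discarding auxiliary pixels that do not land in holes.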

