View synthesis distortion model optimization for bit allocation in three-dimensional video coding
2011 ◽ Vol 50 (12) ◽ pp. 120502
Author(s): Feng Shao

2009 ◽ Vol 24 (8) ◽ pp. 666-681
Author(s): Yanwei Liu ◽ Qingming Huang ◽ Siwei Ma ◽ Debin Zhao ◽ Wen Gao

Author(s): Wei Gao ◽ Linjie Zhou ◽ Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task because of the large number of pixels involved and the intensive computation required, which can prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, together with light-weight neural networks. The proposed architecture consists of three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are jointly trained with the integrated losses of the CR, VS, and SR networks. Moreover, to exploit the redundancy of deep neural networks, we apply an efficient light-weight strategy that prunes filters to simplify the networks and accelerate inference. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while delivering competitive image quality.
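As a rough illustration of the cascaded CR/VS/SR design described above, the following PyTorch sketch wires three small convolutional networks together and sums their losses into one joint training objective. The layer configurations, the 2x down-scaling factor, the L1 loss terms, and the loss weights are assumptions made for illustration only; they are not the paper's actual network architectures.

```python
import torch.nn as nn
import torch.nn.functional as F

class CRNet(nn.Module):
    """Compact-resolution network: down-scales each input view (assumed 2x)."""
    def __init__(self, channels=3, feats=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, channels, 3, stride=2, padding=1),  # 2x down-scale
        )

    def forward(self, x):
        return self.body(x)

class VSNet(nn.Module):
    """View-synthesis network: predicts one new view from stacked compact views."""
    def __init__(self, num_views=4, channels=3, feats=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(num_views * channels, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, channels, 3, padding=1),
        )

    def forward(self, views):  # views: (B, V, C, h, w)
        b, v, c, h, w = views.shape
        return self.body(views.reshape(b, v * c, h, w))

class SRNet(nn.Module):
    """Super-resolution network: restores the full-resolution synthesized view."""
    def __init__(self, channels=3, feats=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

def forward_pipeline(cr_net, vs_net, sr_net, input_views):
    """input_views: (B, V, C, H, W) full-resolution input views (H, W even)."""
    b, v, c, h, w = input_views.shape
    compact = cr_net(input_views.reshape(b * v, c, h, w))   # (B*V, C, H/2, W/2)
    compact = compact.reshape(b, v, c, h // 2, w // 2)
    syn_lr = vs_net(compact)                                # (B, C, H/2, W/2)
    syn_hr = sr_net(syn_lr)                                 # (B, C, H, W)
    return compact, syn_lr, syn_hr

def joint_loss(input_views, compact, syn_lr, syn_hr, target_hr, weights=(1.0, 1.0, 1.0)):
    """Integrated CR + VS + SR losses; targets and weights are illustrative."""
    b, v, c, h, w = input_views.shape
    lr_inputs = F.interpolate(input_views.reshape(b * v, c, h, w),
                              scale_factor=0.5, mode='bicubic', align_corners=False)
    target_lr = F.interpolate(target_hr, scale_factor=0.5,
                              mode='bicubic', align_corners=False)
    l_cr = F.l1_loss(compact.reshape(b * v, c, h // 2, w // 2), lr_inputs)
    l_vs = F.l1_loss(syn_lr, target_lr)
    l_sr = F.l1_loss(syn_hr, target_hr)
    return weights[0] * l_cr + weights[1] * l_vs + weights[2] * l_sr
```

Training all three networks against this single summed objective corresponds to the joint training described in the abstract; each term would be computed on mini-batches of light field views and back-propagated through the whole cascade.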

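The abstract does not specify which pruning criterion is used, so the snippet below shows one common realization of light-weight filter pruning: magnitude-based (L1-norm) selection of convolution filters. The keep ratio and the example layer are hypothetical.

```python
import torch
import torch.nn as nn

def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Return a thinner Conv2d that keeps the filters with the largest L1 norm."""
    with torch.no_grad():
        # L1 norm of each output filter: shape (out_channels,)
        scores = conv.weight.abs().sum(dim=(1, 2, 3))
        n_keep = max(1, int(conv.out_channels * keep_ratio))
        keep = torch.argsort(scores, descending=True)[:n_keep]

        pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned

# Hypothetical usage: the next layer's input channels must be reduced to match,
# and the pruned network is typically fine-tuned afterwards.
layer = nn.Conv2d(3, 32, 3, padding=1)
thin = prune_conv_filters(layer, keep_ratio=0.5)   # 32 -> 16 filters
```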
