Revisiting Spatio-Angular Trade-off in Light Field Cameras and Extended Applications in Super-Resolution

Author(s):  
Hao Zhu ◽  
Mantang Guo ◽  
Hongdong Li ◽  
Qing Wang ◽  
Antonio Robles-Kelly
2018 ◽  
Vol 15 (1) ◽  
pp. 172988141774844 ◽  
Author(s):  
Mandan Zhao ◽  
Gaochang Wu ◽  
Yebin Liu ◽  
Xiangyang Hao

With the development of consumer light field cameras, light field imaging has become a widely used method for capturing the three-dimensional appearance of a scene. Depth estimation typically requires a densely sampled light field in the angular domain or a high resolution in the spatial domain. However, there is an inherent trade-off between the angular and spatial resolutions of a light field. Recently, several methods for super-resolving such trade-off-limited light fields have been introduced. Unlike conventional approaches that optimize the depth maps directly, these methods focus on maximizing the quality of the super-resolved light field. In this article, we investigate how depth estimation can benefit from these super-resolution methods. Specifically, we compare the quality of depth estimated from (a) the original sparsely sampled light fields versus the reconstructed densely sampled light fields, and (b) the original low-resolution light fields versus the super-resolved high-resolution light fields. Experimental results evaluate the depth maps enhanced by different super-resolution approaches.
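A minimal sketch of the comparison protocol described above, assuming a depth estimator and a light field super-resolver are available as callables; the names `estimate_depth` and `super_resolve` and the mean-absolute-disparity-error metric are illustrative choices, not the paper's exact evaluation setup.

```python
# Sketch of the depth-quality comparison: depth estimated from the original
# (sparse / low-resolution) light field vs. depth estimated from its
# super-resolved reconstruction.  `estimate_depth`, `super_resolve`, and the
# ground-truth disparity `gt_disp` are assumed to be provided by the caller.
import numpy as np

def mean_abs_disparity_error(disp, gt_disp):
    """Mean absolute disparity error against ground truth (illustrative metric)."""
    return float(np.mean(np.abs(disp - gt_disp)))

def compare_depth_quality(lf_original, gt_disp, estimate_depth, super_resolve):
    # (a) depth from the original sparsely sampled / low-resolution light field
    disp_original = estimate_depth(lf_original)

    # (b) depth from the reconstructed dense / high-resolution light field
    lf_enhanced = super_resolve(lf_original)
    disp_enhanced = estimate_depth(lf_enhanced)

    return {
        "original": mean_abs_disparity_error(disp_original, gt_disp),
        "super_resolved": mean_abs_disparity_error(disp_enhanced, gt_disp),
    }
```

The same routine can be run once with an angular super-resolver and once with a spatial super-resolver to reproduce the two comparisons (a) and (b) described in the abstract.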


2018 ◽  
Vol 4 (3) ◽  
pp. 406-418 ◽  
Author(s):  
Mandan Zhao ◽  
Gaochang Wu ◽  
Yipeng Li ◽  
Xiangyang Hao ◽  
Lu Fang ◽  
...  

Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computations required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, together with light-weight neural networks. The proposed architecture consists of three cascaded neural networks: a CR network that generates compact representations of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are jointly trained with an integrated loss combining the CR, VS, and SR objectives. Moreover, exploiting the redundancy of deep neural networks, we apply a light-weight filter-pruning strategy to simplify the networks and accelerate inference. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while maintaining competitive image quality.
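A minimal PyTorch-style sketch of the cascaded CR → VS → SR pipeline and its integrated loss, under assumed layer widths, a ×2 scale factor, L1 objectives, and equal loss weights; none of these choices are taken from the paper itself.

```python
# Illustrative CR -> VS -> SR cascade with a joint loss.  Layer widths, the
# x2 scale factor, the L1 terms, and the loss weights are assumptions for
# this sketch, not the configuration reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_relu(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class CRNet(nn.Module):
    """Compact-resolution network: down-scales each input view by 2x."""
    def __init__(self, ch=3):
        super().__init__()
        self.body = nn.Sequential(conv_relu(ch, 32),
                                  nn.Conv2d(32, ch, 3, stride=2, padding=1))
    def forward(self, view):
        return self.body(view)

class VSNet(nn.Module):
    """View-synthesis network: predicts a novel view from stacked compact views."""
    def __init__(self, n_views=4, ch=3):
        super().__init__()
        self.body = nn.Sequential(conv_relu(n_views * ch, 64), conv_relu(64, 64),
                                  nn.Conv2d(64, ch, 3, padding=1))
    def forward(self, compact_views):  # shape (B, n_views*ch, H/2, W/2)
        return self.body(compact_views)

class SRNet(nn.Module):
    """Super-resolution network: restores the synthesized view to full resolution."""
    def __init__(self, ch=3, scale=2):
        super().__init__()
        self.body = nn.Sequential(conv_relu(ch, 64),
                                  nn.Conv2d(64, ch * scale * scale, 3, padding=1),
                                  nn.PixelShuffle(scale))
    def forward(self, x):
        return self.body(x)

def integrated_loss(compact, compact_ref, synth_lr, synth_hr, target_hr,
                    weights=(1.0, 1.0, 1.0)):
    """Joint CR + VS + SR objective; `compact_ref` (e.g. a down-scaled copy of
    the input views) and the equal weights are illustrative assumptions."""
    l_cr = F.l1_loss(compact, compact_ref)                                  # CR term
    l_vs = F.l1_loss(synth_lr, F.interpolate(target_hr, scale_factor=0.5))  # VS term
    l_sr = F.l1_loss(synth_hr, target_hr)                                   # SR term
    return weights[0] * l_cr + weights[1] * l_vs + weights[2] * l_sr
```

In training, all three terms would be back-propagated through the whole cascade so the compact representation is learned jointly with synthesis and super-resolution; filter pruning would then be applied to the trained networks to accelerate inference.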


2021 ◽  
Author(s):  
Zhen Cheng ◽  
Zhiwei Xiong ◽  
Chang Chen ◽  
Dong Liu ◽  
Zheng-Jun Zha
