Fast and Accurate Light Field View Synthesis by Optimizing Input View Selection

Micromachines ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 557
Author(s):  
Xingzheng Wang ◽  
Yongqiang Zan ◽  
Senlin You ◽  
Yuanlong Deng ◽  
Lihua Li

There is an inherent trade-off between spatial and angular resolution in light field applications; various algorithms have been proposed to enhance angular resolution while maintaining high spatial resolution, a task also called view synthesis. Among them, depth-estimation-based methods can use only four corner views to reconstruct a novel view at an arbitrary location. However, depth estimation is time-consuming, and the quality of the reconstructed novel view depends not only on the number of input views but also on their locations. In this paper, we explore the relationship between input view selection and the angular super-resolution reconstruction results. Different numbers and positions of input views are selected to compare reconstruction speed and novel-view quality. Experimental results show that the speed of the algorithm decreases as the number of input views per novel view increases, and that novel-view quality degrades with increasing distance from the input views. After comparison, using two input views on the same row to reconstruct the novel views between them achieves fast and accurate light field view synthesis.
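The depth-based reconstruction the abstract refers to hinges on one operation: shifting each pixel of an input view by its disparity, scaled by the angular offset to the novel view. A minimal numpy sketch of that warping step, with all names hypothetical and nearest-neighbor sampling in place of the interpolation a real implementation would use:

```python
import numpy as np

def warp_view(src, disparity, du, dv):
    """Backward-warp a source view to a novel angular position.

    src:       (H, W) grayscale input view
    disparity: (H, W) per-pixel disparity (pixels per unit angular offset)
    du, dv:    angular offset of the novel view relative to the source view
    """
    H, W = src.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Each novel-view pixel samples the source at a disparity-scaled offset.
    src_x = np.clip(np.round(xs + disparity * du).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + disparity * dv).astype(int), 0, H - 1)
    return src[src_y, src_x]

# Toy example: a fronto-parallel scene (constant disparity) shifts rigidly.
view = np.tile(np.arange(8, dtype=float), (8, 1))
novel = warp_view(view, np.full((8, 8), 2.0), du=1.0, dv=0.0)
```

Running this step once per input view, and once per novel view, is what makes the number and placement of input views dominate both speed and quality.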

2020 ◽  
Vol 34 (07) ◽  
pp. 11141-11148 ◽  
Author(s):  
Jing Jin ◽  
Junhui Hou ◽  
Hui Yuan ◽  
Sam Kwong

The acquisition of light field images with high angular resolution is costly. Although many methods have been proposed to improve the angular resolution of a sparsely sampled light field, they focus on light fields with a small baseline, as captured by consumer light field cameras. By making full use of the intrinsic geometric information of light fields, in this paper we propose an end-to-end learning-based approach for angularly super-resolving a sparsely sampled light field with a large baseline. Our model consists of two learnable modules and one physically based module: a depth estimation module that explicitly models the scene geometry, a physically based warping step for novel view synthesis, and a light field blending module designed specifically for light field reconstruction. Moreover, we introduce a novel loss function that promotes preservation of the light field parallax structure. Experimental results over various light field datasets, including large-baseline light field images, demonstrate the significant superiority of our method over state-of-the-art ones: it improves the PSNR over the second-best method by up to 2 dB on average while reducing execution time by a factor of 48. In addition, our method preserves the light field parallax structure better.
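The paper's blending module is learned, but the idea it replaces can be sketched by hand: once several input views have been warped to the target angular position, combine them with weights that favor angularly nearer sources. A simple distance-weighted stand-in (all names hypothetical; not the paper's network):

```python
import numpy as np

def blend_views(warped, positions, target):
    """Blend pre-warped views, weighting by angular proximity to the target.

    warped:    list of (H, W) views already warped to the target position
    positions: list of (u, v) angular coordinates of the source views
    target:    (u, v) angular coordinate of the novel view
    """
    dists = np.array([np.hypot(u - target[0], v - target[1])
                      for u, v in positions])
    weights = 1.0 / (dists + 1e-6)       # nearer views count more
    weights /= weights.sum()
    return sum(w * img for w, img in zip(weights, warped))

# Two equidistant sources contribute equally to the midpoint view.
a, b = np.zeros((2, 2)), np.ones((2, 2))
out = blend_views([a, b], [(0, 0), (2, 0)], target=(1, 0))
```

A learned blending network improves on this by also down-weighting pixels where warping failed (occlusions, depth errors), which fixed distance weights cannot detect.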


2018 ◽  
Vol 15 (1) ◽  
pp. 172988141774844 ◽  
Author(s):  
Mandan Zhao ◽  
Gaochang Wu ◽  
Yebin Liu ◽  
Xiangyang Hao

With the development of consumer light field cameras, light field imaging has become a widely used method for capturing the three-dimensional appearance of a scene. Depth estimation usually requires a light field densely sampled in the angular domain or with high resolution in the spatial domain. However, there is an inherent trade-off between the angular and spatial resolutions of a light field. Recently, studies on super-resolving such trade-off light fields have been introduced. Rather than the conventional approaches that optimize the depth maps, these approaches focus on maximizing the quality of the super-resolved light field. In this article, we investigate how depth estimation can benefit from these super-resolution methods. Specifically, we compare the quality of depth estimated from (a) the original sparsely sampled light fields versus the reconstructed densely sampled light fields, and (b) the original low-resolution light fields versus the high-resolution light fields. Experimental results evaluate the enhanced depth maps obtained with different super-resolution approaches.
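Comparisons like the ones described above need a depth-quality metric. Two common choices are mean squared error against ground truth and the "bad pixel" ratio (fraction of pixels whose error exceeds a threshold; 0.07 is a threshold used by light field benchmarks). A minimal sketch, with hypothetical names:

```python
import numpy as np

def depth_error(est, gt, bad_thresh=0.07):
    """Return (MSE, bad-pixel ratio) between estimated and ground-truth depth."""
    mse = float(np.mean((est - gt) ** 2))
    bad = float(np.mean(np.abs(est - gt) > bad_thresh))
    return mse, bad

# Toy example: half the pixels are off by 0.2, half are exact.
gt = np.zeros(4)
est = np.array([0.0, 0.0, 0.2, 0.2])
mse, bad = depth_error(est, gt)
```

Evaluating both metrics on depth maps estimated before and after super-resolution is enough to quantify the benefit the article investigates.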


Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computation required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep-learning-based light field view synthesis that significantly reduces computation by combining compact-resolution (CR) representation and super-resolution (SR) techniques with lightweight neural networks. The proposed architecture cascades three neural networks: a CR network that generates a compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are trained jointly with the integrated losses of the CR, VS, and SR networks. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient lightweight strategy that prunes filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient while maintaining competitive image quality.
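The abstract does not specify the pruning criterion; one common filter-pruning strategy ranks a convolution layer's output filters by the L1 norm of their weights and discards the smallest. A numpy sketch of that criterion (an assumption for illustration, not necessarily the paper's exact strategy):

```python
import numpy as np

def prune_filters(conv_weight, keep_ratio=0.5):
    """Keep the largest-L1-norm filters of one conv layer.

    conv_weight: (out_channels, in_channels, kH, kW) weight tensor
    Returns the pruned weights and the indices of the kept filters.
    """
    norms = np.abs(conv_weight).sum(axis=(1, 2, 3))   # L1 norm per filter
    n_keep = max(1, int(round(keep_ratio * conv_weight.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # preserve channel order
    return conv_weight[keep], keep

# Toy layer: filter i is filled with the value i + 1, so larger i => larger norm.
w = np.zeros((4, 2, 3, 3))
for i in range(4):
    w[i] = i + 1
pruned, keep = prune_filters(w, keep_ratio=0.5)
```

In a full network, pruning a layer's output filters also requires dropping the matching input channels of the next layer, which is where most of the inference speed-up comes from.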


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4019
Author(s):  
Ke Zhang ◽  
Cankun Yang ◽  
Xiaojuan Li ◽  
Chunping Zhou ◽  
Ruofei Zhong

To carry super-resolution technology from theory to practice, and to improve microsatellite spatial resolution, we propose a dedicated super-resolution algorithm based on a multi-modality super-CMOS sensor that can adapt to the limited computing capacity of microsatellite computers. First, we designed an oblique sampling mode with the sensor rotated by 26.56° (arctan 1/2) to obtain images with a high overlap ratio and sub-pixel displacement. Second, the proposed super-resolution algorithm was applied to reconstruct the final high-resolution image. Because the satellite equipped with this sensor is scheduled to be launched this year, we also designed simulation modes for both conventional and oblique sampling of the sensor to obtain comparison and experimental data. Lastly, we evaluated the super-resolution quality of the images and the effectiveness, practicality, and efficiency of the algorithm. The experiments showed that the super-resolution algorithm combined with oblique-mode sampling of the multi-modality super-CMOS sensor can increase the spatial resolution of an image by about a factor of two. The algorithm is simple and highly efficient, reconstructing a super-resolved result from two remote-sensing images within 0.713 s, which gives it good performance on a microsatellite.
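The core of the approach is that rotating the pixel grid by arctan(1/2) makes successive samples land half a pixel apart, so two overlapping images can be interleaved onto a finer grid. A simplified one-axis shift-and-add sketch under that assumption (hypothetical names; the paper's reconstruction is more elaborate):

```python
import math
import numpy as np

# Rotating the pixel grid by arctan(1/2) gives the stated sampling angle.
angle = math.degrees(math.atan(0.5))   # about 26.56 degrees

def shift_and_add(frame_a, frame_b):
    """Interleave two frames offset by half a pixel (one axis) onto a 2x grid.

    frame_a samples integer pixel positions; frame_b samples the
    half-pixel-shifted positions in between.
    """
    H, W = frame_a.shape
    hi = np.empty((H, 2 * W), dtype=frame_a.dtype)
    hi[:, 0::2] = frame_a   # integer-pixel samples
    hi[:, 1::2] = frame_b   # half-pixel-shifted samples
    return hi

# Toy example: the two frames interleave into one doubled-resolution row.
frame_a = np.array([[0.0, 2.0]])
frame_b = np.array([[1.0, 3.0]])
hi = shift_and_add(frame_a, frame_b)
```

The interleaving itself is cheap (no iterative optimization), which is consistent with the reported sub-second reconstruction on limited microsatellite hardware.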

