Computational Super-Resolution Full-Parallax Three-Dimensional Light Field Display Based on Dual-Layer LCD Modulation

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 81045-81054
Author(s):  
Peng Wang ◽  
Xinzhu Sang ◽  
Duo Chen ◽  
Binbin Yan

Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computation required, which can preclude its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep-learning-based light field view synthesis that significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, together with lightweight neural networks. The proposed architecture comprises three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality, full-resolution views. All three networks are trained jointly with an integrated loss combining the CR, VS, and SR objectives. Moreover, exploiting the redundancy of deep neural networks, we apply an efficient lightweight strategy that prunes filters to simplify the networks and accelerate inference. Experimental results demonstrate that the proposed method greatly reduces processing time and is far more computationally efficient while maintaining competitive image quality.
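Two ingredients of this pipeline, the integrated multi-stage loss and L1-norm filter pruning, can be sketched in a few lines. This is only an illustration under assumed details: the loss weights, the keep ratio, and the ranking criterion (L1 norm) are not specified in the abstract and are common choices rather than the authors' stated ones.

```python
import numpy as np

def combined_loss(l_cr, l_vs, l_sr, w_cr=1.0, w_vs=1.0, w_sr=1.0):
    """Weighted sum of the three stage losses used for joint training.
    The weights here are illustrative defaults, not values from the paper."""
    return w_cr * l_cr + w_vs * l_vs + w_sr * l_sr

def prune_filters_l1(weights, keep_ratio=0.5):
    """Rank convolutional filters by L1 norm and keep the strongest ones.

    weights: array of shape (out_channels, in_channels, kH, kW).
    Returns the sorted indices of the filters to keep; the smallest-norm
    filters are pruned to shrink the network and speed up inference.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return keep
```

In practice the kept filter indices would be used to slice the weight tensors of each convolution (and the corresponding input channels of the next layer) before fine-tuning the slimmed network.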


2018 ◽  
Vol 15 (1) ◽  
pp. 172988141774844 ◽  
Author(s):  
Mandan Zhao ◽  
Gaochang Wu ◽  
Yebin Liu ◽  
Xiangyang Hao

With the development of consumer light field cameras, light field imaging has become a widely used method for capturing the three-dimensional appearance of a scene. Depth estimation typically requires a light field that is densely sampled in the angular domain or has high resolution in the spatial domain. However, there is an inherent trade-off between the angular and spatial resolutions of a light field. Recently, several methods for super-resolving such trade-off-limited light fields have been introduced. Rather than optimizing the depth maps directly, as conventional approaches do, these methods focus on maximizing the quality of the super-resolved light field. In this article, we investigate how depth estimation can benefit from these super-resolution methods. Specifically, we compare the quality of the depth estimated from (a) the original sparsely sampled light fields versus the reconstructed densely sampled light fields, and (b) the original low-resolution light fields versus the high-resolution light fields. Experimental results evaluate the depth maps enhanced by the different super-resolution approaches.
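The comparison described above can be illustrated with a minimal sketch: a naive angular interpolation that densifies the view grid (the trivial baseline that learned super-resolution methods improve upon), and an RMSE metric for scoring an estimated depth map against a reference. Both functions are hypothetical illustrations, not the evaluation protocol of the article.

```python
import numpy as np

def interpolate_view(view_a, view_b, alpha=0.5):
    """Naive angular interpolation: a linear blend of two neighbouring views.
    Real light-field super-resolution methods are far more sophisticated; this
    baseline only shows how a denser angular sampling might be produced before
    depth estimation is run on it."""
    return (1.0 - alpha) * view_a + alpha * view_b

def depth_rmse(estimated, ground_truth):
    """Root-mean-square error between an estimated and a reference depth map,
    a simple proxy for the depth quality being compared."""
    return float(np.sqrt(np.mean((estimated - ground_truth) ** 2)))
```

With such a metric, the study's question reduces to whether `depth_rmse` computed on depth from the super-resolved light field is consistently lower than on depth from the original sparse or low-resolution input.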


Author(s):  
Ying Yuan ◽  
Xiaorui Wang ◽  
Yang Yang ◽  
Hang Yuan ◽  
Chao Zhang ◽  
...  

Abstract Full-chain system performance characterization is very important for the optimal design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display of a 3D scene are treated as a complete light field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, which uses the 3D voxel, the image depth, and the field of view of the reconstructed images as the 3D display quality evaluation indicators. Unlike most previous work based on the ideal integral imaging model, the proposed full-chain model accounts for the diffraction effects and optical aberrations of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual transmission and convergence characteristics of the 3D light field. The relationships between the key parameters of an integral imaging 3D display system and the 3D display quality evaluation indicators are analyzed and discussed through simulation experiments. The results will be helpful for the optimal design of high-quality integral imaging 3D display systems.
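One of the evaluation indicators named above, the field of view, has a simple closed form in the ideal integral imaging model that the paper's full-chain model refines. The sketch below computes only that ideal value from the microlens pitch and the lens-to-panel gap; diffraction, aberration, detector sampling, and the human visual system, which the proposed model adds, are deliberately not modelled here.

```python
import math

def viewing_angle_deg(lens_pitch_mm, gap_mm):
    """Ideal-model viewing angle of an integral imaging display:

        theta = 2 * arctan(p / (2 * g))

    where p is the microlens pitch and g is the gap between the microlens
    array and the display panel. This is the textbook ideal formula; the
    paper's full-chain model corrects it for real optical effects.
    """
    return math.degrees(2.0 * math.atan(lens_pitch_mm / (2.0 * gap_mm)))
```

For example, a 1 mm pitch with a 1 mm gap gives an ideal viewing angle of about 53°, which an actual system would not fully realize once aberrations and crosstalk are accounted for.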


Nanomaterials ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1920
Author(s):  
Chang Wang ◽  
Zeqing Yu ◽  
Qiangbo Zhang ◽  
Yan Sun ◽  
Chenning Tao ◽  
...  

Near-eye display (NED) systems for virtual reality (VR) and augmented reality (AR) have been developing rapidly; however, the widespread adoption of VR/AR devices is hindered by the bulky refractive and diffractive elements of their complicated optical systems, as well as by the visual discomfort caused by excessive binocular parallax and the accommodation-convergence conflict. To address these problems, this paper proposes an NED system that combines a 5 mm diameter metalens eyepiece with three-dimensional (3D) computer-generated holography (CGH) based on Fresnel diffraction. Metalenses have been studied extensively for their extraordinary wavefront-shaping capabilities at the subwavelength scale, their ultrathin compactness, and their significant advantages over conventional lenses; introducing a metalens eyepiece therefore promises to reduce the bulk of NED systems. Furthermore, CGH is typically regarded as the optimal solution for 3D display, since it can restore the whole light field of the target 3D scene and thereby overcome the limitations of binocular systems. Experiments are carried out for this design: a 5 mm diameter metalens eyepiece composed of anisotropic silicon nitride nanofins is fabricated, achieving a diffraction efficiency of 15.7% and a field of view of 31° at a 532 nm incidence. A novel partitioned Fresnel diffraction and resampling method is applied to simulate the wave propagation needed to produce the hologram, with the metalens transforming the reconstructed 3D image into a virtual image for the NED. Our work combining a metalens with CGH may pave the way for future portable optical display devices.
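The hologram computation rests on Fresnel diffraction, which in its standard single-step form can be evaluated with FFTs. The sketch below implements the generic transfer-function method for scalar Fresnel propagation; it is not the paper's partitioned-and-resampled variant, and the grid size, pixel pitch, and propagation distance in the usage note are assumed values for illustration only.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Single-step Fresnel propagation via the transfer-function (FFT) method.

    field:      complex amplitude on an N x N grid
    wavelength: optical wavelength (same length unit as z and dx)
    z:          propagation distance
    dx:         sample spacing of the grid

    The Fresnel transfer function in the spatial-frequency domain is
    H(fx, fy) = exp(-i * pi * lambda * z * (fx^2 + fy^2)), applied between a
    forward and an inverse FFT (a constant phase factor is omitted).
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    h = np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h)
```

For instance, propagating a square aperture sampled on a 64 x 64 grid with a 10 µm pitch over 1 cm at 532 nm yields the familiar near-field diffraction ringing; because the transfer function has unit modulus, total intensity is conserved, a quick sanity check for any implementation.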

