Three-dimensional temperature reconstruction of a diffusion flame from light-field convolution imaging with a focused plenoptic camera

Author(s):
JingWen Shi, Hong Qi, ZhiQiang Yu, XiangYang An, YaTao Ren, ...

Author(s):
Ying Yuan, Xiaorui Wang, Yang Yang, Hang Yuan, Chao Zhang, ...

Abstract Full-chain system performance characterization is very important for the optimized design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display processes of a 3D scene are treated as one complete light-field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, which uses the 3D voxel, the image depth, and the field of view of the reconstructed images as the 3D display quality evaluation indicators. Unlike most previous work, which relies on an ideal integral imaging model, the proposed full-chain performance characterization model accounts for the diffraction effect and optical aberrations of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual 3D light-field transmission and convergence characteristics. The relationships between the key parameters of an integral imaging 3D display system and the 3D display quality evaluation indicators are analyzed and discussed through simulation experiments. The results should be helpful for the optimized design of high-quality integral imaging 3D display systems.
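
Although the paper's characterization model is analytic, the way diffraction and detector sampling jointly bound a display's resolution can be illustrated numerically. The snippet below is a minimal sketch under assumed parameter values (microlens pitch, focal length, pixel size, and image depth are all hypothetical), not the authors' model: it estimates the lateral voxel size as the larger of the diffraction-limited spot of one microlens and one detector pixel, both magnified to the reconstruction plane.

```python
# Minimal sketch: how diffraction and detector sampling jointly bound the
# lateral voxel size of an integral imaging display. All parameter values
# are illustrative assumptions, not taken from the paper.

def lateral_voxel_size(wavelength, lens_pitch, focal_length,
                       pixel_size, image_depth):
    """Estimate the lateral voxel size (m) at a reconstruction plane.

    A voxel cannot be smaller than the diffraction-limited spot of one
    microlens, nor than one detector pixel, both projected through the
    lens to the reconstruction plane at distance `image_depth`.
    """
    # Airy-disk diameter of a circular microlens at its focal plane.
    diffraction_spot = 2.44 * wavelength * focal_length / lens_pitch
    # Geometric magnification from the focal plane to the image plane.
    magnification = image_depth / focal_length
    # The limiting spot is magnified onto the reconstruction plane.
    return max(diffraction_spot, pixel_size) * magnification

if __name__ == "__main__":
    size = lateral_voxel_size(wavelength=550e-9,    # green light
                              lens_pitch=100e-6,    # 100 um microlens
                              focal_length=2.4e-3,  # 2.4 mm focal length
                              pixel_size=8e-6,      # 8 um detector pixel
                              image_depth=30e-3)    # 30 mm image depth
    print(f"estimated lateral voxel size: {size * 1e3:.3f} mm")
```

With these assumed numbers the diffraction term dominates, which is the kind of trade-off between microlens aperture and pixel pitch that a full-chain model makes explicit.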


Nanomaterials, 2021, Vol. 11 (8), pp. 1920
Author(s):
Chang Wang, Zeqing Yu, Qiangbo Zhang, Yan Sun, Chenning Tao, ...

Near-eye display (NED) systems for virtual reality (VR) and augmented reality (AR) have been developing rapidly; however, the widespread use of VR/AR devices is hindered by the bulky refractive and diffractive elements of the complicated optical system, as well as by the visual discomfort caused by excessive binocular parallax and the accommodation-convergence conflict. To address these problems, this paper proposes an NED system combining a 5 mm diameter metalens eyepiece with three-dimensional (3D) computer-generated holography (CGH) based on Fresnel diffraction. Metalenses have been extensively studied for their extraordinary wavefront-shaping capability at the subwavelength scale and their ultrathin compactness, significant advantages over conventional lenses; introducing a metalens eyepiece is therefore likely to reduce the bulkiness of NED systems. Furthermore, CGH has typically been regarded as the optimum solution for 3D displays, since it can restore the whole light field of the target 3D scene and thereby overcome the limitations of binocular systems. Experiments are carried out for this design: a 5 mm diameter metalens eyepiece composed of silicon nitride anisotropic nanofins is fabricated, with a diffraction efficiency of 15.7% and a field of view of 31° at 532 nm incidence. Furthermore, a novel partitioned Fresnel diffraction and resampling method is applied to simulate the wave propagation needed to produce the hologram, with the metalens transforming the reconstructed 3D image into a virtual image for the NED. Our work combining metalenses and CGH may pave the way for portable optical display devices.
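
The Fresnel-diffraction propagation underlying such hologram computation can be sketched with the standard single-step transfer-function method. The snippet below is a generic illustration with assumed parameters (grid size, pixel pitch, propagation distance); the paper's partitioned Fresnel diffraction and resampling scheme subdivides and refines this basic computation.

```python
import numpy as np

# Generic single-step Fresnel propagation via the transfer-function
# method. Grid size, pixel pitch, and distance below are illustrative
# assumptions, not the parameters used in the paper.

def fresnel_propagate(field, wavelength, pitch, distance):
    """Propagate a complex field by `distance` (m) in the Fresnel regime."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)           # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function H(fx, fy).
    H = np.exp(1j * 2 * np.pi * distance / wavelength) * np.exp(
        -1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, pitch, wl = 1024, 8e-6, 532e-9              # grid, 8 um pitch, 532 nm
obj = np.zeros((n, n), dtype=complex)
obj[n // 2, n // 2] = 1.0                      # a point of the 3D scene
holo_field = fresnel_propagate(obj, wl, pitch, distance=0.1)
hologram = np.angle(holo_field)                # phase-only hologram
```

For a multi-depth 3D scene, the points at each depth would be propagated with their own distance and the resulting complex fields summed before the phase is extracted.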


i-Perception, 2017, Vol. 8 (1), pp. 204166951668608
Author(s):
Ling Xia, Sylvia C. Pont, Ingrid Heynderickx

Humans are able to estimate light field properties in a scene, in the sense that they have expectations of how objects inside it should appear. Previously, we probed such expectations in a real scene by asking whether a "probe object" fitted the scene with regard to its lighting. But how well can observers interactively adjust the light properties on a "probe object" to match its surrounding real scene? Image ambiguities can result in perceptual interactions between light properties. Such interactions formed a major problem for the "readability" of the illumination direction and diffuseness on a matte, smooth, spherical probe. We found that light direction and diffuseness judgments using a rough sphere as the probe were slightly more accurate than those using a smooth sphere, owing to its three-dimensional (3D) texture. Here we extended the previous work by testing independent and simultaneous adjustments (i.e., the light field properties separated one by one or blended together) of light intensity, direction, and diffuseness using a rough probe. Independently inferred light intensities were close to the veridical values, while simultaneously inferred light intensities interacted somewhat with the light direction and diffuseness. The independently inferred light directions showed no statistical difference from the simultaneously inferred directions. The light diffuseness inferences correlated with the veridical values but contracted around medium values. In summary, observers were able to adjust the basic light properties through both independent and simultaneous adjustments. The light intensity, direction, and diffuseness are well "readable" from our rough probe. Our method allows "tuning the light" (adjusting its spatial distribution) in interfaces for lighting design or for perception research.
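
Why a matte probe makes the light direction "readable" at all can be seen from the first-order Lambertian shading model I ≈ a + b(n·l): a least-squares fit of this model to the probe's pixel intensities recovers the direction l, and the ratio of the diffuse amplitude b to the ambient term a gives a crude proxy for diffuseness. The sketch below is a synthetic illustration of this standard fit, with every name and value assumed; it is not the adjustment procedure used in the experiments.

```python
import numpy as np

# Synthetic illustration: recover the dominant light direction from the
# image of a matte (Lambertian) sphere by fitting I ~ a + b . n.
n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
inside = x**2 + y**2 < 1.0
z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
normals = np.stack([x, y, z], axis=-1)[inside]   # visible-hemisphere normals

true_light = np.array([0.4, 0.3, 0.87])
true_light /= np.linalg.norm(true_light)

# Render: ambient + clipped Lambertian term, plus a little sensor noise.
intensity = 0.2 + 0.8 * np.clip(normals @ true_light, 0.0, None)
intensity += np.random.default_rng(0).normal(0.0, 0.01, intensity.shape)

# Least-squares fit of I = a + bx*nx + by*ny + bz*nz over the lit pixels;
# the fitted vector (bx, by, bz) points toward the light source.
lit = intensity > 0.3                            # exclude the shadowed rim
A = np.column_stack([np.ones(lit.sum()), normals[lit]])
coeffs, *_ = np.linalg.lstsq(A, intensity[lit], rcond=None)
direction = coeffs[1:] / np.linalg.norm(coeffs[1:])
print("estimated light direction:", np.round(direction, 3))
print("true light direction:     ", np.round(true_light, 3))
```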


Author(s):
Wei Gao, Linjie Zhou, Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task due to the large number of pixels involved and the intensive computations required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces computation by using a compact-resolution (CR) representation and super-resolution (SR) techniques, together with light-weight neural networks. The proposed architecture comprises three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All three networks are jointly trained with the integrated losses of the CR, VS, and SR stages. Moreover, exploiting the redundancy of deep neural networks, we use an efficient light-weight strategy to prune filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces processing time and is much more computationally efficient, with competitive image quality.
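
The cascade and its joint objective can be summarized in a short PyTorch sketch. The module names, layer widths, scale factor, and loss terms below are placeholders chosen for illustration, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a three-stage cascade (CR -> VS -> SR) trained
# with a joint loss. All sizes, names, and loss weights are assumptions.

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class CRNet(nn.Module):
    """Down-scales input views to a compact representation (here 2x)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32),
                                  nn.Conv2d(32, 3, 3, stride=2, padding=1))
    def forward(self, x):
        return self.body(x)

class VSNet(nn.Module):
    """Synthesizes a new view from the down-scaled compact views."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return self.body(x)

class SRNet(nn.Module):
    """Restores full resolution from the low-resolution synthesized view."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32),
                                  nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))
    def forward(self, x):
        return self.body(x)

cr, vs, sr = CRNet(), VSNet(), SRNet()
l1 = nn.L1Loss()
views = torch.rand(1, 3, 64, 64)        # an input view (dummy data)
target_lr = torch.rand(1, 3, 32, 32)    # low-res ground-truth new view
target_hr = torch.rand(1, 3, 64, 64)    # full-res ground-truth new view

compact = cr(views)                      # CR stage: compact representation
synth_lr = vs(compact)                   # VS stage: new view at low res
synth_hr = sr(synth_lr)                  # SR stage: back to full resolution

# Joint training: integrated losses of the CR, VS, and SR stages. The CR
# term here is only a placeholder reconstruction proxy.
loss = (l1(nn.functional.interpolate(compact, size=(64, 64)), views)
        + l1(synth_lr, target_lr) + l1(synth_hr, target_hr))
loss.backward()
```

Because the VS network, the dominant cost, runs at a quarter of the pixel count after 2x down-scaling, most of the speed-up comes from the CR stage; filter pruning then thins all three networks further.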


2016, Vol. 371, pp. 166-172
Author(s):
Songlin Xie, Peng Wang, Xinzhu Sang, Chenyu Li, Wenhua Dou, ...

2019, Vol. 27 (17), pp. 24624
Author(s):
Duo Chen, Xinzhu Sang, Peng Wang, Xunbo Yu, Binbin Yan, ...

2020, Vol. 28 (19), pp. 27293
Author(s):
N. Yu. Kuznetsov, K. S. Grigoriev, Yu. V. Vladimirova, V. A. Makarov
