Metalens Eyepiece for 3D Holographic Near-Eye Display

Nanomaterials ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1920
Author(s):  
Chang Wang ◽  
Zeqing Yu ◽  
Qiangbo Zhang ◽  
Yan Sun ◽  
Chenning Tao ◽  
...  

Near-eye display (NED) systems for virtual reality (VR) and augmented reality (AR) have been developing rapidly; however, the widespread use of VR/AR devices is hindered by the bulky refractive and diffractive elements in the complicated optical system, as well as by the visual discomfort caused by excessive binocular parallax and the accommodation-convergence conflict. To address these problems, an NED system combining a 5 mm diameter metalens eyepiece with three-dimensional (3D), computer-generated holography (CGH) based on Fresnel diffraction is proposed in this paper. Metalenses have been extensively studied for their extraordinary wavefront-shaping capabilities at the subwavelength scale and their ultrathin compactness, which give them significant advantages over conventional lenses. The introduction of a metalens eyepiece is therefore likely to reduce the bulkiness of NED systems. Furthermore, CGH has typically been regarded as the optimum solution for 3D displays to overcome the limitations of binocular systems, since it can restore the whole light field of the target 3D scene. Experiments are carried out for this design: a 5 mm diameter metalens eyepiece composed of silicon nitride anisotropic nanofins is fabricated, achieving a diffraction efficiency of 15.7% and a field of view of 31° at 532 nm incidence. Furthermore, a novel partitioned Fresnel diffraction and resampling method is applied to simulate the wave propagations needed to produce the hologram, with the metalens capable of transforming the reconstructed 3D image into a virtual image for the NED. Our work combining a metalens and CGH may pave the way for future portable optical display devices.
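
As a rough illustration of the kind of wave propagation such a CGH pipeline relies on, the Python sketch below implements a standard Fresnel transfer-function propagation and a phase-only hologram encoding. It is not the paper's partitioned Fresnel diffraction and resampling method; the grid parameters, the random layer fields, and the function names are illustrative assumptions.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a complex field over distance z with the Fresnel
    transfer-function (convolution) method; dx is the sampling pitch."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function: exp(jkz) * exp(-j*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy hologram synthesis: superpose the propagated fields of two depth layers
# and keep only the phase (phase-only CGH), a common simplification.
wavelength, dx = 532e-9, 8e-6
rng = np.random.default_rng(0)
layer_near = rng.random((512, 512)) * np.exp(2j * np.pi * rng.random((512, 512)))
layer_far = rng.random((512, 512)) * np.exp(2j * np.pi * rng.random((512, 512)))
hologram_field = (fresnel_propagate(layer_near, wavelength, 0.10, dx)
                  + fresnel_propagate(layer_far, wavelength, 0.15, dx))
phase_hologram = np.angle(hologram_field)             # phase-only encoding
```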

2019 ◽  
Vol 9 (10) ◽  
pp. 2118 ◽  
Author(s):  
Hao Zhang ◽  
Liangcai Cao ◽  
Guofan Jin

Holographic three-dimensional (3D) displays can reconstruct the whole wavefront of a 3D scene and provide rich depth information to the human eye. Computer-generated holographic techniques offer an efficient way of reconstructing holograms without complicated interference recording systems. In this work, we present a technique for generating 3D computer-generated holograms (CGHs) with scalable samplings, using layer-based diffraction calculations. The 3D scene is partitioned into multiple layers according to its depth image. Shifted Fresnel diffraction is used to calculate the wave diffraction from the partitioned layers to the CGH plane with adjustable sampling rates while maintaining the depth information. The algorithm provides an effective way of scaling 3D CGHs without an optical zoom module in the holographic display system. Experiments have been performed, demonstrating that the proposed method can reconstruct high-quality 3D images at different scale factors.
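
The Python sketch below illustrates the layer-based idea only: a depth map is quantized into layers, each layer is propagated to the hologram plane, and the contributions are superposed. A standard fixed-pitch Fresnel transfer-function propagator is used as a stand-in for the shifted Fresnel diffraction with adjustable sampling rates described in the paper, and all parameters are illustrative assumptions.

```python
import numpy as np

def slice_by_depth(amplitude, depth_map, depth_planes):
    """Assign every pixel to its nearest depth plane, returning one
    amplitude mask per layer."""
    idx = np.argmin(np.abs(depth_map[..., None] - depth_planes), axis=-1)
    return [np.where(idx == k, amplitude, 0.0) for k in range(len(depth_planes))]

def fresnel_tf(field, wavelength, z, dx):
    """Fixed-pitch Fresnel transfer-function propagation."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Layer-based CGH: propagate each layer (with a random diffuser phase) to the
# hologram plane and superpose the contributions.
wavelength, dx = 532e-9, 8e-6
rng = np.random.default_rng(0)
amplitude = rng.random((512, 512))
depth_map = 0.10 + 0.05 * rng.random((512, 512))       # depths in metres
depth_planes = np.linspace(0.10, 0.15, 8)
layers = slice_by_depth(amplitude, depth_map, depth_planes)
hologram = sum(fresnel_tf(layer * np.exp(2j * np.pi * rng.random(layer.shape)),
                          wavelength, z, dx)
               for layer, z in zip(layers, depth_planes))
phase_cgh = np.angle(hologram)
```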


2021 ◽  
Vol 24 (2) ◽  
pp. 41-48
Author(s):  
Maxim Yu. Ponamarev

In this work, it is shown that the image formed when coherent radiation passes through a crystal has certain characteristic features. When the crystal is rotated about the propagation axis of the beam under study, an intensity distribution with a complex structure is detected at the output, associated with the transformation of the flat image into a volumetric one. Crystalline plates can thus be used to shape the distribution of a continuous flat light field into a rendering of a real 3D scene, providing a three-dimensional image on a television screen or a computer monitor; the approach can also be used for billboards. The three-dimensional image obtained in this way can be observed directly with the naked eye (without special glasses). Thus, the information capacity of the on-screen image increases, and the perception of the picture approaches real viewing conditions.


Author(s):  
Ying Yuan ◽  
Xiaorui Wang ◽  
Yang Yang ◽  
Hang Yuan ◽  
Chao Zhang ◽  
...  

The full-chain characterization of system performance is very important for the optimized design of an integral imaging three-dimensional (3D) display system. In this paper, the acquisition and display processes of a 3D scene are treated as a complete light field information transmission process. A full-chain performance characterization model of an integral imaging 3D display system is established, which uses the 3D voxel, the image depth, and the field of view of the reconstructed images as the 3D display quality evaluation indicators. Unlike most previous studies, which rely on the ideal integral imaging model, the proposed full-chain performance characterization model considers the diffraction effect and optical aberrations of the microlens array, the sampling effect of the detector, 3D image data scaling, and the human visual system, and can therefore accurately describe the actual 3D light field transmission and convergence characteristics. The relationships between the key parameters of an integral imaging 3D display system and the 3D display quality evaluation indicators are analyzed and discussed through simulation experiments. The results will be helpful for the optimized design of high-quality integral imaging 3D display systems.
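
For orientation only, the sketch below evaluates two of the named indicators, the field of view and the lateral voxel size, under a textbook idealized pinhole model of integral imaging. It deliberately omits the diffraction, aberration, detector-sampling, data-scaling, and human-visual-system terms that the full-chain model accounts for, and the parameter values are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def ideal_ii_indicators(pixel_pitch, lens_pitch, gap, depth):
    """Idealized (pinhole, aberration-free) integral-imaging indicators.
    pixel_pitch: display pixel pitch [m]; lens_pitch: microlens pitch [m];
    gap: display-to-lens-array distance [m]; depth: reconstruction depth [m]."""
    fov_deg = 2 * np.degrees(np.arctan(lens_pitch / (2 * gap)))  # viewing angle
    voxel_lateral = pixel_pitch * depth / gap                    # magnified pixel footprint
    return fov_deg, voxel_lateral

fov_deg, voxel = ideal_ii_indicators(pixel_pitch=8e-6, lens_pitch=1e-3,
                                     gap=3e-3, depth=30e-3)
print(f"viewing angle ~ {fov_deg:.1f} deg, lateral voxel size ~ {voxel * 1e6:.0f} um")
```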


2021 ◽  
Vol 11 (9) ◽  
pp. 3949
Author(s):  
Jiawei Sun ◽  
Nektarios Koukourakis ◽  
Jürgen W. Czarske

Wavefront shaping through a multi-core fiber (MCF) is becoming an attractive method for endoscopic imaging and optical cell manipulation on a chip. However, the discrete distribution and the low number of cores induce pixelated phase modulation, an obstacle to delivering complex light field distributions through MCFs. We demonstrate a novel phase retrieval algorithm, named Core–Gerchberg–Saxton (Core-GS), that employs the captured core distribution map to retrieve a tailored modulation hologram for the targeted intensity distribution at the distal far field. Complex light fields are reconstructed through MCFs with fidelities of up to 96.2%. Closed-loop control with experimental feedback demonstrates the capability of the Core-GS algorithm for precise intensity manipulation of the reconstructed light field. Core-GS provides a robust way of wavefront shaping through MCFs and facilitates the use of the MCF as a vital waveguide in endoscopic and lab-on-a-chip applications.
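
The abstract does not spell out the Core-GS update rules; the sketch below is a generic Gerchberg–Saxton-style loop with a core-distribution constraint, shown only to illustrate the kind of iteration involved: the facet field is restricted to the core positions, and the far field is pushed toward the target intensity. The core-layout generation, the target pattern, and all parameters are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def core_masked_gs(target_amplitude, core_mask, n_iter=100, seed=0):
    """Gerchberg-Saxton-style phase retrieval with a multi-core-fiber constraint:
    the proximal (facet) field carries light only at the core positions given by
    core_mask, and the distal far field is its Fourier transform."""
    rng = np.random.default_rng(seed)
    facet = core_mask * np.exp(1j * rng.uniform(0, 2 * np.pi, core_mask.shape))
    for _ in range(n_iter):
        far = np.fft.fftshift(np.fft.fft2(facet))             # propagate to far field
        far = target_amplitude * np.exp(1j * np.angle(far))   # impose target intensity
        back = np.fft.ifft2(np.fft.ifftshift(far))            # propagate back to facet
        facet = core_mask * np.exp(1j * np.angle(back))       # keep phase only at cores
    return np.angle(facet) * core_mask                        # per-core phase hologram

# Toy example: a random sparse core layout and a single off-axis focus as target.
n = 256
core_mask = (np.random.default_rng(1).random((n, n)) < 0.01).astype(float)
target = np.zeros((n, n))
target[96, 160] = 1.0
hologram_phase = core_masked_gs(target, core_mask)
```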


i-Perception ◽  
2017 ◽  
Vol 8 (1) ◽  
pp. 204166951668608 ◽  
Author(s):  
Ling Xia ◽  
Sylvia C. Pont ◽  
Ingrid Heynderickx

Humans are able to estimate light field properties in a scene, in the sense that they have expectations of how objects should appear within it. Previously, we probed such expectations in a real scene by asking whether a “probe object” fitted the scene with regard to its lighting. But how well can observers interactively adjust the light properties on a “probe object” to match the surrounding real scene? Image ambiguities can result in perceptual interactions between light properties. Such interactions formed a major problem for the “readability” of the illumination direction and diffuseness on a matte, smooth, spherical probe. We found that light direction and diffuseness judgments using a rough sphere as the probe were slightly more accurate than those using a smooth sphere, owing to its three-dimensional (3D) texture. Here we extend the previous work by testing independent and simultaneous adjustments (i.e., the light field properties adjusted one by one or blended together) of light intensity, direction, and diffuseness using a rough probe. Independently inferred light intensities were close to the veridical values, while the simultaneously inferred light intensity interacted somewhat with the light direction and diffuseness. The independently inferred light directions showed no statistical difference from the simultaneously inferred directions. The light diffuseness inferences correlated with the veridical values but contracted around medium values. In summary, observers were able to adjust the basic light properties through both independent and simultaneous adjustments. Light intensity, direction, and diffuseness are thus well “readable” from our rough probe. Our method allows “tuning the light” (adjusting its spatial distribution) in interfaces for lighting design or perception research.


Author(s):  
Wei Gao ◽  
Linjie Zhou ◽  
Lvfang Tao

View synthesis (VS) for light field images is a very time-consuming task because of the great number of pixels involved and the intensive computations required, which may prevent its use in practical real-time three-dimensional systems. In this article, we propose an acceleration approach for deep learning-based light field view synthesis that significantly reduces the computations by using compact-resolution (CR) representation and super-resolution (SR) techniques, as well as light-weight neural networks. The proposed architecture has three cascaded neural networks: a CR network that generates the compact representation of the original input views, a VS network that synthesizes new views from the down-scaled compact views, and an SR network that reconstructs high-quality views at full resolution. All these networks are jointly trained with the integrated losses of the CR, VS, and SR networks. Moreover, exploiting the redundancy of deep neural networks, we use an efficient light-weight strategy that prunes filters for simplification and inference acceleration. Experimental results demonstrate that the proposed method greatly reduces the processing time and is much more computationally efficient while achieving competitive image quality.
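
A minimal PyTorch sketch of such a three-stage cascade (CR, then VS, then SR) trained with a joint loss is given below. The layer counts, channel widths, loss weighting, and toy tensors are illustrative assumptions rather than the authors' architecture, and the filter-pruning step is omitted.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyStage(nn.Module):
    """Light-weight stand-in for one stage (CR, VS, or SR)."""
    def __init__(self, cin, cout, scale=1.0):
        super().__init__()
        self.body = nn.Sequential(conv_block(cin, 32), conv_block(32, 32),
                                  nn.Conv2d(32, cout, 3, padding=1))
        self.scale = scale

    def forward(self, x):
        if self.scale != 1.0:          # down- or up-scale before the conv body
            x = nn.functional.interpolate(x, scale_factor=self.scale,
                                          mode='bilinear', align_corners=False)
        return self.body(x)

# CR compacts the stacked input views, VS synthesizes one new view from the
# compact representation, SR restores it to full resolution.
cr_net = TinyStage(cin=4 * 3, cout=4 * 3, scale=0.5)   # 4 RGB corner views, 2x downscale
vs_net = TinyStage(cin=4 * 3, cout=3)                  # one synthesized RGB view
sr_net = TinyStage(cin=3, cout=3, scale=2.0)           # back to full resolution

views = torch.randn(1, 12, 128, 128)                   # toy stacked input views
gt_view = torch.randn(1, 3, 128, 128)                  # toy ground-truth novel view
gt_compact = torch.randn(1, 12, 64, 64)                # toy low-resolution supervision

compact = cr_net(views)
novel_low = vs_net(compact)
novel_full = sr_net(novel_low)

# Joint objective: sum of the CR, VS, and SR losses (equal weights here).
l1 = nn.L1Loss()
gt_view_low = nn.functional.interpolate(gt_view, scale_factor=0.5,
                                        mode='bilinear', align_corners=False)
loss = l1(compact, gt_compact) + l1(novel_low, gt_view_low) + l1(novel_full, gt_view)
loss.backward()
```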

