Acceleration of color computer-generated hologram from three-dimensional scenes with texture and depth information

Author(s):  
Tomoyoshi Shimobaba ◽  
Takashi Kakue ◽  
Tomoyoshi Ito


2020 ◽  
Vol 6 (2) ◽  
pp. eaay6036 ◽  
Author(s):  
R. C. Feord ◽  
M. E. Sumner ◽  
S. Pusdekar ◽  
L. Kalra ◽  
P. T. Gonzalez-Bellido ◽  
...  

The camera-type eyes of vertebrates and cephalopods exhibit remarkable convergence, but it is currently unknown whether the mechanisms for visual information processing in these brains, which exhibit wildly disparate architecture, are also shared. To investigate stereopsis in a cephalopod species, we affixed “anaglyph” glasses to cuttlefish and used a three-dimensional perception paradigm. We show that (i) cuttlefish have also evolved stereopsis (i.e., the ability to extract depth information from the disparity between left and right visual fields); (ii) when stereopsis information is intact, the time and distance covered before striking at a target are shorter; (iii) stereopsis in cuttlefish works differently from that of vertebrates, as cuttlefish can extract stereopsis cues from anticorrelated stimuli. These findings demonstrate that although there is convergent evolution in depth computation, cuttlefish stereopsis is likely afforded by a different algorithm than in humans, and not just a different implementation.
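As an aside for readers unfamiliar with anaglyph paradigms, the illustrative Python sketch below (not the authors' stimulus code; image size, disparity value, and channel assignment are assumptions) shows how a red-cyan anaglyph with a controlled horizontal disparity, optionally anticorrelated, carries the binocular depth cue described above.

```python
import numpy as np

def make_anaglyph(target, disparity_px, anticorrelated=False):
    """Compose a red-cyan anaglyph from a grayscale target image.

    The target is shifted horizontally by +/- disparity_px/2 for the two
    eyes, so the offset between the red (left-eye) and cyan (right-eye)
    channels provides the disparity cue. If anticorrelated=True, the
    contrast of one eye's image is inverted, as in the anticorrelated
    stimuli mentioned in the abstract.
    """
    h, w = target.shape
    left = np.zeros_like(target)
    right = np.zeros_like(target)
    s = disparity_px // 2
    left[:, s:] = target[:, :w - s]      # left-eye view shifted right
    right[:, :w - s] = target[:, s:]     # right-eye view shifted left
    if anticorrelated:
        right = 1.0 - right              # invert contrast for one eye
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = left                   # red channel -> left eye
    rgb[..., 1] = rgb[..., 2] = right    # cyan channels -> right eye
    return rgb

# Example: a bright square target with 8 px of disparity.
img = np.zeros((240, 320)); img[100:140, 140:180] = 1.0
stimulus = make_anaglyph(img, disparity_px=8, anticorrelated=True)
```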


2013 ◽  
Vol 319 ◽  
pp. 343-347
Author(s):  
Ru Ting Xia ◽  
Xiao Yan Zhou

This research aimed to reveal characteristics of the visual attention of low-vision drivers. Near and far stimuli were presented using a three-dimensional (3D) attention measurement system that simulated a traffic environment. We measured subjects' reaction times while their attention shifted under three kinds of simulated peripheral environment illuminance (daylight, twilight, and dawn conditions). Subjects were required to judge whether the target was presented nearer than the fixation point or farther than it. The results showed that the peripheral environment illuminance had an evident influence on drivers' reaction times: reaction times were slower in the dawn and twilight conditions than in the daylight condition, and the distribution of attention favored nearer space over farther space; that is, shifts of attention in 3D space were anisotropic in depth. The results suggest that (1) visual attention may operate under both the precueing paradigm and stimulus controls that include depth information, and (2) the anisotropy of attention shifting depends on the distance over which attention moves, and it was more pronounced in the dawn condition than in the daylight and twilight conditions.


10.29007/72d4 ◽  
2018 ◽  
Author(s):  
He Liu ◽  
Edouard Auvinet ◽  
Joshua Giles ◽  
Ferdinando Rodriguez Y Baena

Computer Aided Surgery (CAS) is helpful, but it clutters an already overcrowded operating theatre and tends to disrupt the workflow of conventional surgery. In order to provide seamless computer assistance with improved immersion and a more natural surgical workflow, we propose an augmented-reality-based navigation system for CAS. Here, we choose to focus on the proximal femoral anatomy, which we register to a plan by processing depth information of the surgical site captured by a commercial depth camera. Intra-operative three-dimensional surgical guidance is then provided to the surgeon through a commercial augmented reality headset to drill a pilot hole in the femoral head, so that the user can perform the operation without additional physical guides. The user can interact intuitively with the system via simple gestures and voice commands, resulting in a more natural workflow. To assess the surgical accuracy of the proposed setup, 30 pilot-hole drilling experiments were performed on femur phantoms. The position and orientation of the drilled guide holes were measured and compared with the preoperative plan; the mean errors were within 2 mm and 2°, results in line with today's commercial computer-assisted orthopedic systems.
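The abstract does not detail the registration or error metrics; as a hedged sketch under standard assumptions, the Python snippet below aligns captured surface points to planned points with an SVD-based (Kabsch) rigid fit and computes the position and angular error of a drilled hole against the plan. Function names and point sets are illustrative only.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points (Kabsch/SVD method). src, dst: (N, 3) corresponding points."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

def hole_errors(entry, axis, plan_entry, plan_axis):
    """Position error (mm) between entry points and angular error (deg)
    between the drilled and planned hole axes."""
    pos_err = np.linalg.norm(entry - plan_entry)
    cosang = np.clip(np.dot(axis, plan_axis)
                     / (np.linalg.norm(axis) * np.linalg.norm(plan_axis)),
                     -1.0, 1.0)
    return pos_err, np.degrees(np.arccos(cosang))
```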


Author(s):  
Dimitrios Chrysostomou ◽  
Antonios Gasteratos

The production of 3D models has long been a popular research topic, and important progress has been made since the early days. Over the last few decades, vision systems have become established as the standard and one of the most efficient sensorial assets in industrial and everyday applications. Because vision provides several vital attributes, many applications adopt novel vision systems in domestic, working, industrial, and other environments. To achieve such goals, a vision system should robustly and effectively reconstruct the 3D surface of the scene and the working space. This chapter discusses different methods for capturing the three-dimensional surface of a scene. Geometric approaches to three-dimensional scene reconstruction are generally based on knowledge of the scene structure derived from the camera’s internal and external parameters. Another class of methods encompasses the photometric approaches, which evaluate pixel intensities to infer the three-dimensional scene structure. The third and final category, the so-called real-aperture approaches, includes methods that use the physical properties of the visual sensors for image acquisition in order to recover the depth information of a scene.
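To make the geometric approach concrete, here is a minimal illustrative Python sketch (not taken from the chapter): given two cameras' projection matrices, built from assumed internal and external parameters, a 3D point is recovered from its two image projections by linear triangulation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point.

    P1, P2 : (3, 4) camera projection matrices K [R | t].
    x1, x2 : (2,) pixel coordinates of the same point in each image.
    Returns the 3D point in the world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Assumed toy setup: identical intrinsics, second camera translated along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 2.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))   # ≈ [0.2, -0.1, 2.0]
```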


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality, and other visual information-related fields. However, a single depth camera cannot obtain complete information about an object by itself due to the limitation of its field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints, but to do so they need to be calibrated so that the complete 3D information can be obtained accurately. However, traditional chessboard-based planar targets are not well suited for calibrating the relative orientations between multiple depth cameras, because the coordinates of the different depth cameras need to be unified into a single coordinate system, and cameras arranged at specific angles to one another share only a very small overlapping field of view. In this paper, we propose a 3D target-based multiple depth camera calibration method. Each plane of the 3D target is used to calibrate an independent depth camera. All planes of the 3D target are unified into a single coordinate system, which means the feature points on the calibration planes are also in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. In addition, a method of precise calibration using lidar is proposed. This method is not only applicable to the 3D target designed for the purposes of this paper, but can also be applied to all 3D calibration objects consisting of planar chessboards, and it can significantly reduce the calibration error compared with traditional camera calibration methods. Furthermore, in order to reduce the influence of the depth camera's infrared transmitter and improve its calibration accuracy, the calibration process of the depth camera is optimized. A series of calibration experiments were carried out, and the experimental results demonstrated the reliability and effectiveness of the proposed method.
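As a hedged illustration of the coordinate-unification step (the paper's actual implementation is not reproduced here), the Python fragment below maps points measured in one depth camera's frame into the shared target coordinate system by inverting that camera's target-to-camera extrinsics; the variable names are assumptions.

```python
import numpy as np

def to_target_frame(points_cam, R_target_to_cam, t_target_to_cam):
    """Map points measured in one depth camera's frame into the common
    coordinate system of the 3D calibration target.

    The extrinsics (R, t) describe how target coordinates map into this
    camera's frame: p_cam = R @ p_target + t. Inverting them unifies all
    cameras in the target frame:  p_target = R.T @ (p_cam - t).
    points_cam: (N, 3) array of points in the camera frame.
    """
    return (points_cam - t_target_to_cam) @ R_target_to_cam

# With per-camera extrinsics (R_i, t_i) estimated from each target plane,
# clouds from all cameras can simply be concatenated in the target frame:
#   merged = np.vstack([to_target_frame(cloud_i, R_i, t_i) for ...])
```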


2011 ◽  
Vol 18 (4) ◽  
pp. 569-574 ◽  
Author(s):  
Masato Hoshino ◽  
Kentaro Uesugi ◽  
James Pearson ◽  
Takashi Sonobe ◽  
Mikiyasu Shirai ◽  
...  

An X-ray stereo imaging system with synchrotron radiation was developed at BL20B2, SPring-8. A portion of a wide X-ray beam was Bragg-reflected by a silicon crystal to produce an X-ray beam which intersects with the direct X-ray beam. Samples were placed at the intersection point of the two beam paths. X-ray stereo images were recorded simultaneously by a detector with a large field of view placed close to the sample. A three-dimensional wire-frame model of a sample was created from the depth information that was obtained from the lateral positions in the stereo image. X-ray stereo angiography of a mouse femoral region was performed as a demonstration of real-time stereo imaging. Three-dimensional arrangements of the femur and blood vessels were obtained.
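The paper's exact stereo geometry is not reproduced here; under a simplified parallel-beam assumption, with the direct and Bragg-reflected beams crossing at an angle θ, the depth of a feature along the direct-beam axis scales with the lateral shift between its two projections, roughly z ≈ Δx / tan θ. A hypothetical Python sketch:

```python
import numpy as np

def depth_from_lateral_shift(x_direct, x_reflected, theta_deg):
    """Approximate depth (along the direct-beam axis) of a feature from
    its lateral positions in the direct and Bragg-reflected projections.

    Assumes parallel beams crossing at theta_deg, so the disparity
    dx = x_reflected - x_direct grows linearly with depth:
    z ≈ dx / tan(theta). Units follow those of the lateral positions.
    """
    dx = np.asarray(x_reflected) - np.asarray(x_direct)
    return dx / np.tan(np.radians(theta_deg))

# Example: a 0.12 mm lateral shift with a 10° crossing angle.
print(depth_from_lateral_shift(3.40, 3.52, 10.0))  # ≈ 0.68 mm
```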


2018 ◽  
Vol 8 (10) ◽  
pp. 1930 ◽  
Author(s):  
Lina Wang ◽  
Chengdong Wang ◽  
Yong Chen

Time-frequency analysis is usually used to reveal how the different frequency components of a signal vary with time, and the time-frequency spectrogram is an important visual tool for displaying this information. The Mesh Surface Generation (MSG) algorithm is widely used in three-dimensional (3D) modeling, and removing hidden lines from the mesh plot is an essential process that produces explicit depth information. In this paper, a fast and effective method is proposed for a time-frequency Spectrogram Mesh Surface Generation (SMSG) display, based on the painter's algorithm. In addition, most portable fault diagnosis devices have little capability to generate a 3D spectrogram, which generally requires a general-purpose computer to run the complex time-frequency analysis algorithms and render a 3D display. However, a general-purpose computer is not portable and is therefore unsuitable for field tests. Hence, the proposed SMSG algorithm is applied to an embedded fault diagnosis device, which is light, low-cost, and real-time. The experimental results show that this approach achieves a high degree of accuracy and saves considerable time.
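The paper's SMSG implementation is not given here; as a minimal Python sketch of the painter's-algorithm idea it builds on, the snippet below draws spectrogram rows from farthest to nearest, each filled with the background colour so that nearer rows overpaint (hide) the lines of the rows behind them, producing a hidden-line waterfall display. The offsets and data are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

def spectrogram_waterfall(S, dy=0.4):
    """Mesh-style waterfall display of a spectrogram matrix S
    (rows = time slices, cols = frequency bins) using the painter's
    algorithm: rows are drawn from farthest to nearest, and each curve is
    filled down to a baseline with the background colour so it overpaints
    (hides) the lines of the rows behind it."""
    n_rows, n_cols = S.shape
    x = np.arange(n_cols)
    baseline = S.min() - 1.0
    fig, ax = plt.subplots()
    for i in range(n_rows - 1, -1, -1):      # back (large offset) to front
        y = S[i] + i * dy                    # vertical offset encodes time
        ax.fill_between(x, y, baseline, color="white", zorder=n_rows - i)
        ax.plot(x, y, color="black", lw=0.8, zorder=n_rows - i)
    ax.set_xlabel("frequency bin")
    ax.set_ylabel("time slice (offset)")
    return fig

# Placeholder data: a drifting spectral peak over 40 time slices.
t = np.arange(40)[:, None]
f = np.arange(256)[None, :]
S = np.exp(-((f - 80 - t) ** 2) / 50.0)
spectrogram_waterfall(S)
plt.show()
```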

