Research of depth information acquisition with two stage structured light method

Author(s):  
Shaozhe Wang ◽  
Song Yang
2016 ◽  
Vol 28 (4) ◽  
pp. 523-532 ◽  
Author(s):  
Akihiro Obara ◽  
Xu Yang ◽  
Hiromasa Oku

[Figure: Concept of SLF generated by two projectors]

Triangulation is commonly used to restore 3D scenes, but its frame rate of less than 30 fps, caused by time-consuming stereo matching, is an obstacle for applications that require results to be fed back in real time. The structured light field (SLF) our group proposed previously reduces the amount of calculation in 3D restoration, realizing high-speed measurement. Specifically, the SLF estimates depth by projecting distance information directly onto a target. The SLF synthesized as previously reported, however, makes it difficult to extract image features for depth estimation. In this paper, we propose synthesizing the SLF using two projectors in a specific layout. The proposed SLF’s basic properties are derived from an optical model. We evaluated the SLF’s performance using a prototype we developed and applied it to high-speed depth estimation of a randomly moving target at a rate of 1000 Hz. We also demonstrate high-speed tracking of the target based on high-speed depth information feedback.
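The conventional triangulation baseline that the SLF is designed to accelerate recovers depth from stereo disparity via z = f·b/d; the expensive step is the stereo matching that produces d for every pixel. A minimal sketch of the depth computation itself (rig parameters are hypothetical, chosen only for illustration):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Conventional triangulation: depth z = f * b / d.

    disparity_px : stereo disparity in pixels (output of stereo matching)
    focal_px     : camera focal length in pixels
    baseline_m   : distance between the two cameras in metres
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: f = 1000 px, baseline = 10 cm.
# A disparity of 100 px then corresponds to a depth of 1 m.
z = depth_from_disparity(100.0, focal_px=1000.0, baseline_m=0.1)
```

The formula itself is cheap; it is producing a dense, reliable disparity map at video rate that limits triangulation to under 30 fps, which is the bottleneck the SLF sidesteps.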


1996 ◽  
Vol 13 (1) ◽  
pp. 187-197 ◽  
Author(s):  
Eric Hiris ◽  
Randolph Blake

Abstract: A series of experiments investigated the perceived direction of motion and depth segregation in motion-transparency displays consisting of two planes of dots moving in different directions. Direction and depth judgments were obtained from human observers viewing these "bi-directional" animation sequences with and without explicit stereoscopic depth information. We found that (1) misperception of motion direction ("direction repulsion") occurs when two spatially intermingled directions of motion are within 60 deg of each other; (2) direction repulsion is minimal at the cardinal directions; (3) perception of two directions of motion always results in separate motion planes segregated in depth; and (4) stereoscopic depth information has no effect on the magnitude of direction repulsion, but it does disambiguate the depth relations between the motion directions. These results are developed within the context of a two-stage model of motion transparency. In this model, motion directions are registered by units subject to inhibitory interactions that cause direction repulsion, and the outputs of these units are pooled by units selective for direction and disparity.


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 639 ◽  
Author(s):  
Wenbin Li ◽  
Yaxin Li ◽  
Walid Darwish ◽  
Shengjun Tang ◽  
Yuling Hu ◽  
...  

Consumer-grade RGBD sensors that provide both colour and depth information have many potential applications, such as robotics control, localization, and mapping, due to their low cost and simple operation. However, the depth measurement provided by consumer-grade RGBD sensors is still inadequate for many high-precision applications, such as rich 3D reconstruction, accurate object recognition, and precise localization, because the systematic errors of RGBD sensors increase rapidly with the ranging distance. Most existing calibration models for depth measurement must be carried out at different distances. In this paper, we reveal the mechanism by which the infrared (IR) camera and IR projector contribute to the overall non-centrosymmetric distortion of a structured-light-pattern-based RGBD sensor. We then propose a new two-step calibration method for RGBD sensors based on the disparity measurement, which is range-independent and has full frame coverage. Three independent calibration models are used for the three main components of the RGBD sensor error: the infrared camera distortion, the infrared projection distortion, and the infrared cone-caused bias. Experiments show that the proposed calibration method provides precise calibration results with full-range and full-frame coverage of the depth measurement. The offset in the edge area at long range (8 m) is reduced from 86 cm to 30 cm, and the relative error is reduced from 11% to 3% of the range distance. Overall, at far range the proposed calibration method improves the depth accuracy by 70% in the central region of the depth frame and 65% in the edge region.
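The abstract's observation that systematic error grows quickly with range follows directly from the disparity-based depth model used by structured-light RGBD sensors: since z = f·b/d, first-order error propagation gives |δz| ≈ z²/(f·b)·|δd|, i.e. a fixed disparity error produces a depth error that grows quadratically with range. A hedged sketch with hypothetical, Kinect-like parameter values (not the sensor or model calibrated in the paper):

```python
def depth_error(z_m, focal_px, baseline_m, disparity_err_px):
    """First-order error propagation for disparity-based depth.

    From z = f*b/d it follows that |dz| ~ z**2 / (f*b) * |dd|,
    so the depth error grows quadratically with the range z.
    """
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

# Hypothetical sensor: f = 580 px, baseline = 7.5 cm, 0.5 px disparity error.
e1 = depth_error(1.0, 580.0, 0.075, 0.5)   # depth error at 1 m
e8 = depth_error(8.0, 580.0, 0.075, 0.5)   # depth error at 8 m: 64x larger
```

This quadratic blow-up is why a disparity-domain calibration, as proposed in the paper, can be range-independent: correcting a constant bias in d removes an error that would otherwise scale with z².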


2014 ◽  
Vol 656 ◽  
pp. 378-387
Author(s):  
Fabio de Felice ◽  
Antonella Petrillo ◽  
Armando Carlomusto

Three-dimensional (3D) geometric shape measurement has found wide application in industrial manufacturing, fast reverse engineering, quality control, the biomedical sciences, etc. In the present paper we focus on reverse engineering, which starts from an existing product and creates a CAD (Computer-Aided Design) model for modification or reproduction of its design. This kind of process is usually undertaken to redesign a system for better maintainability, or to obtain a copy of a system without access to the design from which it was originally produced. There are many different methods for acquiring shape data. Tactile methods are a popular approach to shape capture; the best-known forms are Coordinate Measuring Machines (CMMs) and mechanical or robotic arms with a touch-probe sensing device. Non-contact methods use light, sound, or magnetic fields to acquire shape from objects. In both the contact and non-contact cases, an appropriate analysis must be performed to determine the positions of points on the object's surface. The aim of this paper is to present a new reconstruction method for a three-dimensional measuring system for object acquisition: reverse engineering based on a structured light system. The technique consists in projecting a known pattern of pixels (structured light) onto an object; the way this pattern deforms when it encounters the object's surfaces allows a vision system (a pair of monochromatic digital cameras) to calculate the depth information necessary for surface digitization. The methodology was subsequently applied to re-engineer an aeronautical component that had changed over time and diverged from the initial design. Finally, proposals and possible solutions were studied to ensure a higher quality of manufactured products and substantial savings in the cost of the product's production system.
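The "known pattern of pixels" in structured-light scanning is often a temporal sequence of binary stripes encoded as a Gray code, so that adjacent projector columns differ in exactly one bit and a single decoding error displaces the match by at most one column. A minimal sketch of that encoding and decoding (illustrative of the general technique; the paper does not specify which pattern its system uses):

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code: adjacent column indices differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code to recover the projector column index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Each projector column k is lit or dark in frame i according to bit i of
# gray_encode(k); a camera pixel that observes the bit sequence for column k
# is matched to that column, and the match is then triangulated into depth.
cols = [gray_decode(gray_encode(k)) for k in range(1024)]
```

With 10 such frames, 2**10 = 1024 projector columns can be distinguished, turning the correspondence problem into a per-pixel table lookup rather than a search.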


Author(s):  
Yi Zheng ◽  
Beiwen Li

Abstract: In-situ inspection has drawn much attention in manufacturing due to the importance of quality assurance. With the rapid growth of additive manufacturing technology, the importance of in-line/in-situ inspection has been raised to a higher level because of the many uncertainties that can occur during an additive printing process. Given this, accurate and robust in-situ monitoring can support corrective actions for closed-loop control of a manufacturing process. Contact 3D profilometers, such as stylus profilometers or coordinate measuring machines, can achieve very high accuracies. However, because they require physical contact, such methods have limited measurement speeds and may damage the tested surface. Thus, contact methods are not well suited to real-time in-situ metrology. Non-contact methods include both passive and active approaches. Passive methods (e.g., focus variation or stereo vision) hinge on image-based depth analysis, yet their accuracy may be affected by the lighting conditions of the environment and the texture quality of the surface. Active 3D scanning methods, such as laser scanning or structured light, are suitable for instant quality inspection due to their ability to conduct a quick non-contact 3D scan of the entire surface of a workpiece. Specifically, the fringe projection technique, a variation of the structured light technique, has demonstrated significant potential for real-time in-situ monitoring and inspection, given its ability to conduct simultaneously high-speed (from 30 Hz real time to kilohertz high speed) and high-accuracy (tens of μm) measurements. However, high-speed 3D scanning methods like the fringe projection technique are typically based on the triangulation principle, meaning that depth information is retrieved by analyzing the triangulation relationship between the light emitter (i.e., the projector), the image receiver (i.e., the camera), and the tested sample surface.
Such a measurement scheme cannot reconstruct 3D surfaces with large geometric variations, such as a deep hole or a stair geometry, because large geometric variations block the auxiliary light used in triangulation-based methods, causing shadowed areas. In this paper, we propose a uniaxial fringe projection technique to address this limitation. We measured a stair model using both the conventional triangulation-based fringe projection technique and the proposed method for comparison. Our experiments demonstrate that the proposed uniaxial fringe projection technique can perform high-speed 3D scanning without shadows appearing in the scene. Quantitative testing shows that an accuracy of 35 μm can be obtained when measuring a step-height object with the proposed uniaxial fringe projection system.
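Fringe projection techniques like the one above typically recover a wrapped phase map from a few phase-shifted sinusoidal patterns; the phase is then unwrapped and converted to depth through calibration. A minimal sketch of the standard three-step phase-shifting algorithm (shifts of −2π/3, 0, +2π/3); the offset and modulation values below are synthetic, and this is the generic algorithm rather than the authors' exact pipeline:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Three-step phase shifting with shifts -2pi/3, 0, +2pi/3.

    With I_k = A + B*cos(phi + delta_k), the wrapped phase is
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3).
    """
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: generate three fringe images from a known phase ramp.
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)   # ground-truth phase
A, B = 0.5, 0.4                                    # offset and modulation
imgs = [A + B * np.cos(phi + d) for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
phi_est = wrapped_phase(*imgs)                     # recovers phi exactly
```

Three is the minimum number of images that determines the three unknowns (A, B, phi) per pixel, which is what makes phase shifting compatible with the kilohertz-rate scanning the abstract describes.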


2015 ◽  
Vol 22 (8) ◽  
pp. 43-47 ◽  
Author(s):  
赵泓扬 ZHAO Hong-yang ◽  
姚文卿 YAO Wen-qing
