Error correction of depth images for multiview time-of-flight vision sensors

2020 ◽  
Vol 17 (4) ◽  
pp. 172988142094237
Author(s):  
Yu He ◽  
Shengyong Chen

The developing time-of-flight (TOF) camera is an attractive device for robot vision systems to capture real-time three-dimensional (3D) images, but the sensor suffers from low image resolution and precision. This article proposes an approach to the automatic generation of an imaging model in 3D space for error correction. From observation data, an initial coarse model of the depth image can be obtained for each TOF camera. Its accuracy is then improved by an optimization method. Experiments are carried out using three TOF cameras. Results show that accuracy is dramatically improved by the spatial correction model.
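
As an illustrative sketch of this kind of depth-error correction (not the authors' actual spatial model), a coarse per-camera correction can be fitted by least squares, mapping observed TOF depths to reference depths. The calibration values below are invented for the demo:

```python
import numpy as np

# Hypothetical calibration data: depths reported by a TOF camera vs.
# reference depths measured with a more accurate device (metres).
observed = np.array([0.52, 1.04, 1.57, 2.11, 2.66, 3.22])
reference = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])

# Fit a coarse per-camera correction model: a low-order polynomial
# mapping observed depth to corrected depth (least squares).
coeffs = np.polyfit(observed, reference, deg=2)

def correct_depth(d):
    """Apply the fitted correction to raw TOF depth values."""
    return np.polyval(coeffs, d)

corrected = correct_depth(observed)
```

A refinement step, as in the article, would then optimize such an initial coarse model against further observations.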

2018 ◽  
Vol 7 (4.33) ◽  
pp. 487
Author(s):  
Mohamad Haniff Harun ◽  
Mohd Shahrieel Mohd Aras ◽  
Mohd Firdaus Mohd Ab Halim ◽  
Khalil Azha Mohd Annuar ◽  
Arman Hadi Azahar ◽  
...  

This investigation focuses solely on adapting a vision system algorithm to classify processes and regulate decision making related to task and defect recognition. The idea stresses a new vision algorithm that focuses on shape-matching properties to classify defects occurring on the product. Previously, the system had to process a broad set of data acquired from the object, which reduced speed and efficiency. The proposed defect detection approach, combining Region of Interest, Gaussian smoothing, Correlation, and Template Matching, is introduced. This application provides high computational savings and achieves a better recognition rate of about 95.14%. Each detected defect is reported with its height (z-coordinate), length (y-coordinate), and width (x-coordinate). These data are gathered by the proposed system using a dual camera to perform the three-dimensional transformation.
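
The described pipeline (Region of Interest, Gaussian smoothing, and correlation-based template matching) can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation; the image, ROI, and template are all invented:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """2-D Gaussian kernel, normalised to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def smooth(img, kernel):
    """Naive 'valid' 2-D convolution, for illustration only."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def match_template(roi, template):
    """Normalised cross-correlation of the template over the ROI."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -1.0, (0, 0)
    for i in range(roi.shape[0] - th + 1):
        for j in range(roi.shape[1] - tw + 1):
            w = roi[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            s = (w * t).sum() / denom if denom > 0 else 0.0
            if s > best:
                best, best_pos = s, (i, j)
    return best_pos, best

# Synthetic example: a bright square "defect" in a noisy image.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, (40, 40))
image[12:18, 20:26] += 1.0                   # the defect
smoothed = smooth(image, gaussian_kernel())  # Gaussian smoothing first
roi = smoothed[5:30, 10:34]                  # restrict search to an ROI
template = np.zeros((10, 10))                # shape prototype of the defect
template[2:8, 2:8] = 1.0

pos, score = match_template(roi, template)
```

Restricting correlation to the ROI is what yields the computational savings the abstract mentions: the search space shrinks from the full image to a small window.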


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 664
Author(s):  
Zhihong Ma ◽  
Dawei Sun ◽  
Haixia Xu ◽  
Yueming Zhu ◽  
Yong He ◽  
...  

Three-dimensional (3D) structure is an important morphological trait of plants for describing their growth and biotic/abiotic stress responses. Various methods have been developed for obtaining 3D plant data, but data quality and equipment costs are the main factors limiting their development. Here, we propose a method to improve the quality of 3D plant data using the time-of-flight (TOF) camera Kinect V2. A k-dimensional (k-d) tree was applied to spatial topological relationships for searching points. Background noise points were then removed with a minimum oriented bounding box (MOBB) and a pass-through filter, while outliers and flying-pixel points were removed based on viewpoints and surface normals. After being smoothed with a bilateral filter, the 3D plant data were registered and meshed. We then adjusted the mesh patches to eliminate layered points, which brought the patches closer together: the average distance between patches was 1.88 × 10−3 m and the average angle was 17.64°, which were 54.97% and 48.33% of those values before optimization. The proposed method performed better in reducing noise and the local layered-points phenomenon, and it could help to more accurately determine 3D structure parameters from point clouds and mesh models.
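
A minimal sketch of k-d-tree-based outlier removal of the kind described, assuming SciPy's `cKDTree` for the neighbour search; the synthetic cloud and the statistical threshold are invented, not the paper's parameters:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
# Hypothetical point cloud: a dense plant-like cluster plus sparse noise.
cluster = rng.normal(0.0, 0.05, (500, 3))
noise = rng.uniform(-1.0, 1.0, (20, 3))
points = np.vstack([cluster, noise])

# For each point, query its k nearest neighbours with a k-d tree and
# drop points whose mean neighbour distance is far above the average.
tree = cKDTree(points)
dists, _ = tree.query(points, k=9)      # k=9 -> self + 8 neighbours
mean_d = dists[:, 1:].mean(axis=1)      # skip the zero self-distance
keep = mean_d < mean_d.mean() + 2.0 * mean_d.std()
filtered = points[keep]
```

The k-d tree makes each neighbour query logarithmic rather than linear in the number of points, which matters for dense Kinect V2 clouds.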


Author(s):  
L. Hoegner ◽  
A. Hanel ◽  
M. Weinmann ◽  
B. Jutzi ◽  
S. Hinz ◽  
...  

Obtaining accurate 3D descriptions in the thermal infrared (TIR) is quite a challenging task due to the low geometric resolution of TIR cameras and the low number of strong features in TIR images. Combining the radiometric information of the thermal infrared with 3D data from another sensor can overcome most of the limitations in 3D geometric accuracy. In the case of dynamic scenes with moving objects or a moving sensor system, a combination with RGB cameras or Time-of-Flight (TOF) cameras is suitable. As a TOF camera is an active sensor in the near infrared (NIR) and the thermal infrared camera captures the radiation emitted by the objects in the observed scene, the combination of these two sensors for close-range applications is independent of external illumination or textures in the scene. This article focuses on the fusion of data acquired with both a time-of-flight (TOF) camera and a thermal infrared (TIR) camera. As the radiometric behaviour of many objects differs between the near infrared used by the TOF camera and the thermal infrared spectrum, a direct co-registration using feature points in both intensity images leads to a high number of outliers. A fully automatic workflow for the geometric calibration of both cameras and the relative orientation of the camera system, with one calibration pattern usable in both spectral bands, is presented. Based on the relative orientation, a fusion of the TOF depth image and the TIR image is used for scene segmentation and people detection. An adaptive histogram-based depth-level segmentation of the 3D point cloud is combined with a thermal-intensity-based segmentation. The feasibility of the proposed method is demonstrated in an experimental setup with different geometric and radiometric influences, showing the benefit of combining TOF intensity and depth images with thermal infrared images.
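
A histogram-based depth-level segmentation along the lines described can be sketched as follows. The synthetic depth image, the bin count, and the split-at-empty-bins rule are assumptions for illustration, not the paper's adaptive method:

```python
import numpy as np

# Hypothetical depth image: background at ~3.0 m, a person at ~1.5 m.
rng = np.random.default_rng(2)
depth = rng.normal(3.0, 0.05, (60, 80))
depth[20:50, 30:55] = rng.normal(1.5, 0.05, (30, 25))

# Histogram the depth values and split the range wherever a bin is
# empty, so each contiguous run of occupied bins becomes one depth level.
counts, edges = np.histogram(depth, bins=64)
occupied = counts > 0
# Label contiguous runs of occupied bins (run index = depth level).
run_id = np.cumsum(np.diff(np.concatenate([[0], occupied.astype(int)])) == 1)
labels = np.zeros(depth.shape, dtype=int)
for level in range(1, run_id.max() + 1):
    bins = np.where((run_id == level) & occupied)[0]
    lo, hi = edges[bins[0]], edges[bins[-1] + 1]
    labels[(depth >= lo) & (depth <= hi)] = level
```

Each depth level can then be intersected with a thermal-intensity mask, in the spirit of the combined segmentation the article describes.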


Author(s):  
Sarah Morris ◽  
Ari Goldman ◽  
Brian Thurow

Time-of-Flight (ToF) cameras are a type of range-imaging camera that provides three-dimensional scene information from a single camera. This paper assesses the ability of ToF technology to be used for three-dimensional particle tracking velocimetry (3D-PTV). Using a commercially available ToF camera, various aspects of 3D-PTV are considered, including minimum resolvable particle size, environmental factors (reflections and refractive-index changes), and time resolution. Although it is found that an off-the-shelf ToF camera is not a viable alternative to traditional 3D-PTV measurement systems, basic 3D-PTV measurements are shown with large (6 mm) particles in both air and water to demonstrate potential future use as this technology develops. A summary of necessary technological advances is also discussed.
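
A basic 3D-PTV tracking step, pairing particle positions between consecutive frames by nearest neighbour and estimating velocities, can be sketched as below. The particle positions, flow, and frame rate are synthetic; this is not the paper's measurement system:

```python
import numpy as np

# Hypothetical 3D-PTV data: particle centroids (x, y, z) in metres,
# reconstructed from the ToF depth image in two consecutive frames.
rng = np.random.default_rng(3)
frame_a = rng.uniform(0.0, 0.5, (15, 3))
true_shift = np.array([0.004, -0.002, 0.001])   # uniform flow for the demo
frame_b = frame_a + true_shift + rng.normal(0, 1e-4, (15, 3))

# Nearest-neighbour tracking: pair each particle in frame A with the
# closest particle in frame B, then estimate per-particle velocity.
d = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=2)
matches = d.argmin(axis=1)
dt = 1 / 30.0                                   # frame interval, seconds
velocities = (frame_b[matches] - frame_a) / dt
```

Nearest-neighbour pairing only works when inter-frame displacements are small relative to inter-particle spacing, which is one reason the time resolution of the camera matters for 3D-PTV.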


1987 ◽  
Vol 31 (11) ◽  
pp. 1281-1285
Author(s):  
John G. Kreifeldt ◽  
Ming C. Chuang

A novel and very speculative approach to new research directions for human vision, with application to robotic vision, is described. The goal of the approach is to propose a plausible, implementable, spatial perception model for human vision and apply this model to a stereo robot vision system. The model is based on computer algorithms variously called “Multidimensional Scaling”, well known in psychology and sociology but relatively unknown in engineering. These algorithms can reconstruct a spatially accurate model, to a high level of metric precision, of a “configuration of points” from low-quality, error-prone, non-metric data about the configuration. ALSCAL, a general-purpose computer package adaptable for this purpose, is presently being evaluated. This is a departure from typical engineering approaches, which are directed toward gathering a low volume of highly precise referenced data about the positions of selected points in the visual scene; it substitutes instead an approach of gathering a high volume of very low-precision relative data about the interpoint spacings. It would seem that the latter approach is the one actually used by the human vision system. The results are highly encouraging: two- and three-dimensional test configurations of points are very faithfully reconstructed from as few as 10 points in a configuration using only rank-ordered (i.e. non-metric) information about interpoint spacings. The reconstructions are remarkably robust even under human-like “fuzzy” imprecision in visual measurements.
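
For illustration, the classical (metric) variant of multidimensional scaling can be sketched in a few lines. Note the article's approach relies on non-metric MDS (e.g. ALSCAL), which needs only rank-ordered interpoint spacings, whereas this sketch uses exact distances:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Recover a point configuration from a pairwise distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Hypothetical test configuration of 10 points in the plane.
rng = np.random.default_rng(4)
pts = rng.uniform(-1, 1, (10, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

recon = classical_mds(D)
D_recon = np.linalg.norm(recon[:, None] - recon[None, :], axis=2)
```

The configuration is recovered only up to rotation and reflection, but all interpoint distances are preserved, which is the property the abstract's reconstruction claims rest on.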


2018 ◽  
Vol 2018 (13) ◽  
pp. 464-1-464-6
Author(s):  
Yunseok Song ◽  
Yo-Sung Ho
