Three-Dimensional TOF Robot Vision System

2009 ◽  
Vol 58 (1) ◽  
pp. 141-146 ◽  
Author(s):  
S. Hussmann ◽  
T. Liepert
2018 ◽  
Vol 7 (4.33) ◽  
pp. 487
Author(s):  
Mohamad Haniff Harun ◽  
Mohd Shahrieel Mohd Aras ◽  
Mohd Firdaus Mohd Ab Halim ◽  
Khalil Azha Mohd Annuar ◽  
Arman Hadi Azahar ◽  
...  

This investigation focuses on adapting a vision system algorithm to classify inspection processes and to regulate decision making related to task execution and defect recognition. The idea centres on a new vision algorithm that uses shape-matching properties to classify defects occurring on the product. Previously, the system had to process a large volume of data acquired from the object, which reduced speed and efficiency. The proposed defect detection approach combines Region of Interest selection, Gaussian smoothing, correlation, and template matching. It provides substantial computational savings and achieves a recognition rate of about 95.14%. Each detected defect is reported with its height (z-coordinate), length (y-coordinate), and width (x-coordinate); these data are gathered by the proposed dual-camera system through a three-dimensional transformation.
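The following is a minimal sketch of the ROI, Gaussian smoothing, and template-matching pipeline described above, written with OpenCV. The file names, ROI coordinates, kernel size, and match threshold are illustrative assumptions, not values reported in the paper.

```python
# Sketch of a ROI + Gaussian smoothing + template-matching defect check.
# All file names, ROI bounds, and thresholds are assumed for illustration.
import cv2
import numpy as np

def detect_defect(image_path, template_path, roi=(100, 100, 400, 400), threshold=0.8):
    """Return (found, score, location) for a defect template inside a ROI."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    # Restrict processing to the Region of Interest to cut computation.
    x, y, w, h = roi
    patch = image[y:y + h, x:x + w]

    # Gaussian smoothing suppresses sensor noise before correlation.
    patch = cv2.GaussianBlur(patch, (5, 5), 0)
    template = cv2.GaussianBlur(template, (5, 5), 0)

    # Normalized cross-correlation template matching.
    result = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    # Map the best match back to full-image coordinates.
    location = (max_loc[0] + x, max_loc[1] + y)
    return max_val >= threshold, float(max_val), location

if __name__ == "__main__":
    found, score, loc = detect_defect("product.png", "defect_template.png")
    print(f"defect found: {found}, score: {score:.3f}, at {loc}")
```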


2020 ◽  
Vol 17 (4) ◽  
pp. 172988142094237
Author(s):  
Yu He ◽  
Shengyong Chen

The emerging time-of-flight (TOF) camera is an attractive device for robot vision systems to capture real-time three-dimensional (3D) images, but the sensor suffers from low image resolution and precision. This article proposes an approach that automatically generates an imaging model in 3D space for error correction. From observation data, an initial coarse model of the depth image is obtained for each TOF camera; its accuracy is then improved by an optimization method. Experiments are carried out using three TOF cameras. The results show that the spatial correction model dramatically improves accuracy.
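The abstract does not specify the form of the correction model. The sketch below assumes a simple per-camera polynomial correction of measured depth, fitted coarsely by least squares and then refined by a robust optimizer, purely to illustrate the coarse-model-plus-optimization idea; the depth values are example data.

```python
# Illustrative TOF depth-correction sketch: a coarse polynomial error model is
# fitted to reference observations, then refined by nonlinear optimization.
# The polynomial form and the data arrays are assumptions for illustration.
import numpy as np
from scipy.optimize import least_squares

# Measured TOF depths and ground-truth depths (e.g., from a calibration rig).
measured = np.array([0.52, 1.01, 1.55, 2.08, 2.51, 3.04])   # metres (example data)
reference = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])  # metres (example data)

# Step 1: coarse model -- ordinary polynomial fit of the depth error.
coarse = np.polyfit(measured, reference - measured, deg=2)

# Step 2: refine the coefficients by minimizing a robust residual.
def residual(coeffs):
    corrected = measured + np.polyval(coeffs, measured)
    return corrected - reference

refined = least_squares(residual, coarse, loss="soft_l1").x

corrected = measured + np.polyval(refined, measured)
print("RMS error before:", np.sqrt(np.mean((measured - reference) ** 2)))
print("RMS error after: ", np.sqrt(np.mean((corrected - reference) ** 2)))
```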


1987 ◽  
Vol 31 (11) ◽  
pp. 1281-1285
Author(s):  
John G. Kreifeldt ◽  
Ming C. Chuang

A novel and highly speculative approach to new research directions for human vision, with application to robotic vision, is described. The goal is to propose a plausible, implementable spatial-perception model for human vision and to apply this model to a stereo robot vision system. The model is based on the family of computer algorithms known as “Multidimensional Scaling”, well known in psychology and sociology but relatively unknown in engineering. These algorithms can reconstruct a spatially accurate, metrically precise model of a “configuration of points” from low-quality, error-prone, non-metric data about the configuration. ALSCAL, a general-purpose computer package adaptable for this purpose, is presently being evaluated. This departs from typical engineering approaches, which gather a low volume of highly precise, referenced data about the positions of selected points in the visual scene; instead, it gathers a high volume of very low-precision relative data about the interpoint spacings. The latter approach appears to be the one actually used by the human visual system. The results are highly encouraging: two- and three-dimensional test configurations are reconstructed very faithfully from as few as 10 points, using only rank-ordered (i.e., non-metric) information about the interpoint spacings. The reconstructions remain remarkably robust even under human-like “fuzzy” imprecision in the visual measurements.
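As a hedged illustration of the idea, the sketch below reconstructs a 2D configuration of 10 points from only the rank order of their interpoint distances. It uses scikit-learn's non-metric MDS rather than ALSCAL, and the random point configuration is an assumption made for the example.

```python
# Non-metric multidimensional scaling sketch: recover a point configuration
# from rank-ordered (ordinal) interpoint spacings only. Uses sklearn's MDS
# instead of ALSCAL; the 10-point configuration is randomly generated.
import numpy as np
from scipy.spatial import procrustes
from scipy.spatial.distance import squareform, pdist
from scipy.stats import rankdata
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(10, 2))          # true (unknown) configuration

# Keep only rank-ordered (non-metric) information about interpoint spacings.
dists = pdist(points)
ranks = squareform(rankdata(dists))               # ordinal dissimilarity matrix

# Non-metric MDS recovers the configuration up to rotation, scale, reflection.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0, n_init=10)
recovered = mds.fit_transform(ranks)

# Procrustes alignment measures how faithfully the shape was reconstructed.
_, _, disparity = procrustes(points, recovered)
print(f"Procrustes disparity (0 = perfect reconstruction): {disparity:.4f}")
```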


1983 ◽  
Vol 16 (20) ◽  
pp. 337-341
Author(s):  
V.M. Grishkin ◽  
F.M. Kulakov

2021 ◽  
pp. 004051752098238
Author(s):  
Siyuan Li ◽  
Zhongde Shan ◽  
Dong Du ◽  
Li Zhan ◽  
Zhikun Li ◽  
...  

The three-dimensional composite preform is the main structure of fiber-reinforced composites. During the weaving of a large-sized three-dimensional composite preform, relative rotation or translation between the fiber feeder and the guided array occurs before feeding. In addition, the weaving needles can sit at different heights after moving out of the guided array. These problems are mostly detected and corrected manually. To make the weaving process more precise and efficient, we propose machine-vision-based methods that accurately estimate and adjust the relative position-pose between the fiber feeder and the guided array, and that automate the needle-pressing process by recognizing the positions of the weaving needles. The results show that the estimation error of the relative position-pose is within 5% and that the rate of unrecognized weaving needles is 2%. The proposed methods improve the automation level of weaving and are conducive to the development of preform forming toward digital manufacturing.
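The abstract does not detail the pose-estimation algorithm. The sketch below assumes that known reference marks on the guided array are detected in the camera image and that the relative position-pose is recovered with OpenCV's solvePnP; the 3D model coordinates, 2D detections, and camera intrinsics are all illustrative assumptions.

```python
# Hedged sketch of estimating a relative position-pose from an image, in the
# spirit of the machine-vision method described above. All numeric values
# (model points, detections, intrinsics) are assumed for illustration.
import cv2
import numpy as np

# Known 3-D coordinates of reference marks on the guided array (millimetres).
object_points = np.array([
    [0.0,   0.0,  0.0],
    [100.0, 0.0,  0.0],
    [100.0, 80.0, 0.0],
    [0.0,   80.0, 0.0],
], dtype=np.float64)

# Corresponding 2-D detections in the camera image (pixels), e.g. from
# corner or blob detection on the guide-array marks.
image_points = np.array([
    [320.5, 240.8],
    [612.2, 243.1],
    [608.9, 468.4],
    [318.7, 465.0],
], dtype=np.float64)

# Pinhole camera intrinsics from a prior calibration (assumed values).
camera_matrix = np.array([[1000.0, 0.0, 320.0],
                          [0.0, 1000.0, 240.0],
                          [0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)

# Recover rotation and translation of the guided array relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
rotation, _ = cv2.Rodrigues(rvec)
print("pose found:", ok)
print("rotation matrix:\n", rotation)
print("translation (mm):", tvec.ravel())
```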


1986 ◽  
Vol 16 (4) ◽  
pp. 582-589 ◽  
Author(s):  
Lorenz A. Schmitt ◽  
William A. Gruver ◽  
Assad Ansari

1989 ◽  
Author(s):  
Keiichi Kemmotsu ◽  
Yuichi Sasano ◽  
Katsumi Oshitani
