Range image generator including robot motion

Robotica ◽  
2005 ◽  
Vol 24 (1) ◽  
pp. 113-123
Author(s):  
Mayra Garduño Gaffare ◽  
Bertrand Vachon ◽  
Armando Segovia de los Ríos

The system described here can generate range images that include robot motion. It has two main modules: motion generation and image generation. Motion is modeled using Bézier curves. To compute the range value corresponding to an image pixel, the robot position in the coordinate system is obtained from the trajectory generation. In this way, motion produces distortion in the image, or sequence of images. The resulting range images represent scenes perceived by the robot from a specific location, or during a specified displacement, in a very “real” view.
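The abstract does not give the trajectory formulation, but a Bézier motion model of the kind described can be sketched as follows: control points define the path, and sampled parameter values give the robot poses from which range values are computed. The control-point values here are illustrative only.

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm (numerically stable)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive points.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A planar robot trajectory defined by four control points (x, y).
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

# Sample robot positions along the trajectory; each sample would be
# the viewpoint used to compute the range values for one image (or
# image row), which is how motion distortion enters the range image.
trajectory = [bezier_point(ctrl, t) for t in np.linspace(0.0, 1.0, 5)]
```

Each sampled position would then serve as the origin for the ray casting that produces the corresponding pixels of the distorted range image.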

2003 ◽  
Vol 15 (3) ◽  
pp. 322-330 ◽  
Author(s):  
Jun Masaki ◽  
Nobuhiro Okada ◽  
Eiji Kondo

A range finder realizable with a simple circuit design is proposed. The range finder is based on the slit-ray projection method. In the system, the positions of slit-ray images on an image plane are detected using pattern masks provided electronically to the image plane. Thanks to the electronic pattern masks, the range finder achieves high cost performance, high-speed measurement, and small size. To estimate the measurement speed, a prototype circuit has been developed. The experimental results obtained with this circuit indicate that the range finder will eventually be able to capture a range image with a resolution of 64 × 64 or more in 1/30 s or less. A prototype range finder with a 32 × 32 photodiode array and a laser slit marker has also been developed, and range images have been captured with it. In this paper, with emphasis on demonstrating the practicality of the proposed method, the range finder system and the experimental results from the prototype circuit and the prototype range finder are presented.
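The abstract does not state the triangulation geometry, but slit-ray projection range finders conventionally recover depth from the image position of the slit. A minimal sketch under an assumed pinhole geometry (camera at the origin, projector offset by a baseline, light plane tilted toward the optical axis):

```python
import math

def slit_ray_depth(x_img, focal, baseline, slit_angle):
    """Depth from slit-ray (laser) triangulation, simplified model.

    Assumed geometry: the camera looks along +Z; the slit projector
    sits at X = baseline and tilts its light plane by slit_angle, so a
    lit surface point satisfies X = baseline - Z * tan(slit_angle).
    Projecting with x_img = focal * X / Z and solving for Z gives the
    closed form below.
    """
    return baseline * focal / (x_img + focal * math.tan(slit_angle))

# Consistency check: place a point at depth Z on the light plane,
# project it, and recover the depth from its image coordinate.
Z, f, b, a = 2.0, 500.0, 0.3, math.radians(10)
X = b - Z * math.tan(a)      # point on the light plane at depth Z
x = f * X / Z                # its image coordinate (pixels)
```

In the proposed system the pattern masks serve to locate `x_img` for each scan line electronically; the depth computation itself reduces to this per-pixel formula.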


Author(s):  
SUCHENDRA M. BHANDARKAR

A surface feature hypergraph (SFAHG) representation is proposed for the recognition and localization of three-dimensional objects. The hypergraph representation is shown to be viewpoint independent, resulting in substantial memory savings for the object model database. The resulting hypergraph matching algorithm integrates both relational constraints and the rigid pose constraint in a consistent, unified manner. The matching algorithm is also shown to have polynomial complexity, even in multiple-object scenes where instances of objects partially occlude each other. An algorithm for incrementally constructing the hypergraph representation of an object model from range images of the object taken from different viewpoints is also presented. The hypergraph matching and construction algorithms are shown to be capable of correcting errors in the initial segmentation of the range image. Both algorithms are tested on range images of scenes containing multiple three-dimensional objects with partial occlusion.
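The paper's matching algorithm is not reproduced in the abstract; as a toy stand-in for the idea of relational consistency in feature matching (not the paper's polynomial-time hypergraph algorithm), one can keep only scene-model feature pairs whose pairwise relations agree with every pair already accepted. The feature names and the angle relation below are illustrative assumptions.

```python
from itertools import product

def consistent_matches(scene, model, rel_scene, rel_model, tol=0.1):
    """Greedy relational matching sketch: accept a (scene, model)
    feature pair only if its pairwise relation (here a scalar, e.g.
    the angle between surface normals) agrees within tol with the
    corresponding relation for every pair accepted so far."""
    kept = []
    for s, m in product(scene, model):
        if all(abs(rel_scene[s][s2] - rel_model[m][m2]) <= tol
               for s2, m2 in kept):
            kept.append((s, m))
    return kept

# Two scene surfaces and two model surfaces; the relation is the
# angle (degrees) between each pair of surface normals.
scene, model = ["s1", "s2"], ["m1", "m2"]
rel_scene = {"s1": {"s1": 0.0, "s2": 30.0}, "s2": {"s1": 30.0, "s2": 0.0}}
rel_model = {"m1": {"m1": 0.0, "m2": 30.0}, "m2": {"m1": 30.0, "m2": 0.0}}
matches = consistent_matches(scene, model, rel_scene, rel_model)
```

The relational term plays the role of the hypergraph constraints; the paper additionally enforces a rigid pose constraint, omitted here for brevity.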


Author(s):  
Y. Liang ◽  
Y. H. Sheng

To solve the problems of modeling building facades merely with point features from close-range images, a new method for modeling building facades under line-feature constraints is proposed in this paper. First, camera parameters and sparse spatial point clouds were recovered using SfM, and 3D dense point clouds were generated with MVS. Second, line features were detected based on the gradient direction; the detected line features were fit considering their directions and lengths, then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of the building was triangulated from the point cloud and line features. Experiments show that this method, by combining the point and line features of a close-range image sequence, can effectively reconstruct the geometric facade of buildings, and in particular restore the contour information of the facade.
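The line-fitting step ("fit considering directions and lengths") is not detailed in the abstract; a common building block for it is a total-least-squares fit of grouped edge pixels, whose dominant direction comes from the covariance eigenvector. A minimal sketch, with illustrative data:

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares fit of a 2D line to edge points.

    Returns (centroid, direction): the line passes through the
    centroid along the unit eigenvector of the largest covariance
    eigenvalue, i.e. the dominant direction of the point set."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    return centroid, direction

# Samples of a facade edge lying on y = 0.5 * x + 1.
xs = np.linspace(0.0, 10.0, 50)
pts = np.stack([xs, 0.5 * xs + 1.0], axis=1)
c, d = fit_line_tls(pts)
```

The fitted direction and the segment length (extent of the points along `d`) would then feed the direction- and length-aware matching across images described in the abstract.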


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7024
Author(s):  
Marcos Alonso ◽  
Daniel Maestro ◽  
Alberto Izaguirre ◽  
Imanol Andonegui ◽  
Manuel Graña

Surface flatness assessment is necessary for quality control of metal sheets manufactured from steel coils by roll leveling and cutting. Mechanical-contact-based flatness sensors are being replaced by modern laser-based optical sensors that deliver accurate and dense reconstruction of metal sheet surfaces for flatness index computation. However, the surface range images captured by these optical sensors are corrupted by very specific kinds of noise due to vibrations caused by mechanical processes like degreasing, cleaning, polishing, shearing, and transporting roll systems. Therefore, high-quality flatness optical measurement systems strongly depend on the quality of image denoising methods applied to extract the true surface height image. This paper presents a deep learning architecture for removing these specific kinds of noise from the range images obtained by a laser-based range sensor installed in a rolling and shearing line, in order to allow accurate flatness measurements from the clean range images. The proposed convolutional blind residual denoising network (CBRDNet) is composed of a noise estimation module and a noise removal module implemented by specific adaptation of semantic convolutional neural networks. The CBRDNet is validated on both synthetic and real noisy range image data that exhibit the most critical kinds of noise that arise throughout the metal sheet production process. Real data were obtained from a single laser line triangulation flatness sensor installed in a roll leveling and cut-to-length line. Computational experiments over both synthetic and real datasets clearly demonstrate that CBRDNet achieves superior performance in comparison to traditional 1D and 2D filtering methods, and state-of-the-art CNN-based denoising techniques.
The experimental validation shows an error reduction of up to 15% relative to solutions based on traditional 1D and 2D filtering methods, and between 3% and 10% relative to other deep learning denoising architectures recently reported in the literature.
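CBRDNet's two-module structure (blind noise estimation, then removal) has a simple classical analogue that conveys the idea without the CNN machinery; the sketch below is that analogue, not the paper's architecture, and its constants follow standard robust statistics rather than the paper.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Blind noise-level estimate, the analogue of the noise-estimation
    module: median absolute deviation of the Laplacian response, scaled
    to be unbiased for i.i.d. Gaussian noise."""
    lap = (4 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    # For i.i.d. Gaussian noise, Var(lap) = 20 * sigma^2, and
    # median(|X|) = 0.6745 * std(X) for zero-mean Gaussian X.
    return np.median(np.abs(lap)) / (0.6745 * np.sqrt(20.0))

def denoise(img, sigma):
    """Noise-removal stand-in: a 3x3 box average applied only where the
    local deviation is consistent with the estimated noise level, so
    genuine height steps in the sheet surface are preserved."""
    padded = np.pad(img, 1, mode="edge")
    mean = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    out = img.copy()
    edge = np.abs(img - mean) > 3.0 * sigma   # keep real surface steps
    out[~edge] = mean[~edge]
    return out
```

In CBRDNet both steps are learned jointly, which is what lets it outperform this kind of fixed 1D/2D filtering on the line-specific vibration noise.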


2011 ◽  
Vol 08 (04) ◽  
pp. 273-280
Author(s):  
YUXIANG YANG ◽  
ZENGFU WANG

This paper describes a successful application of the matting Laplacian matrix to the problem of generating high-resolution range images. The matting Laplacian matrix in this paper exploits the fact that discontinuities in range and coloring tend to co-align, which enables high-resolution range images to be generated by integrating a regular camera image into the range data. Using one registered, and potentially high-resolution, camera image as a reference, we iteratively refine the input low-resolution range image in terms of both spatial resolution and depth precision. We show that by using such a matting Laplacian matrix, we can obtain high-quality, high-resolution range images.
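The co-alignment assumption can be illustrated with a much simpler joint-filtering sketch than the paper's matting Laplacian solver: each high-resolution pixel averages nearby low-resolution depth samples, weighted by spatial distance and by similarity in the registered camera image, so depth edges snap to color edges. All parameters below are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map using a registered high-res image.

    Weights combine spatial distance (sigma_s, in low-res pixels) and
    guide-image similarity (sigma_r), the same co-alignment idea the
    matting Laplacian exploits, in its crudest local form."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    sy, sx = H / h, W / w
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y / sy, x / sx        # position in low-res grid
            y0, x0 = int(cy), int(cx)
            ws = vs = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y0 + dy, 0), h - 1)
                    xx = min(max(x0 + dx, 0), w - 1)
                    d2 = (cy - yy) ** 2 + (cx - xx) ** 2
                    # Guide value at the high-res pixel under (yy, xx).
                    gy = min(int((yy + 0.5) * sy), H - 1)
                    gx = min(int((xx + 0.5) * sx), W - 1)
                    dg = guide_hr[y, x] - guide_hr[gy, gx]
                    wgt = np.exp(-d2 / (2 * sigma_s ** 2)
                                 - dg ** 2 / (2 * sigma_r ** 2))
                    ws += wgt
                    vs += wgt * depth_lr[yy, xx]
            out[y, x] = vs / ws
    return out
```

The paper instead solves a global linear system built from the matting Laplacian, iterating to refine both resolution and depth precision; the local filter above only captures the edge-snapping behavior.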


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5185
Author(s):  
Yu Zhai ◽  
Jieyu Lei ◽  
Wenze Xia ◽  
Shaokun Han ◽  
Fei Liu ◽  
...  

This work introduces a super-resolution (SR) algorithm for range images based on self-guided joint filtering (SGJF), which adds the range information of the range image as a filter coefficient to reduce the influence of intensity-image texture on the super-resolved image. A range image SR recognition system is constructed to study the effect of four SR algorithms, including SGJF, on the recognition of laser radar (ladar) range images. The effects of different model library sizes, SR algorithms, SR factors, and noise conditions on recognition are tested experimentally. Results demonstrate that all tested SR algorithms improve the recognition rate of low-resolution (low-res) range images to varying degrees, and that the proposed SGJF algorithm has very good overall recognition performance. Finally, suggestions for the use of SR algorithms in actual scene recognition are offered on the basis of the experimental results.
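The distinguishing idea, making the range image itself one of the filter's guidance terms, can be sketched as a kernel with three factors: spatial distance, intensity similarity, and range similarity. This is a hedged illustration of the self-guidance principle, not the paper's SGJF formulation; all parameter values are assumptions.

```python
import numpy as np

def self_guided_joint_filter(depth, intensity, sigma_s=2.0,
                             sigma_i=0.2, sigma_d=0.5, radius=2):
    """Joint filter whose kernel combines three weights: spatial
    distance, intensity-guide similarity and, following the SGJF idea,
    similarity in the range map itself.  The range term stops intensity
    texture (which has no depth relief) from being copied into depth."""
    H, W = depth.shape
    out = np.empty_like(depth)
    for y in range(H):
        for x in range(W):
            ws = vs = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), H - 1)
                    xx = min(max(x + dx, 0), W - 1)
                    w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                               - (intensity[y, x] - intensity[yy, xx]) ** 2
                               / (2 * sigma_i ** 2)
                               - (depth[y, x] - depth[yy, xx]) ** 2
                               / (2 * sigma_d ** 2))
                    ws += w
                    vs += w * depth[yy, xx]
            out[y, x] = vs / ws
    return out
```

On a flat surface overlaid with intensity texture, the range term keeps the output flat, which is exactly the texture-copying artifact the abstract says SGJF suppresses.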


Author(s):  
JIAN WANG ◽  
ZHEN-QIANG YAO ◽  
QUAN-ZHANG AN ◽  
YAO-JIE ZHU ◽  
XUE-PING ZHANG ◽  
...  

Edge detection is often regarded as a basic step in range image processing because of its crucial effect. The majority of existing edge detection methods cannot satisfy the efficiency requirements of many industrial applications due to huge computational costs. In this paper, a novel instantaneous method named RIDED-2D is proposed for denoising and edge detection on 2D scan lines in range images. In the method, silhouettes of a 2D scan line are classified into eight types by defining a few new coefficients. Several discriminant criteria for large-noise filtering and edge detection are stipulated based on qualitative feature analysis of each type. Using selected feature point candidates, a practical parameter-learning method is provided to determine the threshold set, along with an integrated algorithm implemented by merging calculation steps. Because all the coefficients are based on distances between points or their ratios, RIDED-2D is inherently invariant to translation and rotation transformations. Furthermore, a forbidden-region approach is proposed to eliminate interference from mixed pixels. Key performances of RIDED-2D are evaluated in detail, including computational complexity, time expenditure, accuracy, and stability. The results indicate that RIDED-2D can accurately detect edge points in several real range images containing large and systematic noise, and that the total processing time is less than 0.1 ms on an ordinary PC platform using the integrated algorithm. Compared qualitatively with other state-of-the-art edge detection methods, RIDED-2D exhibits a prominent advantage in computational efficiency. Thus, the proposed method qualifies for real-time processing in demanding industrial applications.
A further contribution of this paper is the introduction of a CPU clock-counting technique to evaluate the performance of the proposed algorithm, suggesting a convenient and objective way to estimate the algorithm's time expenditure on other platforms.
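The abstract's key property, that criteria built only from inter-point distances and their ratios are invariant to translation and rotation, can be demonstrated with a much-reduced sketch (a single jump-edge criterion, not RIDED-2D's eight silhouette types; the threshold is an assumption):

```python
import numpy as np

def scan_line_edges(points, ratio_thresh=3.0):
    """Flag jump edges in a 2D scan line, in the spirit of RIDED-2D:
    a point starts an edge when the gap to its successor exceeds
    ratio_thresh times the local gap scale.  Because the criterion
    uses only distances between neighbouring points (and their ratio),
    it is invariant to translation and rotation."""
    pts = np.asarray(points, dtype=float)
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    scale = np.median(gaps)
    return [i for i, g in enumerate(gaps) if g > ratio_thresh * scale]

# Synthetic scan line: two surfaces separated by a depth jump.
line = [(x * 0.01, 1.0) for x in range(10)] + \
       [(x * 0.01, 2.0) for x in range(10, 20)]
```

Rotating or translating the whole scan line leaves `gaps` unchanged, so the detected edge indices are identical, which is the invariance argument the paper makes for its full coefficient set.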


Author(s):  
K. Nagara ◽  
T. Fuse

With the increasingly widespread use of three-dimensional data, the demand for simplified data acquisition is also increasing. The range camera, a simplified sensor, can acquire a dense range image in a single shot; however, its measurement coverage is narrow and its accuracy is limited. The former drawback can be overcome by registering sequential range images. This approach, however, assumes that the point cloud is error-free. In this paper, we develop an integration method for sequential range images with error adjustment of the point cloud. The proposed method consists of the ICP (Iterative Closest Point) algorithm and self-calibrating bundle adjustment. The ICP result provides the initial estimate for the bundle adjustment. By applying the bundle adjustment, the coordinates of the point cloud are corrected and the camera poses are updated. Experiments on real data confirm the effectiveness of the proposed method.
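The ICP stage that supplies the initial estimate can be sketched compactly: match each source point to its nearest target point, then solve the optimal rigid transform in closed form (Kabsch/Procrustes via SVD). This is a generic ICP step, not the paper's specific implementation.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration on (N, 3) point arrays.  Returns (R, t) such
    that source points map onto their matched targets as R @ p + t."""
    # Nearest-neighbour correspondences (brute force for clarity).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(axis=1)]
    # Closed-form rigid alignment of source onto matched points.
    cs, cm = source.mean(0), matched.mean(0)
    H = (source - cs).T @ (matched - cm)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cm - R @ cs
    return R, t

# Demo: a grid of points displaced by a small known rigid motion.
g = np.arange(3.0)
src = np.array([[x, y, z] for x in g for y in g for z in g])
th = np.radians(2.0)
Rt = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th), np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
tgt = src @ Rt.T + np.array([0.05, 0.02, 0.0])
R, t = icp_step(src, tgt)
```

In the proposed pipeline the poses from iterating such steps seed the self-calibrating bundle adjustment, which then corrects both the point coordinates and the camera poses rather than treating the point cloud as error-free.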


Author(s):  
H. Zhao ◽  
D. Acharya ◽  
M. Tomko ◽  
K. Khoshelham

Abstract. Indoor localization, navigation and mapping systems rely heavily on initial sensor pose information to achieve high accuracy. Most existing indoor mapping and navigation systems cannot initialize the sensor poses automatically, and consequently cannot perform relocalization or recover from a pose estimation failure. For most indoor environments, a map or a 3D model is often available and can provide useful information for relocalization. This paper presents a novel relocalization method for lidar sensors in indoor environments that estimates the initial lidar pose using a CNN pose regression network trained on a 3D model. A set of synthetic lidar frames is generated from the 3D model with known poses. Each frame is rendered as a one-channel range image and used to train the CNN pose regression network from scratch to predict the initial sensor location and orientation. The network trained on synthetic range images is then used to estimate the pose of the lidar from real range images captured in the indoor environment. The results show that the proposed CNN regression network can learn from synthetic lidar data and estimate the pose of real lidar data with an accuracy of 1.9 m and 8.7 degrees.
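The one-channel range images used as network input are conventionally obtained by a spherical projection of the lidar point cloud; the abstract does not give the projection, so the sketch below uses a standard azimuth/elevation binning with assumed field-of-view and resolution values.

```python
import numpy as np

def points_to_range_image(points, h=16, w=64, fov_up=15.0, fov_down=-15.0):
    """Project a lidar point cloud into a one-channel range image:
    each pixel stores the distance of the point falling into that
    azimuth/elevation bin (the nearer point wins on collisions).
    Unfilled pixels remain +inf."""
    pts = np.asarray(points, dtype=float)
    r = np.linalg.norm(pts, axis=1)
    yaw = np.arctan2(pts[:, 1], pts[:, 0])            # azimuth
    pitch = np.arcsin(pts[:, 2] / r)                  # elevation
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fu - pitch) / (fu - fd) * h).astype(int).clip(0, h - 1)
    img = np.full((h, w), np.inf)
    for ui, vi, ri in zip(u, v, r):
        img[vi, ui] = min(img[vi, ui], ri)
    return img
```

Rendering such images from poses sampled in the 3D model yields the synthetic training set; the same projection applied to real scans produces the network's inputs at test time.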

