DEVELOPMENT OF INTEGRATION AND ADJUSTMENT METHOD FOR SEQUENTIAL RANGE IMAGES

Author(s): K. Nagara, T. Fuse

With the increasingly widespread use of three-dimensional data, the demand for simplified data acquisition is also growing. The range camera, a simplified sensor, can acquire a dense range image in a single shot; however, its measurement coverage is narrow and its measurement accuracy is limited. The former drawback can be overcome by registering sequential range images; this approach, however, assumes that the point cloud is error-free. In this paper, we develop an integration method for sequential range images that includes error adjustment of the point cloud. The proposed method combines the ICP (Iterative Closest Point) algorithm with self-calibration bundle adjustment: the ICP solution serves as the initial estimate for the bundle adjustment, which then refines the coordinates of the point cloud and updates the camera poses. Experiments on real data confirm the effectiveness of the proposed method.
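As a sketch of the registration stage, the following minimal point-to-point ICP (NumPy/SciPy, a textbook formulation rather than the authors' implementation) estimates the rigid transform that would serve as the initial value for the bundle adjustment; all names are illustrative.

```python
# Minimal point-to-point ICP sketch; the pose it returns is the kind of
# initial estimate the paper feeds into self-calibration bundle adjustment.
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source, target, iters=50, tol=1e-6):
    """Align source (N, 3) to target (M, 3); returns rotation R and
    translation t such that source @ R.T + t approximates target."""
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(source @ R.T + t)   # closest-point matches
        R, t = best_fit_transform(source, target[idx])
        if abs(prev_err - dist.mean()) < tol:      # stop when error stalls
            break
        prev_err = dist.mean()
    return R, t
```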

2020, Vol. 12 (8), pp. 1240
Author(s): Xabier Blanch, Antonio Abellan, Marta Guinau

The emerging use of photogrammetric point clouds in three-dimensional (3D) monitoring processes has revealed some constraints with respect to the use of LiDAR point clouds. Point clouds (PCs) obtained by time-lapse photogrammetry often have lower density and precision, especially when Ground Control Points (GCPs) are not available or the camera system cannot be properly calibrated. This paper presents a new workflow called Point Cloud Stacking (PCStacking) that overcomes these restrictions by exploiting the iterative solutions for both camera position and internal calibration parameters obtained during bundle adjustment. The basic principle of the stacking algorithm is straightforward: it computes the median of the Z coordinates of each point across multiple photogrammetric models, giving a resulting PC with greater precision than any of the individual PCs. The different models are reconstructed from images taken simultaneously from at least five points of view, reducing the systematic errors associated with the photogrammetric reconstruction workflow. The algorithm was tested on both a synthetic point cloud and a real 3D dataset from a rock cliff. The synthetic data were created using mathematical functions that emulate photogrammetric models; the real data were obtained with very low-cost photogrammetric systems developed specifically for this experiment. The algorithm improved the resulting point clouds in both cases: for example, the 25th and 75th error percentiles were reduced from 3.2 cm to 1.4 cm in the synthetic tests and from 1.5 cm to 0.5 cm under real conditions.
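The per-point median at the heart of PCStacking reduces to a single robust statistic once the models are co-registered. A minimal sketch, assuming the individual models have already been resampled onto a common (H, W) XY grid as height rasters (a simplification of the paper's per-point formulation):

```python
# PCStacking core idea: per-cell median of Z across K co-registered models.
import numpy as np

def pc_stack(z_rasters):
    """z_rasters: (K, H, W) array of height models, NaN where a model has
    no data. Returns the stacked (H, W) height image; the median suppresses
    errors that vary between models while keeping the common signal."""
    return np.nanmedian(np.asarray(z_rasters, dtype=float), axis=0)
```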


Sensors, 2018, Vol. 18 (11), pp. 3949
Author(s): Wei Li, Mingli Dong, Naiguang Lu, Xiaoping Lou, Peng Sun

An extended robot–world and hand–eye calibration method is proposed in this paper to estimate the transformation relationships between the camera and the robot device. The approach targets mobile and medical robotics applications, where precise (often expensive or unsterile) calibration objects, or sufficient movement space, cannot be made available at the work site. First, a mathematical model is established that formulates the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. Subsequently, a sparse bundle adjustment is introduced to optimize the robot–world and hand–eye calibration as well as the reconstruction results. Finally, a validation experiment on two kinds of real data sets demonstrates the effectiveness and accuracy of the proposed approach: the relative translation error of the rigid transformation is less than 8/10,000 for a Denso robot over a movement range of 1.3 m × 1.3 m × 1.2 m, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
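To illustrate the Kronecker-product formulation, the sketch below solves the classical AX = ZB robot-world/hand-eye problem in closed form: the rotation constraint R_A R_X = R_Z R_B is vectorised via vec(A X B) = (B^T ⊗ A) vec(X) and solved as a null-space problem, after which the translations follow by linear least squares. Pose conventions vary between papers, so treat this as a generic initialisation rather than the authors' pipeline, which further refines the result with sparse bundle adjustment.

```python
# Closed-form AX = ZB initialisation via the Kronecker product (sketch).
import numpy as np

def solve_ax_zb(As, Bs):
    """As, Bs: lists of 4x4 homogeneous poses satisfying A_i X = Z B_i.
    Returns X (e.g. gripper-to-camera) and Z (e.g. base-to-world)."""
    n = len(As)
    K = np.zeros((9 * n, 18))
    for i, (A, B) in enumerate(zip(As, Bs)):
        RA, RB = A[:3, :3], B[:3, :3]
        # vec(RA RX) = (I3 kron RA) vec(RX); vec(RZ RB) = (RB.T kron I3) vec(RZ)
        K[9*i:9*i+9, :9] = np.kron(np.eye(3), RA)
        K[9*i:9*i+9, 9:] = -np.kron(RB.T, np.eye(3))
    v = np.linalg.svd(K)[2][-1]          # null-space direction of the system

    def to_rot(m9):                      # project onto a proper rotation
        U, _, Vt = np.linalg.svd(m9.reshape(3, 3, order='F'))
        R = U @ Vt
        return R * np.sign(np.linalg.det(R))

    RX, RZ = to_rot(v[:9]), to_rot(v[9:])
    # Translations: RA tX - tZ = RZ tB - tA, linear in (tX, tZ)
    C = np.zeros((3 * n, 6))
    d = np.zeros(3 * n)
    for i, (A, B) in enumerate(zip(As, Bs)):
        C[3*i:3*i+3, :3] = A[:3, :3]
        C[3*i:3*i+3, 3:] = -np.eye(3)
        d[3*i:3*i+3] = RZ @ B[:3, 3] - A[:3, 3]
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X, Z = np.eye(4), np.eye(4)
    X[:3, :3], X[:3, 3] = RX, t[:3]
    Z[:3, :3], Z[:3, 3] = RZ, t[3:]
    return X, Z
```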


Author(s): Suchendra M. Bhandarkar

A surface feature hypergraph (SFAHG) representation is proposed for the recognition and localization of three-dimensional objects. The hypergraph representation is shown to be viewpoint independent, resulting in substantial memory savings for the object model database. The resulting hypergraph matching algorithm integrates both relational constraints and the rigid pose constraint in a consistent, unified manner, and is shown to have polynomial complexity even in multiple-object scenes in which objects partially occlude each other. An algorithm is also presented for incrementally constructing the hypergraph representation of an object model from range images of the object taken from different viewpoints. Both the matching and construction algorithms are shown to be capable of correcting errors in the initial segmentation of the range image, and both are tested on range images of scenes containing multiple three-dimensional objects with partial occlusion.
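As a rough illustration, the sketch below encodes the kind of structure such a hypergraph carries: nodes for surface patches, hyperedges for sets of mutually adjacent patches with viewpoint-independent relational attributes, and a relational compatibility test a matcher would apply before enforcing the rigid pose constraint. The fields and tolerance are simplified placeholders, not the paper's definitions.

```python
# Simplified surface-feature hypergraph and relational screening test.
from dataclasses import dataclass, field

@dataclass
class SurfaceNode:
    kind: str            # e.g. 'planar', 'convex', 'concave'
    area: float

@dataclass
class HyperEdge:
    nodes: frozenset     # indices of mutually adjacent surface patches
    angles: dict         # pairwise dihedral angles (viewpoint independent)

@dataclass
class SFAHG:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

def compatible(scene_e, model_e, scene_hg, model_hg, angle_tol=0.1):
    """Hyperedges may match only if their node types and relational
    (angle) attributes agree; the rigid pose check would follow this."""
    if len(scene_e.nodes) != len(model_e.nodes):
        return False
    if sorted(scene_hg.nodes[i].kind for i in scene_e.nodes) != \
       sorted(model_hg.nodes[i].kind for i in model_e.nodes):
        return False
    sa = sorted(scene_e.angles.values())
    ma = sorted(model_e.angles.values())
    return len(sa) == len(ma) and all(abs(a - b) <= angle_tol
                                      for a, b in zip(sa, ma))
```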


Author(s): M. Shahbazi, G. Sohn, J. Théau, P. Ménard

Along with the advancement of unmanned aerial vehicles (UAVs), the improvement of high-resolution cameras and the development of vision-based mapping techniques, unmanned aerial imagery has attracted remarkable interest among researchers and industries. These images can provide data of unprecedented spatial and temporal resolution for three-dimensional (3D) modelling. In this paper, we present our theoretical and technical work on the development, implementation and evaluation of a UAV-based photogrammetric system for precise 3D modelling, evaluated preliminarily on a gravel-pit surveying application. The hardware of the system comprises an electrically powered helicopter, a 16-megapixel visible camera and an inertial navigation system. The software consists of in-house programs for sensor calibration, platform calibration, system integration and flight planning, together with algorithms developed for structure-from-motion (SfM) computation: sparse matching, motion estimation, bundle adjustment and dense matching.
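The sparse-matching and motion-estimation steps can be sketched with standard OpenCV building blocks. The paper describes in-house implementations, so the generic stand-in below (ORB features, RANSAC essential-matrix estimation, pose recovery) only illustrates the structure of those steps.

```python
# Generic two-view sparse matching and motion estimation (OpenCV sketch).
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Relative camera motion between two overlapping frames.
    K: 3x3 intrinsic matrix from the sensor calibration step."""
    orb = cv2.ORB_create(nfeatures=4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier correspondences while fitting E
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # subsequently refined by bundle adjustment
```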


Sensors, 2021, Vol. 21 (21), pp. 7024
Author(s): Marcos Alonso, Daniel Maestro, Alberto Izaguirre, Imanol Andonegui, Manuel Graña

Surface flatness assessment is necessary for quality control of metal sheets manufactured from steel coils by roll leveling and cutting. Mechanical-contact-based flatness sensors are being replaced by modern laser-based optical sensors that deliver accurate, dense reconstructions of metal sheet surfaces for flatness index computation. However, the surface range images captured by these optical sensors are corrupted by very specific kinds of noise caused by vibrations from mechanical processes such as degreasing, cleaning, polishing, shearing, and transporting roll systems. High-quality optical flatness measurement therefore depends strongly on the image denoising applied to extract the true surface height image. This paper presents a deep learning architecture for removing these specific kinds of noise from range images obtained by a laser-based range sensor installed in a rolling and shearing line, so that accurate flatness measurements can be computed from the cleaned range images. The proposed convolutional blind residual denoising network (CBRDNet) is composed of a noise estimation module and a noise removal module, implemented by specifically adapting semantic convolutional neural networks. CBRDNet is validated on both synthetic and real noisy range image data exhibiting the most critical kinds of noise arising throughout the metal sheet production process; the real data were obtained from a single laser-line triangulation flatness sensor installed in a roll-leveling and cut-to-length line. Computational experiments over both datasets show that CBRDNet outperforms traditional 1D and 2D filtering methods as well as state-of-the-art CNN-based denoising techniques, with error reductions of up to 15% relative to the traditional filters and of 3% to 10% relative to the deep learning denoising architectures recently reported in the literature.
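A minimal PyTorch sketch of the two-module blind residual design described above: one subnetwork estimates a per-pixel noise map and a second predicts the residual to subtract. Layer counts, channel widths and the single-channel height input are illustrative assumptions, not the published CBRDNet configuration.

```python
# Two-module blind residual denoiser (illustrative, not the paper's network).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(inplace=True))

class BlindResidualDenoiser(nn.Module):
    def __init__(self, ch=32, est_layers=4, rem_layers=8):
        super().__init__()
        # Module 1: estimate a per-pixel noise-level map from the range image
        est = [conv_block(1, ch)]
        est += [conv_block(ch, ch) for _ in range(est_layers - 2)]
        est += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.estimator = nn.Sequential(*est)
        # Module 2: predict the noise residual from image + noise map
        rem = [conv_block(2, ch)]
        rem += [conv_block(ch, ch) for _ in range(rem_layers - 2)]
        rem += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.remover = nn.Sequential(*rem)

    def forward(self, noisy):             # noisy: (B, 1, H, W) surface heights
        sigma = self.estimator(noisy)     # estimated noise-level map
        residual = self.remover(torch.cat([noisy, sigma], dim=1))
        return noisy - residual, sigma    # clean estimate and noise map
```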


2003, Vol. 36 (6), pp. 1475-1479
Author(s): J. Peters

The integration of the three-dimensional profile of each node of the reciprocal lattice, without any a priori modelling of the shape of the reflections, is a prerequisite for improving the capability of area detectors in diffraction studies. Bolotovsky et al. [J. Appl. Cryst. (1995), 28, 86–95] published a new method of area-detector peak integration based on a statistical analysis of pixel intensities and suggested its generalization to the processing of high-resolution three-dimensional electronic detector data. This generalization is carried out in the present work, respecting the special requirements of data collected by neutron diffraction. The results are compared with other integration methods: the seed-skewness method is shown to give very reliable results while simultaneously optimizing the standard deviations. The integration procedures are applied to real data, which are refined and compared with benchmark results.
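The core of the seed-skewness criterion fits in a few lines: peel the strongest pixels off the intensity distribution until the remaining background is no longer positively skewed, then integrate the peeled peak above that background. The sketch below is a 1D simplification that ignores the connectivity (region growing from a seed pixel) the real method enforces, and the tolerance is an arbitrary placeholder.

```python
# Seed-skewness peak/background separation, 1D simplification (sketch).
import numpy as np
from scipy.stats import skew

def seed_skewness_integrate(counts, skew_tol=0.1):
    """counts: flattened pixel (or voxel) intensities of one reflection box.
    Returns (background-subtracted peak intensity, background per pixel)."""
    vals = np.sort(np.asarray(counts, dtype=float))[::-1]  # strongest first
    n_peak = 0
    # Reassign pixels to the peak until the background looks symmetric
    while n_peak < len(vals) - 3 and skew(vals[n_peak:]) > skew_tol:
        n_peak += 1
    bg = vals[n_peak:].mean()
    return vals[:n_peak].sum() - n_peak * bg, bg
```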


Author(s): M. Bassier, M. Vergauwen, B. Van Genechten

Semantically rich three-dimensional models such as Building Information Models (BIMs) are increasingly used in digital heritage. They provide the information required by varying stakeholders during the different stages of a historic building's life cycle, which is crucial in the conservation process. The creation of as-built BIM models is based on point cloud data; however, manually interpreting this data is labour intensive and often leads to misinterpretations. By automatically classifying the point cloud, the information can be processed more efficiently. A key aspect of this automated scan-to-BIM process is the classification of building objects.

In this research we aim to automatically recognise elements in existing buildings in order to create compact semantic information models. Our algorithm efficiently extracts the main structural components, such as floors, ceilings, roofs, walls and beams, despite the presence of significant clutter and occlusions. More specifically, Support Vector Machines (SVMs) are proposed for the classification. The algorithm is evaluated on real data from a variety of existing buildings. The results show that the classifier recognises the objects with both high precision and high recall, so entire data sets can be reliably labelled at once. The approach enables experts to better document and process heritage assets.
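A minimal scikit-learn sketch of the classification stage: an RBF-kernel SVM over per-patch geometric descriptors. The feature set and class labels below are generic stand-ins for those used in the paper.

```python
# SVM classification of point-cloud patches into structural classes (sketch).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_classifier(features, labels):
    """features: (N, F) per-patch descriptors, e.g. mean normal (3 values),
    height above floor, planarity. labels: (N,) class ids, e.g.
    0=floor, 1=ceiling, 2=roof, 3=wall, 4=beam."""
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
    clf.fit(features, labels)
    return clf

# Usage: labels_new = train_classifier(X_train, y_train).predict(X_new)
```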


Author(s): P. Biasutti, J.-F. Aujol, M. Brédif, A. Bugeau

This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most existing lines of research tackle this problem directly in 3D space. This work promotes an alternative approach based on a 2D range-image representation of the 3D point cloud, taking advantage of the fact that disocclusion has been studied intensively in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms selects the occluding object to be removed. A variational image inpainting technique then reconstructs the area occluded by that object. Finally, the range image is unprojected back into a 3D point cloud. Experiments on real data demonstrate the effectiveness of this procedure in terms of both accuracy and speed.
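The round trip from point cloud to range image and back can be sketched as follows, using OpenCV's cv2.inpaint (Telea) as a crude stand-in for the paper's variational inpainting. It assumes an ordered scan whose (H, W) sensor topology is preserved in the point array; the 8-bit normalisation is a shortcut that a float-valued variational solver would avoid.

```python
# Range-image disocclusion round trip (sketch; inpainting method swapped).
import cv2
import numpy as np

def disocclude(points, H, W, mask):
    """points: (H*W, 3) ordered LiDAR returns (sensor topology preserved).
    mask: (H, W) uint8, 255 on pixels of the occluding object to remove."""
    ranges = np.linalg.norm(points, axis=1).reshape(H, W)
    rmax = ranges.max()
    img = (ranges / rmax * 255).astype(np.uint8)   # 8-bit for cv2.inpaint
    filled = cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    new_ranges = filled.astype(float) / 255 * rmax
    # Unproject: rescale each pixel's viewing ray by its inpainted range
    norms = np.clip(np.linalg.norm(points, axis=1, keepdims=True), 1e-9, None)
    return (points / norms) * new_ranges.reshape(-1, 1)
```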

