Pose estimation of a fast tumbling space noncooperative target using the time-of-flight camera

Author(s):  
Qi-shuai Wang ◽  
Guo-ping Cai

This article proposes a pose estimation method for a fast tumbling space noncooperative target. The core idea is to extract the target's body-fixed coordinate system from the geometric characteristics of the target's point cloud and then use this coordinate system to realize pose initialization and pose tracking. To extract the body-fixed coordinate system, a point cloud of the target, obtained by a time-of-flight camera, is first divided into small planar point clouds; the geometric information of these planar point clouds is then used to extract the target's descriptive structures, such as the target surfaces and the solar panel supports; finally, the body-fixed coordinate system is determined from the geometric characteristics of these structures. This coordinate system is then used to determine the pose of consecutive point clouds of the target, that is, to realize pose initialization and pose tracking; however, accumulated bias often emerges during pose tracking. To mitigate this bias, a pose graph optimization method is adopted. At the end of the article, the performance of the proposed method is evaluated by numerical simulations. Simulation results show that when the distance between the target and the chaser is 10 m, the estimation errors of the target's attitude and position are 0.025° and 0.026 m, respectively, indicating that the proposed method can achieve high-precision pose estimation of a noncooperative target.
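The extraction of a body-fixed frame from a point cloud can be sketched in simplified form. The snippet below uses principal component analysis of the whole cloud as a stand-in for the paper's plane-based structure extraction (the actual method segments planes and solar-panel supports first); it is an illustrative sketch, not the authors' algorithm:

```python
import numpy as np

def body_fixed_frame(points):
    """Estimate a body-fixed frame from a target point cloud.

    Simplified stand-in for plane-based structure extraction: PCA of
    the cloud yields three orthogonal axes that can serve as a
    body-fixed coordinate system for a roughly symmetric target.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Right singular vectors of the centered cloud are the principal
    # axes, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt
    # Enforce a right-handed frame.
    if np.linalg.det(axes) < 0:
        axes[2] *= -1
    return centroid, axes
```

Tracking the frame across consecutive clouds then gives the relative pose between frames, which is where the accumulated bias addressed by pose graph optimization arises.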

1973 ◽  
Vol 28 (2) ◽  
pp. 206-215
Author(s):  
Hanns Ruder

Basic to the treatment of collective rotations is the definition of a body-fixed coordinate system. A kinematical method is derived to obtain the Hamiltonian of an n-body problem for a given definition of the body-fixed system. From this exact Hamiltonian, a consistent perturbation expansion in terms of the total angular momentum leads to two exact expressions: one for the collective rotational energy, which must be added to the ground-state energy in this order of perturbation, and a second for the effective inertia tensor in the ground state. The discussion of these results leads to two criteria for defining the best body-fixed coordinate system, namely a differential equation and a variational principle. The equivalence of both is shown.
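For context, the leading-order collective rotational energy in such treatments typically takes the familiar textbook form (a standard expression, not quoted from the article itself):

```latex
E_{\text{rot}} = \frac{\hbar^2}{2\,\Theta_{\text{eff}}}\, J(J+1),
```

where $J$ is the total angular momentum quantum number and $\Theta_{\text{eff}}$ an effective moment of inertia, here obtained from the ground-state effective inertia tensor derived in the perturbation expansion.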


2021 ◽  
Author(s):  
Lun H. Mark

This thesis investigates how the geometry of complex objects relates to LIDAR scanning with Iterative Closest Point (ICP) pose estimation and provides statistical means to assess pose accuracy. LIDAR scanners have become essential parts of space vision systems for autonomous docking and rendezvous. Principal Component Analysis based geometric constraint indices have been found to be strongly related to both the pose error norm and the error of each individual degree of freedom. This leads to the development of several strategies for identifying the best view of an object and the optimal combination of localized scanned areas of the object's surface to achieve accurate pose estimation. Also investigated is the possible relation between ICP pose estimation accuracy and the distribution or allocation of the point cloud. The simulation results were validated using point clouds generated by scanning models of Quicksat and a cuboctahedron with Neptec's TriDAR scanner.
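One simple PCA-based geometric constraint index in this spirit is the ratio of the smallest to largest eigenvalue of the scanned patch's covariance matrix: a nearly planar or linear patch (small ratio) constrains fewer degrees of freedom in ICP. This definition is illustrative; the thesis's exact indices may be formulated differently:

```python
import numpy as np

def geometric_constraint_index(points):
    """Illustrative PCA-based constraint index for a scanned patch.

    The covariance eigenvalues describe the spread of the points
    along the principal axes; the ratio of smallest to largest
    eigenvalue is one simple scalar measure of how well the patch
    geometry constrains an ICP pose solution.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals = np.sort(np.linalg.eigvalsh(cov))  # ascending order
    return eigvals[0] / eigvals[-1]
```

A flat wall patch scores near zero (poorly constrained), while a fully 3-D surface region scores closer to one.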


2019 ◽  
Vol 9 (16) ◽  
pp. 3273 ◽  
Author(s):  
Wen-Chung Chang ◽  
Van-Toan Pham

This paper develops a registration architecture for estimating the relative pose, including the rotation and translation, of an object with respect to a model in 3-D space based on 3-D point clouds captured by a 3-D camera. In particular, this paper addresses the time-consuming nature of 3-D point cloud registration, which is essential for closed-loop industrial automated assembly systems that demand accurate pose estimation in fixed time. First, two different descriptors are developed to extract coarse and detailed features of these point cloud data sets in order to create training data sets covering diversified orientations. Second, to guarantee fast pose estimation in fixed time, a novel registration architecture employing two consecutive convolutional neural network (CNN) models is proposed. After training, the proposed CNN architecture can estimate the rotation between the model point cloud and a data point cloud, followed by translation estimation based on computing average values. By covering a smaller range of orientation uncertainty than the full range covered by the first CNN model, the second CNN model can precisely estimate the orientation of the 3-D point cloud. Finally, the performance of the proposed algorithm has been validated by experiments in comparison with baseline methods. Based on these results, the proposed algorithm significantly reduces the estimation time while maintaining high precision.
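The translation step described above, computed from average values once the rotation is known, can be sketched as centroid differencing (a common way to realize an average-based translation estimate; the paper's exact averaging scheme may differ):

```python
import numpy as np

def estimate_translation(model_points, data_points, rotation):
    """Estimate translation once the rotation is known.

    After rotating the model cloud by the estimated rotation, the
    translation aligning it with the data cloud is the difference of
    the two centroids (an average-based estimate).
    """
    rotated = model_points @ rotation.T
    return data_points.mean(axis=0) - rotated.mean(axis=0)
```

This closed-form step is cheap, which is why only the rotation needs to be learned by the two CNN stages.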


2021 ◽  
Vol 13 (21) ◽  
pp. 4239
Author(s):  
Jie Li ◽  
Yiqi Zhuang ◽  
Qi Peng ◽  
Liang Zhao

On-orbit space technology is used for tasks such as the relative navigation of non-cooperative targets, rendezvous and docking, on-orbit assembly, and space debris removal. In particular, the pose estimation of space non-cooperative targets is a prerequisite for these applications. The capabilities of a single sensor are limited, making it difficult to achieve high accuracy over the full measurement range. Against this backdrop, a non-cooperative target pose measurement system fusing multi-source sensors was designed in this study. First, a cross-source point cloud fusion algorithm was developed. This algorithm uses the unified and simplified expression of geometric elements in conformal geometric algebra (CGA), breaks the traditional point-to-point correspondence, and constructs matching relationships between points and spheres. Next, for the fused point cloud, we propose a CGA-based plane clustering method to eliminate point cloud diffusion and then reconstruct the 3D contour model. Finally, we use a twistor together with the Clohessy–Wiltshire equation to obtain the attitude and other motion parameters of the non-cooperative target through an unscented Kalman filter. In both numerical simulations and semi-physical experiments, the proposed measurement system met the accuracy requirements for non-cooperative target measurement, and the estimation error of the rotating spindle angle was 30% lower than that of previously studied methods. The proposed cross-source point cloud fusion algorithm can achieve high registration accuracy for point clouds with different densities and small overlap rates.
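For reference, the Clohessy–Wiltshire equations used in the filtering step describe relative motion about a circular reference orbit with mean motion $n$ (standard form, with $x$ radial, $y$ along-track, $z$ cross-track):

```latex
\ddot{x} - 2n\dot{y} - 3n^{2}x = 0, \qquad
\ddot{y} + 2n\dot{x} = 0, \qquad
\ddot{z} + n^{2}z = 0.
```

These linearized dynamics serve as the process model for the unscented Kalman filter that propagates the target's relative translational state.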


Author(s):  
M. Bassier ◽  
M. Vergauwen

The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is retrieving the proper observations for each object. After segmenting and classifying the initial point cloud, the labeled segments should be clustered according to their respective objects. However, this procedure is challenging due to noise, occlusions and the associativity between different objects. This is especially important for wall geometry as it forms the basis for further BIM reconstruction. In this work, a method is presented to automatically group wall segments derived from point clouds according to the proper walls of a building. More specifically, a Conditional Random Field is employed that evaluates the context of each observation in order to determine which wall it belongs to. The emphasis is on the clustering of highly associative walls, as this topic is currently a gap in the body of knowledge. First, a set of classified planar primitives is obtained using algorithms developed in prior work. Next, both local and contextual features are extracted based on the nearest neighbors and a number of heuristically determined seeds. The final wall clusters are then computed by decoding the graph, i.e. finding the most likely configuration of the observations. The experiments prove that the method is a promising framework for wall clustering from unstructured point cloud data. Compared to a conventional region growing method, the proposed method significantly reduces the rate of false positives, resulting in better wall clusters. A key advantage of the proposed method is its capability of dealing with complex wall geometry in entire buildings, as opposed to the methods presented in current literature.
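The local and contextual features mentioned above could be sketched as follows. Here each planar segment's feature vector combines its own normal with the mean normal of its nearest neighboring segments; this particular feature set is hypothetical, chosen only to illustrate how neighborhood context enters a CRF over wall segments:

```python
import numpy as np

def contextual_features(centroids, normals, k=3):
    """Hypothetical local + contextual features per planar segment.

    For each segment, concatenate its own normal with the mean normal
    of its k nearest neighboring segments (by centroid distance).
    Segments on the same wall tend to have parallel normals, so their
    features agree -- the kind of context a CRF can exploit.
    """
    feats = []
    for i in range(len(centroids)):
        d = np.linalg.norm(centroids - centroids[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]  # skip the segment itself
        feats.append(np.concatenate([normals[i],
                                     normals[neighbors].mean(axis=0)]))
    return np.array(feats)
```

The CRF then scores label configurations over these features, and decoding the graph yields the most likely assignment of segments to walls.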


2020 ◽  
Vol 10 (21) ◽  
pp. 7535
Author(s):  
Marta Nowak ◽  
Robert Sitnik

In this article, we present a method of analysis for 3D scanning sequences of human bodies in motion that allows us to obtain a computer animation of a virtual character containing both skeleton motion and high-detail deformations of the body surface geometry, resulting from muscle activity, the dynamics of the motion, and tissue inertia. The developed algorithm operates on a sequence of 3D scans with high spatial and temporal resolution. The presented method can be applied to scans in the form of both triangle meshes and 3D point clouds. One of the contributions of this work is the use of the Iterative Closest Point algorithm with motion constraints for pose tracking, which has been problematic so far. We also introduce shape maps as a tool to represent local body segment deformations. An important feature of our method is the possibility to change the topology and resolution of the output mesh and the topology of the animation skeleton in individual sequences, without requiring time-consuming retraining of the model. Compared to the state-of-the-art Skinned Multi-Person Linear (SMPL) method, the proposed algorithm yields almost twofold better accuracy in shape mapping.
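The shape maps introduced above can be read, in simplified form, as per-vertex signed offsets of the scanned surface from a template segment surface along the template normals (a simplified interpretation for illustration; the article's full definition may be richer):

```python
import numpy as np

def shape_map(template_vertices, template_normals, scan_vertices):
    """Simplified shape map: signed offset of each scanned vertex from
    the corresponding template vertex along the template normal.

    Assumes the scan and template share vertex correspondence.
    """
    offsets = scan_vertices - template_vertices
    # Per-vertex dot product of offset with unit normal.
    return np.einsum('ij,ij->i', offsets, template_normals)
```

Such a scalar field can encode local deformations from muscle activity or tissue inertia independently of the mesh topology used for output.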


2021 ◽  
Vol 7 (5) ◽  
pp. 80
Author(s):  
Ahmet Firintepe ◽  
Carolin Vey ◽  
Stylianos Asteriadis ◽  
Alain Pagani ◽  
Didier Stricker

In this paper, we propose two novel AR glasses pose estimation algorithms that work from single infrared images by using 3D point clouds as an intermediate representation. Our first approach, “PointsToRotation”, is based on a Deep Neural Network alone, whereas our second approach, “PointsToPose”, is a hybrid model combining Deep Learning and a voting-based mechanism. Our methods utilize a point cloud estimator, which we trained on multi-view infrared images in a semi-supervised manner, generating point clouds from one image only. We generate a point cloud dataset with our point cloud estimator using the HMDPose dataset, which consists of multi-view infrared images of various AR glasses with the corresponding 6-DoF poses. In comparison to another point cloud-based 6-DoF pose estimation method, CloudPose, we achieve an error reduction of around 50%. Compared to a state-of-the-art image-based method, we reduce the pose estimation error by around 96%.


2021 ◽  
Vol 11 (4) ◽  
pp. 1465
Author(s):  
Rocio Mora ◽  
Jose Antonio Martín-Jiménez ◽  
Susana Lagüela ◽  
Diego González-Aguilera

Total and automatic 3D digitalization of indoor spaces implies a great advance for building maintenance and construction tasks, which currently require site visits and manual work. Terrestrial laser scanners (TLS) have been widely used for these tasks, although the acquisition methodology with TLS systems is time consuming, and each point cloud is acquired in a different coordinate system, so the user has to post-process the data to clean it and obtain a single point cloud of the whole scenario. This paper presents a solution for the automatic data acquisition and registration of point clouds from indoor scenes, designed for point clouds acquired with a TLS mounted on an unmanned ground vehicle (UGV). The methodology developed allows the generation of one complete dense 3D point cloud consisting of the acquired point clouds registered in the same coordinate system, reaching an accuracy below 1 cm in section dimensions and below 1.5 cm in wall thickness, which makes it valid for quality control in building works. Two study cases corresponding to building works were chosen for the validation of the method, showing the applicability of the developed methodology for tasks related to monitoring the evolution of the construction.


Author(s):  
K. Oda ◽  
S. Hattori ◽  
H. Saeki ◽  
T. Takayama ◽  
R. Honma

This paper proposes a method for qualifying a point cloud created by SfM (Structure-from-Motion) software. SfM software has recently become popular for creating point clouds. Point clouds created by SfM software may appear correct, but in many cases the result does not have the correct scale or the correct coordinates in the reference coordinate system, and in these cases it is hard to evaluate the quality of the point cloud. To evaluate this correctness, we propose using the difference between point clouds built from different sets of source images: if the reconstructed shapes are correct, the shapes from different sources should be almost the same. To compare two or more point cloud shapes, the iterative closest point (ICP) algorithm is implemented, in which transformation parameters (rotation and translation) are iteratively calculated so as to minimize the sum of squared distances. This paper describes the evaluation procedure and some test results.
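The inner step of the ICP comparison described above, finding the rotation and translation that minimize the sum of squared distances for a fixed set of matched point pairs, has a closed-form solution (the standard Kabsch/SVD method; ICP alternates this step with re-matching closest points):

```python
import numpy as np

def align_clouds(source, target):
    """Closed-form least-squares rigid alignment (Kabsch via SVD).

    Given matched point pairs, returns the rotation R and translation
    t minimizing sum ||R @ s + t - g||^2 over all pairs (s, g).
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered pairs.
    h = (source - src_c).T @ (target - tgt_c)
    u, _, vt = np.linalg.svd(h)
    # Correct for a possible reflection.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = tgt_c - r @ src_c
    return r, t
```

Running this alternation between two independently reconstructed clouds yields the residual distances used to judge whether the SfM shapes agree.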

