A Novel Distribution for Representation of 6D Pose Uncertainty

Micromachines ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 126
Author(s):  
Lei Zhang ◽  
Huiliang Shang ◽  
Yandan Lin

6D pose estimation is crucial in many applications, such as visual perception, autonomous navigation, and spacecraft motion. For robotic grasping, cluttered and self-occluded scenes bring new challenges to this field. Currently, the community relies on CNNs to solve this problem, but CNN models suffer high uncertainty caused by environmental factors and by the object itself. These models usually assume a Gaussian distribution, which is not suitable for the underlying manifold structure of the pose. Many works decouple rotation from translation and quantify only the rotational uncertainty; only a few pay attention to the uncertainty of the full 6D pose. This work proposes a distribution that captures the uncertainty of the 6D pose parameterized by dual quaternions, while taking the periodic nature of the underlying structure into account. The presented results include the computation of the normalization constant and parameter estimation techniques for the distribution. This work shows the benefits of the proposed distribution, which provides a more realistic explanation of the uncertainty in the 6D pose and eliminates the drawback inherited from planar rigid motion.
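As a point of reference for the parameterization the abstract mentions, a 6D pose (rotation plus translation) can be packed into a unit dual quaternion. The sketch below is illustrative only; the function names are not from the paper, and the paper's distribution itself is not reproduced here.

```python
def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def pose_to_dual_quaternion(q_rot, t):
    """Real part encodes rotation; dual part encodes translation
    via q_d = 0.5 * (0, t) * q_r."""
    tx, ty, tz = t
    q_dual = qmul((0.0, tx, ty, tz), q_rot)
    return q_rot, tuple(0.5 * c for c in q_dual)
```

For the identity rotation and translation (2, 0, 0), the dual part comes out as (0, 1, 0, 0), i.e. half the translation embedded as a pure quaternion.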

2018 ◽  
Vol 72 (3) ◽  
pp. 649-668
Author(s):  
Yang Tian ◽  
Meng Yu ◽  
Meibao Yao ◽  
Xiangyu Huang

In this paper, a novel method for autonomous navigation in an extra-terrestrial body landing mission is proposed. Based on state-of-the-art crater detection and matching algorithms, a crater edge-based navigation method is formulated, in which the solar illumination direction is adopted as a complementary optical cue to aid crater edge-based navigation when only one crater is available. To improve the pose estimation accuracy, a distributed Extended Kalman Filter (EKF) is developed to encapsulate the crater edge-based estimation approach. Finally, the effectiveness of the proposed approach is validated by Monte Carlo simulations using a specifically designed planetary landing simulation toolbox.
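For orientation, the predict/update cycle at the core of any EKF can be sketched in scalar form. This is a generic minimal example, not the paper's distributed filter or its crater-edge measurement model; all names are illustrative.

```python
def ekf_step(x, P, u, z, Q, R, h, H):
    """One scalar EKF cycle: propagate the state with control u,
    then correct it with measurement z via linearized model h."""
    # Predict: additive motion model with process noise Q.
    x_pred = x + u
    P_pred = P + Q
    # Update: linearize the measurement model around the prediction.
    Hx = H(x_pred)
    S = Hx * P_pred * Hx + R          # innovation covariance
    K = P_pred * Hx / S               # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * Hx) * P_pred
    return x_new, P_new
```

With a linear measurement h(x) = x, a measurement that matches the prediction leaves the state unchanged while still shrinking the covariance, which is the qualitative behavior a crater-edge correction would exploit.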


2021 ◽  
Vol 102 (4) ◽  
Author(s):  
Chenhao Yang ◽  
Yuyi Liu ◽  
Andreas Zell

Learning-based visual localization has become a promising direction over the past decade. Since ground truth pose labels are difficult to obtain, recent methods try to learn pose estimation networks from pixel-perfect synthetic data. However, this also introduces the problem of domain bias. In this paper, we first build a Tuebingen Buildings dataset of RGB images collected by a drone in urban scenes and create a 3D model for each scene. A large number of synthetic images are generated from these 3D models. We take advantage of image style transfer and cycle-consistent adversarial training to predict the relative camera poses of image pairs based on training over synthetic environment data. We propose a relative camera pose estimation approach to solve the continuous localization problem for autonomous navigation of unmanned systems. Unlike existing learning-based camera pose estimation methods that train and test in a single scene, our approach successfully estimates the relative camera poses of multiple city locations with a single trained model. We use the Tuebingen Buildings and the Cambridge Landmarks datasets to evaluate the performance of our approach in single-scene and cross-scene settings. For each dataset, we compare the performance of models trained on real images against models trained on synthetic images. We also test our model on the indoor 7Scenes dataset to demonstrate its generalization ability.
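The relative-pose quantity such a network regresses has a simple closed form: given two absolute camera poses, the pose of camera 2 expressed in camera 1's frame. The pure-Python sketch below is a generic illustration of that target, not the paper's network or training code.

```python
def transpose(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_pose(R1, t1, R2, t2):
    """Pose of camera 2 in camera 1's frame:
    R_rel = R1^T R2,  t_rel = R1^T (t2 - t1)."""
    R1t = transpose(R1)
    R_rel = matmul(R1t, R2)
    t_rel = matvec(R1t, [t2[i] - t1[i] for i in range(3)])
    return R_rel, t_rel
```

For two cameras with identical orientation, the relative rotation is the identity and the relative translation is simply the displacement between them, expressed in the first camera's axes.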


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6002 ◽  
Author(s):  
Daniele De Martini ◽  
Matthew Gadd ◽  
Paul Newman

This paper presents a novel two-stage system which integrates topological localisation candidates from a radar-only place recognition system with precise pose estimation using spectral landmark-based techniques. We prove that the recently available, seminal radar place recognition (RPR) and scan matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR) systems, which have been demonstrated robustly over the last decade. Offline experiments are conducted on the most extensive radar-focused urban autonomy dataset available to the community, with performance comparing favourably with, and even rivalling, alternative state-of-the-art radar localisation systems. Specifically, we show the long-term durability of the approach, and of the sensing technology itself, for autonomous navigation. We suggest a range of sensible methods for tuning the system, all of which are suitable for online operation. For both tuning regimes, we achieve, over the course of a month of localisation trials against a single static map, high recall at high precision, and much reduced variance in erroneous metric pose estimation. As such, this work is a necessary first step towards a radar teach-and-repeat (RTR) system and the enablement of autonomy across extreme changes in appearance or inclement conditions.


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6741
Author(s):  
Jorge De León ◽  
Raúl Cebolla ◽  
Antonio Barrientos

In this work the authors present a novel algorithm for estimating the odometry of "C"-legged robots with compliant legs, together with an analysis for estimating the pose of the robot. Robots with "C" legs are an alternative to wheeled and tracked robots for overcoming obstacles found in scenarios such as stairs and debris. This kind of robot has therefore become very popular for its locomotion capabilities, but such robots still lack algorithms for autonomous navigation. With that objective in mind, the authors present a novel algorithm that uses the leg encoders, together with other sensors, to improve the estimation of the robot's localization. Odometry is necessary for algorithms such as the Extended Kalman Filter, which underpins several autonomous navigation methods. Owing to the flexible properties of the "C" legs and the location of the rotational axis, obtaining the displacement at every step is not as trivial as in a wheeled robot. To handle this complexity, the algorithm presented in this work makes a linear approximation of the compressed leg instead of computing the leg mechanics with finite element analysis at each iteration, which reduces the computational cost. Furthermore, the algorithm was tested in simulation and on a real robot. The results obtained in the tests are promising, and together with sensor fusion the algorithm can be used to endow these robots with autonomous navigation.
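The kind of linear approximation the abstract describes can be sketched as replacing the finite-element leg model with an effective rolling radius that shrinks linearly with load. This is a hypothetical simplification for illustration only; the constants and function names are not from the paper.

```python
def step_displacement(d_theta, r_nominal, load, k_compress):
    """Arc length rolled by one compliant 'C' leg over an encoder
    increment d_theta, using a linearly load-compressed effective
    radius instead of a per-iteration finite-element solve."""
    r_eff = r_nominal * (1.0 - k_compress * load)
    return r_eff * d_theta
```

Under zero load this reduces to ordinary wheel odometry (displacement = radius x rotation); under load the effective radius, and hence the per-step displacement, decreases.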


Author(s):  
Joachim Frank

Compared with images of negatively stained single particle specimens, those obtained by cryo-electron microscopy have the following new features: (a) higher “signal” variability due to a higher variability of particle orientation; (b) reduced signal/noise ratio (S/N); (c) virtual absence of low-spatial-frequency information related to elastic scattering, due to the properties of the phase contrast transfer function (PCTF); and (d) reduced resolution due to the efforts of the microscopist to boost the PCTF at low spatial frequencies, in his attempt to obtain recognizable particle images.


1988 ◽  
Vol 31 (2) ◽  
pp. 156-165 ◽  
Author(s):  
P. A. Busby ◽  
Y. C. Tong ◽  
G. M. Clark

The identification of consonants in /a/-C-/a/ nonsense syllables, using a fourteen-alternative forced-choice procedure, was examined in 4 profoundly hearing-impaired children under five conditions: audition alone using hearing aids in free-field (A), vision alone (V), auditory-visual using hearing aids in free-field (AV1), auditory-visual with linear amplification (AV2), and auditory-visual with syllabic compression (AV3). In the AV2 and AV3 conditions, acoustic signals were binaurally presented by magnetic or acoustic coupling to the subjects' hearing aids. The syllabic compressor had a compression ratio of 10:1, and attack and release times of 1.2 ms and 60 ms. The confusion matrices were subjected to two analysis methods: hierarchical clustering and information transmission analysis using articulatory features. The same general conclusions were drawn from either analysis method. The results indicated better performance in the V condition than in the A condition. In the three AV conditions, the subjects predominantly combined the acoustic parameter of voicing with the visual signal. No consistent differences were recorded across the three AV conditions. Syllabic compression did not, therefore, appear to have a significant influence on AV perception for these children. A high degree of subject variability was recorded for the A and three AV conditions, but not for the V condition.

