Human-Robot Integration for Pose Estimation and Semi-Autonomous Navigation on Unstructured Construction Sites

Author(s):  
Chen Feng ◽  
Nicholas Fredricks ◽  
Vineet R. Kamat
2018 ◽  
Vol 72 (3) ◽  
pp. 649-668
Author(s):  
Yang Tian ◽  
Meng Yu ◽  
Meibao Yao ◽  
Xiangyu Huang

In this paper, a novel method for autonomous navigation in an extraterrestrial-body landing mission is proposed. Based on state-of-the-art crater detection and matching algorithms, a crater edge-based navigation method is formulated, in which the solar illumination direction is adopted as a complementary optical cue to aid crater edge-based navigation when only one crater is available. To improve the pose estimation accuracy, a distributed Extended Kalman Filter (EKF) is developed to encapsulate the crater edge-based estimation approach. Finally, the effectiveness of the proposed approach is validated by Monte Carlo simulations using a specifically designed planetary landing simulation toolbox.
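
As a rough illustration of the filtering machinery involved, the sketch below implements a generic Extended Kalman Filter predict/update cycle in Python. The state, models, and noise values are placeholder assumptions; it does not reproduce the paper's distributed, crater edge-based formulation.

```python
# Minimal generic EKF predict/update cycle (illustrative, not the paper's
# distributed crater edge-based filter).
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One EKF cycle.

    x, P : prior state estimate and covariance
    u, z : control input and measurement
    f, h : nonlinear process and measurement models
    F, H : functions returning the Jacobians of f and h
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy usage: 1-D constant-position state observed directly.
f = lambda x, u: x
F = lambda x, u: np.eye(1)
h = lambda x: x
H = lambda x: np.eye(1)
x, P = ekf_step(np.zeros(1), np.eye(1), None, np.array([0.5]),
                f, F, h, H, Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
print(x, P)
```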


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4478
Author(s):  
Jiangying Zhao ◽  
Yongbiao Hu ◽  
Mingrui Tian

Excavation is one of the most common activities in the construction industry, and it is often constrained by safety and productivity concerns. To address these problems, construction sites need to automatically monitor the poses of excavator manipulators in real time. Based on computer vision (CV) technology, an approach using a monocular camera and a marker was proposed to estimate the pose parameters (orientation and position) of the excavator manipulator. To simulate the pose estimation process, a measurement system was established with a common camera and marker. Comprehensive experiments and error analysis showed that the maximum detectable depth of the system is greater than 11 m, the orientation error is less than 8.5°, and the position error is less than 22 mm. A prototype of the system was tested, demonstrating the feasibility of the proposed method. This study thus provides an alternative CV technology for monitoring construction machines.
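
To make the camera-and-marker technique concrete, the sketch below recovers a marker's pose from its known 3D corner coordinates and detected 2D image corners with OpenCV's solvePnP. The marker size, camera intrinsics, and corner pixels are invented for illustration; the paper's actual system and calibration are not reproduced here.

```python
# Hypothetical marker pose estimation via PnP (illustrative values only).
import cv2
import numpy as np

MARKER_SIZE = 0.20  # assumed marker side length in metres
half = MARKER_SIZE / 2.0

# 3D marker corners in the marker frame (z = 0 plane), clockwise from top-left.
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]])

# Placeholder intrinsics and detected corners; in practice these come from
# camera calibration and a marker detector.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)
image_points = np.array([[300.0, 200.0], [340.0, 200.0],
                         [340.0, 240.0], [300.0, 240.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # marker orientation in the camera frame
    print("rotation matrix:\n", R)
    print("position (m):", tvec.ravel())
```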


2021 ◽  
Vol 102 (4) ◽  
Author(s):  
Chenhao Yang ◽  
Yuyi Liu ◽  
Andreas Zell

Learning-based visual localization has attracted growing interest over the past decades. Since ground-truth pose labels are difficult to obtain, recent methods try to learn pose estimation networks using pixel-perfect synthetic data; however, this introduces the problem of domain bias. In this paper, we first build the Tuebingen Buildings dataset of RGB images collected by a drone in urban scenes and create a 3D model for each scene. A large number of synthetic images are generated from these 3D models. We take advantage of image style transfer and cycle-consistent adversarial training to predict the relative camera poses of image pairs based on training over synthetic environment data. We propose a relative camera pose estimation approach to solve the continuous localization problem for the autonomous navigation of unmanned systems. Unlike existing learning-based camera pose estimation methods that train and test in a single scene, our approach successfully estimates the relative camera poses of multiple city locations with a single trained model. We use the Tuebingen Buildings and Cambridge Landmarks datasets to evaluate the performance of our approach both within a single scene and across scenes. For each dataset, we compare the performance of models trained on real images against models trained on synthetic images. We also test our model on the indoor 7Scenes dataset to demonstrate its generalization ability.
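
For intuition, a siamese relative-pose regression network of the general kind described can be sketched in a few lines of PyTorch. The layer sizes, the translation-plus-quaternion output head, and the toy input shapes below are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical siamese relative-pose regressor (illustrative architecture).
import torch
import torch.nn as nn

class RelativePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to both images of the pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regress relative translation (3) and rotation quaternion (4).
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                  nn.Linear(128, 7))

    def forward(self, img_a, img_b):
        feat = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        out = self.head(feat)
        t, q = out[:, :3], out[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)  # unit quaternion

# Toy forward pass on a batch of two random image pairs.
net = RelativePoseNet()
t, q = net(torch.randn(2, 3, 128, 128), torch.randn(2, 3, 128, 128))
print(t.shape, q.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```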


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6002 ◽  
Author(s):  
Daniele De Martini ◽  
Matthew Gadd ◽  
Paul Newman

This paper presents a novel two-stage system which integrates topological localisation candidates from a radar-only place recognition system with precise pose estimation using spectral landmark-based techniques. We demonstrate that the recently available radar place recognition (RPR) and scan matching sub-systems are complementary, in a style reminiscent of the mapping and localisation systems underpinning visual teach-and-repeat (VTR) systems, which have been demonstrated robustly over the last decade. Offline experiments are conducted on the most extensive radar-focused urban autonomy dataset available to the community, with performance comparing favourably with, and even rivalling, alternative state-of-the-art radar localisation systems. Specifically, we show the long-term durability of the approach, and of the sensing technology itself, for autonomous navigation. We suggest a range of sensible methods for tuning the system, all of which are suitable for online operation. For both tuning regimes, over the course of a month of localisation trials against a single static map, we achieve high recall at high precision and much reduced variance in erroneous metric pose estimation. As such, this work is a necessary first step towards a radar teach-and-repeat (RTR) system and the enablement of autonomy across extreme appearance changes and in inclement conditions.
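
The two-stage structure can be sketched as retrieval over scan embeddings followed by metric refinement. In the toy Python example below, stage one is a nearest-neighbour search and stage two is a Kabsch rigid alignment standing in for radar scan matching; all data and functions are assumptions for illustration, not the paper's pipeline.

```python
# Toy two-stage localisation: embedding retrieval, then rigid alignment.
import numpy as np

def retrieve_candidates(query_emb, map_embs, k=3):
    """Stage 1: rank map scans by embedding distance (place recognition)."""
    d = np.linalg.norm(map_embs - query_emb, axis=1)
    return np.argsort(d)[:k]

def refine_pose(src_pts, dst_pts):
    """Stage 2: 2-D Kabsch alignment as a stand-in for scan matching.
    Assumes known point correspondences, unlike a real radar matcher."""
    src_c = src_pts - src_pts.mean(0)
    dst_c = dst_pts - dst_pts.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = dst_pts.mean(0) - R @ src_pts.mean(0)
    return R, t

rng = np.random.default_rng(0)
map_embs = rng.normal(size=(100, 32))                 # toy map embeddings
query_emb = map_embs[42] + 0.01 * rng.normal(size=32)
print("top candidates:", retrieve_candidates(query_emb, map_embs))

pts = rng.normal(size=(50, 2))                        # toy landmarks
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
R_est, _ = refine_pose(pts, pts @ R_true.T + np.array([1.0, 2.0]))
print("recovered yaw (rad):", np.arctan2(R_est[1, 0], R_est[0, 0]))
```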


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6741
Author(s):  
Jorge De León ◽  
Raúl Cebolla ◽  
Antonio Barrientos

In this work the authors present a novel algorithm for estimating the odometry of "C"-legged robots with compliant legs, together with an analysis for estimating the pose of the robot. Robots with "C" legs are an alternative to wheeled and tracked robots for overcoming obstacles found in scenarios such as stairs and debris. This kind of robot has therefore become very popular for its locomotion capabilities, but these robots still lack the algorithms needed for autonomous navigation. With that objective in mind, the authors present a novel algorithm that uses the leg encoders, together with other sensors, to improve the estimate of the robot's localization. Odometry is a prerequisite for algorithms such as the Extended Kalman Filter, which underpins many autonomous navigation methods. Due to the flexible properties of the "C" legs and the location of the rotational axis, obtaining the displacement at every step is not as trivial as for a wheeled robot; to handle these complexities, the algorithm presented in this work makes a linear approximation of the compressed leg instead of computing the leg mechanics by finite element analysis at each iteration, which greatly reduces the computational cost. Furthermore, the algorithm was tested both in simulation and on a real robot. The results obtained in the tests are promising and, combined with sensor fusion, the algorithm can be used to endow these robots with autonomous navigation.
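
The linear-approximation idea can be caricatured as mapping encoder increments to arc length through an effective rolling radius that shrinks linearly with load, as in the sketch below. The radius model and constants are invented for illustration and are not the authors' calibration.

```python
# Hypothetical linear compression model for a compliant "C" leg.

LEG_RADIUS = 0.09        # assumed nominal (uncompressed) leg radius, metres
COMPRESSION_GAIN = 0.15  # assumed fractional radius loss at full load

def step_displacement(delta_angle_rad, load_fraction):
    """Forward displacement for one encoder increment of a single leg;
    load_fraction in [0, 1] scales the assumed linear compression."""
    effective_radius = LEG_RADIUS * (1.0 - COMPRESSION_GAIN * load_fraction)
    return effective_radius * delta_angle_rad

# Accumulate odometry over a toy sequence of encoder increments.
x = 0.0
for delta in [0.20, 0.25, 0.22, 0.24]:  # radians per control cycle
    x += step_displacement(delta, load_fraction=0.5)
print(f"estimated forward travel: {x:.3f} m")
```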


Micromachines ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 126
Author(s):  
Lei Zhang ◽  
Huiliang Shang ◽  
Yandan Lin

6D pose estimation is crucial in many applications, such as visual perception, autonomous navigation, and spacecraft motion. For robotic grasping, cluttered and self-occluded scenes bring new challenges to this field. Currently, the community relies on CNNs to solve this problem, but CNN models suffer from high uncertainty caused by environmental factors and the object itself. These models usually assume a Gaussian distribution, which is not suited to the underlying manifold structure of the pose. Many works decouple rotation from translation and quantify only the rotational uncertainty; few pay attention to the uncertainty of the full 6D pose. This work proposes a distribution that captures the uncertainty of the 6D pose parameterized by dual quaternions while taking the periodic nature of the underlying structure into account. The presented results include computation of the distribution's normalization constant and techniques for parameter estimation. This work shows the benefits of the proposed distribution, which provides a more realistic explanation of the uncertainty in the 6D pose and eliminates the drawback inherited from planar rigid motion.
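
For readers unfamiliar with the parameterization, the sketch below packs a rotation quaternion and a translation into a dual quaternion, which is the standard construction the paper builds on; the distribution over this representation is the paper's contribution and is not reproduced here.

```python
# Dual quaternion from a rotation quaternion and a translation (standard math).
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def pose_to_dual_quaternion(q_rot, t):
    """Pack a unit rotation quaternion q_r and translation t into (q_r, q_d),
    with the dual part q_d = 0.5 * (0, t) * q_r."""
    q_r = q_rot / np.linalg.norm(q_rot)
    q_d = 0.5 * quat_mul(np.array([0.0, *t]), q_r)
    return q_r, q_d

# 90-degree rotation about z combined with a translation of (1, 2, 3).
angle = np.pi / 2
q_rot = np.array([np.cos(angle / 2), 0.0, 0.0, np.sin(angle / 2)])
q_r, q_d = pose_to_dual_quaternion(q_rot, [1.0, 2.0, 3.0])
print("real part:", q_r)
print("dual part:", q_d)
```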

