Decoupling Relative Pose Estimation Method for Non-Overlapping Multi-Camera System

2021 ◽ Vol 41 (5) ◽ pp. 0515001
Author(s): 田苗 Tian Miao ◽ 关棒磊 Guan Banglei ◽ 孙放 Sun Fang ◽ 苑云 Yuan Yun ◽ 于起峰 Yu Qifeng
2018 ◽ Vol 47 (6) ◽ pp. 612002
Author(s): 薛俊诗 XUE Jun-shi ◽ 舒奇泉 SHU Qi-quan ◽ 郭宁博 GUO Ning-bo

Sensors ◽ 2019 ◽ Vol 19 (20) ◽ pp. 4366
Author(s): Francisco Molina Martel ◽ Juri Sidorenko ◽ Christoph Bodensteiner ◽ Michael Arens ◽ Urs Hugentobler

In this work, we introduce a relative localization method that estimates the coordinate frame transformation between two devices from distance measurements. We present a linear algorithm that calculates the relative pose in 2D or 3D with four degrees of freedom (4-DOF). The algorithm needs a minimum of five or six distance measurements, respectively, to estimate the relative pose uniquely. We use the linear algorithm in conjunction with outlier detection algorithms and as an initial estimate for iterative least-squares refinement. The proposed method outperforms related linear methods both in the number of distance measurements needed and in accuracy. Compared with a related linear algorithm in 2D, we reduce the translation error by 10%. In contrast to the more general 6-DOF linear algorithm, our 4-DOF method reduces the minimum number of distance measurements needed from ten to six and the rotation error by a factor of four at the ranging noise level (standard deviation) of our ultra-wideband (UWB) transponders. When the same number of measurements is used, the orientation and translation errors are reduced by approximately a factor of ten. We validate our method with simulations and an experimental setup in which we integrate UWB technology into simultaneous localization and mapping (SLAM)-based devices. The presented relative pose estimation method is intended for augmented reality applications with cooperative localization between head-mounted displays. We foresee practical use cases in cooperative SLAM, where map merging can be performed proactively.
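As a rough illustration of the 4-DOF formulation described above (not the paper's linear solver), the sketch below refines an initial yaw-plus-translation estimate from pairwise distance measurements using generic nonlinear least squares. The function names, the SciPy-based refinement, and the data layout are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(yaw):
    """Rotation about the gravity-aligned z-axis; pitch/roll assumed known (4-DOF model)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def range_residuals(params, p_a, p_b, d):
    """Residuals d_i - || R(yaw) @ p_b_i + t - p_a_i || for every distance measurement i."""
    yaw, tx, ty, tz = params
    t = np.array([tx, ty, tz])
    predicted = np.linalg.norm(p_b @ rot_z(yaw).T + t - p_a, axis=1)
    return predicted - d

def refine_relative_pose(p_a, p_b, d, x0):
    """Iterative least-squares refinement of the 4-DOF frame transformation.

    p_a : (N, 3) positions of device A in its own (SLAM) frame
    p_b : (N, 3) positions of device B in its own frame at the same instants
    d   : (N,)  UWB range measurements between the devices
    x0  : initial guess (yaw, tx, ty, tz), e.g. from a closed-form/linear solver
    """
    result = least_squares(range_residuals, x0, args=(p_a, p_b, d))
    yaw, tx, ty, tz = result.x
    return rot_z(yaw), np.array([tx, ty, tz])
```

With the six or more measurements quoted above, the four parameters are over-determined; a robust loss (e.g. `loss='huber'` in `least_squares`) can additionally down-weight outlying ranges, in the spirit of the outlier handling mentioned in the abstract.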


2019 ◽ Vol 40 (4) ◽ pp. 535-541
Author(s): WANG Jun ◽ XU Xiaofeng ◽ DONG Mingli ◽ SUN Peng ◽ CHEN Min

2021 ◽ Vol 11 (9) ◽ pp. 4241
Author(s): Jiahua Wu ◽ Hyo Jong Lee

In bottom-up multi-person pose estimation, grouping joint candidates into the correct person instance is challenging. In this paper, a new bottom-up method, the Partitioned CenterPose (PCP) Network, is proposed to better cluster the detected joints. To achieve this goal, we propose a novel approach called Partition Pose Representation (PPR), which associates a person instance with its body joints via joint offsets. PPR encodes a human pose using the center of the human body and the offsets between that center point and the positions of the body's joints. To strengthen the relationships between body joints, we divide the human body into five parts and generate a sub-PPR for each part. Based on this PPR, the PCP Network detects people and their body joints simultaneously and then groups all body joints according to their joint offsets. Moreover, an improved L1 loss is designed to measure joint offsets more accurately. On the COCO keypoints and CrowdPose datasets, the proposed method performs on par with existing state-of-the-art bottom-up methods in terms of accuracy and speed.
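As a hedged sketch of the offset-based grouping idea (not the authors' implementation), the snippet below assigns each detected joint to the detected person center that its regressed offset points back to. The function name, array layout, offset direction, and the nearest-center matching rule are assumptions for illustration; the PCP Network's actual decoding may differ in detail.

```python
import numpy as np

def group_joints_by_offset(centers, joints, offsets, joint_types, num_joints=17):
    """Group joint candidates into person instances via their regressed offsets.

    centers:     (M, 2) detected person-center coordinates
    joints:      (N, 2) detected joint coordinates
    offsets:     (N, 2) regressed offset from the person center to each joint
    joint_types: (N,)   keypoint index of each joint (0 .. num_joints - 1)
    Returns an (M, num_joints, 2) array of grouped poses (NaN where missing).
    """
    poses = np.full((len(centers), num_joints, 2), np.nan)
    best_err = np.full((len(centers), num_joints), np.inf)
    for xy, off, k in zip(joints, offsets, joint_types):
        predicted_center = xy - off            # where this joint "votes" its center is
        err = np.linalg.norm(centers - predicted_center, axis=1)
        m = int(np.argmin(err))                # nearest detected person center
        if err[m] < best_err[m, k]:            # keep the best-matching candidate per slot
            best_err[m, k] = err[m]
            poses[m, k] = xy
    return poses
```

In this sketch the grouping is a simple nearest-center vote per joint; sub-part structure (the five body parts mentioned above) would refine this by matching each part's joints jointly rather than independently.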


Measurement ◽ 2022 ◽ Vol 187 ◽ pp. 110274
Author(s): Zhang Zimiao ◽ Xu Kai ◽ Wu Yanan ◽ Zhang Shihai
