Heterogeneous Multisensor Fusion for Mobile Platform Three-Dimensional Pose Estimation

Author(s):  
Hanieh Deilamsalehy ◽  
Timothy C. Havens ◽  
Joshua Manela

Precise, robust, and consistent localization is an important subject in many areas of science such as vision-based control, path planning, and simultaneous localization and mapping (SLAM). To estimate the pose of a platform, sensors such as inertial measurement units (IMUs), global positioning system (GPS) receivers, and cameras are commonly employed. Each of these sensors has its strengths and weaknesses. Sensor fusion is a well-known approach that combines the data measured by different sensors to achieve a more accurate or complete pose estimate and to cope with sensor outages. In this paper, a three-dimensional (3D) pose estimation algorithm is presented for an unmanned aerial vehicle (UAV) in an unknown GPS-denied environment. A UAV is fully localized by three position coordinates and three orientation angles. The proposed algorithm fuses the data from an IMU, a camera, and a two-dimensional (2D) light detection and ranging (LiDAR) sensor using an extended Kalman filter (EKF) to achieve accurate localization. Among the employed sensors, LiDAR has received comparatively little attention, mostly because a 2D LiDAR can only provide pose estimates in its scanning plane and thus cannot obtain a full pose estimate in a 3D environment. A novel method is introduced in this paper that employs a 2D LiDAR to improve the accuracy of the full 3D pose estimate acquired from the IMU and camera, and it is shown that this method significantly improves the precision of the localization algorithm. The proposed approach is evaluated and justified by simulation and real-world experiments.
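The predict/update fusion cycle the abstract describes can be sketched in miniature. This is a linear, single-axis simplification of the paper's 3D EKF, not the authors' implementation; the motion increment, noise values, and measurement are made-up example numbers.

```python
# Minimal 1D Kalman filter illustrating the fusion cycle: an IMU-derived
# motion prediction is corrected by a position measurement from another
# sensor (camera or LiDAR). All numbers are illustrative only.

def predict(x, P, u, q):
    """Propagate the state with an IMU-derived displacement u."""
    x = x + u          # motion model: new position = old position + displacement
    P = P + q          # uncertainty grows by the process noise q
    return x, P

def update(x, P, z, r):
    """Correct the prediction with a position measurement z (noise r)."""
    K = P / (P + r)            # Kalman gain: how much to trust the measurement
    x = x + K * (z - x)        # blend prediction and measurement
    P = (1 - K) * P            # uncertainty shrinks after the update
    return x, P

x, P = 0.0, 1.0
x, P = predict(x, P, u=0.5, q=0.1)   # IMU says we moved 0.5 m
x, P = update(x, P, z=0.45, r=0.2)   # camera observes 0.45 m
```

In the paper's setting the state is six-dimensional (three positions, three angles) and the models are nonlinear, which is what makes the *extended* Kalman filter necessary; the gain computation and blend step have the same shape.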

2020 ◽  
pp. short14-1-short14-7
Author(s):  
Anton Poroykov ◽  
Pavel Kalugin ◽  
Sergey Shitov ◽  
Irina Lapitskaya

Fiducial markers are used in vision systems to determine the position of objects in space, reconstruct movement, and create augmented reality. Despite the abundance of work analyzing the accuracy of fiducial-marker spatial-position estimation, the question remains open. In this paper, we propose computer modeling of images with ArUco markers for this purpose. The paper presents a modeling algorithm, implemented as software based on the OpenCV library. The algorithm projects the three-dimensional points of the marker corners into two-dimensional points using the camera parameters, and renders the marker image at the new two-dimensional coordinates on the modeled image using the perspective transformation obtained from these points. A number of dependencies were obtained with which the position-estimation error can be evaluated as a function of marker size, including the probability of detecting a marker as a function of its area in the image.
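The corner-projection step described above can be sketched with a plain pinhole model. The intrinsics (fx, fy, cx, cy), marker size, and marker pose below are illustrative assumptions, not the authors' values; the authors' pipeline uses OpenCV's calibrated camera model.

```python
# Project the four 3D corners of a square marker into pixel coordinates
# with a pinhole camera model. A fronto-parallel marker 1 m from the
# camera is assumed for simplicity.

def project(point, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

side = 0.05  # 5 cm marker
corners_3d = [(-side / 2, -side / 2, 1.0), (side / 2, -side / 2, 1.0),
              (side / 2,  side / 2, 1.0), (-side / 2,  side / 2, 1.0)]
corners_2d = [project(p) for p in corners_3d]
```

With the projected corners in hand, the marker texture would be warped onto the modeled image via the perspective transform between the marker's canonical corners and `corners_2d` (in OpenCV terms, `cv2.getPerspectiveTransform` followed by `cv2.warpPerspective`).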


2013 ◽  
Vol 333-335 ◽  
pp. 268-274
Author(s):  
Jing Jing Wang ◽  
Jian Yu Huang ◽  
Shi Yin Qin

In this paper, a high-accuracy, high-efficiency pose estimation algorithm is proposed for space cooperative targets in rendezvous and docking (RVD) based on binocular visual measurement. First, the scheme of visual measurement for RVD is presented, and the environmental conditions and performance requirements are analyzed and discussed. Then the relationship of pose estimation with detection and tracking is studied to give an implementation strategy for pose estimation with high accuracy and efficiency. The key contribution is the pose estimation of cooperative targets: a stereo vision mapping is established between the three-dimensional coordinates of spatial feature points on the cooperative target and their corresponding image coordinates, and the least-squares method is employed to estimate the three-dimensional coordinates of the feature points so as to calculate the relative position and attitude between the tracking spacecraft and the target spacecraft with high precision. Finally, a series of experimental results indicates that the proposed pose estimation algorithm under binocular visual measurement performs well in estimation accuracy, noise robustness, and real-time operation, and thus can meet the application requirements of RVD.
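The geometric core of the stereo mapping described above can be illustrated with the closed-form two-view case for a rectified camera pair. The focal length and baseline are assumed example values; the paper solves the general overdetermined system across many feature points by least squares, of which this is the simplest special case.

```python
# Recover a feature point's 3D camera-frame coordinates from matched
# pixel positions in a rectified binocular pair (depth from disparity).
# f is the focal length in pixels and B the stereo baseline in metres.

def triangulate(u_left, u_right, v, f=700.0, B=0.12):
    """Return (X, Y, Z) in metres for a rectified stereo correspondence."""
    d = u_left - u_right          # disparity in pixels
    Z = f * B / d                 # depth from similar triangles
    X = u_left * Z / f            # back-project the left-image column
    Y = v * Z / f                 # back-project the shared row
    return X, Y, Z

point = triangulate(50.0, 40.0, 0.0)
```

Once a set of feature points has been triangulated this way, the relative position and attitude follow from fitting a rigid transform between the reconstructed points and the target's known feature geometry.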


2015 ◽  
Vol 6 (1) ◽  
Author(s):  
Kimitoshi Yamazaki ◽  
Kiyohiro Sogen ◽  
Takashi Yamamoto ◽  
Masayuki Inaba

Abstract This paper describes a method for the detection of textureless objects. Our target objects include furniture and home appliances, which have neither rich textural features nor characteristic shapes. Focusing on ease of application, we define a model that represents objects in terms of three-dimensional edgels and surfaces. Object detection is performed by superimposing input data on the model. A two-stage algorithm is applied to recover object poses: surfaces are used to extract candidates from the input data, and edgels are then used to identify the pose of a target object using two-dimensional template matching. Experiments using four real pieces of furniture and home appliances were performed to show the feasibility of the proposed method. We also suggest its possible applicability under occlusion and clutter conditions.
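The second stage above relies on 2D template matching, which can be sketched in its simplest form: slide a template over an image and keep the offset with the lowest sum of squared differences. The tiny arrays are made-up example data; the paper matches edgel templates, not raw intensities.

```python
# Exhaustive 2D template matching by sum of squared differences (SSD):
# the best match is the (row, col) offset minimizing the SSD score.

def match(image, template):
    """Return the (row, col) offset where template best matches image."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 9, 0],
       [0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 9]]
pos = match(img, tpl)
```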


2016 ◽  
Vol 10 (4) ◽  
pp. 299-307
Author(s):  
Luis Unzueta ◽  
Nerea Aranjuelo ◽  
Jon Goenetxea ◽  
Mikel Rodriguez ◽  
Maria Teresa Linaza

Author(s):  
Jan Stenum ◽  
Cristina Rossi ◽  
Ryan T. Roemmich

ABSTRACT Walking is the primary mode of human locomotion. Accordingly, people have been interested in studying human gait since at least the fourth century BC. Human gait analysis is now common in many fields of clinical and basic research, but gold standard approaches – e.g., three-dimensional motion capture, instrumented mats or footwear, and wearables – are often expensive, immobile, data-limited, and/or require specialized equipment or expertise for operation. Recent advances in video-based pose estimation have suggested exciting potential for analyzing human gait using only two-dimensional video inputs collected from readily accessible devices (e.g., smartphones, tablets). However, we currently lack: 1) data about the accuracy of video-based pose estimation approaches for human gait analysis relative to gold standard measurement techniques and 2) an available workflow for performing human gait analysis via video-based pose estimation. In this study, we compared a large set of spatiotemporal and sagittal kinematic gait parameters as measured by OpenPose (a freely available algorithm for video-based human pose estimation) and three-dimensional motion capture from trials where healthy adults walked overground. We found that OpenPose performed well in estimating many gait parameters (e.g., step time, step length, sagittal hip and knee angles) while some (e.g., double support time, sagittal ankle angles) were less accurate. We observed that mean values for individual participants – as are often of primary interest in clinical settings – were more accurate than individual step-by-step measurements. We also provide a workflow for users to perform their own gait analyses and offer suggestions and considerations for future approaches.


2021 ◽  
Vol 17 (4) ◽  
pp. e1008935
Author(s):  
Jan Stenum ◽  
Cristina Rossi ◽  
Ryan T. Roemmich

Human gait analysis is often conducted in clinical and basic research, but many common approaches (e.g., three-dimensional motion capture, wearables) are expensive, immobile, data-limited, and require expertise. Recent advances in video-based pose estimation suggest potential for gait analysis using two-dimensional video collected from readily accessible devices (e.g., smartphones). To date, several studies have extracted features of human gait using markerless pose estimation. However, we currently lack evaluation of video-based approaches using a dataset of human gait for a wide range of gait parameters on a stride-by-stride basis and a workflow for performing gait analysis from video. Here, we compared spatiotemporal and sagittal kinematic gait parameters measured with OpenPose (open-source video-based human pose estimation) against simultaneously recorded three-dimensional motion capture from overground walking of healthy adults. When assessing all individual steps in the walking bouts, we observed mean absolute errors between motion capture and OpenPose of 0.02 s for temporal gait parameters (i.e., step time, stance time, swing time and double support time) and 0.049 m for step lengths. Accuracy improved when spatiotemporal gait parameters were calculated as individual participant mean values: mean absolute error was 0.01 s for temporal gait parameters and 0.018 m for step lengths. The greatest difference in gait speed between motion capture and OpenPose was less than 0.10 m s−1. Mean absolute error of sagittal plane hip, knee and ankle angles between motion capture and OpenPose were 4.0°, 5.6° and 7.4°. Our analysis workflow is freely available, involves minimal user input, and does not require prior gait analysis expertise. Finally, we offer suggestions and considerations for future applications of pose estimation for human gait analysis.
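The spatiotemporal parameters compared above (step time, step length) reduce to simple differences once heel-strike events and keypoint positions have been extracted from the pose-estimation output. The frame indices and ankle positions below are made-up example data, not the study's dataset, and real use would draw them from OpenPose keypoints.

```python
# Compute step times and step lengths from detected heel-strike events:
# step time is the interval between successive heel strikes; step length
# is the ankle displacement between them.

fps = 30.0                              # video frame rate (assumed)
heel_strikes = [10, 27, 44, 61]         # frame indices of successive heel strikes
ankle_x = [0.00, 0.55, 1.12, 1.66]      # ankle x-position (m) at each heel strike

step_times = [(b - a) / fps for a, b in zip(heel_strikes, heel_strikes[1:])]
step_lengths = [b - a for a, b in zip(ankle_x, ankle_x[1:])]
mean_step_time = sum(step_times) / len(step_times)
mean_step_length = sum(step_lengths) / len(step_lengths)
```

Averaging over all steps in a bout, as in the last two lines, is exactly why the per-participant means reported above come out more accurate than individual step-by-step measurements: per-step keypoint noise partially cancels in the mean.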


2019 ◽  
Vol 9 (12) ◽  
pp. 2478 ◽  
Author(s):  
Jui-Yuan Su ◽  
Shyi-Chyi Cheng ◽  
Chin-Chun Chang ◽  
Jing-Ming Chen

This paper presents a model-based approach for 3D pose estimation from a single RGB image to keep the 3D scene model up-to-date using a low-cost camera. A prelearned image model of the target scene is first reconstructed using a training RGB-D video. Next, the model is analyzed using the proposed multiple principal analysis to label the viewpoint class of each training RGB image and construct a training dataset for a deep learning viewpoint classification neural network (DVCNN). For all training images in a viewpoint class, the DVCNN estimates their membership probabilities and defines the template of the class as the image with the highest probability. To reconstruct the scene in 3D space using a camera, a pose estimation algorithm then uses the template information to estimate the pose parameters and depth map of a single RGB image captured by navigating the camera to a specific viewpoint. The pose estimation algorithm is thus the key to updating the status of the 3D scene. Compared with conventional pose estimation algorithms, which rely on sparse features, our approach enhances the quality of the reconstructed 3D scene point cloud through template-to-frame registration. Finally, we verify the ability of the established reconstruction system on publicly available benchmark datasets and compare it with state-of-the-art pose estimation algorithms. The results indicate that our approach outperforms the compared methods in terms of pose estimation accuracy.
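The template-selection rule described above can be stated in one line: within each viewpoint class, the template is the training image with the highest membership probability. The image names and probabilities below are made-up example values, standing in for the DVCNN's actual outputs.

```python
# Select a viewpoint class's template as the training image with the
# highest membership probability (illustrative values only).

memberships = {"img_01": 0.72, "img_02": 0.91, "img_03": 0.64}
template = max(memberships, key=memberships.get)
```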


Author(s):  
Shwe Myint ◽  
Warit Wichakool

This paper presents a single-ended, faulted-phase-based traveling-wave fault localization algorithm for loop distribution grids, in which the sensor receives many reflected signals from the fault point, complicating localization. The algorithm uses a band-pass filter to remove noise from the corrupted signal. The arrival times of the faulted-phase filtered signals are obtained using phase-modal and discrete wavelet transformations, and the fault distance is then estimated with the traveling-wave method. The proposed algorithm presents a detail-level analysis using three detail-level coefficients. It is tested in a MATLAB simulation of a single line-to-ground fault in a 10 kV grounded loop distribution system. The simulation results show that the faulted-phase time delay gives better accuracy than conventional time delays. The proposed algorithm achieves fault-distance estimation accuracy of up to 99.7% with a 30 dB signal-to-noise ratio (SNR) for the lines nearest the measured terminal.
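The distance calculation at the heart of the single-ended traveling-wave method can be sketched directly: the delay between the first arriving wavefront and its reflection from the fault point corresponds to one round trip to the fault. The propagation speed and timestamps below are illustrative assumptions, not values from the paper.

```python
# Single-ended traveling-wave fault distance: the reflection from the
# fault arrives one round trip (2 * distance / speed) after the first
# wavefront, so distance = speed * delay / 2.

def fault_distance(t_first, t_reflect, v=2.95e8):
    """Distance (m) from the measuring terminal to the fault.

    t_first   -- arrival time (s) of the first wavefront
    t_reflect -- arrival time (s) of its reflection from the fault
    v         -- wave propagation speed on the line (m/s, assumed)
    """
    return v * (t_reflect - t_first) / 2.0

d = fault_distance(t_first=10e-6, t_reflect=30e-6)   # 20 us round-trip delay
```

In the paper, the two timestamps come from the wavelet detail coefficients of the faulted-phase modal signal; the arithmetic that converts the delay to a distance is as above.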

