A Multi-Sensor Fusion Algorithm for Pedestrian Navigation Using Factor Graphs

Author(s):  
Langping An ◽  
Xianfei Pan ◽  
Ze Chen ◽  
Mang Wang ◽  
Zheming Tu ◽  
...  
Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1390

Author(s):  
Tomasz Ursel ◽  
Michał Olinski

This article aims to develop a system capable of estimating the displacement of a moving object using relatively cheap, easy-to-apply sensors. There is a growing need for such systems, not only for robots but also, for instance, for pedestrian navigation. In this paper, the theory behind this idea is presented, including data postprocessing algorithms for a MEMS accelerometer and an optical flow sensor (OFS), as well as the complementary filter developed for sensor fusion. In addition, a vital part of the accelerometer's algorithm, zero-velocity state detection, is implemented. It is based on analysis of the acceleration signal and subsequent acceleration symmetrization, which greatly improves the obtained displacement. A test stand with a linear guide and a motor, enabling a specified linear motion to be imposed, was built. The results of testing both sensors suggest that the displacement estimated by each of them is highly accurate. Fusing the sensors' data gives even better outcomes, especially in cases of external disturbance of the OFS. A comparative evaluation of the estimated linear displacements, in each case against encoder data, confirms that the algorithms operate correctly and demonstrates the chosen sensors' usefulness for developing a linear displacement measuring system.
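The fusion approach described above can be sketched as a threshold-based zero-velocity detector on the acceleration signal plus a complementary blend of the two displacement estimates. This is a minimal Python illustration; the window size, threshold, and blending gain are assumed example values, not the paper's tuning:

```python
import numpy as np

def detect_zero_velocity(accel, threshold=0.05, window=10):
    """Flag samples where the moving standard deviation of the
    acceleration stays below a threshold (assumed values),
    indicating a standstill interval."""
    flags = np.zeros(len(accel), dtype=bool)
    for i in range(len(accel) - window + 1):
        if np.std(accel[i:i + window]) < threshold:
            flags[i:i + window] = True
    return flags

def complementary_fuse(disp_accel, disp_ofs, alpha=0.9):
    """Weighted blend of the accelerometer-derived and OFS-derived
    displacement estimates; the gain alpha is a hypothetical
    tuning parameter, not the paper's value."""
    return alpha * np.asarray(disp_ofs) + (1 - alpha) * np.asarray(disp_accel)
```

In a complete pipeline, the detected zero-velocity intervals would be used to reset the integrated velocity before the fusion step.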


2015 ◽  
Vol 764-765 ◽  
pp. 1319-1323
Author(s):  
Rong Shue Hsiao ◽  
Ding Bing Lin ◽  
Hsin Piao Lin ◽  
Jin Wang Zhou

Pyroelectric infrared (PIR) sensors can detect the presence of humans without requiring them to carry any device, and are widely used for human presence detection in home/office automation systems to improve energy efficiency. However, PIR detection is based on the movement of occupants, so for occupancy detection PIR sensors have an inherent limitation when occupants remain relatively still. Multisensor fusion technology takes advantage of redundant, complementary, or more timely information from different sensor modalities and is considered an effective approach to the uncertainty and unreliability of sensing. In this paper, we propose a simple multimodal sensor fusion algorithm that is well suited to execution on the sensor nodes of a wireless sensor network. The inference algorithm was evaluated for detection accuracy and compared with multisensor fusion using dynamic Bayesian networks. The experimental results showed that a room-occupancy detection accuracy of 97% can be achieved, very close to that of the dynamic Bayesian networks.
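A lightweight per-node fusion rule of the kind the abstract describes can be sketched as a naive-Bayes-style combination of independent per-sensor occupancy probabilities. This is a generic stand-in for illustration, not the paper's exact algorithm:

```python
def fuse_occupancy(p_sensors, prior=0.5):
    """Combine independent per-sensor occupancy probabilities in
    odds form (naive-Bayes-style fusion; the independence
    assumption and the prior are illustrative, not the paper's)."""
    odds = prior / (1 - prior)
    for p in p_sensors:
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid division by zero
        odds *= p / (1 - p)               # accumulate evidence as odds
    return odds / (1 + odds)              # back to a probability
```

Because the rule is a handful of multiplications, it is cheap enough to run on a resource-constrained sensor node.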


2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Matthew Rhudy ◽  
Yu Gu ◽  
Jason Gross ◽  
Marcello R. Napolitano

Using an Unscented Kalman Filter (UKF) as the nonlinear estimator within a Global Positioning System/Inertial Navigation System (GPS/INS) sensor fusion algorithm for attitude estimation, various methods of calculating the matrix square root were discussed and compared. Specifically, the diagonalization method, the Schur method, the Cholesky method, and five different iterative methods were compared. Additionally, a different way of handling the matrix square root requirement, the square-root UKF (SR-UKF), was evaluated. The matrix square root calculations were compared on the basis of computational requirements and sensor fusion attitude estimation performance, evaluated using flight data from an Unmanned Aerial Vehicle (UAV). The roll and pitch angle estimates were compared with independently measured values from a high-quality mechanical vertical gyroscope. This manuscript represents the first comprehensive analysis of matrix square root calculations in the context of the UKF. From this analysis, it was determined that the best overall matrix square root calculation for UKF applications, in terms of both performance and execution time, is the Cholesky method.
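The role of the matrix square root in the UKF can be seen in the standard sigma-point generation step, where a square root of the scaled covariance is required; with the Cholesky method, the lower-triangular factor is used directly. This is a generic sketch of the unscented transform, with an assumed scaling parameter `kappa`:

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Generate the 2n+1 sigma points of the unscented transform.
    The Cholesky factor S satisfies S @ S.T == (n + kappa) * cov,
    and its columns displace the mean in both directions."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # matrix square root
    pts = [np.asarray(mean, dtype=float)]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    return np.array(pts)
```

The iterative and diagonalization methods compared in the paper would replace only the `np.linalg.cholesky` call; the rest of the transform is unchanged, which is why the square-root step dominates the execution-time comparison.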


2011 ◽  
Vol 44 (1) ◽  
pp. 11258-11264
Author(s):  
Alessio De Angelis ◽  
Carlo Fischione ◽  
Peter Händel

Author(s):  
Huajie Xu ◽  
Baolin Feng ◽  
Yong Peng

To address the inaccurate vehicle route predictions caused by the large amount of uncertain information collected by different sensors in previous automatic route prediction algorithms, an automatic vehicle route prediction algorithm based on multi-sensor fusion is studied. Multi-sensor information is fused using the Dempster-Shafer (D-S) evidential reasoning algorithm and applied to automatic vehicle route prediction. The basic probability assignment function of each vehicle motion model is obtained from the contributions of the longitudinal acceleration sensor and the yaw angular velocity sensor measurements to the corresponding motion model; these assignment functions are then combined using the D-S combination rule. This yields a new probability allocation for each motion model under all the evidence, from which the current optimal motion model is selected according to the decision rules. Guided by this model, the optimal motion model at each time step is used to predict the vehicle's route. The simulation results show that the prediction error of the algorithm remains below 4% over 30 minutes of automatic vehicle route prediction.
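The combination step described above can be illustrated with Dempster's rule for two basic probability assignments. The focal elements below (hypothetical motion-model labels) and their masses are made-up example values, not the paper's data:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts mapping frozenset focal elements
    to masses. Conflicting mass is discarded and the rest is
    renormalized."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For example, combining masses from the acceleration-based and yaw-rate-based assignments over models such as `{'turn'}` and `{'turn', 'straight'}` concentrates the fused mass on the model both sources support.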


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yipeng Zhu ◽  
Tao Wang ◽  
Shiqiang Zhu

Purpose: This paper aims to develop a robust person tracking method for human following robots. The tracking system adopts the multimodal fusion results of millimeter wave (MMW) radars and monocular cameras for perception. A prototype human following robot is developed and evaluated using the proposed tracking system.
Design/methodology/approach: Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information for objects in view, but not their spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed so that the robot follows the target person at the expected distance.
Findings: A series of experiments under different circumstances were carried out to validate the fusion-based tracking method. Experimental results show that the average tracking error is around 0.1 m. The robot can also handle different situations, overcome short-term interference, and continually track and follow the target person.
Originality/value: This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlapping is handled well with the help of velocity information from the radars. Compared with other state-of-the-art approaches, the sensor fusion method is cost-effective and requires no additional tags to be carried by the person. Its stable performance shows good application prospects for human following robots.
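The closed-loop following behaviour described above can be sketched as a proportional controller on the range and bearing to the tracked person. The gains, speed limits, and target distance below are assumed example values, not the paper's design:

```python
def follow_control(distance, bearing, target_dist=1.5,
                   k_lin=0.8, k_ang=1.2, v_max=1.0, w_max=1.5):
    """Proportional follow controller: forward speed is driven by
    the range error, turn rate by the bearing to the person.
    All gains and limits are hypothetical example values."""
    v = max(-v_max, min(v_max, k_lin * (distance - target_dist)))
    w = max(-w_max, min(w_max, k_ang * bearing))
    return v, w  # linear velocity (m/s), angular velocity (rad/s)
```

When the fused radar-camera tracker reports the person at the target distance and directly ahead, the controller commands zero velocity; otherwise it accelerates or turns toward the person, saturating at the configured limits.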

