A Multi-sensor Fusion Algorithm for Monitoring the Health Condition of Conveyor Belt in Process Industry

Author(s):  
Qiang Huang ◽  
Changchun Pan ◽  
Haichun Liu
2019 ◽  
Vol 8 (2) ◽  
pp. 6040-6046

Emerging Internet of Things (IoT) technology plays a major role in modern healthcare, not only for sensing but also for recording, communicating, and displaying results. The major role of an intensive care unit (ICU) is to improve patient health, for example by changing a treatment or moving the patient to a step-down unit. Monitoring also shows the extent of compliance with a formulated standard of care. In the ICU, medical parameters such as EEG, EMG, and BP must be monitored continuously. In recent healthcare applications, such as real-time health-condition monitoring and patient information management, IoT technology brings convenience to both general practitioners and patients, since it is applied in various medical areas. The Body Sensor Network (BSN) is one of the main technologies in IoT-based medical applications, in which tiny, smart, lightweight wireless sensor nodes monitor the patient's health condition. Hence, this paper proposes a BSN integrated with an IoT-based sensor fusion algorithm to save the lives of patients in critical condition. The sensor fusion algorithm detects the criticality of the patient's health condition, and IoT technology communicates the information. A testbed was developed using a Raspberry Pi controller, an EMG sensor, a BP sensor, etc., and tested. The test results were also analyzed.
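The abstract does not publish the fusion rule used to flag a critical condition. As a minimal sketch of the idea, assuming a simple out-of-range vote over the monitored vitals (the sensor names and normal ranges below are illustrative, not from the paper):

```python
def criticality(readings, normal_ranges):
    """Toy criticality score: fraction of monitored vital signs that fall
    outside their normal range. A score near 1.0 would trigger an alert."""
    out_of_range = sum(
        1 for name, value in readings.items()
        if not (normal_ranges[name][0] <= value <= normal_ranges[name][1])
    )
    return out_of_range / len(readings)

# Hypothetical example: systolic BP is abnormal, heart rate is normal.
score = criticality(
    {"bp_sys": 180.0, "hr": 72.0},
    {"bp_sys": (90, 140), "hr": (60, 100)},
)
```

A real system would weight vitals by clinical severity rather than count them equally; this only illustrates the fusion-then-threshold structure the abstract describes.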


2015 ◽  
Vol 764-765 ◽  
pp. 1319-1323
Author(s):  
Rong Shue Hsiao ◽  
Ding Bing Lin ◽  
Hsin Piao Lin ◽  
Jin Wang Zhou

Pyroelectric infrared (PIR) sensors can detect the presence of humans without requiring them to carry any device, and are widely used for human presence detection in home/office automation systems in order to improve energy efficiency. However, PIR detection is based on occupant movement, so for occupancy detection PIR sensors have an inherent limitation when occupants remain relatively still. Multisensor fusion technology takes advantage of redundant, complementary, or more timely information from different modal sensors, and is considered an effective approach to the uncertainty and unreliability problems of sensing. In this paper, we propose a simple multimodal sensor fusion algorithm that is well suited to execution on the sensor nodes of wireless sensor networks. The inference algorithm was evaluated for detection accuracy and compared to multisensor fusion using dynamic Bayesian networks. The experimental results showed that a room-occupancy detection accuracy of 97% can be achieved, very close to that of the dynamic Bayesian networks.
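The abstract does not specify the fusion rule itself. One lightweight scheme in this spirit, simple enough for a sensor node, is a weighted vote over binary sensor cues; the cue names, weights, and threshold below are illustrative assumptions, not the paper's trained values:

```python
def occupied(pir_motion, co2_rising, door_closed,
             weights=(0.5, 0.3, 0.2), threshold=0.4):
    """Toy multimodal occupancy inference: a weighted vote over binary cues
    from complementary sensors, so a still occupant (no PIR motion) can
    still be detected from the other modalities."""
    score = (weights[0] * pir_motion
             + weights[1] * co2_rising
             + weights[2] * door_closed)
    return score >= threshold

# Hypothetical case: occupant sits still, but CO2 rises behind a closed door.
assert occupied(pir_motion=0, co2_rising=1, door_closed=1)
```

The appeal for wireless sensor nodes is that this needs only a few multiply-adds per decision, versus the inference cost of a dynamic Bayesian network.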


2011 ◽  
Vol 2011 ◽  
pp. 1-11 ◽  
Author(s):  
Matthew Rhudy ◽  
Yu Gu ◽  
Jason Gross ◽  
Marcello R. Napolitano

Using an Unscented Kalman Filter (UKF) as the nonlinear estimator within a Global Positioning System/Inertial Navigation System (GPS/INS) sensor fusion algorithm for attitude estimation, various methods of calculating the matrix square root were discussed and compared. Specifically, the diagonalization method, the Schur method, the Cholesky method, and five different iterative methods were compared. Additionally, a different method of handling the matrix square root requirement, the square-root UKF (SR-UKF), was evaluated. The different matrix square root calculations were compared based on computational requirements and on sensor fusion attitude estimation performance, which was evaluated using flight data from an Unmanned Aerial Vehicle (UAV). The roll and pitch angle estimates were compared with independently measured values from a high-quality mechanical vertical gyroscope. This manuscript represents the first comprehensive analysis of matrix square root calculations in the context of the UKF. From this analysis, it was determined that the best overall matrix square root calculation for UKF applications, in terms of both performance and execution time, is the Cholesky method.
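The matrix square root enters the UKF when generating sigma points from the state covariance. A minimal sketch of that step using the Cholesky factor (the option the study found best), with standard but here arbitrarily chosen scaling parameters:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, kappa=0.0):
    """Generate the 2n+1 UKF sigma points from mean x and covariance P,
    using the Cholesky factor S (P_scaled = S @ S.T) as the matrix
    square root."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n          # standard UKF scaling
    S = np.linalg.cholesky((n + lam) * P)     # lower-triangular square root
    pts = np.zeros((2 * n + 1, n))
    pts[0] = x
    for i in range(n):
        pts[1 + i] = x + S[:, i]              # spread along each column of S
        pts[1 + n + i] = x - S[:, i]
    return pts
```

Cholesky wins on execution time because a factorization of an n×n SPD matrix costs about n³/3 flops and needs no eigendecomposition, unlike the diagonalization or Schur approaches.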


2021 ◽  
Author(s):  
Langping An ◽  
Xianfei Pan ◽  
Ze Chen ◽  
Mang Wang ◽  
Zheming Tu ◽  
...  

2011 ◽  
Vol 44 (1) ◽  
pp. 11258-11264
Author(s):  
Alessio De Angelis ◽  
Carlo Fischione ◽  
Peter Händel

Author(s):  
Huajie Xu ◽  
Baolin Feng ◽  
Yong Peng

To solve the problem of inaccurate vehicle route prediction caused by the large amount of uncertain information collected by different sensors in previous automatic route prediction algorithms, an automatic vehicle route prediction algorithm based on multi-sensor fusion is studied. Multi-sensor information is fused with the Dempster-Shafer (D-S) evidential reasoning algorithm and applied to automatic vehicle route prediction. According to the contribution of the longitudinal acceleration sensor and the yaw angular velocity sensor measurements to the corresponding motion model, a basic probability assignment function is obtained for each vehicle motion model; the basic probability assignments of the motion models are then synthesized using the D-S combination formula. The new probability allocation for each motion model under all evidence is obtained, and the current optimal motion model is selected according to the decision rules. Guided by the optimal motion model at each time step, the vehicle's route is predicted accurately. Simulation results show that the prediction error of the algorithm is less than 4% over a 30-minute automatic route prediction run.
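The synthesis step described above is Dempster's rule of combination. A self-contained sketch, with a hypothetical two-model frame of discernment (straight vs. turn) standing in for the paper's motion models; the masses are made-up examples, not the paper's assignments:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    hypotheses to masses) with Dempster's rule: masses of intersecting
    hypotheses multiply and accumulate, and the conflicting mass
    (empty intersections) is renormalized away."""
    combined = {}
    conflict = 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    k = 1.0 - conflict
    return {a: p / k for a, p in combined.items()}

# Illustrative masses from two sensors over motion models:
S, T = frozenset({"straight"}), frozenset({"turn"})
accel_bpa = {S: 0.7, S | T: 0.3}             # e.g. longitudinal acceleration
yaw_bpa = {S: 0.6, T: 0.2, S | T: 0.2}       # e.g. yaw angular velocity
fused = dempster_combine(accel_bpa, yaw_bpa)
```

After combination, the decision rule would pick the singleton hypothesis with the largest fused mass as the current optimal motion model.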


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yipeng Zhu ◽  
Tao Wang ◽  
Shiqiang Zhu

Purpose
This paper aims to develop a robust person tracking method for human following robots. The tracking system adopts the multimodal fusion results of millimeter wave (MMW) radars and monocular cameras for perception. A prototype human following robot is developed and evaluated using the proposed tracking system.

Design/methodology/approach
Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information for objects in view, but cannot provide spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed so that the robot follows the target person at the expected distance.

Findings
A series of experiments under different circumstances validate the fusion-based tracking method. Experimental results show that the average tracking error is around 0.1 m. The robot can also handle different situations, overcome short-term interference, and continually track and follow the target person.

Originality/value
This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlap is handled well with the help of the velocity information from the radars. Compared to other state-of-the-art approaches, the sensor fusion method is cost-effective and requires people to carry no additional tags. Its stable performance shows good application prospects for human following robots.
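The abstract mentions a closed-loop controller for keeping the expected following distance but does not give its form. A minimal sketch of one plausible choice, a saturated proportional controller on the range error; the gain, setpoint, and velocity limit are illustrative assumptions, not the paper's tuning:

```python
def follow_velocity(measured_dist, desired_dist=1.0, kp=0.8, v_max=1.2):
    """Toy distance-keeping loop: command forward velocity proportional
    to the range error (from the fused radar/camera track), clipped to
    the robot's speed limit. Positive output drives the robot forward."""
    error = measured_dist - desired_dist
    v = kp * error
    return max(-v_max, min(v_max, v))
```

With each new fused range estimate, the robot would issue `follow_velocity(range)` as its forward speed command; a fielded controller would typically add damping or an integral term, which this sketch omits.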

