Multi-Modal Data Fusion Techniques and Applications

2009 ◽  
pp. 213-237 ◽  
Author(s):  
Alessio Dore ◽  
Matteo Pinasco ◽  
Carlo S. Regazzoni
2012 ◽  
Vol 12 (06) ◽  
pp. 1250052 ◽  
Author(s):  
YUEQUAN BAO ◽  
YONG XIA ◽  
HUI LI ◽  
YOU-LIN XU ◽  
PENG ZHANG

A huge amount of data can be obtained continuously from the many sensors in long-term structural health monitoring (SHM). Different sets of data measured at different times may lead to inconsistent monitoring results. In addition, structural responses vary with changing environmental conditions, particularly temperature. The variation in structural responses caused by temperature changes may mask the variation caused by structural damage. Integration and interpretation of the various types of data are therefore critical to the effective use of SHM systems for structural condition assessment and damage detection. A data fusion-based damage detection approach under varying temperature conditions is presented. A Bayesian damage detection technique is developed in which both temperature and structural parameters are variables of the modal properties (frequencies and mode shapes). Accordingly, the probability density functions of the modal data are derived for damage detection. The damage detection results from each set of modal and temperature data may be inconsistent because of uncertainties. The Dempster–Shafer (D–S) evidence theory is then employed to integrate the individual damage detection results from the different data sets measured at different times into a consistent decision. An experiment on a two-story portal frame is conducted to demonstrate the effectiveness of the proposed method, with consideration of model uncertainty, measurement noise, and temperature effects. The damage detection results obtained by combining the damage basic probability assignments from all sets of test data are more accurate than those obtained from each data set separately. Eliminating the temperature effect on the vibration properties further improves the damage detection accuracy. In particular, the proposed technique can detect even slight damage that is missed by common damage detection methods in which the temperature effect is not eliminated.
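The D–S fusion step described above combines the basic probability assignments (BPAs) from individual data sets via Dempster's rule of combination. A minimal sketch of that rule over a two-hypothesis frame {damage, no_damage} is shown below; the BPA values are illustrative, not results from the paper.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two BPAs given as dicts mapping
    frozenset hypotheses to mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # intersecting hypotheses reinforce each other
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint hypotheses contribute to conflict
            conflict += ma * mb
    k = 1.0 - conflict  # normalization factor removes the conflict mass
    return {h: m / k for h, m in combined.items()}

D = frozenset({"damage"})
U = frozenset({"no_damage"})
THETA = D | U  # full frame: mass assigned to ignorance

# Hypothetical BPAs from two separate sets of test data
m1 = {D: 0.6, U: 0.1, THETA: 0.3}
m2 = {D: 0.7, U: 0.1, THETA: 0.2}

fused = combine(m1, m2)
```

As the abstract notes, the fused assignment is more decisive than either individual result: here the mass on "damage" rises above both input masses, illustrating how combining evidence from multiple data sets yields a more consistent decision.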


Author(s):  
Peter Zulch ◽  
Marcello Distasio ◽  
Todd Cushman ◽  
Brian Wilson ◽  
Ben Hart ◽  
...  

Author(s):  
Yinhuan ZHANG ◽  
Qinkun XIAO ◽  
Chaoqin CHU ◽  
Heng XING

The proposed multi-modal data fusion method, based on IA-Net and a CHMM, is designed to solve the problem that incomplete target behavior information in complex family environments leads to low accuracy in human behavior recognition. Two improved neural networks (STA-ResNet50 and STA-GoogleNet) are each combined with an LSTM to form two IA-Nets, which extract RGB and skeleton modal behavior features from video. The two modal feature sequences are input into the CHMM to construct a probability fusion model for multi-modal behavior recognition. Experimental results show that the proposed human behavior recognition model achieves higher accuracy than previous fusion methods on the HMDB51 and UCF101 datasets. New contributions: an attention mechanism is introduced to improve the efficiency of video target feature extraction and utilization; a skeleton-based feature extraction framework is proposed that can be used for human behavior recognition in complex environments; and probability theory and neural networks are combined in the field of human behavior recognition, providing a new method for multi-modal information fusion.
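The fusion stage above feeds two per-modality feature/score streams into a probabilistic model. As a simplified stand-in for the CHMM, the sketch below fuses hypothetical per-frame class probabilities from an RGB stream and a skeleton stream with a product rule (sum of log-probabilities under an independence assumption); all names, shapes, and values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_streams(rgb_probs, skel_probs):
    """rgb_probs, skel_probs: (T, C) arrays of per-frame class
    probabilities from the two modality networks. Returns the fused
    class label and the joint log-scores."""
    # Product-rule fusion: sum log-probabilities over time and
    # modalities, then pick the class with the highest joint score.
    log_joint = np.log(rgb_probs + 1e-12) + np.log(skel_probs + 1e-12)
    scores = log_joint.sum(axis=0)  # shape (C,)
    return int(np.argmax(scores)), scores

# Two frames, three hypothetical action classes
rgb = np.array([[0.7, 0.2, 0.1],
                [0.6, 0.3, 0.1]])
skel = np.array([[0.5, 0.4, 0.1],
                 [0.6, 0.2, 0.2]])

label, scores = fuse_streams(rgb, skel)
```

The design point this illustrates is late fusion: each modality is scored independently and the probabilities are combined afterward, so one degraded modality (e.g. occluded RGB in a cluttered home scene) can be compensated by the other, which is the motivation the abstract gives for fusing RGB with skeleton features.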

