Multi-Sensor Data Fusion for Rotating Machinery Fault Diagnosis Using Residual Convolutional Neural Network

2021
Author(s): Tingli Xie, Xufeng Huang, Seung-Kyum Choi

Abstract: Diagnosis of mechanical faults in manufacturing systems is critical for ensuring safety and reducing cost. With the development of sensor and data-transmission technologies, measuring systems can easily acquire massive multi-sensor data. Traditional fault diagnosis methods usually depend on features extracted manually by experts; this feature extraction process is time-consuming and laborious, and it has a significant impact on the final results. Although Deep Learning (DL) provides an end-to-end way to address these drawbacks, intelligent fault diagnosis based on multi-sensor data and data fusion still requires further study. In this work, a novel intelligent diagnosis method based on Multi-Sensor Data Fusion and a Convolutional Neural Network (CNN) is explored, which automatically extracts features from raw signals and achieves superior recognition performance. Firstly, a Multi-Signals-to-RGB-Image conversion method based on Principal Component Analysis (PCA) is applied to fuse multi-signal data into three-channel RGB images, which eliminates the need for handcrafted features and yields feature-level fused information. Then, an improved CNN with residual blocks and the Leaky Rectified Linear Unit (LReLU) is defined and trained on the training samples, balancing computational cost against accuracy. Finally, the testing data are fed into the CNN to obtain the diagnosis results. Two datasets, the KAT bearing dataset and a gearbox dataset, are used to verify the effectiveness of the proposed method, together with a comprehensive comparison against widely used algorithms. The results demonstrate that the proposed method can detect different fault types and outperforms the other methods in classification accuracy: on the KAT bearing dataset and the gearbox dataset, its average prediction accuracies reach 99.99% and 99.98%, respectively, showing that it achieves more reliable results than other DL-based methods.
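
A minimal sketch of the two core steps described above, assuming PyTorch and scikit-learn and hypothetical shapes (eight sensors, 64x64 images, five fault classes); it illustrates PCA-based signal-to-RGB fusion and a small residual CNN with LReLU, not the authors' exact architecture:

```python
# Illustrative sketch only; shapes, channel counts, and class count are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def signals_to_rgb(signals, img_size=64):
    """signals: (n_sensors, n_samples) raw window from multiple sensors.
    PCA reduces the sensor dimension to 3 components, each reshaped into
    one image channel and scaled to [0, 1]."""
    pca = PCA(n_components=3)
    fused = pca.fit_transform(signals.T)               # (n_samples, 3)
    fused = fused[: img_size * img_size]               # keep enough samples to fill the image
    fused = (fused - fused.min(0)) / (fused.max(0) - fused.min(0) + 1e-8)
    return fused.T.reshape(3, img_size, img_size)      # three-channel "RGB" array

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return self.act(out + x)                       # identity shortcut

class ResCNN(nn.Module):
    def __init__(self, n_classes=5):                   # 5 fault classes is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.1),
            ResidualBlock(16), nn.MaxPool2d(2),
            ResidualBlock(16), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        return self.net(x)

# Example: one window of 4096 samples from 8 sensors -> logits over fault classes.
img = signals_to_rgb(np.random.randn(8, 4096))
logits = ResCNN()(torch.tensor(img[None], dtype=torch.float32))
```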

2012, Vol 466-467, pp. 1222-1226
Author(s): Bin Ma, Lin Chong Hao, Wan Jiang Zhang, Jing Dai, Zhong Hua Han

In this paper, we present an equipment fault diagnosis method based on multi-sensor data fusion, aimed at the uncertainty, imprecision and low reliability that arise when a single sensor is used to diagnose equipment faults. A variety of sensors collect data from the diagnosed object, and the data are fused using Dempster-Shafer (D-S) evidence theory; whether a fault has occurred is then diagnosed from the change in confidence and uncertainty. Experimental results show that the D-S evidence theory algorithm reduces the uncertainty of the fault diagnosis results and improves diagnostic accuracy and reliability, and that the method performs better than fault diagnosis based on a single sensor.
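
The fusion step can be illustrated with Dempster's rule of combination; the sketch below uses hypothetical mass assignments from two sensors over a two-fault frame of discernment and is not the paper's implementation:

```python
# Illustrative sketch: Dempster's rule of combination for two sensors
# reporting belief mass over the same set of fault hypotheses.
def combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass (each sums to 1)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb                  # mass falling on the empty set
    k = 1.0 - conflict                               # normalization factor
    return {h: v / k for h, v in combined.items()}

F1, F2, theta = frozenset({"F1"}), frozenset({"F2"}), frozenset({"F1", "F2"})
m_vibration = {F1: 0.6, F2: 0.1, theta: 0.3}         # hypothetical evidence from sensor 1
m_temperature = {F1: 0.7, F2: 0.2, theta: 0.1}       # hypothetical evidence from sensor 2
print(combine(m_vibration, m_temperature))           # belief in F1 rises; uncertainty (theta) shrinks
```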


2014, Vol 678, pp. 238-241
Author(s): Xiang Zhong Meng, Hui Long Liu, Zi Sheng Hou

In this paper, addressing the frequent faults of the main motor of a mine air compressor, we use BP neural network learning algorithms on the basis of multi-sensor data fusion theory. The collected characteristic signals are processed by data fusion to obtain the current motor fault state value. Comparison with the experimental results shows that the approach can effectively realize fault diagnosis for mine equipment.
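
A minimal sketch of such a back-propagation (BP) network in PyTorch, with hypothetical fused-feature names and three assumed motor states; it is illustrative only, not the authors' model:

```python
# Illustrative sketch; feature names, layer sizes, and the three states are assumptions.
import torch
import torch.nn as nn

features = ["vibration_rms", "stator_current", "winding_temp", "noise_level"]  # assumed fused inputs
bp_net = nn.Sequential(nn.Linear(len(features), 16), nn.Sigmoid(),
                       nn.Linear(16, 3))             # 3 states: normal / warning / fault

x = torch.randn(32, len(features))                   # a batch of fused feature vectors
y = torch.randint(0, 3, (32,))                       # labelled fault states
opt = torch.optim.SGD(bp_net.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                                 # standard back-propagation training loop
    opt.zero_grad()
    loss = loss_fn(bp_net(x), y)
    loss.backward()
    opt.step()
```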


Sensors, 2019, Vol 19 (6), pp. 1434
Author(s): Minle Li, Yihua Hu, Nanxiang Zhao, Qishu Qian

Three-dimensional (3D) object detection has important applications in robotics, automatic loading, automatic driving and other scenarios. With the improvement of sensing devices, multi-sensor/multimodal data can be collected from a variety of sensors such as LiDAR and cameras. In order to make full use of this complementary information and improve detection performance, we propose Complex-Retina, a convolutional neural network for 3D object detection based on multi-sensor data fusion. Firstly, a unified architecture with two feature extraction networks is designed, so that features of point clouds and images from different sensors are extracted synchronously. Then, a series of 3D anchors is defined and projected onto the feature maps, where they are cropped into 2D anchors of the same size and fused together. Finally, object classification and 3D bounding box regression are carried out on multiple paths of fully connected layers. The proposed network is a one-stage convolutional neural network, which achieves a balance between accuracy and speed of object detection. Experiments on the KITTI dataset show that the proposed network is superior to the comparison algorithms in average precision (AP) and time consumption, which demonstrates its effectiveness.
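
The anchor projection and feature-level fusion can be sketched as follows, assuming PyTorch/torchvision, hypothetical feature-map sizes, an illustrative camera projection matrix, and RoIAlign as the same-size cropping step; this is not the Complex-Retina code:

```python
# Illustrative sketch; all sizes, the projection matrix, and the BEV box are assumptions.
import torch
from torchvision.ops import roi_align

def project_to_image(corners_3d, P):
    """corners_3d: (8, 3) anchor corners in camera coordinates; P: (3, 4) projection matrix.
    Returns the axis-aligned 2D box enclosing the projected corners."""
    homo = torch.cat([corners_3d, torch.ones(8, 1)], dim=1)      # homogeneous coords (8, 4)
    uv = (P @ homo.T).T                                          # (8, 3)
    uv = uv[:, :2] / uv[:, 2:3]                                  # perspective division
    return torch.stack([uv[:, 0].min(), uv[:, 1].min(),
                        uv[:, 0].max(), uv[:, 1].max()])

# Assumed sizes: image features at stride 4 of a 384x1248 image; BEV features from LiDAR.
img_feat = torch.randn(1, 64, 96, 312)
bev_feat = torch.randn(1, 64, 176, 200)
P = torch.tensor([[700., 0., 600., 0.],                          # illustrative pinhole matrix
                  [0., 700., 180., 0.],
                  [0., 0., 1., 0.]])

corners = torch.randn(8, 3) * 0.5 + torch.tensor([2.0, 1.0, 20.0])   # one 3D anchor's corners
box_img = project_to_image(corners, P)                           # anchor projected into the image
box_bev = torch.tensor([80.0, 90.0, 100.0, 110.0])               # same anchor's footprint in BEV (assumed)

# RoIAlign crops both projections to the same 7x7 size so they can be fused.
rois_img = torch.cat([torch.zeros(1, 1), box_img[None]], dim=1)  # rows: (batch_idx, x1, y1, x2, y2)
rois_bev = torch.cat([torch.zeros(1, 1), box_bev[None]], dim=1)
crop_img = roi_align(img_feat, rois_img, output_size=(7, 7), spatial_scale=0.25)
crop_bev = roi_align(bev_feat, rois_bev, output_size=(7, 7))
fused = torch.cat([crop_img, crop_bev], dim=1)                   # (1, 128, 7, 7) -> shared FC heads
```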

