Multiple Sound Sensors And Fusion In Modern CNN-Based Machine State Prediction
Abstract

In the era of Industry 4.0, Smart Manufacturing (SM) is growing in popularity as a vision for the factory of the future. A critical component of SM is effective machine monitoring. For legacy machines, indirect monitoring with Internet of Things (IoT) sensors is preferred over direct hardware modification. Machine tools are composed of rotary components and therefore emit acoustic and vibratory signals. However, sound data cannot easily serve as a direct representation of machine status due to noise, variable time course, and irregular sampling. In this paper, we attempt to bridge this gap through machine learning techniques and auditory monitoring of the auxiliary components (i.e., coolant, chip conveyor, and mist collector) as well as the running state of the main spindle of machine tools. Multi-label classification with a Convolutional Neural Network (CNN) was used to train models that monitor machine tools from sound features. An external microphone and three internal sound sensors were attached to both a mill and a lathe machine. Mel-frequency cepstral coefficient (MFCC) features were extracted as the sound features. Classification performance was compared across individual sensor locations and early sensor fusion. The results show that the sensor fusion approach achieved the highest F1 score on both machine systems.
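As a rough illustration of the early-fusion setup described above, the sketch below stacks per-sensor MFCC matrices into one multi-channel CNN input and pairs it with a multi-label target vector. All shapes and the random stand-in data are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Assumed, illustrative dimensions: 4 sound sensors (1 external microphone
# + 3 internal sensors), each yielding an (n_mfcc, n_frames) MFCC matrix.
N_SENSORS, N_MFCC, N_FRAMES = 4, 13, 128

# Random stand-ins for per-sensor MFCC features (real features would be
# extracted from the recorded audio of each sensor).
mfccs = [np.random.randn(N_MFCC, N_FRAMES) for _ in range(N_SENSORS)]

# Early fusion: concatenate the sensors as channels of a single input
# tensor before the CNN, rather than training one model per sensor.
fused = np.stack(mfccs, axis=0)  # shape: (N_SENSORS, N_MFCC, N_FRAMES)

# Multi-label target: one independent on/off state per monitored component,
# e.g. [coolant, chip conveyor, mist collector, main spindle].
labels = np.array([1, 0, 1, 1])

print(fused.shape)   # (4, 13, 128)
print(labels.sum())  # number of components currently running
```

In this layout each component's state is predicted independently (multi-label), so the CNN output layer would use one sigmoid unit per component rather than a single softmax over joint states.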