Machine Learning Techniques for Fault Diagnosis of Rotating Machines Using Spectrum Image of Vibration Orbits

2020 ◽  
Author(s):  
Clayton Eduardo Rodrigues ◽  
Cairo Lúcio Nascimento Júnior ◽  
Domingos Alves Rade

A comparative analysis of machine learning techniques for rotating machine fault diagnosis based on vibration spectrum images is presented. Feature extraction for different types of faults, such as unbalance, misalignment, shaft crack, rotor-stator rub, and hydrodynamic instability, is performed by processing the spectral image of vibration orbits acquired during the rotating machine run-up. The classifiers are trained with simulation data and tested with both simulation and experimental data. The experimental data are obtained from measurements performed on a rotor-disk system test rig supported on hydrodynamic bearings. To generate the simulated data, a numerical model of the rotating system is developed using the Finite Element Method (FEM). Deep learning, ensemble, and traditional classification methods are evaluated. The ability of the methods to generalize the image classification is evaluated based on their performance in classifying experimental test patterns that were not used during training. The results suggest that, despite its considerable computational cost, the method based on a Convolutional Neural Network (CNN) presents the best performance for classification of faults based on spectral images.
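A minimal sketch of the CNN-based approach, assuming the orbit spectra are rendered as 64x64 grayscale images with five fault classes; the layer sizes and training settings are illustrative placeholders, not the authors' architecture:

```python
# Minimal CNN sketch for classifying vibration-orbit spectrum images.
# Assumes 64x64 grayscale inputs and five fault classes (unbalance,
# misalignment, shaft crack, rub, instability); sizes are illustrative.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 5

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on simulated spectra, evaluate on held-out experimental spectra,
# mirroring the train-on-simulation / test-on-experiment protocol.
x_sim = np.random.rand(100, 64, 64, 1)         # placeholder simulated images
y_sim = np.random.randint(0, NUM_CLASSES, 100)  # placeholder fault labels
model.fit(x_sim, y_sim, epochs=5, verbose=0)
```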

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1511
Author(s):  
Saeed Mian Qaisar ◽  
Alaeddine Mihoub ◽  
Moez Krichen ◽  
Humaira Nisar

The usage of wearable gadgets is growing in cloud-based health monitoring systems, where signal compression and computational and power efficiency play an imperative part. In this context, we propose an efficient method for the diagnosis of cardiovascular diseases based on electrocardiogram (ECG) signals. The method combines multirate processing, wavelet decomposition, frequency content-based subband coefficient selection, and machine learning techniques. Multirate processing and feature selection are used to reduce the amount of information processed, thus reducing the computational complexity of the proposed system relative to equivalent fixed-rate solutions. Frequency content-dependent subband coefficient selection enhances the compression gain and reduces the transmission activity and the computational cost of the subsequent cloud-based classification. We used the MIT-BIH dataset for our experiments. To avoid overfitting and bias, the performance of the considered classifiers is studied using five-fold cross-validation (5CV) and a novel partial blind protocol. The designed method achieves a more than 12-fold computational gain while assuring an appropriate signal reconstruction. The compression gain is 13 times that of fixed-rate counterparts, and the highest classification accuracies are 97.06% and 92.08% for the 5CV and partial blind cases, respectively. These results suggest the feasibility of detecting cardiac arrhythmias using the proposed approach.
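A hedged sketch of the wavelet decomposition with frequency content-based subband selection on an ECG window; the wavelet choice, decomposition level, and energy threshold are assumptions, not the paper's exact settings:

```python
# Wavelet decomposition of an ECG segment with energy-based subband
# selection; low-energy subbands are zeroed to raise compression gain.
import numpy as np
import pywt

def select_subbands(ecg_segment, wavelet="db4", level=4, energy_frac=0.05):
    coeffs = pywt.wavedec(ecg_segment, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    total = energies.sum()
    # Keep only subbands carrying a meaningful share of signal energy.
    kept = [c if e / total >= energy_frac else np.zeros_like(c)
            for c, e in zip(coeffs, energies)]
    reconstructed = pywt.waverec(kept, wavelet)
    return kept, reconstructed

segment = np.random.randn(1024)  # placeholder for a MIT-BIH ECG window
kept, rec = select_subbands(segment)
```

The retained coefficients would then feed the cloud-side classifier, while the reconstruction checks that compression has not destroyed the signal.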


2021 ◽  
Author(s):  
Thiago Abdo ◽  
Fabiano Silva

The purpose of this paper is to analyze the use of different machine learning approaches and algorithms to be integrated, as automated assistance, into a tool that aids the creation of new annotated datasets. We evaluate how they scale in an environment without dedicated machine learning hardware. In particular, we study their impact on a dataset with few examples and on one that is still being constructed. We experiment with a deep learning algorithm (BERT) and with classical learning algorithms of lower computational cost (Word2Vec and GloVe embeddings combined with Random Forest and SVM classifiers). Our experiments show that deep learning algorithms have a performance advantage over classical techniques. However, their high computational cost makes them inadequate for an environment with reduced hardware resources. We therefore conduct simulations using active and iterative machine learning techniques to assist the creation of new datasets, relying on the classical learning algorithms because of their lower computational cost. The knowledge gathered from our experimental evaluation aims to support the creation of a tool for building new text datasets.
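An illustrative pool-based active-learning loop of the kind simulated here; the Random Forest stands in for the W2V/GloVe + RF/SVM pipelines, and the embeddings and labels are placeholders:

```python
# Pool-based active learning with uncertainty sampling: at each round,
# query the pool examples the current model is least confident about.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_learning(X, y, n_init=10, n_rounds=5, batch=5):
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), n_init, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(n_rounds):
        clf.fit(X[labeled], y[labeled])
        proba = clf.predict_proba(X[pool])
        # Lowest top-class probability = highest uncertainty.
        order = np.argsort(proba.max(axis=1))[:batch]
        queried = [pool[i] for i in order]
        labeled += queried
        pool = [i for i in pool if i not in queried]
    return clf

X = np.random.rand(200, 50)        # placeholder document embeddings
y = np.random.randint(0, 2, 200)   # placeholder annotations
model = active_learning(X, y)
```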


2020 ◽  
Vol 10 (19) ◽  
pp. 6896
Author(s):  
Paloma Tirado-Martin ◽  
Judith Liu-Jimenez ◽  
Jorge Sanchez-Casanova ◽  
Raul Sanchez-Reillo

Currently, machine learning techniques are successfully applied in biometrics, and in Electrocardiogram (ECG) biometrics specifically. However, few works deal with different physiological states in the user, which can produce significant heart rate variations, a key matter when working with ECG biometrics. Machine learning techniques simplify the feature extraction process, which can sometimes be reduced to a fixed segmentation. The database used includes visits taken on two different days and under three different conditions (sitting down, standing up, and after exercise), which is not common in current public databases. These characteristics allow studying differences among users under different scenarios, which may affect the pattern in the acquired data. A Multilayer Perceptron (MLP) is used as the classifier to form a baseline, as it has a simple structure that has provided good results in the state of the art. This work studies its behavior in ECG verification using QRS complexes, finding its best hyperparameter configuration through tuning. The final performance is calculated considering different visits for enrolment and verification. Differentiation of the QRS complexes is also tested, as it is already required for detection, showing that a simple first differentiation gives good results in comparison with similar state-of-the-art works. It also reduces the computational cost by avoiding complex transformations and using only one type of signal. When varying the number of complexes, the best results are obtained with 100 and 187 complexes in enrolment, yielding Equal Error Rates (EER) that range between 2.79–4.95% and 2.69–4.71%, respectively.
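A sketch of the first-differentiation preprocessing followed by an MLP verifier; the 187-sample segment length echoes the abstract, while the hidden-layer size and data are placeholders to be tuned:

```python
# First-difference each fixed-length QRS complex, then score genuine
# vs. impostor attempts with a small MLP; scores feed an EER computation.
import numpy as np
from sklearn.neural_network import MLPClassifier

def preprocess(qrs_complexes):
    # Simple first difference: cheap, avoids complex transforms,
    # and keeps the discriminative shape of the complex.
    return np.diff(qrs_complexes, axis=1)

X = np.random.randn(300, 187)        # placeholder QRS complexes
y = np.random.randint(0, 2, 300)     # 1 = genuine user, 0 = impostor
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(preprocess(X), y)
scores = clf.predict_proba(preprocess(X))[:, 1]  # verification scores
```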


2019 ◽  
Vol 9 (24) ◽  
pp. 5502 ◽  
Author(s):  
Baher Azzam ◽  
Freia Harzendorf ◽  
Ralf Schelenz ◽  
Walter Holweger ◽  
Georg Jacobs

White etching crack (WEC) failure is a failure mode that affects bearings in many applications, including wind turbine gearboxes, where it results in high, unplanned maintenance costs. WEC failure is currently unpredictable, and its root causes are not yet fully understood. While WECs were produced under controlled conditions in several past investigations, converging the findings from the different combinations of factors that led to WECs in different experiments remains a challenge. This challenge is tackled in this paper using machine learning (ML) models capable of capturing patterns in high-dimensional data belonging to several experiments, in order to identify the variables most influential on WEC risk. Three different ML models were designed and applied to a dataset containing roughly 700 high- and low-risk oil compositions to identify the constituent chemical compounds that make a given oil composition high-risk with respect to WECs. This includes the first application of a purpose-built neural network-based feature selection method. Out of 21 compounds, eight were identified as influential by models based on random forests and artificial neural networks. Association rules were also mined from the data to investigate the relationship between compound combinations and WEC risk, leading to results supporting those of the previous analyses. In addition, the identified compound with the highest influence was shown, in a separate investigation involving physical tests, to carry high WEC risk. The presented methods can be applied to other experimental data where a high number of measured variables potentially influence a certain outcome and where there is a need to identify the variables with the highest influence.
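A hedged sketch of the random-forest side of this analysis: ranking compounds by impurity-based feature importance. The 700-sample, 21-compound shape follows the abstract; the data and the importance threshold are placeholders:

```python
# Rank oil-composition compounds by their influence on WEC risk using
# random forest feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(700, 21)          # placeholder compound concentrations
y = np.random.randint(0, 2, 700)     # 1 = high WEC risk, 0 = low risk

forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(X, y)

# Compounds above the mean importance are flagged as influential,
# echoing the paper's eight-of-21 finding.
importances = forest.feature_importances_
influential = np.flatnonzero(importances > importances.mean())
print("influential compound indices:", influential)
```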


2021 ◽  
Vol 33 (4) ◽  
pp. 227-240
Author(s):  
Daria Igorevna Romanova

We calibrate the k-ε turbulence model for free-surface flows in a channel or on a slope. To calibrate the turbulence model, an experiment is carried out in an inclined rectangular research tray. In the experiment, pressure values in the flow are measured at different distances from the bottom using a Pitot tube; after transforming the data, the flow velocity profile is obtained. The k-ε turbulence model is calibrated against the experimental data using the Nelder-Mead optimization algorithm. The calibrated turbulence model is then used to calculate the outburst of a lake near the Maliy Azau glacier on Elbrus (Central Caucasus).
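A minimal sketch of this calibration loop, assuming a measured velocity profile and Nelder-Mead as stated; the forward model here is a simple log-law surrogate standing in for the actual CFD solver, and all numbers are placeholders:

```python
# Calibrate model parameters by minimizing the misfit between a
# predicted velocity profile and Pitot-tube measurements (Nelder-Mead).
import numpy as np
from scipy.optimize import minimize

z = np.linspace(0.005, 0.05, 10)          # distances from the bottom [m]
u_meas = 2.0 + 0.4 * np.log(z / 0.001)    # placeholder measured profile

def predicted_profile(params, z):
    u_star, log_z0 = params
    # Log-law surrogate for the turbulence-model prediction.
    return (u_star / 0.41) * (np.log(z) - log_z0)

def misfit(params):
    return np.sum((predicted_profile(params, z) - u_meas) ** 2)

result = minimize(misfit, x0=[0.3, np.log(0.002)], method="Nelder-Mead")
print("calibrated parameters:", result.x)
```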


2019 ◽  
Vol 5 (4) ◽  
pp. eaau6792 ◽  
Author(s):  
Jordan Hoffmann ◽  
Yohai Bar-Sinai ◽  
Lisa M. Lee ◽  
Jovana Andrejevic ◽  
Shruti Mishra ◽  
...  

Machine learning has gained widespread attention as a powerful tool to identify structure in complex, high-dimensional data. However, these techniques are ostensibly inapplicable for experimental systems where data are scarce or expensive to obtain. Here, we introduce a strategy to resolve this impasse by augmenting the experimental dataset with synthetically generated data of a much simpler sister system. Specifically, we study spontaneously emerging local order in crease networks of crumpled thin sheets, a paradigmatic example of spatial complexity, and show that machine learning techniques can be effective even in a data-limited regime. This is achieved by augmenting the scarce experimental dataset with inexhaustible amounts of simulated data of rigid flat-folded sheets, which are simple to simulate and share common statistical properties. This considerably improves the predictive power in a test problem of pattern completion and demonstrates the usefulness of machine learning in bench-top experiments where data are good but scarce.
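A conceptual sketch of the augmentation strategy, under heavy assumptions: a classifier trained on plentiful simulated examples mixed with a few experimental ones, then tested on experimental data only. The feature dimension, model, and data are placeholders:

```python
# Augment a scarce experimental dataset with cheap simulated data,
# train on the mixture, evaluate on held-out experimental examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_exp, y_exp = rng.normal(size=(40, 16)), rng.integers(0, 2, 40)      # scarce
X_sim, y_sim = rng.normal(size=(4000, 16)), rng.integers(0, 2, 4000)  # cheap

X_train = np.vstack([X_sim, X_exp[:20]])
y_train = np.concatenate([y_sim, y_exp[:20]])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on held-out experimental data:",
      clf.score(X_exp[20:], y_exp[20:]))
```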


Author(s):  
Vladimir Cherepanov ◽  
Elzbieta Richter-Was ◽  
Zbigniew Andrzej Was

The status of the τ lepton decay Monte Carlo generator TAUOLA and its main recent applications are reviewed. It is underlined that, in recent efforts on the development of new hadronic currents, the multi-dimensional nature of the distributions of the experimental data must be treated with great care. Studies of H → ττ, τ → hadrons indeed demonstrate that the multi-dimensional nature of the distributions is important and available for the evaluation of observables where τ leptons are used to constrain experimental data. For that part of the presentation, the use of the TAUOLA program for the phenomenology of H and Z decays at the LHC is discussed, in particular in the context of Higgs boson parity measurements with the use of machine learning techniques. Some additions relevant for QED lepton pair emission and electroweak corrections are mentioned as well.


2019 ◽  
Vol 211 ◽  
pp. 04006 ◽  
Author(s):  
Amy Lovell ◽  
Arvind Mohan ◽  
Patrick Talou ◽  
Michael Chertkov

Having accurate measurements of fission observables is important for a variety of applications, ranging from energy to non-proliferation and from defense to astrophysics. Because not all of these data can be measured, it is necessary to be able to calculate these observables accurately as well. In this work, we exploit Monte Carlo and machine learning techniques to reproduce fragment mass and kinetic energy yields, both for phenomenological models and in a model-free way. We begin with the spontaneous fission of 252Cf, for which abundant experimental data exist, to validate our approach, with the ultimate goal of creating a global yield model in order to predict quantities where data are not currently available.
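As a hedged illustration of the phenomenological side, a two-Gaussian fragment-mass-yield curve for 252Cf(sf), symmetric about half the compound mass; the peak position and width are illustrative numbers, not evaluated data:

```python
# Two-Gaussian phenomenological mass-yield curve: a light-fragment peak
# plus its heavy-fragment mirror about A_CN / 2, normalized to 200%.
import numpy as np

def mass_yield(A, A_peak=108.0, sigma=7.0, A_cn=252.0):
    light = np.exp(-0.5 * ((A - A_peak) / sigma) ** 2)
    heavy = np.exp(-0.5 * ((A - (A_cn - A_peak)) / sigma) ** 2)
    y = light + heavy
    return 200.0 * y / y.sum()   # percent yield per mass unit (unit grid)

A = np.arange(70.0, 181.0)
Y = mass_yield(A)
```

Parameters like these could then be fit to (or replaced by) the Monte Carlo and machine-learned yields the abstract describes.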

