Feature-Based Multi-Class Classification and Novelty Detection for Fault Diagnosis of Industrial Machinery

2021 ◽  
Vol 11 (20) ◽  
pp. 9580
Author(s):  
Francesca Calabrese ◽  
Alberto Regattieri ◽  
Marco Bortolini ◽  
Francesco Gabriele Galizia ◽  
Lorenzo Visentini

Given the strategic role that maintenance plays in achieving profitability and competitiveness, many industries are dedicating substantial effort and resources to improving their maintenance approaches. The concept of the Smart Factory and the possibility of highly connected plants enable the collection of massive amounts of data, allowing equipment to be monitored continuously and providing real-time feedback on its health status. The main issue industries face is the lack of data corresponding to faulty conditions, owing to the environmental and safety hazards that failed machinery might cause, in addition to production losses and product quality problems. In this paper, a complete and easy-to-implement procedure for streaming fault diagnosis and novelty detection, using different machine learning techniques, is applied to an industrial machinery sub-system. The paper aims to offer practitioners useful guidelines for choosing the best solution for their systems, including a model hyperparameter optimization technique that supports the choice of the best model. Results indicate that the methodology is simple, fast, and accurate. A small amount of training data is enough to guarantee high accuracy and strong generalization ability of the classification models, while the integration of a classifier and an anomaly detector reduces both the number of false alarms and the computational time.
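As an illustration of the classifier-plus-anomaly-detector integration described above, the following is a minimal sketch in Python, assuming scikit-learn; the specific models (a random forest and a one-class SVM) and all parameters are illustrative choices, not those of the paper.

```python
# Hypothetical sketch: pairing a multi-class fault classifier with a
# novelty detector so that unseen operating conditions are flagged
# instead of being force-assigned to a known fault class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))          # stand-in features from known conditions
y_train = rng.integers(0, 3, size=200)       # stand-in labels for known classes

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
novelty = OneClassSVM(nu=0.05).fit(X_train)  # trained on known conditions only

def diagnose(x):
    """Return a known class label, or 'novel' if the sample looks unseen."""
    x = x.reshape(1, -1)
    if novelty.predict(x)[0] == -1:          # -1 => outlier w.r.t. training data
        return "novel"
    return int(clf.predict(x)[0])
```

Gating the classifier behind the novelty check is one way to suppress false alarms on streaming data: samples far from everything seen in training are deferred rather than misclassified.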

2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Glen Debard ◽  
Marc Mertens ◽  
Toon Goedemé ◽  
Tinne Tuytelaars ◽  
Bart Vanrumste

More than thirty percent of persons over 65 fall at least once a year and are often unable to get up again. Camera-based fall detection systems can help by triggering an alarm when falls occur. Previously, we showed that real-life data pose significant challenges, resulting in high false alarm rates. Here, we show three ways to tackle this. First, using a particle filter combined with a person detector increases the robustness of our foreground segmentation, reducing the number of false alarms by 50%. Second, selecting only non-occluded falls for training further decreases the false alarm rate, on average from 31.4 to 26 false alarms per day. Most importantly, this improvement is also reflected in a doubling of the AUC of the precision-recall curve compared to training on all falls. Third, personalizing the detector by adding several days of recordings of the monitored person containing only normal activities, with no fall incidents, to the training data further increases the robustness of our fall detection system. In one case, this reduced the number of false alarms by a factor of 7, while in another the sensitivity increased by 17% for an 11% increase in false alarms.
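The precision-recall AUC used above to compare detector variants can be computed as in the following sketch, using scikit-learn; the labels and scores are placeholders, not the paper's data.

```python
# Minimal sketch of a precision-recall evaluation for a binary fall detector.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])            # 1 = fall occurred
scores = np.array([.1, .4, .8, .3, .9, .7, .2, .5, .6, .1])  # detector confidence

precision, recall, _ = precision_recall_curve(y_true, scores)
pr_auc = auc(recall, precision)
print(f"PR AUC: {pr_auc:.3f}")   # compare this value across detector variants
```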


2021 ◽  
Author(s):  
Jianyuan Cui ◽  
Bin Ren ◽  
Gang Li ◽  
Zhi-Xin Jia

Abstract Fault diagnosis methods (FDMs) are widely used in machine operation and maintenance. However, a wrong decision made by an FDM might cause the machine to shut down unnecessarily. To reduce the number of false alarms, this paper proposes a fault re-decision method based on key-delay technology. The original sensor data are fed into an inner fault diagnosis method (IFDM), and the proposed method employs only the results from the IFDM as its input. The re-decision is then made according to the improved key-delay technique, taking both the current result and previous results into account. To illustrate the proposed method, a case study on a gear is presented, which shows that the proposed method does decrease the false alarm rate.
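The paper does not spell out the key-delay rule, but a minimal sketch of a delay-based re-decision of this general flavor, buffering the IFDM's outputs and confirming a fault only when it persists, might look as follows; the window length, vote threshold, and labels are hypothetical.

```python
# Hedged sketch of a re-decision rule over IFDM outputs: an alarm is raised
# only when a fault label persists across recent decisions, suppressing
# one-off misdiagnoses. All parameters are illustrative.
from collections import deque

class ReDecider:
    def __init__(self, window=5, votes_needed=4):
        self.buffer = deque(maxlen=window)   # most recent IFDM results
        self.votes_needed = votes_needed

    def update(self, ifdm_result):
        """ifdm_result: label from the inner diagnosis method ('ok' or a fault id)."""
        self.buffer.append(ifdm_result)
        faults = [r for r in self.buffer if r != "ok"]
        if len(faults) >= self.votes_needed:
            return max(set(faults), key=faults.count)  # most frequent fault label
        return "ok"   # not enough evidence yet: suppress the alarm

decider = ReDecider()
for r in ["ok", "gear_fault", "gear_fault", "gear_fault", "gear_fault"]:
    print(decider.update(r))   # confirms 'gear_fault' only on the last step
```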


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between businesses and data science. With the involvement of data science, the business goal focuses on deriving valuable insights from available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average, or Flop, applying machine learning techniques for classification and prediction. The first step in building a classifier or prediction model is the learning stage, in which a training data set is used to train the model with some technique or algorithm; the rules generated in this stage form the model and are used to predict future trends in different types of organizations. Methods: Classification and prediction techniques, namely Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN, are applied in order to find the most efficient and effective results. All these functionalities can be applied through GUI-based workflows organized in categories such as Data, Visualize, Model, and Evaluate. Result: The models trained in the learning stage generate rules that are used to predict movie success, and the models are then compared to identify the most effective one. Conclusion: This paper focuses on a comparative analysis based on parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Using such predictions for advertisement propaganda, production houses can plan the best time to release a movie according to its predicted success rate and thus gain higher benefits. Discussion: Data mining is the process of discovering patterns in large data sets, from which relationships can be discovered to solve business problems and predict forthcoming trends. These predictions can help production houses plan their advertising and costs and thereby make a movie more profitable.
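A hedged sketch of the comparative setup described in the Methods, using scikit-learn; the synthetic data stand in for the movie features, and all model settings are defaults rather than the paper's choices.

```python
# Illustrative sketch: the listed classifiers evaluated on one train/test
# split, reporting accuracy and the confusion matrix for each.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for movie features; 5 classes mirror the five outcome labels.
X, y = make_classification(n_samples=500, n_classes=5, n_informative=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "SVM": SVC(), "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(), "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(), "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, y_pred))
    print(confusion_matrix(y_te, y_pred))
```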


Author(s):  
Jianfeng Jiang

Objective: In order to diagnose analog circuit faults correctly, an analog circuit fault diagnosis approach based on wavelet-based fractal analysis and a multiple kernel support vector machine (MKSVM) is presented in this paper. Methods: Time responses of the circuit under different faults are measured, and wavelet-based fractal analysis is then used to process the collected time responses in order to generate features for the signals. Kernel principal component analysis (KPCA) is applied to reduce the dimensionality of the features. Afterwards, the features are divided into training data and testing data. An MKSVM, with its multiple parameters optimized by a chaos particle swarm optimization (CPSO) algorithm, is used to construct the analog circuit fault diagnosis model from the training data. Results: The proposed analog diagnosis approach is demonstrated on a fault diagnosis simulation of a four-opamp biquad high-pass filter. Conclusion: The approach outperforms other commonly used methods in the comparisons.
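The KPCA-plus-SVM stage could look like the following sketch; note that the paper's multiple kernel SVM and chaos particle swarm optimization are replaced here, for illustration only, by a plain SVC tuned with a grid search, and the data are synthetic stand-ins for the wavelet-fractal features.

```python
# Simplified sketch of the reduction-then-classification pipeline:
# KPCA compresses the features, an SVM classifies, and a grid search
# stands in for the CPSO hyperparameter optimization.
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=40, n_classes=4,
                           n_informative=10)   # stand-in for fractal features

pipe = Pipeline([
    ("kpca", KernelPCA(n_components=10, kernel="rbf")),
    ("svm", SVC()),
])
search = GridSearchCV(pipe, {"svm__C": [1, 10, 100],
                             "svm__gamma": ["scale", 0.01, 0.1]}, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```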


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1688
Author(s):  
Luqman Ali ◽  
Fady Alnajjar ◽  
Hamad Al Jassmi ◽  
Munkhjargal Gochoo ◽  
Wasif Khan ◽  
...  

This paper proposes a customized convolutional neural network (CNN) for crack detection in concrete structures. The proposed method is compared to four existing deep learning methods with respect to training data size, data heterogeneity, network complexity, and the number of epochs. The performance of the proposed CNN model is evaluated against pretrained networks, i.e., the VGG-16, VGG-19, ResNet-50, and Inception V3 models, on eight datasets of different sizes created from two public datasets. For each model, the evaluation considered computational time, crack localization results, and classification measures, e.g., accuracy, precision, recall, and F1-score. Experimental results demonstrated that training data size and heterogeneity among data samples significantly affect model performance. All models performed promisingly when trained on a limited amount of diverse data; however, increasing the training data size while reducing its diversity lowered generalization performance and led to overfitting. The proposed customized CNN and the VGG-16 model outperformed the other methods in terms of classification, localization, and computational time on a small amount of data, and the results indicate that these two models provide superior crack detection and localization for concrete structures.
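As a sketch of the transfer-learning baselines used in such comparisons, the following shows a frozen pretrained VGG-16 backbone with a small binary head in Keras; the input size, head layers, and training setup are assumptions, not the paper's configurations.

```python
# Hedged sketch of a VGG-16 transfer-learning baseline for binary
# crack / no-crack classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False   # freeze the pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # crack vs. no crack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets not shown
```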


Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 226
Author(s):  
Lisa-Marie Vortmann ◽  
Leonid Schwenke ◽  
Felix Putze

Augmented reality is the fusion of virtual components with our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye-tracking data collected in augmented reality scenarios. A shallow convolutional neural network classified 3-second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% when the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late-fusion approach that included the recorded eye-tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
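A minimal sketch of a multimodal late-fusion step of the kind described, assuming per-modality classifiers whose class probabilities are combined by a weighted average; the models, weights, and synthetic data are all illustrative.

```python
# Sketch of late fusion: each modality gets its own classifier, and their
# predicted class probabilities are averaged before taking the argmax.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_eeg, X_eye = rng.normal(size=(100, 32)), rng.normal(size=(100, 6))
y = rng.integers(0, 2, size=100)             # 0 = real target, 1 = virtual

clf_eeg = RandomForestClassifier().fit(X_eeg, y)
clf_eye = RandomForestClassifier().fit(X_eye, y)

def fuse(x_eeg, x_eye, w_eeg=0.6):
    """Weighted average of the two modalities' class probabilities."""
    p = (w_eeg * clf_eeg.predict_proba(x_eeg)
         + (1 - w_eeg) * clf_eye.predict_proba(x_eye))
    return p.argmax(axis=1)

print(fuse(X_eeg[:5], X_eye[:5]))
```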


2021 ◽  
Author(s):  
Melissa Latella ◽  
Arjen Luijendijk ◽  
Carlo Camporeale

Coastal sand dunes provide a wide variety of ecosystem services, among which is the protection of inland areas from marine floods. This protection is already fundamental today, and its importance will further increase in the future due to sea level rise and the greater storm intensity induced by climate change. Despite the crucial role of coastal dunes and their potential application in mitigation strategies, coastal squeeze, mainly caused by urban sprawl, is progressively reducing the areas where dunes can freely undergo their dynamics, dramatically impairing their capability to provide ecosystem services.

Aiming to embed the use of satellite images in the study of coastal foredune and beach dynamics, we developed a classification algorithm that uses the satellite images and server-side functions of Google Earth Engine (GEE). The algorithm runs on the GEE Python API and allows the user to retrieve all the images available for the study site and the chosen time period from the selected sensor collection. The algorithm also filters out cloudy and saturated pixels and creates a percentile composite image, over which it applies a random forest classification algorithm. The classification is finally refined by defining a mask for land pixels only.

Depending on the provided training data and sensor selection, the algorithm can produce different outcomes, ranging from sand and vegetation maps to beach width measurements and visualizations of shoreline evolution over time. This is a very versatile tool that can be used in a great variety of applications within the monitoring and understanding of dune-beach systems and the associated coastal ecosystem services. For instance, we show how this algorithm, combined with machine learning techniques and the assimilation of real data, can support the calibration of a coastal model that gives the natural extent of the beach width and can therefore be used to plan restoration activities.
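A hedged sketch of this GEE workflow on the Python API might look as follows; the region, sensor collection, bands, percentile, and the labelled_points training table are placeholder assumptions, not the authors' exact choices.

```python
# Sketch: percentile composite from Sentinel-2 imagery, classified with a
# random forest on Google Earth Engine. `labelled_points` is a hypothetical
# ee.FeatureCollection of training samples with a 'class' property.
import ee
ee.Initialize()

region = ee.Geometry.Rectangle([12.2, 44.2, 12.4, 44.4])   # example coastline
collection = (ee.ImageCollection("COPERNICUS/S2_SR")
              .filterBounds(region)
              .filterDate("2020-01-01", "2020-12-31")
              .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))

bands = ["B2", "B3", "B4", "B8"]
composite = collection.select(bands).reduce(ee.Reducer.percentile([50]))

training = composite.sampleRegions(collection=labelled_points,  # hypothetical table
                                   properties=["class"], scale=10)
classifier = ee.Classifier.smileRandomForest(100).train(
    features=training, classProperty="class",
    inputProperties=composite.bandNames())
classified = composite.classify(classifier)   # sand/vegetation/water map
```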


2021 ◽  
Author(s):  
J. Annrose ◽  
N. Herald Anantha Rufus ◽  
C. R. Edwin Selva Rex ◽  
D. Godwin Immanuel

Abstract Bean, botanically called Phaseolus vulgaris L., belongs to the Fabaceae family. During bean disease identification, unnecessary economic losses occur due to delayed treatment, incorrect treatment, and lack of knowledge. Existing deep learning and machine learning techniques suffer from several issues, such as high computational complexity, the high cost associated with training data, long execution times, noise, high feature dimensionality, low accuracy, and low speed. To tackle these problems, we propose a hybrid deep learning model with an Archimedes optimization algorithm (HDL-AOA) for bean disease classification. In this work, there are five bean classes, of which one is healthy, whereas the remaining four indicate different diseases, namely bean halo blight, Pythium diseases, Rhizoctonia root rot, and anthracnose abnormalities, acquired from the Soybean (Large) Data Set. The hybrid deep learning technique combines wavelet packet decomposition (WPD) and long short-term memory (LSTM). Initially, the WPD decomposes the input images into four sub-series, and for each sub-series an LSTM network is developed. During bean disease classification, the Archimedes optimization algorithm (AOA) enhances the classification accuracy across the multiple single LSTM networks. The HDL-AOA model is implemented for bean disease classification in MATLAB. The proposed model accomplishes a lower MAPE than other existing methods. Finally, the proposed HDL-AOA model achieves excellent classification results on different evaluation measures, such as accuracy, specificity, sensitivity, precision, recall, and F-score.
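The decomposition step could be sketched as follows, assuming PyWavelets and Keras rather than the paper's MATLAB implementation; a level-1 two-dimensional wavelet packet decomposition yields the four sub-series, each feeding its own LSTM. The wavelet, network sizes, and input shape are assumptions, not the paper's settings.

```python
# Sketch: WPD splits an image into four subbands; one LSTM per subband,
# each reading its subband row by row as a sequence.
import numpy as np
import pywt
import tensorflow as tf

image = np.random.rand(64, 64)                         # stand-in grayscale leaf image
wp = pywt.WaveletPacket2D(data=image, wavelet="db4", maxlevel=1)
subbands = [wp[p].data for p in ("a", "h", "v", "d")]  # the four sub-series

def make_lstm(rows, cols):
    return tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(rows, cols)),
        tf.keras.layers.Dense(5, activation="softmax"),  # 5 bean classes
    ])

lstms = [make_lstm(*sb.shape) for sb in subbands]      # one LSTM per subband
```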


Author(s):  
Jianqun Zhang ◽  
Qing Zhang ◽  
Xianrong Qin ◽  
Yuantao Sun

To identify rolling bearing faults under variable load conditions, a method named DISA-KNN is proposed in this paper, based on a feature extraction-domain adaptation-classification strategy. To be specific, time-domain and frequency-domain indicators are used for feature extraction. Discriminative and domain-invariant subspace alignment (DISA) is used to minimize the discrepancy between the data distributions of the training data (source domain) and the testing data (target domain). K-nearest neighbors (KNN) is applied to identify rolling bearing faults. DISA-KNN is validated on experimental signals collected under different load conditions. The identification accuracies obtained by the DISA-KNN method exceed 90% on four datasets, reaching 99.5% on one of them. The strength of the proposed method is further highlighted by comparisons with eight other methods. These results reveal that the proposed method is promising for rolling bearing fault diagnosis in real rotating machinery.
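For orientation, the following sketches the plain subspace alignment that DISA builds on (aligning the PCA bases of source and target before running KNN); the discriminative and domain-invariant extensions of DISA itself are omitted, and the subspace dimension and neighbor count are illustrative.

```python
# Sketch of basic subspace alignment (Fernando et al., 2013) followed by KNN:
# the source PCA basis is rotated toward the target basis, then a classifier
# trained on aligned source features labels the target samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def subspace_align_knn(Xs, ys, Xt, d=10, k=5):
    # Mean-centering inside PCA.transform is skipped here for brevity.
    Ps = PCA(n_components=d).fit(Xs).components_.T   # source basis (features x d)
    Pt = PCA(n_components=d).fit(Xt).components_.T   # target basis
    M = Ps.T @ Pt                                    # alignment matrix
    Xs_aligned = Xs @ Ps @ M                         # source mapped toward target
    Xt_proj = Xt @ Pt                                # target in its own subspace
    return KNeighborsClassifier(n_neighbors=k).fit(Xs_aligned, ys).predict(Xt_proj)
```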


Author(s):  
I.F. Lozovskiy

The use of broadband sounding signals in radars, which has become practical in recent years, leads to a significant reduction in the size of the range resolution elements and, accordingly, in the size of the window in which the training sample is formed; this sample is used to adapt the detection threshold in signal detection algorithms with a constant false alarm rate. In existing radars, such a small window would lead to huge losses. The purpose of this work was to study the most rational options for constructing constant false alarm rate detectors in radars with broadband sounding signals. The problem was solved for the Rayleigh distribution of the noise envelope and for a number of non-Rayleigh laws, namely the Weibull and lognormal distributions, whose appearance is associated with a decrease in the number of reflecting elements in the resolution volume.

For Rayleigh interference, an algorithm is proposed with multi-channel incoherent accumulation of signal amplitudes in range and normalization to the larger of the two estimates of the interference power in the adjacent range segments. Its detection threshold adapts not only to the interference power but also to the magnitude of the power jump in range, which reduces the number of false alarms during sudden changes in the interference power: the increase in the probability of false alarms did not exceed one order of magnitude. This algorithm incurs a certain increase in losses associated with the incoherent accumulation of signals reflected from target elements; these losses can be reduced by somewhat increasing the size of the range segments that make up the window.

Algorithms for detecting broadband signals against interference with non-Rayleigh envelope distributions, Weibull and lognormal, are also studied; they are based on augmenting the detection algorithm with a nonlinear transformation of the sample counts into counts with a Rayleigh distribution. The structure of the detection algorithm remains practically unchanged. Options for detectors of narrowband and broadband signals are considered. It was found that, in contrast to algorithms designed for the Rayleigh distribution, these algorithms provide a stable level of false alarms regardless of the parameter values of the non-Rayleigh interference. To reduce the losses incurred under interference with Rayleigh-distributed amplitudes, two-channel detectors are used, in which one channel is tuned for interference with the Rayleigh distribution and the other for lognormal or Weibull interference; the channels are switched according to special distribution type recognition algorithms. In such detectors, however, there is a certain increase in the probability of false alarms in a rather narrow range of non-Rayleigh interference parameters, where their distribution approaches the Rayleigh distribution. It is shown that when broadband signals are used, there is a noticeable decrease in detection losses under non-Rayleigh noise, owing to the lower detection thresholds afforded by the incoherent accumulation of signal amplitudes in range.
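A minimal sketch of the greatest-of normalization described above, in which the threshold adapts to the larger of the two range-segment power estimates; window sizes and the scale factor are illustrative, and the simulated noise merely stands in for a Rayleigh-envelope (exponential-power) background.

```python
# Sketch of a greatest-of cell-averaging CFAR over a 1-D range profile:
# the cell under test is compared against a threshold scaled from the
# larger of the leading and lagging reference-segment power estimates.
import numpy as np

def go_cfar(x, n_ref=16, n_guard=2, scale=4.0):
    """Return boolean detections for a 1-D power profile x."""
    det = np.zeros_like(x, dtype=bool)
    for i in range(n_ref + n_guard, len(x) - n_ref - n_guard):
        lead = x[i - n_guard - n_ref : i - n_guard].mean()          # leading segment
        lag = x[i + n_guard + 1 : i + n_guard + 1 + n_ref].mean()   # lagging segment
        det[i] = x[i] > scale * max(lead, lag)    # greatest-of normalization
    return det

noise = np.random.exponential(size=1000)   # Rayleigh envelope -> exponential power
noise[500] += 40.0                          # injected target
print(np.flatnonzero(go_cfar(noise)))       # indices of detections
```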

