Automatic Detection of Manufacturing Equipment Cycles Using Time Series

Author(s):  
Jan-Peter Seevers ◽  
Kristina Jurczyk ◽  
Henning Meschede ◽  
Jens Hesselbach ◽  
John W. Sutherland

Abstract: Manufacturing industry companies are increasingly interested in using less energy in order to enhance competitiveness and reduce environmental impact. To implement technologies and make decisions that lead to lower energy demand, energy/power data are required. All too often, however, energy data are either not available, available but too aggregated to be useful, or in a form that makes information difficult to access. Attention herein is focused on this last point. As a step toward greater energy information transparency and smart energy-monitoring systems, this paper introduces a novel, robust time series-based approach to automatically detect and analyze the electrical power cycles of manufacturing equipment. A new pattern recognition algorithm including a power peak clustering method is applied to a large real-life sensor data set from various machine tools. With the help of synthetic time series, it is shown that a cycle detection accuracy of nearly 100% is realistic, depending on the degree of measurement noise and the measurement sampling rate. Moreover, this paper elucidates how statistical load profiling of manufacturing equipment cycles as well as statistical deviation analyses can be of value for automatic sensor and process fault detection.
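The basic cycle-detection idea can be illustrated with a minimal sketch: segment a power trace into cycles wherever demand rises above an idle baseline. This toy thresholding is a stand-in for the paper's peak-clustering algorithm; the `idle_level` value and the trace are invented for illustration.

```python
import numpy as np

def detect_cycles(power, idle_level, min_len=3):
    """Segment a power trace into equipment cycles.

    A sample is 'active' when power exceeds the idle level; runs of
    consecutive active samples form one cycle. This is a toy threshold
    segmentation, not the paper's clustering method.
    """
    active = power > idle_level
    # rising/falling edges of the active mask
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.r_[0, starts]
    if active[-1]:
        ends = np.r_[ends, len(power)]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]

# two synthetic machining cycles over an idle baseline of ~1 kW
trace = np.array([1, 1, 5, 6, 5, 1, 1, 4, 5, 6, 1], float)
print(detect_cycles(trace, idle_level=2.0))  # [(2, 5), (7, 10)]
```

The `min_len` guard discards spikes shorter than a plausible cycle, which is where measurement noise and sampling rate enter the accuracy trade-off the abstract mentions.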

AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 48-70
Author(s):  
Wei Ming Tan ◽  
T. Hui Teo

Prognostic techniques attempt to predict the Remaining Useful Life (RUL) of a subsystem or a component. Such techniques often use sensor data that are periodically measured and recorded into a time series data set. These multivariate data sets form complex, non-linear inter-dependencies across recorded time steps and between sensors. Many existing prognostic algorithms have started to explore Deep Neural Networks (DNNs) and their effectiveness in the field. Although Deep Learning (DL) techniques outperform traditional prognostic algorithms, the networks are generally complex to deploy or train. This paper proposes a Multi-variable Time Series (MTS) focused approach to prognostics that implements a lightweight Convolutional Neural Network (CNN) with an attention mechanism. The convolution filters extract abstract temporal patterns from the multiple time series, while the attention mechanism reviews the information across the time axis and selects the relevant information. The results suggest that the proposed method not only produces superior RUL estimation accuracy but also trains many times faster than previously reported works. The advantages of deploying the network are also demonstrated on a lightweight hardware platform: it is not only more compact but also more efficient in resource-restricted environments.
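The "review across the time axis" step can be sketched without any deep-learning framework: score each time step, softmax the scores, and pool the per-step features with those weights. The mean-based scoring below is a placeholder for the learned attention weights of the paper's CNN, and the array shapes are assumptions.

```python
import numpy as np

def temporal_attention(features):
    """Softmax-weighted pooling over the time axis.

    features: (T, F) array of per-step feature vectors, e.g. the output
    of 1-D convolutions over sensor channels. The scoring function here
    is a plain mean, a stand-in for learned attention weights.
    """
    scores = features.mean(axis=1)              # (T,) relevance per step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over time
    return weights @ features                   # (F,) attention-pooled summary

rng = np.random.default_rng(0)
feats = rng.normal(size=(30, 8))                # 30 time steps, 8 filters
context = temporal_attention(feats)
print(context.shape)  # (8,)
```

The pooled context vector would then feed a small regression head that outputs the RUL estimate.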


2021 ◽  
Author(s):  
Annette Dietmaier ◽  
Thomas Baumann

The European Water Framework Directive (WFD) commits EU member states to achieving a good qualitative and quantitative status of all their water bodies. The WFD provides a list of actions to be taken to reach this goal. However, the list disregards the specific conditions under which deep (> 400 m b.g.l.) groundwater aquifers form and exist. In particular, deep groundwater fluid composition is influenced by interaction with the rock matrix and other geofluids, and may assume a bad status without any anthropogenic influence. Thus, a new concept for monitoring and modelling this specific kind of aquifer is needed, with status evaluation based on the effects induced by exploitation. Here, we analyze long-term real-life production data series to detect changes in deep groundwater hydrochemistry that might be triggered by balneological and geothermal exploitation. We aim to use these insights to design a set of criteria with which the status of deep groundwater aquifers can be determined quantitatively and qualitatively. Our analysis is based on a unique long-term hydrochemical data set from 8 balneological and geothermal sites in the molasse basin of Lower Bavaria, Germany, and Upper Austria, focused on a predefined set of annual hydrochemical concentration values; the record dates back to 1937. Our methods include developing threshold corridors, within which a good status can be assumed, as well as cluster, correlation, and Piper diagram analyses. We observed strong fluctuations in the hydrochemical characteristics of the molasse basin deep groundwater during the last decades. Special attention is paid to fluctuations that have a clear start and end date and appear correlated with other exploitation activities in the region.
For example, between 1990 and 2020, bicarbonate and sodium values at site F displayed a clear increase, followed by a distinct dip to below-average values and a subsequent return to average values. During the same period, these values showed striking irregularities at site B. Furthermore, we observed fluctuations at several locations that come close to disqualifying quality thresholds commonly used in German balneology. Our preliminary results demonstrate the importance of long-term (multi-decade) time series analysis for quality and quantity assessments of deep groundwater bodies: most fluctuations would remain undetected within a < 5-year window but become distinct irregularities when viewed in the context of multiple decades. As next steps, a quality assessment matrix and threshold corridors will be developed that take these fluctuations into account. This will ultimately aid in assessing the sustainability of deep groundwater exploitation and reservoir management for balneological and geothermal uses.
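The threshold-corridor idea reduces to a simple check: flag any annual concentration value that leaves the band within which good status can be assumed. The bounds and the sodium series below are hypothetical; the paper's corridors are still to be derived from the long-term record.

```python
import numpy as np

def corridor_flags(values, lower, upper):
    """Flag annual concentration values outside a 'good status' corridor.

    lower/upper are illustrative bounds, not values from the study.
    """
    values = np.asarray(values, float)
    return (values < lower) | (values > upper)

# hypothetical annual sodium series (mg/L) with one excursion
sodium = [440, 452, 447, 510, 449]
print(corridor_flags(sodium, lower=400, upper=480).tolist())
# [False, False, False, True, False]
```

Runs of `True` flags with a clear start and end date are exactly the kind of bounded fluctuation the abstract singles out for correlation with exploitation activity.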


2020 ◽  
Vol 35 (2) ◽  
pp. 214-222
Author(s):  
Lisa Cenek ◽  
Liubou Klindziuk ◽  
Cindy Lopez ◽  
Eleanor McCartney ◽  
Blanca Martin Burgos ◽  
...  

Circadian rhythms are daily oscillations in physiology and behavior that can be assessed by recording body temperature, locomotor activity, or bioluminescent reporters, among other measures. These different types of data can vary greatly in waveform, noise characteristics, typical sampling rate, and length of recording. We developed 2 Shiny apps for exploration of these data, enabling visualization and analysis of circadian parameters such as period and phase. Methods include the discrete wavelet transform, sine fitting, the Lomb-Scargle periodogram, autocorrelation, and maximum entropy spectral analysis, giving a sense of how well each method works on each type of data. The apps also provide educational overviews and guidance for these methods, supporting the training of those new to this type of analysis. CIRCADA-E (Circadian App for Data Analysis–Experimental Time Series) allows users to explore a large curated experimental data set with mouse body temperature, locomotor activity, and PER2::LUC rhythms recorded from multiple tissues. CIRCADA-S (Circadian App for Data Analysis–Synthetic Time Series) generates and analyzes time series with user-specified parameters, thereby demonstrating how the accuracy of period and phase estimation depends on the type and level of noise, sampling rate, length of recording, and method. We demonstrate the potential uses of the apps through 2 in silico case studies.
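Of the period-estimation methods listed, autocorrelation is the simplest to sketch: the lag of the first autocorrelation peak after lag zero estimates the rhythm's period. The sampling rate and synthetic temperature rhythm below are invented for illustration and are not from the apps' curated data set.

```python
import numpy as np

def acf_period(x, dt, min_lag=1):
    """Estimate the period as the lag (in time units) of the first
    local maximum of the autocorrelation function after lag zero."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    for k in range(max(min_lag, 1), len(acf) - 1):
        if acf[k] >= acf[k - 1] and acf[k] > acf[k + 1]:
            return k * dt
    return None

t = np.arange(0, 240, 0.5)            # 10 days sampled every 0.5 h
temp = np.sin(2 * np.pi * t / 24)     # noise-free 24 h rhythm
print(acf_period(temp, dt=0.5))       # ~24.0 hours
```

On noisy or sparsely sampled series the first peak blurs or shifts, which is the dependence on noise level and sampling rate that CIRCADA-S lets users explore.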


2020 ◽  
Vol 12 (1) ◽  
pp. 54-61
Author(s):  
Abdullah M. Almarashi ◽  
Khushnoor Khan

The current study focused on modeling time series using the Bayesian Structural Time Series (BSTS) technique on a univariate data set. Real-life secondary data, stock prices for Flying Cement covering a period of one year, were used for the analysis. Statistical results were based on simulation procedures using the Kalman filter and Markov Chain Monte Carlo (MCMC). Though the current study involved stock price data, the same approach can be applied to complex engineering processes involving lead times. Results from the current study were compared with the classical Autoregressive Integrated Moving Average (ARIMA) technique. The BSTS package in R was used to work out the Bayesian posterior sampling distributions. Four BSTS models were applied to a real data set to demonstrate the workings of the BSTS technique. The predictive accuracy of the competing models was assessed using forecast plots and the Mean Absolute Percent Error (MAPE). An easy-to-follow approach was adopted so that both academicians and practitioners can easily replicate the mechanism. Findings from the study revealed that, for short-term forecasting, ARIMA and BSTS are equally good, but for long-term forecasting, BSTS with a local level is the most plausible option.
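The accuracy metric used to rank the competing models is straightforward to state exactly. A minimal MAPE implementation, with hypothetical prices and forecasts standing in for the study's data:

```python
def mape(actual, forecast):
    """Mean Absolute Percent Error: mean of |actual - forecast| / |actual|,
    expressed as a percentage. Assumes no actual value is zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# hypothetical closing prices vs. one model's forecasts
actual = [20.0, 21.0, 20.5, 22.0]
forecast = [19.5, 21.5, 20.0, 21.0]
print(round(mape(actual, forecast), 2))  # 2.97
```

Lower MAPE means better predictive accuracy, so the short- versus long-term comparison in the findings amounts to comparing this number over different forecast horizons.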


2015 ◽  
Vol 26 (3) ◽  
pp. 407-422 ◽  
Author(s):  
Thomas Weyman-Jones ◽  
Júlia Mendonça Boucinha ◽  
Catarina Feteira Inácio

Purpose – The European Union has a strong interest in measuring the efficiency of energy use in households, and this is an area where EDP has done research in both data collection and methodology. This paper reports on a survey of electric energy use in Portuguese households, and reviews and extends the analysis of how efficiently households use electrical energy. The purpose of this paper is to evaluate household electrical energy efficiency in different regions using econometric analysis of the survey data. In addition, the same methodology was applied to a time-series data set to evaluate recent developments in energy efficiency. Design/methodology/approach – The paper describes the application to Portuguese households of a new approach to evaluating energy efficiency, developed by Filippini and Hunt (2011, 2012), in which an econometric energy demand model is estimated to control for the exogenous variables determining energy demand. The variation in energy efficiency over time and space can then be estimated by applying econometric efficiency analysis. Findings – The results obtained allowed the identification of priority regions and consumer bands for reducing inefficiency in electricity consumption. The time-series data set shows that the expected electricity savings from the efficiency measures recently introduced by official authorities were fully realized. Research limitations/implications – This approach gives some guidance on how to introduce electricity-saving measures in a more cost-effective way. Originality/value – This paper outlines a new procedure for developing useful tools for modelling energy efficiency.
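The demand-model-plus-efficiency logic can be sketched with a corrected-OLS simplification: fit a demand model to control for exogenous drivers, then read each household's residual distance above the frontier (the lowest residual) as inefficiency. This is a deliberately simplified stand-in for the Filippini and Hunt stochastic frontier approach, and the income/demand data are simulated.

```python
import numpy as np

def cols_efficiency(X, y):
    """Corrected-OLS efficiency sketch.

    Fit a linear demand model, then score each observation by
    eff = exp(-(residual - min residual)), so the household on the
    frontier scores 1.0 and larger residuals score lower.
    """
    X = np.column_stack([np.ones(len(y)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.exp(-(resid - resid.min()))

rng = np.random.default_rng(1)
income = rng.normal(10, 1, 100)                   # log income (simulated)
demand = 0.5 * income + rng.normal(0, 0.2, 100)   # log electricity demand
eff = cols_efficiency(income.reshape(-1, 1), demand)
print(eff.max())  # 1.0 for the frontier household
```

Grouping the resulting scores by region or consumer band is what surfaces the priority targets mentioned in the findings.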


Sensors ◽  
2020 ◽  
Vol 20 (3) ◽  
pp. 825 ◽  
Author(s):  
Fadi Al Machot ◽  
Mohammed R. Elkobaisi ◽  
Kyandoghere Kyamakya

Due to significant advances in sensor technology, studies of activity recognition have gained interest and maturity in the last few years. Existing machine learning algorithms have demonstrated promising results in classifying activities whose instances have already been seen during training. Activity recognition methods for real-life settings should cover a growing number of activities in various domains, where a significant share of instances will not be present in the training data set. However, covering all possible activities in advance is a complex and expensive task. Concretely, we need a method that can extend the learning model to detect unseen activities without prior knowledge of sensor readings for those activities. In this paper, we introduce an approach that leverages sensor data to discover new, unseen activities that were not present in the training set. We show that sensor readings can lead to promising results for zero-shot learning, whereby the necessary knowledge is transferred from seen to unseen activities using semantic similarity. The evaluation conducted on two data sets extracted from the well-known CASAS datasets shows that the proposed zero-shot learning approach achieves high performance in recognizing new activities unseen during training.
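The semantic-similarity transfer step can be sketched in a few lines: embed the sensor-derived sample and each unseen activity label in a shared space, then assign the label whose prototype is most cosine-similar. The three-dimensional vectors and activity names below are toy stand-ins for the real embeddings such methods use.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_label(sample_embedding, unseen_prototypes):
    """Assign an unseen activity label by maximum semantic similarity.

    sample_embedding: vector inferred from sensor readings;
    unseen_prototypes: {label: semantic vector} for activities that
    never appeared in the training set.
    """
    return max(unseen_prototypes,
               key=lambda lbl: cosine(sample_embedding, unseen_prototypes[lbl]))

protos = {
    "cook_dinner": np.array([0.9, 0.1, 0.0]),
    "watch_tv":    np.array([0.0, 0.2, 0.9]),
}
sample = np.array([0.8, 0.2, 0.1])   # sensor-derived embedding, cooking-like
print(zero_shot_label(sample, protos))  # cook_dinner
```

No training instance of either activity is needed; only the semantic prototypes and a mapping from sensor readings into the same space.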


Author(s):  
Sanjib Kumar Gupta

This paper addresses the issue of detecting dominating failure modes of a system from a two-dimensional warranty data set by analyzing the conditional failure profile of the system. Two testing procedures are proposed: one tests whether any failure mode dominates in a particular time interval, and the other tests whether the failure profile changes from one time interval to another disjoint interval, conditional on a given usage layer. Detecting problematic failure modes early from the conditional failure profile and taking appropriate action to reduce the conditional failure probability of the system can significantly reduce both the tangible and intangible costs of poor reliability in any manufacturing industry. The study of possible changes in conditional failure profiles, in turn, plays a significant role in assessing the field performance of items across time intervals for a particular choice of usage layer. The utility of this study is explored with the help of a real-life data set.
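The flavor of the second test, comparing a failure mode's share between two disjoint intervals within one usage layer, can be illustrated with a generic two-proportion z-test. This is not the paper's exact procedure, and the counts are hypothetical.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference of two proportions with a pooled
    standard error: a generic test for whether a failure mode's
    conditional probability differs between two disjoint intervals."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# mode A's share of all failures, months 0-6 vs. months 7-12 (hypothetical)
z = two_proportion_z(60, 200, 30, 180)
print(abs(z) > 1.96)  # True -> profile change at the 5% level
```

A significant z in the early interval is the kind of early warning that lets a manufacturer act before the failure mode dominates later warranty periods.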


2019 ◽  
Vol 14 ◽  
pp. 155892501988346 ◽  
Author(s):  
Mine Seçkin ◽  
Ahmet Çağdaş Seçkin ◽  
Aysun Coşkun

Although textile production is heavily automation-based, it is still viewed as a virgin area with regard to Industry 4.0. When these developments are integrated into the textile sector, efficiency is expected to increase. When data mining and machine learning studies in the textile sector are examined, a lack of data sharing about production processes becomes apparent, owing to commercial concerns and confidentiality. In this study, a method is presented for simulating a production process and performing regression on the resulting time series data with machine learning. The simulation was prepared for the annual production plan, and the corresponding faults were generated based on information and production data received from a textile glove enterprise. The data set was applied to various supervised machine learning methods to compare their learning performance. The errors that occur in the production process were created using random parameters in the simulation. To verify the hypothesis that these errors can be forecast, various machine learning algorithms were trained on the data set in time series form. The variable showing the number of faulty products could be forecast very successfully, with the random forest algorithm demonstrating the highest success. As these error values gave high accuracy even in a simulation driven by uniformly distributed random parameters, highly accurate forecasts can be expected in real-life applications as well.
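Turning a time series into a supervised regression problem, the setup behind training the models above, means building lag features: each row holds the previous few observations, the target is the next one. The fault counts below are invented, and the least-squares fit is only a placeholder for the random forest that performed best in the study.

```python
import numpy as np

def lag_matrix(series, n_lags):
    """Turn a univariate series into a supervised table: row i holds
    values i..i+n_lags-1, and the target is the value that follows."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

# hypothetical weekly faulty-product counts from the simulation
faults = [5, 7, 6, 8, 9, 7, 10, 11, 9, 12, 13, 11]
X, y = lag_matrix(faults, n_lags=3)

# linear least squares as a stand-in for the random forest regressor
Xb = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
next_week = np.r_[1, faults[-3:]] @ coef   # one-step-ahead forecast
print(X.shape)  # (9, 3)
```

Any regressor, random forest included, plugs into the same `(X, y)` table; only the model-fitting lines change.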


2021 ◽  
Vol 17 (5) ◽  
pp. 155014772110183
Author(s):  
Ziyue Li ◽  
Qinghua Zeng ◽  
Yuchao Liu ◽  
Jianye Liu ◽  
Lin Li

Image recognition is susceptible to interference from the external environment, making it challenging to recognize traffic lights accurately and reliably in all-time, all-weather conditions. This article proposes an improved vision-based traffic light recognition algorithm for autonomous driving that integrates deep learning and multi-sensor data fusion assist (MSDA). We introduce a method to obtain the best size of the region of interest (ROI) dynamically, in four steps. First, based on multi-sensor data (RTK BDS/GPS, IMU, camera, and LiDAR) acquired in a normal environment, we generated a prior map containing sufficient traffic light information. Then, by analyzing the relationship between sensor error and the optimal ROI size, we built an adaptively dynamic adjustment (ADA) model. Next, using the multi-sensor fusion positioning and the ADA model, the optimal ROI is obtained to predict the location of traffic lights. Finally, YOLOv4 is employed to extract and identify the image features. We evaluated our algorithm on a public data set and in an actual city road test at night. The experimental results demonstrate that the proposed algorithm achieves a relatively high accuracy rate in complex scenarios and can promote the engineering application of autonomous driving technology.
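The core geometric intuition, larger positioning error requires a larger ROI so the light cannot fall outside it, can be sketched with a pinhole projection. The lamp size, distance, error, and focal length below are illustrative constants, not values from the article, and this is a simplification of the ADA model, not its implementation.

```python
def roi_width_px(light_width_m, distance_m, pos_error_m, focal_px):
    """Pinhole-projection sketch of adaptive ROI sizing: widen the
    predicted region by the fused-positioning error on both sides so
    the traffic light stays inside the ROI in the worst case."""
    worst_case = light_width_m + 2.0 * pos_error_m
    return focal_px * worst_case / distance_m

# 0.3 m lamp housing seen at 40 m with a 0.5 m positioning error
print(round(roi_width_px(0.3, 40.0, 0.5, focal_px=1200.0), 1))  # 39.0
```

Shrinking the ROI as the positioning error shrinks is what lets the detector (YOLOv4 in the article) run on a small crop instead of the full frame.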


2015 ◽  
Vol 7 (2) ◽  
pp. 289-297 ◽  
Author(s):  
L. Holinde ◽  
T. H. Badewien ◽  
J. A. Freund ◽  
E. V. Stanev ◽  
O. Zielinski

Abstract. The quality of water level time series varies strongly between periods of high- and low-quality sensor data. In this paper we present the processing steps used to generate high-quality water level data from water pressure measured at the Time Series Station (TSS) Spiekeroog. The TSS is positioned in a tidal inlet between the islands of Spiekeroog and Langeoog in the East Frisian Wadden Sea (southern North Sea). The processing steps cover sensor drift, outlier identification, interpolation of data gaps, and quality control. A central step is the removal of outliers. For this process an absolute threshold of 0.25 m per 10 min was selected, which still preserves the water level increase and decrease during extreme events, as shown during the quality control process. A second important feature of the data processing is the interpolation of gappy data, which is accomplished with high confidence in generating trustworthy data. Applying these methods, a 10-year data set (December 2002–December 2012) of water level information at the TSS was processed, resulting in a 7-year time series (2005–2011). Supplementary data are available at doi:10.1594/PANGAEA.843740.
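The outlier and gap-filling steps combine naturally: mark any sample whose 10-min change exceeds the 0.25 m threshold, then fill the marked samples by linear interpolation. The short series below is invented, and the paper's actual pipeline includes further steps (drift correction, quality control) not shown here.

```python
import numpy as np

def clean_water_level(level, max_step=0.25):
    """Mark samples whose change from the previous 10-min sample
    exceeds max_step (m) as outliers, then fill them by linear
    interpolation, mirroring the paper's 0.25 m per 10 min threshold."""
    level = np.asarray(level, float)
    bad = np.zeros(len(level), bool)
    bad[1:] = np.abs(np.diff(level)) > max_step
    good = ~bad
    idx = np.arange(len(level))
    level[bad] = np.interp(idx[bad], idx[good], level[good])
    return level

# a 1 m pressure spike between two 10-min water level samples
series = [1.10, 1.15, 2.20, 1.18, 1.20]
print(clean_water_level(series).round(3).tolist())
```

Note that a simple rate threshold also flags the sample after a spike (its step back down is just as large), which is why such rules must be validated against genuine extreme events, as the paper's quality control does.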

