A Study on the Real-time Fire Classification Algorithm Based on Stacked LSTM

2021 ◽  
Vol 21 (3) ◽  
pp. 93-104
Author(s):  
Yoseob Heo ◽  
Seongho Seo ◽  
We Shim ◽  
Jongseok Kang

Several researchers have been drawn to the development of fire detectors in recent years in order to protect people and property from the catastrophic disaster of fire. However, studies related to fire monitoring are affected by some unique characteristics of fire sensor signals, such as time dependence and the complexity of the signal pattern arising from the variety of fire types. In this study, a new deep learning-based approach is proposed that accurately classifies various types of fire situations in real time using data obtained from multidimensional-channel fire sensor signals. The contribution of this study is a stacked-LSTM model that accounts for the time-series characteristics of sensor data and the complexity of multidimensional channel sensing data, yielding a new fire monitoring framework for fire identification that improves on existing fire detectors.
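As a concrete illustration of the kind of input such a sequence model consumes, a minimal windowing sketch is shown below. All names, channel choices, and sizes are illustrative assumptions, not the authors' code:

```python
def make_windows(signal, window, step):
    """Slice a list of per-timestep channel vectors into overlapping windows.

    signal: list of [ch1, ch2, ...] readings, one entry per timestep
    window: number of timesteps per training sequence
    step:   stride between consecutive windows
    Returns a list of (window x channels) sequences, the typical
    fixed-length input format for a stacked LSTM.
    """
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

# Hypothetical 3-channel readings (e.g. smoke, temperature, CO), 10 timesteps
readings = [[0.1 * t, 20 + t, 0.01 * t] for t in range(10)]
windows = make_windows(readings, window=4, step=2)
# -> 4 overlapping sequences, each 4 timesteps x 3 channels
```

Each window would then be paired with a fire-situation label for supervised training.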

2021 ◽  
Author(s):  
Arturo Magana-Mora ◽  
Mohammad AlJubran ◽  
Jothibasu Ramasamy ◽  
Mohammed AlBassam ◽  
Chinthaka Gooneratne ◽  
...  

Abstract Objective/Scope. Lost circulation events (LCEs) are among the top causes of drilling nonproductive time (NPT). The presence of natural fractures and vugular formations causes loss of drilling fluid circulation. Drilling depleted zones with incorrect mud weights can also lead to drilling-induced losses. LCEs can further develop into additional drilling hazards, such as stuck pipe incidents, kicks, and blowouts. An LCE is traditionally diagnosed only when there is a reduction of mud volume in the mud pits (in the case of moderate losses) or a reduction of the mud column in the annulus (in total losses). Using machine learning (ML) to predict the presence of a loss zone and to estimate fracture parameters in advance is very beneficial, as it can immediately alert the drilling crew so that they can take the required actions to mitigate or cure LCEs. Methods, Procedures, Process. Although different computational methods have been proposed for the prediction of LCEs, there is a need to further improve the models and reduce the number of false alarms. Robust and generalizable ML models require a sufficiently large amount of data that captures the different parameters and scenarios representing an LCE. For this, we derived a framework that automatically searches through historical data, locates LCEs, and extracts the surface drilling and rheology parameters surrounding such events. Results, Observations, and Conclusions. We derived different ML models using various algorithms and evaluated them with a data-split technique applied at the level of wells to find the most suitable model for the prediction of an LCE. From the model comparison, the random forest classifier achieved the best results and successfully predicted LCEs before they occurred. The developed LCE model is designed to be implemented in the real-time drilling portal as an aid to the drilling engineers and the rig crew to minimize or avoid NPT. Novel/Additive Information. The main contribution of this study is the analysis of real-time surface drilling parameters and sensor data to predict an LCE from a statistically representative number of wells. The large-scale analysis of several wells that appropriately describe the different conditions before an LCE is critical for avoiding undertraining or poor model generalization. Finally, we formulated the prediction of LCEs as a time-series problem and considered parameter trends to accurately determine the early signs of LCEs.
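The well-level data split mentioned above, where no well contributes samples to both the training and the test set, can be sketched as follows. Function names and the sample layout are illustrative assumptions, not the authors' pipeline:

```python
import random

def split_by_well(samples, test_frac=0.3, seed=42):
    """Split (well_id, features, label) samples at the level of wells,
    so no well appears in both training and test sets. This avoids
    leakage of well-specific signal patterns between the two sets."""
    wells = sorted({well_id for well_id, _, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(wells)
    n_test = max(1, int(len(wells) * test_frac))
    test_wells = set(wells[:n_test])
    train = [s for s in samples if s[0] not in test_wells]
    test = [s for s in samples if s[0] in test_wells]
    return train, test

# Hypothetical samples from 5 wells, 4 samples per well
samples = [(f"well_{i % 5}", [float(i)], i % 2) for i in range(20)]
train, test = split_by_well(samples)
```

Splitting at the well level (rather than at the sample level) is what makes the evaluation reflect generalization to unseen wells.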


Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in research involving health-medical information analysis based on artificial intelligence, especially deep learning techniques, has recently been increasing. Most of the research in this field has focused on searching for new knowledge for predicting and diagnosing disease by revealing the relation between disease and various informative features of data. These features are extracted by analyzing various clinical pathology data, such as EHRs (electronic health records), and academic literature using techniques such as data analysis and natural language processing. However, more research and interest are needed in applying the latest advanced artificial intelligence-based data analysis techniques to bio-signal data, which are continuous physiological records such as EEG (electroencephalography) and ECG (electrocardiogram). Unlike other types of data, applying deep learning to bio-signal data, which takes the form of real-number time series, raises many issues that need to be resolved in preprocessing, learning, and analysis. Such issues include feature selection being left entirely to the model, learning components that act as black boxes, difficulty in recognizing and identifying effective features, high computational complexity, etc. In this paper, to solve these issues, we provide an encoding-based Wave2vec time series classifier model, which combines signal processing and deep learning-based natural language processing techniques. To demonstrate its advantages, we provide the results of three experiments conducted with EEG data from the University of California, Irvine, a real-world benchmark bio-signal dataset. After converting the bio-signals (in the form of waves), which are real-number time series, into a sequence of symbols, or a sequence of wavelet patterns that are converted into symbols, through encoding, the proposed model vectorizes the symbols by learning the sequence using deep learning-based natural language processing.
Models for each class can be constructed by learning from the vectorized wavelet patterns and training data, and the implemented models can be used for prediction and diagnosis of diseases by classifying new data. The proposed method enhances data readability and makes feature selection and learning processes more intuitive by converting real-number time series into sequences of symbols. In addition, it facilitates intuitive and easy recognition and identification of influential patterns. Furthermore, it facilitates real-time, large-capacity data analysis, which is essential in the development of real-time diagnosis systems, by drastically reducing computational complexity, without degrading analysis performance, through data simplification in the encoding process.
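The core encoding step, turning a real-number series into a sequence of symbols, can be sketched with a simple equal-width quantizer. The binning scheme and names below are illustrative assumptions, not the paper's exact Wave2vec encoding:

```python
def encode_symbols(series, n_bins=4, alphabet="abcd"):
    """Quantize a real-valued series into equal-width bins over its
    range and map each bin to a symbol, turning the signal into a
    'word' that a language-model-style classifier can consume."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant series
    out = []
    for x in series:
        idx = min(int((x - lo) / width), n_bins - 1)  # clamp the maximum
        out.append(alphabet[idx])
    return "".join(out)

wave = [0.0, 0.2, 0.9, 1.0, 0.5, 0.1]
word = encode_symbols(wave)  # "aaddca"
```

Once signals are symbol sequences, word-embedding techniques from natural language processing can be applied to them directly, which is the idea the abstract describes.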


Author(s):  
Meenakshi Narayan ◽  
Ann Majewicz Fey

Abstract Sensor data predictions could significantly improve the accuracy and effectiveness of modern control systems; however, existing machine learning and advanced statistical techniques for forecasting time series data require significant computational resources, which is not ideal for real-time applications. In this paper, we propose a novel forecasting technique called Compact Form Dynamic Linearization Model-Free Prediction (CFDL-MFP), which is derived from the existing model-free adaptive control framework. This approach enables near real-time forecasts of seconds' worth of time-series data due to its basis as an optimal control problem. The performance of the CFDL-MFP algorithm was evaluated using four real datasets: force sensor readings from a surgical needle, ECG measurements for heart rate, atmospheric temperature, and Nile water level recordings. On average, the forecast accuracy of CFDL-MFP was 28% better than the benchmark Autoregressive Integrated Moving Average (ARIMA) algorithm. The maximum computation time of CFDL-MFP was 49.1 ms, which was 170 times faster than ARIMA. Forecasts were best for deterministic data patterns, such as the ECG data, with a minimum average root mean squared error of (0.2±0.2).
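A heavily simplified sketch of the compact-form dynamic-linearization idea is shown below: a one-step forecast of the form y(k+1) ≈ y(k) + φ(k)·Δy(k), with a projection-style update of the pseudo partial derivative φ. This is an assumption-laden toy, not the authors' CFDL-MFP implementation:

```python
def cfdl_forecast(y, mu=1.0, eta=0.5, phi0=1.0):
    """One-step-ahead forecasts via compact-form dynamic linearization:
    predict y(k+1) = y(k) + phi(k) * dy(k), where phi is a pseudo
    partial derivative updated by a projection step from the most
    recent observed change. mu and eta are the usual step-size and
    weighting parameters of such schemes (values here are arbitrary)."""
    phi = phi0
    preds = []
    for k in range(1, len(y)):
        dy_prev = y[k] - y[k - 1]
        preds.append(y[k] + phi * dy_prev)  # forecast for step k+1
        if k >= 2:
            # projection-style update of phi from the latest change
            dy_old = y[k - 1] - y[k - 2]
            err = (y[k] - y[k - 1]) - phi * dy_old
            phi += eta * dy_old * err / (mu + dy_old * dy_old)
    return preds

# On a noise-free linear ramp the scheme yields exact one-step forecasts
preds = cfdl_forecast([0.0, 1.0, 2.0, 3.0, 4.0])  # [2.0, 3.0, 4.0, 5.0]
```

Because only phi is updated at each step, the per-step cost is constant, which illustrates why such model-free schemes can be far cheaper than refitting an ARIMA model.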


2018 ◽  
Vol 7 (2.7) ◽  
pp. 1107 ◽  
Author(s):  
S Sagar Imambi ◽  
P Vidyullatha ◽  
M V.B.T.Santhi ◽  
P Haran Babu

Electronic equipment and sensors spontaneously create diagnostic data that needs to be stored and processed in real time. It is not only difficult to keep up with the huge amount of data, but considerably more challenging to analyze it. Big Data is providing many opportunities for organizations to evolve their processes as they move beyond regular BI activities, such as using data to populate reports. Predicting future values is one of the requirements for any business organization. The experimental results show that a time series model with ARIMA(3,0,1)(1,0,0) is the best fit for predicting future values of the sales.
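For illustration, the autoregressive part of such a model can be fitted by ordinary least squares. This is a sketch only: a full ARIMA(3,0,1)(1,0,0) fit also estimates the moving-average and seasonal terms jointly, typically via a library such as statsmodels; all names here are illustrative:

```python
import numpy as np

def fit_ar(series, p=3):
    """Ordinary-least-squares fit of the AR(p) part of an ARIMA(p,0,q)
    model: regress y[t] on its p most recent lags."""
    y = np.asarray(series, dtype=float)
    # Column i holds lag i+1: y[t-i-1] for each target y[t], t = p..n-1
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    coeffs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coeffs

def forecast_next(series, coeffs):
    """One-step-ahead forecast from the fitted AR coefficients."""
    recent = list(series[-len(coeffs):])[::-1]  # newest value first
    return float(sum(c * v for c, v in zip(coeffs, recent)))

# Noise-free AR(3) series: y[t] = 0.5*y[t-1] + 0.3*y[t-2] + 0.1*y[t-3]
series = [1.0, 2.0, 3.0]
for _ in range(18):
    series.append(0.5 * series[-1] + 0.3 * series[-2] + 0.1 * series[-3])
coeffs = fit_ar(series[:-1])
# forecast_next(series[:-1], coeffs) recovers series[-1] almost exactly
```

On noise-free data the one-step forecast is essentially exact; real sales data would of course carry noise and seasonality, which is what the MA and seasonal terms of the fitted model capture.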


2019 ◽  
Vol 6 (4) ◽  
pp. 6618-6628 ◽  
Author(s):  
Yi-Fan Zhang ◽  
Peter J. Thorburn ◽  
Wei Xiang ◽  
Peter Fitch

Author(s):  
Robert D. Chambers ◽  
Nathanael C. Yoder

We present and benchmark FilterNet, a flexible deep learning architecture for time series classification tasks, such as activity recognition via multichannel sensor data. It adapts popular CNN and CNN-LSTM motifs that have excelled in activity recognition benchmarks, implementing them in a many-to-many architecture to markedly improve frame-by-frame accuracy, event segmentation accuracy, model size, and computational efficiency. We propose several model variants, evaluate them alongside other published models using the Opportunity benchmark dataset, demonstrate the effect of model ensembling and of altering key parameters, and quantify the quality of the models’ segmentation of discrete events. We also offer recommendations for use and suggest potential model extensions. FilterNet advances the state of the art in all measured accuracy and speed metrics on the benchmarked dataset, and it can be extensively customized for other applications.
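"Many-to-many" here means the network emits one prediction per input frame rather than one per window. A minimal same-padding 1-D convolution illustrates how outputs stay time-aligned with inputs; the names and values below are illustrative, not FilterNet code:

```python
import numpy as np

def conv1d_same(x, kernel):
    """Many-to-many 1-D convolution with 'same' padding: exactly one
    output value per input frame, so per-frame labels remain aligned
    with the original timesteps (kernel length should be odd)."""
    pad = len(kernel) // 2
    xp = np.pad(np.asarray(x, dtype=float), pad)
    return np.array([float(np.dot(xp[i:i + len(kernel)], kernel))
                     for i in range(len(x))])

frames = [1.0, 2.0, 3.0, 4.0]
out = conv1d_same(frames, kernel=[0.0, 1.0, 0.0])  # identity kernel
# out has one value per frame: [1.0, 2.0, 3.0, 4.0]
```

Stacking such frame-aligned layers (with learned kernels, pooling, and recurrent layers) is what allows a many-to-many architecture to score every timestep in a single pass instead of re-running a classifier per sliding window.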


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Xuguang Liu

Aiming at the anomaly detection problem in sensor data, traditional algorithms usually focus only on the continuity of single-source data and ignore the spatiotemporal correlation between multisource data, which reduces detection accuracy to a certain extent. Besides, due to the rapid growth of sensor data, centralized cloud computing platforms cannot meet the real-time detection needs of large-scale abnormal data. To solve this problem, a real-time detection method for abnormal IoT sensor data based on edge computing is proposed. Firstly, sensor data are represented as time series, and the K-nearest neighbor (KNN) algorithm is used to detect outliers and isolated groups of the data stream in the time series. Secondly, an improved DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is proposed that considers the spatiotemporal correlation between multisource data. Its parameters can be set according to sample characteristics within the window, which overcomes the slow convergence caused by using global parameters on large samples, and it makes full use of data correlation to complete anomaly detection. Moreover, this paper proposes a distributed anomaly detection model for sensor data based on edge computing, which processes data on computing resources as close to the data source as possible and thus improves the overall efficiency of data processing. Finally, simulation results show that the proposed method has higher computational efficiency and detection accuracy than traditional methods and is feasible in practice.
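The first stage, KNN-based outlier detection on the time series, can be sketched as follows. This is a 1-D toy with illustrative names, not the paper's implementation:

```python
def knn_outlier_scores(points, k=2):
    """Score each 1-D point by its mean distance to its k nearest
    neighbours; large scores flag outliers and isolated groups,
    which is the criterion KNN-based detectors typically use."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

readings = [1.0, 1.1, 0.9, 1.05, 8.0]  # the last reading is anomalous
scores = knn_outlier_scores(readings)
# the anomalous reading receives by far the largest score
```

In the distributed setting the abstract describes, such scoring would run on edge nodes near the sensors, with only flagged points forwarded upstream.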


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Mahbubul Alam ◽  
Laleh Jalali ◽  
Ahmed Farahat ◽  
Chetan Gupta

Abstract Prognostics aims to predict the degradation of equipment by estimating its remaining useful life (RUL) and/or the failure probability within a specific time horizon. The high demand for equipment prognostics in industry has propelled researchers to develop robust and efficient prognostics techniques. Among data-driven techniques for prognostics, machine learning and deep learning (DL) based techniques, particularly recurrent neural networks (RNNs), have gained significant attention due to their ability to effectively represent the degradation progress by modeling dynamic temporal behaviors. RNNs are well known for handling sequential data, especially continuous time-series data that follow certain patterns. Such data are usually obtained from sensors attached to the equipment. However, in many scenarios sensor data are not readily available and are often tedious to acquire. Conversely, event data are more common and can easily be obtained from the error logs saved by the equipment and transmitted to a backend for further processing. Nevertheless, performing prognostics using event data is substantially more difficult than with sensor data due to the unique nature of event data. Though event data are sequential, they differ from other seminal sequential data, such as time series and natural language, in the following ways: i) unlike time-series data, events may appear at any time, i.e., the appearance of events lacks periodicity; ii) unlike natural languages, event data do not follow any specific linguistic rules. Additionally, there may be significant variability in the event types appearing within the same sequence. Therefore, this paper proposes an RUL estimation framework to effectively handle this intricate and novel type of event data. The proposed framework takes discrete events generated by a piece of equipment (e.g., type, time, etc.)
as input, and generates for each new event an estimate of the remaining operating cycles in the life of a given component. To evaluate the efficacy of our proposed method, we conduct extensive experiments using benchmark datasets such as the CMAPSS data after converting the time-series data in these datasets to sequential event data. The event data conversion is carried out by careful exploration and application of appropriate transformation techniques to the time series. To the best of our knowledge, this is the first time such an event-based RUL estimation problem has been introduced to the community. Furthermore, we propose several deep learning and machine learning based solutions for the event-based RUL estimation problem. Our results suggest that the deep learning models, 1D-CNN, LSTM, and multi-head attention, show similar RMSE, MAE, and Score performance. As expected, the XGBoost model achieves lower performance compared to the deep learning models, since XGBoost fails to capture ordering information from the sequence of events.
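One simple way to convert a time series into sequential event data, in the spirit of the transformation described above, is band-crossing detection. The thresholding rule and names are illustrative assumptions, not the paper's CMAPSS procedure:

```python
def to_events(series, low, high):
    """Convert a real-valued sensor series into discrete events by
    band crossings: emit ('HIGH', t) when the signal rises above
    `high` and ('LOW', t) when it falls below `low`. The resulting
    (type, time) pairs are aperiodic, like real equipment logs."""
    events = []
    for t in range(1, len(series)):
        if series[t - 1] <= high < series[t]:
            events.append(("HIGH", t))
        elif series[t - 1] >= low > series[t]:
            events.append(("LOW", t))
    return events

temps = [5, 6, 11, 9, 4, 2, 7]
events = to_events(temps, low=3, high=10)  # [("HIGH", 2), ("LOW", 5)]
```

Note how the output illustrates both properties the abstract highlights: the events carry no fixed period, and their sequence follows no linguistic rule, which is what makes event-based RUL estimation harder than time-series-based RUL estimation.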

