Novel Application to Recognize a Breakdown Pressure Event on Time Series Frac Data Vs. an Artificial Intelligence Approach

2021
Author(s):
Alberto Jose Ramirez
Jessica Graciela Iriarte

Abstract Breakdown pressure is the peak pressure attained when fluid is injected into a borehole until fracturing occurs. Hydraulic fracturing operations are conducted above the breakdown pressure, at which the rock formation fractures and allows fluids to flow into it. This value is essential for obtaining formation stress measurements. The objective of this study is to automate the selection of breakdown pressure flags on time series fracture data using a novel algorithm in lieu of an artificial neural network. The study is based on high-frequency treatment data collected from a cloud-based software platform. The comma-separated values (.csv) files include treating pressure (TP), slurry rate (SR), and bottomhole proppant concentration (BHPC), with defined start and end time flags. Using feature engineering, the model calculates the rates of change of treating pressure (dtp_1st), slurry rate (dsr_1st), and bottomhole proppant concentration (dbhpc_1st). An algorithm isolates the initial area of the treatment plot, before proppant reaches the perforations, where the slurry rate is constant and the pressure increases. The first approach uses a neural network trained with 872 stages to isolate the breakdown pressure area. The expert rule-based approach finds the highest pressure spikes where SR is constant. A refining function then finds the maximum treating pressure value and returns its job time as the predicted breakdown pressure flag. Due to the complexity of unconventional reservoirs, the treatment plots may show pressure changes while the slurry rate is constant multiple times during the same stage. This diverse behavior of the breakdown pressure inhibits an artificial neural network's ability to find one "consistent pattern" across the stage, and the multiple patterns found throughout the stage make it difficult to select an area in which to find the breakdown pressure value. In testing, this complex model worked moderately well, but its computational time was too high for deployment.
On the other hand, the automation algorithm uses rules to find the breakdown pressure value along with its location within the stage. The breakdown flag model was validated with 102 stages and tested with 775 stages, returning the locations and values corresponding to the highest pressure points. Results show that 86% of the predicted breakdown pressures are within 65 psi of manually picked values. Automating breakdown pressure recognition is important because it saves time and allows engineers to focus on analytical tasks instead of repetitive data-structuring tasks. It also brings consistency to the data across service providers and basins. In some cases, owing to its ability to zoom in, the algorithm recognized breakdown pressures with higher accuracy than subject matter experts. Comparing the results of the two approaches shows that similar or better results, with lower running times, can be achieved without using complex algorithms.
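The rule-based picking logic described above can be sketched as follows. This is a minimal illustration of the idea (derivative features, constant-slurry-rate mask, maximum-pressure refinement); the function name and threshold values are assumptions, not figures from the paper:

```python
import numpy as np

def find_breakdown_flag(time_s, tp, sr, sr_tol=0.05, dtp_min=1.0):
    """Rule-based sketch of a breakdown-pressure flag pick.

    time_s: job time (s); tp: treating pressure (psi); sr: slurry rate (bpm).
    sr_tol and dtp_min are hypothetical thresholds, not values from the paper.
    """
    dtp = np.gradient(tp, time_s)   # rate of change of treating pressure (dtp_1st)
    dsr = np.gradient(sr, time_s)   # rate of change of slurry rate (dsr_1st)
    # candidate region: slurry rate roughly constant while pressure rises
    mask = (np.abs(dsr) < sr_tol) & (dtp > dtp_min)
    if not mask.any():
        return None
    # refining step: highest treating pressure among the candidates
    idx = np.flatnonzero(mask)
    best = idx[np.argmax(tp[idx])]
    return time_s[best], tp[best]
```

On a synthetic stage with a constant slurry rate and a pressure ramp that peaks and falls, the function returns the job time and value of the peak.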

2021
Vol 11 (9)
pp. 4243
Author(s):
Chieh-Yuan Tsai
Yi-Fan Chiu
Yu-Jen Chen

Nowadays, recommendation systems have been successfully adopted in various online services such as e-commerce, news, and social media. Recommenders give users a convenient and efficient way to find items of interest and increase service providers' revenue. However, many recommenders suffer from the cold start (CS) problem, where only a small number of ratings are available for some new items. To overcome this difficulty, this research proposes a two-stage neural network-based CS item recommendation system. The proposed system includes two major components: the denoising autoencoder (DAE)-based CS item rating (DACR) generator and the neural network-based collaborative filtering (NNCF) predictor. In the DACR generator, a textual description of an item is used as auxiliary content information to represent the item. The DAE is then applied to extract content features from the high-dimensional textual vectors. With the compact content features, a CS item's rating can be efficiently derived from the ratings of similar non-CS items. Second, the NNCF predictor is developed to predict the ratings in the sparse user–item matrix. In the predictor, both sparse binary user and item vectors are projected to dense latent vectors in the embedding layer. Next, the latent vectors are fed into multilayer perceptron (MLP) layers for user–item matrix learning. Finally, appropriate item suggestions can be accurately obtained. Extensive experiments show that the DAE significantly reduces the computational time for item similarity evaluations while preserving the characteristics of the original features. The experiments also show that the proposed NNCF predictor outperforms several popular recommendation algorithms. We further demonstrate that the proposed CS item recommender can achieve up to 8% MAE improvement compared with adding no CS item rating.
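The rating-generation step for a cold-start item can be sketched as below. Here the compact content vectors are taken as given (in the paper they come from the DAE), and the choice of cosine similarity and a top-k weighted average is an illustrative assumption:

```python
import numpy as np

def cold_start_rating(cs_feat, item_feats, item_ratings, k=3):
    """Estimate a cold-start item's rating from the k most similar
    non-cold-start items, using cosine similarity on compact content
    features. k and the similarity measure are illustrative choices."""
    # cosine similarity between the CS item and every known item
    norms = np.linalg.norm(item_feats, axis=1) * np.linalg.norm(cs_feat)
    sims = item_feats @ cs_feat / np.where(norms == 0, 1, norms)
    top = np.argsort(sims)[::-1][:k]      # indices of the k nearest items
    w = sims[top]
    # similarity-weighted average of the neighbours' ratings
    return float(np.dot(w, item_ratings[top]) / w.sum())
```

With the derived CS ratings filled in, the densified user–item matrix is then passed to the NNCF predictor.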


Energies
2021
Vol 14 (9)
pp. 2710
Author(s):
Shivam Barwey
Venkat Raman

High-fidelity simulations of turbulent flames are computationally expensive when using detailed chemical kinetics. For practical fuels and flow configurations, chemical kinetics can account for the vast majority of the computational time due to the highly non-linear nature of multi-step chemistry mechanisms and the inherent stiffness of combustion chemistry. While reducing this cost has been a key focus area in combustion modeling, the recent growth in graphics processing units (GPUs) that offer very fast arithmetic processing, combined with the development of highly optimized libraries for artificial neural networks used in machine learning, provides a unique pathway for acceleration. The goal of this paper is to recast Arrhenius kinetics as a neural network using matrix-based formulations. Unlike ANNs that rely on data, this formulation does not require training and exactly represents the chemistry mechanism. More specifically, connections between the exact matrix equations for kinetics and traditional artificial neural network layers are used to enable the usage of GPU-optimized linear algebra libraries without the need for modeling. Regarding GPU performance, speedup and saturation behaviors are assessed for several chemical mechanisms of varying complexity. The performance analysis is based on trends for absolute compute times and throughput for the various arithmetic operations encountered during the source term computation. The goals are ultimately to provide insights into how the source term calculations scale with the reaction mechanism complexity, which types of reactions benefit the GPU formulations most, and how to exploit the matrix-based formulations to provide optimal speedup for large mechanisms by using sparsity properties. Overall, the GPU performance for the species source term evaluations reveals many informative trends with regards to the effect of cell number on device saturation and speedup. 
Most importantly, it is shown that the matrix-based method enables highly efficient GPU performance across the board, achieving near-peak performance in saturated regimes.
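The core idea of recasting Arrhenius kinetics in matrix form can be sketched as follows: rate constants are evaluated in log space, rates of progress become a matrix–vector product over log concentrations (the operation that maps onto a dense network layer), and species source terms follow from the net stoichiometric matrix. The function below is a simplified forward-rate-only illustration, not the paper's implementation:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def source_terms(T, conc, logA, beta, Ea, nu_f, nu):
    """Matrix-form Arrhenius source-term sketch (forward rates only).

    nu_f: reactant stoichiometric coefficients (n_species x n_reactions);
    nu:   net coefficients (products minus reactants). All illustrative.
    """
    # log-space rate constants: ln k = ln A + beta ln T - Ea / (R T)
    lnk = logA + beta * np.log(T) - Ea / (R * T)
    # rates of progress via a matrix-vector product in log space --
    # structurally the same as a linear neural-network layer
    lnq = lnk + nu_f.T @ np.log(conc)
    q = np.exp(lnq)
    # species source terms: net stoichiometry times rates of progress
    return nu @ q
```

Because the heavy operations are dense matrix products, they can be dispatched to GPU-optimized linear algebra (BLAS/cuBLAS-style) routines without any training or modeling step.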


Author(s):  
Eren Bas
Erol Egrioglu
Emine Kölemen

Background: Intuitionistic fuzzy time series forecasting methods have begun to be used to solve forecasting problems in the literature. Intuitionistic fuzzy time series methods use both membership and non-membership values as auxiliary variables in their models. Because intuitionistic fuzzy sets take the hesitation margin into consideration, intuitionistic fuzzy time series models use more information than fuzzy time series models. The background of this study concerns intuitionistic fuzzy time series forecasting methods. Objective: The study aims to propose a novel intuitionistic fuzzy time series method. The proposed method is expected to produce better forecasts than selected benchmarks. Method: The proposed method uses a bootstrapped combined Pi-Sigma artificial neural network and intuitionistic fuzzy c-means. The combined Pi-Sigma artificial neural network is proposed to model the intuitionistic fuzzy relations. Results and Conclusion: The proposed method is applied to different sets of S&P 500 stock exchange time series and provides more accurate forecasts than established benchmarks for these series. The most important contribution of the proposed method is that it enables statistical inference: probabilistic forecasting, confidence intervals, and the empirical distribution of the forecasts. Moreover, the proposed method is better than the selected benchmarks for the S&P 500 data set.
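For readers unfamiliar with the Pi-Sigma architecture mentioned above, its forward pass is a product ("pi") of linear summing ("sigma") units passed through an activation. The sketch below shows a generic single-output Pi-Sigma unit, not the paper's bootstrapped combined variant:

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Forward pass of a Pi-Sigma unit.

    W: (n_units, n_inputs) weights of the summing units; b: (n_units,).
    Output = sigmoid of the product of the linear sums."""
    h = W @ x + b                       # sigma layer: linear sums
    net = np.prod(h)                    # pi layer: product of the sums
    return 1.0 / (1.0 + np.exp(-net))   # sigmoid output
```

The multiplicative pi layer lets the network capture higher-order input interactions with far fewer trainable weights than a comparable multilayer perceptron.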


2018
Vol 8 (9)
pp. 1613
Author(s):  
Utku Kose

The prediction of future events based on available time series measurements is a relevant research area, specifically for healthcare applications such as prognostics and assessment of interventions. Electroencephalogram time series, a measure of brain dynamics, are routinely analyzed to obtain information about current, as well as future, mental states, and to detect and diagnose diseases or environmental factors. Due to their chaotic nature, electroencephalogram time series require specialized techniques for effective prediction. The objective of this study was to introduce a hybrid system developed with artificial intelligence techniques to deal with electroencephalogram time series. Both artificial neural networks and the ant-lion optimizer, a recent intelligent optimization technique, were employed to model the underlying system and perform prediction applications over electroencephalogram time series. According to the findings, the system can successfully predict the future states of the target time series, and it even outperforms some other hybrid artificial neural network-based systems and alternative time series prediction approaches from the literature.
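The general pattern of the hybrid, training a small neural network's weights with a derivative-free metaheuristic to predict the next value of a chaotic series from lagged inputs, can be sketched as below. A plain population-based random search is used here as a stand-in for the ant-lion optimizer (which has its own elite/roulette mechanics); everything in the sketch is an illustrative assumption:

```python
import numpy as np

def mlp_predict(w, x, n_hidden=4):
    """Tiny one-hidden-layer MLP; w is one flat weight vector."""
    d = x.size
    W1 = w[: d * n_hidden].reshape(n_hidden, d)
    b1 = w[d * n_hidden : d * n_hidden + n_hidden]
    W2 = w[d * n_hidden + n_hidden : -1]
    b2 = w[-1]
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def train_by_search(X, y, n_hidden=4, iters=300, pop=20, seed=0):
    """Derivative-free training sketch: a population of perturbed weight
    vectors is evaluated each iteration and the best (lowest MSE) is
    kept. This plain random search stands in for the ant-lion optimizer."""
    rng = np.random.default_rng(seed)
    n_w = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    best, best_err = rng.normal(size=n_w), np.inf
    for _ in range(iters):
        cand = best + rng.normal(scale=0.2, size=(pop, n_w))
        for w in cand:
            err = np.mean([(mlp_predict(w, x, n_hidden) - t) ** 2
                           for x, t in zip(X, y)])
            if err < best_err:
                best, best_err = w, err
    return best, best_err
```

On a lagged sine series (a simple stand-in for chaotic EEG data), the search drives the prediction error well below the variance of the target.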


Author(s):  
Sanjeev Karmakar
Manoj Kumar Kowar
Pulak Guhathakurta

The objective of this study is to expand and evaluate the back-propagation artificial neural network (BPANN) and to apply it to identifying the internal dynamics of highly dynamic systems such as long-range total rainfall time series. This objective is pursued via a comprehensive review of the literature (1978-2011). It is found that detailed discussion of ANN architectures for this task is rare in the literature, although various applications of ANNs are available. The detailed architecture of the BPANN and its parameters, i.e., learning rate, number of hidden layers, number of neurons in the hidden layers, number of input vectors in the input layer, and initial and optimized weights, as well as the designed learning algorithm, observations of local and global minima, and the results, are discussed. It is observed that reaching the global minimum is difficult and training is prone to instability; nevertheless, attainment of the global minimum during training is discussed. It is found that applying the BPANN to identifying the internal dynamics of, and predicting, long-range total annual rainfall produces good results. The results, which are explained through the strong association between the rainfall predictors, i.e., climate parameters (independent variables), and total annual rainfall (the dependent variable), are also presented in this paper.
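The parameters the abstract enumerates (learning rate, hidden neurons, weight initialization) fit in a compact back-propagation loop. The sketch below is a minimal one-hidden-layer BPANN trained by gradient descent on mean squared error; all values are illustrative, not the paper's configuration:

```python
import numpy as np

def bpann_train(X, y, n_hidden=5, lr=0.05, epochs=2000, seed=1):
    """Minimal back-propagation sketch: one tanh hidden layer, linear
    output, full-batch gradient descent. Returns weights and final MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(n_hidden, X.shape[1]))  # initial weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=n_hidden)
    b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        # forward pass
        H = np.tanh(X @ W1.T + b1)            # hidden activations
        err = (H @ W2 + b2) - y               # output error
        # backward pass: gradients of mean squared error
        gW2 = H.T @ err / n
        gb2 = err.mean()
        dH = np.outer(err, W2) * (1 - H ** 2)  # tanh derivative
        gW1 = dH.T @ X / n
        gb1 = dH.mean(axis=0)
        # gradient-descent update controlled by the learning rate
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    H = np.tanh(X @ W1.T + b1)
    return (W1, b1, W2, b2), float(np.mean((H @ W2 + b2 - y) ** 2))
```

With a fixed learning rate and random initialization, such a loop can stall in local minima, which is the training instability the abstract alludes to; restarts with different seeds are the usual remedy.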

