A Combined Wavelet Transform and Recurrent Neural Networks Scheme for Identification of Hydrocarbon Reservoir Systems From Well Testing Signals

2020 ◽  
Vol 143 (1) ◽  
Author(s):  
Mehrafarin Moghimihanjani ◽  
Behzad Vaferi

Abstract Oil and gas are likely the most important sources of heat and energy in both domestic and industrial applications. Hydrocarbon reservoirs containing these fuels must be characterized to extract the maximum amount of their fluids. Well testing analysis is a valuable tool for the characterization of hydrocarbon reservoirs. Handling and analyzing long-term, noise-contaminated well testing signals with traditional methods is a challenging task. Therefore, in this study, a novel paradigm that combines wavelet transform (WT) and recurrent neural networks (RNN) is proposed for analyzing long-term well testing signals. The WT not only reduces the dimension of the pressure derivative (PD) signals during feature extraction but also efficiently removes noise. The RNN identifies the reservoir type and its boundary condition from the features extracted by the WT. Results confirmed that a five-level decomposition of the PD signals with the Bior 1.1 filter provides the best features for classification. A two-layer RNN model with nine hidden neurons correctly detects 3202 out of 3298 hydrocarbon reservoir systems. The performance of the proposed approach is checked using smooth, noisy, and real field well testing signals. Moreover, the predictive accuracy of WT-RNN is compared with those of traditional RNN, conventional multilayer perceptron (MLP) neural networks, and coupled WT-MLP approaches. The results confirm that the coupled WT-RNN paradigm is superior to the other considered intelligent models.
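The feature-extraction step described above can be sketched in a few lines. The Bior 1.1 wavelet is equivalent to the Haar wavelet, so a minimal, dependency-free sketch of the five-level decomposition is possible with pairwise averages and differences; the synthetic pressure-derivative curve and function names below are illustrative assumptions, not the paper's actual data or code.

```python
import numpy as np

def haar_step(x):
    # One analysis level of the Haar (equivalently Bior 1.1) wavelet:
    # pairwise averages give the approximation, differences the detail.
    x = x[: len(x) // 2 * 2]                    # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)      # detail coefficients
    return a, d

def wavelet_features(signal, levels=5):
    # Five-level decomposition: the level-5 approximation serves as a
    # compact, denoised feature vector; the detail bands carry most of
    # the high-frequency noise and are discarded here.
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, _ = haar_step(a)
    return a

# Synthetic 1024-sample stand-in for a noisy pressure-derivative signal.
rng = np.random.default_rng(0)
t = np.linspace(0.01, 10.0, 1024)
pd_signal = np.log(t) + 0.05 * rng.standard_normal(t.size)

features = wavelet_features(pd_signal, levels=5)
print(features.shape)  # 1024 / 2**5 -> (32,)
```

Five levels shrink a 1024-sample signal to 32 coefficients, which matches the paper's motivation of reducing the input dimension before the RNN classifier.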

Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2178
Author(s):  
Yi-Chung Chen ◽  
Tsu-Chiang Lei ◽  
Shun Yao ◽  
Hsin-Ping Wang

Airborne particulate matter 2.5 (PM2.5) can have a profound effect on the health of the population. Many researchers have reported highly accurate numerical predictions based on raw PM2.5 data imported directly into deep learning models; however, there is still considerable room for improvement in implementation costs due to heavy computational overhead. From the perspective of environmental science, PM2.5 values at a given location can be attributed to local sources as well as external sources. Local sources tend to have a dramatic short-term impact on PM2.5 values, whereas external sources tend to have subtler but longer-lasting effects. When PM2.5 from both sources is present at the same time, this combination of effects can undermine the predictive accuracy of the model. This paper presents a novel combinational Hammerstein recurrent neural network (CHRNN) to enhance predictive accuracy and overcome the heavy computational and monetary burden imposed by deep learning models. The CHRNN comprises a base neural network tasked with learning gradual (long-term) fluctuations in conjunction with add-on neural networks that deal with dramatic (short-term) fluctuations. The CHRNN can be coupled with a random forest model to determine the degree to which short-term effects influence long-term outcomes. We also developed novel feature selection and normalization methods to enhance prediction accuracy. Using real-world air quality and PM2.5 measurement datasets from Taiwan, the precision of the proposed system in the numerical prediction of PM2.5 levels was comparable to that of state-of-the-art deep learning models, such as deep recurrent neural networks and long short-term memory, despite far lower implementation costs and computational overhead.
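The long-term/short-term split at the heart of this design can be illustrated with a toy decomposition. This is only a sketch of the idea, not the paper's CHRNN: a moving average stands in for the base network's long-term component, the residual stands in for the add-on networks' short-term component, and a fixed `alpha` stands in for the random-forest estimate of short-term influence.

```python
import numpy as np

def moving_average(x, w=24):
    # Long-term component: 24-step moving average (e.g. hourly data).
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

# Synthetic PM2.5-like series: a weekly cycle (external sources),
# sparse spikes (local sources), and measurement noise.
rng = np.random.default_rng(1)
hours = np.arange(720)
baseline = 20 + 10 * np.sin(2 * np.pi * hours / 168)
spikes = 15 * (rng.random(720) < 0.05)
pm25 = baseline + spikes + rng.normal(0.0, 1.0, 720)

long_term = moving_average(pm25)      # target for the "base" model
short_term = pm25 - long_term         # target for the "add-on" model
alpha = 0.8                           # hypothetical RF-derived weight
prediction = long_term + alpha * short_term
print(prediction.shape)  # (720,)
```

Modeling the two components separately lets a small network handle each regime, which is the source of the computational savings the abstract claims over monolithic deep models.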


2008 ◽  
Vol 71 (13-15) ◽  
pp. 2481-2488 ◽  
Author(s):  
Anton Maximilian Schaefer ◽  
Steffen Udluft ◽  
Hans-Georg Zimmermann

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 182296-182308 ◽  
Author(s):  
Ao Yu ◽  
Hui Yang ◽  
Ting Xu ◽  
Baoguo Yu ◽  
Qiuyan Yao ◽  
...  

2020 ◽  
Vol 34 (04) ◽  
pp. 4115-4122
Author(s):  
Kyle Helfrich ◽  
Qiang Ye

Several variants of recurrent neural networks (RNNs) with orthogonal or unitary recurrent matrices have recently been developed to mitigate the vanishing/exploding gradient problem and to model long-term dependencies of sequences. However, with the eigenvalues of the recurrent matrix on the unit circle, the recurrent state retains all input information which may unnecessarily consume model capacity. In this paper, we address this issue by proposing an architecture that expands upon an orthogonal/unitary RNN with a state that is generated by a recurrent matrix with eigenvalues in the unit disc. Any input to this state dissipates in time and is replaced with new inputs, simulating short-term memory. A gradient descent algorithm is derived for learning such a recurrent matrix. The resulting method, called the Eigenvalue Normalized RNN (ENRNN), is shown to be highly competitive in several experiments.
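The core idea of pulling eigenvalues off the unit circle and into the unit disc can be shown with a minimal sketch. The heuristic below simply rescales a recurrent matrix to a target spectral radius below one; the paper derives a gradient-based learning procedure instead, so the function here is an illustrative assumption, not the ENRNN algorithm.

```python
import numpy as np

def normalize_spectral_radius(W, target=0.9):
    # Rescale W so its largest eigenvalue modulus equals `target` < 1.
    # With all eigenvalues strictly inside the unit disc, repeated
    # application of W dissipates old inputs over time (short-term memory),
    # unlike an orthogonal/unitary matrix, which retains them forever.
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (target / radius)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_norm = normalize_spectral_radius(W, target=0.9)
print(round(float(np.max(np.abs(np.linalg.eigvals(W_norm)))), 3))  # 0.9
```

Because eigenvalues scale linearly with the matrix, the rescaled matrix's spectral radius lands exactly on the target, while an orthogonal matrix (radius exactly 1) would preserve every past input's contribution indefinitely.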


1996 ◽  
Vol 7 (6) ◽  
pp. 1329-1338 ◽  
Author(s):  
Tsungnan Lin ◽  
B.G. Horne ◽  
P. Tino ◽  
C.L. Giles
