Deep Neural Network Trains To Pinpoint Microseismic Events in Real Time

2021 ◽  
Vol 73 (03) ◽  
pp. 31-33
Author(s):  
Pat Davis Szymczak

Real-time analysis of microseismic events using data gathered during hydraulic fracturing can give engineers critical feedback on whether a particular fracturing job has achieved its goal of increasing porosity and permeability and boosting stimulated reservoir volume (SRV). Currently, no perfect way exists to understand clearly if a fracturing operation has had the intended effect. Engineers collect data, but the methods used to gather, manually sort, and analyze it provide an inconclusive picture of what really is happening underground.

Daniel Stephen Wamriew, a PhD candidate at the Skolkovo Institute of Science and Technology (Skoltech) in Moscow, said he believes this can change with advances in artificial intelligence and machine learning that can enhance accuracy in determining the location of a microseismic event while obtaining stable source-mechanism solutions, all in real time.

Wamriew presented his research in October at the 2020 SPE Russian Petroleum Technology Conference in Moscow in paper SPE 201925, "Deep Neural Network for Real-Time Location and Moment Tensor Inversion of Borehole Microseismic Events Induced by Hydraulic Fracturing." The paper's coauthors were Marwan Charara of the Aramco Research Center and Evgenii Maltsev of Skoltech. Skoltech is a private institute established in 2011 as part of a multiyear partnership with the Massachusetts Institute of Technology.

"People in the field mainly want to know if they created more fractures and if the fractures are connected," Wamriew explained in a recent interview with JPT. "So, we need to know where exactly the fractures are, and we need to know the orientation (the source mechanism)."

It Starts With Data

"Usually, when you do hydraulic fracturing, a lot of data comes in," Wamriew said. "It is not easy to analyze this data manually because you have to choose what part of the data you deal with, and, in doing that, you might leave out some necessary data that the human eye has missed."

To solve this problem, Wamriew proposes feeding microseismic data gathered during a fracturing job into a convolutional neural network (CNN) that he is constructing (Fig. 1). Humans discard nothing: wave signals from actual events, along with noise of all kinds, go into the machine, and the CNN delivers valuable information to reservoir engineers who want to understand the likely SRV.

Companies today can identify the location of microseismic events even without the help of artificial intelligence (though the techniques are always open to refinement), but determining the orientation, and hence whether and how the fractures are connected, is a difficult and often expensive task that is usually left undone.

"Current source mechanism solutions are largely inconsistent," Wamriew said. "One scientist collects data and performs the moment tensor inversion, and another does the same and gets different results, even if they both use the same algorithm. When we handle data manually, we choose the process, and, in doing so, we introduce errors at every step because we are truncating, rounding up, and rounding down. We end up with something far from reality."
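The core operation such a CNN applies to a raw trace can be illustrated with a minimal numpy sketch: a single 1-D convolutional filter slides over a noisy waveform, and the strongest response flags a candidate event onset. The synthetic trace, the fixed filter, and the thresholding are all invented for illustration; the paper's actual network architecture, training data, and moment-tensor output are not shown here.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: the building block a CNN layer
    applies to a raw waveform trace."""
    n, k = len(signal), len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel[::-1])
                     for i in range(n - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

# Synthetic "trace": background noise plus a short arrival-like wavelet.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.1, 256)
trace[100:108] += np.sin(np.linspace(0, np.pi, 8))

# One fixed filter shaped like the wavelet; a real network would stack
# many such filters and learn their weights from labeled events.
kernel = np.sin(np.linspace(0, np.pi, 8))
feature_map = relu(conv1d(trace, kernel))

# The strongest filter response marks the candidate event onset.
onset = int(np.argmax(feature_map))
```

A trained network replaces the hand-picked filter with many learned ones and maps their responses to event coordinates and source-mechanism parameters.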

Healthcare ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 234 ◽  
Author(s):  
Hyun Yoo ◽  
Soyoung Han ◽  
Kyungyong Chung

Recently, massive amounts of bioinformation big data have been collected by sensor-based IoT devices, and the collected data are classified into different types of health big data using various techniques. Personalized analysis is the basis for judging the risk factors of individual cardiovascular disorders in real time. The objective of this paper is to provide a model for personalized heart-condition classification that combines a fast, effective preprocessing technique with a deep neural network in order to process biosensor input data accumulated in real time. The model learns the input data to develop an approximation function and can help users recognize risk situations. For the analysis of the pulse frequency, a fast Fourier transform (FFT) is applied during preprocessing. Data reduction is then performed using the frequency-by-frequency ratio data of the extracted power spectrum. To analyze the meaning of the preprocessed data, a neural network algorithm is applied; in particular, a deep neural network, built from multiple layers of nodes and trained with gradient descent, is used to analyze and evaluate the linear data. The completed model was trained by classifying ECG signals collected in advance into normal, control, and noise groups. Thereafter, ECG signals input in real time through the trained deep-neural-network system were classified as normal, control, or noise. To evaluate the performance of the proposed model, this study used the ratio of data-operation cost reduction and the F-measure. As a result, with the use of the fast Fourier transform and the cumulative frequency percentage, the ECG data were reduced to 1/32 of their original size. According to the F-measure analysis, the deep neural network achieved 83.83% accuracy. Given these results, the modified deep-neural-network technique can reduce the size of big data in terms of computing work, and it is an effective system for reducing operation time.
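The FFT-plus-cumulative-percentage reduction step can be sketched in a few lines of numpy: transform a signal window, then keep only the lowest frequency bins holding a set share of the cumulative spectral power. The synthetic pulse signal, sampling rate, and 95% energy threshold below are illustrative assumptions, not the paper's actual parameters or its 1/32 ratio.

```python
import numpy as np

def reduce_spectrum(signal, energy_keep=0.95):
    """Compress a biosignal window: FFT -> power spectrum -> keep only
    the lowest-frequency bins that hold `energy_keep` of the cumulative
    spectral power (a cumulative-percentage cut)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    cumulative = np.cumsum(spectrum) / np.sum(spectrum)
    cutoff = int(np.searchsorted(cumulative, energy_keep)) + 1
    return spectrum[:cutoff]

# Synthetic pulse-like signal: a dominant low-frequency beat plus noise
# (a stand-in for a windowed ECG trace).
rng = np.random.default_rng(1)
t = np.arange(1024) / 256.0                      # 4 s at 256 Hz
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size)

reduced = reduce_spectrum(ecg_like)
ratio = t.size / reduced.size                     # compression factor
```

The reduced spectrum, rather than the raw samples, is what would feed the classifier, which is where the operation-cost saving comes from.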


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions demand that UAVs perceive and understand the environments in which they navigate. This perception can be realized by training a computing machine to classify objects in the environment. One well-known machine-training approach is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a heavy cost in time and computational resources: collecting large input datasets, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges it poses. To address these setbacks, this study proposes mission-specific input-data augmentation techniques and a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 different images each were used as input data, with 80% used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented; this algorithm has the advantage of handling redundancy in the data more efficiently than alternative algorithms.
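Sequential gradient descent updates the weights one example at a time, which is why it copes well with redundant data: a repeated sample costs only one cheap incremental step rather than re-entering a full-batch gradient. A minimal numpy sketch on a toy two-class problem (the clusters, learning rate, and logistic model are illustrative assumptions standing in for the paper's image classifier):

```python
import numpy as np

def sequential_gd(X, y, lr=0.1, epochs=20, seed=0):
    """Train a logistic classifier one sample per update
    (sequential/stochastic gradient descent)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):       # one example per step
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # predicted probability
            w -= lr * (p - y[i]) * X[i]          # log-loss gradient step
    return w

# Two redundant, linearly separable clusters (toy stand-in for images).
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)),
               rng.normal(2, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w = sequential_gd(X, y)
preds = (X @ w > 0).astype(int)
accuracy = float((preds == y).mean())
```

Because each cluster here contains many near-duplicate points, most per-sample updates after the first pass are nearly free, which is the redundancy advantage the abstract refers to.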


2021 ◽  
pp. 1-1
Author(s):  
Duc M. Le ◽  
Max L. Greene ◽  
Wanjiku A. Makumi ◽  
Warren E. Dixon
