A Deep Learning Model to Intelligently Identify the Working Status of Screw Pumps for Oil Well Lifting

2021 ◽  
Author(s):  
Zhen Wang ◽  
Yeliang Dong ◽  
Xin Zheng ◽  
Xiang Wang ◽  
Peng Gao ◽  
...  

Abstract Screw pumps have been widely used in many oilfields to lift oil from the wellbore to the surface. Pump failure and delayed repair mean well shutdown and production loss. A deep learning model is constructed to quickly identify the working status and accurately diagnose the failure types of screw pumps, helping workers stay informed and carry out fast repairs. First, running parameters of the screw pump, such as electric current, voltage, and instantaneous flow rate, are obtained through the Real-time Data Acquisition System. Then the correlations between the values or trends of those parameters and the working status of the screw pump are calculated and analyzed. Results show a good correlation between the current characteristics and the various working statuses of the screw pump. Current data at different times are expressed in polar coordinates, with the polar radius representing the current value and the polar angle representing time. The current-time curves of a large number of oil wells are then plotted as images with a fixed resolution and divided into nine groups corresponding to nine frequent working statuses of the screw pump. A convolutional neural network (CNN) model is initialized, with the current-time curve as its input and number codes representing the working status as its output. The images mentioned above are used to train the CNN model, and model parameters such as the number of convolution layers, the size of the convolution kernels, and the activation function are optimized to minimize the training loss, i.e., the difference between the output codes and the correct codes for the images. Finally, a robust CNN model is established that can quickly and accurately judge the working state of a screw pump from electric current data.
Based on this model, a software system connected to the oilfield database is developed, which obtains the running parameters of the screw pumps in real time, identifies their working states, diagnoses the fault types of abnormal situations, raises alarms, and suggests solutions. The system is now widely used in Shengli Oilfield, helping staff learn the working conditions and fault types of abnormal wells in real time, speeding up maintenance, shortening pump shutdown time, and improving production.
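The polar encoding described above (polar radius for current, polar angle for time) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the resolution, the normalization by the window maximum, and the assumption of non-negative current samples are all choices made here for clarity.

```python
import numpy as np

def current_curve_to_polar_image(currents, resolution=64):
    """Rasterize a window of (non-negative) current samples into a
    fixed-resolution image: the polar angle encodes sample time, the
    polar radius the current value normalized to the window maximum."""
    currents = np.asarray(currents, dtype=float)
    n = len(currents)
    angles = 2 * np.pi * np.arange(n) / n        # time -> polar angle
    radii = currents / (currents.max() + 1e-9)   # current -> polar radius in [0, 1]
    # convert to Cartesian pixel coordinates centred in the image
    xs = (radii * np.cos(angles) + 1.0) / 2.0 * (resolution - 1)
    ys = (radii * np.sin(angles) + 1.0) / 2.0 * (resolution - 1)
    img = np.zeros((resolution, resolution), dtype=np.float32)
    img[ys.round().astype(int), xs.round().astype(int)] = 1.0
    return img
```

Each such image would then be paired with its working-status code to form one CNN training example.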

Author(s):  
Tossaporn Santad ◽  
Piyarat Silapasupphakornwong ◽  
Worawat Choensawat ◽  
Kingkarn Sookhanaphibarn

2021 ◽  
Author(s):  
Gaurav Chachra ◽  
Qingkai Kong ◽  
Jim Huang ◽  
Srujay Korlakunta ◽  
Jennifer Grannen ◽  
...  

Abstract After significant earthquakes, images are posted on social media platforms by individuals and media agencies, owing to the now-ubiquitous use of smartphones. These images can provide information about shaking damage in the earthquake region both to the public and to the research community, and can potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter, and thus to identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations, and ran in near real time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we also implemented the Grad-CAM method to visualize the locations in the images that most influence the decision.
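The transfer-learning setup described above can be illustrated in its simplest form: a pretrained backbone is frozen and only a small classification head is trained on the new labelled images. The sketch below, an assumption rather than the paper's actual architecture, trains a logistic-regression head on fixed feature vectors with plain gradient descent.

```python
import numpy as np

def train_transfer_head(features, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on frozen backbone features,
    the simplest form of transfer learning: only the final
    classification layer is fitted to the new labelled data."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        logits = np.clip(features @ w + b, -30.0, 30.0)  # avoid exp overflow
        probs = 1.0 / (1.0 + np.exp(-logits))            # sigmoid
        grad = probs - labels                            # dL/dlogits for BCE loss
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b
```

In practice the feature vectors would come from the penultimate layer of a pretrained image network, with `labels` marking damaged versus undamaged scenes.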


2021 ◽  
Author(s):  
Jannes Münchmeyer ◽  
Dino Bindi ◽  
Ulf Leser ◽  
Frederik Tilmann

The estimation of earthquake source parameters, in particular magnitude and location, in real time is one of the key tasks for earthquake early warning and rapid response. In recent years, several publications have introduced deep learning approaches for these fast assessment tasks. Deep learning is well suited to them, as it can work directly on waveforms and can learn features and their relations from data.

A drawback of deep learning models is their lack of interpretability, i.e., it is usually unknown what reasoning the network uses. This also makes it hard to estimate how a model will handle new data whose properties differ in some respect from the training set, for example earthquakes in previously seismically quiet regions. The discussions in previous studies usually focused on the average performance of models and did not consider this point in any detail.

Here we analyze a deep learning model for real-time magnitude and location estimation through targeted experiments and a qualitative error analysis. We conduct our analysis on three large-scale regional data sets from regions with diverse seismotectonic settings and network properties: Italy and Japan, with dense networks (station spacing down to 10 km) of strong-motion sensors, and North Chile, with a sparser network (station spacing around 40 km) of broadband stations.

We obtained several key insights. First, the deep learning model does not seem to follow the classical approaches to magnitude and location estimation. For magnitude, one would classically expect the model to estimate attenuation, but the network instead seems to focus on the spectral composition of the waveforms. For location, one would expect a triangulation approach, but our experiments instead show indications of a fingerprinting approach. Second, we can pinpoint the effect of training-data size on model performance. For example, a four times larger training set reduces average errors for both magnitude and location prediction by more than half, and reduces the required time for real-time assessment by a factor of four. Third, the model fails for events with few similar training examples. For magnitude, this means that the largest events are systematically underestimated. For location, events in regions with few events in the training set tend to get mislocated to regions with more training events. These characteristics can have severe consequences in downstream tasks like early warning and need to be taken into account for future model development and evaluation.


2021 ◽  
pp. 132-143
Author(s):  
Akihiro Sugiura ◽  
Yoshiki Itazu ◽  
Kunihiko Tanaka ◽  
Hiroki Takada

Critical Care ◽  
2019 ◽  
Vol 23 (1) ◽  
Author(s):  
Soo Yeon Kim ◽  
Saehoon Kim ◽  
Joongbum Cho ◽  
Young Suh Kim ◽  
In Suk Sol ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2556
Author(s):  
Liyang Wang ◽  
Yao Mu ◽  
Jing Zhao ◽  
Xiaoya Wang ◽  
Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to detect and diagnose prediabetes non-invasively and in real time using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777, after testing on the independent test set including the mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes from ECGs with high accuracy, outperforming the other machine learning methods; this suggests its potential for clinical application as a non-invasive prediabetes diagnosis technology.
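Before a 12-lead, 5 s ECG can be fed to a CNN such as IGRNet, it must be cut to a fixed length and normalized per lead. The sketch below assumes a 500 Hz sampling rate (not stated in the abstract) and standard per-lead z-scoring; it illustrates the preprocessing step, not the authors' exact pipeline.

```python
import numpy as np

def prepare_ecg_window(ecg, fs=500, seconds=5):
    """Cut a 12-lead ECG (shape: leads x samples) to a fixed 5 s
    window and z-score each lead independently, producing the
    (leads, fs * seconds) array a CNN would take as input."""
    n = fs * seconds
    window = np.asarray(ecg, dtype=float)[:, :n]       # fixed-length window
    mean = window.mean(axis=1, keepdims=True)          # per-lead baseline
    std = window.std(axis=1, keepdims=True) + 1e-9     # per-lead scale
    return (window - mean) / std
```

Per-lead normalization matters here because lead amplitudes vary by electrode placement, while the diagnostic signal lies in the waveform morphology.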


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1664
Author(s):  
Yoon-Ki Kim ◽  
Yongsung Kim

Recently, as the amount of real-time video streaming data has increased, distributed parallel processing systems have evolved rapidly to process large-scale data. In addition, with the growing scale of the computing resources that constitute a distributed parallel processing system, orchestration technology has become crucial for proper management of those resources, in terms of allocating computing resources, setting up the programming environment, and deploying user applications. In this paper, we present DiPLIP, a new distributed parallel processing platform for real-time large-scale image processing based on deep learning model inference. It provides a scheme for large-scale real-time image inference using a buffer layer, and a parallel processing environment that scales with the size of the image stream. It allows users to easily run trained deep learning models on real-time images in a distributed parallel processing environment at high speed, through the distribution of virtual machine containers.
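The buffer-layer idea, decoupling a variable-rate image stream from a pool of inference workers, can be sketched with a bounded queue. This is a single-machine thread-based analogy for illustration, not DiPLIP's distributed implementation; the function names and parameters are assumptions.

```python
import queue
import threading

def run_buffered_inference(frames, infer, workers=4, buffer_size=32):
    """Buffer-layer pattern: a producer enqueues incoming frames into
    a bounded buffer, and a pool of worker threads drains the buffer,
    applying the user-supplied inference function in parallel."""
    buf = queue.Queue(maxsize=buffer_size)   # the bounded buffer layer
    results, lock = [], threading.Lock()

    def worker():
        while True:
            frame = buf.get()
            if frame is None:                # sentinel -> worker shuts down
                return
            out = infer(frame)
            with lock:                       # collect results thread-safely
                results.append(out)

    pool = [threading.Thread(target=worker) for _ in range(workers)]
    for t in pool:
        t.start()
    for f in frames:                         # producer: blocks when buffer is full
        buf.put(f)
    for _ in pool:                           # one sentinel per worker
        buf.put(None)
    for t in pool:
        t.join()
    return results
```

The bounded buffer applies backpressure: when workers fall behind, the producer blocks instead of letting memory grow without limit, which is the same role the buffer layer plays between the stream source and the scaled-out inference containers.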


2021 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Parth K. Shah ◽  
Jennifer C. Ginestra ◽  
Lyle H. Ungar ◽  
Paul Junker ◽  
Jeff I. Rohrbach ◽  
...  
