Quantifying PDC Bit Wear in Real-Time and Establishing an Effective Bit Pull Criterion Using Surface Sensors

2021 ◽  
Author(s):  
Ysabel Witt-Doerring ◽  
Paul Pastusek ◽  
Pradeepkumar Ashok ◽  
Eric van Oort

Abstract It is useful during drilling operations to know when bit failure has occurred, because this knowledge can be used to improve drilling performance and provides guidance on when to pull out of hole. This paper presents a simple polycrystalline diamond compact (PDC) bit wear indicator and an associated methodology to help quantify wear and failure using real-time surface sensor data and PDC dull images. The wear indicator is used to identify the point of failure, after which corresponding surface data and dull images can be used to infer the cause of failure. It links rotary speed (RPM) with rate of penetration (ROP) and weight-on-bit (WOB). The term incorporating RPM and ROP represents a "sliding distance", i.e. the number of revolutions required to drill a unit distance of formation, while the WOB represents the formation hardness or contact pressure applied by the formation. This PDC bit wear metric was applied and validated on a data set comprising 51 lateral production hole bit runs on 9 wells. Surface electric drilling recorder (EDR) data alongside bit dull photos were used to interpret the relationship between the wear metric and observed PDC wear. All runs were in the same extremely hard (estimated 35–50 kpsi unconfined compressive strength) and abrasive shale formation. Sliding drilling time and off-bottom time were filtered from the data, and the median wear metric value for each stand was calculated versus measured hole depth while in rotary mode. The initial point in time when the bit fails was most often found to be a singular event, after which ROP never recovered. Once damaged, the bit generally progressed to catastrophic failure within a further 1–2 stands of drilling. The rapid bit failure observed was attributed to the increased thermal loads seen at the wear flat of the PDC cutter, which accelerate diamond degradation. The wear metric identifies the point in time (stand being drilled) of failure more accurately than the ROP value by itself. Review of post-run PDC photos shows that the final recorded wear metric value can be related to the observed severity of the PDC damage. This information was used to determine a pull criterion to reduce pulling bits that are damaged beyond repair (DBR) and reduce time spent beyond the effective end of life. Pulling bits before DBR status is reached and replacing them increases overall drilling performance. The presented wear metric is simple and cost-effective to implement, which is important for lower-cost land wells, and requires only real-time surface sensor data. It enables a targeted approach to analyzing PDC bit wear, optimizing drilling performance and establishing effective bit pull criteria.
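As described, the metric combines a sliding-distance term (RPM/ROP, the revolutions needed to drill a unit distance) with WOB as a contact-pressure proxy. Below is a minimal pandas sketch of how such a per-stand metric might be computed from EDR channels; the column names and the exact functional form (RPM / ROP) * WOB are assumptions drawn from the abstract, not the authors' published formula.

```python
import pandas as pd

def stand_wear_metric(edr: pd.DataFrame, stand_col: str = "stand_no") -> pd.Series:
    """Median PDC wear metric per stand from surface EDR data.

    Assumed columns: rpm (rev/min), rop (ft/hr), wob (klbf), and
    rotary (True while rotating on-bottom). The metric form
    (rpm / rop) * wob is a sliding-distance term scaled by load,
    as described in the abstract.
    """
    # Filter out sliding drilling and off-bottom time, as in the study
    rot = edr[edr["rotary"] & (edr["rop"] > 0)]
    metric = (rot["rpm"] / rot["rop"]) * rot["wob"]
    # Median per stand suppresses transient spikes in the parameters
    return metric.groupby(rot[stand_col]).median()
```

A rising per-stand trend of this metric, rather than the ROP value alone, would then be watched against the chosen pull threshold.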

Author(s):  
Sorin G. Teodorescu ◽  
Eric C. Sullivan ◽  
Paul E. Pastusek

Drilling operations represent a major cost in discovering and exploring new petroleum reserves. Poor drilling performance, for example low ROP, can lead to a high cost per foot. In order to optimize the performance of drill bits, the dynamic behavior of the bit and the drillstring has to be monitored. In recent developments, we have deployed a sensor/data acquisition (DAQ) system mounted at the bit, which can monitor the behavior of the drill bit and the dynamic dysfunctions associated with the operating parameters, different rock formations, and rock/bit interactions. A modified shank accommodates the sensor/DAQ system; its location was determined based on extensive analysis of the bit's structural integrity. Initial tests verified the ability of the system to identify PDC bit dysfunctions such as backward whirl, one of the most damaging events to the bit in the drilling operation. Placing a sensor system in the bit allows for accurate pattern recognition and severity determination of the bit's dynamic dysfunctions, and can aid in optimizing drilling parameters in pursuit of increased ROP and reduced drilling costs.
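One simple downhole signature consistent with backward whirl is lateral vibration energy concentrated at a frequency well above the rotary speed. The sketch below screens for that signature; the frequency-ratio threshold and the approach itself are illustrative assumptions, not the deployed system's pattern-recognition logic.

```python
import numpy as np

def dominant_lateral_freq(accel_xy: np.ndarray, fs: float) -> float:
    """Dominant frequency (Hz) of the lateral acceleration magnitude."""
    mag = np.linalg.norm(accel_xy, axis=1)        # accel_xy: (time, 2)
    spec = np.abs(np.fft.rfft(mag - mag.mean()))
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    return float(freqs[np.argmax(spec)])

def backward_whirl_suspected(accel_xy, fs, rpm, ratio=1.5) -> bool:
    """Flag whirl when lateral vibration runs well above the rotation
    rate; the ratio threshold is illustrative, not field-calibrated."""
    f_rot = rpm / 60.0                            # rev/min -> Hz
    return dominant_lateral_freq(accel_xy, fs) > ratio * f_rot
```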


2020 ◽  
Vol 1 (1) ◽  
pp. 35-42
Author(s):  
Péter Ekler ◽  
Dániel Pásztor

Summary. Artificial intelligence has undergone enormous development in recent years, and as a result it can now be found in some form in a great many fields and has become an integral part of much research. This is mostly due to ever-improving learning algorithms and to the Big Data environment, which can supply enormous amounts of training data. The aim of this article is to summarize the current state of the technology. It reviews the history of artificial intelligence and a large portion of the application areas in which artificial intelligence is a central element. In addition, it points out various security vulnerabilities of artificial intelligence, as well as its usability in the field of cybersecurity. The article presents a slice of current artificial intelligence applications that illustrates the breadth of its uses. Summary. In recent years artificial intelligence has seen substantial improvement, which has driven its use in various areas and made it the focus of much research. This can be attributed to improvements in learning algorithms and Big Data techniques, which can provide tremendous amounts of training data. The goal of this paper is to summarize the current state of artificial intelligence. We present its history, introduce the terminology used, and show technological areas using artificial intelligence as a core part of their applications. The paper also introduces the security concerns related to artificial intelligence solutions, but also highlights how the technology can be used to enhance security in different applications. Finally, we present future opportunities and possible improvements. The paper shows some general artificial intelligence applications that demonstrate the wide range of uses of the technology. Many applications are built around artificial intelligence technologies, and there are many services that a developer can use to achieve intelligent behavior. The foundation of the different approaches is a well-designed learning algorithm, while the key to every learning algorithm is the quality of the data set used during the learning phase. There are applications that focus on image processing, like face detection or other gesture detection to identify a person. Other solutions compare signatures, while others perform object or plate-number detection (for example, the automatic parking system of an office building). Artificial intelligence and accurate data handling can also be used for anomaly detection in a real-time system. For example, there is ongoing research on anomaly detection at the ZalaZone autonomous car test field based on the collected sensor data. There are also more general applications, like user profiling and automatic content recommendation using behavior-analysis techniques. However, artificial intelligence technology also has security risks that need to be addressed before an application is deployed publicly. One concern is the generation of fake content, which must be detected with other algorithms that focus on small but noticeable differences. It is also essential to protect the data used by the learning algorithm and to protect the logic flow of the solution; network security can help to protect these applications. Artificial intelligence can also help strengthen the security of a solution, as it is able to detect network anomalies and signs of a security issue.
Therefore, the technology is widely used in IT security to prevent different types of attacks. As Big Data technologies, computational power, and storage capacity increase over time, there is room for improved artificial intelligence solutions that can learn from large, real-time data sets. Advances in sensors can also provide more precise data for different solutions. Finally, advanced natural language processing can help with communication between humans and computer-based solutions.


2021 ◽  
Author(s):  
Trieu Phat Luu ◽  
John A.R. Bomidi ◽  
Arturo Magana-Mora ◽  
Alawi Alalsayednassir ◽  
Guodong David Zhan

Abstract Drilling operations rely on learned expertise in monitoring drilling performance data and rock data to assess the dull condition of the drill bit. While human experts can subjectively pick up indicators from rig surface data streams, this information is highly entangled with changes in rock and drilling conditions. Recent approaches to bit wear estimation also include model-based and traditional supervised machine learning methods, which are usually costly and time-consuming. In this study, we developed a bi-directional long short-term memory-based variational autoencoder (biLSTM-VAE) to project raw drilling data into a latent space in which real-time bit wear can be estimated. The proposed deep neural network was trained in an unsupervised manner, and bit-wear estimation is demonstrated as an end-to-end process.
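For readers unfamiliar with the architecture, the sketch below shows the general shape of a biLSTM-VAE in PyTorch: a bi-directional LSTM encodes a window of drilling channels into a latent distribution, and an LSTM decoder reconstructs the window. Layer sizes, channel count, and the choice of channels are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BiLSTMVAE(nn.Module):
    """Sketch of a bi-directional LSTM variational autoencoder.

    Encodes a window of drilling channels (e.g. WOB, RPM, ROP, torque)
    into a latent vector; bit wear would be read off the latent space.
    """
    def __init__(self, n_channels=4, hidden=64, latent=8):
        super().__init__()
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True,
                               bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden, latent)
        self.to_logvar = nn.Linear(2 * hidden, latent)
        self.decoder = nn.LSTM(latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_channels)

    def forward(self, x):                       # x: (batch, time, channels)
        h, _ = self.encoder(x)
        h_last = h[:, -1, :]                    # summary of the window
        mu, logvar = self.to_mu(h_last), self.to_logvar(h_last)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z_seq = z.unsqueeze(1).repeat(1, x.size(1), 1)  # repeat over time
        d, _ = self.decoder(z_seq)
        return self.out(d), mu, logvar          # reconstruction + latent stats
```

Training would minimize reconstruction error plus the usual KL-divergence term; wear estimation then operates on mu in the latent space.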


2019 ◽  
Vol 8 (4) ◽  
pp. 4531-4536

With the climate changing drastically and continuously, people living in disaster-prone areas are at particular risk. In some areas, people are not warned about conditions specific to their locality: they are told only the average temperature and humidity of the city, although humidity and temperature vary with altitude and can change over short distances. The proposed system is a cost-effective and efficient method for monitoring the weather; it sends the data to the cloud so that it is visible anywhere through the internet. Temperature, humidity, and pressure play a significant role in fields such as agriculture, industry, and logistics, and weather forecasts are necessary for the growth and development of these industries. The Internet of Things (IoT) is the technology used in developing the proposed system; it is an efficient and advanced method for connecting sensors to the cloud, which can store real-time sensor data and connect the entire world of things in a network. Here, things might be anything, such as electronic gadgets, sensors, and automotive electronic equipment. The system monitors environmental conditions such as temperature, pressure, smoke, relative humidity, and various other gases with sensors, sends the information to the cloud, and then plots the sensor data in graphical form. Intelligent prediction is performed using machine learning, a branch of Artificial Intelligence (AI) and a compelling method for analyzing and making predictions from a given data set. The collected data are analyzed continuously, and the real-time data sent by the sensors can be accessed throughout the world using the internet.
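As a concrete illustration of the sensors-to-cloud path, the sketch below publishes periodic readings over MQTT, a common IoT transport; the broker address, topic, and sensor-read stub are placeholders, and the abstract does not state which protocol or cloud service the system actually uses.

```python
import json
import time
import paho.mqtt.client as mqtt  # requires the paho-mqtt package

BROKER = "broker.example.com"    # placeholder cloud endpoint
TOPIC = "weather/station1/readings"

def read_sensors() -> dict:
    """Stub for the real sensor read; values are illustrative."""
    return {"temperature_c": 24.7, "humidity_pct": 61.0,
            "pressure_hpa": 1008.2, "smoke_ppm": 3.1,
            "timestamp": time.time()}

client = mqtt.Client()           # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883)
client.loop_start()
while True:
    # QoS 1 asks the broker to acknowledge each reading
    client.publish(TOPIC, json.dumps(read_sensors()), qos=1)
    time.sleep(60)               # one reading per minute
```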


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time-series sensor data. Methods: Our method begins by decomposing a 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency in the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms previous works or at least matches them. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on performance.
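The combination is straightforward to express in code. The sketch below, in PyTorch, follows the described pipeline: an LSTM encodes the 1D window into a sequence of feature vectors, the sequence is treated as a single-channel 2D image, and a small CNN classifies it. All sizes are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTM2DCNN(nn.Module):
    """LSTM-encoded 'fingerprint' of a 1D signal, classified by a CNN."""
    def __init__(self, n_features=3, enc_dim=32, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, enc_dim, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, x):          # x: (batch, window_len, n_features)
        h, _ = self.lstm(x)        # (batch, window_len, enc_dim)
        img = h.unsqueeze(1)       # treat the 2D array as a 1-channel image
        return self.cnn(img)       # class logits
```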


2021 ◽  
pp. 158-166
Author(s):  
Noah Balestra ◽  
Gaurav Sharma ◽  
Linda M. Riek ◽  
Ania Busza

Background: Prior studies suggest that participation in rehabilitation exercises improves motor function poststroke; however, studies on optimal exercise dose and timing have been limited by the technical challenge of quantifying exercise activities over multiple days. Objectives: The objectives of this study were to assess the feasibility of using body-worn sensors to track rehabilitation exercises in the inpatient setting and investigate which recording parameters and data analysis strategies are sufficient for accurately identifying and counting exercise repetitions. Methods: MC10 BioStampRC® sensors were used to measure accelerometer and gyroscope data from the upper extremities of healthy controls (n = 13) and individuals with upper extremity weakness due to recent stroke (n = 13) while the subjects performed 3 preselected arm exercises. Sensor data were then labeled by exercise type, and this labeled data set was used to train a machine learning classification algorithm for identifying exercise type. The machine learning algorithm and a peak-finding algorithm were used to count exercise repetitions in non-labeled data sets. Results: We achieved a repetition counting accuracy of 95.6% overall, and 95.0% in patients with upper extremity weakness due to stroke, when using both accelerometer and gyroscope data. Accuracy decreased when using fewer sensors or accelerometer data alone. Conclusions: Our exploratory study suggests that body-worn sensor systems are technically feasible, well tolerated in subjects with recent stroke, and may ultimately be useful for developing a system to measure total exercise “dose” in poststroke patients during clinical rehabilitation or clinical trials.
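The repetition counter pairs the classifier with a peak-finding step. A minimal stand-in for that step, using scipy, is sketched below; the smoothing window, minimum period, and prominence threshold are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np
from scipy.signal import find_peaks

def count_repetitions(accel: np.ndarray, fs: float,
                      min_period_s: float = 1.0) -> int:
    """Count repetitions in a segment already labeled as one exercise.

    accel: (time, 3) accelerometer samples; fs: sampling rate in Hz.
    """
    mag = np.linalg.norm(accel, axis=1)            # orientation-free magnitude
    win = max(1, int(0.25 * fs))                   # ~250 ms moving average
    smooth = np.convolve(mag, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(smooth,
                          distance=int(min_period_s * fs),
                          prominence=0.5 * smooth.std())
    return len(peaks)
```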


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2532
Author(s):  
Encarna Quesada ◽  
Juan J. Cuadrado-Gallego ◽  
Miguel Ángel Patricio ◽  
Luis Usero

Anomaly detection research focuses on the development and application of methods that identify data that are sufficiently different from the rest of the data set being analyzed to be considered anomalies (or, as they are more commonly called, outliers). These values mainly originate from two sources: they may be errors introduced during the collection or handling of the data, or they may be correct but very different from the rest of the values. It is essential to correctly identify each type: in the first case they must be removed from the data set, while in the second case they must be carefully analyzed and taken into account. The correct selection and use of the model applied to a specific problem is fundamental to the success of an anomaly detection study. In many cases a single model cannot provide sufficient results, which can only be reached with a mixture model resulting from the integration of existing and/or ad hoc-developed models. This is the kind of model developed and applied to the problem presented in this paper. This study deals with the definition and application of an anomaly detection model that combines statistical models with a new method defined by the authors, the Local Transilience Outlier Identification Method, in order to improve the identification of outliers in sensor-obtained values of variables that affect the operation of wind tunnels. The correct detection of outliers for the variables involved in wind tunnel operations is very important for the industrial ventilation systems industry, especially for vertical wind tunnels, which are used as training facilities for indoor skydiving, because the incorrect performance of such devices may put human lives at risk. In consequence, the use of the presented model for outlier detection may have a high impact in this industrial sector. In this research work, a proof-of-concept is carried out using data from a real installation, in order to test the proposed anomaly analysis method and its application to monitoring the correct performance of wind tunnels.
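The statistical side of such a mixture can be as simple as a robust univariate rule. The sketch below applies the modified z-score (median/MAD) test to one sensor channel; it is a generic stand-in for the statistical component only, not the authors' Local Transilience Outlier Identification Method.

```python
import numpy as np

def robust_zscore_outliers(x, threshold=3.5) -> np.ndarray:
    """Boolean mask of outliers under the modified z-score rule."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))           # robust spread estimate
    if mad == 0.0:
        return np.zeros(x.shape, dtype=bool)   # constant signal: no outliers
    modified_z = 0.6745 * (x - med) / mad
    return np.abs(modified_z) > threshold
```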


Author(s):  
D Spallarossa ◽  
M Cattaneo ◽  
D Scafidi ◽  
M Michele ◽  
L Chiaraluce ◽  
...  

Summary The 2016–17 central Italy earthquake sequence began with the first mainshock near the town of Amatrice on August 24 (MW 6.0), and was followed by two subsequent large events near Visso on October 26 (MW 5.9) and Norcia on October 30 (MW 6.5), plus a cluster of 4 events with MW > 5.0 within a few hours on January 18, 2017. The affected area had been monitored before the sequence started by the permanent Italian National Seismic Network (RSNC), and was enhanced during the sequence by temporary stations deployed by the National Institute of Geophysics and Volcanology and the British Geological Survey. By the middle of September, there was a dense network of 155 stations, with a mean separation in the epicentral area of 6–10 km, comparable to the most likely earthquake depth range in the region. This network configuration was kept stable for an entire year, producing 2.5 TB of continuous waveform recordings. Here we describe how these data were used to develop a large and comprehensive earthquake catalogue using the Complete Automatic Seismic Processor (CASP) procedure. This procedure detected more than 450,000 events in the year following the first mainshock and determined their phase arrival times through an advanced picker engine (RSNI-Picker2), producing a set of about 7 million P- and 10 million S-wave arrival times. These were then used to locate the events using a non-linear location (NLL) algorithm, a 1D velocity model calibrated for the area, and station corrections, and then to compute their local magnitudes (ML). The procedure was validated by comparing the derived phase picks and earthquake parameters with a handpicked reference catalogue (hereinafter referred to as ‘RefCat’). The automated procedure takes less than 12 hours on an Intel Core-i7 workstation to analyse the primary waveform data and to detect and locate 3,000 events on the most seismically active day of the sequence. This proves the concept that the CASP algorithm can provide effectively real-time data for input into daily operational earthquake forecasts. The results show significant improvements compared with RefCat, which was obtained for the same period using manual phase picks. The number of detected and located events is higher (from 84,401 to 450,000), the magnitude of completeness is lower (from ML 1.4 to 0.6), and the number of phase picks is greater, with an average of 72 picked arrivals for an ML = 1.4 event compared with 30 phases for RefCat using manual phase picking. These propagate into formal uncertainties of ±0.9 km in epicentral location and ±1.5 km in depth for the enhanced catalogue for the vast majority of the events. Together, these provide a significant improvement in the resolution of fine structures such as local planar structures and clusters, in particular the identification of shallow events occurring in parts of the crust previously thought to be inactive. The lower completeness magnitude provides a rich data set for the development and testing of techniques for analyzing the evolution of seismic sequences, including real-time, operational monitoring of b-value, time-dependent hazard evaluation, and aftershock forecasting.
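As an example of the b-value monitoring such a catalogue enables, the sketch below implements the standard maximum-likelihood estimator (Aki, 1965) with the usual binning correction; the completeness cut and binning width are inputs, and this is a textbook formula rather than part of the CASP codebase.

```python
import numpy as np

def b_value_mle(magnitudes, mc: float, dm: float = 0.1) -> float:
    """Maximum-likelihood b-value for events with M >= mc.

    mc: completeness magnitude (e.g. 0.6 for the enhanced catalogue);
    dm: magnitude binning width (Utsu correction).
    """
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
```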


AI ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 48-70
Author(s):  
Wei Ming Tan ◽  
T. Hui Teo

Prognostic techniques attempt to predict the Remaining Useful Life (RUL) of a subsystem or a component. Such techniques often use sensor data that are periodically measured and recorded into a time-series data set. Such multivariate data sets form complex, non-linear inter-dependencies across recorded time steps and between sensors. Many existing prognostic algorithms have started to explore Deep Neural Networks (DNNs) and their effectiveness in the field. Although Deep Learning (DL) techniques outperform traditional prognostic algorithms, the networks are generally complex to deploy and train. This paper proposes a Multi-variable Time Series (MTS) focused approach to prognostics that implements a lightweight Convolutional Neural Network (CNN) with an attention mechanism. The convolution filters extract abstract temporal patterns from the multiple time series, while the attention mechanism reviews the information across the time axis and selects the relevant information. The results suggest that the proposed method not only produces superior RUL estimation accuracy but also trains many times faster than reported works. The advantages of deploying the network are also demonstrated on a lightweight hardware platform: the network is not only more compact but also more efficient in resource-restricted environments.
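The sketch below shows one way such a lightweight CNN with temporal attention can be wired up in PyTorch: convolutions extract local patterns per time step, a learned score softmaxed over the time axis pools them, and a linear head regresses RUL. Sizes and layer counts are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class CNNAttentionRUL(nn.Module):
    """Lightweight 1D-CNN with temporal attention for RUL regression."""
    def __init__(self, n_sensors=14, channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_sensors, channels, 5, padding=2), nn.ReLU(),
            nn.Conv1d(channels, channels, 5, padding=2), nn.ReLU())
        self.attn = nn.Linear(channels, 1)   # score each time step
        self.head = nn.Linear(channels, 1)   # RUL estimate

    def forward(self, x):                    # x: (batch, time, sensors)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (b, t, c)
        w = torch.softmax(self.attn(h), dim=1)             # attention over time
        pooled = (w * h).sum(dim=1)                        # weighted summary
        return self.head(pooled).squeeze(-1)
```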

