Improving Real-Time Drilling Data Quality Using Artificial Intelligence and Machine Learning Techniques

2021
Author(s):
S. H. Al Gharbi,
A. A. Al-Majed,
A. Abdulraheem,
S. Patil,
S. M. Elkatatny

Abstract Due to the high demand for energy, oil and gas companies started to drill wells in remote areas and unconventional environments. This raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to companies' headquarters. In the RTOC, groups of subject matter experts monitor the drilling live and provide real-time advice to improve operations. With the increase in drilling operations, processing the volume of generated data is beyond human capability, limiting the RTOC's impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven, and their quality relies on the quality of the input data: if the quality of the input data is good, the generated output will be good; if not, the generated output will be bad. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all of the drilling data is good, which negatively affects the AI/ML results. The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. The paper fed a large real-time drilling dataset, consisting of over 150,000 raw data points, into Artificial Neural Network (ANN), Support Vector Machine (SVM), and Decision Tree (DT) models. The models were trained to distinguish valid from invalid data points. A confusion matrix was used to evaluate the different AI/ML models, including different internal architectures. Despite its slowness, the ANN achieved the best result with an accuracy of 78%, compared to 73% and 41% for DT and SVM, respectively. The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the authors' knowledge, based on literature in the public domain, this paper is one of the first to compare multiple AI/ML techniques for quality improvement of real-time drilling data. The paper provides a guide for improving the quality of real-time drilling data.
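
A minimal illustrative sketch of the kind of comparison described above: three scikit-learn classifiers (an MLP as the ANN stand-in, SVM, and DT) trained to separate valid from invalid data points and scored with confusion matrices. The synthetic channels, the validity rule, and the model settings are assumptions for illustration, not the authors' dataset or architectures.

```python
# Hedged sketch: classify valid vs. invalid real-time drilling data points
# with ANN, SVM, and DT models and compare them via confusion matrices.
# The synthetic data, column names, and labelling rule are assumptions only.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(42)
n = 5000  # small stand-in for the >150,000-point dataset

# Synthetic drilling channels (hypothetical names): ROP, WOB, torque, SPP
df = pd.DataFrame({
    "ROP": rng.normal(50, 10, n),
    "WOB": rng.normal(20, 5, n),
    "TORQUE": rng.normal(8, 2, n),
    "SPP": rng.normal(3000, 300, n),
})
# Hypothetical label: points corrupted by injected spikes are "not valid"
df["valid"] = 1
bad = rng.choice(n, size=n // 5, replace=False)
df.loc[bad, "ROP"] *= rng.uniform(3, 10, bad.size)   # inject spikes
df.loc[bad, "valid"] = 0

X, y = df[["ROP", "WOB", "TORQUE", "SPP"]], df["valid"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "DT": DecisionTreeClassifier(max_depth=6, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, "accuracy:", round(accuracy_score(y_test, pred), 3))
    print(confusion_matrix(y_test, pred))
```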

2022
pp. 1-14
Author(s):
Salem Al-Gharbi,
Abdulaziz Al-Majed,
Salaheldin Elkatatny,
Abdulazeez Abdulraheem

Abstract Due to the high demand for energy, oil and gas companies started to drill wells in remote environments and conduct unconventional operations. In order to maintain safe, fast, and more cost-effective operations, utilizing machine learning (ML) technologies has become a must. The harsh environments of drilling sites and the transmission setups negatively affect the drilling data, leading to less than acceptable ML results. For that reason, a big portion of ML development projects is actually spent on improving the data by data-quality experts. The objective of this paper is to evaluate the effectiveness of ML in improving real-time drilling data quality and to compare it to human expert knowledge. To achieve that, two large real-time drilling datasets were used: one dataset was used to train three different ML techniques, artificial neural network (ANN), support vector machine (SVM), and decision tree (DT); the second dataset was used to evaluate them. The ML results were compared with the results of a real-time drilling data quality expert. Despite the complexity of ANN and its generally good results, it achieved a relative root mean square error (RRMSE) of 2.83%, which was higher than the RRMSE values of 0.35% and 0.48% achieved by the DT and SVM techniques, respectively. The uniqueness of this work is in developing ML that simulates the improvement of drilling data quality by an expert. This research provides a guide for improving the quality of real-time drilling data.
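
For reference, a small sketch of the RRMSE metric used to score the models against the expert-cleaned data; normalising the RMSE by the mean of the reference values is one common convention and may differ from the paper's exact definition.

```python
# Hedged sketch: relative RMSE (RRMSE) between an ML-cleaned drilling channel
# and the expert-cleaned reference, in percent. Definition and numbers are
# illustrative assumptions, not taken from the paper.
import numpy as np

def rrmse(reference: np.ndarray, prediction: np.ndarray) -> float:
    """RMSE normalised by the mean of the reference values, in percent."""
    rmse = np.sqrt(np.mean((reference - prediction) ** 2))
    return 100.0 * rmse / np.mean(reference)

# Illustrative numbers only
expert_cleaned = np.array([50.2, 51.0, 49.8, 52.3, 50.7])   # expert result
model_cleaned = np.array([50.0, 51.4, 49.5, 52.0, 51.1])    # ML result
print(f"RRMSE = {rrmse(expert_cleaned, model_cleaned):.2f}%")
```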


2021
Author(s):
Temirlan Zhekenov,
Artem Nechaev,
Kamilla Chettykbayeva,
Alexey Zinovyev,
German Sardarov,
...

SUMMARY Researchers base their analyses on basic drilling parameters obtained during mud logging and demonstrate impressive results. However, due to limitations in data quality often present during drilling, those solutions often tend to lose their stability and high levels of predictivity. In this work, the concept of hybrid modeling was introduced, which allows analytical correlations to be integrated with machine learning algorithms to obtain stable solutions that remain consistent from one data set to another.
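
A brief sketch of one common way to realise such hybrid modelling: an analytical correlation provides the baseline prediction and a machine-learning model is fitted to the residual. The correlation, features, and data below are placeholders, not the authors' formulation.

```python
# Hedged sketch of hybrid modelling: an analytical correlation supplies a
# baseline and a machine-learning model learns only the residual.
# The correlation, features, and data here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
wob = rng.uniform(5, 30, n)      # weight on bit (hypothetical units)
rpm = rng.uniform(60, 180, n)    # rotary speed

def analytical_rop(wob, rpm, a=0.8, b=0.6):
    """Placeholder power-law correlation for rate of penetration."""
    return 2.0 * wob**a * (rpm / 100.0)**b

# "Measured" ROP = correlation + structure the correlation misses + noise
rop_true = analytical_rop(wob, rpm) + 0.05 * wob * np.sin(rpm / 20.0) + rng.normal(0, 0.5, n)

X = np.column_stack([wob, rpm])
baseline = analytical_rop(wob, rpm)
residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
residual_model.fit(X, rop_true - baseline)          # learn what the physics misses

hybrid_pred = baseline + residual_model.predict(X)  # physics + ML correction
print("hybrid RMSE:", np.sqrt(np.mean((hybrid_pred - rop_true) ** 2)).round(3))
```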


2021
Author(s):
Andrew McDonald

Decades of subsurface exploration and characterisation have led to the collation and storage of large volumes of well-related data. The amount of data gathered daily continues to grow rapidly as technology and recording methods improve. With the increasing adoption of machine learning techniques in the subsurface domain, it is essential that the quality of the input data is carefully considered when working with these tools. If the input data is of poor quality, the impact on the precision and accuracy of the prediction can be significant. Consequently, this can impact key decisions about the future of a well or a field. This study focuses on well log data, which can be highly multi-dimensional, diverse, and stored in a variety of file formats. Well log data exhibits the key characteristics of Big Data: Volume, Variety, Velocity, Veracity, and Value. Well data can include numeric values, text values, waveform data, image arrays, maps, and volumes, all of which can be indexed by time or depth in a regular or irregular way. A significant portion of time can be spent gathering data and quality checking it prior to carrying out petrophysical interpretations and applying machine learning models. Well log data can be affected by numerous issues causing a degradation in data quality. These include missing data, ranging from single data points to entire curves; noisy data from tool-related issues; borehole washout; processing issues; incorrect environmental corrections; and mislabelled data. Having vast quantities of data does not mean it can all be passed into a machine learning algorithm with the expectation that the resultant prediction is fit for purpose. It is essential that the most important and relevant data is passed into the model through appropriate feature selection techniques. Not only does this improve the quality of the prediction, it also reduces computational time and can provide a better understanding of how the models reach their conclusions. This paper reviews data quality issues typically faced by petrophysicists when working with well log data and deploying machine learning models. First, an overview of machine learning and Big Data is covered in relation to petrophysical applications. Secondly, data quality issues commonly faced with well log data are discussed. Thirdly, methods are suggested for dealing with data issues prior to modelling. Finally, multiple case studies are discussed covering the impacts of data quality on predictive capability.
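
As an illustration of the QC and feature-selection steps discussed above, the sketch below flags missing and out-of-range samples per curve and then ranks curves by mutual information with a prediction target. Curve names, valid ranges, and the target are assumptions, not recommendations from the paper.

```python
# Hedged sketch: basic well-log QC checks followed by a simple feature-selection
# step before modelling. Curve names, valid ranges, and thresholds are assumed
# for illustration and are not prescriptive.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 1000
logs = pd.DataFrame({
    "GR": rng.normal(75, 25, n),        # gamma ray, API
    "RHOB": rng.normal(2.45, 0.15, n),  # bulk density, g/cc
    "NPHI": rng.normal(0.22, 0.08, n),  # neutron porosity, v/v
    "CALI": rng.normal(8.5, 0.4, n),    # caliper, in
})
logs.loc[rng.choice(n, 50, replace=False), "RHOB"] = np.nan   # missing data
target = 0.7 * logs["RHOB"].fillna(logs["RHOB"].mean()) - 0.3 * logs["NPHI"] + rng.normal(0, 0.05, n)

# 1) Simple QC report: missing values and out-of-range samples per curve
valid_ranges = {"GR": (0, 300), "RHOB": (1.5, 3.1), "NPHI": (-0.05, 0.6), "CALI": (4, 20)}
for curve, (lo, hi) in valid_ranges.items():
    col = logs[curve]
    print(f"{curve}: {col.isna().sum()} missing, "
          f"{((col < lo) | (col > hi)).sum()} out of range [{lo}, {hi}]")

# 2) Rank curves by mutual information with the prediction target
X = logs.fillna(logs.median())
mi = mutual_info_regression(X, target, random_state=0)
print(pd.Series(mi, index=X.columns).sort_values(ascending=False))
```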


2021
Author(s):
Francesco Battocchio,
Jaijith Sreekantan,
Arghad Arnaout,
Abed Benaichouche,
Juma Sulaiman Al Shamsi,
...

Abstract Drilling data quality is notoriously a challenge for any analytics application, due to the complexity of the real-time data acquisition system, which routinely generates: (i) time-related issues caused by irregular sampling, (ii) channel-related issues in terms of non-uniform names and units, and missing or wrong values, and (iii) depth-related issues caused by block position resets and depth compensation (for floating rigs). On the other hand, artificial intelligence drilling applications typically require a consistent stream of high-quality data as an input for their algorithms, as well as for visualization. In this work we present an automated workflow, enhanced by data-driven techniques, that resolves complex quality issues, harmonizes sensor drilling data, and reports the quality of the dataset to be used for advanced analytics. The approach proposes an automated data quality workflow which formalizes the characteristics, requirements, and constraints of sensor data within the context of drilling operations. The workflow leverages machine learning algorithms, statistics, signal processing, and rule-based engines for the detection of data quality issues including error values, outliers, bias, drifts, noise, and missing values. Further, once data quality issues are classified, they are scored and treated on a context-specific basis in order to recover the maximum volume of data while avoiding information loss. This results in a data quality and preparation engine that organizes drilling data for further advanced analytics and reports the quality of the dataset through key performance indicators. This novel data processing workflow made it possible to recover more than 90% of a drilling dataset made of 18 offshore wells that otherwise could not be used for analytics. This was achieved by resolving specific issues including resampling time series with gaps and different sampling rates, and smart imputation of wrong/missing data while preserving the consistency of the dataset across all channels. Additional improvements would include recovering data values that fell outside a meaningful range because of sensor drifting or depth resets. The present work automates the end-to-end workflow for data quality control of drilling sensor data, leveraging advanced Artificial Intelligence (AI) algorithms. It allows detection and classification of patterns of wrong/missing data and their recovery through a context-driven approach that prevents information loss. As a result, the maximum amount of data is recovered for artificial intelligence drilling applications. The workflow also enables optimal time synchronization of different sensors streaming data at different frequencies within discontinuous time intervals.
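
A minimal sketch of two of the treatment steps named above, resampling an irregularly sampled channel onto a uniform time grid and imputing short gaps, using pandas; the channel name, sentinel value, and gap limit are illustrative assumptions rather than the authors' workflow settings.

```python
# Hedged sketch of two steps from a data-preparation workflow of this kind:
# resampling an irregularly sampled drilling channel onto a uniform time grid
# and imputing short gaps. Channel names and limits are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# Irregular timestamps (jittered spacing) for a hook-load channel
t0 = pd.Timestamp("2021-01-01 00:00:00")
times = t0 + pd.to_timedelta(np.cumsum(rng.uniform(0.5, 3.0, 500)), unit="s")
hookload = pd.Series(rng.normal(180, 5, 500), index=times, name="HKLD")
hookload.iloc[100:120] = np.nan                       # a gap of missing values
hookload.iloc[200] = -999.25                          # a typical sentinel "error value"

# 1) Flag sentinel error values as missing
hookload = hookload.replace(-999.25, np.nan)

# 2) Resample to a uniform 1-second grid (mean of samples in each bin)
uniform = hookload.resample("1s").mean()

# 3) Impute only short gaps (here, up to 10 s) to avoid inventing long intervals
cleaned = uniform.interpolate(method="time", limit=10)

print("remaining missing values:", cleaned.isna().sum())
```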


2021
Author(s):
Vagif Suleymanov,
Hany Gamal,
Guenther Glatz,
Salaheldin Elkatatny,
Abdulazeez Abdulraheem

Abstract Acoustic data obtained from sonic logging tools plays an important role in formation evaluation. Given the associated costs, however, the industry clearly stands to benefit from cheaper technologies to obtain compressional and shear wave slowness data. Therefore, this paper delineates an alternative solution for the prediction of sonic log data by means of Machine Learning (ML). This study takes advantage of the adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM) ML techniques to predict compressional and shear wave slowness from drilling data only. In particular, the network is trained utilizing 2000 data points of drilling parameters such as weight on bit (WOB), rate of penetration (ROP), standpipe pressure (SPP), torque (T), drill pipe rotation (RPM), and mud flow rate (GPM). Consequently, the acoustic properties of the rock can be estimated solely from readily available parameters, thereby saving both the cost and time associated with sonic logs. The obtained results are promising and supportive of both the ANFIS and SVM models as viable alternatives to obtain sonic data without the need for running sonic logs. The developed ANFIS model was able to predict compressional and shear wave slowness with correlation coefficients of 0.94 and 0.98 and average absolute percentage errors (AAPE) of 1.87% and 2.61%, respectively. Similarly, the SVM model predicted sonic logs with high accuracy, yielding correlation coefficients of more than 0.98 and AAPE of 0.74% and 0.84% for the compressional and shear logs, respectively. Once a network is trained, the approach naturally lends itself to being integrated as a real-time service. This study outlines a novel and cost-effective solution to estimate rock compressional and shear-wave slowness solely from readily available drilling parameters. Importantly, the model has been verified for wells drilled in different formations with complex lithology, substantiating the effectiveness of the approach.
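
A rough sketch of the SVM half of this approach: a support vector regressor predicting compressional slowness from surface drilling parameters and scored with AAPE and a correlation coefficient. The synthetic relationship, units, and hyperparameters are assumptions, not the study's data or tuned models.

```python
# Hedged sketch: support vector regression predicting compressional slowness
# (DTC) from drilling parameters, scored with AAPE and a correlation
# coefficient. Data, units, and the DTC relationship are synthetic assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.uniform(5, 30, n),      # WOB
    rng.uniform(20, 120, n),    # ROP
    rng.uniform(1000, 4000, n), # SPP
    rng.uniform(2, 15, n),      # torque
    rng.uniform(60, 180, n),    # RPM
    rng.uniform(400, 900, n),   # flow rate (GPM)
])
# Hypothetical DTC (us/ft), made up as a mix of the drilling inputs plus noise
dtc = 60 + 0.4 * X[:, 0] - 0.1 * X[:, 1] + 0.002 * X[:, 2] + rng.normal(0, 1.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, dtc, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

aape = 100.0 * np.mean(np.abs((y_te - pred) / y_te))    # average absolute % error
r = np.corrcoef(y_te, pred)[0, 1]                       # correlation coefficient
print(f"AAPE = {aape:.2f}%, R = {r:.3f}")
```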


2021
Author(s):
Gowri R,
Rathipriya R

UNSTRUCTURED In the current pandemic, there is a shortage of medical caretakers and physicians in hospitals and health centers. Patients other than those infected with COVID are also affected by this scenario. Besides, hospitals are not admitting elderly people, who are scared to approach hospitals even for their basic health checkups. However, they have to be cared for and monitored to avoid risk factors such as fall incidents, which may cause fatal injury. In such a case, this paper focuses on a cloud-based IoT gadget for early fall incidence prediction. It is a machine learning based fall incidence prediction system for elderly patients. Approaches such as Logistic Regression, Naive Bayes, Stochastic Gradient Descent, Decision Tree, Random Forest, Support Vector Machines, K-Nearest Neighbor, and an ensemble learning boosting technique, XGBoost, are used for fall incidence prediction. The proposed approach is first tested on benchmark activity sensor data with different features for training purposes. Real-time vital signs such as heart rate and blood pressure are recorded and stored in the cloud, and the machine learning approaches are applied to them. The system is then tested on real-time sensor data, such as the heart rate and blood pressure of geriatric patients, to predict falls early.
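
A small sketch of the kind of classifier comparison described above, run on synthetic vital-sign features; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the features, risk label, and scoring choice are illustrative assumptions only.

```python
# Hedged sketch: comparing a few of the classifiers named above on synthetic
# vital-sign features for fall-risk prediction. Features, labels, and the
# XGBoost stand-in (sklearn's GradientBoostingClassifier) are assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
n = 1500
data = pd.DataFrame({
    "heart_rate": rng.normal(80, 12, n),
    "systolic_bp": rng.normal(130, 18, n),
    "diastolic_bp": rng.normal(82, 10, n),
})
# Hypothetical label: elevated fall risk when vitals drift away from baseline
risk = ((data["heart_rate"] > 95) | (data["systolic_bp"] < 115)).astype(int)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=7),
}
for name, model in models.items():
    scores = cross_val_score(model, data, risk, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```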


2020
Vol 143 (3)
Author(s):
Abdulmalek Ahmed,
Salaheldin Elkatatny,
Abdulwahab Ali

Abstract Several correlations are available to determine the fracture pressure, a vital property of a well, which is essential in the design of drilling operations and the prevention of problems. Some of these correlations are based on rock and formation characteristics, and others are based on log data. In this study, five artificial intelligence (AI) techniques for predicting fracture pressure were developed and compared with the existing empirical correlations to select the optimal model. Real-time data of surface drilling parameters from one well were obtained using real-time drilling sensors. The five employed AI methods are functional networks (FN), artificial neural networks (ANN), support vector machine (SVM), radial basis function (RBF), and fuzzy logic (FL). More than 3990 data points were used to build the five AI models by dividing the data into training and testing sets. A comparison between the results of the five AI techniques and the empirical fracture correlations, such as the Eaton model, the Matthews and Kelly model, and the Pennebaker model, was also performed. The results reveal that the AI techniques outperform the three fracture pressure correlations based on their high accuracy, represented by a low average absolute percentage error (AAPE) and a high coefficient of determination (R2). Compared with the empirical models, the AI techniques have the advantage of requiring less data, only surface drilling parameters, which can be conveniently obtained from any well. Additionally, a new fracture pressure correlation was developed based on the ANN, which predicts the fracture pressure with high precision (R2 = 0.99 and AAPE = 0.094%).
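
A minimal sketch, using an MLP regressor as the ANN stand-in, of predicting fracture pressure from surface drilling parameters and reporting R2 and AAPE; the synthetic pressure trend, parameter ranges, and network size are assumptions, not the paper's data or model.

```python
# Hedged sketch: an ANN (scikit-learn MLPRegressor as a stand-in) predicting
# fracture pressure from surface drilling parameters, scored with R2 and AAPE.
# The synthetic relationship and parameter ranges are illustrative assumptions.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n = 3990  # roughly the number of data points mentioned in the abstract
depth = rng.uniform(5000, 15000, n)       # depth, ft
mud_weight = rng.uniform(9, 16, n)        # mud weight, ppg
rop = rng.uniform(20, 150, n)             # rate of penetration, ft/hr
spp = rng.uniform(1500, 4500, n)          # standpipe pressure, psi

# Hypothetical fracture-pressure trend (psi), for illustration only
frac_pressure = 0.7 * depth + 40.0 * mud_weight + 0.1 * spp + rng.normal(0, 50, n)

X = np.column_stack([depth, mud_weight, rop, spp])
X_tr, X_te, y_tr, y_te = train_test_split(X, frac_pressure, test_size=0.3, random_state=0)

# Inputs and target are standardised for stable neural-network training
ann = TransformedTargetRegressor(
    regressor=make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0),
    ),
    transformer=StandardScaler(),
)
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)

aape = 100.0 * np.mean(np.abs((y_te - pred) / y_te))   # average absolute % error
print(f"R2 = {r2_score(y_te, pred):.3f}, AAPE = {aape:.3f}%")
```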


Presently, machine learning and artificial intelligence play one of the most important roles in diagnosing many genetic and non-genetic diseases, and rapid advances in machine learning can save thousands of lives because serious diseases can be diagnosed at an early stage. In this research, the datasets for such diseases are studied, and it is analyzed how such deep machine learning will impact human life. The problem with this methodology is that it is not possible to get accurate results in the initial stage of research, because every human has different immunity and stamina. There are many diagnostic centers that are fully dependent on equipment based on machine learning. In order to boost this process, it is necessary to collect real-time patient data from different hospitals, states, and countries, so that it will be beneficial worldwide.


2022
pp. 1-22
Author(s):
Salem Al-Gharbi,
Abdulaziz Al-Majed,
Abdulazeez Abdulraheem,
Zeeshan Tariq,
Mohamed Mahmoud

Abstract The age of easy oil is ending, and the industry has started drilling in remote unconventional conditions. To help produce safer, faster, and more effective operations, the utilization of artificial intelligence and machine learning (AI/ML) has become essential. Unfortunately, due to the harsh environments of drilling and the data-transmission setup, a significant amount of the real-time data can be defective. The quality and effectiveness of AI/ML models are directly related to the quality of the input data; only if the input data are good will the AI/ML-generated analytical and prediction models be good. Improving the real-time data is therefore critical to the drilling industry. The objective of this paper is to propose an automated approach that applies eight statistical data-quality improvement algorithms to real-time drilling data. These techniques are Kalman filtering, moving average, kernel regression, median filter, exponential smoothing, lowess, wavelet filtering, and polynomial fitting. A dataset of over 150,000 rows is fed into the algorithms, and their customizable parameters are calibrated to achieve the best improvement result. An evaluation methodology was developed based on real-time drilling data characteristics, and the strengths and weaknesses of each algorithm were highlighted. Based on the evaluation criteria, the best results were achieved using exponential smoothing, the median filter, and the moving average. The exponential smoothing and median filter techniques improved the quality of the data by removing most of the invalid data points, while the moving average removed more invalid data points but trimmed the data range.
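
For illustration, a short sketch applying three of the techniques named above (moving average, median filter, and exponential smoothing) to a noisy synthetic drilling channel; window sizes and the smoothing factor are assumptions, not the calibrated parameters from the paper.

```python
# Hedged sketch: moving average, median filter, and exponential smoothing
# applied to a noisy drilling channel with injected spikes. Window sizes and
# the smoothing factor are illustrative, not the paper's calibrated values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)
n = 2000
true_rop = 50 + 10 * np.sin(np.linspace(0, 8 * np.pi, n))     # underlying trend
noisy_rop = true_rop + rng.normal(0, 3, n)
noisy_rop[rng.choice(n, 40, replace=False)] += rng.normal(0, 40, 40)  # spikes

s = pd.Series(noisy_rop)
moving_avg = s.rolling(window=15, center=True).mean()
median_filt = s.rolling(window=15, center=True).median()
exp_smooth = s.ewm(alpha=0.1).mean()

for name, cleaned in [("moving average", moving_avg),
                      ("median filter", median_filt),
                      ("exponential smoothing", exp_smooth)]:
    rmse = np.sqrt(np.nanmean((cleaned.to_numpy() - true_rop) ** 2))
    print(f"{name}: RMSE vs. true trend = {rmse:.2f}")
```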


