Fracture Height Prediction Model Utilizing Openhole Logs, Mechanical Models, and Temperature Cooldown Analysis with Machine Learning Algorithms

2021
Author(s):
Abdul Muqtadir Khan
Abdullah BinZiad
Abdullah Al Subaii
Denis Bannikov
Maksim Ponomarev
...  

Abstract Vertical wells require diagnostic techniques after minifrac pumping to interpret fracture height growth. This interpretation provides vital input to hydraulic fracturing redesign workflows. The temperature log is the most widely used technique for determining fracture height through cooldown analysis. A data science approach is proposed to leverage available measurements, automate the interpretation process, and enhance operational efficiency while maintaining confidence in the fracturing design. Data from 55 wells were ingested to establish proof of concept. The selected geomechanical rock texture parameters were based on the fracturing theory of net-pressure-controlled height growth. The fracture height interpreted from temperature cooldown analysis was merged with the structured dataset. The dataset was constructed at a high vertical depth resolution of 0.5 to 1 ft. Openhole log data such as gamma ray and bulk density helped characterize the rock type, while mechanical properties calculated from acoustic logs, such as in-situ stress and Young's modulus, characterized fracture geometry development. Moreover, injection rate, volume, and net pressure during the calibration treatment affect fracture height growth. A machine learning (ML) workflow was applied to multiple openhole log parameters, which were integrated with minifrac calibration parameters along with the varying depth of the reservoir. The 55-well dataset, comprising a cumulative 120,000 rows, was divided into training and testing sets at a ratio of 80:20. A comparative study of nine algorithms was conducted on the test set; CatBoost showed the best results with an RMSE of 4.13, followed by Random Forest with 4.25. CatBoost models utilize both categorical and numerical data. Stress, gamma-ray, and bulk density parameters affected the fracture height analyzed from the post-fracturing temperature logs. Following successful implementation in the pilot phase, the model can be extended to horizontal wells to validate predictions from commercial simulators where stress calculations are unreliable or where stress does not entirely reflect changes in rock type. By coupling the geometry measurement technology with data analysis, a useful automated model was successfully developed to enhance operational efficiency without compromising any part of the workflow. The advanced algorithm can be used in any field where precise placement of a hydraulic fracture contributes directly to production potential. The model can also play a critical role in cube development to optimize lateral landing and lateral density in exploration fields.
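
As a rough illustration of the workflow described above, the sketch below trains a CatBoost regressor on a flattened per-depth table and reports test RMSE; the file name, column names, and hyperparameters are hypothetical stand-ins, since the paper does not publish its code.

```python
# Sketch of the described workflow: predict interpreted fracture height from
# openhole-log and minifrac features with CatBoost on an 80:20 split.
# File and column names are hypothetical stand-ins for the merged 55-well table.
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("merged_well_dataset.csv")  # ~120,000 rows at 0.5-1 ft steps
features = ["depth", "gamma_ray", "bulk_density", "min_stress",
            "youngs_modulus", "injection_rate", "injected_volume", "net_pressure"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["fracture_height"], test_size=0.2, random_state=42)

model = CatBoostRegressor(iterations=500, depth=6, learning_rate=0.1, verbose=0)
model.fit(X_train, y_train)

rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"test RMSE: {rmse:.2f}")
```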

2021
Author(s):
Lianteng Song
Zhonghua Liu
Chaoliu Li
Congqian Ning
...  

Geomechanical properties are essential for safe drilling, successful completion, and exploration of both conventional and unconventional reservoirs, e.g., deep shale gas and shale oil. Typically, these properties can be calculated from sonic logs. However, in shale reservoirs it is time-consuming and challenging to obtain reliable logging data due to borehole complexity and lack of information, which often results in log deficiency and a high recovery cost for incomplete datasets. In this work, we propose the bidirectional long short-term memory (BiLSTM) network, a supervised neural network algorithm widely used for prediction on sequential data, to estimate geomechanical parameters. Prediction from log data can be conducted from two different aspects: 1) single-well prediction, where the log data from a single well are divided into training and testing data for cross-validation; 2) cross-well prediction, where a group of wells from the same geographical region is likewise divided into training and testing sets for cross-validation. The logs used in this work were collected from 11 wells in the Jimusaer Shale and include gamma ray, bulk density, resistivity, etc. We employed five machine learning algorithms for comparison, among which BiLSTM showed the best performance, with an R-squared of more than 90% and an RMSE of less than 10. The predicted results can be directly used to calculate geomechanical properties, whose accuracy is also improved in contrast to conventional methods.
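
A minimal sketch of such a BiLSTM, written in Keras, is shown below; the window length, layer sizes, and random placeholder arrays are illustrative assumptions, with the target being a missing sonic curve predicted sample-by-sample along depth.

```python
# Minimal BiLSTM sketch: predict a missing sonic curve sample-by-sample along
# depth from windows of other logs. Shapes, layer sizes, and the random
# placeholder arrays are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

window, n_logs = 64, 4  # 64 depth samples per window; 4 input logs (GR, RHOB, RT, ...)
X = np.random.rand(1000, window, n_logs).astype("float32")  # stand-in for real logs
y = np.random.rand(1000, window, 1).astype("float32")       # stand-in for sonic

model = models.Sequential([
    layers.Input(shape=(window, n_logs)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(1)),  # one prediction per depth sample
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```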


Author(s):  
Balaji Rajagopalan
Ravi Krovi

Data mining is the process of sifting through the mass of organizational (internal and external) data to identify patterns critical for decision support. Successful implementation of a data mining effort requires a careful assessment of the various tools and algorithms available. The basic premise of this study is that machine learning algorithms, which are assumption free, should outperform their traditional counterparts when mining business databases. The objective of this study is to test this proposition by investigating the performance of the algorithms across several scenarios. The scenarios are based on simulations designed to reflect the extent to which typical statistical assumptions are violated in the business domain. The results of the computational experiments support the proposition that machine learning algorithms generally outperform their statistical counterparts under certain conditions. These results can be used as prescriptive guidelines for the applicability of data mining techniques.
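
A toy version of this kind of computational experiment might look like the following: a statistical classifier whose Gaussian assumption is violated (LDA) is compared against an assumption-free learner (random forest) on simulated data; the data-generating rule is invented purely for illustration.

```python
# Toy computational experiment in the spirit of the study: compare a classical
# statistical classifier (LDA, which assumes Gaussian classes) with an
# assumption-free learner (random forest) on simulated data that violates
# those assumptions. The data-generating rule is invented for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=(2000, 4))                 # skewed features
y = ((X[:, 0] * X[:, 1] > 1.0) ^ (X[:, 2] > 1.0)).astype(int)  # nonlinear class rule

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    print(f"{name}: mean CV accuracy = {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```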


2021
Author(s):
Mattia Martinelli
Ivo Colombo
Eliana Rosa Russo

Abstract The aim of this work is the development of a fast and reliable method for evaluating geomechanical parameters while drilling using surface logging data. Geomechanical parameters are usually evaluated from cores or sonic logs, which are typically expensive and sometimes difficult to obtain. A novel approach is proposed here, in which machine learning algorithms are used to calculate the Young's modulus from drilling parameters and the gamma ray log. The proposed method combines typical mud logging drilling data (ROP, RPM, torque, flow measurements, WOB, and SPP), XRF data, and well log data (sonic logs, bulk density, gamma ray) with several machine learning techniques. The models were trained and tested on data from three wells drilled in the same basin in Kuwait, in the same geological units but in different reservoirs. Sonic logs and bulk density are used to evaluate the geomechanical parameters (e.g., Young's modulus) and to train the model. The training phase and the hyperparameter tuning were performed using data from a single well. The model was then tested against previously unseen data from the other two wells. The trained model is able to predict the Young's modulus in the test wells with a root mean squared error of around 12 GPa. The example provided here demonstrates that a model trained on drilling parameters and gamma ray from one well can predict the Young's modulus of different wells in the same basin. These outcomes highlight the potential of this procedure and point out several implications for reservoir characterization. Indeed, once the model has been trained, it is possible to predict the Young's modulus in different wells of the same basin using only surface logging data.
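
The train-on-one-well, test-on-other-wells setup could be sketched as follows; the file, column names, and the gradient-boosting model are assumptions, not the authors' exact implementation.

```python
# Sketch of the train-on-one-well, test-on-others setup: predict Young's
# modulus from surface drilling parameters plus gamma ray. The file, column
# names, and gradient-boosting choice are assumptions, not the authors' code.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv("basin_wells.csv")  # hypothetical table with a "well" column
features = ["ROP", "RPM", "torque", "flow_rate", "WOB", "SPP", "gamma_ray"]

train = df[df["well"] == "Well-1"]   # train and tune on a single well
model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train["youngs_modulus"])

for well in ["Well-2", "Well-3"]:    # test on previously unseen wells
    test = df[df["well"] == well]
    rmse = mean_squared_error(test["youngs_modulus"],
                              model.predict(test[features])) ** 0.5
    print(f"{well}: RMSE = {rmse:.1f} GPa")
```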


2021
Author(s):
Azusa Takeishi
Chien Wang

Processes that convert small cloud droplets, on the order of tens of micrometers, into raindrops, on the order of millimeters, consist of condensational growth and collision-coalescence: the former is efficient for small droplets, whereas the latter becomes predominant later in the growth stage, when droplets are larger than about 30 micrometers. Thus, how droplets can quickly grow to 30 micrometers solely by inefficient condensation has long been a topic of discussion. As a result, many parameterizations used in current models that cannot directly resolve these processes are based on empirical estimates. Recently, some studies have shown that turbulence can enhance collision-coalescence for droplets smaller than 30 micrometers, explaining the observed fast growth of cloud droplets into raindrops. We have implemented these new collision-coalescence equations in a parcel model in which the activation of aerosol particles and their condensational growth are also explicitly calculated from physical equations across numerous size bins. After the successful implementation of these processes, we applied machine-learning algorithms to train a model that mimics the behavior of the explicit physical model, using the model-simulated mass and number of raindrops as targets and ten dynamical and microphysical variables as input features. The machine-learned results are also compared with those from existing parameterizations frequently used in regional and climate models. Furthermore, the use of this new machine-learning-based parameterization, covering processes from aerosol activation to the formation of raindrops, in a regional model will be discussed.
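
As a sketch of the emulation step, a multi-output regressor can be trained to map the ten input features to parcel-model outputs; the random arrays below merely stand in for actual parcel-model runs, and the model choice is an assumption.

```python
# Sketch of the emulation step: a multi-output regressor maps ten dynamical
# and microphysical inputs to raindrop mass and number. The random arrays
# stand in for actual parcel-model runs; the model choice is an assumption.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((5000, 10))   # ten input features (e.g., updraft, T, p, aerosol ...)
Y = rng.random((5000, 2))    # targets: raindrop mass and number concentration

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=1)
emulator = RandomForestRegressor(n_estimators=200, random_state=1)
emulator.fit(X_tr, Y_tr)     # scikit-learn handles multi-output regression natively
print("R^2 on held-out parcel runs:", emulator.score(X_te, Y_te))
```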


2018
Vol 7 (3.34)
pp. 323
Author(s):
S Muthuselvan
S Rajapraksh
K Somasundaram
K Karthik

In the past, predicting disease in humans was a long and difficult process. Nowadays, computer-aided diagnosis plays an important role in the medical industry for predicting, analyzing, and storing medical information together with images. This paper discusses and classifies liver patients using a liver patient dataset and machine learning algorithms. WEKA is the software used here to implement several classification algorithms on data selected from the liver disease dataset. After all the algorithms were successfully run, the best algorithm was selected based on their outputs.
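
The paper runs its classifiers in WEKA; an equivalent comparison in Python with scikit-learn might look like the sketch below, where the file name and "diagnosis" label column are placeholders for a liver-patient table such as the public ILPD dataset.

```python
# The paper runs classifiers in WEKA; this is an equivalent sketch with
# scikit-learn. The file name and "diagnosis" label column are placeholders
# for a liver-patient table such as the public ILPD dataset.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("liver_patients.csv").dropna()   # assumes numeric features
X, y = df.drop(columns=["diagnosis"]), df["diagnosis"]

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():  # pick the best by cross-validated accuracy
    print(f"{name}: {cross_val_score(clf, X, y, cv=10).mean():.3f}")
```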


Author(s):  
Pankaj Khurana
Rajeev Varshney

The rise in the volume, variety, and complexity of data in healthcare has made it a fertile bed for artificial intelligence (AI) and machine learning (ML). Several types of AI are already being employed by healthcare providers and life sciences companies. The review summarises a classical machine learning cycle, different machine learning algorithms, different data analytical approaches, and successful implementations in haematology. Although there are many instances where AI has proven a valuable tool that can augment the clinician's ability to provide better health outcomes, implementation factors need to be put in place to ensure large-scale acceptance and popularity.


2020
Vol 50
pp. 2060010
Author(s):
Matthew Durbin
Christopher Balbier
Azaree Lintereur

Directional detection plays an important role in the search for rogue or illicit radioactive sources but is often complicated by Poisson statistics, large distances, and various sources of noise. One directional detection method currently in use extracts the angular information of a source's location from a cluster of detectors in a set geometry. Traditional algorithms designed to process the detected data typically perform a least squares assessment against a database prepopulated with detector responses for known source locations. These algorithms perform best when the standoff distance is similar to that represented in the prepopulated database; they lose accuracy when the distances and environments of the measurements are not well represented in the database. Analysis of highly variable and noisy data can often benefit from the robustness of machine learning, which has been implemented in applications such as isotope identification and radium mapping. This work aims to investigate the utility of machine learning algorithms capable of analyzing data with large amounts of statistical variability to improve directional location capabilities for large-area source searches. Preliminary results with a fully connected residual neural network include successful source location to within 1 degree in 24% of simulated search trials; the same simulated data analyzed using traditional methods resulted in location to within 1 degree in 11% of trials. The traditional and neural network algorithms were compared in terms of error and accuracy, as well as performance as a function of distance, on a simulated dataset of source searches. Results indicate that more robust algorithms, such as the implemented neural network, can improve system-inherent accuracy and overall directional capabilities.
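
A fully connected residual network of the kind described could be sketched as follows; the eight-detector input, layer sizes, and 360 one-degree output bins are illustrative assumptions rather than the authors' architecture.

```python
# Sketch of a fully connected residual network for direction finding: detector
# counts in, angular class out. The eight-detector input, layer sizes, and
# 360 one-degree bins are illustrative assumptions.
from tensorflow.keras import layers, Model

def residual_block(x, units):
    h = layers.Dense(units, activation="relu")(x)
    h = layers.Dense(units)(h)
    return layers.Activation("relu")(layers.Add()([x, h]))  # skip connection

inputs = layers.Input(shape=(8,))                 # counts from an 8-detector cluster
x = layers.Dense(64, activation="relu")(inputs)
for _ in range(3):                                # three residual blocks
    x = residual_block(x, 64)
outputs = layers.Dense(360, activation="softmax")(x)  # probability per degree bin

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```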


2021
Author(s):
Ardiansyah Negara
Arturo Magana-Mora
Khaqan Khan
Johannes Vossen
Guodong David Zhan
...  

Abstract This study presents a data-driven approach that uses machine learning algorithms to provide predicted analogues in the absence of acoustic logs, especially while drilling. Acoustic logs are commonly used to derive rock mechanical properties; however, these data are not always available. Well logging data (wireline or logging-while-drilling, LWD), such as gamma ray, density, neutron porosity, and resistivity, are used as input parameters to develop the data-driven rock mechanical models. In addition to the logging data, real-time drilling data (i.e., weight on bit, rotation speed, torque, rate of penetration, flow rate, and standpipe pressure) are used to derive the model. In the data preprocessing stage, we labeled drilling and well logging data based on formation tops in the drilling plan and performed data cleansing to remove outliers. A set of field data from different wells across the same formation is used to build and train the predictive models. We computed feature importance to rank the data by relevance for predicting acoustic logs and applied feature selection techniques to remove redundant features that would unnecessarily require a more complex model. An additional feature, mechanical specific energy, is also generated from real-time drilling data to improve the prediction accuracy. A number of scenarios comparing different predictive models were studied, and the results demonstrated that adding drilling data and/or feature engineering to the model could improve its accuracy.
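
A sketch of this pipeline, with mechanical specific energy engineered from surface drilling data and features ranked by importance, is shown below; the file and column names are hypothetical, and the Teale-style MSE formula assumes field units.

```python
# Sketch of the preprocessing/feature pipeline: engineer mechanical specific
# energy (MSE) from drilling data, fit a model to predict the sonic log, and
# rank features by importance. Column names are hypothetical; the Teale-style
# MSE formula below assumes field units and omits efficiency corrections.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("labeled_drilling_logging.csv")  # cleaned, formation-labeled data

area = df["bit_area"]                             # bit cross-sectional area
df["MSE"] = df["WOB"] / area + 120 * np.pi * df["RPM"] * df["torque"] / (area * df["ROP"])

features = ["gamma_ray", "density", "neutron_porosity", "resistivity",
            "WOB", "RPM", "torque", "ROP", "flow_rate", "SPP", "MSE"]
model = RandomForestRegressor(random_state=0)
model.fit(df[features], df["sonic_dt"])           # target: acoustic slowness

# Rank features; low-importance ones are candidates for removal
for imp, name in sorted(zip(model.feature_importances_, features), reverse=True):
    print(f"{name}: {imp:.3f}")
```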

