Modeling and Predicting Performance of Autonomous Rotary Drilling System Using Machine Learning Techniques

2021 ◽  
Author(s):  
Kingsley Amadi ◽  
Ibiye Iyalla ◽  
Radhakrishna Prabhu

Abstract This paper presents the development of predictive optimization models for autonomous rotary drilling systems, where emphasis is placed on the shift from human (manual) operation as the driving force for drill rate performance to Quantitative Real-time Prediction (QRP) using machine learning. The methodology uses real-time offset drilling data with machine learning models to accurately predict Rate of Penetration (ROP) and determine optimum operating parameters for improved drilling performance. Two optimization models (physics-based and energy conservation) were tested using an Artificial Neural Network (ANN) algorithm. Results of analysis using the model performance assessment criteria, the coefficient of determination (R²) and Root Mean Square Error (RMSE), show that drill rate is non-linear in nature and that the ANN model using energy conservation is the most accurate for predicting ROP, owing to its ability to establish a functional feature vector by learning from past events.
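The workflow this abstract describes (train an ANN on offset drilling data, then score ROP predictions with R² and RMSE) can be sketched as below. The feature choice (weight on bit, rotary speed, flow rate) and the synthetic data are illustrative assumptions, not the authors' dataset:

```python
# Sketch: ANN regression of ROP from drilling parameters, scored with R2 and RMSE.
# WOB/RPM/flow as inputs and the synthetic response are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 500
wob = rng.uniform(5, 50, n)      # weight on bit (klbf)
rpm = rng.uniform(60, 200, n)    # rotary speed (rev/min)
flow = rng.uniform(300, 900, n)  # mud flow rate (gpm)
# Non-linear synthetic drill-rate response plus measurement noise
rop = 0.4 * wob**0.8 * (rpm / 100) ** 0.6 + 0.01 * flow + rng.normal(0, 1.0, n)

X = np.column_stack([wob, rpm, flow])
X_tr, X_te, y_tr, y_te = train_test_split(X, rop, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0),
)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
r2 = r2_score(y_te, pred)                      # coefficient of determination
rmse = mean_squared_error(y_te, pred) ** 0.5   # root mean square error
```

The same two metrics the paper uses then rank candidate models; scaling the inputs is important for ANN convergence.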

Author(s):  
Fahad Kamran ◽  
Kathryn Harrold ◽  
Jonathan Zwier ◽  
Wendy Carender ◽  
Tian Bao ◽  
...  

Abstract Background Recently, machine learning techniques have been applied to data collected from inertial measurement units to automatically assess balance, but these approaches rely on hand-engineered features. We explore the utility of machine learning to automatically extract important features from inertial measurement unit data for balance assessment. Findings Ten participants with balance concerns performed multiple balance exercises in a laboratory setting while wearing an inertial measurement unit on their lower back. Physical therapists watched video recordings of participants performing the exercises and rated balance on a 5-point scale. We trained machine learning models using different representations of the unprocessed inertial measurement unit data to estimate physical therapist ratings. On a held-out test set, we compared these learned models to one another, to participants’ self-assessments of balance, and to models trained using hand-engineered features. Utilizing the unprocessed kinematic data from the inertial measurement unit provided significant improvements over both self-assessments and models using hand-engineered features (AUROC of 0.806 vs. 0.768 and 0.665, respectively). Conclusions Unprocessed data from an inertial measurement unit used as input to a machine learning model produced accurate estimates of balance performance. The ability to learn from unprocessed data presents a potentially generalizable approach for assessing balance without the need for labor-intensive feature engineering, while maintaining comparable model performance.
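The comparison this abstract reports (a model trained on unprocessed kinematic windows versus one on hand-engineered summaries, scored by AUROC) can be sketched as follows. The data are synthetic stand-ins for IMU recordings, and the binary labels mimic binarized therapist ratings; nothing here is the study's data or model:

```python
# Sketch: AUROC of a classifier on raw kinematic windows vs. a deliberately
# lossy hand-engineered feature. All data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, t = 400, 50                        # exercises, samples per window
raw = rng.normal(0, 1, (n, t))        # raw acceleration windows
sway = rng.uniform(0.5, 2.0, n)       # latent sway level: poor balance = larger
raw *= sway[:, None]
label = (sway > 1.25).astype(int)     # binarized "therapist rating"

# Hand-engineered feature: window mean only (discards the sway magnitude)
hand = raw.mean(axis=1).reshape(-1, 1)

idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)
auc = {}
for name, X in [("raw", np.abs(raw)), ("hand", hand)]:
    clf = LogisticRegression(max_iter=1000).fit(X[idx_tr], label[idx_tr])
    auc[name] = roc_auc_score(label[idx_te], clf.predict_proba(X[idx_te])[:, 1])
```

With the magnitude information intact, the raw-signal model recovers the latent sway level that the lossy summary feature throws away, mirroring the paper's finding in miniature.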


2021 ◽  
Author(s):  
K. Emma Knowland ◽  
Christoph Keller ◽  
Krzysztof Wargan ◽  
Brad Weir ◽  
Pamela Wales ◽  
...  

NASA's Global Modeling and Assimilation Office (GMAO) produces high-resolution global forecasts for weather, aerosols, and air quality. The NASA Global Earth Observing System (GEOS) model has been expanded to provide global near-real-time 5-day forecasts of atmospheric composition at an unprecedented horizontal resolution of 0.25 degrees (~25 km). This composition forecast system (GEOS-CF) combines the operational GEOS weather forecasting model with the state-of-the-science GEOS-Chem chemistry module (version 12) to provide detailed analysis of a wide range of air pollutants such as ozone, carbon monoxide, nitrogen oxides, and fine particulate matter (PM2.5). Satellite observations are assimilated into the system for improved representation of weather and smoke. The assimilation system is being expanded to include chemically reactive trace gases. We discuss current capabilities of the GEOS Constituent Data Assimilation System (CoDAS) to improve atmospheric composition modeling and possible future directions, notably incorporating new observations (TROPOMI, geostationary satellites) and machine learning techniques. We show how machine learning techniques can be used to correct for sub-grid-scale variability, which further improves model estimates at a given observation site.
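The machine-learning correction described at the end of this abstract can be sketched as a regressor that learns the local, sub-grid-scale offset between a coarse gridded forecast and site observations. The predictors (local hour, wind speed) and all data below are hypothetical illustrations, not the GEOS-CF setup:

```python
# Sketch: ML bias correction of a coarse-grid forecast at one observation site.
# A regressor learns the sub-grid-scale signal the coarse model misses.
# Predictors and data are synthetic illustrations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 1000
forecast = rng.uniform(10, 60, n)    # coarse-grid pollutant forecast (ppb)
hour = rng.integers(0, 24, n)        # local hour (diurnal sub-grid effect)
wind = rng.uniform(0, 10, n)         # local wind speed (m/s)
# "True" site value: forecast plus a sub-grid-scale diurnal/wind signal
obs = forecast + 5 * np.sin(2 * np.pi * hour / 24) - 0.8 * wind + rng.normal(0, 1, n)

X = np.column_stack([forecast, hour, wind])
corrector = RandomForestRegressor(n_estimators=100, random_state=0)
corrector.fit(X[:800], obs[:800])    # train on the first 800 forecast/obs pairs

raw_err = np.abs(forecast[800:] - obs[800:]).mean()            # uncorrected MAE
ml_err = np.abs(corrector.predict(X[800:]) - obs[800:]).mean() # corrected MAE
```

The corrected forecast tracks the site observations more closely than the raw coarse-grid value, which is the effect the abstract reports.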


2021 ◽  
Author(s):  
Aurore Lafond ◽  
Maurice Ringer ◽  
Florian Le Blay ◽  
Jiaxu Liu ◽  
Ekaterina Millan ◽  
...  

Abstract Abnormal surface pressure is typically the first indicator of a number of problematic events, including kicks, losses, washouts and stuck pipe. These events account for 60–70% of all drilling-related nonproductive time, so their early and accurate detection has the potential to save the industry billions of dollars. Detecting these events today requires an expert user watching multiple curves, which is costly and subject to human error. The solution presented in this paper aims to augment traditional models with new machine learning techniques that detect these events automatically and assist in monitoring the drilling well. Today’s real-time monitoring systems employ complex physical models to estimate surface standpipe pressure while drilling. These require many inputs and are difficult to calibrate. Machine learning is an alternative method for predicting pump pressure, but on its own it needs significant labelled training data, which is often lacking in the drilling world. The new system combines these approaches: a machine learning framework enables automated learning, while the physical models compensate for any gaps in the training data. The system uses only standard surface measurements, is fully automated, and is continuously retrained while drilling to ensure the most accurate pressure prediction. In addition, a stochastic (Bayesian) machine learning technique is used, which yields not only a prediction of the pressure but also the uncertainty and confidence of that prediction. Last, the new system includes a data quality control workflow. It discards periods of low data quality for pressure anomaly detection and enables smarter real-time event analysis. The new system has been tested on historical wells using a new test and validation framework.
The framework runs the system automatically on large volumes of both historical and simulated data, enabling cross-referencing of the results with observations. In this paper, we show the results of the automated test framework as well as the capabilities of the new system in two specific case studies, one on land and another offshore. Moreover, large-scale statistics highlight the reliability and efficiency of this new detection workflow. The new system builds on the trend in our industry to better capture and utilize digital data for optimizing drilling.
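The stochastic (Bayesian) prediction step described above, which returns both a pressure estimate and its uncertainty, can be sketched with Gaussian process regression. The GP stands in for the paper's unspecified Bayesian technique, and the flow-rate/pressure data are synthetic:

```python
# Sketch: a Bayesian regressor returning a standpipe-pressure prediction plus
# its uncertainty. GP regression is a stand-in for the paper's (unspecified)
# stochastic technique; the data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
flow = rng.uniform(200, 800, 60)                      # pump flow rate (gpm)
pressure = 0.005 * flow**1.7 + rng.normal(0, 20, 60)  # synthetic SPP (psi)

# RBF captures the smooth trend; WhiteKernel models measurement noise
kernel = RBF(length_scale=100.0) + WhiteKernel(noise_level=400.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(flow.reshape(-1, 1), pressure)

# Predict mean pressure and a 95% confidence band on a dense grid
grid = np.linspace(200, 800, 50).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
upper, lower = mean + 1.96 * std, mean - 1.96 * std
```

A measured pressure falling outside the band is then a candidate anomaly, which is how a probabilistic prediction supports event detection rather than a bare point estimate.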


2021 ◽  
Author(s):  
Anton Gryzlov ◽  
Liliya Mironova ◽  
Sergey Safonov ◽  
Muhammad Arsalan

Abstract Modern challenges in reservoir management have recently been met with new opportunities in production control and optimization strategies. These strategies in turn rely on the availability of monitoring equipment, which is used to obtain production rates in real time with sufficient accuracy. In particular, a multiphase flow meter is a device for measuring the individual rates of oil, gas and water from a well in real time without separating the fluid phases. Several technologies are currently available on the market, but multiphase flow meters are generally incapable of handling the full range of operating conditions with satisfactory accuracy, in addition to being expensive to maintain. Virtual Flow Metering (VFM) is a mathematical technique for the indirect estimation of oil, gas and water flowrates produced from a well. This method uses more readily available data from conventional sensors, such as downhole pressure and temperature gauges, and calculates the multiphase rates by combining physical multiphase models, various measurement data and an optimization algorithm. In this work, a brief overview of virtual metering methods is presented, followed by the application of several advanced machine-learning techniques to a specific case of multiphase production monitoring in a highly dynamic wellbore. The predictive capabilities of different types of machine learning instruments are explored using model-simulated production data. The effect of measurement noise on the quality of the estimates is also considered. The presented results demonstrate that the data-driven methods can predict multiphase flow rates with sufficient accuracy and can be considered a back-up solution for a conventional multiphase meter.
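The data-driven side of the VFM idea, a regressor mapping conventional sensor readings to the three phase rates, plus the paper's noise-sensitivity question, can be sketched as below. The sensor set, the "physics" generating the rates, and the noise levels are all synthetic assumptions:

```python
# Sketch: a data-driven virtual flow meter mapping downhole pressure,
# temperature and choke opening to oil/gas/water rates, plus the effect
# of sensor noise on estimate quality. All physics and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
n = 1500
p_dh = rng.uniform(150, 300, n)   # downhole pressure (bar)
t_dh = rng.uniform(60, 90, n)     # downhole temperature (C)
choke = rng.uniform(10, 100, n)   # choke opening (%)

# Synthetic phase rates as smooth functions of the sensor inputs
q_oil = 2.0 * (300 - p_dh) * choke / 100
q_gas = 50 * np.log(p_dh / 100) * choke / 100
q_wat = 0.3 * t_dh + 0.1 * choke
Y = np.column_stack([q_oil, q_gas, q_wat])

def fit_score(noise_std):
    """Train a multi-output VFM on noisy sensors; return held-out R2."""
    X = np.column_stack([p_dh, t_dh, choke]) + rng.normal(0, noise_std, (n, 3))
    vfm = RandomForestRegressor(n_estimators=100, random_state=0)
    vfm.fit(X[:1000], Y[:1000])
    return r2_score(Y[1000:], vfm.predict(X[1000:]))

score_clean = fit_score(0.0)
score_noisy = fit_score(10.0)   # heavy sensor noise degrades the estimates
```

Re-running with increasing noise levels reproduces the qualitative result: the data-driven estimates stay usable but degrade as gauge noise grows.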


Author(s):  
Simen Eldevik ◽  
Stian Sætre ◽  
Erling Katla ◽  
Andreas B. Aardal

Abstract Operators of offshore floating drilling units have limited time to decide whether a drilling operation can continue as planned or must be postponed or aborted due to oncoming bad weather. With day-rates of several hundred thousand USD, small delays in the original schedule can add up to considerable costs. On the other hand, pushing the limits of the load capacity of the riser stack and wellhead may compromise the integrity of the well itself, and such a failure is not an option. Advanced simulation techniques may reduce uncertainty about how different weather scenarios influence the system’s integrity, and thus considerably widen the acceptable weather window. However, real-time simulations are often not feasible, and the stochastic behavior of wave loads makes it difficult to simulate all relevant weather scenarios prior to the operation. This paper outlines and demonstrates an approach that utilizes probabilistic machine learning techniques to effectively reduce this uncertainty. More specifically, we use Gaussian process regression to enable fast approximation of the relevant structural response from complex simulations. The probabilistic nature of the method adds the benefit of an estimated uncertainty in the prediction, which can be used to optimize how the initial set of relevant simulation scenarios is selected, and to produce real-time estimates of the utilization and its uncertainty when combined with current weather forecasts. This enables operators to have an up-to-date forecast of the system’s utilization, as well as sufficient time to trigger additional scenario-specific simulations to reduce the uncertainty of the current situation. As a result, it reduces unnecessary conservatism and gives clear decision support for critical situations.
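The two ingredients this abstract names, a Gaussian process surrogate of an expensive structural simulation and the use of its predictive uncertainty to choose the next scenario to simulate, can be sketched as below. The "simulator" is a cheap hypothetical stand-in function, and the sea-state parameterization (significant wave height only) is an illustrative simplification:

```python
# Sketch: GP surrogate of a costly structural-response simulation, with
# predictive uncertainty used to pick the next weather scenario to simulate.
# The simulator and its single input (Hs) are illustrative stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate_response(hs):
    """Stand-in for an expensive riser/wellhead load simulation (Hs in metres)."""
    return 0.1 * hs**2 + 0.5 * np.sin(hs)

# Initial design: a handful of simulated sea states
hs_train = np.array([1.0, 3.0, 5.0, 7.0]).reshape(-1, 1)
y_train = simulate_response(hs_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True)
gp.fit(hs_train, y_train)

# Candidate weather scenarios; query the simulator where the surrogate
# is least certain (largest predictive standard deviation)
candidates = np.linspace(0.5, 8.0, 100).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
next_hs = candidates[np.argmax(std)][0]
```

In operation, the loop repeats: run the full simulation at `next_hs`, refit the surrogate, and use the shrinking uncertainty band as the real-time utilization forecast the abstract describes.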


Atmosphere ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 1303
Author(s):  
Wei-Ting Hung ◽  
Cheng-Hsuan (Sarah) Lu ◽  
Stefano Alessandrini ◽  
Rajesh Kumar ◽  
Chin-An Lin

In New York State (NYS), episodic high fine particulate matter (PM2.5) concentrations associated with aerosols originating from the Midwest, Mid-Atlantic, and Pacific Northwest states have been reported. In this study, machine learning techniques, including multiple linear regression (MLR) and artificial neural network (ANN), were used to estimate surface PM2.5 mass concentrations at air quality monitoring sites in NYS during the summers of 2016–2019. Various predictors were considered, including meteorological, aerosol, and geographic predictors. Vertical predictors, designed as indicators of vertical mixing and aloft aerosols, were also applied. Overall, the ANN models performed better than the MLR models, and the application of vertical predictors generally improved the accuracy of PM2.5 estimation by the ANN models. The leave-one-out cross-validation results showed significant cross-site variations and captured the different predictor-PM2.5 correlations at sites with different PM2.5 characteristics. In addition, a joint analysis of regression coefficients from the MLR model and variable importance from the ANN model provided insights into the contributions of selected predictors to PM2.5 concentrations. The improvements in model performance due to aloft aerosols were relatively minor, probably due to the limited number of aloft-aerosol cases in the current datasets.
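The MLR-versus-ANN comparison this study reports can be sketched as follows. The predictor set (temperature, humidity, boundary-layer height as a "vertical" predictor, aerosol optical depth) and the synthetic PM2.5 response are illustrative assumptions, not the study's data:

```python
# Sketch: MLR vs. ANN for surface PM2.5 estimation from meteorological and
# vertical-mixing predictors. Predictors and data are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 800
temp = rng.uniform(15, 35, n)     # surface temperature (C)
rh = rng.uniform(30, 95, n)       # relative humidity (%)
pblh = rng.uniform(200, 2500, n)  # boundary-layer height (m): "vertical" predictor
aod = rng.uniform(0.05, 0.8, n)   # aerosol optical depth
# Non-linear synthetic PM2.5: aloft aerosol matters more when the PBL is shallow
pm25 = 40 * aod * (1 + 1000 / pblh) + 0.1 * temp + 0.05 * rh + rng.normal(0, 1, n)

X = np.column_stack([temp, rh, pblh, aod])
X_tr, X_te, y_tr, y_te = X[:600], X[600:], pm25[:600], pm25[600:]

mlr = LinearRegression().fit(X_tr, y_tr)
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
).fit(X_tr, y_tr)

r2_mlr = r2_score(y_te, mlr.predict(X_te))
r2_ann = r2_score(y_te, ann.predict(X_te))
```

The non-linear AOD-mixing interaction is exactly the kind of structure a linear model cannot represent, which is why the ANN comes out ahead here, echoing the study's overall finding.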


2021 ◽  
Vol 8 ◽  
Author(s):  
Daniele Roberto Giacobbe ◽  
Alessio Signori ◽  
Filippo Del Puente ◽  
Sara Mora ◽  
Luca Carmisciano ◽  
...  

Sepsis is a major cause of death worldwide. Over the past years, prediction of clinically relevant events through machine learning models has gained particular attention. In the present perspective, we provide a brief, clinician-oriented view of the following relevant aspects concerning the use of machine learning predictive models for the early detection of sepsis in daily practice: (i) the controversy of the sepsis definition and its influence on the development of prediction models; (ii) the choice and availability of input features; (iii) measures of model performance, the model output, and their usefulness in clinical practice. The increasing involvement of artificial intelligence and machine learning in health care cannot be disregarded, despite important pitfalls that should always be carefully taken into consideration. In the long run, a rigorous multidisciplinary approach to enriching our understanding of the application of machine learning techniques for the early recognition of sepsis may show potential to augment medical decision-making when facing this heterogeneous and complex syndrome.

