Application of Machine Learning Algorithms for Managing Well Integrity in Gas Lift Wells

2021 ◽  
Author(s):  
Adel Mohamed Salem Ragab ◽  
Mostafa Sa’eed Yakoot ◽  
Omar Mahmoud

Abstract Well integrity (WI) impairments in oil and gas (O&G) wells are among the most formidable challenges in the petroleum industry. Managing WI for different groups of well services requires a precise assessment of risk level. When WI classification and risk assessment are performed using traditional methods such as spreadsheets, failures of well barriers make WI management complicated and challenging, especially in mature O&G fields. Industry practice then moved toward likelihood/severity matrices, which later turned out to be misleading in many cases because of possible skewness in the failure data. Developing a reliable model for classifying the level of WI impairment is therefore becoming more crucial for the industry. Artificial intelligence (AI) includes advanced algorithms that use machine learning (ML) and computing power efficiently for predictive analytics. The main objective of this work is to develop ML models for the detection of integrity anomalies and early recognition of well failures. The most common ML algorithms in data science include random forest, logistic regression, quadratic discriminant analysis, and boosting techniques. Model establishment follows initial data gathering, pre-processing, and feature engineering. These models can iterate over different failure scenarios, considering all barrier elements that could contribute to the WI envelope. Thousands of WI data arrays can be collected and fed into the ML models after being processed and structured properly. The new model presented in this paper can detect different WI anomalies and supports accurate analysis of failures. This demonstrates that managing the overall risk of WI failures is a robust and practical approach for direct implementation in mature fields, and it further enhances WI management.
This perspective will improve the efficiency of operations and has the advantage of universality, in that it is applicable to different well groups. The rising wave of digitalization is anticipated to improve field operations, business performance, and production safety.
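The supervised classification workflow this abstract describes, training a model on processed barrier data to flag impairment, can be sketched with one of the algorithms it names, logistic regression. The snippet below is a minimal pure-Python sketch; the feature names and toy data are illustrative assumptions, not from the paper:

```python
import math

def sigmoid(z):
    if z < -60.0:  # guard against overflow for very confident predictions
        return 0.0
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic-regression classifier by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify a well as impaired (1) or intact (0)."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical barrier features: [annulus_pressure_ratio, leak_rate_norm]; 1 = impaired.
X = [[0.1, 0.0], [0.2, 0.1], [0.8, 0.9], [0.9, 0.7], [0.15, 0.05], [0.85, 0.8]]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
```

A production workflow would of course add the pre-processing and feature-engineering stages the abstract mentions before any such model is fit.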

Author(s):  
E. B. Priyanka ◽  
S. Thangavel ◽  
D. Venkatesa Prabu

Big data and analytics may be new to some industries, but the oil and gas industry has long dealt with large quantities of data to make technical decisions. Oil producers can capture more detailed data in real time, at lower cost, and from previously inaccessible areas to improve oilfield and plant performance. Stream computing is a new way of analyzing high-frequency data for real-time complex event processing and for scoring data against a physics-based or empirical model for predictive analytics, without having to store the data. Hadoop MapReduce and other NoSQL approaches are a new way of analyzing massive volumes of data to support reservoir, production, and facilities engineering. This chapter therefore describes the routing organization of IoT with smart applications that aggregate real-time oil-pipeline sensor data as big data and subject it to machine learning algorithms on the Hadoop platform.
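The MapReduce pattern mentioned above, mapping raw records to key/value pairs and then reducing per key, can be illustrated in plain Python. The sensor IDs and readings below are invented for illustration; a real Hadoop job would run the same two steps distributed across a cluster:

```python
from collections import defaultdict

def map_reading(record):
    """Map step: emit a (sensor_id, pressure) pair from a raw reading."""
    sensor_id, _timestamp, pressure = record
    return (sensor_id, pressure)

def reduce_by_key(pairs):
    """Reduce step: group pairs by key and aggregate (here: mean pressure)."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {k: sum(v) / len(v) for k, v in grouped.items()}

# Hypothetical pipeline-sensor records: (sensor_id, timestamp, pressure).
readings = [
    ("PL-01", "2021-06-01T00:00", 96.0),
    ("PL-01", "2021-06-01T00:01", 104.0),
    ("PL-02", "2021-06-01T00:00", 55.0),
]
pairs = [map_reading(r) for r in readings]
means = reduce_by_key(pairs)
```

The shuffle between the two phases (routing all pairs with the same key to one reducer) is what Hadoop provides at scale; here it collapses to a dictionary.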


Nafta-Gaz ◽  
2021 ◽  
Vol 77 (5) ◽  
pp. 283-292
Author(s):  
Tomasz Topór

The application of machine learning algorithms in petroleum geology has opened a new chapter in oil and gas exploration. Machine learning algorithms have been successfully used to predict crucial petrophysical properties when characterizing reservoirs. This study utilizes the concept of machine learning to predict permeability under confining stress conditions for samples from tight sandstone formations. The models were constructed using two machine learning algorithms of varying complexity (multiple linear regression [MLR] and random forests [RF]) and trained on a dataset that combined basic well information, basic petrophysical data, and rock type from a visual inspection of the core material. The RF algorithm underwent feature engineering to increase the number of predictors in the models. In order to check the training models’ robustness, 10-fold cross-validation was performed. The MLR and RF applications demonstrated that both algorithms can accurately predict permeability under constant confining pressure (R² = 0.800 vs. 0.834). The RF accuracy was about 3% better than that of the MLR and about 6% better than the linear reference regression (LR) that utilized only porosity. Porosity was the most influential feature in the models’ performance. In the case of RF, depth was also significant in the permeability predictions, which could be evidence of hidden interactions between the porosity and depth variables. The local interpretation revealed common features among the outliers: in both the training and testing sets, these samples had moderate to low porosity (3–10%) and lacked fractures. In the test set, calcite or quartz cementation also led to poor permeability predictions. The workflow, which utilizes the tidymodels concept, will be further applied in more complex examples to predict spatial petrophysical features from seismic attributes using various machine learning algorithms.
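The 10-fold cross-validation used here to check model robustness partitions the samples into ten disjoint folds, training on nine and testing on the held-out one. A minimal stdlib-Python sketch of the fold bookkeeping, independent of any particular model:

```python
import random

def k_fold_indices(n_samples, k=10, seed=42):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size, rem = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        size = fold_size + (1 if i < rem else 0)  # spread the remainder
        folds.append(idx[start:start + size])
        start += size
    return folds

def cv_splits(n_samples, k=10):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = k_fold_indices(n_samples, k)
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(cv_splits(100, k=10))
```

Each sample appears in the test set exactly once across the ten splits, so the averaged test metric (e.g. R²) estimates out-of-sample performance.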


2021 ◽  
Author(s):  
Abdul Muqtadir Khan

Abstract With the advancement in machine learning (ML) applications, some recent research has been conducted to optimize fracturing treatments. There are a variety of models available using various objective functions for optimization and different mathematical techniques. There is a need to extend the ML techniques to optimize the choice of algorithm. For fracturing treatment design, the literature on comparative algorithm performance is sparse. The research predominantly shows that, compared to the most commonly used regressors and classifiers, some form of boosting technique consistently outperforms in model testing and prediction accuracy. A database was constructed for a heterogeneous reservoir. Four widely used boosting algorithms were used on the database to predict the design solely from the output of a short injection/falloff test. Feature importance analysis was performed on eight output parameters from the falloff analysis, and six were retained for model construction. The outputs selected for prediction were fracturing fluid efficiency, proppant mass, maximum proppant concentration, and injection rate. Extreme gradient boosting (XGBoost), categorical boosting (CatBoost), adaptive boosting (AdaBoost), and light gradient boosting machine (LGBM) were the algorithms selected for the comparative study. A sensitivity analysis was carried out for different numbers of classes (four, five, and six) to establish a balance between accuracy and prediction granularity. The results showed that the best algorithm choice was between XGBoost and CatBoost for the predicted parameters under certain model construction conditions. The accuracy for all outputs on the holdout sets varied between 80 and 92%, indicating that these models are robust enough for wider utilization. Data science has contributed to various oil and gas industry domains and has tremendous applications in the stimulation domain.
The research and review conducted in this paper provide a valuable resource for users to build digital databases and choose the appropriate algorithm without much trial and error. Implementing this model reduced the complexity of the proppant fracturing treatment redesign process, enhanced operational efficiency, and reduced fracture damage by eliminating minifrac steps with crosslinked gel.
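Of the four boosting algorithms compared, AdaBoost is the simplest to write out: each round fits a weak learner (here a decision stump) and reweights the training samples toward the previous round's mistakes. The following is an illustrative pure-Python sketch on invented toy falloff-test features, not the paper's database or model:

```python
import math

def stump_predict(x, feature, threshold, polarity):
    """A decision stump: predict +1/-1 from a single feature threshold."""
    return polarity * (1 if x[feature] >= threshold else -1)

def fit_stump(X, y, weights):
    """Return the stump (err, feature, threshold, polarity) with minimal weighted error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(w for x, yi, w in zip(X, y, weights)
                          if stump_predict(x, f, t, pol) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best

def adaboost(X, y, rounds=5):
    """AdaBoost: each round upweights the samples the last stump got wrong."""
    n = len(X)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        err, f, t, pol = fit_stump(X, y, weights)
        err = max(err, 1e-10)  # avoid division by zero on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # stump's vote weight
        ensemble.append((alpha, f, t, pol))
        weights = [w * math.exp(-alpha * yi * stump_predict(x, f, t, pol))
                   for x, yi, w in zip(X, y, weights)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, pol) for a, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Hypothetical features: [fluid_efficiency, net_pressure]; +1 = high-proppant design class.
X = [[0.2, 1.0], [0.3, 1.2], [0.7, 2.5], [0.8, 2.8], [0.25, 1.1], [0.75, 2.6]]
y = [-1, -1, 1, 1, -1, 1]
model = adaboost(X, y, rounds=5)
preds = [predict(model, x) for x in X]
```

XGBoost, CatBoost, and LGBM follow the same boosting idea but fit gradient-based corrections with full trees rather than reweighted stumps.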


Author(s):  
Prof. Gowrishankar B S

The stock market is one of the most complicated and sophisticated environments in which to do business. Small owners, brokerage corporations, and banking sectors all depend on it to generate revenue and spread risk, making it a very complicated system. This paper proposes the use of machine learning algorithms to predict future stock prices, using pre-existing algorithms to help make this unpredictable form of business a little more predictable. Machine learning makes predictions based on the values of current stock market indices after training on their previous values, and employs different models to make prediction easier and more reliable. The data must be cleansed before it can be used for prediction. This paper focuses on categorizing the various methods used to date for predictive analytics in different domains, along with their shortcomings.


Predictive analytics is the examination of relevant data to recognize problems that may arise in the near future. Manufacturers are interested in quality control and in making sure the whole factory functions at the best possible efficiency; with predictive analytics, it is feasible to increase manufacturing quality and anticipate needs throughout the factory. We have therefore proposed an application of predictive analytics in the manufacturing sector, focused on price prediction and demand prediction for the various products manufactured on a regular basis. We trained and tested different machine learning algorithms that can predict the price as well as the demand of a particular product using historical data about that product's sales and other transactions. Of the tested algorithms, we selected the regression tree algorithm, which gives an accuracy of 95.66% for demand prediction and 88.85% for price prediction. Regression trees are therefore best suited for use in the manufacturing sector as far as price prediction and demand prediction are concerned. The proposed application can thus help the manufacturing sector improve its overall functioning and efficiency through price and demand prediction of products.
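A regression tree of the kind selected here predicts by recursively splitting the feature space so that each split minimizes the squared error within the resulting groups. A minimal pure-Python sketch, with invented monthly demand records standing in for real sales data:

```python
def fit_tree(X, y, depth=3, min_samples=2):
    """Recursively split on the (feature, threshold) pair minimizing total SSE."""
    if depth == 0 or len(y) < min_samples or len(set(y)) == 1:
        return {"leaf": sum(y) / len(y)}  # leaf predicts the group mean

    def sse(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            left = [i for i, x in enumerate(X) if x[f] < t]
            right = [i for i, x in enumerate(X) if x[f] >= t]
            if not left or not right:
                continue
            score = sse([y[i] for i in left]) + sse([y[i] for i in right])
            if best is None or score < best[0]:
                best = (score, f, t, left, right)
    if best is None:
        return {"leaf": sum(y) / len(y)}
    _, f, t, left, right = best
    return {"feature": f, "threshold": t,
            "left": fit_tree([X[i] for i in left], [y[i] for i in left], depth - 1),
            "right": fit_tree([X[i] for i in right], [y[i] for i in right], depth - 1)}

def predict(tree, x):
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["feature"]] < tree["threshold"] else tree["right"]
    return tree["leaf"]

# Hypothetical records: [month_index, promo_flag] -> units demanded.
X = [[1, 0], [2, 0], [3, 1], [4, 1], [5, 0], [6, 1]]
y = [100.0, 110.0, 180.0, 190.0, 120.0, 185.0]
tree = fit_tree(X, y)
```

On these toy records the first split lands on the promotion flag, since it explains most of the demand variance; deeper splits then refine by month.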


2021 ◽  
Vol 73 (03) ◽  
pp. 25-30
Author(s):  
Srikanta Mishra ◽  
Jared Schuetter ◽  
Akhil Datta-Gupta ◽  
Grant Bromhal

Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, sports, etc. The focus of this article is to examine where things stand in regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy. It is useful to start with some definitions to establish a common vocabulary.

Data analytics (DA): Sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets.

Machine learning (ML): Building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data.

Artificial intelligence (AI): Applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating).

Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how we can make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is basically a subset of DA and a core enabling element of the broader decision-making construct that is AI. In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources.
Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 (Fig. 1). These trends are also reflected in the number of technical sessions devoted to ML/AI topics at conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted at practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019). Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building using ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without foundational understanding, whereas others may be holding off on using these methods because they have no formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems.


Author(s):  
Prof. Dr. R. Sandhiya

In recent times, the diagnosis of heart disease has become a very critical task in the medical field. In the modern age, one person dies every minute due to heart disease. Data science has an important role in processing large amounts of data in the field of health sciences. Since the diagnosis of heart disease is a complex task, the assessment process should be automated to avoid the associated risks and to alert the patient in advance. This paper uses the heart disease dataset available in the UCI Machine Learning Repository. The proposed work assesses the risk of heart disease in a patient by applying various data mining methods such as Naive Bayes, Decision Tree, KNN, Linear SVM, RBF SVM, Gaussian Process, Neural Network, AdaBoost, QDA and Random Forest. This paper provides a comparative study by analyzing the performance of these machine learning algorithms. Test results confirm that the KNN algorithm achieved the highest accuracy, 97%, compared to the other implemented ML algorithms.
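KNN, the best performer reported here, classifies a new patient by majority vote among the k nearest training samples. A minimal pure-Python sketch with invented, normalized toy features (not the actual UCI dataset):

```python
import math
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(X_train, y_train)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical normalized features: [age_norm, cholesterol_norm]; 1 = heart disease.
X_train = [[0.2, 0.3], [0.25, 0.35], [0.3, 0.2],
           [0.8, 0.9], [0.85, 0.8], [0.9, 0.85]]
y_train = [0, 0, 0, 1, 1, 1]
pred = knn_predict(X_train, y_train, [0.82, 0.88], k=3)
```

Because KNN votes on raw distances, feature scaling (as in the normalized values above) matters: an unscaled feature such as cholesterol in mg/dL would otherwise dominate the distance.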

