Real Time Implementation of ESP Predictive Analytics - Towards Value Realization from Data Science

2021 ◽  
Author(s):  
Antonio Andrade Marin ◽  
Issa Al Balushi ◽  
Adnan Al Ghadani ◽  
Hassana Al Abri ◽  
Abdullah Khalfan Said Al Zaabi ◽  
...  

Abstract Failure Prediction in Oil and Gas Artificial Lift Systems is materializing through the implementation of advanced analytics driven by physics-based models. During Phase I of this project, two early failure prediction machine learning models were trained offline with historical data and evaluated through a blind test. The next challenge, Phase II, is to operationalize these models in real time and re-assess their accuracy, precision and early prediction (in days), while the assets focus either on extending runtime through optimization, chemical injection, etc., or on proactive pump replacement (PPR) for high-producer wells with triggered early prediction alarms. The paper details Phase II of live prediction for two assets consisting of 740 wells to enable data-driven insights in engineers' daily workflow. In Phase I, a collaboration between SMEs and Data Scientists was established to build two failure prediction models for Electrical Submersible Pumps (ESP) using historical data, models that could identify failure-prone wells, along with the component at risk, with high precision. Phase II entails the development of a Real-Time scoring pipeline to deliver daily insights from this model for live wells. To achieve this, PDO leveraged its Digital Infrastructure for daily extraction of high-resolution measured data for 750 wells. A Well Management System (WMS) automatically sustains physics-based ESP models to calculate engineering variables from nodal analysis. Measured and engineered data are sampled and, referencing learnt patterns, the machine learning algorithm (MLA) estimates the probability of failure based on a daily rolling data window. An Exception Based Surveillance (EBS) system tracks well failure probability and highlights affected wells based on business logic. A visualization is developed to facilitate EBS interpretation. All the above steps are automated and synchronized among the data historian, WMS and EBS system to operate on a daily schedule.
From the Asset, at each highlighted exception, a focus team of well owners and SMEs initiates a review to correlate the failure probability with ESP signatures and validate the alarm. Aided by physics-based well models, action is directed towards a) optimization, b) troubleshooting or c) proactive pump replacement in case of inevitable failure conditions. This workflow establishes the IT infrastructure and Asset readiness needed to benefit from further modeling initiatives in subsequent phases. Live implementation of exceptions from predictive analytics is an effective complement to well owners for prioritization of well reviews. Based on alarm validity, risk of failure and underperformance, optimizations, PPRs or workover scheduling are performed reliably. This methodology would enable a Phase III of scaling up in Real-Time with growing assets, where the system would be periodically retrained on True Negatives and maintained automatically with minimal manual intervention. Experience shows that a high-precision model alone is not enough to reap the benefits of Predictive Analytics. The ability to operate in production mode, embedding insights into decisions and actions, determines the ROI on Data Science initiatives. Digital Infrastructure, a Real-Time Well Modeling Platform and cognitive adaptation of analytics by Well Owners are key to this operationalization, which demands reliable data quality, computational efficiency and a data-driven decision philosophy.
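The daily scoring loop described above can be sketched as follows. The window length, thresholds, feature summary, and EBS persistence rule are all illustrative assumptions, not taken from the paper, and the trained model is replaced by a stand-in scoring function.

```python
# Minimal sketch of a daily ESP scoring pipeline: featurize a rolling window
# of measured/engineered data, score a failure probability, and raise an
# Exception Based Surveillance (EBS) flag under an assumed business rule.
import numpy as np

WINDOW_DAYS = 14          # assumed rolling-window length
PROB_THRESHOLD = 0.7      # assumed EBS alarm threshold
CONSECUTIVE_DAYS = 3      # assumed persistence rule

def featurize(window: np.ndarray) -> np.ndarray:
    """Summarize one rolling window (days x channels) into a feature vector."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def failure_probability(features: np.ndarray) -> float:
    """Stand-in for the trained ML model: a logistic squashing of a score."""
    score = features.sum() / (1.0 + np.abs(features).sum())
    return 1.0 / (1.0 + np.exp(-score))

def ebs_flag(daily_probs: list) -> bool:
    """EBS business logic: alarm only if probability stays above threshold."""
    recent = daily_probs[-CONSECUTIVE_DAYS:]
    return len(recent) == CONSECUTIVE_DAYS and all(p > PROB_THRESHOLD for p in recent)
```

The persistence rule is one plausible way to suppress single-day spikes; the paper's actual business logic is not specified in the abstract.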

2021 ◽  
Author(s):  
Rodrigo Chamusca Machado ◽  
Fabbio Leite ◽  
Cristiano Xavier ◽  
Alberto Albuquerque ◽  
Samuel Lima ◽  
...  

Objectives/Scope This paper presents how a Brazilian drilling contractor and a startup built a partnership to optimize the maintenance window of subsea blowout preventers (BOPs) using condition-based maintenance (CBM). It showcases examples of insights about the operational condition of BOP components, obtained by applying machine learning techniques to real-time and historical data, both structured and unstructured. Methods, Procedures, Process From unstructured and structured historical data, generated daily from BOP operations, a knowledge bank was built and used to develop normal-behavior models. This has been possible even without real-time data, as it has been tested with large sets of operational data collected from event-log text files. Software retrieves the data from event loggers and creates a structured database comprising analog variables, warnings, alarms and system information. Using machine learning algorithms, the historical data is then used to develop normal-behavior models for the target components. Thereby, it is possible to use event-logger or real-time data to identify abnormal operating moments and detect failure patterns. Critical situations are immediately transmitted to the RTOC (Real-Time Operations Center) and management team, while less critical alerts are recorded in the system for further investigation. Results, Observations, Conclusions During the implementation period, the drilling contractor was able to identify a BOP failure using the detection algorithms and used 100% of the information generated by the system and its reports to efficiently plan equipment maintenance. The system has also been used intensively for incident investigation, helping to identify root causes through data analytics and feeding results back into the machine learning algorithms for future automated failure predictions.
This development is expected to significantly reduce the risk of BOP retrieval during operations for corrective maintenance, increase staff efficiency in maintenance activities, reduce the risk of downtime, improve the scope of maintenance during operational windows, and finally reduce the cost of spare-parts replacement during maintenance, all without impact on operational safety. Novel/Additive Information For the near future, the plan is to integrate the system with the Computerized Maintenance Management System (CMMS), checking for historical maintenance, overdue maintenance and certifications in the same place and at the same time as the real-time operational data and insights. Using real-time data as input, we expect to expand the failure prediction application to other BOP parts (such as regulators, shuttle valves, SPM (sub-plate-mounted) valves, etc.) and increase its applicability to other critical equipment on the rig.
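The event-log pipeline described above can be sketched roughly as follows. The log format, channel names, and the mean ± 3σ envelope are assumptions for illustration; the contractor's actual parsing rules and models are not described in the abstract.

```python
# Illustrative BOP event-log pipeline: parse log-file lines into structured
# records, learn a per-channel normal-behavior envelope from history, and
# flag readings outside the envelope for escalation to the RTOC.
import statistics

def parse_line(line: str):
    """Parse an assumed 'timestamp,channel,value' event-log export line."""
    ts, channel, value = line.strip().split(",")
    return ts, channel, float(value)

def learn_envelope(values, k: float = 3.0):
    """Normal-behavior band as mean +/- k standard deviations (assumed rule)."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    return (mu - k * sigma, mu + k * sigma)

def is_anomalous(value: float, envelope) -> bool:
    lo, hi = envelope
    return not (lo <= value <= hi)
```

Real event logs mix analog readings with warnings and system messages, so a production parser would branch on record type before this step.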


2021 ◽  
Author(s):  
Paulinus Abhyudaya Bimastianto ◽  
Shreepad Purushottam Khambete ◽  
Hamdan Mohamed Alsaadi ◽  
Suhail Mohammed Al Ameri ◽  
Erwan Couzigou ◽  
...  

Abstract This project used predictive analytics and machine learning-based modeling to detect drilling anomalies, namely stuck pipe events. Analysis focused on historical drilling data and real-time operational data to address the limitations of physics-based modeling. This project was designed to enable drilling crews to minimize downtime and non-productive time through real-time anomaly management. The solution used data science techniques to overcome data consistency/quality issues and flag drilling anomalies leading to a stuck pipe event. Predictive machine learning models were deployed across seven wells in different fields. The models analyzed both historical and real-time data across various data channels to identify anomalies (difficulties that impact non-productive time). The modeling approach mimicked the behavior of drillers using surface parameters. Small deviations from normal behavior were identified based on combinations of surface parameters, and automated machine learning was used to accelerate and optimize the modeling process. The output was a risk score that flags deviations in rig surface parameters. During the development phase, multiple data science approaches were attempted to monitor the overall health of the drilling process. They analyzed both historical and real-time data from torque, hole depth and deviation, standpipe pressure, and various other data channels. The models detected drilling anomalies with a harmonic model accuracy of 80% and produced valid alerts on 96% of stuck pipe and tight hole events. The average forewarning was two hours. This allowed personnel ample time to make corrections before stuck pipe events could occur. This also enabled the drilling operator to save the company upwards of millions of dollars in drilling costs and downtime. This project introduced novel data aggregation and deep learning-based normal behavior modeling methods. 
It demonstrates the benefits of adopting predictive analytics and machine learning in drilling operations. The approach enabled operators to mitigate data issues and demonstrate real-time, high-frequency and high-accuracy predictions. As a result, the operator was able to significantly reduce non-productive time.
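The deviation-based risk scoring described above can be sketched as follows. The channel names, the per-channel baselines, and the mapping from worst deviation to a 0-1 score are assumptions, not the project's actual model.

```python
# Sketch of a stuck-pipe risk score: z-score each rig surface parameter
# against its normal-behavior baseline, then map the worst absolute
# deviation into a 0-1 risk score that can be thresholded for alerts.
import math

def channel_zscores(reading: dict, baseline: dict) -> dict:
    """Z-score of each surface parameter against its (mean, std) baseline."""
    return {ch: (reading[ch] - mu) / sd for ch, (mu, sd) in baseline.items()}

def risk_score(zscores: dict) -> float:
    """Map the worst absolute deviation to a 0-1 score (assumed scaling)."""
    worst = max(abs(z) for z in zscores.values())
    return 1.0 - math.exp(-worst / 3.0)
```

A real deployment would learn the baselines per hole section and drilling activity, since "normal" torque or standpipe pressure shifts with operating mode.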


TAPPI Journal ◽  
2019 ◽  
Vol 18 (11) ◽  
pp. 679-689
Author(s):  
CYDNEY RECHTIN ◽  
CHITTA RANJAN ◽  
ANTHONY LEWIS ◽  
BETH ANN ZARKO

Packaging manufacturers are challenged to achieve consistent strength targets and maximize production while reducing costs through smarter fiber utilization, chemical optimization, energy reduction, and more. With innovative instrumentation readily accessible, mills are collecting vast amounts of data that provide them with ever-increasing visibility into their processes. Turning this visibility into actionable insight is key to successfully exceeding customer expectations and reducing costs. Predictive analytics supported by machine learning can provide real-time quality measures that remain robust and accurate in the face of changing machine conditions. These adaptive quality “soft sensors” allow for more informed, on-the-fly process changes; fast change detection; and process control optimization without requiring periodic model tuning. The use of predictive modeling in the paper industry has increased in recent years; however, little attention has been given to packaging finished quality. The use of machine learning to maintain prediction relevancy under ever-changing machine conditions is novel. In this paper, we demonstrate the process of establishing real-time, adaptive quality predictions in an industry focused on reel-to-reel quality control, and we discuss the value created through the availability and use of real-time critical quality.
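An adaptive soft sensor of the kind described above can be sketched with recursive least squares and a forgetting factor, one standard way to track drifting machine conditions without periodic retuning. The paper's actual method is not specified in the abstract; this is a generic illustration.

```python
# Adaptive "soft sensor" sketch: an online-updated linear model whose
# coefficients follow changing machine conditions. Recursive least squares
# with a forgetting factor folds in each new (process vars, lab value) pair.
import numpy as np

class AdaptiveSoftSensor:
    def __init__(self, n_features: int, forgetting: float = 0.99):
        self.w = np.zeros(n_features)        # model coefficients
        self.P = np.eye(n_features) * 1e3    # inverse-covariance estimate
        self.lam = forgetting                # forgetting factor (<1 adapts faster)

    def predict(self, x: np.ndarray) -> float:
        return float(self.w @ x)

    def update(self, x: np.ndarray, y: float) -> None:
        """Standard RLS update with exponential forgetting."""
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w = self.w + gain * (y - self.w @ x)
        self.P = (self.P - np.outer(gain, Px)) / self.lam
```

The forgetting factor trades stability against adaptation speed: values near 1 average over long history, smaller values chase recent machine behavior.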


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2910
Author(s):  
Andreas Andreou ◽  
Constandinos X. Mavromoustakis ◽  
George Mastorakis ◽  
Jordi Mongay Batalla ◽  
Evangelos Pallis

Various research approaches to COVID-19 are currently being developed using machine learning (ML) techniques and edge computing, either to identify virus molecules or to anticipate the risk analysis of the spread of COVID-19. Consequently, these efforts elaborate datasets that derive either from the WHO, through its website and research portals, or from data generated in real time by the healthcare system. Data analysis, modelling and prediction are performed through multiple algorithmic techniques. The inability of these techniques to generate accurate predictions motivates this research study, which modifies an existing machine learning technique to achieve valuable forecasts. More specifically, this study modifies the Levenberg–Marquardt algorithm, which is commonly used to approach solutions to nonlinear least squares problems, supports the acquisition of data from IoT devices, and analyses these data via cloud computing to generate foresight about the progress of the outbreak in real-time environments. Hence, we enhance the optimization of the trend line that interprets these data. We introduce this framework in conjunction with a novel encryption process that we propose for the datasets and the implementation of mortality predictions.
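The baseline the study modifies, a Levenberg–Marquardt fit of an outbreak trend line, can be illustrated with SciPy's `curve_fit` (which uses LM for unconstrained problems). The logistic model, synthetic case counts, and starting guess are assumptions; the paper's modification and encryption process are not reproduced here.

```python
# Levenberg-Marquardt fit of a logistic trend line to cumulative case
# counts, using scipy.optimize.curve_fit with method="lm".
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic noiseless cumulative-case series standing in for real data
t = np.arange(0, 60, dtype=float)
cases = logistic(t, 10000.0, 0.2, 30.0)

# LM refines the assumed starting guess toward the generating parameters
popt, _ = curve_fit(logistic, t, cases, p0=(8000.0, 0.15, 25.0), method="lm")
```

With real, noisy reporting data the fit would also need an uncertainty estimate, which `curve_fit` returns as the second value (the parameter covariance).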


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ann-Marie Mallon ◽  
Dieter A. Häring ◽  
Frank Dahlke ◽  
Piet Aarden ◽  
Soroosh Afyouni ◽  
...  

Abstract Background Novartis and the University of Oxford’s Big Data Institute (BDI) have established a research alliance with the aim of improving health care and drug development by making them more efficient and targeted. By combining the latest statistical machine learning technology with an innovative IT platform developed to manage large volumes of anonymised data from numerous data sources and types, we plan to identify novel patterns with clinical relevance that cannot be detected by humans alone, in order to identify phenotypes and early predictors of patient disease activity and progression. Method The collaboration focuses on highly complex autoimmune diseases and develops a computational framework to assemble a research-ready dataset across numerous modalities. For the Multiple Sclerosis (MS) project, the collaboration has anonymised and integrated phase II to phase IV clinical and imaging trial data from ≈35,000 patients across all clinical phenotypes, collected in more than 2200 centres worldwide. For the “IL-17” project, the collaboration has anonymised and integrated clinical and imaging data from over 30 phase II and III Cosentyx clinical trials including more than 15,000 patients suffering from four autoimmune disorders (Psoriasis, Axial Spondyloarthritis, Psoriatic arthritis (PsA) and Rheumatoid arthritis (RA)). Results A fundamental component of successful data analysis, and of the collaborative development of novel machine learning methods on these rich data sets, has been the construction of a research informatics framework that captures the data at regular intervals, anonymises images and integrates them with the de-identified clinical data, quality-controls the result, and compiles it into a research-ready relational database available to multi-disciplinary analysts.
The collaborative development by a group of software developers, data wranglers, statisticians, clinicians, and domain scientists across both organisations has been key. This framework is innovative in that it facilitates collaborative data management and makes a complicated clinical trial data set from a pharmaceutical company available to academic researchers who become associated with the project. Conclusions An informatics framework has been developed to capture clinical trial data into a pipeline of anonymisation, quality control, data exploration, and subsequent integration into a database. Establishing this framework has been integral to the development of analytical tools.


Predictive analytics is the examination of relevant data so that we can recognize problems that may arise in the near future. Manufacturers are interested in quality control and in making sure that the whole factory functions at the best possible efficiency. With predictive analytics, it is feasible to increase manufacturing quality and anticipate needs throughout the factory. Hence, we have proposed an application of predictive analytics in the manufacturing sector, focused in particular on price prediction and demand prediction for various products that are manufactured on a regular basis. We have trained and tested different machine learning algorithms that can be used to predict the price as well as the demand for a particular product using historical data about that product’s sales and other transactions. Of these tested algorithms, we selected the regression tree algorithm, which gives an accuracy of 95.66% for demand prediction and 88.85% for price prediction. Regression trees are therefore best suited for use in the manufacturing sector as far as price prediction and demand prediction of a product are concerned. Thus, the proposed application can help the manufacturing sector improve its overall functioning and efficiency through price and demand prediction of products.
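The regression-tree approach can be sketched with scikit-learn's `DecisionTreeRegressor`. The features, the synthetic demand function, and the train/test split are illustrative assumptions; the paper's data and its reported 95.66%/88.85% accuracies are not reproduced here.

```python
# Regression-tree demand prediction on synthetic sales history: train on
# assumed features (month, unit price, promotion flag) and score held-out R^2.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(1, 13, n),        # month of sale
    rng.uniform(5.0, 50.0, n),     # unit price
    rng.integers(0, 2, n),         # promotion flag
])
# Synthetic demand: seasonal term, price sensitivity, promotion lift
demand = 200 + 30 * np.sin(X[:, 0] / 12 * 2 * np.pi) - 2 * X[:, 1] + 40 * X[:, 2]

model = DecisionTreeRegressor(random_state=0).fit(X[:400], demand[:400])
r2 = model.score(X[400:], demand[400:])   # held-out coefficient of determination
```

For price prediction the same pipeline applies with price as the target and demand-side variables as features; a depth limit or pruning would be needed on noisy real data to avoid overfitting.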


2021 ◽  
Vol 73 (03) ◽  
pp. 25-30
Author(s):  
Srikanta Mishra ◽  
Jared Schuetter ◽  
Akhil Datta-Gupta ◽  
Grant Bromhal

Algorithms are taking over the world, or so we are led to believe, given their growing pervasiveness in multiple fields of human endeavor such as consumer marketing, finance, design and manufacturing, health care, politics, sports, etc. The focus of this article is to examine where things stand in regard to the application of these techniques for managing subsurface energy resources in domains such as conventional and unconventional oil and gas, geologic carbon sequestration, and geothermal energy. It is useful to start with some definitions to establish a common vocabulary.
Data analytics (DA): sophisticated data collection and analysis to understand and model hidden patterns and relationships in complex, multivariate data sets.
Machine learning (ML): building a model between predictors and response, where an algorithm (often a black box) is used to infer the underlying input/output relationship from the data.
Artificial intelligence (AI): applying a predictive model with new data to make decisions without human intervention (and with the possibility of feedback for model updating).
Thus, DA can be thought of as a broad framework that helps determine what happened (descriptive analytics), why it happened (diagnostic analytics), what will happen (predictive analytics), or how we can make something happen (prescriptive analytics) (Sankaran et al. 2019). Although DA is built upon a foundation of classical statistics and optimization, it has increasingly come to rely upon ML, especially for predictive and prescriptive analytics (Donoho 2017). While the terms DA, ML, and AI are often used interchangeably, it is important to recognize that ML is basically a subset of DA and a core enabling element of the broader decision-making construct that is AI. In recent years, there has been a proliferation of studies using ML for predictive analytics in the context of subsurface energy resources.
Consider how the number of papers on ML in the OnePetro database has been increasing exponentially since 1990 (Fig. 1). These trends are also reflected in the number of technical sessions devoted to ML/AI topics in conferences organized by SPE, AAPG, and SEG, among others, as well as in books targeted at practitioners in these professions (Holdaway 2014; Mishra and Datta-Gupta 2017; Mohaghegh 2017; Misra et al. 2019). Given these high levels of activity, our goal is to provide some observations and recommendations on the practice of data-driven model building with ML techniques. The observations are motivated by our belief that some geoscientists and petroleum engineers may be jumping the gun by applying these techniques in an ad hoc manner without any foundational understanding, whereas others may be holding off on using these methods because they have no formal ML training and could benefit from some concrete advice on the subject. The recommendations are conditioned by our experience in applying both conventional statistical modeling and data analytics approaches to practical problems.


2020 ◽  
Author(s):  
Jung-Hyun Kim ◽  
Simon I. Briceno ◽  
Cedric Y. Justin ◽  
Dimitri Mavris

2021 ◽  
Author(s):  
Menzi Skhosana ◽  
Absalom Ezugwu

The era of Big Data and the Internet of Things is upon us, and it is time for developing countries to take advantage of these ideas and apply them pragmatically to solve real-world problems. Many problems faced daily by the public transportation sector can be resolved or mitigated through the collection of appropriate data and the application of predictive analytics. In this work, we focus primarily on problems affecting public transport buses: the unavailability of real-time information to commuters about the current status of a given bus or travel route, and the inability of bus operators to efficiently assign available buses to routes for a given day based on the expected demand for a particular route. A cloud-based system was developed to address these problems. It is composed of two subsystems: a mobile application for commuters that provides the current location and availability of a given bus and other related information, and which can also be used by drivers so that the bus can be tracked in real time and ridership information collected throughout the day; and a web application that serves as a dashboard for bus operators to gain insights from the collected ridership data. These were integrated with a machine learning model trained on the collected ridership data to predict daily ridership for a given route. Our system provides a holistic solution to problems in the public transport sector, as it is highly scalable, cost-efficient, and takes full advantage of currently available technologies in comparison with previous work on this topic.
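The ridership-prediction component is not described in detail in the abstract; a minimal baseline, assuming day-of-week seasonality per route as the main signal, could look like this. All names and the fallback rule are assumptions.

```python
# Baseline daily-ridership predictor: per-(route, weekday) historical means,
# with a route-wide mean as fallback for unseen weekdays.
from collections import defaultdict
from statistics import fmean

def train(history):
    """history rows: (route_id, weekday 0-6, riders). Returns bucket means."""
    buckets = defaultdict(list)
    for route, weekday, riders in history:
        buckets[(route, weekday)].append(riders)
    return {key: fmean(vals) for key, vals in buckets.items()}

def predict(model, route, weekday):
    """Fall back to the route-wide mean when the weekday is unseen."""
    if (route, weekday) in model:
        return model[(route, weekday)]
    route_means = [v for (r, _), v in model.items() if r == route]
    return fmean(route_means) if route_means else 0.0
```

A deployed model would add holidays, weather, and trend features, but a bucket-mean baseline like this is a useful yardstick for judging whether a learned model adds value.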

