Interpretation Challenges and Solutions for Real-Time Asphaltene Paramagnetic Sensing at the Wellhead

2021 ◽  
Author(s):  
John Lovell ◽  
Dalia Salim Abdallah ◽  
Rahul Mark Fonseca ◽  
Mark Grutters ◽  
Sameer Punnapala ◽  
...  

Abstract Asphaltene deposition presents a significant flow assurance challenge to oil production in many parts of the Middle East and beyond. Until recently, there had been no intervention-free approach to monitoring deposition in asphaltene-affected wells. This prompted ADNOC to sponsor MicroSilicon to develop an intervention-less, real-time sensor device to monitor asphaltene deposition. This new state-of-the-art device is currently installed and automatically collecting data at the wellhead and nearby facilities of an ADNOC-operated field. Historically, measuring asphaltene in oil relied on laboratory processes that extracted the asphaltene using a combination of solvents and gravimetric techniques. Paramagnetic techniques offer a potentially simpler alternative because the number of spins per gram of an oil is a constant property of that oil, at least at constant temperature and pressure. Taking the device to the field means that any interpretation needs to be made independent of these properties. Additionally, the fluid entering the sensor is multiphase and subject to varying temperature and pressure, which raises challenges for the conversion of raw spectroscopic data into asphaltene quantity and particle size. These challenges were addressed with a combination of hardware, software and cloud-based machine learning technologies. Oil from over two dozen wells has been sampled in real time, confirming that the asphaltene percentage not only varies from well to well but is also a dynamic aspect of production, with some wells showing relatively constant levels and others showing consistent variation. One other well was placed on continuous observation and showed a decrease in asphaltene level following a choke change at the surface. Diagnostic data enhanced by machine learning complements the asphaltene measurement and provides a much more complete picture of the flow assurance challenge than had previously been available.
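
As a rough illustration of the paramagnetic principle this abstract relies on (the double-integrated spin-resonance signal is proportional to the number of unpaired spins, and spins per gram of asphaltene act as a calibration constant), the Python sketch below converts a derivative spectrum into an asphaltene weight percent. The function name and all calibration constants are hypothetical placeholders, not the MicroSilicon device's actual interpretation chain.

import numpy as np

def asphaltene_wt_percent(derivative_signal, field_gauss, sample_mass_g,
                          spins_per_integral_unit=1.0e12,
                          spins_per_gram_asphaltene=1.0e18):
    """Illustrative conversion of a paramagnetic (EPR-style) spectrum into an
    asphaltene weight fraction. Calibration constants are placeholders; a
    fielded sensor would determine them from laboratory standards at known
    temperature and pressure."""
    derivative_signal = np.asarray(derivative_signal, dtype=float)
    field_gauss = np.asarray(field_gauss, dtype=float)
    # First integration: derivative spectrum -> absorption spectrum (trapezoidal rule).
    absorption = np.concatenate(([0.0], np.cumsum(
        0.5 * (derivative_signal[1:] + derivative_signal[:-1]) * np.diff(field_gauss))))
    # Second integration: the area under the absorption curve is proportional
    # to the number of unpaired spins in the sample.
    double_integral = np.trapz(absorption, field_gauss)
    total_spins = double_integral * spins_per_integral_unit
    spins_per_gram_sample = total_spins / sample_mass_g
    # Spins per gram of pure asphaltene (constant for a given oil at fixed
    # temperature and pressure) converts the measurement into a weight percent.
    return 100.0 * spins_per_gram_sample / spins_per_gram_asphaltene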

Author(s):  
Petar Radanliev ◽  
David De Roure ◽  
Kevin Page ◽  
Max Van Kleek ◽  
Omar Santos ◽  
...  

Abstract Multiple governmental agencies and private organisations have made commitments to the colonisation of Mars. Such colonisation requires complex systems and infrastructure that could be very costly to repair or replace after cyber-attacks. This paper surveys deep learning algorithms, IoT cyber security and risk models, and established mathematical formulas to identify the best approach for developing a dynamic and self-adapting system for predictive cyber risk analytics supported with Artificial Intelligence and Machine Learning and real-time intelligence in edge computing. The paper presents a new mathematical approach for integrating concepts from cognition engine design, edge computing, and Artificial Intelligence and Machine Learning to automate anomaly detection. This engine instigates a step change by applying Artificial Intelligence and Machine Learning embedded at the edge of IoT networks to deliver safe and functional real-time intelligence for predictive cyber risk analytics. This will enhance capacities for risk analytics and assist in the creation of a comprehensive and systematic understanding of the opportunities and threats that arise when edge computing nodes are deployed and when Artificial Intelligence and Machine Learning technologies are migrated to the periphery of the internet and into local IoT networks.
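
The paper's central idea is anomaly detection embedded at the edge of IoT networks. As a minimal, hypothetical sketch of that setting (not the paper's cognition engine), the following streaming detector keeps exponentially weighted statistics of a sensor feed and flags readings that deviate strongly, cheaply enough to run on an edge node. All class names, parameters and thresholds are illustrative.

class EdgeAnomalyDetector:
    """Lightweight streaming anomaly detector of the kind that could run on an
    IoT edge node: it tracks an exponentially weighted mean and variance of a
    sensor stream and flags readings more than z_threshold standard deviations
    away. Names and defaults are illustrative, not taken from the paper."""

    def __init__(self, alpha=0.05, z_threshold=4.0):
        self.alpha = alpha            # smoothing factor for the running statistics
        self.z_threshold = z_threshold
        self.mean = None
        self.var = 1.0

    def update(self, x):
        """Return True if reading x is anomalous, then fold it into the model."""
        if self.mean is None:         # first observation initialises the statistics
            self.mean = x
            return False
        z = abs(x - self.mean) / (self.var ** 0.5 + 1e-9)
        anomalous = z > self.z_threshold
        # Update the running mean/variance (EWMA) so the detector self-adapts.
        delta = x - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return anomalous

detector = EdgeAnomalyDetector()
for reading in [0.9, 1.1, 1.0, 0.95, 1.05, 9.7]:
    print(reading, detector.update(reading))   # only the last reading is flagged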


2020 ◽  
Vol 50 (1) ◽  
pp. 1-25 ◽  
Author(s):  
Changwon Suh ◽  
Clyde Fare ◽  
James A. Warren ◽  
Edward O. Pyzer-Knapp

Machine learning, applied to chemical and materials data, is transforming the field of materials discovery and design, yet significant work is still required to fully take advantage of machine learning algorithms, tools, and methods. Here, we review the accomplishments to date of the community and assess the maturity of state-of-the-art, data-intensive research activities that combine perspectives from materials science and chemistry. We focus on three major themes—learning to see, learning to estimate, and learning to search materials—to show how advanced computational learning technologies are rapidly and successfully used to solve materials and chemistry problems. Additionally, we discuss a clear path toward a future where data-driven approaches to materials discovery and design are standard practice.
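
As a minimal illustration of the "learning to estimate" theme, the sketch below fits a regressor to composition-style descriptors and reports cross-validated accuracy. The features, target and data are synthetic stand-ins, not any benchmark from the review.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical descriptors, e.g. mean atomic mass, electronegativity, radius, valence.
X = rng.random((200, 4))
# Surrogate material property with a simple dependence on the descriptors plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * rng.standard_normal(200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")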


Author(s):  
Severin Sadjina ◽  
Stian Skjong ◽  
Armin Pobitzer ◽  
Lars T. Kyllingstad ◽  
Roy-Jostein Fiskerstrand ◽  
...  

Abstract Here, we present the R&D project Real-Time Digital Twin for Boosting Performance of Seismic Operations, which aims at increasing the overall operational efficiency of seismic vessels through digitisation and automation. The cornerstone of this project is the development of a real-time digital twin (RTDT), a sophisticated mathematical model and state estimator of all the in-sea seismic equipment, augmented with real-time measurements from the actual equipment. This provides users and systems on board the vessel with a live digital representation of the state of the equipment during operations. By combining the RTDT with state-of-the-art methods in machine learning and control theory, the project will develop new advisory and automation systems that improve the efficiency of seismic survey operations, reduce the risk of equipment damage, improve health monitoring and fault detection systems, and improve the quality of the seismic data. This will lead to less unproductive time, reduced costs, reduced fuel consumption and reduced emissions for a given operational scope. The main focus of this paper is the presentation of today's challenges in offshore seismic surveys and how state-of-the-art technology can be adopted to improve various operations. We discuss how simulation technology, machine learning and live sensor measurements can be integrated into on-board decision support and automation systems, and highlight the importance of such systems for designing the complex, autonomous offshore vessels of the future. Finally, we present some early results from the project in the form of two brief case studies.
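
As a minimal illustration of the model-plus-measurement fusion behind a state estimator such as the RTDT, the sketch below runs a one-dimensional Kalman filter on a hypothetical "streamer depth" signal. The model, noise levels and sensor values are illustrative and not taken from the project.

import numpy as np

def kalman_step(x, P, z, F=1.0, H=1.0, Q=1e-3, R=1e-1):
    # Predict: propagate the model state and its uncertainty.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend the prediction with the live sensor measurement z.
    K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 10.0, 1.0                               # initial depth estimate and variance
for z in [10.2, 10.1, 9.8, 10.4, 10.0]:        # noisy depth sensor readings
    x, P = kalman_step(x, P, z)
    print(f"estimated depth: {x:.2f} m (variance {P:.3f})")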


Proceedings ◽  
2020 ◽  
Vol 54 (1) ◽  
pp. 38
Author(s):  
David Novoa-Paradela ◽  
Óscar Fontenla-Romero ◽  
Bertha Guijarro-Berdiñas

Anomaly detection is a sub-area of machine learning that deals with the development of methods to distinguish between normal and anomalous data. Due to the frequent use of anomaly-detection systems in monitoring and the lack of methods capable of learning in real time, this research presents a new method that provides such online adaptability. The method bases its operation on the properties of scaled convex hulls. It begins by building a convex hull from a minimal set of data, which is adapted and subdivided over time to accurately fit the boundary of the normal-class data. The model has online learning ability and its execution can be carried out in a distributed and parallel way, all of which are valuable advantages when dealing with big datasets. The method has been compared to other state-of-the-art algorithms, demonstrating its effectiveness.
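
The core geometric idea can be sketched offline with SciPy: build a convex hull of the normal data, scale it about its centroid, and flag points that fall outside. The online adaptation and hull subdivision described in the abstract are not reproduced in this toy example, and the data and scale factor are illustrative.

import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(1)
normal = rng.normal(size=(300, 2))                 # training data: the "normal" class

hull = ConvexHull(normal)
centroid = normal[hull.vertices].mean(axis=0)
scale = 1.1                                        # expansion factor; >1 loosens the boundary
scaled_vertices = centroid + scale * (normal[hull.vertices] - centroid)
boundary = Delaunay(scaled_vertices)               # membership test via triangulation

def is_anomaly(points):
    # find_simplex returns -1 for points outside the scaled hull -> anomaly.
    return boundary.find_simplex(points) < 0

test = np.array([[0.0, 0.0], [6.0, 6.0]])
print(is_anomaly(test))                            # expected: [False  True]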


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2786
Author(s):  
Rani Baghezza ◽  
Kévin Bouchard ◽  
Abdenour Bouzouane ◽  
Charles Gouin-Vallerand

This review presents the state of the art and a global overview of the research challenges of real-time distributed activity recognition in the field of healthcare. Offline activity recognition is discussed as a starting point to establish the useful concepts of the field, such as sensor types, activity labeling and feature extraction, outlier detection, and machine learning. New challenges and obstacles brought on by real-time centralized activity recognition, such as communication, real-time activity labeling, cloud and local approaches, and real-time machine learning in a streaming context, are then discussed. Finally, real-time distributed activity recognition is covered through existing implementations in the scientific literature, and six main angles of optimization are defined: processing, memory, communication, energy, time, and accuracy. This survey is addressed to any reader interested in the development of distributed artificial intelligence as well as activity recognition, regardless of their level of expertise.
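
As a toy illustration of the streaming setting the review discusses, the sketch below extracts simple features from sliding windows of a sensor signal and updates a classifier incrementally with scikit-learn's partial_fit. The window length, features, labels and synthetic data are all hypothetical.

import numpy as np
from sklearn.linear_model import SGDClassifier

WINDOW = 50
classes = np.array([0, 1])                        # e.g. 0 = "resting", 1 = "walking"
clf = SGDClassifier(loss="log_loss")

def window_features(window):
    # Simple per-window statistics standing in for richer feature extraction.
    return np.array([window.mean(), window.std(), np.abs(np.diff(window)).mean()])

rng = np.random.default_rng(0)
# Synthetic stream: low-variance windows for class 0, high-variance for class 1.
stream = [(rng.normal(0, 0.1 if lbl == 0 else 1.0, WINDOW), lbl)
          for lbl in rng.integers(0, 2, 200)]

for window, label in stream:                      # windows arrive one at a time
    x = window_features(window).reshape(1, -1)
    clf.partial_fit(x, [label], classes=classes)  # incremental (online) update

print(clf.predict(window_features(rng.normal(0, 1.0, WINDOW)).reshape(1, -1)))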


2021 ◽  
Vol 2119 (1) ◽  
pp. 012109
Author(s):  
S Abdurakipov

Abstract The current coverage of oil wells with telemetry does not allow timely detection of deviations in the operation of about 40% of electric submersible pumps. To solve this problem, a virtual-sensor model has been developed that predicts temperature and pressure growth at the pump intake in the absence of submersible sensors, based on modern big data processing and machine learning technologies. The developed virtual-sensor models are embedded directly into the process control system, which allows technologists and operators to be notified about possible reductions in the planned average pump operating time and about possible pump failures for various reasons.
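
A virtual sensor of this kind can be sketched as a regression model mapping surface telemetry to an unmeasured downhole quantity. The feature names, units and synthetic data below are illustrative assumptions, not the deployed models.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(50, 5, n),    # motor current, A
    rng.normal(45, 3, n),    # drive frequency, Hz
    rng.normal(20, 2, n),    # wellhead pressure, bar
    rng.normal(90, 4, n),    # motor temperature, deg C
])
# Synthetic pump-intake pressure standing in for historical downhole-gauge data.
y = 0.8 * X[:, 2] + 0.05 * X[:, 0] - 0.02 * X[:, 3] + rng.normal(0, 0.5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
virtual_sensor = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out data: {virtual_sensor.score(X_test, y_test):.2f}")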


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-26
Author(s):  
Yeli Feng ◽  
Daniel Jun Xian Ng ◽  
Arvind Easwaran

Uncertainties in machine learning are a significant roadblock for its application in safety-critical cyber-physical systems (CPS). One source of uncertainty arises from distribution shifts in the input data between training and test scenarios. Detecting such distribution shifts in real time is an emerging approach to addressing the challenge. The high-dimensional input space of CPS applications involving imaging adds extra difficulty to this task, known as out-of-distribution (OoD) detection, for which generative learning models are widely adopted. To improve on the state of the art, we studied existing proposals from both the machine learning and CPS fields. In the latter, real-time safety monitoring for autonomous driving agents has been a focus. By exploiting the spatiotemporal correlation of motion in videos, we can robustly detect hazardous motion around autonomous driving agents. Inspired by the latest advances in Variational Autoencoder (VAE) theory and practice, we tapped into the prior knowledge in the data to further boost the robustness of OoD detection. Comparison studies on the nuScenes and Synthia data sets show that our methods significantly improve detection of OoD factors unique to driving scenarios, performing 42% better than state-of-the-art approaches. Our model also generalized near-perfectly, 97% better than the state of the art, across the real-world and simulated driving data sets tested. Finally, we customized one proposed method into a twin-encoder model that can be deployed to resource-limited embedded devices for real-time OoD detection. Its execution time was reduced by more than a factor of four with low-precision 8-bit integer inference, while its detection capability remains comparable to that of the corresponding floating-point model.
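
The paper's detectors build on variational autoencoders; the sketch below illustrates the same reconstruction-error principle with a linear autoencoder (PCA), where inputs far from the training distribution reconstruct poorly and receive high OoD scores. The data, dimensionality and threshold are illustrative, not the paper's models or benchmarks.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
in_dist = rng.normal(0, 1, (1000, 32))                  # stand-in for training features
ood = rng.normal(4, 1, (10, 32))                        # distribution-shifted inputs

encoder = PCA(n_components=8).fit(in_dist)              # learn a low-dimensional manifold

def ood_score(x):
    # Per-sample reconstruction error after projecting onto the learned manifold.
    recon = encoder.inverse_transform(encoder.transform(x))
    return np.mean((x - recon) ** 2, axis=1)

threshold = np.percentile(ood_score(in_dist), 99)       # calibrate on in-distribution data
print("flagged as OoD:", ood_score(ood) > threshold)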


2022 ◽  
pp. 1-14
Author(s):  
Salem Al-Gharbi ◽  
Abdulaziz Al-Majed ◽  
Salaheldin Elkatatny ◽  
Abdulazeez Abdulraheem

Abstract Due to the high demand for energy, oil and gas companies have started to drill wells in remote environments and conduct unconventional operations. In order to maintain safe, fast and more cost-effective operations, utilizing machine learning (ML) technologies has become a must. The harsh environments of drilling sites and the transmission setups negatively affect the drilling data, leading to less than acceptable ML results. For that reason, a large portion of ML development projects is actually spent on improving the data by data-quality experts. The objective of this paper is to evaluate the effectiveness of ML in improving real-time drilling-data quality and to compare it to human expert knowledge. To achieve that, two large real-time drilling datasets were used: one dataset was used to train three different ML techniques, artificial neural network (ANN), support vector machine (SVM) and decision tree (DT), and the second was used to evaluate them. The ML results were compared with the results of a real-time drilling data quality expert. Despite the complexity of ANN and its generally good results, it achieved a relative root mean square error (RRMSE) of 2.83%, compared with the 0.35% and 0.48% achieved by the DT and SVM techniques, respectively. The uniqueness of this work is in developing ML that simulates the improvement of drilling-data quality by an expert. This research provides a guide for improving the quality of real-time drilling data.
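
A minimal version of the comparison workflow, with synthetic data standing in for the proprietary drilling datasets, trains the three model families and ranks them by RRMSE; the RRMSE definition used here (RMSE relative to the mean of the target) is one common convention and may differ from the authors'.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

def rrmse(y_true, y_pred):
    # Relative RMSE, expressed as a percentage of the mean target magnitude.
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.mean(np.abs(y_true))

rng = np.random.default_rng(0)
X = rng.random((2000, 6))                            # stand-in drilling parameters
y = 5 + 3 * X[:, 0] - 2 * X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    "SVM": SVR(),
    "DT": DecisionTreeRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: RRMSE = {rrmse(y_te, model.predict(X_te)):.2f}%")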


2020 ◽  
Vol 4 (02) ◽  
pp. 116-120
Author(s):  
Srinath Damodaran ◽  
Arjun Alva ◽  
Srinath Kumar ◽  
Muralidhar Kanchi

Abstract The creation of intelligent software or systems, machine learning, and deep learning technologies are integral components of artificial intelligence. Point-of-care ultrasound involves the bedside use of ultrasound to answer specific diagnostic questions and to assess real-time physiologic responses to treatment. This article provides insight into the pearls and pitfalls of artificial intelligence in point-of-care ultrasound for the coronavirus disease 2019 pandemic.

