Machine-driven earth exploration: Artificial intelligence in oil and gas

2021 ◽  
Vol 40 (4) ◽  
pp. 298-301
Author(s):  
Tariq Alkhalifah ◽  
Ali Almomin ◽  
Ali Naamani

Artificial intelligence (AI), specifically machine learning (ML), has emerged as a powerful tool for addressing many of the challenges we face as we try to illuminate the earth and properly predict its content. From fault detection, to salt boundary mapping, to image resolution enhancement, the quest to teach our computing devices how to perform these tasks accurately, and to quantify that accuracy, has become a feasible and sought-after objective. Recent advances in ML algorithms, and the availability of software modules that implement them, have enabled geoscientists to focus on potential applications of such tools. As a result, we held the virtual workshop, Artificially Intelligent Earth Exploration Workshop: Teaching the Machine How to Characterize the Subsurface, 23–26 November 2020.

2019 ◽  
Vol 33 (2) ◽  
pp. 31-50 ◽  
Author(s):  
Ajay Agrawal ◽  
Joshua S. Gans ◽  
Avi Goldfarb

Recent advances in artificial intelligence are primarily driven by machine learning, a prediction technology. Prediction is useful because it is an input into decision-making. In order to appreciate the impact of artificial intelligence on jobs, it is important to understand the relative roles of prediction and decision tasks. We describe and provide examples of how artificial intelligence will affect labor, emphasizing differences between when the automation of prediction leads to automating decisions versus enhancing decision-making by humans.


Author(s):  
Helmi Zakariah

Mankind, in its historical narrative, almost always immodestly regards itself as the most intelligent of all God's creation, whether through the self-label of "Homo sapiens" (the wise man) or the dogma of being the Khalifah (leader) of the earth. But what do we mean by intelligence? What is the epistemology (origin) of our collective knowledge? And does it bring us closer to wisdom? These questions, which we commonly take for granted, must be examined continuously in our trending pursuit of translating (or imposing) our thinking architecture onto machine learning and artificial intelligence. From the origin of the commonly used term "algorithm" in A.I. (spoiler: it derives from the name of a Muslim mathematician of the 9th century) to the intersection of A.I. and the concept of Ihsan, this plenary intends to demystify A.I. and attempts to harmonize this leap-of-faith tool into a tool for the faithful.


2020 ◽  
Author(s):  
Leonardo Guerreiro Azevedo ◽  
Renan Souza ◽  
Raphael Melo Thiago ◽  
Elton Soares ◽  
Marcio Moreno

Machine Learning (ML) is a core concept behind Artificial Intelligence systems, which are driven by data and generate ML models. These models are used for decision making, and it is crucial to be able to trust their outputs, e.g., by understanding the process that derives them. One way to explain the derivation of ML models is to track the whole ML lifecycle and generate its data lineage, which may be accomplished with provenance data management techniques. In this work, we present the use of the ProvLake tool for ML provenance data management in the ML lifecycle for Well Top Picking, an essential process in Oil and Gas exploration. We show how ProvLake supported the validation of the ML models, the assessment of whether they generalize while respecting the domain characteristics, and the tracing of their derivation.
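The abstract does not show ProvLake's API. As a rough, hypothetical sketch of the underlying idea of provenance data management in an ML lifecycle (the record structure, class names, and well-log fields below are illustrative assumptions, not ProvLake's interface), one could content-address every artifact and log each training activity as a lineage record that can later be walked backwards from a model:

```python
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Any

@dataclass
class ProvenanceRecord:
    """One lineage record: what went into an activity, what came out, and when."""
    activity: str                # e.g. "train_well_top_model"
    inputs: dict[str, str]       # artifact name -> content hash
    params: dict[str, Any]       # hyperparameters used for this run
    outputs: dict[str, str]      # artifact name -> content hash
    started_at: float = field(default_factory=time.time)

def artifact_hash(payload: bytes) -> str:
    """Content-address an artifact so lineage survives file renames."""
    return hashlib.sha256(payload).hexdigest()

class ProvenanceStore:
    """Append-only store of lineage records (toy stand-in for a provenance database)."""
    def __init__(self) -> None:
        self.records: list[ProvenanceRecord] = []

    def log(self, record: ProvenanceRecord) -> None:
        self.records.append(record)

    def lineage_of(self, output_hash: str) -> list[ProvenanceRecord]:
        """Walk backwards from a model/output hash to every upstream record."""
        frontier, seen, chain = {output_hash}, set(), []
        while frontier:
            h = frontier.pop()
            for rec in self.records:
                if h in rec.outputs.values() and id(rec) not in seen:
                    seen.add(id(rec))
                    chain.append(rec)
                    frontier.update(rec.inputs.values())
        return chain

# Usage: record the training of a (toy) well-top-picking model.
store = ProvenanceStore()
logs = b"depth,gamma_ray\n100.0,55.2\n"        # placeholder raw well-log data
model = b"<serialized model weights>"          # placeholder trained-model artifact
store.log(ProvenanceRecord(
    activity="train_well_top_model",
    inputs={"well_logs.csv": artifact_hash(logs)},
    params={"algorithm": "gradient_boosting", "n_estimators": 200},
    outputs={"model.bin": artifact_hash(model)},
))
print(json.dumps([asdict(r) for r in store.lineage_of(artifact_hash(model))], indent=2))
```

A production provenance tool would persist such records in a queryable store, typically following a standard provenance model, rather than an in-memory list; the sketch only shows the kind of lineage question (which data and parameters produced this model?) that the paper's approach supports.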


2022 ◽  
pp. 71-85
Author(s):  
Satvik Tripathi ◽  
Thomas Heinrich Musiolik

Artificial intelligence has a huge array of current and potential applications in healthcare and medicine. Ethical issues arising from algorithmic biases are one of the greatest challenges to the generalizability of AI models today. The authors address safety and regulatory barriers that impede data sharing in medicine, as well as potential changes to existing techniques and frameworks that might allow ethical data sharing for machine learning. With these developments in view, they also present different algorithmic models that are being used to develop machine learning-based medical systems that may eventually be free of sample, annotator, and temporal bias. These AI-based medical imaging models could then be fully deployed in healthcare facilities and institutions all around the world, even in the remotest areas, making diagnosis and patient care both cheaper and freely accessible.


2021 ◽  
Author(s):  
Ayman Amer ◽  
Ali Alshehri ◽  
Hamad Saiari ◽  
Ali Meshaikhis ◽  
Abdulaziz Alshamrany

Abstract Corrosion under insulation (CUI) is a critical challenge to asset integrity, and the oil and gas industry is not immune. Its severity arises from its hidden nature, as it can often go unnoticed. CUI is stimulated, in principle, by moisture ingress through the insulation layers to the surface of the pipeline. This Artificial Intelligence (AI)-powered detection technology stemmed from an urgent need to detect the presence of these corrosion types. The new approach is based on a Cyber Physical (CP) system that maximizes the potential of thermographic imaging by applying machine learning. In this work, we describe how common image processing techniques for infra-red images of assets can be enhanced with a machine learning approach, allowing the detection of locations highly vulnerable to corrosion by pinpointing CUI anomalies and areas of concern. The machine learning examines the progression of thermal images captured over time; corrosion and the factors that cause this degradation are predicted by extracting thermal anomaly features and correlating them with corrosion and irregularities in the structural integrity of assets that were verified visually during the initial learning phase of the ML algorithm. The ML classifier has shown outstanding results in predicting CUI anomalies, with a predictive accuracy in the range of 85-90% projected from 185 real field assets. IR imaging by itself is subjective and operator dependent; with this cyber-physical transfer-learning approach, such dependency has been eliminated. The results and conclusions of this work on real field assets in operation demonstrate the feasibility of the technique to predict and detect thermal anomalies directly correlated with CUI. This work has led to the development of a cyber-physical system that meets the demands of inspection units across the oil and gas industry, providing a real-time, online assessment tool that monitors the presence of CUI and enhances the output of thermography technologies using AI and machine learning. Additional benefits of this approach include safety enhancement through non-contact online inspection and cost savings from reduced scaffolding and downtime.
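The abstract does not detail the feature extraction or the classifier used. The following minimal sketch only illustrates the general shape of such a pipeline, with synthetic thermal frames, hand-picked anomaly features, and a generic scikit-learn classifier standing in for the paper's actual method:

```python
# Minimal sketch of the kind of pipeline the abstract describes: reduce a time
# series of infra-red frames per asset to a few thermal-anomaly features, then
# train a binary classifier on visually verified CUI labels.
# Data, features, and the classifier are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def thermal_features(frames: np.ndarray) -> np.ndarray:
    """frames: (time, height, width) IR temperature maps for one asset.
    Returns a small feature vector summarizing thermal anomalies over time."""
    per_frame_mean = frames.mean(axis=(1, 2))
    per_frame_max = frames.max(axis=(1, 2))
    hot_spot_ratio = (frames > frames.mean() + 2 * frames.std()).mean()
    drift = per_frame_mean[-1] - per_frame_mean[0]   # warming/cooling trend over time
    return np.array([per_frame_mean.mean(),
                     per_frame_max.max(),
                     per_frame_mean.std(),
                     hot_spot_ratio,
                     drift])

def synthetic_asset(has_cui: bool) -> np.ndarray:
    """Synthetic stand-in for one monitored asset (the paper used 185 real field assets)."""
    base = rng.normal(30.0, 1.0, size=(10, 32, 32))  # ~30 C ambient frames over 10 inspections
    if has_cui:
        base[:, 10:14, 10:14] += np.linspace(2, 6, 10)[:, None, None]  # growing hot spot
    return base

labels = rng.integers(0, 2, size=200)
X = np.stack([thermal_features(synthetic_asset(bool(y))) for y in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The paper's features, classifier, and transfer-learning step are not reproduced here; the sketch merely shows how per-asset thermal time series can be reduced to anomaly features and scored against labels verified during an initial learning phase.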


Author(s):  
Edmund T. Rolls

The subject of this book is how the brain works. In order to understand this, it is essential to know what is computed by different brain systems and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. The book will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.


Author(s):  
Andrew Briggs ◽  
Hans Halvorson ◽  
Andrew Steane

The chapter poses questions about personhood, and explores them through some philosophy, extended examples from machine learning and artificial intelligence, and religious reflection. Parfit's Reasons and Persons and the use of game theory are explored. The question of human free will is framed as centring on the issue of responsibility. Recent advances in AI, especially learning systems such as AlphaGo, are presented. These do not settle any fundamental questions about the nature of consciousness, but they do encourage us to ask what our attitude to autonomous machines should be. The discussion then turns to human evolutionary development, and to what makes humans distinctive, touching on scientific, philosophical, and theological issues. Some aspects of philosophy and theology can be productively approached through storytelling; this fruitful method is seen at work in the Bible. To be responsible lies at the heart of what it means to be human.


BMC Medicine ◽  
2019 ◽  
Vol 17 (1) ◽  
Author(s):  
Christopher J. Kelly ◽  
Alan Karthikesalingam ◽  
Mustafa Suleyman ◽  
Greg Corrado ◽  
Dominic King

Abstract
Background: Artificial intelligence (AI) research in healthcare is accelerating rapidly, with potential applications being demonstrated across various domains of medicine. However, there are currently limited examples of such techniques being successfully deployed into clinical practice. This article explores the main challenges and limitations of AI in healthcare, and considers the steps required to translate these potentially transformative technologies from research to clinical practice.
Main body: Key challenges for the translation of AI systems in healthcare include those intrinsic to the science of machine learning, logistical difficulties in implementation, and consideration of the barriers to adoption as well as of the necessary sociocultural or pathway changes. Robust peer-reviewed clinical evaluation as part of randomised controlled trials should be viewed as the gold standard for evidence generation, but conducting these in practice may not always be appropriate or feasible. Performance metrics should aim to capture real clinical applicability and be understandable to intended users. Regulation that balances the pace of innovation with the potential for harm, alongside thoughtful post-market surveillance, is required to ensure that patients are not exposed to dangerous interventions nor deprived of access to beneficial innovations. Mechanisms to enable direct comparisons of AI systems must be developed, including the use of independent, local, and representative test sets. Developers of AI algorithms must be vigilant to potential dangers, including dataset shift, accidental fitting of confounders, unintended discriminatory bias, the challenges of generalisation to new populations, and the unintended negative consequences of new algorithms on health outcomes.
Conclusion: The safe and timely translation of AI research into clinically validated and appropriately regulated systems that can benefit everyone is challenging. Robust clinical evaluation, using metrics that are intuitive to clinicians and ideally go beyond measures of technical accuracy to include quality of care and patient outcomes, is essential. Further work is required (1) to identify themes of algorithmic bias and unfairness while developing mitigations to address these, (2) to reduce brittleness and improve generalisability, and (3) to develop methods for improved interpretability of machine learning predictions. If these goals can be achieved, the benefits for patients are likely to be transformational.

