Applications of supervised deep learning for seismic interpretation and inversion

2019 ◽  
Vol 38 (7) ◽  
pp. 526-533 ◽  
Author(s):  
York Zheng ◽  
Qie Zhang ◽  
Anar Yusifov ◽  
Yunzhi Shi

Recent advances in machine learning and its applications in various sectors are generating a new wave of experiments and solutions to solve geophysical problems in the oil and gas industry. We present two separate case studies in which supervised deep learning is used as an alternative to conventional techniques. The first case is an example of image classification applied to seismic interpretation. A convolutional neural network (CNN) is trained to pick faults automatically in 3D seismic volumes. Every sample in the input seismic image is classified as either a nonfault or a fault with a certain dip and azimuth, which are predicted simultaneously. The second case is an example of elastic model building — casting prestack seismic inversion as a machine learning regression problem. A CNN is trained to predict 1D velocity and density profiles from input seismic records. In both case studies, we demonstrate that CNN models trained on synthetic data can make efficient and effective predictions on field data. While results from the first example show that high-quality fault picks can be predicted from migrated seismic images, the prestack seismic inversion case is more challenging: constraining the subsurface geologic variations and carefully preconditioning the input seismic data are important for obtaining reasonably reliable results. This observation matches our experience with conventional workflows, which likewise benefit from the improved signal-to-noise ratio after migration and stacking, and it reflects the inherent subsurface ambiguity that makes unique parameter inversion difficult.
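
As a concrete illustration of the first case, here is a minimal sketch of a per-sample fault classifier in PyTorch. The 3D patch size, the dip/azimuth binning, and the layer widths are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of per-sample fault classification with a 3D CNN (PyTorch).
# Patch size, orientation binning, and layer widths are assumed for illustration.
import torch
import torch.nn as nn

N_DIP, N_AZI = 5, 8                      # assumed discretization of fault orientation
N_CLASSES = 1 + N_DIP * N_AZI            # class 0 = nonfault, rest = (dip, azimuth) bins

class FaultPatchNet(nn.Module):
    """Classifies the center sample of a small seismic cube patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(64, N_CLASSES)

    def forward(self, x):                # x: (batch, 1, 16, 16, 16) seismic patches
        f = self.features(x).flatten(1)
        return self.head(f)              # logits over nonfault + orientation bins

model = FaultPatchNet()
loss_fn = nn.CrossEntropyLoss()          # trained on synthetic patches with known labels
logits = model(torch.randn(4, 1, 16, 16, 16))
print(logits.shape)                      # torch.Size([4, 41])
```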

2020 ◽  
Vol 39 (10) ◽  
pp. 734-741
Author(s):  
Sébastien Guillon ◽  
Frédéric Joncour ◽  
Pierre-Emmanuel Barrallon ◽  
Laurent Castanié

We propose new metrics to measure the performance of a deep learning model applied to seismic interpretation tasks such as fault and horizon extraction. Faults and horizons are thin geologic boundaries (1 pixel thick on the image) for which a small prediction error could lead to inappropriately large variations in common metrics (precision, recall, and intersection over union). Through two examples, we show how classical metrics could fail to indicate the true quality of fault or horizon extraction. Measuring the accuracy of reconstruction of thin objects or boundaries requires introducing a tolerance distance between ground truth and prediction images to manage the uncertainties inherent in their delineation. We therefore adapt our metrics by introducing a tolerance function and illustrate their ability to manage uncertainties in seismic interpretation. We compare classical and new metrics through different examples and demonstrate the robustness of our metrics. Finally, we show on a 3D West African data set how our metrics are used to tune an optimal deep learning model.
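
The core idea of a tolerance distance is easy to sketch. The following is a minimal Python illustration of tolerance-aware precision and recall for 1-pixel-thick objects, using a hard distance cutoff as the tolerance function; the authors' actual tolerance function may differ.

```python
# Hedged sketch of tolerance-aware precision/recall for thin objects.
import numpy as np
from scipy.ndimage import distance_transform_edt

def tolerant_precision_recall(truth, pred, tol=3.0):
    """truth, pred: boolean masks of thin objects (e.g. fault pixels)."""
    # Distance from every pixel to the nearest truth/pred pixel.
    dist_to_truth = distance_transform_edt(~truth)
    dist_to_pred = distance_transform_edt(~pred)
    # A predicted pixel counts as correct if a truth pixel lies within `tol`.
    precision = (dist_to_truth[pred] <= tol).mean() if pred.any() else 1.0
    # A truth pixel counts as recovered if a predicted pixel lies within `tol`.
    recall = (dist_to_pred[truth] <= tol).mean() if truth.any() else 1.0
    return precision, recall

# A 1-pixel shift of a thin line: classical pixelwise precision/recall drop
# to 0, while the tolerant versions stay at 1.
truth = np.zeros((32, 32), bool); truth[16, :] = True
pred = np.zeros((32, 32), bool); pred[17, :] = True
print(tolerant_precision_recall(truth, pred, tol=3.0))  # (1.0, 1.0)
```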


2020 ◽  
Vol 12 (6) ◽  
pp. 2544
Author(s):  
Alice Consilvio ◽  
José Solís-Hernández ◽  
Noemi Jiménez-Redondo ◽  
Paolo Sanetti ◽  
Federico Papa ◽  
...  

The objective of this study is to show the applicability of machine learning and simulative approaches to the development of decision support systems for railway asset management. These techniques are applied within the generic framework developed and tested within the In2Smart project. The framework is composed of different building blocks that cover the complete process from data collection and knowledge extraction to real-world decisions. The application of the framework to two different real-world case studies is described: the first deals with strategic earthworks asset management, while the second considers the tactical and operational planning of track circuits' maintenance. Although different methodologies are applied and different planning levels are considered, both case studies follow the same general framework, demonstrating the generality of the approach. The potential of combining machine learning techniques with simulative approaches to replicate real processes is shown by evaluating the key performance indicators employed within the considered asset management process. Finally, the results of the validation are reported, as well as the developed human–machine interfaces for output visualization.
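
The general pattern the framework describes, a learned risk model feeding a simulation whose outputs are summarized as key performance indicators, can be sketched as follows. This is an illustrative toy, not the In2Smart implementation; all feature names, thresholds, and failure rates are hypothetical.

```python
# Toy sketch: ML risk model + Monte Carlo maintenance simulation -> KPIs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Step 1: train a risk model on (condition features -> failed within horizon).
X = rng.normal(size=(500, 4))                     # e.g. track-circuit condition measurements
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 1.0
risk_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Step 2: simulate a policy that intervenes above a risk threshold,
# then evaluate KPIs over repeated simulation runs.
def simulate_kpis(assets, threshold=0.5, n_runs=200):
    risk = risk_model.predict_proba(assets)[:, 1]
    maintained = risk >= threshold
    failures = np.array([
        (rng.random(len(assets)) < np.where(maintained, 0.05, risk)).sum()
        for _ in range(n_runs)
    ])
    return {"interventions": int(maintained.sum()),
            "expected_failures": float(failures.mean())}

print(simulate_kpis(rng.normal(size=(100, 4))))
```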


2019 ◽  
Author(s):  
Alexander Schaaf ◽  
Clare E. Bond

Abstract. In recent years uncertainty has been widely recognized in geosciences, leading to an increased need for its quantification. Predicting the subsurface is an especially uncertain effort, as our information either comes from spatially highly limited direct (1-D boreholes) or indirect 2-D and 3-D sources (e.g. seismic). And while uncertainty in seismic interpretation has been explored in 2-D, we currently lack both qualitative and quantitative understanding of how interpretational uncertainties of 3-D datasets are distributed. In this work we analyze 78 seismic interpretations done by final year undergraduate (BSc) students of a 3-D seismic dataset from the Gullfaks field located in the northern North Sea. The students used Petrel to interpret multiple (interlinked) faults and to pick the Base Cretaceous Unconformity and Top Ness horizon (part of the Mid-Jurassic Brent Group). We have developed open-source Python tools to explore and visualize the spatial uncertainty of the students' fault stick interpretations, the subsequent variation in fault plane orientation and the uncertainty in fault network topology. The Top Ness horizon picks were used to analyze fault offset variations across the dataset and interpretations, with implications for fault throw. We investigate how this interpretational uncertainty interlinks with seismic data quality and the possible use of seismic data quality attributes as a proxy for interpretational uncertainty. Our work provides a first quantification of fault and horizon uncertainties in 3-D seismic interpretation, providing valuable insights into the influence of seismic image quality on 3-D interpretation, with implications for deterministic and stochastic geomodelling and machine learning.


2021 ◽  
Author(s):  
Hui Jiang

This lucid, accessible introduction to supervised machine learning presents core concepts in a focused and logical way that is easy for beginners to follow. The author assumes basic calculus, linear algebra, probability and statistics but no prior exposure to machine learning. Coverage includes widely used traditional methods such as SVMs, boosted trees, HMMs, and LDAs, plus popular deep learning methods such as convolutional neural nets, attention, transformers, and GANs. Organized in a coherent presentation framework that emphasizes the big picture, the text introduces each method clearly and concisely “from scratch” based on the fundamentals. All methods and algorithms are described in a clean and consistent style, with a minimum of unnecessary detail. Numerous case studies and concrete examples demonstrate how the methods can be applied in a variety of contexts.


Solid Earth ◽  
2019 ◽  
Vol 10 (4) ◽  
pp. 1049-1061 ◽  
Author(s):  
Alexander Schaaf ◽  
Clare E. Bond

Abstract. In recent years, uncertainty has been widely recognized in geosciences, leading to an increased need for its quantification. Predicting the subsurface is an especially uncertain effort, as our information either comes from spatially highly limited direct (1-D boreholes) or indirect 2-D and 3-D sources (e.g., seismic). And while uncertainty in seismic interpretation has been explored in 2-D, we currently lack both qualitative and quantitative understanding of how interpretational uncertainties of 3-D datasets are distributed. In this work, we analyze 78 seismic interpretations done by final-year undergraduate (BSc) students of a 3-D seismic dataset from the Gullfaks field located in the northern North Sea. The students used Petrel to interpret multiple (interlinked) faults and to pick the Base Cretaceous Unconformity and Top Ness horizon (part of the Middle Jurassic Brent Group). We have developed open-source Python tools to explore and visualize the spatial uncertainty of the students' fault stick interpretations, the subsequent variation in fault plane orientation and the uncertainty in fault network topology. The Top Ness horizon picks were used to analyze fault offset variations across the dataset and interpretations, with implications for fault throw. We investigate how this interpretational uncertainty interlinks with seismic data quality and the possible use of seismic data quality attributes as a proxy for interpretational uncertainty. Our work provides a first quantification of fault and horizon uncertainties in 3-D seismic interpretation, providing valuable insights into the influence of seismic image quality on 3-D interpretation, with implications for deterministic and stochastic geomodeling and machine learning.
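
One building block of such an analysis, recovering a fault plane orientation from interpreted fault-stick points so that the spread of dip and azimuth can be compared across interpreters, can be sketched as follows. This illustrates the idea only and is not the authors' released tooling; the coordinate convention (x east, y north, z positive down) is an assumption.

```python
# Sketch: least-squares plane fit to fault-stick vertices -> dip and dip azimuth.
import numpy as np

def fault_plane_orientation(points):
    """points: (n, 3) array of x (east), y (north), z (depth, positive down)."""
    centered = points - points.mean(axis=0)
    # The plane normal is the singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] < 0:                                   # orient the normal consistently
        normal = -normal
    dip = np.degrees(np.arccos(abs(normal[2])))         # angle from horizontal
    # Dip direction: horizontal direction of steepest depth increase,
    # measured clockwise from north.
    dip_azimuth = np.degrees(np.arctan2(-normal[0], -normal[1])) % 360.0
    return dip, dip_azimuth

# Two interpreters picking the same fault slightly differently:
sticks_a = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 1.0], [1, 1, 1.0]])
sticks_b = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0.9], [1, 1, 1.1]])
print(fault_plane_orientation(sticks_a))   # ~ (45.0, 90.0)
print(fault_plane_orientation(sticks_b))
```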


2020 ◽  
pp. 1-38
Author(s):  
Amandeep Kaur ◽ 
Anjum Mohammad Aslam

In this chapter we discuss the core concepts of Artificial Intelligence. We define the term Artificial Intelligence and its interconnected terms, such as machine learning, deep learning, and neural networks, and describe these concepts from the perspective of their use in business. We further analyze various applications and case studies that can be realized using Artificial Intelligence and its subfields. Numerous Artificial Intelligence applications are already being utilized in business, and more are expected in the future, as machines augment the Artificial Intelligence, natural language processing, and machine learning abilities of humans in various areas.


2021 ◽  
Vol 11 (15) ◽  
pp. 6912
Author(s):  
Jiaxin Tang ◽  
Yang Chen ◽  
Guozhen She ◽  
Yang Xu ◽  
Kewei Sha ◽  
...  

Google Scholar has been a widely used platform for academic performance evaluation and citation analysis. Mis-configured author profiles can seriously damage the reliability of the data and thus affect the accuracy of analysis, so it is important to detect them. Dealing with this issue is challenging because the scale of the dataset is large and manual annotation is time-consuming and relatively subjective. In this paper, we first collect a dataset of Google Scholar's author profiles in the field of computer science and compare the mis-configured author profiles with the reliable ones. Then, we propose an integrated model that utilizes machine learning and node embedding to automatically detect mis-configured author profiles. Additionally, we conduct two application case studies based on the data of Google Scholar, i.e., outstanding scholar searching and university ranking, to demonstrate how the improved dataset, after filtering out the mis-configured author profiles, changes the results. The two case studies validate the importance and meaningfulness of detecting mis-configured author profiles.
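
The general recipe, combining per-profile features with a node embedding of the coauthorship graph and training a classifier to flag mis-configured profiles, can be sketched as follows. A simple spectral embedding stands in for the paper's node-embedding component (the authors' exact model may differ), and all features and labels here are toy stand-ins.

```python
# Sketch: profile features + graph node embedding -> mis-configuration classifier.
import networkx as nx
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

def embed_graph(graph, dim=8):
    """Low-rank embedding of the adjacency matrix (one row per author node)."""
    nodes = list(graph.nodes)
    adj = nx.to_scipy_sparse_array(graph, nodelist=nodes, format="csr")
    emb = TruncatedSVD(n_components=dim, random_state=0).fit_transform(adj)
    return dict(zip(nodes, emb))

# Toy coauthorship graph and per-profile features (e.g. citations, paper count).
rng = np.random.default_rng(0)
g = nx.erdos_renyi_graph(200, 0.05, seed=0)
emb = embed_graph(g)
profile_feats = rng.normal(size=(200, 3))
X = np.hstack([profile_feats, np.array([emb[n] for n in g.nodes])])
y = rng.integers(0, 2, size=200)            # 1 = labeled mis-configured (toy labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])       # mis-configuration scores
```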


2019 ◽  
Vol 7 (3) ◽  
pp. SF15-SF26
Author(s):  
Francesco Picetti ◽  
Vincenzo Lipari ◽  
Paolo Bestagini ◽  
Stefano Tubaro

The advent of new deep-learning and machine-learning paradigms enables the development of new solutions to tackle the challenges posed by new geophysical imaging applications. For this reason, convolutional neural networks (CNNs) have been deeply investigated as novel tools for seismic image processing. In particular, we have studied a specific CNN architecture, the generative adversarial network (GAN), through which we process seismic migrated images to obtain different kinds of output depending on the application target defined during training. We have developed two proof-of-concept applications. In the first application, a GAN is trained to turn a low-quality migrated image into a high-quality one, as if the acquisition geometry were much denser than that of the input. In the second example, the GAN is trained to turn a migrated image into the corresponding deconvolved reflectivity image. The effectiveness of the investigated approach is validated by means of tests performed on synthetic examples.
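
The first application follows the familiar image-to-image GAN pattern, which a short PyTorch sketch can make concrete: a generator maps a low-quality migrated image to an enhanced one while a discriminator judges realism, with an L1 term keeping the output close to the target. The architectures and loss weights below are illustrative stand-ins, not the networks the authors used.

```python
# Minimal image-to-image GAN training step (illustrative architectures).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU())

generator = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                          nn.Conv2d(32, 1, 3, padding=1))
discriminator = nn.Sequential(conv_block(1, 32), nn.Conv2d(32, 1, 3, padding=1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(low_q, high_q, l1_weight=100.0):
    fake = generator(low_q)
    # Discriminator step: real enhanced images vs. generated ones.
    d_real = discriminator(high_q)
    d_fake = discriminator(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool the discriminator while staying close to the target.
    d_out = discriminator(fake)
    g_loss = bce(d_out, torch.ones_like(d_out)) + l1_weight * l1(fake, high_q)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return float(d_loss), float(g_loss)

# One step on random stand-in patches (batch, 1, 64, 64).
print(train_step(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)))
```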


Author(s):  
Thorben Moos ◽  
Felix Wegener ◽  
Amir Moradi

In recent years, deep learning has become an attractive ingredient to side-channel analysis (SCA) due to its potential to improve the success probability or enhance the performance of certain frequently executed tasks. One task that is commonly assisted by machine learning techniques is the profiling of a device's leakage behavior in order to carry out a template attack. At CHES 2019, deep learning has also been applied to non-profiled scenarios for the first time, extending its reach within SCA beyond template attacks. The proposed method, called DDLA, has some tempting advantages over traditional SCA due to merits inherited from (convolutional) neural networks. Most notably, it greatly reduces the need for pre-processing steps when the SCA traces are misaligned or when the leakage is of a multivariate nature. However, similar to traditional attack scenarios, the success of this approach highly depends on the correct choice of a leakage model and the intermediate value to target. In this work we explore, for the first time in the literature, whether deep learning can similarly be used as an instrument to advance another crucial (non-profiled) discipline of SCA which is inherently independent of leakage models and targeted intermediates, namely leakage assessment. In fact, given the simple classification-based nature of common leakage assessment techniques, in particular distinguishing two groups (fixed vs. random or fixed vs. fixed), it comes as a surprise that machine learning has not been brought into this context yet. Our contribution is the development of the first full leakage assessment methodology based on deep learning. It gives the evaluator the freedom not to worry about the location, alignment and statistical order of the leakages, and easily covers multivariate and horizontal patterns as well. We test our approach against a number of case studies based on FPGA, ASIC and μC implementations of the PRESENT block cipher, equipped with state-of-the-art SCA countermeasures. Our results clearly show that the proposed methodology and network structures are robust across all case studies and outperform the classical detection approaches (t-test and χ²-test) in all considered scenarios.
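
The classification-based core of such a leakage assessment is easy to sketch: train a small network to distinguish the fixed from the random trace group, and flag leakage when held-out accuracy deviates significantly from the 50 % chance level. The following toy example uses synthetic traces with an injected leak; it illustrates the idea, not the paper's exact methodology or network structures.

```python
# Sketch of classification-based (fixed-vs-random) leakage assessment.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 500

# Stand-in traces: the "fixed" group carries a tiny data-dependent offset
# at a few sample points, mimicking leakage buried in noise.
labels = rng.integers(0, 2, size=n_traces)            # 0 = random, 1 = fixed group
traces = rng.normal(size=(n_traces, n_samples))
traces[labels == 1, 100:103] += 0.3                   # hypothetical leakage location

X_tr, X_te, y_tr, y_te = train_test_split(traces, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# Under the null hypothesis (no leakage), held-out accuracy stays near 0.5;
# a significant deviation flags detectable leakage.
print(f"validation accuracy = {acc:.3f} (chance = 0.5)")
```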


2021 ◽  
Vol 9 (1) ◽  
pp. 49
Author(s):  
Darwan Darwan ◽  
Hindayati Mustafidah

Detection of heart abnormalities from the electrocardiogram (ECG) is currently a very active research area. Many methodological approaches have been applied to the ECG, one of which is the wavelet transform. This article reviews trends in ECG research using wavelet approaches over the last ten years. We reviewed journal articles with the title keyword "ecg wavelet" published from 2011 to 2020 and classified them by the most frequently discussed topics, including datasets, case studies, pre-processing, feature extraction, and classification/identification methods. The number of ECG-related articles has kept growing in recent years, with new techniques and methods still emerging. This review is of particular interest because only a few researchers have focused on surveying this area. Researchers use a range of approaches, both machine learning and deep learning, to obtain the best results. This article explains the most widely used algorithms in wavelet-based ECG research. Finally, we point out critical aspects for future ECG research: the choice of datasets, and feature extraction and classification methods evaluated by their accuracy.
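
The pipeline most of the reviewed work shares, wavelet decomposition as feature extraction followed by a standard classifier, can be sketched briefly. The wavelet family, decomposition level, and classifier below are typical choices shown for illustration, not recommendations from the article, and the beats are synthetic stand-ins.

```python
# Sketch: DWT-based feature extraction + classifier for ECG beats.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_features(beat, wavelet="db4", level=4):
    """Summary statistics of DWT coefficients at each decomposition level."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    return np.array([f(c) for c in coeffs for f in (np.mean, np.std)])

# Toy beats: class 1 has a widened "QRS-like" bump (synthetic stand-in data).
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 256)
beats, labels = [], []
for _ in range(200):
    label = int(rng.integers(0, 2))
    width = 0.05 + 0.05 * label
    beats.append(np.exp(-t**2 / width) + 0.1 * rng.normal(size=t.size))
    labels.append(label)

X = np.array([wavelet_features(b) for b in beats])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.score(X, labels))   # training accuracy on the toy data
```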

