Quantification of uncertainty in 3-D seismic interpretation: implications for deterministic and stochastic geomodelling and machine learning


Solid Earth ◽  
2019 ◽  
Vol 10 (4) ◽  
pp. 1049-1061 ◽  
Author(s):  
Alexander Schaaf ◽  
Clare E. Bond

Abstract. In recent years, uncertainty has been widely recognized in geosciences, leading to an increased need for its quantification. Predicting the subsurface is an especially uncertain effort, as our information either comes from spatially highly limited direct (1-D boreholes) or indirect 2-D and 3-D sources (e.g., seismic). And while uncertainty in seismic interpretation has been explored in 2-D, we currently lack both qualitative and quantitative understanding of how interpretational uncertainties of 3-D datasets are distributed. In this work, we analyze 78 seismic interpretations done by final-year undergraduate (BSc) students of a 3-D seismic dataset from the Gullfaks field located in the northern North Sea. The students used Petrel to interpret multiple (interlinked) faults and to pick the Base Cretaceous Unconformity and Top Ness horizon (part of the Middle Jurassic Brent Group). We have developed open-source Python tools to explore and visualize the spatial uncertainty of the students' fault stick interpretations, the subsequent variation in fault plane orientation and the uncertainty in fault network topology. The Top Ness horizon picks were used to analyze fault offset variations across the dataset and interpretations, with implications for fault throw. We investigate how this interpretational uncertainty interlinks with seismic data quality and the possible use of seismic data quality attributes as a proxy for interpretational uncertainty. Our work provides a first quantification of fault and horizon uncertainties in 3-D seismic interpretation, providing valuable insights into the influence of seismic image quality on 3-D interpretation, with implications for deterministic and stochastic geomodeling and machine learning.
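The spatial-uncertainty analysis described above can be sketched in plain Python. This is a hypothetical illustration, not the authors' published tools: it computes the per-depth lateral spread of fault stick picks across interpreters, and the data structure and values are made up for the example.

```python
import statistics

def fault_stick_spread(picks_by_interpreter):
    """Per-depth lateral spread (population std dev) of fault stick picks.

    picks_by_interpreter: list of dicts, one per interpreter, mapping
    depth (or TWT) -> lateral x-position of that interpreter's pick.
    Returns {depth: spread} for depths picked by more than one interpreter.
    """
    by_depth = {}
    for picks in picks_by_interpreter:
        for depth, x in picks.items():
            by_depth.setdefault(depth, []).append(x)
    return {d: statistics.pstdev(xs)
            for d, xs in by_depth.items() if len(xs) > 1}

# Three hypothetical interpreters picking the same fault at two depths
picks = [
    {1800: 100.0, 1900: 110.0},
    {1800: 104.0, 1900: 118.0},
    {1800: 98.0,  1900: 106.0},
]
spread = fault_stick_spread(picks)
```

A larger spread at a given depth flags a zone where interpreters disagree, which is the kind of signal the study compares against seismic data quality.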


2020 ◽  
Author(s):  
Alexander Schaaf ◽  
Miguel de la Varga ◽  
Clare E. Bond ◽  
Florian Wellmann

Seismic data plays a key role in developing our understanding of the subsurface by providing 2-D and 3-D indirect imaging. But the resulting data needs to be interpreted by specialists using time-intensive, error-prone and subjective manual labour. While the automation of data classification using Machine Learning algorithms is starting to show promising results in areas of good data quality, the classification of noisy and ambiguous data will continue to require geological reasoning for the foreseeable future. In Schaaf & Bond (2019) we provided a first quantification of the uncertainties involved in the structural interpretation of a 3-D seismic volume by analysing 78 student interpretations of the Gullfaks field in the northern North Sea. Our work also concretized the question of the degree to which the seismic data itself could provide useful information towards a prediction of interpretation uncertainty.

We now look at the same dataset in an effort to answer the question of whether we can adequately reproduce the observed interpretation uncertainties by approximating them as aleatoric uncertainties in a stochastic geomodeling framework. For this we make use of GemPy, the Python-based open-source 3-D implicit structural geomodeling software, to leverage open-source probabilistic programming frameworks and to allow for scientific reproducibility of our results. We identify potential shortcomings of collapsing interpretation uncertainties into aleatoric uncertainties and present ideas on how to improve stochastic parametrization based on the seismic data at hand.

Schaaf, A., & Bond, C. E. (2019). Quantification of uncertainty in 3-D seismic interpretation: Implications for deterministic and stochastic geomodeling and machine learning. Solid Earth, 10(4), 1049–1061. https://doi.org/10.5194/se-10-1049-2019
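The aleatoric approximation at the heart of this abstract can be sketched as a generic Monte Carlo loop: perturb interpreted input points with Gaussian noise and summarize the ensemble. This is a hedged, stdlib-only illustration of the idea, not the GemPy API or the authors' implementation; the horizon depths and the 5 ms pick uncertainty are invented.

```python
import random
import statistics

def stochastic_realizations(depths, sigma, n=500, seed=0):
    """Monte Carlo sketch: treat interpretation uncertainty as aleatoric
    by perturbing each interpreted horizon depth with Gaussian noise,
    then summarize the ensemble of realizations per input point."""
    rng = random.Random(seed)
    draws = [[z + rng.gauss(0.0, sigma) for z in depths] for _ in range(n)]
    cols = list(zip(*draws))  # one column of samples per input point
    return ([statistics.mean(c) for c in cols],
            [statistics.pstdev(c) for c in cols])

# Hypothetical horizon picks (ms TWT) with an assumed 5 ms pick uncertainty
means, stds = stochastic_realizations([1850.0, 1860.0, 1875.0], sigma=5.0)
```

The shortcoming the abstract alludes to is visible even here: a single global sigma cannot reproduce interpretation uncertainty that varies with data quality or fault proximity.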


2019 ◽  
Vol 38 (7) ◽  
pp. 526-533 ◽  
Author(s):  
York Zheng ◽  
Qie Zhang ◽  
Anar Yusifov ◽  
Yunzhi Shi

Recent advances in machine learning and its applications in various sectors are generating a new wave of experiments and solutions to solve geophysical problems in the oil and gas industry. We present two separate case studies in which supervised deep learning is used as an alternative to conventional techniques. The first case is an example of image classification applied to seismic interpretation. A convolutional neural network (CNN) is trained to pick faults automatically in 3D seismic volumes. Every sample in the input seismic image is classified as either a nonfault or a fault with a certain dip and azimuth, which are predicted simultaneously. The second case is an example of elastic model building — casting prestack seismic inversion as a machine learning regression problem. A CNN is trained to predict 1D velocity and density profiles from input seismic records. In both case studies, we demonstrate that CNN models trained on synthetic data can be used to make efficient and effective predictions on field data. While results from the first example show that high-quality fault picks can be predicted from migrated seismic images, we find the prestack inversion case more challenging: constraining the subsurface geologic variations and carefully preconditioning the input seismic data are important for obtaining reasonably reliable results. This observation matches our experience with conventional workflows and methods, which likewise benefit from the improved signal-to-noise ratio after migration and stacking, and it reflects the inherent subsurface ambiguity that makes a unique parameter inversion difficult.
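The dense per-sample labeling that the fault-picking CNN performs can be illustrated with a toy convolution: a CNN layer is essentially a learned version of the filter below, followed by a nonlinearity. This is a hypothetical sketch with a hand-picked gradient kernel and a made-up section, not the trained network from the study.

```python
def conv2d_valid(img, kernel):
    """Plain 2-D 'valid' sliding-window filter (no kernel flip, as in
    the machine-learning convention for convolution)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def label_faults(img, kernel, threshold):
    """Per-sample fault / non-fault labels from one filter response."""
    resp = conv2d_valid(img, kernel)
    return [[1 if abs(v) > threshold else 0 for v in row] for row in resp]

# Toy seismic section with a vertical amplitude break between columns 1 and 2
section = [
    [1, 1, 5, 5],
    [1, 1, 5, 5],
    [1, 1, 5, 5],
]
kernel = [[-1, 1], [-1, 1]]  # horizontal-gradient filter
labels = label_faults(section, kernel, threshold=2)
```

A real CNN learns many such kernels from labeled synthetics and additionally regresses dip and azimuth per sample; the sketch only shows why every sample receives its own class.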


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. IM13-IM20 ◽  
Author(s):  
Xinming Wu ◽  
Guillaume Caumon

Well-seismic ties allow rock properties measured at well locations to be compared with seismic data and are therefore useful for seismic interpretation. Numerous methods have been proposed to compute well-seismic ties by correlating real seismograms with synthetic seismograms computed from velocity and density logs. However, most methods tie multiple wells to seismic data one by one; hence, they do not guarantee lateral consistency among multiple well ties. We therefore propose a method to simultaneously tie multiple wells to seismic data. In this method, we first flatten synthetic and corresponding real seismograms so that all seismic reflectors are horizontally aligned. By doing this, we turn multiple well-seismic tying into a 1D correlation problem. We then compute only vertically variant but laterally constant shifts to correlate these horizontally aligned (flattened) synthetic and real seismograms. This two-step correlation method maintains lateral consistency among multiple well ties by computing a laterally and vertically optimized correlation of all synthetic and real seismograms. We applied our method to a 3D real seismic image with multiple wells and obtained laterally consistent well-seismic ties.
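The second, laterally constant correlation step of the method above can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: it assumes the synthetic and real traces have already been flattened, uses a simple dot-product correlation, and the spike traces are made up.

```python
def best_lateral_constant_shift(synthetics, reals, max_lag):
    """After flattening, find ONE vertical shift (in samples) that best
    aligns ALL synthetic/real trace pairs simultaneously, which is what
    enforces lateral consistency among the multiple well ties."""
    def corr_at(lag):
        total = 0.0
        for syn, real in zip(synthetics, reals):
            for i, v in enumerate(syn):
                j = i + lag
                if 0 <= j < len(real):
                    total += v * real[j]
        return total
    return max(range(-max_lag, max_lag + 1), key=corr_at)

# Two hypothetical flattened trace pairs; both reals lag the synthetics
# by the same 2 samples, so one shared shift ties both wells at once.
synthetics = [[0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0]]
reals      = [[0, 0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1, 0]]
best = best_lateral_constant_shift(synthetics, reals, max_lag=3)
```

Because a single lag is scored against every pair at once, no well can be tied at the expense of its neighbours, which is the point of the two-step design.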


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6347
Author(s):  
Alimed Celecia ◽  
Karla Figueiredo ◽  
Carlos Rodriguez ◽  
Marley Vellasco ◽  
Edwin Maldonado ◽  
...  

Seismic interpretation is a fundamental process for hydrocarbon exploration. This activity comprises identifying geological information through the processing and analysis of seismic data represented by different attributes. The interpretation process is limited by the high data volume, its inherent complexity, the time it consumes, and the uncertainties introduced by the experts' work. Unsupervised machine learning models, by discovering underlying patterns in the data, can represent a novel approach to provide an accurate interpretation without any reference or label, eliminating human bias. Therefore, in this work, we propose exploring multiple methodologies based on unsupervised learning algorithms to interpret seismic data. Specifically, two strategies considering classical clustering algorithms and image segmentation methods, combined with feature selection, were evaluated to select the best possible approach. Additionally, the resulting groups of the seismic data were associated with groups obtained from well logs of the same area, producing an interpretation with aggregated lithologic information. The resulting seismic groups correctly represented the main seismic facies and correlated adequately with the groups obtained from the well log data.
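The clustering strategy the abstract describes can be sketched with a minimal k-means over per-sample attribute vectors. This is a hedged, stdlib-only illustration (a real workflow would use a tuned library implementation); the deterministic initialization and the two-attribute samples are assumptions for the example.

```python
def kmeans(vectors, k, iters=20):
    """Minimal k-means over attribute vectors; returns a cluster label
    per sample. Deterministic init: evenly spaced samples as centers."""
    step = max(1, len(vectors) // k)
    centers = [list(vectors[i * step]) for i in range(k)]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # assign each sample to its nearest center (squared distance)
        labels = [min(range(k),
                      key=lambda c: sum((vec[d] - centers[c][d]) ** 2
                                        for d in range(len(vec))))
                  for vec in vectors]
        # recompute each center as the mean of its members
        for c in range(k):
            members = [vec for vec, lab in zip(vectors, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return labels

# Hypothetical 2-attribute samples (e.g. amplitude, coherence) from two facies
samples = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
           [2.0, 2.1], [2.1, 1.9], [1.9, 2.0]]
labels = kmeans(samples, k=2)
```

The resulting labels play the role of the unsupervised seismic groups, which the paper then associates with groups clustered from well logs.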


2021 ◽  
Vol 5 (3) ◽  
pp. 1-30
Author(s):  
Gonçalo Jesus ◽  
António Casimiro ◽  
Anabela Oliveira

Sensor platforms used in environmental monitoring applications are often subject to harsh environmental conditions while monitoring complex phenomena. Therefore, designing dependable monitoring systems is challenging given the external disturbances affecting sensor measurements. Even the apparently simple task of outlier detection in sensor data becomes a hard problem, amplified by the difficulty in distinguishing true data errors due to sensor faults from deviations due to natural phenomena, which look like data errors. Existing solutions for runtime outlier detection typically assume that the physical processes can be accurately modeled, or that outliers consist of large deviations that are easily detected and filtered by appropriate thresholds. Other solutions assume that it is possible to deploy multiple sensors providing redundant data to support voting-based techniques. In this article, we propose a new methodology for dependable runtime detection of outliers in environmental monitoring systems, aiming to increase data quality by treating the detected outliers. We propose the use of machine learning techniques to model each sensor's behavior, exploiting the existence of correlated data provided by other related sensors. Using these models, along with knowledge of processed past measurements, it is possible to obtain accurate estimations of the observed environment parameters and build failure detectors that use these estimations. When a failure is detected, these estimations also allow one to correct the erroneous measurements and hence improve the overall data quality. Our methodology not only allows one to distinguish truly abnormal measurements from deviations due to complex natural phenomena, but also allows the quantification of each measurement's quality, which is relevant from a dependability perspective.
We apply the methodology to real datasets from a complex aquatic monitoring system, measuring temperature and salinity parameters, through which we illustrate the process of building the machine learning prediction models using a technique based on Artificial Neural Networks, denoted ANNODE (ANN Outlier Detection). From this application, we also observe the effectiveness of our ANNODE approach for accurate outlier detection in harsh environments. We then validate these positive results by comparing ANNODE with state-of-the-art solutions for outlier detection. The results show that ANNODE improves on existing solutions in outlier detection accuracy.
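The detect-and-correct logic behind ANNODE can be sketched without a neural network: predict one sensor from a correlated sensor, flag large residuals as failures, and substitute the model estimate. This is a simplified stand-in (least squares instead of the paper's ANN), and the temperature/salinity readings and the injected fault are invented.

```python
def fit_line(xs, ys):
    """Least-squares line y ~= a*x + b, standing in for the ANN model
    that predicts one sensor from a correlated sensor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def flag_outliers(xs, ys, threshold):
    """Flag ys whose residual from the x-based prediction exceeds the
    threshold; return (flags, corrected), where corrected replaces
    flagged readings with the model estimate (the quality-improvement
    step the methodology describes)."""
    a, b = fit_line(xs, ys)
    flags, corrected = [], []
    for x, y in zip(xs, ys):
        est = a * x + b
        bad = abs(y - est) > threshold
        flags.append(bad)
        corrected.append(est if bad else y)
    return flags, corrected

# Salinity (y) tracks temperature (x) except for one injected faulty reading
temp = [10.0, 11.0, 12.0, 13.0, 14.0]
sal  = [30.0, 31.0, 40.0, 33.0, 34.0]   # 40.0 is the injected fault
flags, corrected = flag_outliers(temp, sal, threshold=3.0)
```

The key property survives the simplification: a reading is judged against what the correlated sensor implies it should be, not against a fixed absolute threshold, so natural joint variation is not flagged.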


2021 ◽  
Vol 11 (2) ◽  
pp. 472
Author(s):  
Hyeongmin Cho ◽  
Sangkyun Lee

Machine learning has been proven to be effective in various application areas, such as object and speech recognition on mobile systems. Since the availability of large training data is critical to machine learning success, many datasets are being disclosed and published online. From a data consumer's or manager's point of view, measuring data quality is an important first step in the learning process. We need to determine which datasets to use, update, and maintain. However, not many practical ways to measure data quality are available today, especially when it comes to large-scale high-dimensional data, such as images and videos. This paper proposes two data quality measures that can compute class separability and in-class variability, the two important aspects of data quality, for a given dataset. Classical data quality measures tend to focus only on class separability; however, we suggest that in-class variability is another important data quality factor. We provide efficient algorithms to compute our quality measures based on random projections and bootstrapping, with statistical benefits on large-scale high-dimensional data. In experiments, we show that our measures are compatible with classical measures on small-scale data and can be computed much more efficiently on large-scale high-dimensional datasets.
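The two quality aspects named above can be illustrated with toy versions of the measures plus a Johnson–Lindenstrauss-style random projection to stand in for the paper's dimensionality-reduction step. The specific formulas below (centroid distance for separability, mean distance to one's own centroid for variability) are illustrative assumptions, not the paper's exact definitions.

```python
import random

def project(vectors, dim_out, seed=0):
    """Random projection: multiply each vector by a Gaussian matrix,
    which approximately preserves distances in lower dimension."""
    rng = random.Random(seed)
    d = len(vectors[0])
    R = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(dim_out)]
    return [[sum(r[j] * v[j] for j in range(d)) for r in R] for v in vectors]

def centroid(vs):
    return [sum(col) / len(vs) for col in zip(*vs)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def separability_and_variability(class_a, class_b):
    """Toy measures: distance between the two class centroids
    (separability) and mean distance of samples to their own
    centroid (in-class variability)."""
    ca, cb = centroid(class_a), centroid(class_b)
    sep = dist(ca, cb)
    var = (sum(dist(v, ca) for v in class_a)
           + sum(dist(v, cb) for v in class_b)) / (len(class_a) + len(class_b))
    return sep, var

# Two well-separated, tight hypothetical classes
class_a = [[0.0, 0.0], [1.0, 0.0]]
class_b = [[10.0, 0.0], [11.0, 0.0]]
sep, var = separability_and_variability(class_a, class_b)
reduced = project(class_a + class_b, dim_out=1)
```

High separability with low in-class variability is the easy regime; the paper's argument is that ignoring the second number hides how redundant or degenerate a dataset is.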

