Seismic attribute selection for machine-learning-based facies analysis

Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. O17-O35
Author(s):  
Jie Qi ◽  
Bo Zhang ◽  
Bin Lyu ◽  
Kurt Marfurt

Interpreters face two main challenges in seismic facies analysis. The first challenge is to define, or “label,” the facies of interest. The second challenge is to select a suite of attributes that can differentiate a target facies from the background reflectivity. Our key objective is to determine which seismic attributes can best differentiate one class of chaotic seismic facies from another using modern machine-learning technology. Although simple 1D histograms provide a list of candidate attributes, they do not provide insight into the optimum number or combination of attributes. To address this limitation, we have conducted an exhaustive search whereby we represent the target and background training facies by high-dimensional Gaussian mixture models (GMMs) for each potential attribute combination. The first step is to choose candidate attributes that may be able to differentiate chaotic mass-transport deposits and salt diapirs from the more conformal, coherent background reflectors. The second step is to draw polygons around the target and background facies to provide the labeled data to be represented by GMMs. Maximizing the distance between all GMM facies pairs provides the optimum number and combination of attributes. We use generative topographic mapping to represent the high-dimensional attribute data by a lower dimensional 2D manifold. Each labeled facies provides a probability density function on the manifold that can be compared to the probability density function of each voxel, providing the likelihood that a given voxel is a member of each of the facies. Our first example maps chaotic seismic facies associated with the development of salt diapirs and minibasins. Our second example successfully delineates karst collapse underlying a shale resource play from north Texas.
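The exhaustive attribute search described above can be sketched in a few lines of Python. The sketch below is a deliberate simplification and not the authors' implementation: each labeled facies is modeled by a single diagonal-covariance Gaussian per candidate attribute subset (rather than a full high-dimensional GMM), and each subset is scored by the minimum pairwise Bhattacharyya distance between facies. All function names and the min-pairwise-separation criterion are illustrative assumptions.

```python
import itertools
import math
import statistics

def fit_gaussian(samples, dims):
    """Mean and diagonal variance of the selected attribute columns."""
    mu = [statistics.fmean(s[d] for s in samples) for d in dims]
    var = [statistics.variance([s[d] for s in samples]) for d in dims]
    return mu, var

def bhattacharyya_diag(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians."""
    d = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        vbar = 0.5 * (v1 + v2)
        d += (m1 - m2) ** 2 / (8.0 * vbar) + 0.5 * math.log(vbar / math.sqrt(v1 * v2))
    return d

def best_attribute_subset(facies, n_attrs, max_size):
    """Exhaustive search over attribute combinations, keeping the subset
    that maximizes the minimum pairwise separation between labeled facies."""
    best_dims, best_sep = None, -1.0
    for k in range(1, max_size + 1):
        for dims in itertools.combinations(range(n_attrs), k):
            models = [fit_gaussian(samples, dims) for samples in facies]
            sep = min(bhattacharyya_diag(*models[i], *models[j])
                      for i, j in itertools.combinations(range(len(models)), 2))
            if sep > best_sep:
                best_dims, best_sep = dims, sep
    return best_dims, best_sep
```

With labeled polygons rasterized into per-facies sample lists, the search returns both the winning attribute combination and its separation score, which answers the paper's question of the optimum number as well as the optimum choice of attributes.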

2018 ◽  
Author(s):  
Mingxu Hu ◽  
Hongkun Yu ◽  
Kai Gu ◽  
Kunpeng Wang ◽  
Siyuan Ren ◽  
...  

Abstract
Electron cryo-microscopy (cryoEM) is now a powerful tool for determining atomic structures of biological macromolecules under nearly natural conditions. The central task of single-particle cryoEM is to estimate a set of parameters for each input particle image in order to reconstruct the three-dimensional structure of the macromolecule. As future large-scale applications demand ever higher resolution and automation, robust high-dimensional parameter estimation algorithms must be developed that tolerate a wide range of image qualities. In this paper, we introduce a particle-filter algorithm for cryoEM: a sequential Monte Carlo method for robust and fast high-dimensional parameter estimation. The cryoEM parameter estimation problem is described by a probability density function over the estimated parameters, which the particle filter represents with a set of random, weighted support points. The statistical properties of the support points not only enable parameter estimation with self-adaptive accuracy but also provide the belief of the estimated parameters, which is essential for the reconstruction phase. The implementation of these features shows strong tolerance to bad particles and enables robust defocus refinement, demonstrated by a remarkable resolution improvement at the atomic level.
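A particle filter of the kind the abstract describes can be illustrated with a minimal scalar example. This is a generic sequential Monte Carlo sketch, not the authors' cryoEM implementation: it estimates a single parameter from a user-supplied log-likelihood, represents the posterior by weighted support points, resamples them systematically, and shrinks the perturbation scale each iteration (playing the role of "self-adaptive accuracy"); the returned variance stands in for the belief of the estimate. Names and defaults are assumptions.

```python
import math
import random

def particle_filter(loglik, lo, hi, n_particles=500, n_iters=30, seed=1):
    """Estimate a scalar parameter by sequential Monte Carlo: the posterior
    over the parameter is represented by a set of weighted support points."""
    rng = random.Random(seed)
    particles = [rng.uniform(lo, hi) for _ in range(n_particles)]
    step = (hi - lo) / 4.0                    # initial perturbation scale
    for _ in range(n_iters):
        logw = [loglik(p) for p in particles]
        m = max(logw)
        w = [math.exp(v - m) for v in logw]   # shift logs for numerical stability
        total = sum(w)
        w = [v / total for v in w]
        # systematic resampling of the weighted support points
        resampled, cum, j = [], w[0], 0
        for i in range(n_particles):
            pos = (i + rng.random()) / n_particles
            while pos > cum and j < n_particles - 1:
                j += 1
                cum += w[j]
            resampled.append(particles[j])
        step *= 0.8                           # self-adaptive accuracy
        particles = [p + rng.gauss(0.0, step) for p in resampled]
    mean = sum(particles) / n_particles
    var = sum((p - mean) ** 2 for p in particles) / n_particles
    return mean, var                          # estimate and its "belief"
```

The same structure generalizes to the high-dimensional cryoEM case by making each support point a full pose/defocus parameter vector and the likelihood a comparison between the particle image and a projection of the current map.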


2018 ◽  
Vol 611 ◽  
pp. A53 ◽  
Author(s):  
S. Jamal ◽  
V. Le Brun ◽  
O. Le Fèvre ◽  
D. Vibert ◽  
A. Schmitt ◽  
...  

Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥10⁶) that will require fully automated data-processing pipelines to analyze the data, extract crucial information, and ensure that all requirements are met. A fundamental element of these pipelines is to associate with each galaxy redshift measurement a quality, or reliability, estimate.

Aims. In this work, we introduce a new approach to automating the spectroscopic redshift reliability assessment based on machine learning (ML) and on characteristics of the redshift probability density function.

Methods. We propose to rephrase the spectroscopic redshift estimation in a Bayesian framework, in order to incorporate all sources of information and uncertainty related to the redshift estimation process and to produce a redshift posterior probability density function (PDF). To automate the assignment of a reliability flag, we exploit key features of the redshift posterior PDF together with machine learning algorithms.

Results. As a working example, public data from the VIMOS VLT Deep Survey are used to present and test the new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe the different types of redshift PDFs, but because of the subjective definition of these flags (classification accuracy ~58%), we opted instead for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy ~98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels.

Conclusions. Through the development of a methodology in which a system builds its own experience to assess the quality of a parameter, we set a preliminary basis for automated reliability assessment of spectroscopic redshift measurements. This newly defined method is very promising for next-generation large spectroscopic surveys from the ground and in space, such as Euclid and WFIRST.
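The two ingredients of the method, descriptors extracted from a redshift posterior PDF and an unsupervised partitioning of those descriptors, can be sketched as follows. This is an illustrative stdlib approximation, not the pipeline used in the paper: the chosen features (peak strength, dispersion, mode count) and the plain k-means clustering stand in for whatever feature set and clustering algorithm the authors actually used, and the 10% mode threshold is an arbitrary assumption.

```python
import math
import random

def pdf_features(z_grid, pdf):
    """Summarize a redshift posterior PDF with reliability-style descriptors:
    peak strength, dispersion, and number of modes."""
    total = sum(pdf)
    p = [v / total for v in pdf]
    mean = sum(z * w for z, w in zip(z_grid, p))
    sigma = math.sqrt(sum(w * (z - mean) ** 2 for z, w in zip(z_grid, p)))
    peak = max(p)
    # count local maxima above 10% of the main peak as modes
    modes = sum(1 for i in range(1, len(p) - 1)
                if p[i] > p[i - 1] and p[i] > p[i + 1] and p[i] > 0.1 * peak)
    return [peak, sigma, modes]

def kmeans(points, k, n_iters=50, seed=0):
    """Minimal k-means over feature vectors (unsupervised partitioning)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(n_iters):
        clusters = [[] for _ in range(k)]
        for pt in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(pt, centers[c])))
            clusters[j].append(pt)
        centers = [[sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers
```

A narrow unimodal posterior then lands in a different cluster than a broad or multimodal one, which is the behavior the reliability labels are meant to capture; new (unlabeled) PDFs are assigned to the nearest cluster center.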

