Estimation of the depth-domain seismic wavelet based on velocity substitution and a generalized seismic wavelet model

Geophysics, 2021, pp. 1-50
Author(s): Jie Zhang, Xuehua Chen, Wei Jiang, Yunfei Liu, He Xu

Depth-domain seismic wavelet estimation is the essential foundation for depth-imaged data inversion, which is increasingly used for hydrocarbon reservoir characterization in geophysical prospecting. The seismic wavelet in the depth domain stretches with the medium velocity increase and compresses with the medium velocity decrease. The commonly used convolution model cannot be directly used to estimate depth-domain seismic wavelets due to velocity-dependent wavelet variations. We develop a separate parameter estimation method for estimating depth-domain seismic wavelets from poststack depth-domain seismic and well log data. This method is based on the velocity substitution and depth-domain generalized seismic wavelet model defined by the fractional derivative and reference wavenumber. Velocity substitution allows wavelet estimation with the convolution model in the constant-velocity depth domain. The depth-domain generalized seismic wavelet model allows for a simple workflow that estimates the depth-domain wavelet by estimating two wavelet model parameters. Additionally, this simple workflow does not need to perform searches for the optimal regularization parameter and wavelet length, which are time-consuming in least-squares-based methods. The limited numerical search ranges of the two wavelet model parameters can easily be calculated using the constant phase and peak wavenumber of the depth-domain seismic data. Our method is verified using synthetic and real seismic data and further compared with least-squares-based methods. The results indicate that the proposed method is effective and stable even for data with a low S/N.
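A minimal numeric sketch of the two ingredients named above (velocity substitution to a constant-velocity depth axis, and the convolution model in that axis), assuming a zero-phase Ricker as a stand-in for the generalized wavelet; all function names are ours, not the authors':

```python
import numpy as np

def velocity_substitution(z, v, v0):
    """Map true depth z to a constant-velocity (v0) depth axis.

    z'(z) = v0 * integral_0^z dz / v(z); on the new axis the wavelet no
    longer stretches or compresses with the local medium velocity, so the
    ordinary convolution model can be used for wavelet estimation.
    """
    dz = np.diff(z, prepend=z[0])
    return v0 * np.cumsum(dz / v)

def ricker(zaxis, k0):
    """Zero-phase Ricker wavelet in depth with peak wavenumber k0 (cycles/m)."""
    a = (np.pi * k0 * zaxis) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# In the constant-velocity depth domain the convolution model holds:
# seismic = wavelet * reflectivity
r = np.zeros(201)
r[60], r[140] = 1.0, -0.7
w = ricker(np.arange(-30, 31) * 5.0, 0.004)   # 5 m depth sampling
s = np.convolve(r, w, mode="same")
```

With a constant velocity log equal to v0, the substitution is the identity, which is a convenient sanity check on any implementation.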

2021, Vol 19 (3), pp. 125-138
Author(s): S. Inichinbia, A.L. Ahmed

This paper presents a rigorous but pragmatic, data-driven approach to the science of making seismic-to-well ties. The approach is consistent with the interpreter’s desire to correlate geology to seismic information through the convolution model, together with least-squares matching techniques and statistical measures of fit and accuracy to match the seismic data to the well data. Three wells available on the field provided a chance to estimate the wavelet (both in terms of shape and timing) directly from the seismic and to ascertain the level of confidence that should be placed in it. The reflections were interpreted clearly as hard sand at H1000 and soft sand at H4000. A synthetic seismogram was constructed and matched to a real seismic trace, and features from the well were correlated to the seismic data. The prime concept in constructing the synthetic is the convolution model, which represents a seismic reflection signal as a sequence of interfering reflection pulses of different amplitudes and polarities but all of the same shape. This pulse shape is the seismic wavelet, which is, formally, the reflection waveform returned by an isolated reflector of unit strength at the target depth. The wavelets are near zero phase. The goals of these seismic-to-well ties were to obtain information on the sediments, calibrate seismic processing parameters, correlate formation tops with seismic reflectors, and derive a wavelet for seismic inversion, among others. Three seismic-to-well ties were done using three partial angle stacks, and two formation tops were correlated. Keywords: seismic, well logs, tie, synthetics, angle stacks, correlation
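The convolution model behind a synthetic seismogram, and a zero-lag correlation as a tie-quality measure, can be sketched as follows (a hypothetical illustration assuming sonic and density logs; function names are ours):

```python
import numpy as np

def reflectivity_from_logs(vp, rho):
    """Normal-incidence reflection coefficients from acoustic impedance."""
    z = vp * rho                      # acoustic impedance
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def synthetic_trace(r, wavelet):
    """Convolution model: trace = wavelet * reflectivity."""
    return np.convolve(r, wavelet, mode="same")

def tie_correlation(synthetic, seismic):
    """Zero-lag normalized cross-correlation, a common measure of tie quality."""
    s = (synthetic - synthetic.mean()) / synthetic.std()
    t = (seismic - seismic.mean()) / seismic.std()
    return float(np.dot(s, t) / len(s))
```

In practice the synthetic would be shifted and stretched against the real trace before the correlation is read off; this sketch shows only the core convolution step.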


Geophysics, 1990, Vol 55 (7), pp. 902-913
Author(s): Arthur B. Weglein, Bruce G. Secrest

A new and general wave-theoretical wavelet estimation method is derived. Knowing the seismic wavelet is important both for processing seismic data and for modeling the seismic response. To obtain the wavelet, both statistical (e.g., Wiener‐Levinson) and deterministic (matching surface seismic to well‐log data) methods are generally used. In the marine case, a far‐field signature is often obtained with a deep‐towed hydrophone. The statistical methods do not allow obtaining the phase of the wavelet, whereas the deterministic method obviously requires data from a well. The deep‐towed hydrophone requires that the water be deep enough for the hydrophone to be in the far field and, in addition, that the reflections from the water bottom and structure do not corrupt the measured wavelet. None of the methods addresses the source array pattern, which is important for amplitude‐versus‐offset (AVO) studies. This paper presents a method of calculating the total wavelet, including the phase and source‐array pattern. When the source locations are specified, the method predicts the source spectrum. When the source is completely unknown (discrete and/or continuously distributed), the method predicts the wavefield due to this source. The method is in principle exact, and yet no information about the properties of the earth is required. In addition, the theory allows either an acoustic wavelet (marine) or an elastic wavelet (land), so the wavelet is consistent with the earth model to be used in processing the data. To accomplish this, the method requires a new data collection procedure: the field and its normal derivative must be measured on a surface. The procedure allows the multidimensional earth properties to be arbitrary and acts like a filter to eliminate the scattered energy from the wavelet calculation. The elastic wavelet estimation theory applied in this method may allow a true land wavelet to be obtained. Along with the derivation of the procedure, we present analytic and synthetic examples.


2020, Vol 39 (5), pp. 346-352
Author(s): Mohamed G. El-Behiry, Mohamed S. Al Araby, Ramy Z. Ragab

Seismic wavelets are the dynamic components that, convolved with a reflectivity series, produce a seismic trace. The seismic wavelet is described by three components: amplitude, frequency, and phase. Amplitude and frequency are considered static in that they mainly affect the appearance of a seismic event. Phase can have a large effect on seismic appearance by changing the way the data describe the subsurface. Knowing the wavelet properties of given seismic data facilitates interpretation by providing an understanding of the appearance of regional geologic markers and of the behavior of hydrocarbon-bearing formations. The process through which seismic data wavelets are understood is called the seismic well tie. The seismic well tie is the first step in calibrating seismic data in terms of polarity and phase. It ensures that the seismic data are descriptive of regional markers, well markers, and discoveries (if they exist). The step connects well data to seismic data to ensure that the seismic correctly describes well results at the well location; it then extends the understanding of seismic behavior to the rest of the area covered by the seismic data. A good seismic well tie greatly reduces the uncertainties accompanying seismic interpretation. One important outcome of the seismic well tie process is understanding the phase of the seismic data, which affects how the data will express a known geologic marker or hydrocarbon-bearing zone. This understanding can be useful in quantifying discoveries attached to seismic anomalies and in extending knowledge from the well location to the rest of the area covered by the seismic data.
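Since the abstract stresses the role of phase, a constant-phase rotation (the standard analytic-signal construction, built here with an FFT-based Hilbert filter; the function name is ours) makes the idea concrete:

```python
import numpy as np

def rotate_phase(trace, degrees):
    """Apply a constant phase rotation to a trace via the analytic signal."""
    n = len(trace)
    F = np.fft.fft(trace)
    h = np.zeros(n)                        # Hilbert-transform filter weights
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(F * h)          # trace + i * Hilbert(trace)
    rad = np.deg2rad(degrees)
    return np.cos(rad) * analytic.real - np.sin(rad) * analytic.imag
```

A 90-degree rotation turns a zero-phase (symmetric) event into an antisymmetric one, which is exactly the kind of appearance change the abstract describes.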


2017, Vol 60 (2), pp. 191-202
Author(s): FENG Wei, HU Tian-Yue, YAO Feng-Chang, ZHANG Yan, Cui Yong-Fu, ...

Author(s): С.И. Носков

The article describes the properties of methods for estimating the parameters of regression models (least squares, least modules, and anti-robust estimation) and their application to specific practical problems. The method of least modules does not respond to anomalous observations in the sample, the anti-robust estimation method strongly deflects the regression line toward them, and the method of least squares occupies an intermediate position. I show that if the purpose of constructing a model is to carry out multivariate predictive calculations of the values of the dependent variable on its basis, then the choice of a method for the numerical identification of the model parameters should be based on an analysis of the nature of the outliers. If there is reason to believe that similar situations may occur in the future, the anti-robust estimation method should be chosen; otherwise, the least-modules method. I built a regression model of the freight turnover of the Krasnoyarsk railway using all three parameter estimation methods. I analyzed the causes of the sharp drop in freight turnover in 2010, which may well be characterized as an anomalous observation in the data, and I give recommendations on the choice of the parameter estimation method in this case.
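The contrast between least squares and the method of least modules can be demonstrated numerically (our sketch, not the author's models; the L1 fit is approximated here by iteratively reweighted least squares):

```python
import numpy as np

def fit_least_squares(x, y):
    """Ordinary least squares fit y ≈ a*x + b."""
    A = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def fit_least_modules(x, y, iters=100, eps=1e-8):
    """Least absolute deviations (method of least modules) via IRLS."""
    A = np.column_stack([x, np.ones_like(x)])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        # Reweight by inverse residual magnitude: L1 as a limit of weighted L2
        w = 1.0 / np.maximum(np.abs(y - A @ beta), eps)
        beta = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return beta

# A line y = 2x + 1 with one anomalous observation at the largest x:
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[-1] += 40.0
a_l2, _ = fit_least_squares(x, y)
a_l1, _ = fit_least_modules(x, y)
# The L1 slope stays near 2; the L2 slope is pulled toward the outlier.
```

An anti-robust estimator would sit at the opposite extreme, pulling the line even further toward the anomalous point than least squares does.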


2013, Vol 90, pp. 92-95
Author(s): Jing Zheng, Su-ping Peng, Ming-chu Liu, Zhe Liang

2019, Vol 80 (3), pp. 421-445
Author(s): Dexin Shi, Alberto Maydeu-Olivares

We examined the effect of estimation methods, maximum likelihood (ML), unweighted least squares (ULS), and diagonally weighted least squares (DWLS), on three population SEM (structural equation modeling) fit indices: the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardized root mean square residual (SRMR). We considered different types and levels of misspecification in factor analysis models: misspecified dimensionality, omitting cross-loadings, and ignoring residual correlations. Estimation methods had substantial impacts on the RMSEA and CFI so that different cutoff values need to be employed for different estimators. In contrast, SRMR is robust to the method used to estimate the model parameters. The same criterion can be applied at the population level when using the SRMR to evaluate model fit, regardless of the choice of estimation method.


2011, Vol 23 (1), pp. 284-301
Author(s): Taiji Suzuki, Masashi Sugiyama

Accurately evaluating statistical independence among random variables is a key element of independent component analysis (ICA). In this letter, we employ a squared-loss variant of mutual information as an independence measure and give its estimation method. Our basic idea is to estimate the ratio of probability densities directly without going through density estimation, thereby avoiding the difficult task of density estimation. In this density ratio approach, a natural cross-validation procedure is available for hyperparameter selection. Thus, all tuning parameters such as the kernel width or the regularization parameter can be objectively optimized. This is an advantage over recently developed kernel-based independence measures and is a highly useful property in unsupervised learning problems such as ICA. Based on this novel independence measure, we develop an ICA algorithm, named least-squares independent component analysis.
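The density-ratio idea can be sketched with a uLSIF-style least-squares fit (our simplification: Gaussian kernel width and regularization are fixed here rather than chosen by the cross-validation the letter describes, and all names are ours):

```python
import numpy as np

def lsmi(x, y, n_centers=20, sigma=1.0, lam=0.1, seed=0):
    """Squared-loss mutual information via direct density-ratio estimation.

    Fits r(x, y) = p(x, y) / (p(x) p(y)) as a sum of Gaussian kernels
    centered on a subset of the joint samples, then plugs the fit into
    SMI = 0.5 * E_joint[r] - 0.5.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    idx = rng.choice(n, size=min(n_centers, n), replace=False)
    A = np.exp(-(x[:, None] - x[idx][None, :]) ** 2 / (2 * sigma ** 2))
    B = np.exp(-(y[:, None] - y[idx][None, :]) ** 2 / (2 * sigma ** 2))
    h = (A * B).mean(axis=0)             # kernel means under the joint density
    H = (A.T @ A) * (B.T @ B) / n ** 2   # means under the product of marginals
    alpha = np.linalg.solve(H + lam * np.eye(len(idx)), h)
    return float(0.5 * h @ alpha - 0.5)
```

For independent variables the ratio is 1 everywhere and the estimate sits near zero; dependence pushes it up, which is what an ICA algorithm built on this measure minimizes.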


Geophysics, 2020, pp. 1-47
Author(s): George Ghon, Dario Grana, Eugene C. Rankey, Gregor T. Baechle, Florian Bleibinhaus, ...

We present a case study of geophysical reservoir characterization in which we use elastic inversion and probabilistic prediction to predict nine carbonate lithofacies and the associated porosity distribution. The study focuses on an isolated carbonate platform of middle Miocene age, offshore Sarawak in Malaysia, which has been partly dolomitized — a process that increased the porosity and permeability of the prolific gas reservoir. The nine lithofacies are defined from one reference core and include a range of lithologies and pore types: limestone and dolomitized limestone, each with vuggy varieties; sucrosic and crystalline dolomites with intercrystalline porosity; and argillaceous limestones and shales. To predict lithofacies and porosity from geophysical data, we adopt a probabilistic algorithm based on Bayesian theory with an analytical solution for the conditional means and covariances of the posterior probabilities, assuming a Gaussian mixture model. The inversion is a two-step process, first solving for the elastic model parameters (P- and S-wave velocities and density) from two partial seismic stacks. Subsequently, lithofacies and porosity are predicted from the elastic parameters in the borehole and across a 2-D inline. The final result is a model consisting of the pointwise posterior distributions of facies and porosity at each location where seismic data are available. The facies posterior distribution represents the facies proportions estimated from seismic data, whereas the porosity distribution is the probability density function of porosity at each location. These distributions provide the most likely model and its associated uncertainty for geological interpretations.
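The pointwise Bayesian classification step can be sketched as follows (a simplified illustration with two hypothetical facies and Gaussian likelihoods over elastic properties; the study itself uses nine facies and a full Gaussian-mixture treatment, and all numbers below are invented):

```python
import numpy as np

def gaussian_pdf(m, mu, cov):
    """Multivariate normal density evaluated at a single point m."""
    d = len(mu)
    diff = m - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def facies_posterior(m, means, covs, priors):
    """Bayes' rule pointwise: p(facies | m) ∝ prior * Gaussian likelihood."""
    like = np.array([p * gaussian_pdf(m, mu, c)
                     for p, mu, c in zip(priors, means, covs)])
    return like / like.sum()

# Two hypothetical facies described by (Vp in m/s, density in g/cc):
means = [np.array([3500.0, 2.45]), np.array([4500.0, 2.65])]
covs = [np.diag([200.0 ** 2, 0.05 ** 2]), np.diag([250.0 ** 2, 0.05 ** 2])]
priors = [0.6, 0.4]
post = facies_posterior(np.array([3550.0, 2.46]), means, covs, priors)
```

Applying this at every inverted seismic sample yields exactly the kind of pointwise posterior facies volume the abstract describes.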


Geophysics, 2019, Vol 84 (2), pp. R221-R234
Author(s): Yuhan Sui, Jianwei Ma

Seismic wavelet estimation and deconvolution are essential for high-resolution seismic processing. Because of the influence of absorption and scattering, the frequency and phase of the seismic wavelet change with time during wave propagation, leading to a time-varying seismic wavelet. To obtain reflectivity coefficients with more accurate relative amplitudes, we must compute a nonstationary deconvolution of the seismogram, which can be difficult to solve. We extend sparse spike deconvolution via Toeplitz-sparse matrix factorization to a nonstationary sparse spike deconvolution approach with anelastic attenuation, separating the model into subproblems in each of which the wavelet estimation problem is solved by classic sparse optimization algorithms. Numerical examples illustrating the parameter setting, a noisy seismogram, and the estimation error of the Q value validate the effectiveness of the extended approach. More importantly, taking advantage of the high accuracy of the estimated Q value, we obtain better performance than the stationary Toeplitz-sparse spike deconvolution approach on real seismic data.
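The stationary building block (sparse spike deconvolution with an explicit Toeplitz wavelet matrix) can be sketched with a plain ISTA solver; this is a generic illustration of the L1-regularized deconvolution idea, not the authors' Toeplitz-sparse matrix factorization or its nonstationary extension:

```python
import numpy as np

def conv_matrix(w, n):
    """Toeplitz matrix W such that W @ r equals the full convolution w * r."""
    W = np.zeros((n + len(w) - 1, n))
    for j in range(n):
        W[j:j + len(w), j] = w
    return W

def sparse_deconv(s, W, lam=0.05, iters=500):
    """ISTA for 0.5*||W r - s||^2 + lam*||r||_1 (sparse reflectivity)."""
    L = np.linalg.norm(W, 2) ** 2          # Lipschitz constant of the gradient
    r = np.zeros(W.shape[1])
    for _ in range(iters):
        z = r - W.T @ (W @ r - s) / L      # gradient step on the data misfit
        r = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return r

# Synthetic check: two spikes convolved with a 30 Hz Ricker wavelet
t = np.arange(-15, 16) * 0.004
w = (1 - 2 * (np.pi * 30 * t) ** 2) * np.exp(-(np.pi * 30 * t) ** 2)
r_true = np.zeros(100)
r_true[30], r_true[70] = 1.0, -0.8
W = conv_matrix(w, 100)
r_hat = sparse_deconv(W @ r_true, W)
```

In the nonstationary setting the single matrix W would be replaced by a time-varying (Q-dependent) operator, which is the part the paper addresses.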

