Simultaneous multivintage time-shift estimation

Geophysics ◽  
2009 ◽  
Vol 74 (5) ◽  
pp. V109-V121 ◽  
Author(s):  
Ehsan Zabihi Naeini ◽  
Henning Hoeber ◽  
Gordon Poole ◽  
Hamid R. Siahkoohi

Time-shift estimation is a key step in seismic time-lapse processing as well as in many other signal-processing applications. We consider the time-shift problem in the setting of multiple repeat surveys that must be aligned consistently. We introduce an optimized least-squares method based on the Taylor expansion for estimating two-vintage time shifts and compare it to crosscorrelation. The superiority of the proposed algorithm is demonstrated with synthetic data and residual time-lapse matching on a U.K. continental shelf data set. We then discuss the shortcomings of cascaded time alignment in multiple repeat monitor surveys and propose an approach for simultaneous multivintage time-shift estimation that uses a constrained least-squares technique combined with elements of network theory. The resulting time shifts are consistent across all vintages in a least-squares sense, improving overall alignment compared to the classical cascaded alignment flow. The method surpasses the cascaded approach, as demonstrated on synthetic and three-vintage U.K. continental shelf time-lapse data sets.
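
A minimal numpy sketch (not the authors' code) of the two ingredients the abstract describes: a linearized Taylor-expansion estimate of a local two-vintage time shift, and a network-style least-squares reconciliation of pairwise shifts into consistent per-vintage shifts. Window length, damping, and the unweighted network solve are assumptions.

```python
import numpy as np

def taylor_time_shift(base, monitor, dt, win=32):
    """Local two-vintage time shifts from the linearization
    monitor(t) ~ base(t - tau) ~ base(t) - tau * base'(t),
    solved for tau by least squares in a sliding window."""
    db = np.gradient(base, dt)                      # time derivative of base
    n = len(base)
    shifts = np.zeros(n)
    half = win // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half)
        num = np.dot(db[lo:hi], base[lo:hi] - monitor[lo:hi])
        den = np.dot(db[lo:hi], db[lo:hi]) + 1e-12  # damping avoids 0/0
        shifts[i] = num / den
    return shifts

def network_consistent_shifts(pair_shifts, nvint):
    """Reconcile pairwise shifts tau_ij ~ s[j] - s[i] into per-vintage
    shifts s (s[0] pinned to 0) by least squares over the whole network."""
    rows, rhs = [], []
    for (i, j), tau in pair_shifts.items():
        r = np.zeros(nvint - 1)                     # unknowns are s[1:]
        if j > 0:
            r[j - 1] += 1.0
        if i > 0:
            r[i - 1] -= 1.0
        rows.append(r)
        rhs.append(tau)
    s, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return np.concatenate([[0.0], s])
```

For example, network_consistent_shifts({(0, 1): 2.0, (1, 2): 1.5, (0, 2): 3.6}, 3) returns roughly [0, 2.03, 3.57], distributing the 0.1 closure error across the triangle instead of accumulating it in a cascade.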

Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. M29-M41 ◽  
Author(s):  
Mahdi H. Almutlaq ◽  
Gary F. Margrave

We evaluated the concept of surface-consistent matching filters for processing time-lapse seismic data, in which matching filters are convolutional filters that minimize the sum-squared error between two signals. Because in the Fourier domain a matching filter is the spectral ratio of the two signals, we extended the well-known surface-consistent hypothesis such that the data term is a trace-by-trace spectral ratio of two data sets instead of only one (i.e., surface-consistent deconvolution). To avoid unstable division of spectra, we computed the spectral ratios in the time domain by first designing trace-sequential, least-squares matching filters, then Fourier transforming them. A subsequent least-squares solution then factored the trace-sequential matching filters into four operators: two surface-consistent (source and receiver) and two subsurface-consistent (offset and midpoint). We evaluated a time-lapse synthetic data set with nonrepeatable acquisition parameters, complex near-surface geology, and a variable subsurface reservoir layer. We computed the four-operator surface-consistent matching filters from two surveys, baseline and monitor, then applied these matching filters to the monitor survey to match it to the baseline survey over a temporal window where changes were not expected. This algorithm significantly reduced the effect of most of the nonrepeatable parameters, such as differences in source strength, receiver coupling, wavelet bandwidth and phase, and static shifts. We computed the normalized root-mean-square difference on raw stacked data (baseline and monitor) and obtained a mean value of 70%; after applying the four-operator surface-consistent matching filters, this value was reduced to about 13.6% computed from final stacks.
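
The trace-sequential design step lends itself to a short sketch. Below is a hedged numpy/scipy version of a least-squares (Wiener) matching filter solved from Toeplitz normal equations in the time domain, as the abstract advocates to avoid unstable spectral division; filter length and prewhitening level are assumptions, and the four-operator surface-consistent factorization is not shown.

```python
import numpy as np
from scipy.linalg import toeplitz, solve

def matching_filter(monitor, baseline, nf=31, prewhite=1e-3):
    """Least-squares matching filter f minimizing ||f * monitor - baseline||^2,
    via the Toeplitz normal equations R f = g (R: autocorrelation of the
    monitor trace, g: its crosscorrelation with the baseline)."""
    mid = len(monitor) - 1
    ac = np.correlate(monitor, monitor, mode="full")
    r = ac[mid:mid + nf].copy()
    r[0] *= 1.0 + prewhite                 # prewhitening stabilizes the solve
    cc = np.correlate(baseline, monitor, mode="full")
    g = cc[mid:mid + nf]
    return solve(toeplitz(r), g)

# Applying the filter matches the monitor trace to the baseline:
# matched = np.convolve(monitor, f)[:len(monitor)]
```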


Geophysics ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. M1-M13 ◽  
Author(s):  
Yichuan Wang ◽  
Igor B. Morozov

For seismic monitoring of injected fluids during enhanced oil recovery or geologic CO₂ sequestration, it is useful to measure time-lapse (TL) variations of acoustic impedance (AI). AI gives direct connections to the mechanical and fluid-related properties of the reservoir or CO₂ storage site; however, evaluation of its subtle TL variations is complicated by the low-frequency and scaling uncertainties of this attribute. We have developed three enhancements of TL AI analysis to resolve these issues. First, following waveform calibration (cross-equalization) of the monitor seismic data sets to the baseline one, the reflectivity difference was evaluated from the attributes measured during the calibration. Second, a robust approach to AI inversion was applied to the baseline data set, based on calibration of the records by using the well-log data and spatially variant stacking and interval velocities derived during seismic data processing. This inversion method is straightforward and does not require subjective selections of parameterization and regularization schemes. Unlike joint or statistical inverse approaches, this method does not require prior models and produces accurate fitting of the observed reflectivity. Third, the TL AI difference is obtained directly from the baseline AI and reflectivity difference, without the uncertainty-prone subtraction of AI volumes from different seismic vintages. The above approaches are applied to TL data sets from the Weyburn CO₂ sequestration project in southern Saskatchewan, Canada. High-quality baseline and TL AI-difference volumes are obtained. TL variations within the reservoir zone are observed in the calibration time-shift, reflectivity-difference, and AI-difference images, which are interpreted as being related to the CO₂ injection.
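
The third enhancement, obtaining the TL AI difference directly from the baseline AI and the reflectivity difference, can be sketched with the standard small-contrast convention r ≈ ½ Δln(AI); the discrete integration below is an assumption, not the paper's exact scheme.

```python
import numpy as np

def ai_difference(ai_base, refl_diff):
    """Map a reflectivity difference directly to a TL AI difference without
    subtracting two independently inverted AI volumes. With reflectivity per
    sample r ~ 0.5 * d ln(AI), the monitor AI equals the baseline AI scaled
    by exp(2 * cumulative reflectivity difference)."""
    dlog = 2.0 * np.cumsum(refl_diff)       # integrated reflectivity change
    return ai_base * (np.exp(dlog) - 1.0)   # AI_monitor - AI_baseline
```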


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S87-S100 ◽  
Author(s):  
Hao Hu ◽  
Yike Liu ◽  
Yingcai Zheng ◽  
Xuejian Liu ◽  
Huiyi Lu

Least-squares migration (LSM) can be effective in mitigating the limitations of finite seismic acquisition, balancing the subsurface illumination, and improving the spatial resolution of the image, but it requires iterations of migration and demigration to obtain the desired subsurface reflectivity model. The computational efficiency and accuracy of the migration and demigration operators are crucial for applying the algorithm. We have tested the feasibility of using the Gaussian beam as the wavefield extrapolation operator for LSM, denoted as least-squares Gaussian beam migration. Our method combines the advantages of LSM with the efficiency of the Gaussian beam propagator. Our numerical evaluations, including two synthetic data sets and one marine field data set, illustrate that the proposed approach can be used to obtain amplitude-balanced images and to broaden the bandwidth of the migrated images, in particular for the low-wavenumber components.
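
The iterate-migration-and-demigration loop itself is generic; a minimal CGLS sketch with black-box linear operators is shown below. In least-squares Gaussian beam migration the operators would be the Gaussian-beam migration/demigration pair; here they are placeholders, and the iteration count is an assumption.

```python
import numpy as np

def lsm_cgls(migrate, demigrate, data, niter=10):
    """Least-squares migration by CGLS: minimize ||L m - d||^2 with
    migrate(d) ~ L^T d and demigrate(m) ~ L m supplied as black boxes."""
    r = data.copy()                 # data residual
    s = migrate(r)                  # gradient image, L^T r
    m = np.zeros_like(s)
    p = s.copy()
    gamma = np.vdot(s, s)
    for _ in range(niter):
        q = demigrate(p)            # forward-modeled search direction, L p
        alpha = gamma / np.vdot(q, q)
        m = m + alpha * p
        r = r - alpha * q
        s = migrate(r)
        gamma_new = np.vdot(s, s)
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m
```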


Water Science and Technology ◽  
2010 ◽  
Vol 62 (4) ◽  
pp. 875-882 ◽  
Author(s):  
A. Dembélé ◽  
J.-L. Bertrand-Krajewski ◽  
B. Barillon

Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet-weather discharges in urban catchments. Two main questions dealing with the calibration of EMC regression models are investigated: i) the sensitivity of models to the size and content of the data sets used for their calibration, and ii) the change in modelling results when models are re-calibrated as data sets grow and change over time with newly collected experimental data. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear models) with two or three explanatory variables have been derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options have been investigated: two options accounting for the chronological order of the observations, and one option using random samples of events from the whole available data set. Results obtained with the best-performing nonlinear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
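
A hedged sketch of iteratively re-weighted least squares, the calibration method the abstract finds more robust than ordinary least squares; the absolute-residual (L1-style) weight function and iteration count are assumptions, not details from the paper.

```python
import numpy as np

def irls(X, y, niter=20, delta=1e-6):
    """Robust regression by iteratively re-weighted least squares:
    start from the OLS fit, then repeatedly downweight observations
    with large residuals and re-solve the weighted problem."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting point
    for _ in range(niter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), delta)    # downweight large residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```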


Astronomy & Astrophysics ◽  
2020 ◽  
Vol 633 ◽  
pp. A46
Author(s):  
L. Siltala ◽  
M. Granvik

Context. The bulk density of an asteroid informs us about its interior structure and composition. To constrain the bulk density, one needs an estimated mass of the asteroid. The mass is estimated by analyzing an asteroid’s gravitational interaction with another object, such as another asteroid during a close encounter. An estimate for the mass has typically been obtained with linearized least-squares methods, despite the fact that this family of methods is not able to properly describe non-Gaussian parameter distributions. In addition, the uncertainties reported for asteroid masses in the literature are sometimes inconsistent with each other and are suspected to be unrealistically low. Aims. We aim to present a Markov-chain Monte Carlo (MCMC) algorithm for the asteroid mass estimation problem based on asteroid-asteroid close encounters. We verify that our algorithm works correctly by applying it to synthetic data sets. We use astrometry available through the Minor Planet Center to estimate masses for a select few example cases and compare our results with results reported in the literature. Methods. Our mass-estimation method is based on the robust adaptive Metropolis algorithm that has been implemented into the OpenOrb asteroid orbit computation software. Our method has the built-in capability to analyze multiple perturbing asteroids and test asteroids simultaneously. Results. We find that our mass estimates for the synthetic data sets are fully consistent with the ground truth. The nominal masses for real example cases typically agree with the literature but tend to have greater uncertainties than what is reported in recent literature. Possible reasons for this include different astrometric data sets and weights, different test asteroids, different force models, or different algorithms. For (16) Psyche, the target of NASA’s Psyche mission, our maximum likelihood mass is approximately 55% of what is reported in the literature. Such a low mass would imply that the bulk density is significantly lower than previously expected and thus disagrees with the theory of (16) Psyche being the metallic core of a protoplanet. We do, however, note that masses reported in recent literature remain within our 3-sigma limits. Conclusions. The new MCMC mass-estimation algorithm performs as expected, but a rigorous comparison with results from a least-squares algorithm with the exact same data set remains to be done. The matters of uncertainties in comparison with other algorithms and correlations of observations also warrant further investigation.
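
A simplified, first-order sketch of the robust adaptive Metropolis idea (after Vihola 2012), with the log-posterior as a black box; in the paper it would score perturbed orbits against the astrometry inside OpenOrb. The adaptation schedule and target acceptance rate are assumptions, and the rank-1 update below is a first-order stand-in for the exact Cholesky-factor update.

```python
import numpy as np

def adaptive_metropolis(logpost, x0, n=20000, target=0.234, seed=1):
    """Metropolis sampler whose proposal shape S adapts along each proposal
    direction so the acceptance rate drifts toward `target`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    lp = logpost(x)
    S = np.eye(d)                                   # proposal scale factor
    chain = np.empty((n, d))
    for i in range(1, n + 1):
        u = rng.standard_normal(d)
        prop = x + S @ u
        lp_prop = logpost(prop)
        alpha = np.exp(min(0.0, lp_prop - lp))      # acceptance probability
        if rng.random() < alpha:
            x, lp = prop, lp_prop
        chain[i - 1] = x
        eta = min(1.0, d * i ** -0.66)              # decaying adaptation step
        S = S + (eta * (alpha - target) / (u @ u)) * np.outer(S @ u, u)
    return chain
```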


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
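
A small dense sketch of the damped least-squares solve with a banded Hessian approximation, in the spirit of the abstract's limited-diagonal idea; a real implementation would never form the Hessian explicitly, the weighting is omitted, and the band width and damping level are assumptions.

```python
import numpy as np

def regularized_datum(A, d, eps=1e-2, nbands=5):
    """Damped least-squares wavefield regularization/datuming:
    solve (A^H A + eps I) m = A^H d, keeping only `nbands` central
    diagonals of the Hessian A^H A to cut cost (emulated by masking)."""
    H = A.conj().T @ A
    n = H.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= nbands
    Hb = np.where(mask, H, 0.0) + eps * np.eye(n)   # banded, damped Hessian
    return np.linalg.solve(Hb, A.conj().T @ d)
```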


Atmospheric Measurement Techniques ◽  
2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
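
Of the three methods, classical bootstrap (BS) is the easiest to sketch: resample the observations, refit, and summarize the spread of the factor profiles. The block resampling, the black-box solve_pmf, and the omission of factor matching across resamples (which EPA PMF handles by correlating factor contributions) are all simplifications.

```python
import numpy as np

def bootstrap_factors(solve_pmf, X, nboot=100, block=3, seed=0):
    """Classical block-bootstrap uncertainty sketch for a factor model:
    resample rows of the data matrix in blocks, refit, and collect the
    factor profiles. solve_pmf(X) -> (G, F) is assumed to return factor
    contributions G and profiles F."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    profiles = []
    for _ in range(nboot):
        starts = rng.integers(0, n - block + 1, size=n // block)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])
        _, F = solve_pmf(X[idx])        # refit on the resampled data
        profiles.append(F)
    P = np.stack(profiles)              # (nboot, nfactors, nspecies)
    return P.mean(axis=0), P.std(axis=0)
```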


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in the estimations is needed if information about pressure- and saturation-related changes is to be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, where the solution of the problem is represented by a probability density function (PDF), providing estimates of the uncertainties as well as direct estimates of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection-coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each of the variables. In addition, bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
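
Under a linearized forward operator, the Bayesian machinery the abstract describes reduces to a Gaussian prior-to-posterior update; the sketch below shows that update for a generic G mapping (pressure change, saturation change) to PP reflection-coefficient differences. G, the prior, and the noise covariance are all assumptions here, not the paper's rock-physics model.

```python
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Cd):
    """Linear-Gaussian Bayesian update: prior m ~ N(m0, Cm), data
    d = G m + e with e ~ N(0, Cd). Returns the posterior mean and
    covariance, i.e., the PDF giving both property estimates and
    their uncertainties."""
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)   # Kalman-style gain
    m_post = m0 + K @ (d - G @ m0)
    C_post = Cm - K @ G @ Cm
    return m_post, C_post
```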


Author(s):  
Sauro Mocetti

Abstract This paper contributes to the growing number of studies on intergenerational mobility by providing a measure of earnings elasticity for Italy. The absence of an appropriate data set is overcome by adopting the two-sample two-stage least squares method. The analysis, based on the Survey of Household Income and Wealth, shows that intergenerational mobility is lower in Italy than it is in other developed countries. We also examine the reasons why the long-term labor market success of children is related to that of their fathers.
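
A minimal sketch of two-sample two-stage least squares as used for this kind of problem: fathers' earnings are predicted from instruments (e.g., education and occupation dummies) in one sample, and the imputed values serve as the regressor in the other. Variable names and the intercept handling are assumptions.

```python
import numpy as np

def ts2sls(Z1, x1, Z2, y2):
    """Two-sample two-stage least squares: sample 1 has instruments Z1 and
    the regressor x1 (fathers' earnings); sample 2 has the same instruments
    Z2 and the outcome y2 (children's earnings) but no direct x. Stage 1
    fits x on Z in sample 1; stage 2 regresses y on the imputed x-hat."""
    b1 = np.linalg.lstsq(Z1, x1, rcond=None)[0]   # first stage
    xhat = Z2 @ b1                                # impute regressor in sample 2
    X = np.column_stack([np.ones_like(xhat), xhat])
    beta = np.linalg.lstsq(X, y2, rcond=None)[0]
    return beta[1]                                # elasticity estimate
```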


Author(s):  
Danlei Xu ◽  
Lan Du ◽  
Hongwei Liu ◽  
Penghui Wang

A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, where a set of nonlinear mappings of the original data is performed as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct the nonlinear classification boundary, but also realize feature selection for the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of the Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings in our model, respectively. We derive the Variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results based on a synthetic data set, a measured radar data set, a high-dimensional gene expression data set, and several benchmark data sets demonstrate the aggressive and robust feature-selection capability and comparable classification accuracy of our method compared with some other existing classifiers.
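
The full VB derivation is beyond a short sketch; below is a loose stand-in (explicitly not the paper's algorithm) that captures the structure: random nonlinear mappings augment the original features, and an L1-penalized linear classifier enforces sparse use of both. The paper instead places a Gaussian prior with Gamma precision and a Beta-process approximation on the weights and infers them by VB.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sparse_nonlinear_classifier(X, y, n_maps=50, seed=0):
    """Stand-in for the paper's model: augment the original features with
    random tanh mappings, then fit an L1-penalized linear classifier so that
    both original features and mappings are selected sparsely."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_maps))
    Phi = np.hstack([X, np.tanh(X @ W)])      # original + nonlinear features
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    clf.fit(Phi, y)
    selected = np.flatnonzero(clf.coef_[0])   # indices of retained features
    return clf, selected
```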

