Simulation of complete seismic surveys for evaluation of experiment design and processing

Geophysics ◽  
1996 ◽  
Vol 61 (2) ◽  
pp. 496-508 ◽  
Author(s):  
Turgut Özdenvar ◽  
George A. McMechan ◽  
Preston Chaney

Synthesis of complete seismic survey data sets allows analysis and optimization of all stages in an acquisition/processing sequence. The characteristics of available survey designs, parameter choices, and processing algorithms may be evaluated prior to field acquisition to produce a composite system in which all stages have compatible performance; this maximizes the cost effectiveness for a given level of accuracy, or for targets with specific characteristics. Data sets synthesized for three salt structures provide representative comparisons of time and depth migration, post‐stack and prestack processing, and illustrate effects of varying recording aperture and shot spacing, iterative focusing analysis, and the interaction of migration algorithms with recording aperture. A final example demonstrates successful simulation of both 2-D acquisition and processing of a real data line over a salt pod in the Gulf of Mexico.

Geophysics ◽  
2005 ◽  
Vol 70 (5) ◽  
pp. E21-E28 ◽  
Author(s):  
Yu Zhang ◽  
James Sun ◽  
Carl Notfors ◽  
Samuel H. Gray ◽  
Leon Chernis ◽  
...  

For 3D seismic imaging in structurally complex areas, the use of migration by wavefield extrapolation has become widespread. By its very nature, this family of migration methods operates on data sets that satisfy a wave equation in the context of a single, physically realizable field experiment, such as a common-shot record. However, common-shot migration of data recorded over dipping structures requires a migration aperture much larger than the recording aperture, resulting in extra computations. A different type of wave-equation record, the response to a linear or planar source, can be synthesized from all the common-shot records. Synthesizing these records from common-shot records involves slant-stack processing, or applying delays to the various shots; we call these records delayed-shot records. Delayed-shot records do not suffer from the aperture problems of common-shot records since their recording aperture is the length of the seismic survey. Consequently, delayed-shot records hold potential for efficient, accurate imaging by wavefield extrapolation. We present a formulation of delayed-shot migration in 2D and 3D (linear sources) and its application to 3D marine streamer data. This formulation includes a discussion of sampling theory issues associated with the formation of delayed-shot records. For typical marine data, 2D and 3D delayed-shot migration can be significantly more efficient than common-shot migration. Synthetic and real data examples show that delayed-shot migration produces images comparable to those from common-shot migration.
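
To make the slant-stack construction concrete, the sketch below forms a single delayed-shot record by delaying each common-shot gather by tau = p * x_s and summing. The array layout, the common receiver grid, and the sign convention for the delay are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def delayed_shot_record(shots, shot_x, p, dt):
    """Synthesize one delayed-shot (linear-source) record by slant-stacking
    common-shot gathers: delay each shot's gather by tau = p * x_s and sum.

    Assumed (illustrative) conventions:
      shots  -- array (n_shots, n_receivers, n_samples), gathers already on a
                common receiver grid
      shot_x -- shot coordinates, shape (n_shots,)
      p      -- ray parameter (slowness) of the synthesized linear source [s/m]
      dt     -- time sampling interval [s]
    """
    n_shots, n_rec, n_t = shots.shape
    t = np.arange(n_t) * dt
    out = np.zeros((n_rec, n_t))
    for i in range(n_shots):
        tau = p * (shot_x[i] - shot_x[0])      # firing delay for this shot
        # a trace delayed by tau satisfies d'(t) = d(t - tau); shift by linear interpolation
        for r in range(n_rec):
            out[r] += np.interp(t - tau, t, shots[i, r], left=0.0, right=0.0)
    return out
```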


Geophysics ◽  
2003 ◽  
Vol 68 (5) ◽  
pp. 1470-1484 ◽  
Author(s):  
Alastair M. Swanston ◽  
Peter B. Flemings ◽  
Joseph T. Comisky ◽  
Kevin D. Best

Two orthogonal preproduction seismic surveys and a regional seismic survey acquired after eight years of production from the Bullwinkle field (Green Canyon 65, Gulf of Mexico) reveal extraordinary seismic differences attributed to production‐induced changes in rock and fluid properties. Amplitude reduction (of up to 71%) occurs where production and log data show that water has replaced hydrocarbons as the oil–water contact moved upward. Separate normalizations of these surveys demonstrate that time‐lapse results are improved by using seismic surveys acquired in similar orientations; also, clearer difference images are obtained from comparing lower‐frequency data sets. Superior stratigraphic illumination in the dip‐oriented survey relative to the strike‐oriented surveys results in nongeological amplitude differences. This documents the danger of using dissimilar baseline and monitor surveys for time‐lapse studies.


2010 ◽  
Vol 50 (2) ◽  
pp. 723
Author(s):  
Sergey Birdus ◽  
Erika Angerer ◽  
Iftikhar Abassi

Processing of multi- and wide-azimuth seismic data faces some new challenges, one of which is depth-velocity modelling and imaging in the presence of azimuthal velocity anisotropy. Analysis of multi-azimuth data very often reveals noticeable fluctuations in moveout between different acquisition directions. These can be caused by several factors: (A) real azimuthal interval-velocity anisotropy associated with quasi-vertical fractures or the present-day stress field within the sediments; (B) short-wavelength velocity heterogeneities in the overburden; (C) TTI (or VTI) anisotropy in the overburden; or (D) random distortions due to noise, multiples, irregularities in the acquisition geometry, etc. In order to build a velocity model for multi-azimuth pre-stack depth migration (MAZ PSDM) that takes the observed azimuthal anisotropy into account, we need to recognise, separate and estimate all of the effects listed above during iterative depth-velocity modelling. Analysis of data from a full-azimuth 3D land seismic survey revealed the presence of strong, spatially variable azimuthal velocity anisotropy that had to be taken into consideration. Using real data examples, we discuss the major steps in a depth-processing workflow that accounted for this anisotropy: residual-moveout estimation in azimuth sectors; separation of the different effects causing apparent azimuthal anisotropy (effects A–D above); iterative depth-velocity modelling with azimuthal anisotropy; and subsequent MAZ anisotropic PSDM. The presented workflow solved the problems with azimuthal anisotropy in our multi-azimuth dataset. Some of the lessons learned during this MAZ project are relevant to any standard narrow-azimuth seismic survey recorded in complex geological settings.
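
As an illustration of the azimuth-sector moveout step, the sketch below fits an elliptical azimuthal NMO-velocity variation of the standard Grechka–Tsvankin form to velocities picked in azimuth sectors and returns the fast/slow velocities and fast azimuth. Function and variable names are hypothetical; the workflow's actual parameterization is not given in the abstract.

```python
import numpy as np

def fit_azimuthal_ellipse(azimuths_deg, v_nmo):
    """Fit 1/V^2(phi) = W11*cos^2(phi) + 2*W12*sin(phi)*cos(phi) + W22*sin^2(phi)
    to NMO velocities picked in azimuth sectors (an assumed input from
    residual-moveout analysis). Returns (v_fast, v_slow, fast_azimuth_deg),
    with the fast azimuth measured from the same reference as the inputs."""
    phi = np.radians(np.asarray(azimuths_deg, dtype=float))
    w = 1.0 / np.asarray(v_nmo, dtype=float) ** 2
    # design matrix for the three independent components of the symmetric matrix W
    A = np.column_stack([np.cos(phi) ** 2,
                         2.0 * np.sin(phi) * np.cos(phi),
                         np.sin(phi) ** 2])
    (w11, w12, w22), *_ = np.linalg.lstsq(A, w, rcond=None)
    W = np.array([[w11, w12], [w12, w22]])
    eigval, eigvec = np.linalg.eigh(W)            # eigenvalues in ascending order
    v_fast = 1.0 / np.sqrt(eigval[0])             # smallest 1/V^2 -> fastest direction
    v_slow = 1.0 / np.sqrt(eigval[1])
    fast_azimuth = np.degrees(np.arctan2(eigvec[1, 0], eigvec[0, 0])) % 180.0
    return v_fast, v_slow, fast_azimuth
```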


1995 ◽  
Vol 35 (1) ◽  
pp. 65
Author(s):  
S.I. Mackie ◽  
C.M. Gumley

The Dirkala Field is located in the southern Murta Block of PELs 5 and 6 in the southern Cooper and Eromanga Basins. Excellent oil production from a single reservoir sandstone in the Jurassic Birkhead Formation in Dirkala-1 had indicated a potentially larger resource than could be mapped volumetrically. The hypothesis that the resource was stratigraphically trapped led to the need to define the fluvial sand reservoir seismically and thereby prepare for future development. A small (16 km²) 3D seismic survey was acquired over the area in December 1992. The project was designed not only to evaluate the limits of the Birkhead sand but also to evaluate the cost efficiency of recording such small 3D surveys in the basin. Interpretation of the data set, integrated with seismic modelling and seismic attribute analysis, delineated a thin Birkhead fluvial channel sand reservoir. Geological pay mapping matched volumetric estimates from production performance data. Structural mapping showed Dirkala-1 to be optimally placed and that no further development drilling was justifiable. Seismic characteristics comparable with those of the Dirkala-1 Birkhead reservoir were noted in another area of the survey beyond field limits. This led to the proposal to drill an exploration well, Dirkala South-1, which discovered a new oil pool in the Birkhead Formation. A post-well audit of the pre-drill modelling confirmed that the seismic response could be used to determine the presence of the Birkhead channel sand reservoir. The acquisition of the Dirkala 3D seismic survey demonstrated the feasibility of conducting small 3D seismic surveys to identify subtle stratigraphically trapped Eromanga Basin accumulations at lower cost and risk than appraisal/development drilling based on 2D seismic data.


Geophysics ◽  
2002 ◽  
Vol 67 (3) ◽  
pp. 830-839 ◽  
Author(s):  
Stéphane Gesbert

This paper addresses the issue of the sensitivity of 3‐D prestack depth migration (PSDM) with respect to the acquisition geometry of 3‐D seismic surveys. Using the theoretical framework of PSDM, I show how acquisition‐related imaging artifacts—the acquisition footprints—can arise. I then show how the acquisition footprint can be suppressed in two steps by (1) partitioning the 3‐D survey into minimal data sets, each to be migrated separately, and (2) applying a robust variable‐geometry PSDM quadrature. The validity of the method is demonstrated on synthetic parallel and antiparallel multistreamer data and cross‐spread data. The proposed two‐step solution can play an important role in projects where amplitude integrity and fidelity are paramount, e.g., quantitative interpretation and time‐lapse surveying. The concept of minimal data also fills a gap in understanding the relation between acquisition and imaging.


2021 ◽  
Vol 12 ◽  
Author(s):  
Li Xu ◽  
Yin Xu ◽  
Tong Xue ◽  
Xinyu Zhang ◽  
Jin Li

Motivation: The emergence of single-cell RNA sequencing (scRNA-seq) technology has paved the way for measuring RNA levels at single-cell resolution to study precise biological functions. However, the presence of a large number of missing values in these data affects downstream analysis. This paper presents AdImpute, an imputation method based on semi-supervised autoencoders. The method uses the output of another imputation method (DrImpute is used as an example) as imputation weights for the autoencoder and applies a cost function incorporating these weights to learn the latent information in the data, achieving more accurate imputation. Results: As shown in clustering experiments with simulated and real data sets, AdImpute is more accurate than four other publicly available scRNA-seq imputation methods and minimally modifies the biologically silent genes. Overall, AdImpute is an accurate and robust imputation method.
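
The abstract does not give AdImpute's architecture, so the following is only a minimal sketch of the general idea of a weighted-cost autoencoder for imputation, written with PyTorch. The layer sizes, variable names, and the weighting scheme (in which a prior imputation such as DrImpute output guides reconstruction of unobserved entries) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class WeightedAutoencoder(nn.Module):
    """Sketch of an autoencoder whose reconstruction is steered by imputation
    weights; not the published AdImpute architecture."""
    def __init__(self, n_genes, n_latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(),
                                     nn.Linear(256, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_genes))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def weighted_imputation_loss(recon, x, prior, weights):
    """Cost combining fidelity to observed (nonzero) entries with fidelity to a
    prior imputation (e.g. DrImpute output), balanced by per-entry weights
    (an assumed weighting scheme)."""
    observed = (x > 0).float()
    loss_obs = ((recon - x) ** 2 * observed).mean()
    loss_prior = ((recon - prior) ** 2 * weights * (1.0 - observed)).mean()
    return loss_obs + loss_prior
```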


Geophysics ◽  
2018 ◽  
Vol 83 (1) ◽  
pp. O1-O13 ◽  
Author(s):  
Anders U. Waldeland ◽  
Hao Zhao ◽  
Jorge H. Faccipieri ◽  
Anne H. Schistad Solberg ◽  
Leiv-J. Gelius

The common-reflection-surface (CRS) method offers a stack with a higher signal-to-noise ratio at the cost of a time-consuming semblance search to obtain the stacking parameters. We have developed a fast method for extracting the CRS parameters using local slope and curvature. We estimate the slope and curvature with the gradient structure tensor and quadratic structure tensor on stacked data. This is done under the assumption that a stacking velocity is already available. Our method was compared with an existing slope-based method, in which the slope is extracted from prestack data. An experiment on synthetic data shows that our method has increased robustness against noise compared with the existing method. When applied to two real data sets, our method achieves accuracy comparable with the pragmatic and full semblance searches, while being approximately two and four orders of magnitude faster, respectively.
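
A minimal sketch of local-slope estimation with a gradient structure tensor on a stacked section is shown below; the smoothing lengths and the closed-form eigen-decomposition are generic choices rather than the authors' implementation, and curvature estimation with the quadratic structure tensor is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_slope_gst(section, sigma_grad=1.0, sigma_tensor=4.0):
    """Local reflection slope (time samples per trace) from a stacked section
    using the gradient structure tensor. A generic sketch with assumed
    smoothing lengths; `section` has shape (n_time_samples, n_traces)."""
    section = np.asarray(section, dtype=float)
    # smoothed image gradients along time (axis 0) and trace (axis 1)
    gt = gaussian_filter(section, sigma_grad, order=(1, 0))
    gx = gaussian_filter(section, sigma_grad, order=(0, 1))
    # structure-tensor components, averaged over a local Gaussian window
    Jtt = gaussian_filter(gt * gt, sigma_tensor)
    Jxx = gaussian_filter(gx * gx, sigma_tensor)
    Jtx = gaussian_filter(gt * gx, sigma_tensor)
    # dominant eigenvector of J points along the local gradient (normal to events)
    lam1 = 0.5 * (Jtt + Jxx) + 0.5 * np.sqrt((Jtt - Jxx) ** 2 + 4.0 * Jtx ** 2)
    v_t = lam1 - Jxx
    v_x = Jtx
    # for an event d(t - p*x) the gradient is proportional to (1, -p), so p = -v_x / v_t
    return -v_x / (v_t + 1e-12)
```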


2018 ◽  
Author(s):  
Ricardo Guedes ◽  
Vasco Furtado ◽  
Tarcísio Pequeno ◽  
Joel Rodrigues

The article investigates policies to help emergency-centre authorities dispatch resources with respect to goals such as response time, the number of unattended calls, the attendance of priority calls, and the cost of vehicle displacement. The Pareto set is shown to be the appropriate way to represent dispatch policies, since it naturally fits the challenges of multi-objective optimization. Using the concept of Pareto dominance, a set of objectives can be ordered in a way that guides the dispatch of resources. Instead of manually trying to identify the best dispatching strategy, a multi-objective evolutionary algorithm coupled with an Emergency Call Simulator automatically uncovers the best approximation of the optimal Pareto set, which indicates the importance of each objective and, consequently, the order in which calls are attended. The validation scenario is a large metropolis in Brazil, using one year of real data from 911 calls. Comparisons are made with traditional policies proposed in the literature, as well as with other innovative policies inspired by domains such as computer science and operational research. The results show that ranking calls according to a Pareto set discovered by the evolutionary method is a good option because it has the second-best (lowest) waiting time, serves almost 100% of priority calls, is the second most economical, and ranks second in the number of calls attended. That is, it is a strategy in which all four dimensions are considered without major impairment to any of them.
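
To illustrate the ordering idea, the sketch below implements plain Pareto dominance and extracts the non-dominated set from a list of candidate dispatch scores. The objectives and the example values are hypothetical, not the paper's encoding.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` Pareto-dominates `b` (all objectives to be minimized):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Sequence[float]]) -> List[int]:
    """Indices of the non-dominated points (the Pareto set)."""
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(i)
    return front

# hypothetical candidates scored on (response time, cost, unattended calls)
candidates = [(4.0, 10.0, 2), (3.5, 12.0, 1), (5.0, 9.0, 3), (3.5, 12.0, 2)]
print(pareto_front(candidates))   # -> [0, 1, 2]
```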


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian). Instead, their distribution is often skewed. In order to handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers, and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations as well as an estimator of the transformation parameter that is robust to outliers, so the transformed data can be approximately normal in the center and a few outliers may deviate from it. It compares favorably to existing techniques in an extensive simulation study and on real data.
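
For reference, the sketch below implements the standard Yeo–Johnson transformation that the paper robustifies; the authors' robust estimator of the transformation parameter is not reproduced here. The classical maximum-likelihood fit (e.g. scipy.stats.yeojohnson) would be the non-robust baseline.

```python
import numpy as np

def yeo_johnson(x, lam):
    """Standard Yeo-Johnson transformation with parameter `lam`.
    (Only the transformation itself; parameter estimation is out of scope.)"""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos, neg = x >= 0, x < 0
    if abs(lam) > 1e-12:
        out[pos] = ((x[pos] + 1.0) ** lam - 1.0) / lam
    else:
        out[pos] = np.log1p(x[pos])
    if abs(lam - 2.0) > 1e-12:
        out[neg] = -(((-x[neg] + 1.0) ** (2.0 - lam) - 1.0) / (2.0 - lam))
    else:
        out[neg] = -np.log1p(-x[neg])
    return out
```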

