Planned seismic imaging using explicit one-way operators

Geophysics ◽  
2005 ◽  
Vol 70 (5) ◽  
pp. S101-S109 ◽  
Author(s):  
Robert J. Ferguson ◽  
Gary F. Margrave

A method is presented to compare the accuracy and computational cost of explicit one-way extrapolation operators as used in seismic imaging. For a given model, accuracy is measured in terms of lateral positioning error, and cost is calculated relative to the cost of the spatial fast Fourier transform. The result is a planned imaging scheme that achieves the greatest accuracy with respect to the velocity model for a fixed cost. To demonstrate, we use a 2D section of the EAGE/SEG salt model and assemble a suite of the most common operators. The data are imaged using each operator individually, and the results are compared to the result from the plan-based algorithm. The planned image is shown to return improved accuracy for no additional cost.
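As a rough, generic illustration of what an explicit one-way extrapolation operator is (a short space-domain convolution filter approximating the exact phase-shift response), the sketch below builds a toy operator for one frequency by truncating the inverse FFT of the exact response and compares one depth step against the FFT-based phase shift. The grid, frequency, velocity, and operator length are assumed values, and the simple truncation design is a toy, not one of the published operator designs (e.g., Hale's) compared in the paper.

```python
import numpy as np

# Toy explicit one-way extrapolator for a single frequency slice (illustration only).
nx, dx, dz = 512, 10.0, 10.0            # grid size and spacings (assumed)
freq, vel = 25.0, 2000.0                # frequency (Hz) and velocity (m/s) (assumed)
w = 2.0 * np.pi * freq

kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
kz2 = (w / vel) ** 2 - kx ** 2
kz = np.sqrt(np.maximum(kz2, 0.0))
exact = np.where(kz2 > 0.0, np.exp(1j * kz * dz), 0.0)   # exact phase shift, evanescent part zeroed

# Space-domain operator: inverse FFT of the exact response, truncated to L samples.
L = 19                                                   # operator length (assumed)
full = np.fft.fftshift(np.fft.ifft(exact))
op = full[nx // 2 - L // 2: nx // 2 + L // 2 + 1]

# One depth step applied two ways to an arbitrary wavefield slice.
psi = np.random.randn(nx) + 1j * np.random.randn(nx)
explicit_step = np.convolve(psi, op, mode="same")        # cost ~ L multiplies per output sample
fourier_step = np.fft.ifft(np.fft.fft(psi) * exact)      # cost ~ 2 spatial FFTs per step

print(np.max(np.abs(explicit_step - fourier_step)))      # truncation error of the toy operator
```

The trade-off the abstract describes is visible here: a longer operator is more accurate but costs more per depth step, and the planned scheme chooses the cheapest operator that meets the positioning-error tolerance implied by the velocity model.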

Geophysics ◽  
1994 ◽  
Vol 59 (10) ◽  
pp. 1551-1560 ◽  
Author(s):  
David N. Whitcombe ◽  
Eugene H. Murray ◽  
Laurie A. St. Aubin ◽  
Randall J. Carroll

Inconsistencies in fault positioning between overlapping 3-D seismic surveys over the northwestern part of the Endicott Field highlighted lateral positioning errors on the order of 1000 ft (about 300 m) in the seismic images. This large uncertainty in fault positioning placed a high, and often unacceptable, risk on the placement of wells. To quantify and correct for the seismic positioning error, 3-D velocity models were developed for ray-trace modeling. The resulting lateral positioning error maps revealed significant variations in mispositioning within the Endicott Field, caused mainly by lateral variations in permafrost thickness. These maps have been used to correct the positions of mapped features and have enabled several wells, previously considered too risky to place optimally, to be placed successfully close to major faults. The seismic data were 3-D poststack depth migrated with the final velocity model, producing a repositioned image consistent with the ray-trace predictions. A general enhancement of the imaging also improved interpretability and enabled the remapping and subsequent successful development of the peripheral Sag Delta North accumulation.



2001 ◽  
Vol 11 (09) ◽  
pp. 1563-1579 ◽  
Author(s):  
JIM DOUGLAS ◽  
SEONGJAI KIM

Classical alternating direction (AD) and fractional step (FS) methods for parabolic equations, based on some standard implicit time-stepping procedure such as Crank–Nicolson, can have errors associated with the AD or FS perturbations that are much larger than the errors associated with the underlying time-stepping procedure. We show that minor modifications in the AD and FS procedures can virtually eliminate the perturbation errors at an additional computational cost that is less than 10% of the cost of the original AD or FS method. Moreover, after these modifications, the AD and FS procedures produce identical approximations of the solution of the differential problem. It is also shown that the same perturbation of the Crank–Nicolson procedure can be obtained with AD and FS methods associated with the backward Euler time-stepping scheme. An application of the same concept is presented for second-order wave equations.
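For context, the classical factored (alternating-direction) form of Crank–Nicolson for a problem split as $u_t = (A_1 + A_2)u$, written here in standard textbook notation rather than the authors' modified procedure, is

```latex
\Bigl(I - \tfrac{\Delta t}{2}A_1\Bigr)\Bigl(I - \tfrac{\Delta t}{2}A_2\Bigr)u^{n+1}
  = \Bigl(I + \tfrac{\Delta t}{2}A_1\Bigr)\Bigl(I + \tfrac{\Delta t}{2}A_2\Bigr)u^{n}.
```

Expanding both products shows that this differs from the unsplit Crank–Nicolson step only by the splitting perturbation $\tfrac{\Delta t^{2}}{4}A_1A_2\,(u^{n+1}-u^{n})$ added to the left-hand side; perturbations of exactly this type are what the modified AD and FS procedures described above are designed to suppress.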


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. S51-S59 ◽  
Author(s):  
Dmitrii Merzlikin ◽  
Sergey Fomel

Diffraction imaging aims to emphasize small subsurface objects, such as faults, fracture swarms, and channels. Similar to classical reflection imaging, velocity analysis is crucially important for accurate diffraction imaging. Path-summation migration provides an imaging method that produces an image of the subsurface without picking a velocity model. Previous methods of path-summation imaging involve a discrete summation of the images corresponding to all possible migration velocity distributions within a predefined integration range and thus involve a significant computational cost. We have developed a direct analytical formula for path-summation imaging based on the continuous integration of the images along the velocity dimension, which reduces the cost to that of only two fast Fourier transforms. The analytic approach also enabled automatic migration velocity extraction from diffractions using a double path-summation migration framework. Synthetic and field data examples confirm the efficiency of the proposed techniques.
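For readers who want to see the baseline that the analytical formula replaces, the hedged sketch below performs the discrete version of path summation: a toy constant-velocity Kirchhoff diffraction stack is run for each velocity in a fan and the images are averaged. The migration routine, sampling parameters, and velocity range are illustrative assumptions; the paper's contribution is precisely to avoid this loop through a closed-form, two-FFT evaluation.

```python
import numpy as np

def kirchhoff_const_v(data, dt, dx, v):
    """Toy constant-velocity zero-offset Kirchhoff migration (diffraction stack).
    data: (nt, nx) zero-offset section; returns an image of the same shape."""
    nt, nx = data.shape
    t = np.arange(nt) * dt
    x = np.arange(nx) * dx
    image = np.zeros_like(data)
    for ix_img in range(nx):              # image trace
        for it_img in range(nt):          # image (two-way vertical) time
            t0 = t[it_img]
            # diffraction traveltime from the image point to every surface trace
            t_diff = np.sqrt(t0 ** 2 + (2.0 * (x - x[ix_img]) / v) ** 2)
            it_diff = np.rint(t_diff / dt).astype(int)
            ok = it_diff < nt
            image[it_img, ix_img] = data[it_diff[ok], np.nonzero(ok)[0]].sum()
    return image

def path_summation_discrete(data, dt, dx, v_min, v_max, n_v):
    """Discrete path-summation image: average constant-velocity images over a fan.
    Cost is n_v full migrations, which the analytical formula reduces to two FFTs."""
    velocities = np.linspace(v_min, v_max, n_v)
    return sum(kirchhoff_const_v(data, dt, dx, v) for v in velocities) / n_v

# Example (hypothetical parameters):
# image = path_summation_discrete(section, dt=0.004, dx=12.5, v_min=1500, v_max=3500, n_v=20)
```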


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCA129-WCA139 ◽  
Author(s):  
Jin-Hai Zhang ◽  
Shu-Qin Wang ◽  
Zhen-Xing Yao

Computational cost is a major factor that inhibits the practical application of 3D depth migration. We have developed a fast parallel scheme to speed up 3D wave-equation depth migration on a parallel computing device, i.e., on graphics processing units (GPUs). The third-order optimized generalized-screen propagator is used to take advantage of the built-in software implementation of the fast Fourier transform. The propagator is coded as a sequence of kernels that can be called from the host for each frequency component. Moving the wavefield extrapolation for each depth level to the GPUs allows handling a large 3D velocity model, but this scheme achieves only a limited speedup over the CPU implementation because of the low-bandwidth data transfer between host and device. We obtain a further speedup by minimizing this low-bandwidth transfer: the 3D velocity model and imaged data are kept in device memory, and their memory demand is halved by storing them as integer arrays instead of float arrays. By incorporating a 2D tapered function, a time-shift propagator, and the scaling of the inverse Fourier transform into a compact kernel, the computation time is reduced greatly. Three-dimensional impulse responses and synthetic data examples demonstrate that the GPU-based Fourier migration typically is 25 to 40 times faster than the CPU-based implementation. It enables us to image complex media using 3D depth migration with little concern for computational cost, and the macrovelocity model can be built in a much shorter turnaround time.
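One of the memory-saving devices mentioned above, storing the velocity model and image as integer rather than float arrays, can be sketched independently of any GPU code. The linear int16 quantization below is an assumed scheme, chosen because it halves float32 storage, which matches the stated saving; the paper does not spell out the exact mapping.

```python
import numpy as np

def compress_velocity(vel):
    """Map float32 velocities linearly onto the int16 range; returns the codes
    plus the (v_min, v_max) pair needed to decompress."""
    v_min, v_max = float(vel.min()), float(vel.max())
    scale = 65534.0 / (v_max - v_min)
    codes = np.round((vel - v_min) * scale - 32767.0).astype(np.int16)
    return codes, v_min, v_max

def decompress_velocity(codes, v_min, v_max):
    scale = (v_max - v_min) / 65534.0
    return (codes.astype(np.float32) + 32767.0) * scale + v_min

vel = np.random.uniform(1500.0, 4500.0, size=(100, 100, 100)).astype(np.float32)
codes, lo, hi = compress_velocity(vel)
vel_back = decompress_velocity(codes, lo, hi)
print(vel.nbytes, codes.nbytes)           # ~4 MB of float32 vs ~2 MB of int16
print(np.max(np.abs(vel - vel_back)))     # quantization error: a few hundredths of a m/s here
```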


2014 ◽  
Vol 25 (12) ◽  
pp. 1441002 ◽  
Author(s):  
Yanbiao Gan ◽  
Aiguo Xu ◽  
Guangcai Zhang ◽  
Junqi Wang ◽  
Xijun Yu ◽  
...  

We present a highly efficient lattice Boltzmann (LB) kinetic model for thermal liquid–vapor systems. Its three key components are: (i) a discrete velocity model (DVM) by Kataoka et al. [Phys. Rev. E 69, 035701(R) (2004)]; (ii) a forcing term I_i that describes the interfacial stress and recovers the van der Waals (VDW) equation of state (EOS), following Gonnella et al. [Phys. Rev. E 76, 036703 (2007)]; and (iii) a windowed fast Fourier transform (WFFT) scheme and its inverse by our group [Phys. Rev. E 84, 046715 (2011)] for evaluating the spatial derivatives, together with a second-order Runge–Kutta (RK) finite-difference scheme for the temporal derivative in the LB equation. The model is verified and validated by well-known benchmark tests. The results recovered from the present model are consistent with previous ones [Phys. Rev. E 84, 046715 (2011)] or with theoretical analysis. The use of fewer discrete velocities, a high-order RK algorithm, and the 16th-order-accurate WFFT scheme make the model roughly 10 times more efficient than the original one, as well as more accurate.
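The numerical pattern named in component (iii), FFT-based spatial derivatives combined with second-order Runge–Kutta time stepping, can be illustrated on a much simpler problem. The sketch below applies it to 1D linear advection; it is a generic demonstration of the pattern, not the windowed FFT scheme or the lattice Boltzmann model of the paper, and all parameter values are assumptions.

```python
import numpy as np

# Spectral (FFT) spatial derivative + second-order Runge-Kutta in time,
# demonstrated on 1D advection u_t + c u_x = 0 on a periodic domain.
nx, L, c = 256, 2.0 * np.pi, 1.0
x = np.linspace(0.0, L, nx, endpoint=False)
ik = 1j * 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)   # i*k multiplier for d/dx

def dudx(u):
    return np.real(np.fft.ifft(ik * np.fft.fft(u)))

def rhs(u):
    return -c * dudx(u)

u = np.exp(-40.0 * (x - np.pi) ** 2)                   # smooth initial pulse
dt, nsteps = 1e-3, 1000
for _ in range(nsteps):                                # midpoint (RK2) time stepping
    u_half = u + 0.5 * dt * rhs(u)
    u = u + dt * rhs(u_half)

# After t = 1 the pulse has advected a distance c*t = 1, with spectral accuracy in space.
```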


Geophysics ◽  
2002 ◽  
Vol 67 (4) ◽  
pp. 1202-1212 ◽  
Author(s):  
Hervé Chauris ◽  
Mark S. Noble ◽  
Gilles Lambaré ◽  
Pascal Podvin

We present a new method based on migration velocity analysis (MVA) to estimate 2-D velocity models from seismic reflection data with no assumption on reflector geometry or the background velocity field. Classical approaches using picking on common image gathers (CIGs) must consider continuous events over the whole panel. This interpretive step may be difficult, particularly for applications to real data sets. We propose to overcome this limiting factor by considering locally coherent events. A locally coherent event can be defined whenever the imaged reflectivity locally shows lateral coherency at some location in the image cube. In the prestack depth-migrated volume obtained for an a priori velocity model, locally coherent events are picked automatically, without interpretation, and are characterized by their positions and slopes (tangent to the event). Even a single locally coherent event carries information on the unknown velocity model through the value of the slope measured in the CIG. The velocity is estimated by minimizing these slopes. We first introduce the cost function and explain its physical meaning. The theoretical developments lead to two equivalent expressions of the cost function: one formulated in the depth-migrated domain on locally coherent events in CIGs and the other in the time domain. We thus establish direct links between different methods devoted to velocity estimation: migration velocity analysis using locally coherent events and slope tomography. We finally explain how to compute the gradient of the cost function using paraxial ray tracing to update the velocity model. Our method provides smooth, inverted velocity models consistent with Kirchhoff-type migration schemes and requires neither the introduction of interfaces nor the interpretation of continuous events. As with most automatic velocity analysis methods, careful preprocessing must be applied to remove coherent noise such as multiples.
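Although the abstract does not reproduce the cost function, the depth-domain form it describes can be summarized schematically as follows (a paraphrase of "the velocity is estimated by minimizing these slopes", not the exact functional derived in the paper):

```latex
J(m) \;=\; \frac{1}{2}\sum_{k} p_k(m)^{2},
\qquad
p_k(m) \;=\; \left.\frac{\partial z}{\partial h}\right|_{(x_k,\,z_k,\,h_k)},
```

where p_k is the local slope of the k-th picked event in its common image gather (with h the offset or migration parameter) for the current model m. J(m) tends to zero as the picked events flatten, i.e., as the migrated image becomes kinematically consistent with the velocity model.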


2016 ◽  
Vol 8 (1) ◽  
pp. 355-371 ◽  
Author(s):  
Gavin Ward ◽  
Dean Baker

A new model of compression in the Upper Triassic overlying the Rhyl Field has been developed for the Keys Basin, Irish Sea. This paper highlights the significance of the overburden velocity model in revealing the true structure of the field. The advent of 3D seismic and pre-stack depth migration has improved the interpreter's knowledge of complex velocity fields, such as shallow channels, salt bodies and volcanic intrusions. The huge leaps in processing power and migration algorithms have advanced the understanding of many anomalous features, but at a price: seismic imaging has always been a balance of quality against time and cost. As surveys get bigger and velocity analyses become more automated, quality control of the basic geological assumptions becomes an even more critical factor in the processing of seismic data and in the interpretation of structure. Without knowledge of both regional and local geology, many features in the subsurface can be processed out of the seismic data by relying too heavily on processing algorithms to image the structural model. Regrettably, without an integrated approach, this sometimes results in basic geological principles taking second place to technology, and it has contributed to hiding the structure of the Rhyl Field until recently.


2018 ◽  
Vol 15 (2) ◽  
pp. 294-301
Author(s):  
Reddy Sreenivasulu ◽  
Chalamalasetti SrinivasaRao

Drilling is a hole-making process performed on machine components at the time of assembly and is encountered everywhere. In precision applications, quality and accuracy play a vital role. Industries today incur significant cost during deburring, especially in precise assemblies such as aerospace/aircraft body structures, marine works and automobiles. Burrs produced during drilling cause dimensional errors, jamming of parts and misalignment, so a deburring operation after drilling is often required, and reducing burr size has become an important topic. In this study, experiments were conducted with input parameters chosen from previous research. The effect of altering the drill geometry on thrust force and on the burr size of the drilled hole was investigated using a Taguchi design of experiments, and an optimum combination of the most significant input parameters, identified by ANOVA, was determined with Design-Expert software to minimize burr size. Drill thrust has the strongest influence on burr size, and the clearance angle of the drill bit causes variation in thrust; burr height is the response observed in this study. These results were compared with predictions from the neural network software @easy NN plus. Increasing the number of nodes increases the computational cost while decreasing the neural network error. Good agreement was found between the predictive model results and the experimental responses.


Author(s):  
James Farrow

Objectives: The SA.NT DataLink Next Generation Linkage Management System (NGLMS) stores linked data in the form of a graph (in the computer-science sense) comprised of nodes (records) and edges (record relationships or similarities). This permits efficient pre-clustering techniques based on transitive closure to form groups of records that relate to the same individual (or satisfy other selection criteria).
Approach: Only information known (or at least highly likely) to be relevant is extracted from the graph as superclusters. This operation is computationally inexpensive when the underlying information is stored as a graph and may be possible on the fly for typical clusters. More computationally intensive analysis and/or further clustering may then be performed on this smaller subgraph. Canopy clustering and the use of blocking to reduce pairwise comparisons are expressions of the same type of approach.
Results: Subclusters for manual review based on transitive closure are typically inexpensive enough to extract from the NGLMS that they are produced on demand during manual clerical review; there is no need to pre-calculate them. Once extracted, further analysis is undertaken on these smaller data groupings for visualisation and presentation for review and quality analysis. More computationally expensive techniques can be used at this point to prepare data for visualisation or to provide hints to manual reviewers.
Extracting high-recall groups of records for review, but presenting them to reviewers further grouped into high-precision groups by a second pass, has reduced the time taken by clerical reviewers at SA.NT DataLink to review a group by 30–40%. The reviewers are able to manipulate whole groups of related records at once rather than individual records.
Conclusion: Pre-clustering reduces the computational cost associated with higher-order clustering and analysis algorithms. Algorithms that scale as n^2 (or worse) are typical in comparison scenarios; breaking the problem into pieces reduces the cost roughly in proportion to the number of pieces, as sketched below. This cost reduction can make techniques feasible that would otherwise be computationally prohibitive.
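The transitive-closure pre-clustering described above amounts to taking connected components of the record-similarity graph, which a union-find structure computes in near-linear time. The sketch below is a generic illustration of that idea, not the NGLMS implementation; the record identifiers, similarity scores and the 0.8 threshold are invented for the example.

```python
# Generic transitive-closure pre-clustering via union-find (connected components).

def find(parent, a):
    while parent[a] != a:               # path halving keeps trees shallow
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

def connected_components(edges):
    """edges: iterable of (record_id_1, record_id_2, similarity) tuples."""
    parent = {}
    for a, b, _ in edges:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb             # union the two components
    clusters = {}
    for node in parent:
        clusters.setdefault(find(parent, node), []).append(node)
    return list(clusters.values())

# Hypothetical similarity links kept above a review threshold of 0.8.
links = [("r1", "r2", 0.93), ("r2", "r3", 0.85), ("r4", "r5", 0.91)]
superclusters = connected_components([e for e in links if e[2] >= 0.8])
print(superclusters)   # e.g. [['r1', 'r2', 'r3'], ['r4', 'r5']]
```

Each resulting supercluster is then small enough that quadratic pairwise comparison, or more expensive review tooling, remains affordable, which is the cost argument made in the conclusion.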


Author(s):  
Victor H. Chávez ◽  
Adam Wasserman

In some sense, quantum mechanics solves all the problems in chemistry: The only thing one has to do is solve the Schrödinger equation for the molecules of interest. Unfortunately, the computational cost of solving this equation grows exponentially with the number of electrons and for more than ~100 electrons, it is impossible to solve it with chemical accuracy (~2 kcal/mol). The Kohn-Sham (KS) equations of density functional theory (DFT) allow us to reformulate the Schrödinger equation using the electronic probability density as the central variable without having to calculate the Schrödinger wave functions. The cost of solving the Kohn-Sham equations grows only as N^3, where N is the number of electrons, which has led to the immense popularity of DFT in chemistry. Despite this popularity, even the most sophisticated approximations in KS-DFT result in errors that limit the use of methods based exclusively on the electronic density. By using fragment densities (as opposed to total densities) as the main variables, we discuss here how new methods can be developed that scale linearly with N while providing an appealing answer to the subtitle of the article: What is the shape of atoms in molecules?
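For reference, the Kohn-Sham equations mentioned above take the standard textbook form (in atomic units): a set of single-particle eigenvalue problems whose orbitals reproduce the interacting electron density. Solving them self-consistently is the source of the N^3 scaling cited in the text.

```latex
\Bigl(-\tfrac{1}{2}\nabla^{2} + v_{\mathrm{s}}[n](\mathbf{r})\Bigr)\,\varphi_i(\mathbf{r})
  \;=\; \varepsilon_i\,\varphi_i(\mathbf{r}),
\qquad
n(\mathbf{r}) \;=\; \sum_{i\,\in\,\mathrm{occ}} \lvert\varphi_i(\mathbf{r})\rvert^{2},
\qquad
v_{\mathrm{s}} \;=\; v_{\mathrm{ext}} + v_{\mathrm{H}}[n] + v_{\mathrm{xc}}[n].
```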

