An analysis of acquisition-related subsampling effects on Marchenko focusing, redatuming, and primary estimation

Geophysics ◽  
2021 ◽  
pp. 1-97
Author(s):  
Haorui Peng ◽  
Ivan Vasconcelos ◽  
Yanadet Sripanich ◽  
Lele Zhang

Marchenko methods can retrieve Green’s functions and focusing functions from single-sided reflection data and a smooth velocity model, as essential components of a redatuming process. Recent studies also indicate that a modified Marchenko scheme can reconstruct primary-only reflection responses directly from reflection data without requiring a priori model information. To provide insight into the artifacts that arise when input data are not ideally sampled, we study the effects of subsampling in both types of Marchenko methods for 2D earth models and data, analyzing the behavior of Marchenko-based results on synthetic data subsampled in sources or receivers. With a layered model, we find that for Marchenko redatuming, subsampling effects jointly depend on the choice of integration variable and the subsampled dimension, originating from the integrand gather in the multidimensional convolution process. When reflection data are subsampled in a single dimension, integrating over the other dimension yields spatial gaps together with artifacts, whereas integrating over the subsampled dimension produces aliasing artifacts but no spatial gaps. Our complex subsalt model indicates that subsampling may lead to very strong artifacts, which can be further complicated by limited apertures. For Marchenko-based primary estimation (MPE), subsampling below a certain fraction of the fully sampled data can cause MPE iterations to diverge, which can be mitigated to some extent by using more robust iterative solvers, such as least-squares QR (LSQR).
Our results, covering redatuming and primary estimation in a range of subsampling scenarios, provide insights that can inform acquisition sampling choices as well as processing parameterization and quality control, e.g., to set up appropriate data filters and scaling to accommodate the effects of dipole fields, or to help ensure that data interpolation achieves the level of reconstruction quality that minimizes subsampling artifacts in Marchenko-derived fields and images.
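The dependence on which dimension is integrated can be illustrated with a minimal numpy sketch of one frequency slice of the multidimensional convolution C = R F, where the shared (column) axis of R is the integration variable. The random matrices and the simple decimate-and-rescale scheme below are illustrative assumptions, not the paper's actual data or operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
R = rng.standard_normal((n, n))   # reflection response, one frequency slice; axis 1 is the integration axis
F = rng.standard_normal((n, n))   # focusing function
dx = 1.0                          # spatial sampling interval

# Fully sampled multidimensional convolution: C[i, j] = sum_k R[i, k] F[k, j] * dx
C_full = R @ F * dx

# Subsample the integration (shared) dimension: every output sample is still
# produced, but each is an aliased approximation of the true integral.
mask = np.zeros(n)
mask[::2] = 2.0                   # decimate by 2, rescale for the coarser sampling
C_alias = (R * mask) @ F * dx

# Subsample the non-integration dimension instead: entire output rows are
# simply missing -- spatial gaps rather than aliasing.
C_gap = C_full.copy()
C_gap[1::2, :] = 0.0
```

This mirrors the abstract's observation: decimating the integration axis degrades all output samples (aliasing, no gaps), while decimating the other axis leaves clean samples interleaved with gaps.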

Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


Geophysics ◽  
1988 ◽  
Vol 53 (3) ◽  
pp. 334-345 ◽  
Author(s):  
Ernest R. Kanasewich ◽  
Suhas M. Phadke

In routine seismic processing, normal moveout (NMO) corrections are performed to enhance the reflected signals on common‐depth‐point or common‐midpoint stacked sections. However, when faults are present, reflection interference from the two blocks and the diffractions from their edges hinder fault location determination. Destruction of diffraction patterns by poststack migration further inhibits proper imaging of diffracting centers. This paper presents a new technique which helps in the interpretation of diffracting edges by concentrating the signal amplitudes from discontinuous diffracting points on seismic sections. It involves applying moveout and amplitude corrections appropriate to an assumed diffractor location. The maximum diffraction amplitude occurs at the location of the receiver for which the diffracting discontinuity is beneath the source‐receiver midpoint. Since the amplitudes of these diffracted signals drop very rapidly on either side of the midpoint, an appropriate amplitude correction must be applied. Also, because the diffracted signals are present on all traces, one can use all of them to obtain a stacked trace for one possible diffractor location. Repetition of this procedure for diffractors assumed to be located beneath each surface point results in the common‐fault‐point (CFP) stacked section, which shows diffractor locations by high amplitudes. The method was tested on synthetic data with and without noise. It proves to be quite effective, but is sensitive to the velocity model used for moveout corrections. Therefore, the velocity model obtained from NMO stacking is generally used for enhancing diffractor locations by stacking. Finally, the technique was applied to a field reflection data set from an area south of the Princess well in Alberta.
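The moveout-and-stack step can be sketched as follows, assuming a constant-velocity medium and a single trial diffractor; the function names and amplitude handling (a plain sum, omitting the paper's amplitude correction) are illustrative, not the authors' implementation:

```python
import numpy as np

def diffraction_moveout(x_src, x_rcv, x_diff, z_diff, v):
    """Two-way traveltime from source to a point diffractor to receiver
    (constant-velocity approximation)."""
    t_down = np.hypot(x_src - x_diff, z_diff) / v
    t_up = np.hypot(x_rcv - x_diff, z_diff) / v
    return t_down + t_up

def cfp_stack(data, dt, x_srcs, x_rcvs, x_diff, z_diff, v):
    """Stack all traces along the diffraction moveout curve for one assumed
    diffractor location; a high output amplitude suggests a diffracting
    edge at (x_diff, z_diff)."""
    n_traces, n_samples = data.shape
    out = 0.0
    for i in range(n_traces):
        t = diffraction_moveout(x_srcs[i], x_rcvs[i], x_diff, z_diff, v)
        idx = int(round(t / dt))
        if 0 <= idx < n_samples:
            out += data[i, idx]
    return out
```

Repeating `cfp_stack` for trial diffractors beneath every surface point builds the CFP section described in the abstract.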


Geophysics ◽  
2005 ◽  
Vol 70 (6) ◽  
pp. S111-S120
Author(s):  
Fabio Rocca ◽  
Massimiliano Vassallo ◽  
Giancarlo Bernasconi

Seismic depth migration back-propagates seismic data in the correct depth position using information about the velocity of the medium. Usually, Kirchhoff summation is the preferred migration procedure for seismic-while-drilling (SWD) data because it can handle virtually any configuration of sources and receivers and one can compensate for irregular spatial sampling of the array elements (receivers and sources). Under the assumption of a depth-varying velocity model, with receivers arranged along a horizontal circumference and sources placed along the central vertical axis, we reformulate the Kirchhoff summation in the angular frequency domain. In this way, the migration procedure becomes very efficient because the migrated volume is obtained by an inverse Fourier transform of the weighted data. The algorithm is suitable for 3D SWD acquisitions when the aforementioned hypothesis holds. We show migration tests on SWD synthetic data, and we derive solutions to reduce the migration artifacts and to control aliasing. The procedure is also applied on a real 3D SWD data set. The result compares satisfactorily with the seismic stack section obtained from surface reflection data and with the results from traditional Kirchhoff migration.


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Jinghuai Gao ◽  
Dehua Wang ◽  
Jigen Peng

An inverse source problem in the modified Helmholtz equation is considered. We give a Tikhonov-type regularization method and set up a theoretical framework to analyze the convergence of the method. A priori and a posteriori choice rules for the regularization parameter are given. Numerical tests are presented to illustrate the effectiveness and stability of the proposed method.
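For a discretized linear inverse problem A f = b, the Tikhonov step and an a posteriori (discrepancy-principle) parameter choice can be sketched as below; the dense-matrix formulation and the candidate-grid search are simplifying assumptions for illustration, not the paper's continuous-operator analysis:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Tikhonov-regularized least squares: minimize ||A f - b||^2 + alpha ||f||^2,
    solved via the normal equations (A^T A + alpha I) f = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy_alpha(A, b, delta, alphas):
    """A posteriori choice via the discrepancy principle: among a grid of
    candidate alphas, return the largest one whose residual norm does not
    exceed the (assumed known) noise level delta."""
    for alpha in sorted(alphas, reverse=True):
        f = tikhonov_solve(A, b, alpha)
        if np.linalg.norm(A @ f - b) <= delta:
            return alpha, f
    raise ValueError("no candidate alpha meets the discrepancy criterion")
```

An a priori rule would instead fix alpha from the known noise level and smoothness assumptions before solving; the discrepancy principle above uses the computed residual.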


Geophysics ◽  
1986 ◽  
Vol 51 (1) ◽  
pp. 12-19 ◽  
Author(s):  
James F. Mitchell ◽  
Richard J. Bolander

Subsurface structure can be mapped using refraction information from marine multichannel seismic data. The method uses velocities and thicknesses of shallow sedimentary rock layers computed from refraction first arrivals recorded along the streamer. A two‐step exploration scheme is described which can be set up on a personal computer and used routinely in any office. It is straightforward and requires only a basic understanding of refraction principles. Two case histories from offshore Peru exploration demonstrate the scheme. The basic scheme is: (1) shallow sedimentary rock velocities are computed and mapped over an area; (2) structure is interpreted from the contoured velocity patterns. Structural highs, for instance, exhibit relatively high velocities, “retained” by buried, compacted, sedimentary rocks that are uplifted to the near‐surface. This method requires that subsurface structure be relatively shallow because the refracted waves probe to depths of one hundred to over one thousand meters, depending upon the seismic energy source, streamer length, and the subsurface velocity distribution. With this one requirement met, we used the refraction method over a wide range of sedimentary rock velocities, water depths, and seismic survey types. The method is particularly valuable because it works well in areas with poor seismic reflection data.
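Step (1) rests on classical slope-intercept analysis of refracted first arrivals. A minimal sketch for a single flat refractor is given below; the two-layer geometry, the known overburden velocity v1, and the synthetic picks are assumptions for illustration, not the authors' workflow:

```python
import numpy as np

def refractor_velocity_and_depth(offsets, picks, v1):
    """Slope-intercept analysis of refracted first arrivals for a single
    flat refractor: fit the head-wave line t(x) = x / v2 + t_i, then
    convert the intercept time t_i to refractor depth z for overburden
    velocity v1."""
    slope, t_i = np.polyfit(offsets, picks, 1)   # linear fit to the picks
    v2 = 1.0 / slope                             # refractor velocity
    z = t_i * v1 * v2 / (2.0 * np.sqrt(v2**2 - v1**2))
    return v2, z
```

Mapping the fitted v2 over many streamer locations gives the contoured velocity patterns from which structure is interpreted in step (2).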


Geophysics ◽  
2014 ◽  
Vol 79 (3) ◽  
pp. WA107-WA115 ◽  
Author(s):  
Filippo Broggini ◽  
Roel Snieder ◽  
Kees Wapenaar

Standard imaging techniques rely on the single scattering assumption. This requires that the recorded data do not include internal multiples, i.e., waves that have bounced multiple times between reflectors before reaching the receivers at the acquisition surface. When multiple reflections are present in the data, standard imaging algorithms incorrectly image them as ghost reflectors. These artifacts can mislead interpreters in locating potential hydrocarbon reservoirs. Recently, we introduced a new approach for retrieving the Green’s function recorded at the acquisition surface due to a virtual source located at depth. We refer to this approach as data-driven wavefield focusing. Additionally, after applying source-receiver reciprocity, this approach allowed us to decompose the Green’s function at a virtual receiver at depth into its downgoing and upgoing components. These wavefields were then used to create a ghost-free image of the medium with either crosscorrelation or multidimensional deconvolution, presenting an advantage over standard prestack migration. We tested the robustness of our approach when an erroneous background velocity model is used to estimate the first-arriving waves, which are a required input for the data-driven wavefield focusing process. We tested the new method with a numerical example based on a modification of the Amoco model.


2021 ◽  
Vol 8 (1) ◽  
pp. 205395172110135
Author(s):  
Florian Jaton

This theoretical paper considers the morality of machine learning algorithms and systems in the light of the biases that ground their correctness. It begins by presenting biases not as a priori negative entities but as contingent external referents—often gathered in benchmarked repositories called ground-truth datasets—that define what needs to be learned and allow for performance measures. I then argue that ground-truth datasets and their concomitant practices—that fundamentally involve establishing biases to enable learning procedures—can be described by their respective morality, here defined as the more or less accounted experience of hesitation when faced with what pragmatist philosopher William James called “genuine options”—that is, choices to be made in the heat of the moment that engage different possible futures. I then stress three constitutive dimensions of this pragmatist morality, as far as ground-truthing practices are concerned: (I) the definition of the problem to be solved (problematization), (II) the identification of the data to be collected and set up (databasing), and (III) the qualification of the targets to be learned (labeling). I finally suggest that this three-dimensional conceptual space can be used to map machine learning algorithmic projects in terms of the morality of their respective and constitutive ground-truthing practices. Such techno-moral graphs may, in turn, serve as equipment for greater governance of machine learning algorithms and systems.


Geophysics ◽  
2021 ◽  
pp. 1-59
Author(s):  
Evert Slob ◽  
Lele Zhang ◽  
Eric Verschuur

Marchenko multiple elimination schemes are able to attenuate all internal multiple reflections in acoustic reflection data. These can be implemented with and without compensation for two-way transmission effects in the resulting primary reflection dataset. The methods are fully automated and run without human intervention, but require the data to be properly sampled and pre-processed. Even when several primary reflections are invisible in the data because they are masked by overlapping primaries, such as in the resonant wedge model, all missing primary reflections are recovered with the proper amplitudes. Investigating the amplitudes in the primary reflections after multiple elimination, with and without compensation for transmission effects, shows that transmission effects are properly accounted for in a constant velocity model. When the layer thickness is one quarter of the wavelength at the dominant frequency of the source wavelet, the methods cease to work properly. Full wavefield migration relies on a velocity model and runs a non-linear inversion to obtain a reflectivity model which results in the migration image. The primary reflections that are masked by interference with multiples in the resonant wedge model are not recovered. In this case, minimizing the data misfit function leads to an incorrect reflector model even though the data fit is optimal. This method has much lower demands on data sampling than the multiple elimination schemes, but is prone to getting stuck in a local minimum even when the correct velocity model is available. A hybrid method that exploits the strengths of each of these methods could be worth investigating.


2021 ◽  
pp. 1-33
Author(s):  
Ozan Kaya ◽  
Gokce Burak Taglioglu ◽  
Seniz Ertugrul

Abstract In recent years, robotic applications have been improved for better object manipulation and collaboration with humans. With this motivation, the detection of objects by simple touch is studied with a series elastic parallel gripper for cases in which no visual data are available. A series elastic gripper, capable of detecting geometric properties of objects, is designed using only elastic elements and absolute encoders instead of tactile or force/torque sensors. The external force calculation is achieved by employing an estimation algorithm. Different objects are selected for recognition trials. A Deep Neural Network (DNN) model is trained on synthetic data extracted from the STL files of the selected objects. For the experimental set-up, the series elastic parallel gripper is mounted on a Staubli RX160 robot arm and objects are placed in pre-determined locations in the workspace. All objects are successfully recognized using the gripper, the force estimation, and the DNN model. The best DNN model recognizes the different objects with average prediction values ranging from 71% to 98%. Hence, the proposed gripper design and algorithm achieve recognition of the selected objects without the need for additional force/torque or tactile sensors.

