Offset-continuation stacking: Theory and proof of concept

Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. V387-V401 ◽  
Author(s):  
Tiago A. Coimbra ◽  
Amélia Novais ◽  
Jörg Schleicher

The offset-continuation operation (OCO) is a seismic configuration transform designed to simulate a seismic section as if it had been acquired with a certain source-receiver offset, using data measured with another offset. Based on this operation, we have introduced the OCO stack, a multiparameter stacking technique that transforms 2D/2.5D prestack multicoverage data into a stacked common-offset (CO) section. Like the common-midpoint and common-reflection-surface stacks, the OCO stack does not rely on an a priori velocity model but provides velocity information itself. Because the OCO depends on the velocity model used in the process, the method can be combined with trial-stacking techniques for a set of models, thus allowing for the extraction of velocity information. The algorithm stacks data along so-called OCO trajectories, which approximate the common-reflection-point trajectory, i.e., the position of a reflection event in the multicoverage data as a function of source-receiver offset, depending on the medium velocity and the local event slope. These trajectories are the ray-theoretical solutions to the OCO image-wave equation, which describes the continuous transformation of a CO reflection event from one offset to another. Stacking along trial OCO trajectories for different values of average velocity and local event slope allows us to determine horizon-based optimal parameter pairs and a final stacked section at arbitrary offset. Synthetic examples demonstrate that the OCO stack works as predicted, almost completely removing random noise added to the data and successfully recovering the reflection events.
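The trial-stacking step lends itself to a compact illustration. The Python sketch below stacks amplitudes along candidate trajectories and keeps the (velocity, slope) pair with the highest semblance; the trajectory function itself is left as a placeholder because the abstract does not give its closed form, and the trial-value grids are assumptions, not values from the paper.

import numpy as np

def semblance(amps):
    """Semblance (coherence) of amplitudes picked along one trial trajectory."""
    den = len(amps) * np.sum(amps ** 2)
    return np.sum(amps) ** 2 / den if den > 0 else 0.0

def trial_stack(data, offsets, dt, t0, trajectory, v_grid, p_grid):
    """Stack along trial trajectories t(h; v, p) and keep the most coherent one.

    data       : 2D array (n_offsets, n_samples), one trace per offset
    trajectory : callable t(h, t0, v, p) giving the trial traveltime at offset h
                 (placeholder for the OCO trajectory from the paper)
    """
    best = (-1.0, None, None, 0.0)
    for v in v_grid:
        for p in p_grid:
            t = trajectory(offsets, t0, v, p)       # traveltimes along the trajectory
            idx = np.round(t / dt).astype(int)      # nearest-sample picks
            ok = (idx >= 0) & (idx < data.shape[1])
            amps = data[np.arange(len(offsets))[ok], idx[ok]]
            s = semblance(amps)
            if s > best[0]:
                best = (s, v, p, amps.sum())        # stacked value = coherent sum
    return best  # (semblance, velocity, slope, stacked amplitude)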

Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. S229-S238 ◽  
Author(s):  
Martina Glöckner ◽  
Sergius Dell ◽  
Benjamin Schwarz ◽  
Claudia Vanelle ◽  
Dirk Gajewski

To obtain an image of the earth’s subsurface, time-imaging methods can be applied because they are reasonably fast, are less sensitive to velocity model errors than depth-imaging methods, and are usually easy to parallelize. A powerful tool for time imaging consists of a series of prestack time migrations and demigrations. We have applied multiparameter stacking techniques to obtain an initial time-migration velocity model. The velocity model building proposed here is based on the kinematic wavefield attributes of the common-reflection surface (CRS) method. A subsequent refinement of the velocities uses a coherence filter that is based on a predetermined threshold, followed by an interpolation and smoothing. Then, we perform a migration deconvolution to obtain the final time-migrated image. The migration deconvolution consists of one iteration of least-squares migration with an estimated Hessian. We estimate the Hessian by nonstationary matching filters, i.e., in a data-driven fashion. The model building uses the framework of the CRS, and the migration deconvolution is fully automated. Therefore, minimal user interaction is required to carry out the velocity model refinement and the image update. We apply the velocity refinement and migration deconvolution approaches to complex synthetic and field data.
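The velocity refinement step described above (coherence thresholding, then interpolation and smoothing of the surviving picks) can be sketched compactly. The Python below is a minimal illustration under assumed threshold and smoothing values, not the authors' implementation; the migration deconvolution itself is omitted.

import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def refine_velocities(v, coherence, threshold=0.3, sigma=2.0):
    """Keep only velocity picks with sufficient CRS coherence, then
    interpolate the gaps and smooth the result.

    v, coherence : 2D arrays (t0, midpoint) of picked migration velocities
                   and their CRS semblance; threshold and sigma are assumed values.
    """
    mask = coherence >= threshold
    pts = np.argwhere(mask)                       # coordinates of trusted picks
    vals = v[mask]
    grid = np.argwhere(np.ones_like(v, dtype=bool))
    v_filled = griddata(pts, vals, grid, method="linear")
    # fall back to nearest-neighbour values where linear interpolation is undefined
    nn = griddata(pts, vals, grid, method="nearest")
    v_filled = np.where(np.isnan(v_filled), nn, v_filled).reshape(v.shape)
    return gaussian_filter(v_filled, sigma=sigma)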


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. V137-V148 ◽  
Author(s):  
Pierre Turquais ◽  
Endrias G. Asgedom ◽  
Walter Söllner

We have addressed the seismic data denoising problem, in which the noise is random and has an unknown spatiotemporally varying variance. In seismic data processing, random noise is often attenuated using transform-based methods. The success of these methods in denoising depends on the ability of the transform to efficiently describe the signal features in the data. Fixed transforms (e.g., wavelets, curvelets) do not adapt to the data and might fail to efficiently describe complex morphologies in the seismic data. Alternatively, dictionary learning methods adapt to the local morphology of the data and provide state-of-the-art denoising results. However, conventional denoising by dictionary learning requires a priori information on the noise variance, and it encounters difficulties when applied for denoising seismic data in which the noise variance is varying in space or time. We have developed a coherence-constrained dictionary learning (CDL) method for denoising that does not require any a priori information related to the signal or noise. To denoise a given window of a seismic section using CDL, overlapping small 2D patches are extracted and a dictionary of patch-sized signals is trained to learn the elementary features embedded in the seismic signal. For each patch, using the learned dictionary, a sparse optimization problem is solved, and a sparse approximation of the patch is computed to attenuate the random noise. Unlike conventional dictionary learning, the sparsity of the approximation is constrained based on coherence such that it does not need a priori noise variance or signal sparsity information and is still optimal to filter out Gaussian random noise. The denoising performance of the CDL method is validated using synthetic and field data examples, and it is compared with the K-SVD and FX-Decon denoising. We found that CDL gives better denoising results than K-SVD and FX-Decon for removing noise when the variance varies in space or time.
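A minimal sketch of the per-patch sparse approximation step is given below in Python. It uses orthogonal matching pursuit with a coherence-based stopping rule, i.e., the pursuit stops once no dictionary atom correlates with the residual above a chosen threshold, so neither the noise variance nor the signal sparsity needs to be known. The exact constraint used by CDL, and the dictionary-update step, are in the paper; this is an assumed simplification.

import numpy as np

def sparse_code_coherence(patch, D, mu):
    """Orthogonal matching pursuit with a coherence-based stopping rule.

    patch : flattened 2D data patch (1D array)
    D     : dictionary with unit-norm columns (e.g., learned from the data)
    mu    : coherence threshold in (0, 1); this stopping criterion is an
            illustrative assumption, not the exact CDL rule.
    """
    r = patch.copy()
    support = []
    while True:
        corr = D.T @ r
        j = int(np.argmax(np.abs(corr)))
        # stop when the best atom no longer correlates with the residual above mu
        if np.abs(corr[j]) / (np.linalg.norm(r) + 1e-12) < mu or len(support) >= D.shape[1]:
            break
        support.append(j)
        # least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], patch, rcond=None)
        r = patch - D[:, support] @ coef
    return D[:, support] @ coef if support else np.zeros_like(patch)  # denoised patch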


2015 ◽  
Author(s):  
Jonathan M. Koller ◽  
M. Jonathan Vachon ◽  
G. Larry Bretthorst ◽  
Kevin J. Black

We recently described rapid quantitative pharmacodynamic imaging, a novel method for estimating sensitivity of a biological system to a drug. We tested its accuracy in simulated biological signals with varying receptor sensitivity and varying levels of random noise, and presented initial proof-of-concept data from functional MRI (fMRI) studies in primate brain. However, the initial simulation testing used a simple iterative approach to estimate pharmacokinetic-pharmacodynamic (PKPD) parameters, an approach that was computationally efficient but returned parameters only from a small, discrete set of values chosen a priori. Here we revisit the simulation testing using a Bayesian method to estimate the PKPD parameters. This improved accuracy compared to our previous method, and noise without intentional signal was never interpreted as signal. We also reanalyze the fMRI proof-of-concept data. The success with the simulated data, and with the limited fMRI data, is a necessary first step toward further testing of rapid quantitative pharmacodynamic imaging.
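To make the change of estimation strategy concrete, the Python sketch below computes a posterior density over a continuous drug-sensitivity parameter instead of selecting from a discrete set of candidate values. The exponential response model, the flat prior over the grid, and the known noise level are illustrative assumptions, not the authors' PKPD model.

import numpy as np

def posterior_sensitivity(t, y, k_grid, sigma):
    """Posterior over a drug-sensitivity parameter k for a toy response model
    r(t; k) = 1 - exp(-k * t), under a Gaussian noise likelihood.
    Model, prior, and sigma are illustrative assumptions.
    """
    log_post = np.empty(len(k_grid))
    for i, k in enumerate(k_grid):
        pred = 1.0 - np.exp(-k * t)
        log_post[i] = -0.5 * np.sum((y - pred) ** 2) / sigma**2  # Gaussian log-likelihood
    log_post -= log_post.max()                 # stabilise the exponential
    post = np.exp(log_post)
    post /= np.trapz(post, k_grid)             # normalise to a density over k
    return post  # posterior mean and credible intervals follow from this density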


1974 ◽  
Vol 14 (1) ◽  
pp. 107 ◽
Author(s):  
John Wardell

Since the introduction of the common depth point method of seismic reflection shooting, we have seen a continued increase in the multiplicity of subsurface coverage, to the point where nowadays a large proportion of offshore shooting uses a 48-fold, 48-trace configuration. Of the many benefits obtained from this multiplicity of coverage, the attenuation of multiple reflections during the common depth point stacking process is one of the most important.

Examination of theoretical response curves for multiple attenuation in common depth point stacking shows that although increased multiplicity does give improved multiple attenuation, this improvement occurs at higher and higher frequencies and residual moveouts (of the multiples) as the multiplicity continues to increase. For multiplicities greater than 12, the improvement is at relatively high frequencies and residual moveouts, while there is no significant improvement for the lower frequencies of multiples with smaller residual moveouts, which unfortunately are those most likely to remain visible after the stacking process.

The simple process of zeroing, or muting, certain selected traces (mostly the shorter offset traces) before stacking can give an average 6 to 9 decibels of improvement over a wide range of the low-frequency and residual-moveout part of the stack response, with 9-15 decibels of improvement over parts of this range. The cost of this improvement is an increase in random noise level of 1-2 decibels. With digital processing methods, it is easy to zero the necessary traces over selected portions of the seismic section if so desired.

The process does not require a detailed knowledge of the multiple residual moveouts, but can be used on a routine basis in areas where strong multiples are a problem and a high stacking multiplicity is being used.
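The muting scheme amounts to zeroing selected traces of each NMO-corrected gather before the stack, as the Python sketch below illustrates; the offset cutoff is an assumed parameter that would in practice be chosen from the residual moveouts of the multiples.

import numpy as np

def mute_and_stack(gather, offsets, min_offset):
    """Zero (mute) the shorter-offset traces of a CDP gather before stacking.

    gather     : 2D array (n_traces, n_samples), NMO-corrected CDP gather
    offsets    : source-receiver offset of each trace
    min_offset : traces with |offset| below this cutoff are muted (assumed value)
    """
    keep = np.abs(offsets) >= min_offset
    live = np.count_nonzero(keep)
    if live == 0:
        raise ValueError("mute removes every trace")
    # stack only the live traces; normalising by their number keeps amplitudes comparable
    return gather[keep].sum(axis=0) / live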


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 97-109 ◽  
Author(s):  
Rainer Jäger ◽  
Jürgen Mann ◽  
German Höcht ◽  
Peter Hubral

The common-reflection-surface stack provides a zero-offset simulation from seismic multicoverage reflection data. Whereas conventional reflection imaging methods (e.g., the NMO/dip-moveout/stack or prestack migration) require a sufficiently accurate macrovelocity model to yield appropriate results, the common-reflection-surface (CRS) stack does not depend on a macrovelocity model. We apply the CRS stack to a 2-D synthetic seismic multicoverage dataset. We show that it not only provides a high-quality simulated zero-offset section but also three important kinematic wavefield attribute sections, which can be used to derive the 2-D macrovelocity model. We compare the multicoverage-data-derived attributes with the model-derived attributes computed by forward modeling. We thus confirm the validity of the theory and of the data-derived attributes. For 2-D acquisition, the CRS stack leads to a stacking surface depending on three search parameters. The optimum stacking surface needs to be determined for each point of the simulated zero-offset section. For a given primary reflection, these are the emergence angle α of the zero-offset ray, as well as two radii of wavefront curvature, R_N and R_NIP. They are all associated with two hypothetical waves: the so-called normal wave and the normal-incidence-point wave. We also address the problem of determining an optimal parameter triplet (α, R_N, R_NIP) in order to construct the sample value (i.e., the CRS stack value) for each point in the desired simulated zero-offset section. This optimal triplet is expected to determine for each point the best stacking surface that can be fitted to the multicoverage primary reflection events. To make the CRS stack attractive in terms of computational costs, a suitable strategy is described to determine the optimal parameter triplets for all points of the simulated zero-offset section. For the implementation of the CRS stack, we make use of the hyperbolic second-order Taylor expansion of the stacking surface. This representation is not only suitable to handle irregular multicoverage acquisition geometries but also enables us to introduce simple and efficient search strategies for the parameter triplet. In specific subsets of the multicoverage data (e.g., in the common-midpoint gathers or the zero-offset section), the chosen representation depends on only one or two independent parameters, respectively.
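For reference, the hyperbolic second-order traveltime used as the CRS stacking surface can be written down directly. The Python sketch below evaluates the standard 2D form for a trial attribute triplet; the coherence of the data along this surface is then what the parameter search maximizes. Variable names and the search strategy around it are illustrative.

import numpy as np

def crs_traveltime(xm, h, x0, t0, v0, alpha, r_nip, r_n):
    """Hyperbolic second-order CRS traveltime t(xm, h) around (x0, t0).

    xm, h : midpoint and half-offset coordinates (arrays or scalars)
    v0    : near-surface velocity
    alpha : emergence angle of the zero-offset ray
    r_nip : radius of curvature of the normal-incidence-point wave
    r_n   : radius of curvature of the normal wave
    """
    dx = xm - x0
    a = (t0 + 2.0 * np.sin(alpha) * dx / v0) ** 2
    b = 2.0 * t0 * np.cos(alpha) ** 2 / v0 * (dx ** 2 / r_n + h ** 2 / r_nip)
    return np.sqrt(a + b)

In a common-midpoint gather (dx = 0) this surface reduces to a one-parameter moveout in h, which is what makes the pragmatic, subset-by-subset search strategies mentioned above possible.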


Author(s):  
William Demopoulos ◽  
Peter Clark

This article is organized around logicism's answers to the following questions: What is the basis for our knowledge of the infinity of the numbers? How is arithmetic applicable to reality? Why is reasoning by induction justified? Although there are, as is seen in this article, important differences, the common thread that runs through all three of the authors discussed in this article is their opposition to the Kantian thesis that reflection on reasoning with mere concepts (i.e., without attention to intuitions formed a priori) can never succeed in providing satisfactory answers to these three questions. This description of the core of the view differs from more usual formulations, which represent the opposition to Kant as an opposition to the contention that mathematics in general, and arithmetic in particular, is synthetic a priori rather than analytic.


BMC Cancer ◽  
2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Cheng KKF ◽  
S. A. Mitchell ◽  
N. Chan ◽  
E. Ang ◽  
W. Tam ◽  
...  

Background: The aim of this study was to translate and linguistically validate the U.S. National Cancer Institute’s Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE™) into Simplified Chinese for use in Singapore. Methods: All 124 items of the English source PRO-CTCAE item library were translated into Simplified Chinese using internationally established translation procedures. Two rounds of cognitive interviews were conducted with 96 cancer patients undergoing adjuvant treatment to determine whether the translations adequately captured the PRO-CTCAE source concepts, and to evaluate comprehension, clarity, and ease of judgement. Interview probes addressed the 78 PRO-CTCAE symptom terms (e.g., fatigue), as well as the attributes (e.g., severity), response choices, and phrasing of ‘at its worst’. Items that met the a priori threshold of ≥ 20% of participants with comprehension difficulties were considered for rephrasing and retesting. Items for which < 20% of the sample experienced comprehension difficulties were also considered for rephrasing if better phrasing options were available. Results: A majority of PRO-CTCAE-Simplified Chinese items were well comprehended by participants in Round 1. One item posed comprehension difficulties for ≥ 20% of participants and was revised. Two items presented difficulties for < 20% but were revised because preferred alternative phrasings were available. Twenty-four items presented difficulties for < 10% of respondents; of these, eleven were revised to an alternative preferred phrasing and four were revised to include synonyms. Revised items were tested in Round 2 and demonstrated satisfactory comprehension. Conclusions: PRO-CTCAE-Simplified Chinese has been successfully developed and linguistically validated in a sample of cancer patients residing in Singapore.


Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. S47-S61 ◽  
Author(s):  
Paul Sava ◽  
Oleg Poliannikov

The fidelity of depth seismic imaging depends on the accuracy of the velocity models used for wavefield reconstruction. Models can be decomposed in two components, corresponding to large-scale and small-scale variations. In practice, the large-scale velocity model component can be estimated with high accuracy using repeated migration/tomography cycles, but the small-scale component cannot. When the earth has significant small-scale velocity components, wavefield reconstruction does not completely describe the recorded data, and migrated images are perturbed by artifacts. There are two possible ways to address this problem: (1) improve wavefield reconstruction by estimating more accurate velocity models and image using conventional techniques (e.g., wavefield crosscorrelation) or (2) reconstruct wavefields with conventional methods using the known background velocity model but improve the imaging condition to alleviate the artifacts caused by the imprecise reconstruction. We describe the unknown component of the velocity model as a random function with local spatial correlations. Imaging data perturbed by such random variations is characterized by statistical instability, i.e., various wavefield components image at wrong locations that depend on the actual realization of the random model. Statistical stability can be achieved by preprocessing the reconstructed wavefields prior to the imaging condition. We use Wigner distribution functions to attenuate the random noise present in the reconstructed wavefields, parameterized as a function of image coordinates. Wavefield filtering using Wigner distribution functions and conventional imaging can be lumped together into a new form of imaging condition that we call an interferometric imaging condition because of its similarity to concepts from recent work on interferometry. The interferometric imaging condition can be formulated both for zero-offset and for multioffset data, leading to robust, efficient imaging procedures that effectively attenuate imaging artifacts caused by unknown velocity models.
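The preprocessing idea can be illustrated compactly. The Python sketch below applies a time-lag-only local pseudo-Wigner distribution to the source and receiver wavefields and then a zero-lag crosscorrelation imaging condition; the formulation in the paper also involves space lags and local windowing, so this is a simplified illustration rather than the authors' operator.

import numpy as np

def pseudo_wigner(u, max_lag):
    """Time-lag-only local pseudo-Wigner distribution of a wavefield u(x, t):
    w(x, t) = sum over |tau| <= max_lag of u(x, t + tau) * u(x, t - tau)."""
    nx, nt = u.shape
    w = u.astype(float) ** 2                    # zero-lag term
    for tau in range(1, max_lag + 1):
        prod = u[:, 2 * tau:] * u[:, :nt - 2 * tau]
        w[:, tau:nt - tau] += 2.0 * prod        # +tau and -tau contribute equally
    return w

def interferometric_image(us, ur, max_lag=5):
    """Filter both reconstructed wavefields, then apply the conventional
    zero-lag crosscorrelation imaging condition: I(x) = sum_t Ws(x,t) * Wr(x,t)."""
    return np.sum(pseudo_wigner(us, max_lag) * pseudo_wigner(ur, max_lag), axis=1)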


2013 ◽  
Vol 8 (1) ◽  
pp. 42-54 ◽
Author(s):  
Camille Carbonnaux

Since the 1990s, European judicial and normative institutions have paid particular attention to the competitive practices of public undertakings. Consequently, their regime is governed by a significant number of rules pursuing objectives that appear, a priori, contradictory. As a result, public undertakings may experience difficulties in their management. In this context, approaching public competition law through the prism of fair competition can be very useful. Given the uniformity with which it is judged, fair competition appears to be an objective capable of coordinating these rules and overcoming their contradictions. It thereby offers a global and coherent framework for reading all the legal expressions of the European competitive order, which is of some practical importance. By illuminating the common features of the different legal aspects of competition, it makes it easy to switch from one to the other. It therefore makes the European approach to competition more accessible and understandable. Furthermore, and most importantly, it leads to identifying legal opportunities and threats in a cross-disciplinary way. From a “Law & Management” perspective, it thus appears to be a precious tool for the management of public undertakings. Key words: European competition law, public undertakings, fair competition, “Management & law”.


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R165-R174 ◽  
Author(s):  
Marcelo Jorge Luz Mesquita ◽  
João Carlos Ribeiro Cruz ◽  
German Garabito Callapino

Estimation of an accurate velocity macromodel is an important step in seismic imaging. We have developed an approach based on coherence measurements and finite-offset (FO) beam stacking. The algorithm is an FO common-reflection-surface tomography, which aims to determine the best layered depth-velocity model by finding the model that maximizes a semblance objective function calculated from the amplitudes in common-midpoint (CMP) gathers stacked over a predetermined aperture. We develop the subsurface velocity model with a stack of layers separated by smooth interfaces. The algorithm is applied layer by layer from the top downward in four steps per layer. First, by automatic or manual picking, we estimate the reflection times of events that describe the interfaces in a time-migrated section. Second, we convert these times to depth using the velocity model via application of Dix’s formula and the image rays to the events. Third, by using ray tracing, we calculate kinematic parameters along the central ray and build a paraxial FO traveltime approximation for the FO common-reflection-surface method. Finally, starting from CMP gathers, we calculate the semblance of the selected events using this paraxial traveltime approximation. After repeating this algorithm for all selected CMP gathers, we use the mean semblance values as an objective function for the target layer. When this coherence measure is maximized, the model is accepted and the process is completed. Otherwise, the process restarts from step two with the updated velocity model. Because the inverse problem we are solving is nonlinear, we use very fast simulated annealing to search the velocity parameters in the target layers. We test the method on synthetic and real data sets to study its use and advantages.
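The outer optimization loop can be sketched as follows in Python: a simulated-annealing search over layer velocities that maximizes the mean-semblance objective. Plain annealing with temperature-scaled Gaussian perturbations is used here for brevity, whereas the paper uses the very fast simulated annealing variant; the step size and cooling schedule are assumptions.

import numpy as np

def anneal_velocities(v0, objective, bounds, n_iter=2000, t_start=1.0, decay=0.99, seed=0):
    """Maximise a mean-semblance objective over layer velocities by annealing.

    v0        : initial velocity vector (one entry per layer in the target model)
    objective : callable returning the mean semblance for a velocity vector
    bounds    : (vmin, vmax) arrays that clip the trial models
    """
    rng = np.random.default_rng(seed)
    v, best_v = v0.copy(), v0.copy()
    f = best_f = objective(v)
    temp = t_start
    for _ in range(n_iter):
        step = temp * 0.1 * (bounds[1] - bounds[0]) * rng.normal(size=v.shape)
        trial = np.clip(v + step, bounds[0], bounds[1])
        ft = objective(trial)
        # accept uphill moves always, downhill moves with a Boltzmann probability
        if ft > f or rng.random() < np.exp((ft - f) / max(temp, 1e-9)):
            v, f = trial, ft
            if f > best_f:
                best_v, best_f = v.copy(), f
        temp *= decay                      # cool the temperature
    return best_v, best_f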

