Seismic trace interpolation in the F-X domain

Geophysics ◽  
1991 ◽  
Vol 56 (6) ◽  
pp. 785-794 ◽  
Author(s):  
S. Spitz

Interpolation of seismic traces is an effective means of improving migration when the data set exhibits spatial aliasing. A major difficulty of standard interpolation methods is that they depend on the degree of reliability with which the various geological events can be separated. In this respect, a multichannel interpolation method is described which requires neither a priori knowledge of the directions of lateral coherence of the events, nor estimation of these directions. The method is based on the fact that linear events present in a section made of equally spaced traces may be interpolated exactly, regardless of the original spatial interval, without any attempt to determine their true dips. The predictability of linear events in the f-x domain allows the missing traces to be expressed as the output of a linear system, the input of which consists of the recorded traces. The interpolation operator is obtained by solving a set of linear equations whose coefficients depend only on the spectrum of the spatial prediction filter defined by the recorded traces. Synthetic examples show that this method is insensitive to random noise and that it correctly handles curvatures and lateral amplitude variations. Assessment of the method with a real data set shows that the interpolation yields an improved migrated section.
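The core observation above (linear events are exactly predictable from trace to trace at each temporal frequency, and the predictor carries over between frequencies and trace spacings) can be checked numerically. The following Python sketch is a toy illustration only, not Spitz's full algorithm: it assumes two noise-free linear events and a 2-tap complex prediction filter, and verifies that the filter estimated on the recorded spacing at frequency f/2 predicts the half-spacing data at frequency f.

```python
# Toy check of the f-x predictability behind Spitz-type interpolation.
# Two linear events, complex traces at a single frequency slice.
# Assumptions (not from the paper): 2-tap filter, noise-free data.
import numpy as np

def lp_filter(d, ntaps):
    """Least-squares complex forward prediction filter of length ntaps."""
    rows = [d[i:i + ntaps][::-1] for i in range(len(d) - ntaps)]
    A = np.array(rows)              # past samples (most recent first)
    b = d[ntaps:]                   # sample to predict
    return np.linalg.lstsq(A, b, rcond=None)[0]

def fx_slice(f, dx, n, dips, amps):
    """Frequency-domain samples of linear events: sum_k A_k exp(-2i*pi*f*p_k*x)."""
    x = np.arange(n) * dx
    return sum(a * np.exp(-2j * np.pi * f * p * x) for a, p in zip(amps, dips))

dips, amps = [0.3e-3, -0.6e-3], [1.0, 0.7]   # dips in s/m, arbitrary amplitudes
dx, f = 25.0, 30.0                           # recorded spacing (m), frequency (Hz)

# Filter estimated on the recorded grid (spacing dx) at half the frequency ...
coarse = fx_slice(f / 2, dx, 40, dips, amps)
filt = lp_filter(coarse, ntaps=2)

# ... predicts the dense grid (spacing dx/2) at frequency f, tap for tap.
dense = fx_slice(f, dx / 2, 80, dips, amps)
pred = np.array([filt @ dense[i:i + 2][::-1] for i in range(len(dense) - 2)])
print("max prediction error on dense grid:", np.max(np.abs(pred - dense[2:])))
```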

Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
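As a rough illustration of the back-projection step described above, the sketch below sets up a toy tomographic update under strong simplifying assumptions (straight rays on a small gridded model, residual moveouts already converted to traveltime residuals); the actual method relies on ray tracing in heterogeneous media, layer-stripping migration, and local wavefront analysis, none of which are reproduced here.

```python
# Schematic back-projection of residual traveltime errors into slowness updates,
# in the spirit of tomographic reconstruction. Hypothetical setup: straight rays
# over a small 2-D grid; L[i, j] is the length of ray i in cell j.
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_cells = 60, 5 * 5

L = rng.uniform(0.0, 200.0, size=(n_rays, n_cells))      # ray-cell lengths (m)
L[rng.random(L.shape) < 0.7] = 0.0                        # each ray misses most cells

true_ds = rng.normal(0.0, 2e-5, size=n_cells)             # true slowness error (s/m)
dt = L @ true_ds + rng.normal(0.0, 1e-4, size=n_rays)     # observed time residuals (s)

# Damped least-squares update: minimise ||L ds - dt||^2 + eps^2 ||ds||^2.
eps = 1.0
A = np.vstack([L, eps * np.eye(n_cells)])
b = np.concatenate([dt, np.zeros(n_cells)])
ds = np.linalg.lstsq(A, b, rcond=None)[0]

print("rms slowness-update error:", np.sqrt(np.mean((ds - true_ds) ** 2)))
```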


2021 ◽  
Vol 14 (1) ◽  
pp. 71-88
Author(s):  
Adane Nega Tarekegn ◽  
Tamir Anteneh Alemu ◽  
Alemu Kumlachew Tegegne

Tuberculosis (TB) remains a global health concern. It spreads through the air and primarily affects people with weakened immune systems, and it is among the most common health problems in low- and middle-income countries. Genetic programming (GP) is a machine learning approach for discovering useful relationships among the variables in complex clinical data; it is particularly appropriate when the form of the solution model is unknown a priori. The main objective of this study is to develop a model that can detect positive cases among TB-suspected patients using a genetic programming approach. In this paper, GP is used to identify positive cases of tuberculosis in a real data set of TB suspects and hospitalized patients. First, the data set is pre-processed and the target variables are identified using cluster analysis. This data-driven cluster analysis identifies two distinct clusters of patients, representing TB positive and TB negative. GP is then trained on the training data to construct a prediction model and tested on a separate new data set. Over 30 runs, the median performance of GP on the test data was good (sensitivity = 0.78, specificity = 0.95, accuracy = 0.89, AUC = 0.91). We find that GP predicts TB better than other machine learning models. The study demonstrates that the GP model could support clinicians in screening TB patients.
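A compact sketch of this workflow on synthetic stand-in data (no real TB records) is given below, assuming the gplearn package's SymbolicClassifier as one available genetic-programming implementation and k-means for the cluster-based labelling; the feature counts and all settings are hypothetical.

```python
# Sketch of the described workflow on synthetic stand-in data:
# cluster analysis to derive binary labels, then a genetic-programming classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, roc_auc_score
from gplearn.genetic import SymbolicClassifier

X, _ = make_classification(n_samples=2000, n_features=12, random_state=0)

# Step 1: unsupervised labelling -- two clusters standing in for TB+ / TB-.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: train GP on one split, evaluate on a held-out set.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
gp = SymbolicClassifier(random_state=0)
gp.fit(X_tr, y_tr)

y_hat = gp.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy   :", (tp + tn) / len(y_te))
print("AUC        :", roc_auc_score(y_te, gp.predict_proba(X_te)[:, 1]))
```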


Geophysics ◽  
2002 ◽  
Vol 67 (4) ◽  
pp. 1232-1239 ◽  
Author(s):  
Yanghua Wang

Seismic trace interpolation is implemented as a 2‐D (x, y) spatial prediction, performed separately on each frequency (f) slice. This so‐called f‐x‐y domain trace interpolation method is based on the relation that the linear prediction (LP) operator estimated at a given frequency may be used to predict data at a higher frequency but a smaller trace spacing. The relationship originally given for the f‐x domain trace interpolation is successfully extended to the f‐x‐y domain. The extension is achieved by masking data samples selectively from the input frequency slice to design the LP operators. Two interpolation algorithms, using full‐step and fractional‐step predictions respectively, are developed. Both methods use an all‐azimuth prediction in the x‐y domain, but the fractional‐step prediction method is computationally more efficient. While the interpolation method can be applied to a common‐offset cube of 3‐D seismic data, it can also be applied to 2‐D seismic traces for prestack data processing. Synthetic and real data examples demonstrate the capability of the interpolation method.
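The stated relation can be checked on a toy planar event: the frequency slice at f on grid spacings (dx, dy) and the slice at 2f on (dx/2, dy/2) carry the same spatial phase, so one LP operator serves both. The sketch below verifies only this relation; it does not reproduce the masking or operator-design scheme of the paper, and the dips are hypothetical.

```python
# Toy check behind f-x-y interpolation: for a planar (3-D linear) event, the
# frequency slice at f on spacings (dx, dy) equals the slice at 2f on
# (dx/2, dy/2), so an LP operator designed for one applies to the other.
import numpy as np

def plane_slice(f, dx, dy, nx, ny, px, py):
    x = np.arange(nx)[:, None] * dx
    y = np.arange(ny)[None, :] * dy
    return np.exp(-2j * np.pi * f * (px * x + py * y))

px, py = 0.2e-3, -0.4e-3          # dips in s/m (hypothetical)
a = plane_slice(30.0, 25.0, 25.0, 16, 16, px, py)
b = plane_slice(60.0, 12.5, 12.5, 16, 16, px, py)
print("max mismatch between the two slices:", np.max(np.abs(a - b)))   # ~0
```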


2011 ◽  
Vol 6 (1) ◽  
pp. 65-75 ◽  
Author(s):  
Fredrik Thuring

A method is presented for identifying an expected profitable set of customers to offer an additional insurance product, by estimating a customer-specific latent risk profile for that product from the data available on an existing insurance product held by the same customer. For this purpose, a multivariate credibility estimator is considered, and we investigate the effect of assuming that one of two insurance products is inactive (without available claims information) when estimating the latent risk profile. Available customer-specific claims information from the active existing insurance product is used instead to estimate the risk profile and thereafter to assess whether or not to include a specific customer in the expected profitable set. The method is tested on a large real data set from a Danish insurance company and is shown to produce sets of customers with up to 36% fewer claims than expected a priori. It is therefore argued that the proposed method could be considered by an insurance company when cross-selling insurance products to existing customers.
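A minimal sketch of the underlying idea follows, assuming a simple bivariate linear-Bayes (credibility) setting with hypothetical parameters: claims on the active product shift the estimate of the latent risk on the inactive product through the off-diagonal prior covariance, and customers with below-average estimated risk are selected.

```python
# Minimal credibility sketch: use claims on an existing (active) product to form
# a linear-Bayes estimate of the latent risk on a second, inactive product, then
# rank customers. All parameter values below are hypothetical.
import numpy as np

mu = np.array([1.0, 1.0])                 # prior means of the latent risk factors
Sigma = np.array([[0.20, 0.12],           # prior covariance; the off-diagonal term
                  [0.12, 0.25]])          # links the two products

lam1, w = 0.10, 5.0                       # product-1 claim rate and years of exposure
n1 = np.array([0, 1, 0, 3, 2])            # observed product-1 claim counts per customer

# Linear Bayes: theta2_hat = mu2 + Cov(theta2, N1) / Var(N1) * (N1 - E[N1])
e_n1 = w * lam1 * mu[0]
var_n1 = w * lam1 * mu[0] + (w * lam1) ** 2 * Sigma[0, 0]
cov_t2_n1 = w * lam1 * Sigma[0, 1]

theta2_hat = mu[1] + cov_t2_n1 / var_n1 * (n1 - e_n1)
print("estimated product-2 risk profiles:", np.round(theta2_hat, 3))
print("offer to customers:", np.where(theta2_hat < 1.0)[0])   # below-average risk
```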


Geophysics ◽  
2000 ◽  
Vol 65 (5) ◽  
pp. 1641-1653 ◽  
Author(s):  
Necati Gülünay

A common practice in random noise reduction for 2-D data is to use pseudononcausal (PNC) 1-D prediction filters at each temporal frequency. A 1-D PNC filter is forced to be two sided by placing a conjugate-reversed version of a 1-D causal filter in front of itself, with a zero between the two. For 3-D data, a similar practice is to solve for two 2-D (causal) one-quadrant filters at each frequency slice. A 2-D PNC filter is formed by placing a conjugate-flipped version of each quadrant filter in the quadrant opposite itself. The center sample of a 2-D PNC filter is zero. This paper suggests the use of 1-D and 2-D noncausal (NC) prediction filters instead of PNC filters for random noise attenuation, where an NC filter is a two-sided filter solved from one set of normal equations. The NC filter has the same number of negative and positive lags, and its center sample is zero. NC prediction filters are more center loaded than PNC filters and, like PNC filters, are conjugate symmetric. NC filters are also less sensitive than PNC filters to the size of the gate used in their derivation, and they handle amplitude variations along dip directions better. For the same total filter length, a PNC prediction filter suppresses more random noise but damages more signal, whereas an NC prediction filter preserves more of the signal and rejects less noise. For high S/N ratio data, a 2-D NC prediction filter preserves geologic features that do not vary in one of the spatial dimensions. In-line and cross-line vertical faults are also well preserved with such filters. When faults are obliquely oriented, the filter coefficients adapt to the fault. The spectral properties of PNC and NC filters are very similar.
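The sketch below contrasts the two constructions on a single synthetic frequency slice: a PNC filter assembled from a causal least-squares prediction filter, and an NC filter solved directly from one two-sided set of normal equations with a zero centre sample. The filter lengths and the test signal are arbitrary choices, not the paper's.

```python
# Sketch of the two filter types at a single frequency slice (complex traces).
# PNC: a causal filter with a conjugate-reversed copy placed ahead of it and a
#      zero in between. NC: a two-sided filter solved from one set of normal
#      equations, with a zero centre sample. Data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n, taps = 200, 3
d = np.exp(-2j * np.pi * 0.05 * np.arange(n)) + 0.2 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Causal forward prediction filter (least squares), then its PNC extension.
A = np.array([d[i:i + taps][::-1] for i in range(n - taps)])
a = np.linalg.lstsq(A, d[taps:], rcond=None)[0]
pnc = np.concatenate([np.conj(a[::-1]), [0.0], a])      # two-sided, centre zero

# Noncausal filter: predict the centre sample from 'taps' neighbours on each side.
rows, targets = [], []
for i in range(taps, n - taps):
    rows.append(np.concatenate([d[i - taps:i], d[i + 1:i + 1 + taps]]))
    targets.append(d[i])
h = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0]
nc = np.concatenate([h[:taps], [0.0], h[taps:]])        # two-sided, centre zero

print("PNC taps:", np.round(pnc, 3))
print("NC  taps:", np.round(nc, 3))
```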


2019 ◽  
Vol 486 (2) ◽  
pp. 2116-2128 ◽  
Author(s):  
Yvette C Perrott ◽  
Kamran Javid ◽  
Pedro Carvalho ◽  
Patrick J Elwood ◽  
Michael P Hobson ◽  
...  

We develop a Bayesian method of analysing Sunyaev–Zel’dovich measurements of galaxy clusters obtained from the Arcminute Microkelvin Imager (AMI) radio interferometer system and from the Planck satellite, using a joint likelihood function for the data from both instruments. Our method is applicable to any combination of Planck data with interferometric data from one or more arrays. We apply the analysis to simulated clusters and find that when the cluster pressure profile is known a priori, the joint data set provides precise and accurate constraints on the cluster parameters, removing the need for external information to reduce the parameter degeneracy. When the pressure profile deviates from that assumed for the fit, the constraints become biased. Allowing the pressure profile shape parameters to vary in the analysis allows an unbiased recovery of the integrated cluster signal and produces constraints on some shape parameters, depending on the angular size of the cluster. When applied to real data from Planck-detected cluster PSZ2 G063.80+11.42, our method resolves the discrepancy between the AMI and Planck Y-estimates and usefully constrains the gas pressure profile shape parameters at intermediate and large radii.
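A toy illustration of the joint-likelihood idea is sketched below: two independent Gaussian constraints on the same cluster observable (standing in for the full AMI and Planck likelihoods, with made-up numbers) are combined by summing their log-likelihoods, tightening the resulting estimate.

```python
# Toy joint-likelihood illustration: two independent measurements of the same
# cluster observable are combined by adding log-likelihoods on a parameter grid.
import numpy as np

def gauss_loglike(y, mean, sigma):
    return -0.5 * ((y - mean) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

y_grid = np.linspace(0.0, 10.0, 2001)
logL_ami = gauss_loglike(y_grid, mean=4.2, sigma=1.5)     # hypothetical numbers
logL_planck = gauss_loglike(y_grid, mean=6.0, sigma=1.2)
logL_joint = logL_ami + logL_planck                       # independent data sets

post = np.exp(logL_joint - logL_joint.max())
post /= np.trapz(post, y_grid)                            # flat prior on the grid
mean = np.trapz(y_grid * post, y_grid)
std = np.sqrt(np.trapz((y_grid - mean) ** 2 * post, y_grid))
print(f"joint estimate: {mean:.2f} +/- {std:.2f}")
```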


Geophysics ◽  
2021 ◽  
pp. 1-60
Author(s):  
Mohammad Mahdi Abedi ◽  
David Pardo

Normal moveout (NMO) correction is a fundamental step in seismic data processing. It consists of mapping seismic data from recorded traveltimes to the corresponding zero-offset times. This process produces wavelet stretching as an undesired byproduct. We address the NMO stretching problem with two methods: 1) an exact stretch-free NMO correction that prevents the stretching of primary reflections, and 2) an approximate post-NMO stretch correction. Our stretch-free NMO produces parallel moveout trajectories for primary reflections. Our post-NMO stretch correction calculates the moveout of stretched wavelets as a function of offset. Both methods are based on the generalized moveout approximation and are suitable for application in complex anisotropic or heterogeneous environments. We use new moveout equations and modify the original parameter functions to be constant over the primary reflections, and then interpolate the seismogram amplitudes at the calculated traveltimes. For fast and automatic modification of the parameter functions, we use deep learning. We design a deep neural network (DNN) with convolutional layers and residual blocks. To train the DNN, we generate a set of 40,000 synthetic NMO-corrected common-midpoint gathers and the corresponding desired outputs of the DNN. The data set is generated using different velocity profiles, wavelets, and offset vectors, and includes multiples, ground roll, and band-limited random noise. The simplicity of the DNN task, a 1-D identification of primary reflections, improves generalization in practice. We use the trained DNN and show successful applications of our stretch-correction method on synthetic and different real data sets.
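As context for the amplitude-interpolation step, the sketch below performs a conventional NMO correction by reading recorded amplitudes at the computed moveout times, using simple hyperbolic moveout as a stand-in for the generalized moveout approximation; the velocities, event times, and wavelet are hypothetical, and the paper's stretch-free and post-NMO stretch corrections are not implemented.

```python
# Conventional NMO correction by interpolating amplitudes at computed traveltimes,
# with hyperbolic moveout and a synthetic gather of Gaussian pulses.
import numpy as np
from scipy.signal import find_peaks

dt, nt = 0.004, 500                        # sample interval (s), samples per trace
t0 = np.arange(nt) * dt                    # zero-offset time axis
offsets = np.arange(0.0, 3000.0, 100.0)    # source-receiver offsets (m)
v = 2000.0 + 600.0 * t0                    # NMO velocity as a function of t0 (m/s)

def recorded_trace(x, events=(0.4, 0.9, 1.5)):
    """Synthetic recorded trace: each event arrives at its hyperbolic traveltime."""
    trace = np.zeros(nt)
    for te in events:
        v_e = 2000.0 + 600.0 * te                   # velocity at the event
        t_arr = np.sqrt(te ** 2 + (x / v_e) ** 2)   # arrival time at offset x
        trace += np.exp(-0.5 * ((t0 - t_arr) / 0.01) ** 2)
    return trace

gather = np.array([recorded_trace(x) for x in offsets])

# NMO correction: for each zero-offset time, read the recorded amplitude at the
# corresponding moveout time and place it at t0 (linear interpolation).
nmo = np.empty_like(gather)
for i, x in enumerate(offsets):
    t_x = np.sqrt(t0 ** 2 + (x / v) ** 2)
    nmo[i] = np.interp(t_x, t0, gather[i])

stack = nmo.sum(axis=0)
peaks, _ = find_peaks(stack, height=0.5 * stack.max())
print("stacked events align near (s):", np.round(t0[peaks], 2))
```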


Geophysics ◽  
1999 ◽  
Vol 64 (5) ◽  
pp. 1461-1467 ◽  
Author(s):  
Milton J. Porsani

The Spitz method of seismic trace interpolation handles spatially aliased events. It uses a unit-step prediction filter to estimate data spaced at Δx/2, and the missing data are obtained by solving a complex linear system of equations whose unknowns are the data values at the interpolated locations. We attack this problem by introducing a half-step prediction filter that makes trace interpolation significantly more efficient and easier to implement. A complex half-step prediction filter at frequency f/2 is computed in the least-squares sense to predict odd data components from even ones. At frequency f, the prediction operator is shrunk and convolved with the input data spaced at Δx to predict data at Δx/2 directly. Instead of solving two systems of linear equations, as proposed by Spitz, only the system for the half-step prediction filter has to be solved. Numerical examples using a marine seismic common-midpoint (CMP) gather and a poststack seismic section illustrate the new interpolation method.
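The half-step idea can be illustrated with a toy example: a short filter estimated at frequency f/2, which predicts the odd-indexed recorded traces from their even-indexed neighbours, interpolates the mid-point traces at frequency f. The sketch below uses two noise-free linear events and a 2-tap filter; it is not the paper's full implementation.

```python
# Toy half-step interpolation: design a 2-tap filter at f/2 on the recorded grid
# (odd traces predicted from even neighbours), then apply it at f to create the
# mid-point traces at spacing dx/2. Two linear events, noise-free.
import numpy as np

def fx_slice(f, dx, n, dips, amps):
    x = np.arange(n) * dx
    return sum(a * np.exp(-2j * np.pi * f * p * x) for a, p in zip(amps, dips))

dips, amps, dx, f, n = [0.3e-3, -0.5e-3], [1.0, 0.8], 25.0, 40.0, 60

# Design at f/2 on the recorded grid: predict odd traces from their even neighbours.
d_half = fx_slice(f / 2, dx, n, dips, amps)
A = np.stack([d_half[0:-2:2], d_half[2::2]], axis=1)    # even neighbours
b = d_half[1:-1:2]                                      # odd traces to predict
h = np.linalg.lstsq(A, b, rcond=None)[0]

# Apply at f on the recorded grid to create traces at the dx/2 mid-points.
d_full = fx_slice(f, dx, n, dips, amps)
mid = h[0] * d_full[:-1] + h[1] * d_full[1:]

true_mid = fx_slice(f, dx / 2, 2 * n, dips, amps)[1::2][: len(mid)]
print("max interpolation error:", np.max(np.abs(mid - true_mid)))
```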


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S11-S18 ◽  
Author(s):  
Juefu Wang ◽  
Mauricio D. Sacchi

We propose a new scheme for high-resolution amplitude-variation-with-ray-parameter (AVP) imaging that uses nonquadratic regularization. We pose migration as an inverse problem and propose a cost function that uses a priori information about common-image gathers (CIGs). In particular, we introduce two regularization constraints: smoothness along the offset-ray-parameter axis and sparseness in depth. The two-step regularization yields high-resolution CIGs with robust estimates of AVP. We use an iterative reweighted least-squares conjugate gradient algorithm to minimize the cost function of the problem. We test the algorithm with synthetic data (a wedge model and the Marmousi data set) and a real data set (Erskine area, Alberta). Tests show our method helps to enhance the vertical resolution of CIGs and improves amplitude accuracy along the ray-parameter direction.
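The sparseness constraint of this kind is typically handled by iteratively reweighted least squares. The sketch below shows a generic IRLS loop for an l1-type penalty on a toy linear problem, standing in for the migration operator and CIG model of the paper; the smoothness term and the conjugate-gradient inner solver are omitted.

```python
# Generic IRLS sketch for a sparseness-promoting (l1-type) penalty on a toy
# linear inverse problem: reweight a quadratic penalty so that it approximates
# the absolute-value norm over successive passes.
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((80, 120))          # toy forward operator
m_true = np.zeros(120)
m_true[rng.choice(120, 6, replace=False)] = rng.standard_normal(6) * 3.0
d = G @ m_true + 0.05 * rng.standard_normal(80)

lam, eps = 0.5, 1e-3
m = np.zeros(120)
for _ in range(15):                         # IRLS: reweight the l1 term each pass
    W = np.diag(1.0 / (np.abs(m) + eps))
    m = np.linalg.solve(G.T @ G + lam * W, G.T @ d)

print("largest recovered coefficients:", np.sort(np.abs(m))[-6:].round(2))
print("true nonzero magnitudes       :", np.sort(np.abs(m_true[m_true != 0])).round(2))
```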


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by the maximum likelihood criterion, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
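As a pointer to the estimation step, the sketch below fits the plain Lomax distribution (a simpler base model than the three-parameter extension introduced in the paper) by maximum likelihood on simulated data, using scipy's built-in fitter; it is illustrative only.

```python
# Maximum-likelihood fit of the plain Lomax distribution to simulated data,
# standing in for the paper's exponentiated half-logistic Lomax model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
c_true, scale_true = 2.5, 1.8
sample = stats.lomax.rvs(c_true, scale=scale_true, size=2000, random_state=rng)

c_hat, loc_hat, scale_hat = stats.lomax.fit(sample, floc=0)   # MLE with loc fixed at 0
print(f"shape: true {c_true}, MLE {c_hat:.2f}")
print(f"scale: true {scale_true}, MLE {scale_hat:.2f}")
```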

