Estimation of quality factor based on peak frequency-shift method and redatuming operator: Application in real data set

Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. N1-N12 ◽  
Author(s):  
Francisco de S. Oliveira ◽  
Jose J. S. de Figueiredo ◽  
Andrei G. Oliveira ◽  
Jörg Schleicher ◽  
Iury C. S. Araújo

Quality factor estimation and correction are necessary to compensate for the seismic energy dissipated during acoustic-/elastic-wave propagation in the earth. In this process, known as Q-filtering in the realm of seismic processing, the main goal is to improve the resolution of the seismic signal, as well as to recover part of the energy dissipated by anelastic attenuation. We have found a way to improve Q-factor estimation from seismic reflection data. Our methodology is based on the combination of the peak-frequency-shift (PFS) method and the redatuming operator. Our innovation lies in the way we correct traveltimes when the medium consists of many layers: the traveltime table used in the PFS method is corrected with the redatuming operator. This operation, performed iteratively, allows a more accurate estimation of the Q factor layer by layer. Applications to synthetic and real data (Viking Graben) demonstrate the feasibility of our analysis.
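
As a minimal illustration of the peak-frequency-shift principle (the redatuming correction is the paper's contribution and is not reproduced here), the Python sketch below recovers Q from the downshift of a Ricker wavelet's spectral peak via the closed-form PFS relation Q = pi*t*fp*fm^2 / (2*(fm^2 - fp^2)); variable names and numbers are illustrative.

```python
import numpy as np

def ricker_peak_after_attenuation(fm, t, Q):
    """Peak frequency of an attenuated Ricker wavelet: positive root of
    2 - 2*f**2/fm**2 - pi*t*f/Q = 0 (Ricker spectrum times exp(-pi*f*t/Q))."""
    a, b, c = 2.0 / fm**2, np.pi * t / Q, -2.0
    return (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)

def q_from_peak_shift(fm, fp, t):
    """Peak-frequency-shift estimate of Q for a Ricker source wavelet."""
    return np.pi * t * fp * fm**2 / (2.0 * (fm**2 - fp**2))

fm, t, Q_true = 40.0, 0.8, 60.0                  # source peak (Hz), traveltime (s), true Q
fp = ricker_peak_after_attenuation(fm, t, Q_true)
print(q_from_peak_shift(fm, fp, t))              # recovers ~60
```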

2019 ◽  
Vol 16 (6) ◽  
pp. 1061-1070 ◽  
Author(s):  
Rómulo Sandoval ◽  
José L Paredes ◽  
Flor A Vivas

Quality factor (Q) estimation from vertical seismic profile (VSP) data is necessary for the process referred to as inverse Q-filtering, which is used, in turn, to improve the resolution of seismic signals. In general, the performance of Q-estimation methods based on the standard Fourier transform degrades severely in the presence of heavy-tailed noise. In particular, these methods require a bandwidth-detection step that is difficult to carry out reliably due to instabilities caused by outliers or gross errors, leading to incorrect Q estimates. In this paper, an improvement of Q-factor estimation based on the peak-frequency-shift method is proposed, in which the signal spectrum is obtained using a robust transform algorithm. More precisely, the robust transform method assumes that the perturbations contaminating the signal of interest can be characterized as random samples following a zero-mean Laplacian distribution, leading to the weighted median as the optimal operator for determining each transform coefficient. The proposed method is validated on synthetic data sets with different levels of noise, and its performance is compared to those yielded by various methods based on the standard Fourier transform. Furthermore, a non-Gaussianity test is performed to characterize the noise distribution in real data. The test shows that the underlying noise is better characterized by a Laplacian statistical model, and therefore the proposed method is a suitable approach for computing the Q factor. Finally, the proposed methodology is applied to estimate the Q factors of real VSP data.
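
As a hedged, toy version of the Laplacian-noise idea (one coefficient at a time, not the authors' full robust transform algorithm), the sketch below estimates a spectral coefficient by a weighted median, which is the maximum-likelihood location operator under zero-mean Laplacian noise; all function names are ours.

```python
import numpy as np

def weighted_median(values, weights):
    """Minimizer of sum_i w_i * |v_i - beta| (ML location under Laplacian noise)."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def robust_coeff(x, k):
    """LAD estimate of the k-th cosine/sine coefficients, one basis at a time."""
    n = np.arange(x.size)
    parts = []
    for basis in (np.cos(2 * np.pi * k * n / x.size), np.sin(2 * np.pi * k * n / x.size)):
        mask = np.abs(basis) > 1e-8                   # avoid dividing by ~0
        parts.append(weighted_median(x[mask] / basis[mask], np.abs(basis[mask])))
    return complex(parts[0], -parts[1])

rng = np.random.default_rng(0)
n = np.arange(256)
x = np.cos(2 * np.pi * 10 * n / 256) + rng.laplace(0, 0.5, 256)  # tone in heavy-tailed noise
print(abs(robust_coeff(x, 10)))                                   # stays close to 1
```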


2019 ◽  
Vol 7 (2) ◽  
pp. T255-T263 ◽  
Author(s):  
Yanli Liu ◽  
Zhenchun Li ◽  
Guoquan Yang ◽  
Qiang Liu

The quality factor (Q) is an important parameter for measuring the attenuation of seismic waves. Reliable Q estimation and stable inverse Q filtering are expected to improve the resolution of seismic data and to recover deep-layer energy. Many methods of estimating Q are based on an individual wavelet, but it is difficult to extract an individual wavelet precisely from seismic reflection data. To avoid this problem, we have developed a method of estimating Q directly from reflection data. The core of the methodology is selecting the peak-frequency points and linearly fitting their logarithmic spectrum against the time-frequency product; Q then follows from the relationship between Q and the optimized slope. First, to get the peak-frequency points at different times, we use the generalized S transform to produce a high-precision 2D time-frequency spectrum. According to the seismic-wave attenuation mechanism, the logarithmic spectrum attenuates linearly with the product of frequency and time. Thus, the second step of the method is transforming the 2D spectrum into 1D by variable substitution. In this transformation, only the peak-frequency points participate in the fitting, which reduces the impact of interference on the spectrum. Third, we obtain the optimized slope by least-squares fitting. To demonstrate the reliability of our method, we applied it to a constant-Q model and to real data from a work area. For the real data, we calculated the Q curve of a seismic trace near a well and obtained a high-resolution section by stable inverse Q filtering. The model and field results indicate that our method is effective and reliable for estimating Q.
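
A minimal sketch of the slope-fit step: under the constant-Q attenuation model the logarithmic spectrum decays linearly with the frequency-time product, ln A = c - (pi/Q)*f*t, so Q = -pi/slope. The picks below are synthetic stand-ins for the generalized-S-transform peak-frequency points.

```python
import numpy as np

Q_true = 80.0
t = np.linspace(0.2, 2.0, 30)            # pick times (s)
f = 35.0 / (1.0 + 0.3 * t)               # synthetic peak-frequency picks (Hz)
lnA = 3.0 - np.pi * f * t / Q_true       # noise-free log spectral amplitudes
slope, _ = np.polyfit(f * t, lnA, 1)     # least-squares fit in the (f*t, ln A) plane
print(-np.pi / slope)                    # ~80
```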


Geophysics ◽  
2002 ◽  
Vol 67 (4) ◽  
pp. 1213-1224 ◽  
Author(s):  
Hervé Chauris ◽  
Mark S. Noble ◽  
Gilles Lambaré ◽  
Pascal Podvin

We demonstrate a method for estimating 2‐D velocity models from synthetic and real seismic reflection data in the framework of migration velocity analysis (MVA). No assumption is required on the reflector geometry or on the unknown background velocity field, provided that the data only contain primary reflections/diffractions. In the prestack depth‐migrated volume, locations where the reflectivity exhibits local coherency are automatically picked without interpretation in two panels: common image gathers (CIGs) and common offset gathers (COGs). They are characterized by both their positions and two slopes. The velocity is estimated by minimizing all slopes picked in the CIGs. We test the applicability of the method on a real data set, showing the possibility of an efficient inversion using (1) the migration of selected CIGs and COGs, (2) automatic picking on prior uncorrelated locally coherent events, (3) efficient computation of the gradient of the cost function via paraxial ray tracing from the picked events to the surface, and (4) a gradient‐type optimization algorithm for convergence.
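
The toy loop below sketches only the optimization structure (drive the picked CIG slopes to zero with a gradient-type method); `picked_slopes` is a hypothetical stand-in for the expensive migrate-and-pick step, and the paper computes gradients by paraxial ray tracing rather than by finite differences.

```python
import numpy as np

def picked_slopes(v):
    """Hypothetical stand-in for migrate-and-pick: returns CIG slopes for model v.
    Slopes vanish when the velocity is correct (here, 2000 m/s)."""
    return np.array([0.01 * (v - 2000.0)])

def cost(v):
    return 0.5 * np.sum(picked_slopes(v) ** 2)   # flat CIG events <=> zero slopes

v, step, eps = 1800.0, 2000.0, 1e-3
for _ in range(200):                              # simple gradient descent
    grad = (cost(v + eps) - cost(v - eps)) / (2 * eps)
    v -= step * grad
print(v)                                          # converges to ~2000 m/s
```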


Geophysics ◽  
2012 ◽  
Vol 77 (3) ◽  
pp. WA149-WA156 ◽  
Author(s):  
E. Blias

Inelastic attenuation, quantified by the seismic quality factor Q, has a considerable impact on surface seismic reflection data. A new method for interval-Q estimation from near-offset VSP data is based on minimizing an objective function that measures the difference between cumulative Q estimates and those calculated through interval Q. To calculate interval Q, we used all receiver pairs that provided reasonable Q values. To estimate Q between two receiver levels, we used an equation that links the amplitudes at the two levels and can provide more accurate Q values than the spectral-ratio method. To improve the interval-Q estimates, which rely on traveltimes, we used a high-accuracy frequency-domain approach to determine time shifts. Application of this method to real data demonstrated reasonable correspondence between the Q estimates and log data.
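
A hedged sketch of the cumulative-to-interval relation such objective functions are commonly built on: attenuation accumulates as t/Q, so the interval Q between two receiver levels follows from the two cumulative estimates. Whether the paper uses exactly this algebra is an assumption on our part.

```python
def interval_q(t1, q1_cum, t2, q2_cum):
    """Interval Q between receiver times t1 < t2, assuming t/Q accumulates:
    (t2 - t1)/Q_int = t2/Q2_cum - t1/Q1_cum."""
    return (t2 - t1) / (t2 / q2_cum - t1 / q1_cum)

# cumulative Q of 90 down to 0.50 s and 75 down to 0.65 s (illustrative numbers)
print(interval_q(0.50, 90.0, 0.65, 75.0))   # ~48: the interval is more attenuative
```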


Geophysics ◽  
1996 ◽  
Vol 61 (1) ◽  
pp. 232-243 ◽  
Author(s):  
Satish C. Singh ◽  
R. W. Hobbs ◽  
D. B. Snyder

A method to process dual-streamer data acquired in an over/under configuration is presented. The method combines the results of the dephase-sum and dephase-subtraction methods. In the dephase methods, the response of one streamer is time shifted so that the primary arrivals on both streamers are aligned, and the responses are then summed or subtracted. The method provides a broad spectral response from dual-streamer data and increases the signal-to-noise ratio by a factor of 1.5. It was tested on synthetic data and then applied to a real data set collected by the British Institutions Reflection Profiling Syndicate (BIRPS). Its application to a deep seismic reflection data set from the British Isles shows that reflections from the lower crust contain frequencies up to 80 Hz, suggesting that some lower-crustal reflectors may have sharp boundaries and could be 20–30 m thick.
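
A minimal sketch of the dephase step itself (not the full over/under ghost treatment): one streamer's response is time shifted with a linear phase ramp so the primaries align, then the two are summed or subtracted; all numbers are illustrative.

```python
import numpy as np

def time_shift(trace, dt, shift):
    """Apply a (possibly fractional) time shift via a linear phase ramp."""
    f = np.fft.rfftfreq(trace.size, dt)
    return np.fft.irfft(np.fft.rfft(trace) * np.exp(-2j * np.pi * f * shift), trace.size)

dt, n = 0.004, 512
rng = np.random.default_rng(1)
primary = np.zeros(n); primary[100] = 1.0
shallow = primary + 0.05 * rng.standard_normal(n)
deep = time_shift(primary, dt, 0.012) + 0.05 * rng.standard_normal(n)  # 12 ms later on the deep streamer

aligned = time_shift(deep, dt, -0.012)     # dephase: align primaries across streamers
dephase_sum = 0.5 * (shallow + aligned)    # primaries add coherently, noise averages down
dephase_sub = 0.5 * (shallow - aligned)    # ghost energy separates here in over/under processing
print(np.argmax(dephase_sum))              # peak stays at sample 100
```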


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCB25-WCB33 ◽  
Author(s):  
Ari Tryggvason ◽  
Cedric Schmelzbach ◽  
Christopher Juhlin

We have developed a first-arrival traveltime inversion scheme that jointly solves for seismic velocities and source and receiver static-time terms. The static-time terms are included to compensate for varying time delays introduced by the near-surface low-velocity layer that is too thin to be resolved by tomography. Results on a real data set consisting of picked first-arrival times from a seismic-reflection 2D/3D experiment in a crystalline environment show that the tomography static-time terms are very similar in values and distribution to refraction-static corrections computed using standard refraction-statics software. When applied to 3D seismic-reflection data, tomography static-time terms produce similar or more coherent seismic-reflection images compared to the images using corrections from standard refraction-static software. Furthermore, the method provides a much more detailed model of the near-surface bedrock velocity than standard software when the static-time terms are included in the inversion. Low-velocity zones in this model correlate with other geologic and geophysical data, suggesting that our method results in a reliable model. In addition to generally being required in seismic-reflection imaging, static corrections are also necessary in traveltime tomography to obtain high-fidelity velocity images of the subsurface.
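
A hedged sketch of the joint parameterization (not the authors' solver): each predicted first-arrival time is ray-path length per cell times cell slowness plus one source and one receiver static term, and the augmented linear system is solved by damped least squares. A constant shift traded between all source and all receiver statics is a genuine null space, so the check below is on data fit, not parameter recovery.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

n_cells, n_src, n_rec = 4, 3, 4
rng = np.random.default_rng(2)
pairs = [(s, r) for s in range(n_src) for r in range(n_rec)]
L = rng.uniform(50.0, 150.0, (len(pairs), n_cells))     # toy ray lengths per cell (m)
true = np.concatenate([np.full(n_cells, 1 / 3000.0),    # cell slownesses (s/m)
                       [0.004, -0.002, 0.001],          # source statics (s)
                       [0.001, 0.003, -0.001, 0.002]])  # receiver statics (s)

G = lil_matrix((len(pairs), n_cells + n_src + n_rec))
for i, (s, r) in enumerate(pairs):
    G[i, :n_cells] = L[i]                 # tomography columns
    G[i, n_cells + s] = 1.0               # source static-time column
    G[i, n_cells + n_src + r] = 1.0       # receiver static-time column

d = G @ true
m = lsqr(G.tocsr(), d, damp=1e-8)[0]      # damped least squares
print(np.abs(G @ m - d).max())            # data are fit to numerical precision
```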


2015 ◽  
Author(s):  
Francisco de S. Oliveira ◽  
José Jadsom S. de Figueiredo ◽  
Andrei G. Oliveira

2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model, the exponentiated half-logistic Lomax distribution, is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni, and Zenga curves, probability-weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by the maximum-likelihood criterion, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
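
A sketch of maximum-likelihood fitting for the new model under one plausible parameterization of the exponentiated half-logistic-G family with a Lomax baseline, F(x) = [(1 - Gbar(x)) / (1 + Gbar(x))]^theta with Gbar(x) = (1 + x/beta)^(-alpha); the paper's exact parameterization may differ, so treat this form as an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    theta, alpha, beta = params
    if min(theta, alpha, beta) <= 0:
        return np.inf
    gbar = (1.0 + x / beta) ** (-alpha)                    # Lomax survival function
    g = (alpha / beta) * (1.0 + x / beta) ** (-alpha - 1)  # Lomax pdf
    pdf = theta * ((1 - gbar) / (1 + gbar)) ** (theta - 1) * 2 * g / (1 + gbar) ** 2
    return -np.sum(np.log(pdf))

# simulate by inversion of the assumed CDF: with w = u**(1/theta),
# x = beta * (((1 + w) / (1 - w))**(1/alpha) - 1)
rng = np.random.default_rng(3)
theta0, alpha0, beta0 = 1.5, 2.0, 1.0
w = rng.uniform(size=2000) ** (1.0 / theta0)
x = beta0 * (((1 + w) / (1 - w)) ** (1.0 / alpha0) - 1.0)

fit = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], args=(x,), method="Nelder-Mead")
print(fit.x)   # should land near (1.5, 2.0, 1.0)
```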


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five different methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared by numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others; when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, the results are illustrated by analyzing a real data set.
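
A minimal sketch of the MSE comparison for one of the five methods (the ML plug-in estimator of the pdf), using the standard form of the generalized inverted exponential distribution, F(x) = 1 - (1 - exp(-lambda/x))^alpha; sample size, replication count, and evaluation point are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def pdf(x, a, lam):
    """GIE pdf: f(x) = (a*lam/x**2) * exp(-lam/x) * (1 - exp(-lam/x))**(a - 1)."""
    e = np.exp(-lam / x)
    return (a * lam / x**2) * e * (1 - e) ** (a - 1)

def nll(p, x):
    a, lam = p
    return np.inf if min(a, lam) <= 0 else -np.sum(np.log(pdf(x, a, lam)))

rng = np.random.default_rng(4)
a0, lam0, x0, n = 2.0, 1.5, 1.0, 100
errs = []
for _ in range(200):                                  # Monte Carlo replications
    u = rng.uniform(size=n)
    x = -lam0 / np.log(1 - (1 - u) ** (1 / a0))       # inversion: x = F^{-1}(u)
    a_hat, lam_hat = minimize(nll, [1.0, 1.0], args=(x,), method="Nelder-Mead").x
    errs.append((pdf(x0, a_hat, lam_hat) - pdf(x0, a0, lam0)) ** 2)
print(np.mean(errs))                                  # MSE of the ML plug-in pdf estimate
```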


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of histone modifications (HMs), contributing to the histone-code hypothesis, lead to various biological functions. HM combinations have been utilized by various studies to divide the genome into different regions, and these regions have been classified as chromatin states. Mostly, Hidden Markov Model (HMM)-based techniques have been utilized for this purpose. In chromatin studies, data from Next Generation Sequencing (NGS) platforms are used. Chromatin states based on histone-modification combinatorics are annotated by mapping them to functional regions of the genome. The numbers of states predicted so far by HMM tools have been justified biologically. Objective: The present study aimed at providing a computational scheme to identify the underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on hierarchical clustering and a visualization strategy. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results show good correlation with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal state number for a particular data set but also justifies the results biologically, thereby correlating the computational and biological aspects.
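
A hedged sketch of the clustering step only (HCVS couples it with a visualization strategy not reproduced here): genomic bins are clustered by their nine-mark signatures with Ward linkage, and candidate state numbers are scanned with a simple within-cluster dispersion score; the planted four-state data and the score are ours.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)
marks = np.vstack([rng.normal(c, 0.3, (50, 9))       # 50 bins per synthetic state,
                   for c in (0.0, 1.0, 2.0, 3.0)])   # 9 marks, 4 planted states
Z = linkage(marks, method="ward")                    # hierarchical clustering

def score(labels, X):
    """Mean within-cluster pairwise distance; lower means tighter states."""
    return np.mean([pdist(X[labels == k]).mean()
                    for k in np.unique(labels) if np.sum(labels == k) > 1])

for n_states in range(2, 8):
    labels = fcluster(Z, t=n_states, criterion="maxclust")
    print(n_states, round(score(labels, marks), 3))  # look for the elbow near 4
```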

