Inversion of normal incidence seismograms

Geophysics ◽  
1982 ◽  
Vol 47 (5) ◽  
pp. 757-770 ◽  
Author(s):  
A. Bamberger ◽  
G. Chavent ◽  
Ch. Hemon ◽  
P. Lailly

The well‐known instability of Kunetz’s (1963) inversion algorithm can be explained by the progressive manner in which the calculations are done (descending from the surface) and by the fact that completely different impedance profiles can yield indistinguishable synthetic seismograms. These difficulties can be overcome by using an iterative algorithm for the inversion of the one‐dimensional (1-D) wave equation, together with a stabilizing constraint on the sums of the jumps of the desired impedance. For computational efficiency, the synthetic seismogram is computed by the method of characteristics, and the gradient of the error criterion is computed by optimal control techniques (adjoint‐state equation). Numerical results on simulated data confirm the expected stability of the algorithm in the presence of measurement noise (tests include noise levels of 50 percent). The inversion of two field sections demonstrates the practical feasibility of the method and the importance of taking into account all internal as well as external multiple reflections. Reflection coefficients obtained by this method show excellent agreement with well‐log data in a case where standard estimation techniques [deconvolution of a common‐depth‐point (CDP) stacked, normal‐moveout (NMO) corrected section] failed.
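
The adjoint-state gradient is the abstract's key computational device, so a generic sketch may help. The code below illustrates the principle on an arbitrary discrete time-stepping system (the characteristics discretization of the 1-D wave equation fits this template); the matrices, dimensions, and data here are illustrative stand-ins, not the authors' discretization:

```python
# Adjoint-state gradient for u_{t+1} = A(m) u_t: one forward sweep plus
# one backward sweep yields the full gradient of the least-squares
# misfit, instead of one simulation per model parameter.
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 30                         # state size, number of time steps
E = rng.standard_normal((n, n, n))   # E[i] = dA/dm_i (illustrative)
c = rng.standard_normal(n)           # "receiver" sampling the state
d = rng.standard_normal(T)           # observed data (synthetic stand-in)
u0 = rng.standard_normal(n)

def A(m):
    return 0.1 * np.eye(n) + np.einsum('i,ijk->jk', m, E)

def misfit_and_gradient(m):
    # forward sweep: store the states (the "wavefield")
    u = [u0]
    for _ in range(T - 1):
        u.append(A(m) @ u[-1])
    res = np.array([c @ ut for ut in u]) - d
    J = 0.5 * np.sum(res ** 2)
    # backward sweep: adjoint state lam_t, accumulated gradient
    g = np.zeros(len(m))
    lam = c * res[-1]
    for t in range(T - 2, -1, -1):
        g += np.array([lam @ (E[i] @ u[t]) for i in range(len(m))])
        lam = A(m).T @ lam + c * res[t]
    return J, g

m = 0.01 * rng.standard_normal(n)
J, g = misfit_and_gradient(m)
eps, i = 1e-6, 2                     # finite-difference check of one entry
mp = m.copy(); mp[i] += eps
print(g[i], (misfit_and_gradient(mp)[0] - J) / eps)
```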

Geophysics ◽  
1986 ◽  
Vol 51 (2) ◽  
pp. 383-395 ◽  
Author(s):  
Kenneth P. Whittall ◽  
D. W. Oldenburg

We present a flexible, one‐dimensional magnetotelluric (MT) inversion algorithm based on inverse scattering theory. The algorithm easily generates different classes of conductivity‐depth profiles so the interpreter may choose models that satisfy any external geologic or geophysical constraints. The two‐stage process is based on the work of Weidelt. The first stage uses the MT frequency‐domain data to construct an impulse response analogous to a deconvolved seismogram with or without a free‐surface assumption. Since this is a linear problem (a Laplace transform), numerous impulse responses may be generated by linear inverse techniques which handle data errors robustly. We minimize four norms of the impulse response in order to construct varied classes of limited‐structure earth models. We choose such models to prevent overinterpreting the limited number of inaccurate MT observations. The second stage of the algorithm maps the impulse response to the conductivity model using any of four Fredholm integral equations of the second kind. We evaluate the performance of each of the four mappings and recommend the Burridge and Gopinath‐Sondhi formulations. We also evaluate three approximations to the second‐stage equations. These approximations are fast and easy to implement on small computers. We find the one which includes first‐order multiple reflections to be the most accurate.
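
Stage one is linear: the frequency-domain response is a Laplace transform of the impulse response, so on a discretized time grid it becomes a matrix equation solvable under different regularizing norms. A minimal sketch under stated assumptions (real Laplace variable, two of the four norms, illustrative grid and damping):

```python
# c(s_j) = ∫ g(t) exp(-s_j t) dt  ->  c = L g on a time grid; different
# regularizing norms generate different classes of impulse responses.
import numpy as np

t = np.linspace(0.0, 10.0, 200)            # time grid for g(t)
dt = t[1] - t[0]
s = np.logspace(-1, 1, 15)                 # Laplace variables (real here)
L = np.exp(-np.outer(s, t)) * dt           # discretized Laplace transform

g_true = np.exp(-t) * np.sin(2 * t)        # synthetic impulse response
c = L @ g_true + 1e-5 * np.random.default_rng(1).standard_normal(len(s))

def solve(W, beta):
    """Tikhonov solution minimizing ||c - L g||^2 + beta ||W g||^2."""
    return np.linalg.solve(L.T @ L + beta * W.T @ W, L.T @ c)

I = np.eye(len(t))                         # "smallest" model norm
D = np.diff(I, axis=0)                     # "flattest" (first-difference) norm
g_small = solve(I, 1e-6)
g_flat = solve(D, 1e-6)
```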


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 384
Author(s):  
Rocío Hernández-Sanjaime ◽  
Martín González ◽  
Antonio Peñalver ◽  
Jose J. López-Espín

The presence of unaccounted heterogeneity in simultaneous equation models (SEMs) is frequently problematic in many real-life applications. Under the usual assumption of homogeneity, the model can be seriously misspecified, potentially inducing substantial bias in the parameter estimates. This paper focuses on SEMs in which the data are heterogeneous and tend to form clustering structures in the endogenous-variable dataset. Because the identification of different clusters is not straightforward, we provide a two-step strategy that first forms groups among the endogenous observations and then uses the standard simultaneous equation scheme. Methodologically, the proposed approach is based on a variational Bayes learning algorithm and does not need to be rerun for varying numbers of groups in order to identify the number that adequately fits the data. We describe the statistical theory, evaluate the performance of the suggested algorithm using simulated data, and apply the two-step method to a macroeconomic problem.
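
A hedged sketch of the two-step idea, using scikit-learn's variational Bayes Gaussian mixture (which prunes unused components, so the number of groups need not be fixed in advance) followed by a within-group two-stage least-squares fit; the variable roles and the 2SLS choice are illustrative, not the authors' exact estimator:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def two_step_sem(Y, X, Z, max_groups=10):
    """Y: endogenous variables (N x k), X: outcome (N,), Z: instruments
    / exogenous variables (N x p).  All roles are illustrative."""
    # step 1: VB mixture on the endogenous observations; unused
    # components get near-zero weight, effectively choosing the
    # number of groups from the data
    labels = BayesianGaussianMixture(
        n_components=max_groups,
        weight_concentration_prior=1e-2,
    ).fit_predict(Y)
    # step 2: a standard simultaneous-equation estimator per group
    coefs = {}
    for g in np.unique(labels):
        m = labels == g
        # 2SLS: project Y on Z, then regress X on the projection
        Yhat = Z[m] @ np.linalg.lstsq(Z[m], Y[m], rcond=None)[0]
        coefs[g] = np.linalg.lstsq(Yhat, X[m], rcond=None)[0]
    return labels, coefs
```

In practice one would also check group sizes and identification conditions before the within-group fit.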


Genetics ◽  
2003 ◽  
Vol 165 (4) ◽  
pp. 2269-2282
Author(s):  
D Mester ◽  
Y Ronin ◽  
D Minkov ◽  
E Nevo ◽  
A Korol

This article is devoted to the problem of ordering in linkage groups with many dozens or even hundreds of markers. The ordering problem belongs to the field of discrete optimization over the set of all possible orders, amounting to n!/2 for n loci; hence, it is considered an NP-hard problem. Several authors have attempted to employ methods developed for the well-known traveling salesman problem (TSP) for multilocus ordering, using the assumption that for a set of linked loci the true order will be the one that minimizes the total length of the linkage group. A novel, fast, and reliable algorithm developed for the TSP and based on evolution-strategy discrete optimization was applied in this study for multilocus ordering on the basis of pairwise recombination frequencies. The quality of the derived maps under various complications (dominant vs. codominant markers, marker misclassification, negative and positive interference, and missing data) was analyzed using simulated data with ~50-400 markers. The high performance of the employed algorithm allows systematic treatment of the problem of verifying the obtained multilocus orders on the basis of computing-intensive bootstrap and/or jackknife approaches for detecting and removing questionable marker scores, thereby stabilizing the resulting maps. Parallel calculation technology can easily be adopted for further acceleration of the proposed algorithm. A real data analysis (on maize chromosome 1 with 230 markers) illustrates the proposed methodology.
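
For concreteness, a stripped-down version of the idea (not the authors' implementation): ordering as a path-TSP on the pairwise recombination-fraction matrix, minimized by a (1+λ)-style evolutionary search whose mutation is a random segment reversal (the classic 2-opt move):

```python
import numpy as np

def map_length(order, rf):
    """Total linkage-group length = sum of adjacent pairwise distances."""
    return sum(rf[order[i], order[i + 1]] for i in range(len(order) - 1))

def es_order(rf, n_gen=2000, offspring=20, seed=0):
    """rf: symmetric matrix of pairwise recombination fractions."""
    rng = np.random.default_rng(seed)
    best = rng.permutation(rf.shape[0])
    best_len = map_length(best, rf)
    for _ in range(n_gen):
        for _ in range(offspring):
            i, j = np.sort(rng.integers(0, len(best), 2))
            child = best.copy()
            child[i:j + 1] = child[i:j + 1][::-1]   # segment reversal
            clen = map_length(child, rf)
            if clen < best_len:                     # greedy selection
                best, best_len = child, clen
    return best, best_len
```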


Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. R249-R257 ◽  
Author(s):  
Maokun Li ◽  
James Rickett ◽  
Aria Abubakar

We present a data calibration scheme for frequency-domain full-waveform inversion (FWI), based on the variable projection technique. With this scheme, the FWI algorithm can incorporate the data calibration procedure into the inversion process without introducing additional unknown parameters. The calibration variable for each frequency is computed as a minimum-norm solution between the measured and simulated data, and this computation is included directly in the data misfit cost function. Therefore, the inversion algorithm becomes source independent. Moreover, because all the data points are considered in the calibration process, this scheme increases the robustness of the algorithm. Numerical tests show that the FWI algorithm can reconstruct velocity distributions accurately without source waveform information.
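
The minimum-norm calibration factor has a closed form, which is what makes the variable projection practical: for measured data d and simulated data u at one frequency, the least-squares scale is s = ⟨u, d⟩ / ⟨u, u⟩, and substituting it back gives a source-independent misfit. A minimal sketch under those assumptions:

```python
import numpy as np

def calibrated_misfit(d, u):
    """d, u: complex arrays of measured / simulated data at one frequency."""
    s = np.vdot(u, d) / np.vdot(u, u)   # variable projection: optimal scale
    r = d - s * u                       # residual after calibration
    return s, 0.5 * np.vdot(r, r).real  # calibration factor, misfit value
```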


Geophysics ◽  
2006 ◽  
Vol 71 (2) ◽  
pp. W1-W14 ◽  
Author(s):  
Einar Iversen

Inspired by recent ray-theoretical developments, the theory of normal-incidence rays is generalized to accommodate P- and S-waves in layered isotropic and anisotropic media. The calculation of the three main factors contributing to the two-way amplitude — i.e., geometric spreading, phase shift from caustics, and accumulated reflection/transmission coefficients — is formulated as a recursive process in the upward direction of the normal-incidence rays. This step-by-step approach makes it possible to implement zero-offset amplitude modeling as an efficient one-way wavefront construction process. For the purpose of upward dynamic ray tracing, the one-way eigensolution matrix is introduced, having as minors the paraxial ray-tracing matrices for the wavefronts of two hypothetical waves, referred to by Hubral as the normal-incidence point (NIP) wave and the normal wave. Dynamic ray tracing expressed in terms of the one-way eigensolution matrix has two advantages: The formulas for geometric spreading, phase shift from caustics, and Fresnel zone matrix become particularly simple, and the amplitude and Fresnel zone matrix can be calculated without explicit knowledge of the interface curvatures at the point of normal-incidence reflection.
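
In full generality the recursion propagates paraxial matrices; a scalar 1-D caricature still shows its structure: walking upward along the normal-incidence ray, each step accumulates two-way transmission loss and grows the spreading denominator. Everything below (acoustic impedances, point-source 1/r spreading, isotropic layers) is a simplifying assumption, not the paper's anisotropic formulation:

```python
def two_way_amplitude(v, rho, dt, k_refl):
    """v, rho, dt: per-layer velocity, density, one-way traveltime
    (equal-length lists, top to bottom); k_refl: index of the
    reflecting interface (between layers k_refl and k_refl + 1)."""
    Z = [vi * ri for vi, ri in zip(v, rho)]       # acoustic impedances
    def r(k):                                     # reflection coeff at interface k
        return (Z[k + 1] - Z[k]) / (Z[k + 1] + Z[k])
    amp = r(k_refl)                               # reflection at the target
    path = 0.0
    for k in range(k_refl, -1, -1):               # upward recursion, layer by layer
        path += 2.0 * v[k] * dt[k]                # two-way distance in layer k
        if k > 0:
            amp *= 1.0 - r(k - 1) ** 2            # down + up transmission loss
    return amp / path                             # crude point-source spreading
```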


2018 ◽  
Vol 8 (9) ◽  
pp. 1674
Author(s):  
Wengang Chen ◽  
Wenzheng Xiu ◽  
Jin Shen ◽  
Wenwen Zhang ◽  
Min Xu ◽  
...  

By using different weights for the autocorrelation function data in each delay-time period, information-weighted constrained regularization inversion can markedly improve the information utilization of dynamic light scattering; however, its denoising ability and its peak resolution under noisy conditions remain insufficient. On the basis of information weighting, we added a penalty term imposing a flatness constraint to the objective function of the regularization inversion and inverted multiangle dynamic light scattering data, including simulated data of bimodal (466/915 nm, 316/470 nm) and trimodal (324/601/871 nm) particle distributions and measured data of bimodal distributions (306/974 nm, 300/502 nm). The inversion results show that multiple-penalty-weighted regularization inversion not only improves the utilization of particle-size information but also effectively eliminates false peaks and burrs in the recovered particle size distributions, further improving peak resolution under noisy conditions and thereby strengthening the weighting effect of the information-weighted inversion.
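
Schematically (notation, weights, and regularization parameters below are illustrative, not the authors' exact formulation), the inversion augments the information-weighted data misfit with an extra flatness penalty and solves the result as one non-negative least-squares problem:

```python
# Recover the size distribution f from autocorrelation data g = A f by
# minimizing ||W (A f - g)||^2 + alpha ||f||^2 + beta ||D2 f||^2, f >= 0,
# where W carries the per-delay-time information weights and D2 is the
# second-difference (flatness) operator.
import numpy as np
from scipy.optimize import nnls

def weighted_flat_inversion(A, g, w, alpha=1e-3, beta=1e-2):
    n = A.shape[1]
    W = np.diag(w)                          # information weights per delay time
    D2 = np.diff(np.eye(n), n=2, axis=0)    # flatness penalty operator
    # stack data term and both penalties into one augmented NNLS system
    M = np.vstack([W @ A, np.sqrt(alpha) * np.eye(n), np.sqrt(beta) * D2])
    b = np.concatenate([W @ g, np.zeros(n), np.zeros(n - 2)])
    f, _ = nnls(M, b)
    return f
```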


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5812
Author(s):  
Wentian Wang ◽  
Sixin Liu ◽  
Xuzhang Shen ◽  
Wenjun Zheng

Directional borehole radar can accurately locate and image geological targets around a borehole, overcoming the limitation of conventional borehole radar, which can detect only the depth of a target and its distance from the borehole. The directional borehole radar considered here consists of one transmitting antenna and four receiving antennas equally spaced on a ring in the borehole. The nonuniformity caused by the borehole and the sonde, as well as the mutual coupling among the four receiving antennas, seriously affects the received signals and thus interferes with azimuth recognition of targets. In this paper, the finite-difference time-domain (FDTD) method, including a subgrid, is applied to study these effects and to characterize the influence of the borehole, the sonde, and the mutual coupling among the receiving antennas. The results show that, without considering the sonde and the fluid in the borehole, a one-transmitter, one-receiver borehole radar system exhibits no resonance, although the waveform of the reflected wave is visibly distorted. With the four receiving antennas there is obvious resonance, caused by multiple reflections between the receiving antennas. However, when the fluid in the borehole is water and the relative permittivity of the sonde is sufficiently low, the resonance disappears; that is, resonance requires a material of large relative permittivity between the receiving antennas. When the influence of the sonde is considered, the resonance disappears because the low relative permittivity of the sonde increases the propagation speed of the electromagnetic wave between the antennas, removing the conditions for resonance. In addition, the diameters of the sonde and of the circular receiving-antenna array affect the received signal: the larger these diameters are, the better the received signals are differentiated. This research provides scientific guidance for the design and application of borehole radar.
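
For reference, the core FDTD update the study builds on, in its simplest 1-D nonmagnetic form; the paper's model is of course 3-D with subgridding around the antennas, and the grid, constants, and water-filled section below are illustrative:

```python
# Minimal 1-D Yee-grid FDTD loop: leapfrog updates of E and H fields.
import numpy as np

c0, dx = 3e8, 0.01
dt = 0.5 * dx / c0                            # Courant-stable time step
eps_r = np.ones(400)
eps_r[200:260] = 81.0                         # e.g., a water-filled section
Ez, Hy = np.zeros(400), np.zeros(399)

for n in range(1000):
    Hy += dt / (4e-7 * np.pi * dx) * np.diff(Ez)                   # Faraday
    Ez[1:-1] += dt / (8.854e-12 * eps_r[1:-1] * dx) * np.diff(Hy)  # Ampere
    Ez[100] += np.exp(-((n - 60) / 20.0) ** 2)                     # soft source
```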


Geophysics ◽  
2019 ◽  
Vol 84 (4) ◽  
pp. Q27-Q36 ◽  
Author(s):  
Lele Zhang ◽  
Jan Thorbecke ◽  
Kees Wapenaar ◽  
Evert Slob

We have developed a scheme that retrieves primary reflections in the two-way traveltime domain by filtering the data. The data have their own filter that removes internal multiple reflections, whereas the amplitudes of the retrieved primary reflections are compensated for two-way transmission losses. Application of the filter does not require any model information. It consists of convolutions and correlations of the data with itself. A truncation in the time domain is applied after each convolution or correlation. The retrieved data set can be used as the input to construct a better velocity model than the one that would be obtained by working directly with the original data and to construct an enhanced subsurface image. Two 2D numerical examples indicate the effectiveness of the method. We have studied bandwidth limitations by analyzing the effects of a thin layer. The presence of refracted and scattered waves is a known limitation of the method, and we studied it as well. Our analysis indicates that a thin layer is treated as a more complicated reflector, and internal multiple reflections related to the thin layer are properly removed. We found that the presence of refracted and scattered waves generates artifacts in the retrieved data.
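
The structure of the filter, correlations and convolutions of the data with themselves, each followed by a time-domain truncation, can be sketched per trace as below. This is a schematic of that structure only; the exact operator ordering, scaling, and windowing follow the authors' scheme and are not reproduced here:

```python
import numpy as np

def conv(a, b, n):
    return np.convolve(a, b)[:n]              # causal part of the convolution

def corr(a, b, n):
    # correlation = convolution with the time-reverse of a (lags >= 0)
    return np.convolve(a[::-1], b)[len(a) - 1:len(a) - 1 + n]

def retrieve(d, t2, niter=10):
    """d: one reflection trace; t2: truncation sample (two-way time
    of the primary being retrieved).  Schematic structure only."""
    n = len(d)
    theta = np.ones(n)
    theta[t2:] = 0.0                          # time-window truncation
    v = d.copy()
    for _ in range(niter):
        w = theta * corr(d, v, n)             # correlate with the data, truncate
        v = d + theta * conv(d, w, n)         # convolve back, truncate, re-add data
    return v
```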


Geophysics ◽  
1977 ◽  
Vol 42 (4) ◽  
pp. 868-871 ◽  
Author(s):  
Jerry A. Ware

Confirmation that a bright spot zone in question is low velocity can sometimes be made by looking at constant velocity stacks or the common‐depth‐point gathers. When this confirmation does exist, then it is usually possible to do simple ray theory to get a reasonable estimate of the pay thickness, especially if the water‐sand velocity and the gas‐sand velocity are either known or can be predicted for the area. The confirmation referred to can take the form of under‐removal of the primary events or be exhibited by multiple reflections from the bright spot zone. Such under‐removals or multiple reflections will not be seen on the stacked sections but are sometimes obvious on the raw data, such as the common‐depth‐point gathers, or can be implied by looking at constant velocity stacks of the zone in question at different stacking velocities.
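
The "simple ray theory" step reduces to an interval-velocity conversion of the bright spot's two-way time thickness; a back-of-the-envelope example with an assumed gas-sand velocity:

```python
v_gas = 1800.0          # m/s, assumed gas-sand velocity for the area
dt_two_way = 0.020      # s, two-way time thickness of the bright spot
thickness = v_gas * dt_two_way / 2.0
print(f"estimated pay thickness: {thickness:.1f} m")   # 18.0 m
```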


Geophysics ◽  
2020 ◽  
pp. 1-48
Author(s):  
Danilo Velis

We propose an automated velocity-picking method that estimates appropriate velocity functions for the normal-moveout (NMO) correction of common-depth-point (CDP) gathers, valid for either hyperbolic or nonhyperbolic trajectories. In the hyperbolic case, the process involves the simultaneous search (picking) of a certain number of time-velocity pairs where the semblance, or any other coherence measure, is high. In the nonhyperbolic case, a third parameter, usually associated with layering and/or anisotropy, is added to the search. The proposed technique relies on a simple but effective search for a piecewise-linear curve, defined by a certain number of nodes in a 2D or 3D space, that follows the semblance maxima. The search is carried out efficiently using a constrained very fast simulated annealing algorithm. The constraints consist of static and dynamic bounding restrictions, which serve to incorporate prior information into the picking process and to avoid maxima corresponding to multiples and other spurious or meaningless events. Results using synthetic and field data show that the proposed technique automatically yields accurate and consistent velocity picks that lead to flattened events, in agreement with manual picks. The method is flexible enough to accommodate additional constraints (e.g., preselected events) and depends on a limited number of parameters, which are easily tuned according to data requirements, available prior information, and the user's needs. The computational cost is relatively low, ranging from a fraction of a second to, at most, 1-2 seconds per CDP gather on a standard single-processor PC.
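
A compact sketch of the search (cooling schedule, move generator, and all parameters below are illustrative): the unknowns are the node velocities of the piecewise-linear v(t), the objective is the semblance accumulated along that curve, and very fast simulated annealing perturbs the nodes with Ingber-style temperature-dependent steps inside static bounds:

```python
import numpy as np

def vfsa_pick(semblance, times, vels, node_t, n_iter=3000, seed=0):
    """semblance: 2-D panel [time, velocity]; times, vels: its axes;
    node_t: sorted time indices of the nodes."""
    rng = np.random.default_rng(seed)

    def objective(node_v):
        v_of_t = np.interp(times, times[node_t], node_v)  # piecewise-linear curve
        j = np.searchsorted(vels, v_of_t).clip(0, len(vels) - 1)
        return semblance[np.arange(len(times)), j].sum()

    v = np.full(len(node_t), vels.mean())
    e = objective(v)
    best_v, best_e = v.copy(), e
    for k in range(n_iter):
        T = np.exp(-0.5 * np.sqrt(k + 1.0))               # VFSA-style cooling
        u = rng.uniform(-1.0, 1.0, size=v.size)
        step = T * np.sign(u) * ((1.0 + 1.0 / T) ** np.abs(u) - 1.0)
        cand = np.clip(v + step * (vels[-1] - vels[0]),   # static bounds
                       vels[0], vels[-1])
        ec = objective(cand)
        if ec > e or rng.random() < np.exp((ec - e) / max(T, 1e-12)):
            v, e = cand, ec                               # Metropolis acceptance
            if e > best_e:
                best_v, best_e = v.copy(), e
    return best_v
```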

