Estimation of primaries by sparse inversion from passive seismic data

Geophysics, 2010, Vol. 75(4), pp. SA61-SA69
Author(s): G. J. van Groenestijn, D. J. Verschuur

For passive seismic data, surface multiples are used to obtain an estimate of the subsurface responses, usually by a crosscorrelation process. This crosscorrelation process relies on the assumption that the surface has been uniformly illuminated by subsurface sources in terms of incident angles and strengths. If this is not the case, the crosscorrelation process cannot give a true-amplitude estimate of the subsurface response. Furthermore, cross terms in the crosscorrelation result are not related to actual subsurface inhomogeneities. We have developed a method that can obtain true-amplitude subsurface responses without the uniform surface-illumination assumption. Our methodology goes beyond the crosscorrelation process and estimates primaries only from the surface-related multiples in the available signal. We use the recently introduced estimation of primaries by sparse inversion (EPSI) methodology, in which the primary impulse responses are considered to be the unknowns in a large-scale inversion process. With some modifications, the EPSI method can be used for passive seismic data. The output of this process is primary impulse responses with point sources and receivers at the surface, which can be used directly in traditional imaging schemes. The methodology was tested on 2D synthetic data.
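As a point of reference for the passive-data problem, the crosscorrelation baseline that EPSI moves beyond can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions (the array `passive` and the function `crosscorrelation_virtual_source` are hypothetical names, not the authors' code); the stacked correlation only approximates a true-amplitude response when the uniform-illumination assumption holds.

```python
import numpy as np

def crosscorrelation_virtual_source(passive, master_idx):
    """Correlate every passive trace with the trace at master_idx.

    passive : (n_receivers, n_samples) array of passive noise records.
    The stack over long noise records approximates the impulse response
    between the master receiver and the others, but only if subsurface
    sources illuminate the surface uniformly in angle and strength.
    """
    n_t = passive.shape[1]
    spec = np.fft.rfft(passive, axis=-1)
    # circular crosscorrelation evaluated in the frequency domain
    virt = np.fft.irfft(spec * np.conj(spec[master_idx]), n=n_t, axis=-1)
    return virt  # one virtual-source gather with the source at master_idx

# toy usage with random noise standing in for passive records
rng = np.random.default_rng(0)
gather = crosscorrelation_virtual_source(rng.standard_normal((48, 2048)), master_idx=0)
```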

Geophysics, 2009, Vol. 74(2), pp. R1-R14
Author(s): Wenyi Hu, Aria Abubakar, Tarek M. Habashy

We present a simultaneous multifrequency inversion approach for seismic data interpretation. This algorithm inverts all frequency data components simultaneously. A data-weighting scheme balances the contributions from the different frequency components so that the inversion is not dominated by the high-frequency data, which would otherwise produce a velocity image with many artifacts. A Gauss-Newton minimization approach achieves a high convergence rate and an accurate reconstructed velocity image. By introducing a modified adjoint formulation, we can calculate the Jacobian matrix efficiently, allowing the material properties in the perfectly matched layers (PMLs) to be updated automatically during the inversion. This feature ensures the correct behavior of the inversion and implies that the algorithm is appropriate for realistic applications in which a priori information about the background medium is unavailable. Two different regularization schemes, an L2-norm and a weighted L2-norm function, are used in this algorithm for smooth profiles and profiles with sharp boundaries, respectively. The regularization parameter is determined automatically and adaptively by the so-called multiplicative regularization technique. To test the algorithm, we reconstruct the Marmousi velocity model from synthetic data generated by a finite-difference time-domain code. These numerical results indicate that the inversion algorithm is robust with respect to the starting model and to noise. Under some circumstances, it is more robust than a traditional sequential inversion approach.
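The core of the simultaneous scheme is a single Gauss-Newton update assembled from all frequency components at once, with per-frequency weights balancing their contributions. The sketch below is our own schematic illustration of that idea, not the authors' implementation; the names `gauss_newton_step`, `jacobians`, `residuals`, and `weights` are assumed for illustration, and the simple damping term stands in for the paper's multiplicative regularization.

```python
import numpy as np

def gauss_newton_step(jacobians, residuals, weights, damping=1e-2):
    """One weighted Gauss-Newton model update from all frequencies at once.

    jacobians : list of (n_data, n_model) arrays, one per frequency
    residuals : list of (n_data,) data residuals, one per frequency
    weights   : per-frequency weights that keep high frequencies from dominating
    """
    n_model = jacobians[0].shape[1]
    normal = damping * np.eye(n_model)          # stand-in for regularization
    rhs = np.zeros(n_model)
    for J, r, w in zip(jacobians, residuals, weights):
        normal += w * (J.conj().T @ J).real
        rhs += w * (J.conj().T @ r).real
    return np.linalg.solve(normal, rhs)

# toy usage: two frequencies, five data points, three model parameters
rng = np.random.default_rng(1)
Js = [rng.standard_normal((5, 3)) for _ in range(2)]
rs = [rng.standard_normal(5) for _ in range(2)]
dm = gauss_newton_step(Js, rs, weights=[1.0, 0.3])
```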


Geophysics, 2009, Vol. 74(3), pp. A23-A28
Author(s): G. J. van Groenestijn, D. J. Verschuur

Accurate removal of surface-related multiples remains a challenge in many cases. To overcome typical inaccuracies in current multiple-removal techniques, we have developed a new primary-estimation method: estimation of primaries by sparse inversion (EPSI). EPSI is based on the same primary-multiple model as surface-related multiple elimination (SRME) and also requires no subsurface model. Unlike SRME, EPSI estimates the primaries as unknowns in a multidimensional inversion process rather than in a subtraction process. Furthermore, it does not depend on interpolated missing near-offset data because it can reconstruct missing data simultaneously. Sparseness plays a key role in the new primary-estimation procedure. The method was tested on 2D synthetic data.
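The shared primary-multiple model behind SRME and EPSI can be written per frequency as P = X0 S + X0 R P, with X0 the primary impulse responses, S the source signature, and R the surface reflectivity (close to -1). The snippet below is a minimal sketch of the corresponding data residual under these assumptions; `primary_multiple_residual` is a hypothetical helper, and a full EPSI scheme would alternate sparsity-constrained updates of X0 and S to drive this residual to zero.

```python
import numpy as np

def primary_multiple_residual(P, X0, S, r=-1.0):
    """Residual of the data against primaries plus surface-related multiples.

    P, X0 : (n_receivers, n_sources) monochromatic data and primary impulse
            responses; S is the source-wavelet spectrum at this frequency.
    The model is P = X0*S + X0*R*P with R approximated by r times identity.
    """
    return P - X0 * S - r * (X0 @ P)

# toy usage: with X0 = 0 the residual is simply the input data
rng = np.random.default_rng(2)
P = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
res = primary_multiple_residual(P, np.zeros_like(P), S=1.0)
```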


Atmosphere, 2021, Vol. 12(7), p. 811
Author(s): Yaqin Hu, Yusheng Shi

The concentration of atmospheric carbon dioxide (CO2) has increased rapidly worldwide, aggravating the global greenhouse effect, and coal-fired power plants are among the largest contributors to greenhouse gas emissions in China. However, efficient methods that can quantify CO2 emissions from individual coal-fired power plants with high accuracy are still lacking. In this study, we estimated the CO2 emissions of large-scale coal-fired power plants using Orbiting Carbon Observatory-2 (OCO-2) satellite data, based on a remote sensing inversion and a bottom-up method. First, we mapped the distribution of coal-fired power plants and their total installed capacity and identified two suitable targets, the Waigaoqiao and Qinbei power plants in Shanghai and Henan, respectively. Then, an improved Gaussian plume model was applied to estimate CO2 emissions, with input parameters including the geographic coordinates of the point sources, wind vectors from a global atmospheric reanalysis, and OCO-2 observations. The application of the Gaussian model was improved by using wind data with higher temporal and spatial resolution, employing a physically based unit conversion, and interpolating the OCO-2 observations to different resolutions. The CO2 emissions of the Waigaoqiao Power Plant were estimated to be 23.06 ± 2.82 (95% CI) Mt/yr with the Gaussian model and 16.28 Mt/yr with the bottom-up method; the corresponding estimates for the Qinbei Power Plant were 14.58 ± 3.37 (95% CI) Mt/yr and 14.08 Mt/yr. These estimates were compared with three standard databases for validation: the Carbon Monitoring for Action database, the China coal-fired Power Plant Emissions Database, and the Carbon Brief database. The comparison suggests that previous emission inventories, which span different time frames, may have overestimated the CO2 emissions of one of the two power plants on the two days on which the measurements were made. Our study contributes to quantifying CO2 emissions from point sources and to advancing satellite-based monitoring of emission sources, which helps reduce the errors introduced by human intervention in bottom-up statistical methods.
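For readers unfamiliar with the forward model, a column-integrated Gaussian plume makes the XCO2 enhancement linear in the emission rate, so the rate can be fitted to satellite soundings by least squares. The sketch below is our assumption of that standard formulation rather than the authors' improved implementation; `plume_enhancement`, `fit_emission_rate`, and the stability constant `a` are hypothetical names and values.

```python
import numpy as np

def plume_enhancement(x, y, F, u, a=0.08):
    """Column mass enhancement (kg/m^2) at downwind x (m) and crosswind y (m).

    F is the emission rate (kg/s), u the wind speed (m/s), and
    sigma_y = a*x a simple crosswind-spreading law; a depends on
    atmospheric stability and is treated here as a tunable constant.
    Valid only downwind of the source (x > 0).
    """
    sigma_y = a * np.asarray(x, dtype=float)
    return F / (np.sqrt(2.0 * np.pi) * sigma_y * u) * np.exp(-(y ** 2) / (2.0 * sigma_y ** 2))

def fit_emission_rate(enhancement_obs, x, y, u, a=0.08):
    """Least-squares emission rate: the plume model is linear in F."""
    g = plume_enhancement(x, y, 1.0, u, a)      # plume shape for unit emission
    return float(np.dot(g, enhancement_obs) / np.dot(g, g))

# toy usage: recover F = 100 kg/s from noise-free synthetic soundings
x = np.linspace(500.0, 5000.0, 50)
y = np.zeros_like(x)
obs = plume_enhancement(x, y, F=100.0, u=4.0)
F_hat = fit_emission_rate(obs, x, y, u=4.0)     # ~100.0
```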


Universe, 2021, Vol. 7(7), p. 220
Author(s): Emil Khalikov

The intrinsic spectra of some distant blazars, known as "extreme TeV blazars", show a hint of anomalous hardening in the TeV energy region. Several extragalactic propagation models have been proposed to explain this possible excess transparency of the Universe to gamma rays, starting with a model that assumes the existence of so-called axion-like particles (ALPs) and the new process of gamma-ALP oscillations. Alternative models suppose that some of the observable gamma rays are produced in intergalactic cascades. This work investigates the spectral and angular features of one of the cascade models, the Intergalactic Hadronic Cascade Model (IHCM), within contemporary astrophysical models of the extragalactic magnetic field (EGMF). For the IHCM, the EGMF largely determines the deflection of primary cosmic rays and of the electrons of the intergalactic cascades and is therefore of vital importance. Contemporary Hackstein models are considered in this paper and compared to the model of Dolag. The models assumed are based on simulations of the local part of the large-scale structure of the Universe and differ in their assumptions for the seed field. This work provides spectral energy distributions (SEDs) and angular extensions of two extreme TeV blazars, 1ES 0229+200 and 1ES 0414+009. It is demonstrated that, for the IHCM, the observable SEDs inside a typical point spread function of imaging atmospheric Cherenkov telescopes (IACTs) would exhibit a characteristic high-energy attenuation compared with those obtained in hadronic models that neglect the EGMF, which makes it possible to distinguish among these models. At the same time, the IHCM spectra would have longer high-energy tails than some available spectra for the ALP models and the universal spectra for the Electromagnetic Cascade Model (ECM). The analysis of the observable IHCM angular extensions shows that most IACTs would likely identify the sources not as point sources but as extended ones. These spectra could later be compared with future observational data from instruments such as the Cherenkov Telescope Array (CTA) and LHAASO.


2019, Vol. 500(1), pp. 531-549
Author(s): Suzanne Bull, Joseph A. Cartwright

This study shows how simple structural restoration of a discrete submarine landslide lobe can be applied to large-scale, multi-phase examples to identify different phases of slide-lobe development and evaluate their mode of emplacement. We present the most detailed analysis performed to date on a zone of intense contractional deformation, historically referred to as the compression zone, from the giant, multi-phase Storegga Slide, offshore Norway. 2D and 3D seismic data and bathymetry data show that the zone of large-scale (>650 m thick) contractional deformation can be genetically linked updip with a zone of intense depletion across a distance of 135 km. Quantification of depletion and accumulation along a representative dip section reveals that significant depletion in the proximal region is not accommodated by the relatively mild (c. 5%) downdip shortening. Dip-section restoration indicates that a later, separate stage of deformation may have involved the removal of a significant volume of material as part of the final stages of the Storegga Slide, as opposed to the minor volumes reported in previous studies.


Geophysics, 2006, Vol. 71(5), pp. U67-U76
Author(s): Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because the Hessian must be computed, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with fewer operator artifacts than a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, although in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates the application to real data. The data are highly irregularly sampled along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
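The inversion view of datuming amounts to a weighted, damped least-squares solve, and the cost saving comes from approximating the Hessian by a limited number of its diagonals. The sketch below illustrates both ideas under our own notation (`wdls_regularize`, `banded_approx`, and the operator `A` are assumed names, not the paper's code).

```python
import numpy as np

def wdls_regularize(A, d, W=None, eps=1e-3):
    """Weighted, damped least-squares estimate of a regularly sampled wavefield.

    A : (n_irregular, n_regular) extrapolation operator mapping the regular
        grid onto the actual (irregular) recording positions
    d : recorded data at the irregular positions
    Solves (A^H W A + eps*I) m = A^H W d.
    """
    W = np.eye(A.shape[0]) if W is None else W
    hessian = A.conj().T @ W @ A                 # the expensive part
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(hessian + eps * np.eye(A.shape[1]), rhs)

def banded_approx(H, k=2):
    """Crude stand-in for the diagonal approximation of the Hessian."""
    rows = np.arange(H.shape[0])
    mask = np.abs(np.subtract.outer(rows, np.arange(H.shape[1]))) <= k
    return np.where(mask, H, 0.0)
```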

