Robust elastic nonlinear waveform inversion: Application to real data

Geophysics ◽  
1990 ◽  
Vol 55 (5) ◽  
pp. 527-538 ◽  
Author(s):  
E. Crase ◽  
A. Pica ◽  
M. Noble ◽  
J. McDonald ◽  
A. Tarantola

Nonlinear elastic waveform inversion has advanced to the point where it is now possible to invert real multiple‐shot seismic data. The iterative gradient algorithm that we employ can readily accommodate robust minimization criteria which tend to handle many types of seismic noise (noise bursts, missing traces, etc.) better than the commonly used least‐squares minimization criteria. Although there are many robust criteria from which to choose, we have tested only a few. In particular, the Cauchy criterion and the hyperbolic secant criterion perform very well in both noise‐free and noise‐added inversions of numerical data. Although the real data set, which we invert using the sech criterion, is marine (pressure sources and receivers) and is very much dominated by unconverted P waves, we can, for the most part, resolve the short wavelengths of both P impedance and S impedance. The long wavelengths of velocity (the background) are assumed known. Because we are deriving nearly all impedance information from unconverted P waves in this inversion, data acquisition geometry must have sufficient multiplicity in subsurface coverage and a sufficient range of offsets, just as in amplitude‐versus‐offset (AVO) inversion. However, AVO analysis is implicitly contained in elastic waveform inversion algorithms as part of the elastic wave equation upon which the algorithms are based. Because the real‐data inversion is so large—over 230,000 unknowns (340,000 when density is included) and over 600,000 data values—most statistical analyses of parameter resolution are not feasible. We qualitatively verify the resolution of our results by inverting a numerical data set which has the same acquisition geometry and corresponding long wavelengths of velocity as the real data, but has semirandom perturbations in the short wavelengths of P and S impedance.
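As an illustrative sketch (the exact scaling constants vary between authors), the least-squares, Cauchy, and hyperbolic-secant criteria can be written as penalties on the data residual r; the robust criteria grow far more slowly for large residuals, which is why noise bursts and bad traces distort them less:

```python
import numpy as np

def l2_misfit(r, sigma=1.0):
    # Least-squares criterion: quadratic growth amplifies noise bursts.
    return 0.5 * (r / sigma) ** 2

def cauchy_misfit(r, sigma=1.0):
    # Cauchy criterion: only logarithmic growth for large residuals.
    return np.log(1.0 + 0.5 * (r / sigma) ** 2)

def sech_misfit(r, sigma=1.0):
    # Hyperbolic-secant criterion: asymptotically linear, like an l1 norm.
    return np.log(np.cosh(r / sigma))

r = np.array([0.1, 1.0, 10.0])   # the last entry is a 10-sigma outlier
for f in (l2_misfit, cauchy_misfit, sech_misfit):
    print(f.__name__, f(r))
```

For the 10-sigma outlier, the least-squares penalty dominates the robust ones by an order of magnitude, so a gradient algorithm driven by it would chase the noise; the Cauchy and sech criteria effectively downweight such samples.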

Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. H45-H55 ◽  
Author(s):  
Jens S. Buchner ◽  
Ute Wollschläger ◽  
Kurt Roth

A new inversion scheme for common-offset ground-penetrating radar measurements at multiple antenna separations was proposed, intermediate between ray-tracing-based inversion of picked reflectors and full-waveform inversion. The measurements are modeled with 2D finite-difference time-domain simulations driven by a parameterized model of the subsurface that consists of several layers of constant dielectric permittivity and an explicit representation of the layers’ interfaces. Reflections in the modeled and in the real data are then detected automatically, and the reflections of interest in the real data are selected manually. The sum of squared residuals of the reflections’ traveltime and amplitude is iteratively minimized to estimate subsurface water content and geometry, i.e., the position and shape of the layer interfaces. The method was first tested with a synthetic data set and then applied to a real data set. Comparison of the method’s results with ground-truth data showed agreement with the subsurface geometry to within [Formula: see text] and a water content difference of less than [Formula: see text] by volume.


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U25-U38 ◽  
Author(s):  
Nuno V. da Silva ◽  
Andrew Ratcliffe ◽  
Vetle Vinje ◽  
Graham Conroy

Parameterization lies at the center of anisotropic full-waveform inversion (FWI) with multiparameter updates. This is because FWI aims to update the long and short wavelengths of the perturbations. Thus, it is important that the parameterization accommodates this. Recently, there has been an intensive effort to determine the optimal parameterization, centering the fundamental discussion mainly on the analysis of radiation patterns for each one of these parameterizations, and aiming to determine which is best suited for multiparameter inversion. We have developed a new parameterization in the scope of FWI, based on the concept of kinematically equivalent media, as originally proposed in other areas of seismic data analysis. Our analysis is also based on radiation patterns, as well as the relation between the perturbation of this set of parameters and perturbation in traveltime. The radiation pattern reveals that this parameterization combines some of the characteristics of parameterizations with one velocity and two Thomsen’s parameters and parameterizations using two velocities and one Thomsen’s parameter. The study of perturbation of traveltime with perturbation of model parameters shows that the new parameterization is less ambiguous when relating these quantities in comparison with other more commonly used parameterizations. We have concluded that our new parameterization is well-suited for inverting diving waves, which are of paramount importance to carry out practical FWI successfully. We have demonstrated that the new parameterization produces good inversion results with synthetic and real data examples. In the latter case of the real data example from the Central North Sea, the inverted models show good agreement with the geologic structures, leading to an improvement of the seismic image and flatness of the common image gathers.


2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but because the values of categorical data are unordered, these methods are not applicable to categorical data sets. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support and then integrates these weights along the rows to obtain the support of every row. The data object with the largest support is chosen as the initial center, and further centers are found at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method, and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
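A minimal sketch of this seeding idea, assuming "support" means the relative frequency of an attribute value and using Hamming distance between categorical rows (both illustrative choices, not necessarily the authors' exact definitions):

```python
from collections import Counter

def select_initial_centers(data, k):
    """Support-based seeding for categorical rows (illustrative sketch)."""
    n_rows, n_attrs = len(data), len(data[0])
    # Support (relative frequency) of each value, per attribute.
    freq = [Counter(row[j] for row in data) for j in range(n_attrs)]
    # Row support = sum of its values' supports across attributes.
    row_support = [sum(freq[j][row[j]] / n_rows for j in range(n_attrs))
                   for row in data]
    # First center: the row with the largest support.
    centers = [data[max(range(n_rows), key=row_support.__getitem__)]]
    while len(centers) < k:
        # Next center: the row farthest (Hamming) from the chosen centers.
        def dist_to_centers(row):
            return min(sum(a != b for a, b in zip(row, c)) for c in centers)
        centers.append(max(data, key=dist_to_centers))
    return centers

data = [("red", "small"), ("red", "small"), ("red", "large"),
        ("blue", "small"), ("green", "large")]
print(select_initial_centers(data, 2))  # most-supported row, then the farthest row
```

Here ("red", "small") has the largest summed support, and ("green", "large") differs from it in every attribute, so it becomes the second seed.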


Geophysics ◽  
2013 ◽  
Vol 78 (2) ◽  
pp. G15-G24 ◽  
Author(s):  
Pejman Shamsipour ◽  
Denis Marcotte ◽  
Michel Chouteau ◽  
Martine Rivest ◽  
Abderrezak Bouchedda

The flexibility of geostatistical inversions in geophysics is limited by the use of stationary covariances, which, implicitly and mostly for mathematical convenience, assumes statistical homogeneity of the studied field. For fields showing sharp contrasts due, for example, to faults or folds, an approach based on the use of nonstationary covariances for cokriging inversion was developed. The approach was tested on two synthetic cases and one real data set. Inversion results based on the nonstationary covariance were compared to the results from the stationary covariance for two synthetic models. The nonstationary covariance better recovered the known synthetic models. With the real data set, the nonstationary assumption resulted in a better match with the known surface geology.
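One standard way to build a valid covariance with a spatially varying correlation length is the Paciorek-Schervish construction; the sketch below is not the authors' specific model, and the fault location and length-scale function are hypothetical, but it shows how a field can be made to decorrelate quickly across a sharp contrast and slowly elsewhere:

```python
import numpy as np

def ns_gauss_cov(x1, x2, length, sigma2=1.0):
    """Paciorek-Schervish nonstationary Gaussian covariance in 1D (sketch)."""
    l1, l2 = length(x1), length(x2)
    pref = np.sqrt(2.0 * l1 * l2 / (l1 ** 2 + l2 ** 2))   # keeps the kernel valid
    q = 2.0 * (x1 - x2) ** 2 / (l1 ** 2 + l2 ** 2)
    return sigma2 * pref * np.exp(-q)

# Illustrative choice: short correlation length near a "fault" at x = 5.
length = lambda x: 0.3 + 1.5 * np.abs(np.tanh(x - 5.0))

grid = np.linspace(0.0, 10.0, 80)
C = np.array([[ns_gauss_cov(a, b, length) for b in grid] for a in grid])
# A valid covariance matrix must be (numerically) positive semidefinite.
print(np.linalg.eigvalsh(C).min())
```

The prefactor is what distinguishes this from naively plugging a varying length scale into a stationary kernel; without it the resulting matrix is generally not positive semidefinite.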


2012 ◽  
Vol 82 (9) ◽  
pp. 1615-1629 ◽  
Author(s):  
Bhupendra Singh ◽  
Puneet Kumar Gupta

1994 ◽  
Vol 1 (2/3) ◽  
pp. 182-190 ◽  
Author(s):  
M. Eneva

Abstract. Using finite data sets and study volumes of limited size may result in significant spurious effects when estimating the scaling properties of various physical processes. These effects are examined with an example featuring the spatial distribution of induced seismic activity in Creighton Mine (northern Ontario, Canada). The events studied in the present work occurred during a three-month period, March-May 1992, within a volume of approximate size 400 × 400 × 180 m³. Two sets of microearthquake locations are studied: Data Set 1 (14,338 events) and Data Set 2 (1654 events). Data Set 1 includes the more accurately located events and amounts to about 30 per cent of all recorded data. Data Set 2 is the portion of the first data set formed by the most accurately located and strongest microearthquakes. The spatial distribution of events in the two data sets is examined for scaling behaviour using the method of generalized correlation integrals featuring various moments q. From these, generalized correlation dimensions are estimated using the slope method. Similar estimates are made for randomly generated point sets using the same numbers of events and the same study volumes as for the real data. Uniform and monofractal random distributions are used for these simulations. In addition, samples randomly extracted from the real data are examined in the same way. The spectra for the uniform and monofractal random generations show spurious multifractality due only to the use of finite numbers of data points and the limited size of the study volume. Comparing these with the dimension spectra for Data Set 1 and Data Set 2 allows us to estimate the bias likely to be present in the estimates for the real data. The strong multifractality suggested by the spectrum for Data Set 2 appears to be largely spurious; the spatial distribution, while different from uniform, could originate from a monofractal process.
The spatial distribution of microearthquakes in Data Set 1 is either monofractal as well, or only weakly multifractal. In all similar studies, comparisons of results from real data and simulated point sets may help distinguish between genuine and artificial multifractality, without necessarily resorting to large numbers of data.
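The generalized correlation integral method can be sketched as follows; the normalization and the radius range are illustrative choices, and the finite-size effects discussed above will bias the fitted slope, which is exactly the point the study makes:

```python
import numpy as np

def generalized_dimension(points, q, radii):
    """Estimate D_q from the slope of log C_q(r) vs log r (sketch, q != 1)."""
    n = len(points)
    # Pairwise Euclidean distances between all events.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    cq = []
    for r in radii:
        # Fraction of neighbors of each point within radius r (self excluded).
        p_i = (np.sum(d < r, axis=1) - 1) / (n - 1)
        p_i = np.clip(p_i, 1e-12, None)          # guard against empty balls
        cq.append(np.mean(p_i ** (q - 1)) ** (1.0 / (q - 1)))
    # Slope method: least-squares fit in log-log coordinates.
    slope, _ = np.polyfit(np.log(radii), np.log(cq), 1)
    return slope

rng = np.random.default_rng(0)
pts = rng.uniform(size=(1500, 2))                # uniform points in the unit square
print(generalized_dimension(pts, q=2, radii=np.logspace(-1.5, -0.5, 8)))
```

For uniformly distributed points in a square the true dimension is 2 for every q; the finite sample and bounded volume pull the estimate below that, mimicking the spurious multifractality described in the abstract.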


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed using stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection allowed distinguishing similar from variable models and assessing the convergence of the inverted models toward the real impedance models. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that MDS is a valuable tool to evaluate the convergence of the inverse methodology and the impedance model variability across iterations of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
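The projection idea can be reproduced compactly with classical (metric) MDS; in this sketch, random vectors stand in for impedance models converging toward a known truth, so all names, sizes, and noise scales are illustrative rather than the authors' setup:

```python
import numpy as np

def classical_mds(X, k=2):
    """Embed rows of X in k dimensions, approximately preserving distances."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    n = len(d2)
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ d2 @ J                        # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # keep the k largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(1)
true_model = rng.normal(size=100)                # stand-in "real impedance" model
# Five simulated models per iteration, converging toward the truth.
sims = [true_model + rng.normal(scale=s, size=100)
        for s in (2.0, 1.0, 0.5, 0.25) for _ in range(5)]
coords = classical_mds(np.vstack(sims + [true_model]))
truth_xy = coords[-1]
for it in range(4):                              # mean projected distance to truth
    cloud = coords[it * 5:(it + 1) * 5]
    print(it, float(np.mean(np.linalg.norm(cloud - truth_xy, axis=1))))
```

In the 2D metric space, each iteration's cloud of simulated models sits closer to the projected true model than the previous one, which is the convergence diagnostic the abstract describes.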


Testing is essential in data warehouse systems for decision making because the accuracy, validation, and correctness of data depend on it. Considering the characteristics and complexity of a data warehouse, in this paper we show the scope of automated testing in assuring the best data warehouse solutions. Firstly, we developed a data set generator for creating synthetic but near-to-real data; then, in the synthesized data, anomalies were classified with the help of a hand-coded Extraction, Transformation and Loading (ETL) routine. For the quality assurance of data for a data warehouse, and to convey how important Extraction, Transformation and Loading is, some very important test cases were identified. After that, to ensure the quality of data, automated testing procedures were embedded in the hand-coded ETL routine. Statistical analysis revealed a large enhancement in the quality of data with automated testing procedures, reinforcing the fact that automated testing gives promising results for data warehouse quality. For effective and easy maintenance of distributed data, a novel architecture was proposed. Although the desired result of this research was achieved successfully and the objectives are promising, the results still need to be validated in a real-life environment, as this research was done in a simulated environment, which may not always reflect real-life behavior. Hence, the overall potential of the proposed architecture cannot be seen until it is deployed to manage real data that is distributed globally.


2021 ◽  
Author(s):  
Xi Cheng

Abstract To solve the problem of the low accuracy of traditional travel route recommendation algorithms, a travel route recommendation algorithm based on interest theme and distance matching is proposed in this paper. Firstly, the real historical travel footprints of users are obtained through analysis. Then, the user's preferences for interest theme and distance matching are derived from the user's stay in each scenic spot. Finally, the optimal travel route calculation method is designed under a given travel time limit, starting point, and end point. Experiments on the real data set of the Flickr social network show that the proposed algorithm achieves higher precision and recall than both the traditional algorithm that considers only the interest theme and the algorithm that considers only distance matching.
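A toy sketch of route selection under a travel time budget; the scenic spots, the Manhattan travel times, and scoring a route by its summed interest match are all hypothetical simplifications of the paper's interest-theme and distance-matching preferences:

```python
from itertools import permutations

# Toy scenic spots: name -> (interest match in [0, 1], x, y).
spots = {"A": (0.9, 0, 0), "B": (0.4, 1, 0), "C": (0.7, 1, 1)}

def travel_time(p, q):
    (_, x1, y1), (_, x2, y2) = spots[p], spots[q]
    return abs(x1 - x2) + abs(y1 - y2)           # toy Manhattan travel time

def best_route(start, end, budget):
    """Brute-force the highest-interest route within the time budget (sketch)."""
    middle = [s for s in spots if s not in (start, end)]
    best, best_score = None, -1.0
    for k in range(len(middle) + 1):
        for perm in permutations(middle, k):     # every feasible visiting order
            route = (start,) + perm + (end,)
            t = sum(travel_time(a, b) for a, b in zip(route, route[1:]))
            score = sum(spots[s][0] for s in route)
            if t <= budget and score > best_score:
                best, best_score = route, score
    return best

print(best_route("A", "C", budget=3))            # detour via B fits the budget
```

Brute force is only viable for a handful of spots; the paper's setting (many candidate spots per city) would need a heuristic or dynamic-programming search, but the objective structure is the same.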


2019 ◽  
Vol 13 (4) ◽  
pp. 375-385
Author(s):  
Saeed Mirzadeh ◽  
Anis Iranmanesh

Abstract In this study, the researchers introduce a new class of the logistic distribution that can be used to model unimodal data with some skewness present. The new generalization is carried out using the basic idea of Nadarajah (Statistics 48(4):872–895, 2014) and is called the truncated-exponential skew-logistic (TESL) distribution. The TESL distribution is a member of the exponential family; therefore, the skewness parameter can be derived more easily. Some important statistical characteristics are presented, and a real data set and simulation studies are used to evaluate the results. The TESL distribution is also compared to at least five other skew-logistic distributions.

